878 results for onion routing


Relevance:

10.00%

Publisher:

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. Classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes and so on. SoC manufacturers such as ST Microelectronics, Samsung and Philips, as well as universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip.

In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We provide a detailed analysis of this NoC topology and its routing algorithms, and we propose a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, aimed at integrating shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.

This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
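Since this abstract centres on the Spidergon topology and its routing, a minimal sketch may help fix ideas. The Python fragment below implements the commonly described across-first routing rule on a Spidergon of N nodes (each node linked to its clockwise and counter-clockwise ring neighbours and to the diametrically opposite node). It is a generic illustration of the topology, not the algorithm or code proposed in the thesis.

```python
# Illustrative across-first routing on a Spidergon NoC with N nodes (N even).
# Each node i has links: CW -> (i+1) % N, CCW -> (i-1) % N, ACROSS -> (i + N//2) % N.
# This is a generic textbook-style rule, not the routing algorithm proposed in the thesis.

def spidergon_next_hop(current: int, destination: int, n: int) -> int:
    """Return the next node on the path from `current` to `destination`."""
    assert n % 2 == 0, "Spidergon is defined for an even number of nodes"
    if current == destination:
        return current
    # Relative address of the destination, measured clockwise from the current node.
    rel = (destination - current) % n
    if rel <= n // 4:                 # close enough clockwise: walk the ring CW
        return (current + 1) % n
    if rel >= n - n // 4:             # close enough counter-clockwise: walk the ring CCW
        return (current - 1) % n
    return (current + n // 2) % n     # otherwise take the across link first


def spidergon_route(src: int, dst: int, n: int) -> list[int]:
    """Full hop-by-hop path; hop counts stay within roughly N/4 + 1."""
    path = [src]
    while path[-1] != dst:
        path.append(spidergon_next_hop(path[-1], dst, n))
    return path


if __name__ == "__main__":
    print(spidergon_route(0, 7, 16))   # [0, 8, 7]: across link first, then one CCW step
```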

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from being directly configurable to offer network services and need to be enriched with more user-oriented functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to provide the network with improved usability and accessibility of the services offered by the optical layer. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies gives users and applications access to abstracted levels of information regarding the offered advanced network services. This thesis addresses the problem of defining such a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, the work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. These technologies promise to provide all-optical burst or packet switching, respectively, instead of the current circuit switching. However, the electronic domain is still present in the scheduling, forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both memory and forwarding logic is needed. This open issue is addressed in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function whose complexity is almost independent of the traffic conditions.
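The abstract states that the proposed scheduler reduces burst/packet scheduling to a min/max computation over channel state. As a rough illustration of that general idea (not the thesis's actual design), the following Python sketch implements a horizon-based channel scheduler in the spirit of LAUC (Latest Available Unscheduled Channel): among the wavelength channels that become free before the burst arrives, it picks the one whose horizon is maximal, i.e. a single max over the channel state.

```python
# Minimal horizon-based burst scheduler (LAUC-like), for illustration only.
# Each output channel keeps a single "horizon": the time at which it becomes free.
# Scheduling a burst is then just a max over the feasible channels.

from typing import Optional

class HorizonScheduler:
    def __init__(self, num_channels: int):
        self.horizon = [0.0] * num_channels   # channel i is free from horizon[i]

    def schedule(self, arrival: float, duration: float) -> Optional[int]:
        """Pick the channel that minimizes the idle gap before the burst.

        Feasible channels are those free no later than `arrival`; among them we
        take the one with the latest horizon (a plain max). Returns the channel
        index, or None if the burst must be dropped or diverted.
        """
        feasible = [i for i, h in enumerate(self.horizon) if h <= arrival]
        if not feasible:
            return None
        best = max(feasible, key=lambda i: self.horizon[i])
        self.horizon[best] = arrival + duration
        return best


if __name__ == "__main__":
    sched = HorizonScheduler(num_channels=4)
    for t, d in [(0.0, 5.0), (1.0, 2.0), (2.0, 4.0), (6.0, 1.0)]:
        print(t, "->", sched.schedule(t, d))
```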

Relevance:

10.00%

Publisher:

Abstract:

Urbanization is a continuing phenomenon throughout the world. Grasslands, forests and other natural surfaces are being continually converted into residential, commercial and industrial complexes, roads, streets, and so on. One of the side effects of urbanization with which engineers and planners must deal is the increase of peak flows and volumes of runoff from rainfall events. As a result, urban drainage and flood control systems must be designed to accommodate the peak flows from a variety of storms that may occur. Usually the peak flow after development is required not to exceed what would have occurred from the same storm under the conditions existing prior to development. In order to achieve this, it is necessary to design detention storage to hold back runoff and release it downstream at controlled rates. In the first part of the work, various simplified formulations have been developed that can be adopted for the design of stormwater detention facilities. In order to obtain a simplified hydrograph, two approaches were adopted: the kinematic routing technique and the linear reservoir schematization. For each approach, two further formulations were obtained, depending on whether the IDF (intensity-duration-frequency) curve is described with two or three parameters. Other formulations were developed taking into account whether the outlet has a constant discharge or one that depends on the water level in the pond. All these formulations can be easily applied once the characteristics of the drainage system and the maximum discharge allowed at the outlet are known, and a return period characterizing the IDF curve has been defined. In this way the volume of the detention pond can be calculated. In the second part of the work, the design of detention ponds is analyzed using continuous simulation models. The drainage systems adopted for the simulations, performed with SWMM5, are fictitious systems with catchments of different sizes and shapes, driven by a 16-year historical rainfall time series recorded in Bologna. This approach suffers from the fact that a continuous rainfall record is often not available and, when it is, such modelling can be very expensive; moreover, the majority of design practitioners are not prepared to use continuous long-term modelling in the design of stormwater detention facilities. In the third part of the work, statistical and stochastic methodologies for defining the volume of the detention pond are analyzed. In particular, the results of the long-term simulations performed with SWMM are used to obtain the data needed to apply the statistical and stochastic formulations. All these methodologies have been compared, and correction coefficients have been proposed on the basis of the statistical and stochastic formulations. In this way, engineers who have to design a detention pond can apply a simplified procedure, appropriately corrected with the proposed coefficients.
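As a rough illustration of the kind of simplified formulation discussed above (not the thesis's own equations), the sketch below sizes a detention volume by maximizing, over the storm duration, the difference between the inflow volume from a two-parameter IDF curve and the volume released at a constant outlet discharge. The IDF coefficients, runoff coefficient and catchment area used here are hypothetical.

```python
# Illustrative sizing of a detention volume with a constant-discharge outlet.
# Assumptions (not from the thesis): two-parameter IDF curve i(D) = a * D**(n-1)
# with i in mm/h and D in hours; rational-method inflow; constant outflow Q_out.

def detention_volume(a, n, runoff_coeff, area_ha, q_out_m3s, d_max_h=24.0, step_h=0.01):
    """Return (required volume in m3, critical duration in hours)."""
    best_v, best_d = 0.0, 0.0
    d = step_h
    while d <= d_max_h:
        i_mm_h = a * d ** (n - 1.0)                                          # rainfall intensity
        inflow_m3 = runoff_coeff * (i_mm_h / 1000.0) * (area_ha * 1e4) * d   # mm/h -> m, ha -> m2
        outflow_m3 = q_out_m3s * d * 3600.0                                  # constant release over D
        v = inflow_m3 - outflow_m3
        if v > best_v:
            best_v, best_d = v, d
        d += step_h
    return best_v, best_d


if __name__ == "__main__":
    # Hypothetical values: a = 45 mm/h^n, n = 0.35, runoff coefficient 0.7, 20 ha, 0.3 m3/s outlet.
    vol, dur = detention_volume(a=45.0, n=0.35, runoff_coeff=0.7, area_ha=20.0, q_out_m3s=0.3)
    print(f"required volume ~ {vol:.0f} m3 at critical duration ~ {dur:.2f} h")
```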

Relevance:

10.00%

Publisher:

Abstract:

A reduced cancer risk associated with fruit and vegetable phytochemicals initially dictated chemopreventive approaches focused on the consumption of specific vegetable varieties or even on single-nutrient supplementation. However, these strategies not only failed to provide any health benefit but also gave rise to detrimental effects. In parallel, public-health chemoprevention programmes were developed in the USA and Europe to increase whole-vegetable consumption. Among these, the National Cancer Institute (NCI)-sponsored plan "5 to 9 a day for a better health" was one of the most popular. This campaign promoted a wide food choice through the consumption of at least 5 to 9 servings a day of colourful fruits and vegetables. In this study, the effects of the diet suggested by the NCI on the transcription, translation and catalytic activity of both xenobiotic-metabolizing (XME) and antioxidant enzymes were studied in an animal model. In fact, boosting both antioxidant defences and "good" phase-II XMEs, together with down-regulation of "bad" phase-I XMEs, is still considered one of the most widely used strategies of cancer control. Six male Sprague Dawley rats were used for each treatment group. According to the Italian Society of Human Nutrition, a serving of fruit, vegetables and leafy greens corresponds to 150, 250 and 50 g, respectively, in a 70 kg man. Proportionally, rats received one or five servings of lyophilized onion, tomato, peach, black grape or lettuce - for the white, red, yellow, violet or green diet, respectively - or five servings of each vegetable ("5 a day" diet) by oral gavage daily for 10 consecutive days. Liver subcellular fractions were tested for various cytochrome P450 (CYP)-linked monooxygenases, phase-II XMEs such as glutathione S-transferase (GST) and UDP-glucuronosyl transferase (UDPGT), as well as for some antioxidant enzymes. Hepatic transcriptional and translational effects were evaluated by reverse transcription-polymerase chain reaction (RT-PCR) and Western blot analysis, respectively. The dROMs test was used to measure plasma oxidative stress. Routine haematochemical parameters were also monitored. While administration of five servings did not significantly alter XME catalytic activity, the lower dose caused a complex pattern of CYP inactivation, with lettuce exerting particularly strong effects (a loss of up to 43% and 45% for CYP content and CYP2B1/2-linked XMEs, respectively; P<0.01). The "5 a day" supplementation produced the most pronounced modulations (a loss of up to 60% for CYP2E1-linked XMEs and a reduction of CYP content of 54%; P<0.01). Testosterone hydroxylase activity confirmed these results. RT-PCR and Western blot analysis revealed that the XME inactivations under the "5 a day" diet resulted from both a transcriptional and a translational effect, while lettuce did not exert such effects. All administrations produced little or no modulation of phase-II XMEs. Apart from the "5 a day" supplementation and the single serving of lettuce, which strongly induced DT-diaphorase (an increase of up to 141% and 171%, respectively; P<0.01), antioxidant enzymes were not significantly changed. RT-PCR analysis confirmed the DT-diaphorase induction brought about by the administration of both the "5 a day" diet and a single serving of lettuce; furthermore, it unmasked a similar result for heme oxygenase.
The dROMs test revealed a condition of high systemic oxidative stress as a consequence of diet supplementation with the "5 a day" diet and with a single serving of lettuce (an increase of up to 600% and 900%, respectively; P<0.01). Haematochemical parameters were only mildly affected by such dietary manipulations. According to classical chemopreventive theory, these results could be of particular relevance: even if antioxidant enzymes were only mildly affected, the phase-I inactivating ability of these vegetables would be a worthwhile strategy for cancer control. However, the considerable systemic amount of reactive oxygen species recorded, together with the complexity of these enzymes and their functions, suggests caution in the widespread use of vegan/vegetarian diets as human chemopreventive strategies. In fact, recent literature rather suggests that only diets rich in fruits and vegetables and poor in certain types of fat, together with a moderate caloric intake, could be associated with a reduced cancer risk.

Relevance:

10.00%

Publisher:

Abstract:

This thesis concerns the design and development of an algorithm for optimized distribution planning with synchronized trips; the method developed is a matheuristic algorithm. Matheuristic methods arise from the integration of exact algorithms used within a metaheuristic framework, chosen as the solution paradigm for the problem. The combination of exact components and metaheuristic algorithms aims to exploit the advantages of both approaches: thanks to the exact components, it is possible to operate effectively and to concentrate on some of the problem's constraints, while the metaheuristic framework makes it possible to explore large areas of the search space in acceptable time. The problem analyzed in this thesis is a transportation problem, namely the Vehicle Routing Problem with time windows and pairwise synchronization constraints (VRPTWPS). The problem asks for an optimized plan for goods delivery trips to a set of customers; each customer requires that the delivery take place within predefined time windows, and a subset of customers additionally requires that the delivery be performed by exactly two operators. This last constraint therefore requires that two operators, regardless of the routes they travel, meet at the same customer at the same instant. The synchronization constraint makes the problem hard to solve effectively with traditional local search methods; hence the use of matheuristic methods for the optimized solution of the problem. Thanks to the use of exact algorithms, matheuristic methods can handle some of the problem's constraints more effectively.
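To make the pairwise synchronization constraint concrete, here is a small Python sketch (an illustration under assumed, hypothetical data structures, not the algorithm of the thesis): it propagates service start times along two routes while respecting time windows, and checks whether a customer requiring two operators is served by both routes at the same instant.

```python
# Illustrative check of time-window and pairwise-synchronization feasibility.
# A route is a list of customer ids; travel[i][j] gives travel times, windows[c] = (earliest, latest),
# service[c] is the service duration. All data below are hypothetical.

def start_times(route, depot, travel, windows, service):
    """Forward propagation of service start times; returns None if a window is violated."""
    t, prev, starts = 0.0, depot, {}
    for c in route:
        t += travel[prev][c]
        earliest, latest = windows[c]
        t = max(t, earliest)          # wait if we arrive early
        if t > latest:
            return None               # time-window violation
        starts[c] = t
        t += service[c]
        prev = c
    return starts

def synchronized_ok(route_a, route_b, sync_customers, depot, travel, windows, service):
    sa = start_times(route_a, depot, travel, windows, service)
    sb = start_times(route_b, depot, travel, windows, service)
    if sa is None or sb is None:
        return False
    # Every customer needing two operators must appear in both routes at the same instant.
    return all(c in sa and c in sb and abs(sa[c] - sb[c]) < 1e-6 for c in sync_customers)


if __name__ == "__main__":
    travel = {0: {1: 10, 2: 15}, 1: {2: 5}, 2: {1: 5}}
    windows = {1: (0, 100), 2: (20, 60)}
    service = {1: 5, 2: 5}
    print(synchronized_ok([1, 2], [2], {2}, depot=0, travel=travel, windows=windows, service=service))
```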

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates Decomposition and Reformulation as a way of solving Integer Linear Programming problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems like vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of automatically decomposing and reformulating the input problem, is applicable as a black-box solution algorithm, and works as a complement and an alternative to standard solution techniques. The idea of Decomposing and Reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP several decompositions can be defined, depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; in that case it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (Branch-and-Price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a strong speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
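As an illustration of the column-generation loop described above (a generic textbook cutting-stock example, not the framework developed in the thesis), the sketch below alternates between a restricted master LP solved with SciPy's linprog (a recent SciPy with the HiGHS backend is assumed) and an unbounded-knapsack pricing problem; a new pattern is added as long as its reduced cost is negative.

```python
# Textbook cutting-stock column generation, for illustration only (assumes a recent SciPy).
# Master: min sum_p x_p  s.t.  sum_p a_ip * x_p >= d_i,  x_p >= 0  (LP relaxation).
# Pricing: max sum_i pi_i * a_i  s.t.  sum_i w_i * a_i <= W,  a_i integer >= 0  (unbounded knapsack).

import numpy as np
from scipy.optimize import linprog

W = 100                                   # roll width
widths = np.array([45, 36, 31, 14])       # item widths
demand = np.array([97, 610, 395, 211])    # item demands

columns = [np.eye(len(widths))[i] * (W // w) for i, w in enumerate(widths)]  # trivial start patterns

def solve_master(cols):
    A = np.column_stack(cols)
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    duals = -res.ineqlin.marginals        # duals of the >= covering constraints
    return res, duals

def price(duals):
    """Unbounded knapsack by dynamic programming; returns (best value, pattern)."""
    best = np.zeros(W + 1)
    choice = np.full(W + 1, -1)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap] = best[cap - w] + duals[i]
                choice[cap] = i
    pattern, cap = np.zeros(len(widths)), W
    while cap > 0 and choice[cap] >= 0:
        pattern[choice[cap]] += 1
        cap -= widths[choice[cap]]
    return best[W], pattern

while True:
    res, duals = solve_master(columns)
    value, pattern = price(duals)
    if value <= 1.0 + 1e-9:               # reduced cost 1 - value is non-negative: stop
        break
    columns.append(pattern)               # column generation step

print("LP bound on number of rolls:", res.fun)
```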

Relevance:

10.00%

Publisher:

Abstract:

One of the most interesting challenges of the coming years will be the automation of Air Space Systems. This process will involve different aspects such as Air Traffic Management, Aircraft and Airport Operations, and Guidance and Navigation Systems. The use of UAS (Uninhabited Aerial Systems) for civil missions will be one of the most important steps in this automation process. In civil air space, Air Traffic Controllers (ATC) manage the air traffic, ensuring that a minimum separation between the controlled aircraft is always provided. For this purpose ATCs use several operative avoidance techniques such as holding patterns or rerouting. The use of UAS in this context will require the definition of strategies for a common management of piloted and unpiloted air traffic that allow the UAS to self-separate. As a first employment in civil air space we consider a UAS surveillance mission that consists in departing from a ground base, taking pictures over a set of mission targets and coming back to the same ground base. During the whole mission a set of piloted aircraft fly in the same airspace, and thus the UAS has to self-separate using the ATC avoidance techniques mentioned above. We consider two objectives: the first is the minimization of the air traffic's impact on the mission; the second is the minimization of the mission's impact on the air traffic. A particular version of the well-known Travelling Salesman Problem (TSP), called the Time-Dependent TSP (TDTSP), has been studied to deal with traffic problems in big urban areas. Its basic idea is that the cost of the route between two clients depends on the period of the day in which it is traversed. This thesis argues that such an idea can be applied to air traffic as well, using a suitable time horizon compatible with aircraft operations. The cost of a UAS sub-route will depend on the air traffic it will meet when starting that sub-route at a specific moment, and consequently on the avoidance maneuver it will use to resolve the conflict. Conflict avoidance is a topic that has been extensively studied in past years using different approaches. In this thesis we propose a new approach based on the use of ATC operative techniques, which makes it possible both to model the UAS problem within a TDTSP framework and to adopt an Air Traffic Management perspective. Starting from this kind of mission, the problem of UAS insertion in civil air space is formalized as the UAS Routing Problem (URP). For this purpose we introduce a new structure called the Conflict Graph, which makes it possible to model the avoidance maneuvers and to define the arc cost as a function of the departure time. Two Integer Linear Programming formulations of the problem are proposed. The first is based on a TDTSP formulation that, unfortunately, is weaker than the TSP formulation. Thus a new formulation based on a TSP variation that uses specific penalties to model the holdings is proposed. Different algorithms are presented: exact algorithms, simple heuristics used as upper bounds on the number of time steps used, and metaheuristic algorithms such as a Genetic Algorithm and Simulated Annealing. Finally, an air traffic scenario has been simulated using real air traffic data in order to test our algorithms. Graphic tools have been used to represent the Milano Linate air space and its air traffic during different days. Such data have been provided by ENAV S.p.A. (the Italian Agency for Air Navigation Services).
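To illustrate the time-dependent cost idea (a generic TDTSP evaluation, not the thesis's URP model or its Conflict Graph), the sketch below evaluates a tour where the cost of each leg depends on the departure time slot, and improves it with a simple simulated annealing over segment reversals; the cost table and parameters are hypothetical.

```python
# Generic time-dependent TSP (TDTSP) tour evaluation plus a tiny simulated annealing.
# cost[t][i][j] is the travel time from i to j when departing in time slot t (hypothetical data).

import math
import random

random.seed(0)
N, SLOTS, SLOT_LEN = 6, 8, 10.0
cost = [[[0 if i == j else random.uniform(5, 25) for j in range(N)] for i in range(N)]
        for _ in range(SLOTS)]

def tour_cost(tour):
    """Accumulate travel time; the slot of each leg is derived from its departure time."""
    t, total = 0.0, 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):          # close the tour back to the start
        slot = min(int(t // SLOT_LEN), SLOTS - 1)
        leg = cost[slot][a][b]
        total += leg
        t += leg
    return total

def anneal(tour, iters=20000, t0=10.0, alpha=0.9995):
    best, best_c, cur, cur_c, temp = tour[:], tour_cost(tour), tour[:], tour_cost(tour), t0
    for _ in range(iters):
        i, j = sorted(random.sample(range(1, N), 2))      # keep the start node (index 0) fixed
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]  # 2-opt style reversal
        cand_c = tour_cost(cand)
        if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / temp):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur[:], cur_c
        temp *= alpha
    return best, best_c

if __name__ == "__main__":
    tour = list(range(N))
    print("initial:", round(tour_cost(tour), 1), "annealed:", round(anneal(tour)[1], 1))
```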

Relevance:

10.00%

Publisher:

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Having access to this type of data allows the development of theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. In order for these applications to be possible, we first need to develop the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. In fact, these data are affected by positioning errors and are often very far apart from each other (~2 km). For these reasons, the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data with roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a specifically developed optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
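Since the path identification relies on an A*-based routing step, here is a minimal, self-contained A* sketch on a small road graph (a generic illustration with hypothetical nodes and coordinates, not the thesis's specialized algorithm):

```python
# Minimal A* shortest-path search on a toy road graph (illustrative only).
# Nodes carry (x, y) coordinates; the heuristic is the straight-line (Euclidean) distance,
# which is admissible when edge weights are geometric lengths.

import heapq
import math

coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1), "E": (2, 0)}
graph = {                       # adjacency list: node -> [(neighbour, edge length), ...]
    "A": [("B", 1.0)],
    "B": [("C", 1.0), ("E", 1.2)],
    "C": [("D", 1.0)],
    "E": [("D", 1.1)],
    "D": [],
}

def heuristic(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    """Return (cost, path) of the cheapest path, or (inf, []) if unreachable."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, w in graph[node]:
            g2 = g + w
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    print(a_star("A", "D"))     # (3.0, ['A', 'B', 'C', 'D']) rather than the longer A-B-E-D
```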

Relevance:

10.00%

Publisher:

Abstract:

This work describes the synthesis, characterization and manipulation of anisotropic colloids made of liquid-crystalline polymers. Different techniques were used in order to obtain colloids of different sizes and from different polymers. On the one hand, colloids of nematic and smectic polymers with diameters mostly in the range of 0.5 to 3.5 micrometres were prepared. For this purpose, 16 different acrylate and methacrylate monomers were synthesized and polymerized by dispersion polymerization. By varying the polymerization conditions, colloids of different sizes and polydispersities were obtained. Seeded polymerization additionally made it possible to increase the particle size while keeping the polydispersity low. Polarization microscopy shows that most colloids with a size between about 2 and 4 micrometres have a bipolar director configuration. Some of these colloids were trapped and rotated with optical tweezers using circularly polarized light. On the other hand, various liquid-crystalline polymers (polysiloxanes, main-chain polymers and polyacrylates) were converted by the miniemulsion process into colloids with diameters in the range of about 50 to 300 nm. The particle size could be influenced by varying the amount of emulsifier and polymer as well as the type of emulsifier. For the polysiloxane colloids, the internal structure was elucidated by TEM and cryo-TEM, since the silicon in the polymer backbone provides contrast without additional staining. The TEM images clearly show the smectic layer structure within the colloids made of "diluted" copolysiloxanes and are thus the first direct proof of the microphase separation between the mesogens and the polysiloxane chains, which until now had only been predicted indirectly on the basis of X-ray measurements. Onion-like structures were found for the copolysiloxanes with two-ring mesogens, and parallel layer structures for the copolysiloxanes with three-ring mesogens. In the first case the smectic layer structure follows the spherical symmetry of the colloid; in the second case the tendency of the smectic layers to arrange in parallel dominates.

Relevance:

10.00%

Publisher:

Abstract:

Here, we present the adaptation and optimization of (i) the solvothermal and (ii) the metal-organic chemical vapor deposition (MOCVD) approach as simple methods for the high-yield synthesis of MQ2 (M = Mo, W, Zr; Q = O, S) nanoparticles. Extensive characterization was carried out using X-ray diffraction (XRD), scanning and transmission electron microscopy (SEM/TEM) combined with energy-dispersive X-ray analysis (EDXA), Raman spectroscopy, thermal analyses (DTA/TG), small-angle X-ray scattering (SAXS) and BET measurements. After a general introduction to the state of the art, a simple route to nanostructured MoS2 based on the decomposition of the cluster-based precursor (NH4)2Mo3S13∙xH2O under solvothermal conditions (toluene, 653 K) is presented. Solvothermal decomposition results in nanostructured material that is distinct from the material obtained by decomposition of the same precursor in sealed quartz tubes at the same temperature. When carried out in the presence of the surfactant cetyltrimethylammonium bromide (CTAB), the decomposition product exhibits highly disordered MoS2 lamellae with high surface areas. The synthesis of WS2 onion-like nanoparticles by means of a single-step MOCVD process is discussed. Furthermore, the results of the successful transfer to the W-S system of the two-step MOCVD-based synthesis of MoQ2 nanoparticles (Q = S, Se), comprising the formation of amorphous precursor particles followed by the formation of fullerene-like particles in a subsequent annealing step, are presented. Based on a study of the temperature dependence of the reactions, a set of conditions for the formation of onion-like structures in a one-step reaction could be derived. The MOCVD approach allows a selective synthesis of open and filled fullerene-like chalcogenide nanoparticles. An in situ heating-stage transmission electron microscopy (TEM) study was employed to comparatively investigate the growth mechanism of MoS2 and WS2 nanoparticles obtained from MOCVD upon annealing. Round, mainly amorphous particles in the pristine sample transform into hollow onion-like particles upon annealing. A significant difference between the two compounds could be demonstrated in their crystallization behaviour. The results of the in situ heating experiments are then compared to those obtained from an ex situ annealing process under Ar. Finally, a low-temperature synthesis of monodisperse ZrO2 nanoparticles with diameters of ~8 nm is introduced. While the solvent could be omitted, synthesis in an autoclave is crucial for obtaining nano-sized (n) ZrO2 by thermal decomposition of Zr(C2O4)2. The n-ZrO2 particles exhibit high specific surface areas (up to 385 m2/g), which makes them promising candidates as catalysts and catalyst supports. The co-existence of m- and t-ZrO2 nanoparticles of 6-9 nm in diameter, i.e. above the critical particle size of 6 nm, demonstrates that particle size is not the only factor in the stabilization of the t-ZrO2 modification at room temperature. In conclusion, synthesis within an autoclave (with and without solvent) and the MOCVD process could be successfully adapted to the synthesis of MoS2, WS2 and ZrO2 nanoparticles. A comparative in situ heating-stage TEM study elucidated the growth mechanism of MoS2 and WS2 fullerene-like particles. As the general processes are similar, a transfer of this synthesis approach to other layered transition-metal chalcogenide systems is to be expected.
Application of the obtained nanomaterials as lubricants (MoS2, WS2) or as dental filling materials (ZrO2) is currently under investigation.

Relevance:

10.00%

Publisher:

Abstract:

Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system together. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failures and variations and keep operating, at the cost of some performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to address the reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC testchip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the interconnection infrastructure of MPSoCs. As a physical design technique, we propose the bundle routing framework as an effective way to route the global links of a Network on Chip. For architecture-level design, two cases are addressed: (i) intra-cluster communication, for which we propose a low-latency interconnect robust to variability; (ii) inter-cluster communication, for which online functional testing together with a reliable NoC configuration is proposed. We also propose dualVdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative strategy with respect to the design techniques, since it enforces the compensation at the post-silicon stage.

Relevance:

10.00%

Publisher:

Abstract:

"I computer del nuovo millennio saranno sempre più invisibili, o meglio embedded, incorporati agli oggetti, ai mobili, anche al nostro corpo. L'intelligenza elettronica sviluppata su silicio diventerà sempre più diffusa e ubiqua. Sarà come un'orchestra di oggetti interattivi, non invasivi e dalla presenza discreta, ovunque". [Mark Weiser, 1991] La visione dell'ubiquitous computing, prevista da Weiser, è ormai molto vicina alla realtà e anticipa una rivoluzione tecnologica nella quale l'elaborazione di dati ha assunto un ruolo sempre più dominante nella nostra vita quotidiana. La rivoluzione porta non solo a vedere l'elaborazione di dati come un'operazione che si può compiere attraverso un computer desktop, legato quindi ad una postazione fissa, ma soprattutto a considerare l'uso della tecnologia come qualcosa di necessario in ogni occasione, in ogni luogo e la diffusione della miniaturizzazione dei dispositivi elettronici e delle tecnologie di comunicazione wireless ha contribuito notevolmente alla realizzazione di questo scenario. La possibilità di avere a disposizione nei luoghi più impensabili sistemi elettronici di piccole dimensioni e autoalimentati ha contribuito allo sviluppo di nuove applicazioni, tra le quali troviamo le WSN (Wireless Sensor Network), ovvero reti formate da dispositivi in grado di monitorare qualsiasi grandezza naturale misurabile e inviare i dati verso sistemi in grado di elaborare e immagazzinare le informazioni raccolte. La novità introdotta dalle reti WSN è rappresentata dalla possibilità di effettuare monitoraggi con continuità delle più diverse grandezze fisiche, il che ha consentito a questa nuova tecnologia l'accesso ad un mercato che prevede una vastità di scenari indefinita. Osservazioni estese sia nello spazio che nel tempo possono essere inoltre utili per poter ricavare informazioni sull'andamento di fenomeni naturali che, se monitorati saltuariamente, non fornirebbero alcuna informazione interessante. Tra i casi d'interesse più rilevanti si possono evidenziare: - segnalazione di emergenze (terremoti, inondazioni) - monitoraggio di parametri difficilmente accessibili all'uomo (frane, ghiacciai) - smart cities (analisi e controllo di illuminazione pubblica, traffico, inquinamento, contatori gas e luce) - monitoraggio di parametri utili al miglioramento di attività produttive (agricoltura intelligente, monitoraggio consumi) - sorveglianza (controllo accessi ad aree riservate, rilevamento della presenza dell'uomo) Il vantaggio rappresentato da un basso consumo energetico, e di conseguenza un tempo di vita della rete elevato, ha come controparte il non elevato range di copertura wireless, valutato nell'ordine delle decine di metri secondo lo standard IEEE 802.15.4. Il monitoraggio di un'area di grandi dimensioni richiede quindi la disposizione di nodi intermedi aventi le funzioni di un router, il cui compito sarà quello di inoltrare i dati ricevuti verso il coordinatore della rete. Il tempo di vita dei nodi intermedi è di notevole importanza perché, in caso di spegnimento, parte delle informazioni raccolte non raggiungerebbero il coordinatore e quindi non verrebbero immagazzinate e analizzate dall'uomo o dai sistemi di controllo. Lo scopo di questa trattazione è la creazione di un protocollo di comunicazione che preveda meccanismi di routing orientati alla ricerca del massimo tempo di vita della rete. Nel capitolo 1 vengono introdotte le WSN descrivendo caratteristiche generali, applicazioni, struttura della rete e architettura hardware richiesta. 
Nel capitolo 2 viene illustrato l'ambiente di sviluppo del progetto, analizzando le piattaforme hardware, firmware e software sulle quali ci appoggeremo per realizzare il progetto. Verranno descritti anche alcuni strumenti utili per effettuare la programmazione e il debug della rete. Nel capitolo 3 si descrivono i requisiti di progetto e si realizza una mappatura dell'architettura finale. Nel capitolo 4 si sviluppa il protocollo di routing, analizzando i consumi e motivando le scelte progettuali. Nel capitolo 5 vengono presentate le interfacce grafiche utilizzate utili per l'analisi dei dati. Nel capitolo 6 vengono esposti i risultati sperimentali dell'implementazione fissando come obiettivo il massimo lifetime della rete.
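As a rough illustration of what a lifetime-oriented routing choice can look like (a generic energy-aware rule, not the protocol developed in this thesis), the sketch below picks, among the neighbours that bring a packet closer to the coordinator, the next hop that maximizes the minimum residual battery involved in the step; all node data are hypothetical.

```python
# Generic energy-aware next-hop selection for a WSN, for illustration only.
# Each node knows its hop distance to the coordinator and its residual battery (0..1).
# Among neighbours strictly closer to the coordinator, prefer the one whose choice
# keeps the weakest battery involved as high as possible (a max-min criterion).

def next_hop(node, nodes, neighbours):
    """nodes: id -> {'hops': int, 'battery': float}; neighbours: id -> [ids]."""
    me = nodes[node]
    candidates = [n for n in neighbours[node] if nodes[n]["hops"] < me["hops"]]
    if not candidates:
        return None                                  # no route towards the coordinator
    # max-min residual energy: the bottleneck of the step is min(own battery, relay battery)
    return max(candidates, key=lambda n: min(me["battery"], nodes[n]["battery"]))


if __name__ == "__main__":
    nodes = {
        "coord": {"hops": 0, "battery": 1.0},
        "r1": {"hops": 1, "battery": 0.3},
        "r2": {"hops": 1, "battery": 0.8},
        "leaf": {"hops": 2, "battery": 0.9},
    }
    neighbours = {"leaf": ["r1", "r2"], "r1": ["coord", "leaf"], "r2": ["coord", "leaf"], "coord": ["r1", "r2"]}
    print(next_hop("leaf", nodes, neighbours))       # picks "r2", sparing the depleted relay r1
```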

Relevance:

10.00%

Publisher:

Abstract:

Computer simulations play an ever-growing role in the development of automotive products. Assembly simulation, like many other processes, is used systematically even before the first physical prototype of a vehicle is built, in order to check whether particular components can be assembled easily or whether another part is in the way. Usually, this kind of simulation is limited to rigid bodies. However, a vehicle contains a multitude of flexible parts of various types: cables, hoses, carpets, seat surfaces, insulations, weatherstrips, and so on. Since most of the problems addressed with these simulations concern one-dimensional components, and since an intuitive tool for cable routing is still needed, we have chosen to concentrate on this category, which includes cables, hoses and wiring harnesses. In this thesis, we present a system for simulating one-dimensional flexible parts such as cables or hoses. The modeling of bending and torsion follows the Cosserat model. For this purpose we use a generalized spring-mass system and describe its configuration by a carefully chosen set of coordinates. Gravity and contact forces, as well as the forces responsible for length conservation, are expressed in Cartesian coordinates, while bending and torsion effects are dealt with more effectively by using quaternions to represent the orientation of the segments joining two neighboring mass points. This augmented system allows an easy formulation of all interactions with the most appropriate coordinate type and yields a strongly banded Hessian matrix. An energy-minimizing process yields a solution free of the oscillations that are typical of spring-mass systems. The use of integral forces, similar to an integral controller, allows the constraints to be enforced exactly. The whole system is numerically stable and can be solved at interactive frame rates. It is integrated in the DaimlerChrysler in-house Virtual Reality software veo for use in applications such as cable routing and assembly simulation, and has been well received by users. Parts of this work have been published at the ACM Solid and Physical Modeling Conference 2006 and have been selected for the conference's special issue of the Computer-Aided Design journal.
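To give a flavour of the quantities such a model penalizes (a crude discretization for illustration only, not the Cosserat formulation, the quaternion coordinates or the constants used in the thesis), the sketch below computes a discrete bending energy of a chain of mass points from the turning angles between consecutive segments:

```python
# Crude discrete bending energy of a polyline of mass points (illustration only).
# E_bend ~ k_bend * sum_i theta_i**2 / l_i, where theta_i is the turning angle at inner
# point i and l_i the average length of the two adjacent segments. Constants are arbitrary.

import numpy as np

def bending_energy(points: np.ndarray, k_bend: float = 1.0) -> float:
    segs = np.diff(points, axis=0)                     # segment vectors
    lengths = np.linalg.norm(segs, axis=1)
    energy = 0.0
    for i in range(len(segs) - 1):
        cos_theta = np.dot(segs[i], segs[i + 1]) / (lengths[i] * lengths[i + 1])
        theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
        l_avg = 0.5 * (lengths[i] + lengths[i + 1])
        energy += k_bend * theta ** 2 / l_avg
    return energy


if __name__ == "__main__":
    straight = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
    bent = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    print(bending_energy(straight), bending_energy(bent))   # 0.0 versus a positive value
```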

Relevance:

10.00%

Publisher:

Abstract:

This study, carried out in collaboration with Hera, is an analysis of waste management in Bologna. The research was conducted on several levels: a strategic level, whose aim is to identify new methods for waste collection based on the characteristics of the city's territory; an analytical level, which concerns the improvement of the supporting software applications; and an environmental level, which concerns the calculation of the atmospheric emissions of the vehicles used for waste collection and transport. First of all it was necessary to study Bologna and the current state of its waste collection services. It is by combining these components that, over the last three years, changes have been made in the waste management sector. The following chapters concern the software applications supporting these activities: Siget and Optit. Siget is the service management program currently used for all activities connected with waste collection. It is a program made up of different modules, but it only handles data management. The experimentation with Optit added to this data management the possibility of displaying the data on a map and of associating a routing algorithm with them. The data stored in Siget represented the starting point, the input, and reaching all collection points was the final objective. The last chapter concerns the study of the environmental impact of these waste collection routes. This analysis, based on empirical evaluation and on the implementation of the Corinair formulas in Excel, gives a snapshot of the service in 2010. On this aspect Optit provided its added value, implementing in its algorithm the formulas for the calculation of emissions as well.
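As a minimal illustration of the kind of emission bookkeeping such an analysis involves (a generic distance-times-emission-factor calculation with entirely hypothetical factors, not the Corinair/COPERT formulas actually implemented in the study), consider:

```python
# Generic route-emission bookkeeping: emissions = sum over legs of distance * emission factor.
# The emission factors below are purely hypothetical placeholders, not Corinair/COPERT values.

EMISSION_FACTORS_G_PER_KM = {          # hypothetical factors per pollutant for one vehicle class
    "CO2": 1100.0,
    "NOx": 8.5,
    "PM": 0.25,
}

def route_emissions(leg_lengths_km):
    """Return total grams emitted per pollutant over all legs of a collection route."""
    total_km = sum(leg_lengths_km)
    return {pollutant: factor * total_km
            for pollutant, factor in EMISSION_FACTORS_G_PER_KM.items()}


if __name__ == "__main__":
    legs = [2.4, 1.1, 3.7, 0.8]        # hypothetical leg lengths of one collection tour, in km
    for pollutant, grams in route_emissions(legs).items():
        print(f"{pollutant}: {grams / 1000:.2f} kg")
```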

Relevance:

10.00%

Publisher:

Abstract:

A participatory thesis project for people with disabilities: an Android application with an interface for entering routing information and informative markers on OpenStreetMap maps, using the Mapsforge library.