910 results for Computer network protocols.
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung, and Philips, as well as universities such as Bologna University, M.I.T., and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose aEqualized, a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model, and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this Thesis helped develop;
• a comprehensive, simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay stations (repeaters) and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi-Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
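For reference, the Spidergon topology analyzed above is commonly described as a ring of N nodes (N even) in which every node also has a link to the diametrically opposite node, giving each router exactly three neighbors. The following minimal Python sketch (an illustration of that published topology, not code from the thesis) builds the three-neighbor adjacency:

```python
# Illustrative sketch of the Spidergon topology: N nodes (N even) sit on
# a ring, and each node has three links: clockwise, counter-clockwise,
# and a "cross" link to the diametrically opposite node.

def spidergon_neighbors(node: int, n: int) -> dict:
    """Return the three neighbors of `node` in an n-node Spidergon (n even)."""
    assert n % 2 == 0, "Spidergon is defined for an even number of nodes"
    return {
        "clockwise": (node + 1) % n,
        "counter_clockwise": (node - 1) % n,
        "across": (node + n // 2) % n,
    }

if __name__ == "__main__":
    # In a 16-node Spidergon, node 3 links to nodes 4, 2, and 11.
    print(spidergon_neighbors(3, 16))
```

The constant node degree is what distinguishes Spidergon from meshes and tori inherited from parallel computing: router cost stays fixed as N grows, while the cross links bound the hop count.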
Abstract:
Demands for functionality enhancements, cost reductions, and power savings clearly suggest the introduction of multi- and many-core platforms in real-time embedded systems. However, when compared to uni-core platforms, many-cores experience additional problems, namely the lack of scalable coherence mechanisms and the necessity to perform migrations. These problems have to be addressed before such systems can be considered for integration into the real-time embedded domain. We have devised several agreement protocols which solve some of the aforementioned issues. The protocols allow applications to plan and organise their future executions both temporally and spatially (i.e. when and where the next job will be executed). Decisions can be driven by several factors, e.g. load balancing, energy savings, and thermal issues. All presented protocols are analytically described, with particular emphasis on their respective real-time behaviours and worst-case performance. The underlying assumptions are based on the multi-kernel model and the message-passing paradigm, which constitutes the communication between the interacting instances.
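As a rough intuition for such agreement protocols (a toy sketch under assumptions of this listing, not the paper's actual protocols): cores could exchange bid messages and agree on where the next job runs, e.g. on the least-loaded core. A real-time variant would additionally bound the number and latency of messages so that worst-case response times remain analyzable.

```python
# Toy message-passing agreement round: cores bid for a job's next
# execution slot and the least-loaded bidder wins. Hypothetical loads
# and names; illustrative only.
from queue import Queue

def agreement_round(job_id: str, cores: dict, inbox: Queue) -> str:
    # Each core posts a bid message: (core_id, current_load).
    for core_id, load in cores.items():
        inbox.put((core_id, load))
    # The coordinator collects all bids and picks the least-loaded core.
    bids = [inbox.get() for _ in cores]
    winner, _ = min(bids, key=lambda bid: bid[1])
    return winner

if __name__ == "__main__":
    cores = {"core0": 0.7, "core1": 0.3, "core2": 0.5}  # hypothetical loads
    print(agreement_round("job42", cores, Queue()))  # -> core1
```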
Abstract:
PURPOSE: To assess the outcome and patterns of failure in patients with testicular lymphoma treated by chemotherapy (CT) and/or radiation therapy (RT). METHODS AND MATERIALS: Data from a series of 36 adult patients with Ann Arbor Stage I (n = 21), II (n = 9), III (n = 3), or IV (n = 3) primary testicular lymphoma, consecutively treated between 1980 and 1999, were collected in a retrospective multicenter study by the Rare Cancer Network. Median age was 64 years (range: 21-91 years). Full staging workup (chest X-ray, testicular ultrasound, abdominal ultrasound, and/or thoracoabdominal computed tomography, bone marrow assessment, full blood count, lactate dehydrogenase, and cerebrospinal fluid evaluation) was completed in 18 (50%) patients. All but one patient underwent orchidectomy, and spermatic cord infiltration was found in 9 patients. Most patients (n = 29) had CT, consisting in most cases of cyclophosphamide, doxorubicin, vincristine, and prednisolone (CHOP), with (n = 17) or without intrathecal CT. External RT was delivered to the scrotum alone (n = 12) or to the testicular, iliac, and para-aortic regions (n = 8). The median RT dose was 31 Gy (range: 20-44 Gy) in a median of 17 fractions (10-24), using a median of 1.8 Gy (range: 1.5-2.5 Gy) per fraction. The median follow-up period was 42 months (range: 6-138 months). RESULTS: After a median period of 11 months (range: 1-76 months), 14 patients presented with lymphoma progression, mostly in the central nervous system (CNS) (n = 8). Among the 17 patients who received intrathecal CT, 4 had a CNS relapse (p = NS). No testicular, iliac, or para-aortic relapse was observed in patients receiving RT to these regions. The 5-year overall, lymphoma-specific, and disease-free survival rates were 47%, 66%, and 43%, respectively. In univariate analyses, the statistically significant factors favorably influencing the outcome were early-stage disease and combined-modality treatment. Neither RT technique nor total dose influenced the outcome. Multivariate analysis revealed that the most favorable independent factors predicting the outcome were younger age, early-stage disease, and combined-modality treatment. CONCLUSIONS: In this multicenter retrospective study, the CNS was found to be the principal site of relapse, and no extra-CNS lymphoma progression was observed in the irradiated volumes. More effective CNS prophylaxis, including combined modalities, should be prospectively explored in this uncommon site of extranodal lymphoma.
Abstract:
The Internet today has become a vital part of day-to-day life, owing to the revolutionary changes it has brought about in various fields. Dependence on the Internet as an information highway and knowledge bank is increasing exponentially, so that going back is beyond imagination. Transfer of critical information is also carried out through the Internet. This widespread use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a vital need for information security. The Internet has also become an active field for crackers and intruders. The whole development in this area can become null and void if fool-proof security of the data is not ensured, without any chance of it being adulterated. It is hence a challenge before the professional community to develop systems that ensure the security of data sent through the Internet. Stream ciphers, hash functions, and message authentication codes play vital roles in providing security services like confidentiality, integrity, and authentication of the data sent through the Internet. There are several such popular and dependable techniques, which have been in wide use for quite a long time. This long-term exposure makes them vulnerable to successful or near-successful attacks. Hence it is the need of the hour to develop new algorithms with better security. Studies were therefore conducted on the various types of algorithms being used in this area, focusing on identifying the properties that impart security. By making use of the perception derived from these studies, new algorithms were designed. The performance of these algorithms was then studied, followed by necessary modifications, to yield an improved system consisting of a new stream cipher algorithm MAJE4, a new hash code JERIM-320, and a new message authentication code MACJER-320. Detailed analysis and comparison with the existing popular schemes were also carried out to establish the security levels. The Secure Socket Layer (SSL) / Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the Internet. The cryptographic algorithms RC4 and HMAC have been in use for achieving security services like confidentiality and authentication in SSL / TLS. But recent attacks on RC4 and HMAC have raised questions about the reliability of these algorithms. Hence MAJE4 and MACJER-320 have been proposed as substitutes for them. Detailed studies on the performance of these new algorithms were carried out; it has been observed that they are dependable alternatives.
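To make the role of a message authentication code concrete, here is a generic illustration using Python's standard-library HMAC (the construction the abstract says MACJER-320 would replace); the internals of MAJE4, JERIM-320, and MACJER-320 are not given in the abstract and are not reproduced here. A MAC lets the receiver verify both the integrity and the authenticity of a message under a shared key:

```python
# Generic MAC illustration with the standard library; not the thesis's
# MACJER-320 construction.
import hashlib
import hmac

key = b"shared-secret-key"          # hypothetical shared key
message = b"transfer 100 to alice"  # hypothetical message

# Sender computes the tag and transmits (message, tag).
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag over what it received and compares in
# constant time; any modification of the message changes the tag.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True: authentic and unmodified
```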
Abstract:
We present an intuitive geometric approach for analysing the structure and fragility of T1-weighted structural MRI scans of human brains. Apart from computing characteristics like the surface area and volume of regions of the brain that consist of highly active voxels, we also employ Network Theory in order to test how close these regions are to breaking apart. This analysis is used in an attempt to automatically classify subjects into three categories: Alzheimer’s disease, mild cognitive impairment and healthy controls, for the CADDementia Challenge.
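A minimal sketch of the kind of fragility test described above, under stated assumptions: a toy graph stands in for a real voxel-adjacency graph extracted from an MRI region, and networkx supplies the connectivity checks. This is an illustration of the network-theoretic idea, not the CADDementia pipeline:

```python
# Measure how close a region's voxel graph is to breaking apart by
# removing the most-connected nodes until the graph disconnects.
import networkx as nx

def removals_until_disconnected(g: nx.Graph) -> int:
    """Greedily remove highest-degree nodes until the graph disconnects;
    a larger count suggests a more robust (less fragile) region."""
    g = g.copy()
    removed = 0
    while g.number_of_nodes() > 1 and nx.is_connected(g):
        hub = max(g.degree, key=lambda kv: kv[1])[0]  # most-connected voxel
        g.remove_node(hub)
        removed += 1
    return removed

if __name__ == "__main__":
    region = nx.grid_2d_graph(5, 5)  # toy 2-D "region" of 25 voxels
    print(removals_until_disconnected(region))
```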
Abstract:
The assessment of routing protocols for mobile wireless networks is a difficult task, because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of the use of such theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving graph based routing protocol, and then to use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR, and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
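To give a feel for the model (a sketch under assumptions of this listing, not the paper's NS2 code): an evolving graph can be represented as a set of edges labeled with the time steps at which they exist, and a foremost-journey search then finds the earliest possible arrival time from a source, waiting at intermediate nodes when a link is down:

```python
# Foremost (earliest-arrival) journey search on a toy evolving graph.
import heapq

def foremost_arrival(edges: dict, source: str, target: str):
    """edges maps (u, v) -> sorted list of time steps when the undirected
    link u-v is up; traversing a link takes one time step."""
    best = {source: 0}
    heap = [(0, source)]  # (arrival time, node)
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        for (a, b), times in edges.items():
            if u not in (a, b):
                continue
            v = b if u == a else a
            # Earliest time >= t at which this link is up.
            arrival = next((s + 1 for s in times if s >= t), None)
            if arrival is not None and arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None  # target unreachable in this schedule

if __name__ == "__main__":
    edges = {("A", "B"): [0, 4], ("B", "C"): [2], ("A", "C"): [9]}
    # A->B at t=0, wait at B, B->C at t=2: arrive at t=3 (beats A->C at t=9).
    print(foremost_arrival(edges, "A", "C"))  # 3
```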
Abstract:
Computers and network services have become a guaranteed presence in many environments. This has resulted in the growth of illicit events, and computer and network security has therefore become an essential point in any computing environment. Many methodologies have been created to identify these events; however, with the increasing number of users and services on the Internet, monitoring a large network environment presents many difficulties. This paper proposes a methodology for event detection in large-scale networks. The proposal approaches anomaly detection using the NetFlow protocol and statistical methods, monitoring the environment within a time frame best suited to the application. © 2010 Springer-Verlag Berlin Heidelberg.
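As a sketch of the statistical side of such an approach (a common baseline, not necessarily the paper's method): per-interval NetFlow record counts can be compared against a sliding-window mean, flagging intervals that deviate by more than a few standard deviations:

```python
# Flag anomalous traffic intervals with a z-score over a sliding window.
from collections import deque
from statistics import mean, stdev

def detect(flow_counts, window=6, threshold=3.0):
    """Yield (index, count) for intervals whose flow count deviates more
    than `threshold` standard deviations from the recent window."""
    history = deque(maxlen=window)
    for i, count in enumerate(flow_counts):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(count - mu) / sigma > threshold:
                yield i, count
        history.append(count)

if __name__ == "__main__":
    # Hypothetical flows-per-minute series with a spike at index 8.
    series = [100, 104, 98, 101, 99, 103, 100, 102, 450, 101]
    print(list(detect(series)))  # [(8, 450)]
```

The window length controls the trade-off the abstract alludes to: shorter windows react faster, longer windows give a steadier baseline for large environments.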
Abstract:
This paper presents an NCAP embedded on a DE2 kit, with a Nios II processor and uClinux, for the development of a network gateway with two interfaces, wireless (ZigBee) and wired (RS232), based on IEEE 1451. Both communication interfaces, wireless and wired, were developed to operate point-to-point and with the same protocols, based on IEEE 1451.0-2007. Tests were made using a microcomputer, from whose browser it was possible to access the web page stored on the DE2 kit and send control and monitoring commands to both TIMs (WTIM and STIM). The system demonstrates a different way of developing the NCAP node, applicable to different environments, wired or wireless, in the same node. © 2011 IEEE.
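A minimal sketch of the gateway's dispatch idea (the driver functions and command bytes below are hypothetical stand-ins; the IEEE 1451.0 command encoding itself is not reproduced here): the NCAP receives a command from the web interface and forwards it over the wired or wireless link depending on the addressed TIM, using the same command format on both links:

```python
# NCAP gateway dispatch sketch: one command format, two transports.

def send_rs232(payload: bytes) -> None:      # hypothetical wired driver
    print("RS232 ->", payload)

def send_zigbee(payload: bytes) -> None:     # hypothetical wireless driver
    print("ZigBee ->", payload)

# The STIM hangs off the wired link, the WTIM off the wireless one.
TIM_LINKS = {"STIM": send_rs232, "WTIM": send_zigbee}

def dispatch(tim: str, command: bytes) -> None:
    """Forward a command from the web page to the addressed TIM."""
    TIM_LINKS[tim](command)

if __name__ == "__main__":
    dispatch("WTIM", b"\x01\x02read-sensor")   # hypothetical command bytes
    dispatch("STIM", b"\x01\x03set-actuator")
```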
Abstract:
This paper proposes the development of a heterogeneous system using a microcontroller (AT90CAN128) in which the CAN protocol model and the IEEE 802.15.4 standard are connected. This module is able to manage and monitor sensors and actuators using CAN and, through the wireless standard 802.15.4, to communicate with the other network modules. © 2011 IEEE.
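As a sketch of what such a bridge has to do (illustrative assumptions, not the paper's firmware): a standard CAN 2.0A frame carries an 11-bit identifier and up to 8 data bytes, which fits comfortably inside an IEEE 802.15.4 payload, so the bridge can simply encapsulate the frame for the radio and unpack it on the other side:

```python
# Encapsulate a CAN frame in an 802.15.4 payload; field layout is an
# assumption made for this sketch.
import struct

def can_to_802154(can_id: int, data: bytes) -> bytes:
    """Pack a CAN frame (11-bit id, <= 8 data bytes) for radio transmission."""
    assert can_id < 0x800 and len(data) <= 8
    return struct.pack(">HB", can_id, len(data)) + data

def can_from_802154(payload: bytes):
    """Inverse of can_to_802154: recover (can_id, data) on the receiver."""
    can_id, dlc = struct.unpack(">HB", payload[:3])
    return can_id, payload[3 : 3 + dlc]

if __name__ == "__main__":
    frame = can_to_802154(0x123, b"\x01\x02\x03")
    print(can_from_802154(frame))  # (291, b'\x01\x02\x03')
```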