862 results for Deadlock Analysis, Distributed Systems, Concurrent Systems, Formal Languages


Relevance: 100.00%

Abstract:

There are approaches that take advantage of unused computational resources in Internet nodes (users' machines). In recent years, peer-to-peer (P2P) networks have gained momentum, mainly due to their support for scalability and fault tolerance. However, current P2P architectures present some problems: node overhead due to message routing; a large number of node reconfigurations when the network topology changes; traffic routed inside a specific network even when it is not directed to a machine of that network; and the lack of a relationship between the proximity of nodes in the P2P overlay and the proximity of those nodes in the IP network. Although some architectures use information about node distance in the IP network, they rely on methods that require dynamic information. In this work we propose a P2P architecture that fixes the aforementioned problems. It is composed of three parts. The first is a basic P2P architecture, called SGrid, which maintains a relationship between nodes in the P2P network and their position in the IP network; it assigns adjacent key regions to nodes of the same organization. The second is a protocol called NATal (Routing and NAT application layer), which extends the basic architecture in order to remove from the nodes the responsibility of routing messages. The third is a special kind of node, called LSP (Lightweight Super-Peer), which is responsible for maintaining the P2P routing table. In addition, this work presents a simulator that validates the architecture and a module of the NATal protocol to be used in Linux routers.
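
To make the SGrid idea concrete, here is a minimal sketch, assuming a circular key space and per-organization node lists (both illustrative, not the dissertation's actual data structures), of how adjacent key regions could be handed to nodes of the same organization:

```python
# Hypothetical sketch of SGrid-style key assignment: nodes of the same
# organization receive adjacent regions of a key space, so that overlay
# neighbours tend to be IP-network neighbours as well. The 16-bit key
# space and all names are illustrative assumptions.

def assign_key_regions(organizations, key_space=2**16):
    """Split the key space into contiguous per-organization slices."""
    total_nodes = sum(len(nodes) for nodes in organizations.values())
    assignments, start = {}, 0
    for org, nodes in organizations.items():
        # Each organization gets a contiguous slice proportional to its size.
        width = key_space * len(nodes) // total_nodes
        step = width // len(nodes)
        for i, node in enumerate(nodes):
            assignments[node] = (start + i * step, start + (i + 1) * step - 1)
        start += width
    return assignments

if __name__ == "__main__":
    orgs = {"org-a.example": ["a1", "a2"], "org-b.example": ["b1"]}
    for node, (lo, hi) in assign_key_regions(orgs).items():
        print(f"{node}: keys {lo}..{hi}")
```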

Relevance: 100.00%

Abstract:

Metaheuristic techniques are known to solve optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate near-optimal solutions, without the guarantee of finding the global optimum. Motivated by the difficulty of these problems, this work proposes the development of parallel hybrid methods using reinforcement learning and the metaheuristics GRASP and Genetic Algorithms. With these techniques, we aim to improve the efficiency of obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in a cooperative and competitive approach with the Genetic Algorithm and GRASP, in a parallel implementation. In this context, it was possible to verify that the implementations in this study showed satisfactory results under both strategies, that is, cooperation and competition between the methods and cooperation and competition between groups. In some instances the global optimum was found; in others, the implementations came close to it. A performance analysis of the proposed approach was carried out, and it shows good results on the requirements that demonstrate the efficiency and speedup (the gain in speed from parallel processing) of the implementations.
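
As an illustration of how Q-learning can build initial solutions for the metaheuristics, here is a hedged sketch for a TSP-like instance; the reward choice (negative distance), the parameters and the instance format are assumptions, not the dissertation's implementation:

```python
import random

def q_learning_tour(dist, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """Learn city-to-city Q-values, then build a tour greedily from them."""
    n = len(dist)
    q = [[0.0] * n for _ in range(n)]
    for _ in range(episodes):
        current, unvisited = 0, set(range(1, n))
        while unvisited:
            if random.random() < eps:                      # explore
                nxt = random.choice(sorted(unvisited))
            else:                                          # exploit
                nxt = max(unvisited, key=lambda j: q[current][j])
            reward = -dist[current][nxt]                   # shorter is better
            best_future = max((q[nxt][j] for j in unvisited - {nxt}), default=0.0)
            q[current][nxt] += alpha * (reward + gamma * best_future - q[current][nxt])
            unvisited.discard(nxt)
            current = nxt
    # A greedy walk over the learned Q-table yields the initial solution.
    tour, unvisited, current = [0], set(range(1, n)), 0
    while unvisited:
        current = max(unvisited, key=lambda j: q[current][j])
        tour.append(current)
        unvisited.discard(current)
    return tour

if __name__ == "__main__":
    dist = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
    print(q_learning_tour(dist))   # initial tour handed to GRASP/GA
```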

Relevance: 100.00%

Abstract:

Previous works have studied the characteristics and peculiarities of P2P networks, especially information security aspects. Most works deal, in some way, with the sharing of resources and, in particular, the storage of files. This work complements previous studies and adds new definitions relating to this kind of system. A system for safe storage of files (SAS-P2P) was specified and built, based on P2P technology, using the JXTA platform. This system uses standard X.509 and PKCS#12 digital certificates, issued and managed by a public key infrastructure, which was also specified and developed based on P2P technology (PKIX-P2P). The information is stored in a specially prepared XML file, which facilitates handling and interoperability among applications. The intention in developing the SAS-P2P system was to offer a complementary service for users of the Giga Natal network, through which the participants can collaboratively build a shared storage area with important security features such as availability, confidentiality, authenticity and fault tolerance. Besides the specification, prototyping and testing of the SAS-P2P system, tests of the PKIX-P2P Manager module were also performed, in order to determine its fault tolerance and to effectively compute the reputation of the certification authorities participating in the system.
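
For a flavor of the certificate handling involved, the sketch below loads a PKCS#12 bundle with the third-party Python `cryptography` package; the file name and password are hypothetical, and this is not SAS-P2P code:

```python
# Minimal sketch of reading PKCS#12 credentials of the kind used above.
from cryptography.hazmat.primitives.serialization import pkcs12

with open("member-credentials.p12", "rb") as f:
    key, cert, ca_chain = pkcs12.load_key_and_certificates(
        f.read(), password=b"changeit"
    )

print("Subject:", cert.subject)        # X.509 identity of the peer
print("Issuer :", cert.issuer)         # CA from the PKI infrastructure
print("Chain  :", len(ca_chain), "additional CA certificate(s)")
```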

Relevance: 100.00%

Abstract:

The use of three-dimensional (3D) graphical objects in multimedia applications is gaining ground. High-speed networks and computers with strong processing and graphics capabilities boost and popularize such 3D applications, whose uses range from military and entertainment applications to education. Among the applications related to education, we highlight those that create virtual copies of cultural spaces such as museums. Through such a copy, one can virtually visit a museum, see other users, communicate and exchange information about works, thereby allowing remote users to visit physically distant museums. A major problem of such virtual environments is keeping them up to date: because they combine several media (text, images, sounds and 3D models), handling and updating a virtual environment requires staff with specialized knowledge, and museums rarely have people with this profile on their teams. Within the GT-MV (Grupo de Trabalho de Museus Virtuais), funded by RNP (Rede Nacional de Ensino e Pesquisa), we propose a portal for the registration, editing and collaborative visiting of virtual museums in Brazil. Updating a system of national scale like this, whether the updates concern works or physical spaces, would be impossible if done only by the project team. Within this scenario, we propose the modeling and implementation of a tool that allows the editing of virtual spaces in an easier and more intuitive way than the available tools. In the context of GT-MV, we apply the SAMVC (Sistema de Autoria de Museus Virtuais Colaborativos) to museums whose curators build the museum from a 2D floor plan. From this two-dimensional information, the system recreates the equivalent space in three dimensions. Thus, with little or no training, team members from each museum can be responsible for updating the system.
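
The 2D-to-3D reconstruction step can be illustrated with a small sketch (not SAMVC code; the segment format and the 3.0 m wall height are assumptions) that extrudes floor-plan wall segments into vertical quads:

```python
# Illustrative sketch: each wall drawn on the floor plan is a 2D segment
# that becomes a vertical quad of fixed height in the 3D scene.

def extrude_walls(segments_2d, height=3.0):
    """Turn 2D wall segments ((x1, y1), (x2, y2)) into 3D quads."""
    quads = []
    for (x1, y1), (x2, y2) in segments_2d:
        quads.append([
            (x1, y1, 0.0), (x2, y2, 0.0),       # base edge on the floor
            (x2, y2, height), (x1, y1, height)  # top edge at wall height
        ])
    return quads

floor_plan = [((0, 0), (5, 0)), ((5, 0), (5, 4))]   # two walls of a room
for quad in extrude_walls(floor_plan):
    print(quad)
```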

Relevance: 100.00%

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages of a seismic study. Seismic processing in particular is focused on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by technological advances in hardware that brought higher storage and digital information processing capabilities, enabling the development of more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithm and the extensive amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could derail the use of these methods. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed, and ultimately the degree of algorithmic scalability was identified with respect to the technological advances expected of future processors.
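
The hot spot of an RTM kernel is the finite-difference time step. The sketch below shows one such step with the outer loop parallelized; the thesis uses OpenMP, and numba's prange is used here as a Python analogue of an OpenMP parallel for (grid, stencil order and coefficients are illustrative assumptions):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def wave_step(p_prev, p_curr, vel2, dt2_over_dx2):
    """One 2D acoustic wave-equation time step, rows split across threads."""
    ny, nx = p_curr.shape
    p_next = np.zeros_like(p_curr)
    for i in prange(1, ny - 1):            # plays the role of `#pragma omp parallel for`
        for j in range(1, nx - 1):
            lap = (p_curr[i + 1, j] + p_curr[i - 1, j]
                   + p_curr[i, j + 1] + p_curr[i, j - 1]
                   - 4.0 * p_curr[i, j])
            p_next[i, j] = (2.0 * p_curr[i, j] - p_prev[i, j]
                            + vel2[i, j] * dt2_over_dx2 * lap)
    return p_next
```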

Relevance: 100.00%

Abstract:

Digital image processing is a field that demands great processing capacity. As such, it becomes relevant to implement software that distributes the processing across several nodes hosted on computers belonging to the same network. This work specifically discusses distributed algorithms for image compression and expansion using the discrete cosine transform (DCT). The results show that the processing-time savings of the parallel algorithms over their sequential equivalents depend on the resolution of the image and the complexity of the computation involved; that is, efficiency is greater the larger the processing time is relative to the time spent on communication between the network nodes.
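
A minimal sketch of the block-wise scheme follows: the image is split into 8x8 blocks and each block is transformed independently. Worker processes on one machine stand in for the network nodes of the original work, and the thresholding rule is an illustrative stand-in for the actual compression step:

```python
import numpy as np
from multiprocessing import Pool
from scipy.fft import dctn, idctn

def compress_block(block):
    """DCT, crude coefficient thresholding, inverse DCT."""
    coeffs = dctn(block, norm="ortho")
    coeffs[np.abs(coeffs) < 1.0] = 0.0
    return idctn(coeffs, norm="ortho")

def parallel_dct(image, block=8, workers=4):
    h, w = image.shape
    tiles = [image[i:i + block, j:j + block]
             for i in range(0, h, block) for j in range(0, w, block)]
    with Pool(workers) as pool:            # stand-in for network nodes
        results = pool.map(compress_block, tiles)
    out = np.empty_like(image)
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = results[k]
            k += 1
    return out

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    print(parallel_dct(img).shape)
```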

Relevance: 100.00%

Abstract:

Pervasive applications use context provision middleware as an infrastructure to provide context information. Typically, these applications use publish/subscribe communication to eliminate direct coupling between components and to allow selective information dissemination based on the interests of the communicating elements. Composite event mechanisms are used together with such middleware to aggregate individual low-level events, originating from heterogeneous sources, into high-level context information relevant to the application. CES (Composite Event System) is a composite event mechanism that works simultaneously and cooperatively with several context provision middlewares. With this integration, applications use CES to subscribe to composite events; CES, in turn, subscribes to the primitive events in the appropriate underlying middlewares and notifies the applications when the composite events occur. Furthermore, CES offers a language with a set of operators for the definition of composite events that also allows context information sharing.
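
As an illustration of the composite-event idea, the toy sketch below fires a high-level event when two primitive events arrive in order; the operator and event names are assumptions, not CES's actual language:

```python
class Sequence:
    """Fires once its primitive events have all arrived in order."""
    def __init__(self, *event_types):
        self.pending = list(event_types)

    def feed(self, event_type, payload=None):
        if self.pending and event_type == self.pending[0]:
            self.pending.pop(0)
        return not self.pending          # True => composite event fired

composite = Sequence("badge_read", "room_empty")
for primitive in ["badge_read", "temperature", "room_empty"]:
    if composite.feed(primitive):
        print("composite event: badge_read followed by room_empty")
```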

Relevance: 100.00%

Abstract:

To manage the complexity associated with multimedia distributed systems, a solution must incorporate middleware concepts in order to hide specific hardware and operating system aspects. Applications in these systems can be implemented on different types of platforms, and the components of these systems must interact with each other. Because the state of the execution platforms varies, a flexible approach should allow the dynamic substitution of components in order to ensure the QoS level of the running application. In this context, this work presents a middleware-layer approach for supporting the dynamic substitution of components in the context of the Cosmos framework, starting with the choice of the target component, proceeding to the decision of which among the candidate components will be chosen, and concluding with the process defined for the exchange. The approach was defined considering the Cosmos QoS model and how it deals with dynamic reconfiguration.
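
A toy version of the decision step, with invented QoS fields and scoring rule (not the Cosmos API), could look like this:

```python
def choose_replacement(contract, candidates):
    """Pick the cheapest candidate component that still honours the contract."""
    viable = [c for c in candidates
              if c["bandwidth"] >= contract["min_bandwidth"]
              and c["latency_ms"] <= contract["max_latency_ms"]]
    return min(viable, key=lambda c: c["cost"]) if viable else None

contract = {"min_bandwidth": 2_000, "max_latency_ms": 50}
candidates = [
    {"name": "codec-a", "bandwidth": 1_500, "latency_ms": 30, "cost": 1},
    {"name": "codec-b", "bandwidth": 4_000, "latency_ms": 40, "cost": 3},
]
print(choose_replacement(contract, candidates))   # -> codec-b
```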

Relevance: 100.00%

Abstract:

The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, where communication between software components located on different physical machines must be supported. An important issue related to communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware that provides an application with the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism, in case the specified quality-of-service restrictions are not being met. To this end, the communication state is monitored (applying techniques such as a feedback control loop) and the related performance metrics are analyzed. The Model-Driven Development (MDD) paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this sense, the metamodel associated with the communication configuration process was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
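
The feedback control loop mentioned above can be sketched as follows; all names, thresholds and the simulated latency curve are illustrative assumptions:

```python
class StreamChannel:
    """Toy channel: a tunable buffer and a replaceable transport."""
    def __init__(self):
        self.buffer_ms, self.min_buffer_ms = 80, 20
        self.transport = "tcp"

    def measured_latency_ms(self, tick):
        return 50 + 15 * tick            # simulated degrading network

def control_loop(channel, max_latency_ms=100, ticks=8):
    for tick in range(ticks):
        latency = channel.measured_latency_ms(tick)       # monitor
        if latency <= max_latency_ms:                     # analyze
            continue
        if channel.buffer_ms > channel.min_buffer_ms:     # retune parameter...
            channel.buffer_ms -= 30
        else:                                             # ...then swap mechanism
            channel.transport = "udp-fec"
        print(tick, latency, channel.buffer_ms, channel.transport)

control_loop(StreamChannel())
```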

Relevance: 100.00%

Abstract:

Nowadays, given the security vulnerabilities of distributed systems, mechanisms are needed to guarantee the security requirements of distributed object communications. Middleware platforms, as component integration platforms, provide security functions that typically offer services for auditing, message protection, authentication and access control. To support these functions, middleware platforms use digital certificates that are provided and managed by external entities. However, most middleware platforms do not define requirements for obtaining, maintaining, validating and delegating digital certificates. In addition, most digital certification systems use X.509 certificates, which are complex and carry many attributes. To address these problems, this work proposes a generic digital certification service for middleware platforms. This service provides flexibility via the joint use of public key certificates, to implement the authentication function, and attribute certificates, to implement the authorization function; it also supports delegation. Certificate-based access control is transparent to objects. The proposed service defines the digital certificate format, the storage and retrieval system, certificate validation and support for delegation. To validate the proposed architecture, this work presents an implementation of the digital certification service for the CORBA middleware platform and a case study that illustrates the service's functionality.
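
The joint use of the two certificate types, including a delegation chain, can be modeled in miniature as below; the structures and field names are illustrative, not the service's actual certificate format:

```python
from dataclasses import dataclass

# An identity (public-key) certificate answers "who is it?"; an attribute
# certificate answers "what may it do?"; delegation chains attribute
# certificates back to a trust root.

@dataclass
class AttributeCert:
    holder: str            # subject of an identity certificate
    attribute: str         # e.g. "invoke:AccountService"
    issuer: str
    delegatable: bool

def authorized(subject, attribute, attr_certs, trust_root="pki-root"):
    """Walk the delegation chain from the subject back to the trust root."""
    current, depth = subject, 0
    while True:
        grant = next((c for c in attr_certs
                      if c.holder == current and c.attribute == attribute), None)
        if grant is None:
            return False
        if depth > 0 and not grant.delegatable:
            return False                  # this holder could not delegate onwards
        if grant.issuer == trust_root:
            return True
        current, depth = grant.issuer, depth + 1

certs = [
    AttributeCert("dept-head", "invoke:AccountService", "pki-root", True),
    AttributeCert("alice", "invoke:AccountService", "dept-head", False),
]
print(authorized("alice", "invoke:AccountService", certs))   # True
```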

Relevance: 100.00%

Abstract:

The objective of this work is the development of a methodology for electric load forecasting based on a neural network. The Backpropagation algorithm is used with an adaptive process based on fuzzy logic. This methodology results in fast training when compared to the conventional formulation of the Backpropagation algorithm. Results are presented using data from a Brazilian electric utility, and the performance is very good for the proposed objective.
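
Since the abstract does not give the fuzzy rule base, the sketch below uses a simple increase/decrease heuristic as a stand-in for the fuzzy adaptation of the learning rate in an otherwise plain backpropagation loop; the toy data and network size are also assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((48, 3))                       # e.g. past loads, temperature
y = X.sum(axis=1, keepdims=True) / 3          # toy "load" target

W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))
lr, prev_err = 0.1, np.inf
for epoch in range(500):
    H = np.tanh(X @ W1)                       # hidden layer
    out = H @ W2                              # linear output
    err = float(np.mean((out - y) ** 2))
    # Stand-in for the fuzzy adaptation: grow lr while the error falls,
    # shrink it sharply when the error rises.
    lr = lr * 1.05 if err < prev_err else lr * 0.5
    prev_err = err
    g_out = 2 * (out - y) / len(X)            # dL/d(out)
    g_hidden = (g_out @ W2.T) * (1 - H ** 2)  # backprop through tanh
    W2 -= lr * H.T @ g_out
    W1 -= lr * X.T @ g_hidden
print(f"final MSE: {err:.5f}")
```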

Relevance: 100.00%

Abstract:

This paper presents the analysis and implementation of a new drive system applied to refrigeration, complying with the restrictions imposed by the IEC standards (harmonic, flicker and EMI - electromagnetic interference - restrictions), in order to obtain high efficiency, high power factor, reduced harmonic distortion in the input current and reduced electromagnetic interference, with excellent performance in the temperature control of a refrigeration prototype (automatic control, precision and high dynamic response). The proposal is to replace the single-phase motor of the conventional refrigeration system with a three-phase motor. In this way, a proper control technique can be applied using a closed loop (feedback control) that allows accurate adjustment of the desired temperature. The refrigeration prototype uses a 0.5 HP three-phase motor and an open (belt-drive) Bitzer IY compressor. The input rectifier stage features reduced input current ripple, reduced output voltage ripple, low-stress devices, low volume for the EMI input filter, high input power factor (PF) and low total harmonic distortion (THD) in the input current, in compliance with the IEC 61000-3-2 standard. The digital controller for the output three-phase inverter stage was developed using conventional voltage-frequency control (scalar V/f control) and a simplified stator-oriented vector control, in order to verify the feasibility and performance of the proposed digital controls for continuous temperature control of the refrigerator prototype. ©2008 IEEE.
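
The scalar V/f law for the inverter stage can be shown in a few lines; the 220 V/60 Hz ratings and the low-speed voltage boost are assumptions, not values from the paper:

```python
# Scalar V/f control: stator voltage tracks frequency to keep flux roughly
# constant, with a small boost so low speeds still produce torque.

V_RATED, F_RATED, V_BOOST = 220.0, 60.0, 8.0

def vf_setpoint(f_cmd):
    """Return the stator voltage command for a frequency command in Hz."""
    f = min(max(f_cmd, 0.0), F_RATED)       # clamp to the rated range
    return V_BOOST + (V_RATED - V_BOOST) * f / F_RATED

for f in (5, 30, 60):
    print(f"{f:>2} Hz -> {vf_setpoint(f):6.1f} V")
```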

Relevance: 100.00%

Abstract:

Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools like the iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Some comparisons between iSPD and Simgrid simulations, including runs of the simulated environment in a real cluster, are also presented. © 2012 IEEE.
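
A toy illustration of a pluggable scheduling policy of the kind iSPD lets users register follows; the interface and both policies are invented for this sketch, not iSPD's actual API:

```python
class RoundRobin:
    """Hand tasks to machines in turn."""
    def __init__(self):
        self.next = 0
    def pick(self, task, machines):
        m = machines[self.next % len(machines)]
        self.next += 1
        return m

class FastestFirst:
    """Always hand tasks to the fastest machine."""
    def pick(self, task, machines):
        return max(machines, key=lambda m: m["mflops"])

def simulate(tasks, machines, policy):
    finish = {m["name"]: 0.0 for m in machines}
    for t in tasks:                       # t = task size in Mflop
        m = policy.pick(t, machines)
        finish[m["name"]] += t / m["mflops"]
    return max(finish.values())           # makespan of the schedule

machines = [{"name": "n0", "mflops": 100}, {"name": "n1", "mflops": 400}]
tasks = [500] * 8
for policy in (RoundRobin(), FastestFirst()):
    print(type(policy).__name__, simulate(tasks, machines, policy))
```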

Relevance: 100.00%

Abstract:

Digital data sets constitute rich sources of information, which can be extracted and evaluated with computational tools, for example those for Information Visualization. Web-based applications, such as social network environments, forums and virtual environments for Distance Learning, are good examples of such sources. The great amount of data has a direct impact on processing and analysis tasks. This paper presents the computational tool Mapper, defined and implemented to use visual representations - maps, graphics and diagrams - to support the decision-making process when analyzing data stored in the Virtual Learning Environment TelEduc-Unesp. © 2012 IEEE.

Relevance: 100.00%

Abstract:

The control of molecular architectures has been exploited in layer-by-layer (LbL) films deposited on Au interdigitated electrodes, thus forming an electronic tongue (e-tongue) system that reached an unprecedented high sensitivity (down to 10⁻¹² M) in detecting catechol. Such high sensitivity was made possible upon using units containing the enzyme tyrosinase, which interacted specifically with catechol, and by processing impedance spectroscopy data with information visualization methods. These latter methods, including the parallel coordinates technique, were also useful for identifying the major contributors to the high distinguishing ability toward catechol. Among several film architectures tested, the most efficient had a tyrosinase layer deposited atop LbL films of alternating layers of dioctadecyldimethylammonium bromide (DODAB) and 1,2-dipalmitoyl-sn-3-glycero-phospho-rac-(1-glycerol) (DPPG), viz., (DODAB/DPPG)₅/DODAB/Tyr. The latter represents a more suitable medium for immobilizing tyrosinase when compared to conventional polyelectrolytes. Furthermore, the distinction was more effective at low frequencies, where double-layer effects on the film/liquid sample dominate the electrical response. Because the optimization of film architectures based on information visualization is completely generic, the approach presented here may be extended to designing architectures for other types of applications in addition to sensing and biosensing. © 2013 American Chemical Society.
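
As a worked example of the parallel coordinates technique named above, the snippet below plots impedance-like readings with pandas; the numbers are made up solely to show the plot call and are not data from the paper:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Each row becomes one polyline across the frequency axes; well-separated
# bundles of lines indicate distinguishable samples.
df = pd.DataFrame(
    [[0.92, 0.60, 0.31, "catechol 1e-12 M"],
     [0.90, 0.58, 0.30, "catechol 1e-12 M"],
     [0.55, 0.40, 0.22, "buffer only"],
     [0.53, 0.41, 0.21, "buffer only"]],
    columns=["Z @ 1 Hz", "Z @ 100 Hz", "Z @ 10 kHz", "sample"],
)
parallel_coordinates(df, "sample")
plt.ylabel("normalized impedance")
plt.show()
```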