945 results for Distributed computer-controlled systems


Relevance:

100.00%

Publisher:

Abstract:

Distributed replication is the key to providing high availability, fault-tolerance, and enhanced performance. The thesis focuses on providing a toolkit to support the automatic construction of reliable distributed service replication systems. The toolkit frees programmers from dealing with network communications and replication control protocols.
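As a rough illustration of the kind of plumbing such a toolkit hides, the sketch below exposes a single put/get interface while a coordinator broadcasts every write to all replicas. All names and the naive broadcast protocol are illustrative assumptions, not the thesis toolkit's actual API.

```python
# Minimal sketch: callers see one service; replication control is hidden.

class Replica:
    def __init__(self):
        self.store = {}

    def apply(self, key, value):
        self.store[key] = value

    def read(self, key):
        return self.store.get(key)

class ReplicatedService:
    """Hides replication control: callers only see put/get."""
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def put(self, key, value):
        # The replication protocol (here: naive broadcast) is invisible
        # to the caller, mirroring what the toolkit automates.
        for r in self.replicas:
            r.apply(key, value)

    def get(self, key):
        # Any surviving replica can answer; try each in turn.
        for r in self.replicas:
            v = r.read(key)
            if v is not None:
                return v
        return None

svc = ReplicatedService()
svc.put("config", "v1")
print(svc.get("config"))  # -> v1, even if some replicas are later lost
```

Because every replica holds the full state, the read path still succeeds after dropping a replica, which is the availability property the abstract refers to.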

Relevance:

100.00%

Publisher:

Abstract:

The implementation of a business intelligence (BI) system is a complex undertaking requiring considerable resources. Yet there is a limited authoritative set of critical success factors (CSFs) for management reference, because the BI market has been driven mainly by the IT industry and vendors. This research seeks to bridge the gap between academia and practitioners by investigating the CSFs influencing BI systems success. The study followed a two-stage qualitative approach. First, the authors used the Delphi method to conduct three rounds of studies and develop a CSFs framework crucial for BI systems implementation. Next, the framework and the associated CSFs were delineated through a series of case studies. The empirical findings substantiate the construct and applicability of the framework. More significantly, the research reveals that organisations which address the CSFs from a business orientation are more likely to achieve better results.

Relevance:

100.00%

Publisher:

Abstract:

Security and privacy have been major concerns when people build computer networks and systems. Any computer network or system must be trustworthy to avoid the risk of losing control and to retain confidence that it will not fail [1] (Jun Ho Huh, John Lyle, Cornelius Namiluko and Andrew Martin, "Managing application whitelists in trusted distributed systems", Future Generation Computer Systems 27(2), 2011, pp. 211–226). Trust is the key factor in enabling dynamic interaction and cooperation among various users, systems and services [2]. Trusted Computing aims at making computer networks, systems, and services available, predictable, traceable, controllable, assessable, sustainable, dependable, and security/privacy protectable. This special section focuses on issues related to trusted computing, such as trusted computing models and specifications; trusted, reliable and dependable systems; trustworthy services and applications; and trust standards and protocols.

Relevance:

100.00%

Publisher:

Abstract:

As enterprises grow and the need to share information across departments and business areas becomes more critical, companies are turning to integration to interconnect heterogeneous, distributed and autonomous systems. Whether the sales application needs to interface with the inventory application, or the procurement application needs to connect to an auction site, it seems that any application can be made better by integrating it with other applications. Integration between applications can, however, be troublesome, because applications may not have been designed and implemented with integration in mind. Regarding integration, two-tier software systems, composed of a database tier and a "front-end" (interface) tier, have shown limitations. To overcome them, three-tier systems were proposed in the literature. By adding a middle tier (referred to as middleware) between the database tier and the "front-end" tier (or simply the application), three main benefits emerge. First, dividing software systems into three tiers increases integration capabilities with other systems. Second, modifications to an individual tier may be carried out without necessarily affecting the other tiers and integrated systems. Third, as a consequence of the other two, less maintenance is required in the software system and in all integrated systems. Concerning software development in three tiers, this dissertation focuses on two emerging technologies, the Semantic Web and Service Oriented Architecture, combined with middleware.
Blending these two technologies with middleware resulted in the Swoat framework (Service and Semantic Web Oriented ArchiTecture), which leads to four synergistic advantages: (1) it allows the creation of loosely coupled systems, decoupling the database from the "front-end" tier and therefore reducing maintenance; (2) the database schema is transparent to "front-end" tiers, which are aware only of the information model (or domain model) that describes what data is accessible; (3) integration with other heterogeneous systems is enabled through services provided by the middleware; (4) the service request by the "front-end" tier focuses on 'what' data is needed, not on 'where' and 'how' it is stored, reducing application development time.
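A minimal sketch of the 'what, not where/how' idea: the front-end names a domain-model concept and the middleware resolves which table and columns satisfy it. The names here (DOMAIN_MAP, DATABASE, fetch) are hypothetical illustrations, not the actual Swoat API.

```python
# Middleware-side mapping from domain-model concepts to storage details.
DOMAIN_MAP = {
    "customer": {"table": "tbl_cust_v2", "columns": ["id", "name"]},
    "order": {"table": "tbl_ord", "columns": ["id", "total"]},
}

DATABASE = {  # stand-in for the database tier
    "tbl_cust_v2": [{"id": 1, "name": "Ana"}],
    "tbl_ord": [{"id": 7, "total": 42.0}],
}

def fetch(concept):
    """Front-end call: asks for *what* data; the schema stays hidden."""
    meta = DOMAIN_MAP[concept]
    rows = DATABASE[meta["table"]]
    return [{c: row[c] for c in meta["columns"]} for row in rows]

# The caller never sees the table name 'tbl_cust_v2', so the database
# schema can change without touching any front-end code.
print(fetch("customer"))
```

This is the loose coupling claimed in advantages (1) and (2): renaming a table only changes the middleware's mapping, not the front-end tiers.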

Relevance:

100.00%

Publisher:

Abstract:

The task of controlling urban traffic requires flexibility, adaptability and the handling of uncertain information spread through the intersection network. The use of fuzzy set concepts conveys these characteristics and improves system performance. This paper reviews a distributed traffic control system built upon a fuzzy distributed architecture previously developed by the authors. The emphasis of the paper is on the application of the system to control part of the Campinas downtown area. Simulation experiments considering several traffic scenarios were performed to verify the capabilities of the system in controlling a set of coupled intersections. The performance of the proposed system is compared with conventional traffic control strategies under the same scenarios. The results show that the distributed traffic control system outperforms conventional systems as far as average queue, average delay and maximum delay measures are concerned.

Relevance:

100.00%

Publisher:

Abstract:

An overview is given of the possibility of controlling the status of circuit breakers (CB) in a substation with the use of a knowledge base that relates some of the operation magnitudes, mixing status variables with time variables and fuzzy sets. It is shown that, even when not all the magnitudes to be controlled can be included in the analysis, it is possible to control the desired status while supervising some important magnitudes such as the voltage, power factor, and harmonic distortion, as well as the present status.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a technique for real-time crowd density estimation based on the texture of crowd images. In this technique, the current image from a sequence of input images is classified into a crowd density class. The classification is then corrected by a low-pass filter based on the crowd density classification of the last n images of the input sequence. The technique achieved 73.89% correct classification in a real-time application on a sequence of 9892 crowd images. Distributed processing was used to obtain real-time performance. © Springer-Verlag Berlin Heidelberg 2005.
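The temporal correction step can be sketched as follows. The paper does not specify the exact low-pass filter, so the majority vote over the last n classifications used here is an assumption chosen for illustration.

```python
from collections import Counter, deque

def smooth_classifications(raw_classes, n=5):
    """Correct each frame's raw density class using the last n frames.

    ASSUMPTION: the low-pass filter is modelled as a majority vote
    over a sliding window; the paper's actual filter may differ.
    """
    window = deque(maxlen=n)
    smoothed = []
    for c in raw_classes:
        window.append(c)
        # Corrected class = most frequent class in the recent window.
        smoothed.append(Counter(window).most_common(1)[0][0])
    return smoothed

# A single spurious 'high' reading is suppressed by its neighbours:
raw = ["low", "low", "high", "low", "low"]
print(smooth_classifications(raw, n=3))  # -> ['low', 'low', 'low', 'low', 'low']
```

The effect is exactly what the abstract describes: a one-frame misclassification cannot flip the reported density class, because its neighbours outvote it.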

Relevance:

100.00%

Publisher:

Abstract:

During gray cast iron cutting, a large share of the mechanical energy from the cutting forces is converted into heat. Considerable heat is generated principally in three areas: the shear zone, the rake face, and the clearance side of the cutting edge. Excessive heat causes undesirably high temperatures in the tool, which soften the tool and accelerate its wear and breakage. Nowadays, advanced ceramics are widely used in cutting tools. In this paper, a special Si3N4 composition was sintered, characterized, cut and ground to make SNGN120408 inserts, which were applied to machining gray cast iron of hardness 205 HB under dry cutting conditions on a computer-controlled lathe. Tool performance was analysed as a function of cutting forces, flank wear, temperature and roughness. Metal removal was carried out at three different cutting speeds (300 m/min, 600 m/min, and 800 m/min), while a cutting depth of 1 mm and a feed rate of 0.33 mm/rev were kept constant. As a result of the experiments, the lowest main cutting force, which depends on cutting speed, was 264 N at 600 m/min, while the highest main cutting force was 294 N at 300 m/min.

Relevance:

100.00%

Publisher:

Abstract:

Despite the abundant availability of protocols and applications for peer-to-peer file sharing, several drawbacks are still present in the field. Among the most notable is the lack of a simple and interoperable way to share information among independent peer-to-peer networks. Another drawback is that shared content can be accessed only by a limited number of compatible applications, making it inaccessible to other applications and systems. In this work we present a new approach for peer-to-peer data indexing, focused on the organization and retrieval of the metadata that describes the shared content. This approach results in a common and interoperable infrastructure, which provides transparent access to data shared on multiple data-sharing networks via a simple API. The proposed approach is evaluated using a case study, implemented as a cross-platform extension to the Mozilla Firefox browser, and demonstrates the advantages of such interoperability over conventional distributed data access strategies. © 2009 IEEE.
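The interoperability idea can be sketched as one metadata index that fans a query out to per-network adapters. The adapter/index structure and every name below are illustrative assumptions, not the paper's actual API.

```python
class NetworkAdapter:
    """One adapter per peer-to-peer network, exposing a uniform query."""
    def __init__(self, name, index):
        self.name = name
        self.index = index  # metadata records describing shared content

    def query(self, keyword):
        return [m for m in self.index if keyword in m["title"].lower()]

class MetadataIndex:
    """Single entry point: one search spans every registered network."""
    def __init__(self):
        self.adapters = []

    def register(self, adapter):
        self.adapters.append(adapter)

    def search(self, keyword):
        # Applications call this one API instead of speaking each
        # network's native protocol.
        results = []
        for a in self.adapters:
            for m in a.query(keyword):
                results.append({**m, "network": a.name})
        return results

idx = MetadataIndex()
idx.register(NetworkAdapter("net-a", [{"title": "Intro to P2P"}]))
idx.register(NetworkAdapter("net-b", [{"title": "P2P indexing"}]))
print(idx.search("p2p"))  # hits from both networks via one call
```

Adding support for a new sharing network then means writing one adapter, with no change to the applications that consume the index.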

Relevance:

100.00%

Publisher:

Abstract:

To simplify computer management, many system administrators are adopting advanced techniques to manage software configuration on grids, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. This paper discusses the feasibility of a distributed virtual machine environment, named FlexLab: a new approach to computer management that combines virtualization and distributed system architectures as the basis of a management system. FlexLab is able to extend the coverage of a computer management solution beyond client operating system limitations and also offers a convenient hardware abstraction, decoupling software from hardware and simplifying computer management. The results obtained in this work indicate that FlexLab is able to overcome the limitations imposed by the coupling between software and hardware, simplifying the management of homogeneous and heterogeneous grids. © 2009 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

Distributed Generators (DG) are generally modeled as PQ or PV buses in power flow studies. However, in order to integrate DG units into distribution systems and control their reactive power injection, it is necessary to know the operation mode and the type of connection to the system. This paper presents single-phase and three-phase mathematical models to integrate DG into power flow calculations for distribution systems, especially suited for Smart Grid calculations. If the DG operates in PV mode, each step of the power flow algorithm calculates the reactive power injection from the DG to the system needed to keep the bus voltage at a predefined level; if the DG operates in PQ mode, the power injection is considered a negative load. The method is tested on two well-known test systems, presenting single-phase results on an 85-bus system and three-phase results on the IEEE 34-bus test system. © 2011 IEEE.
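The PQ/PV distinction described above can be sketched outside any full power-flow solver: a PQ unit enters the bus injection as a fixed (negative-load) value, while a PV unit adjusts its reactive injection each iteration to pull the bus voltage toward its setpoint. The update gain and reactive limits below are illustrative assumptions, not the paper's model.

```python
def dg_injection(mode, p_dg, v_bus=None, v_set=None, q_prev=0.0,
                 gain=10.0, q_max=0.5):
    """Return (P, Q) injected by a DG unit for one power-flow iteration.

    All quantities are in per-unit; gain and q_max are assumed values.
    """
    if mode == "PQ":
        # Treated as a negative load: fixed P and Q injection.
        return p_dg, q_prev
    if mode == "PV":
        # Raise Q when the bus voltage is below the setpoint, lower it
        # when above, respecting the unit's reactive limits.
        q = q_prev + gain * (v_set - v_bus)
        q = max(-q_max, min(q_max, q))
        return p_dg, q
    raise ValueError(mode)

# PV bus sitting below its 1.0 p.u. setpoint: the unit raises Q.
p, q = dg_injection("PV", p_dg=1.0, v_bus=0.98, v_set=1.0, q_prev=0.1)
print(p, round(q, 3))
```

In a real solver this update would run inside each power-flow iteration, with the bus voltage recomputed after every Q adjustment; a PV unit that hits q_max is typically switched to PQ mode at the limit.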

Relevance:

100.00%

Publisher:

Abstract:

A significant amount of information stored in different databases around the world can be shared through peer-to-peer databases, yielding a large knowledge base without the need for large investments, because existing databases and infrastructure are reused. However, the structural characteristics of peer-to-peer networks make the process of finding such information complex. Moreover, these databases are often heterogeneous in their schemas but semantically similar in their content. A good peer-to-peer database system should allow the user to access information from databases scattered across the network and receive only the information that really relates to the topic of interest. This paper proposes using ontologies in peer-to-peer database queries to represent the semantics inherent in the data. The main contribution of this work is to enable integration between heterogeneous databases, improve the performance of such queries, and use the Ant Colony optimization algorithm to solve the problem of locating information on peer-to-peer networks, yielding an 18% improvement in results. © 2011 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

In a peer-to-peer network, the nodes interact with each other by sharing resources, services and information. Many applications have been developed using such networks, one class of which is peer-to-peer databases. Peer-to-peer database systems allow the sharing of unstructured data and can integrate data from several sources without the need for large investments, because existing repositories are reused. However, the high flexibility and dynamicity of the network, as well as the absence of centralized management of information, make the process of locating information among the various participants in the network complex. In this context, this paper presents original contributions through a proposed architecture for a routing system that uses the Ant Colony algorithm to optimize the search for desired information, supported by ontologies that add semantics to shared data. This enables integration among heterogeneous databases while seeking to reduce message traffic on the network without losses in the number of responses, confirmed by an improvement of 22.5% in this number. © 2011 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

The development of new technologies that use peer-to-peer networks grows every day, with the aim of meeting the need to share information, resources and database services around the world. Among them are peer-to-peer databases, which take advantage of peer-to-peer networks to manage distributed knowledge bases, allowing the sharing of information that is semantically related but syntactically heterogeneous. However, given the structural characteristics of these networks, it is a challenge to ensure efficient searching for information without compromising the autonomy of each node and the flexibility of the network. On the other hand, some studies propose the use of ontology semantics to assign a standardized categorization to information. The main original contribution of this work is to approach this problem with a proposal for query optimization supported by the Ant Colony algorithm and classification through ontologies. The results show that this strategy enables semantic support for searches in peer-to-peer databases, aiming to expand the results without compromising network performance. © 2011 IEEE.