935 results for Enterprise application integration (Computer systems)
Abstract:
This paper addresses the routing of a cable cycle through the available routes in a building in order to link a set of devices in the most reasonable way. Despite the similarities to other NP-hard routing problems, the goal is not only to minimize the cost (length of the cycle) but also to increase the reliability of the path (in case of a cable cut), which is assessed by a risk factor. Since there is often a trade-off between the risk and length factors, a criterion for ranking candidates and deciding on the most reasonable solution is defined. A set of techniques is proposed to perform an efficient and exact search among candidates. A novel graph is introduced to reduce the search space and to steer the search toward feasible and desirable solutions. Moreover, an admissible heuristic length estimate enables early detection of partial cycles that lead to unreasonable solutions. The results show that the method provides solutions that are both technically and financially reasonable. Furthermore, the proposed techniques are shown to be very efficient in reducing the computational time of the search to a reasonable amount.
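The abstract does not spell out the search procedure; the sketch below is an illustrative (not the authors') best-first search over partial cycles in Python, assuming metric distances: partial cycles are pruned with an admissible lower bound on the remaining length, and complete cycles are ranked by a weighted combination of length and risk (the weight alpha, the length budget max_len, and the dist/risk data structures are assumptions).

    # Hypothetical sketch: best-first search over partial cable cycles, pruning with
    # an admissible lower bound on remaining length and ranking complete cycles by a
    # combined length/risk criterion. Names, weights and data layout are assumptions.
    import heapq, itertools

    def lower_bound_remaining(partial, targets, dist):
        """Admissible estimate: under metric (triangle-inequality) distances, the
        remaining tour is at least as long as the largest single hop from the last
        node to an unvisited target (or the closing hop back to the start)."""
        last = partial[-1]
        remaining = [t for t in targets if t not in partial]
        if not remaining:
            return dist[last][partial[0]]          # only the closing edge is left
        return max(dist[last][t] for t in remaining)

    def search_cycles(start, targets, dist, risk, alpha=0.7, max_len=float("inf")):
        """targets includes start; dist[u][v] is a length, risk[(u, v)] a risk score;
        partial cycles whose length bound exceeds max_len are considered unreasonable."""
        counter = itertools.count()                # heap tie-breaker
        heap = [(0.0, next(counter), [start], 0.0, 0.0)]
        ranked = []
        while heap:
            bound, _, path, length, r = heapq.heappop(heap)
            if bound > max_len:
                continue                           # early rejection of an unreasonable partial cycle
            if len(path) == len(targets):
                total = length + dist[path[-1]][start]      # close the cycle
                ranked.append((alpha * total + (1 - alpha) * r, path))
                continue
            for nxt in targets:
                if nxt in path:
                    continue
                new_path = path + [nxt]
                new_len = length + dist[path[-1]][nxt]
                new_risk = r + risk.get((path[-1], nxt), 0.0)
                est = new_len + lower_bound_remaining(new_path, targets, dist)
                heapq.heappush(heap, (est, next(counter), new_path, new_len, new_risk))
        return sorted(ranked)                      # best-ranked (lowest score) candidates first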
Abstract:
The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any 'thing', whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representations. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve interoperability in the Provenance Challenge series.
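For readers unfamiliar with the model, a minimal sketch (in Python) of the core OPM entities and the causal edge names used in v1.1 follows; it illustrates the node and edge types and one simple upstream inference, not the specification itself.

    # Minimal sketch of the core Open Provenance Model entities and causal
    # dependencies (artifacts, processes, agents and the five edge names of OPM v1.1).
    # This is an illustrative data structure, not the full specification.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Node:
        id: str
        kind: str            # "Artifact", "Process" or "Agent"

    @dataclass(frozen=True)
    class Edge:
        relation: str        # "used", "wasGeneratedBy", "wasTriggeredBy",
                             # "wasDerivedFrom" or "wasControlledBy"
        effect: str          # id of the dependent node
        cause: str           # id of the node it depends on

    @dataclass
    class OPMGraph:
        nodes: dict = field(default_factory=dict)
        edges: list = field(default_factory=list)

        def add_node(self, node_id, kind):
            self.nodes[node_id] = Node(node_id, kind)

        def assert_dependency(self, relation, effect, cause):
            self.edges.append(Edge(relation, effect, cause))

        def causes_of(self, node_id):
            """A simple inference over the representation: walk causal edges upstream."""
            direct = [e.cause for e in self.edges if e.effect == node_id]
            return direct + [c for d in direct for c in self.causes_of(d)]

    # Example: a result artifact generated by a process that used an input artifact.
    g = OPMGraph()
    g.add_node("input.csv", "Artifact")
    g.add_node("analysis", "Process")
    g.add_node("result.pdf", "Artifact")
    g.assert_dependency("used", "analysis", "input.csv")
    g.assert_dependency("wasGeneratedBy", "result.pdf", "analysis")
    print(g.causes_of("result.pdf"))   # ['analysis', 'input.csv']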
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise an agreed vocabulary by which resources can be described and, in particular, their attribution asserted (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process's execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
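The abstract does not reproduce the mapping table; the fragment below is a hypothetical illustration of its general shape, turning attribution-style Dublin Core statements into OPM-style nodes and causal edges. The specific correspondences shown are illustrative assumptions, not the paper's full mapping.

    # Hypothetical fragment: map attribution-style Dublin Core terms for one resource
    # into OPM-style nodes and causal edges. The correspondences are illustrative only.
    def dc_to_opm(resource, dc_record):
        """dc_record: dict of Dublin Core terms for one resource, e.g. 'creator', 'source'."""
        nodes = {resource: "Artifact"}
        edges = []
        if "creator" in dc_record:
            process = f"creation-of-{resource}"        # introduce the implied creation process
            nodes[process] = "Process"
            nodes[dc_record["creator"]] = "Agent"
            edges.append(("wasGeneratedBy", resource, process))
            edges.append(("wasControlledBy", process, dc_record["creator"]))
        if "source" in dc_record:
            nodes[dc_record["source"]] = "Artifact"
            edges.append(("wasDerivedFrom", resource, dc_record["source"]))
        return nodes, edges

    nodes, edges = dc_to_opm("report.pdf", {"creator": "A. Author", "source": "raw-data.csv"})
    print(edges)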
Abstract:
Attacks on devices connected to networks are one of the main problems related to the confidentiality of sensitive data and the correct functioning of computer systems. In spite of the availability of tools and procedures that harden systems or prevent the occurrence of security incidents, network devices are successfully attacked using strategies applied in previous events. The lack of knowledge about the scenarios in which these attacks occurred contributes effectively to the success of new attacks. The development of a tool that makes this kind of information available is, therefore, of great relevance. This work presents a support system for corporate security management that stores, retrieves and helps construct attack scenarios and related information. If an incident occurs in a corporation, an expert must access the system to store the specific attack scenario. This scenario, made available through controlled access, must be analyzed so that effective decisions or actions can be taken for similar cases. Besides the strategy used by the attacker, attack scenarios also expose the vulnerabilities of the devices involved. Access to this kind of information contributes to an increased security level of a corporation's network devices and a decreased response time to occurring incidents.
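The abstract does not specify the system's data model; the sketch below is a hypothetical illustration of how attack scenarios might be stored and retrieved so that precedents for a new incident can be found. Field names and the keyword-based retrieval are assumptions, not the system's actual design.

    # Hypothetical sketch: storing and retrieving attack scenarios for reuse.
    from dataclasses import dataclass, field

    @dataclass
    class AttackScenario:
        title: str
        strategy: str                      # how the attacker proceeded
        exploited_vulnerabilities: list    # e.g. CVE identifiers or descriptions
        affected_devices: list
        countermeasures: list = field(default_factory=list)

    class ScenarioRepository:
        def __init__(self):
            self._scenarios = []

        def store(self, scenario):
            """Called by the expert after an incident has been analysed."""
            self._scenarios.append(scenario)

        def find(self, keyword):
            """Retrieve stored scenarios mentioning the keyword, for similar cases."""
            kw = keyword.lower()
            return [s for s in self._scenarios
                    if kw in s.strategy.lower()
                    or any(kw in v.lower() for v in s.exploited_vulnerabilities)]

    repo = ScenarioRepository()
    repo.store(AttackScenario("SSH brute force on edge router",
                              strategy="dictionary attack against exposed SSH service",
                              exploited_vulnerabilities=["weak credentials"],
                              affected_devices=["router-01"],
                              countermeasures=["rate limiting", "key-based authentication"]))
    print([s.title for s in repo.find("ssh")])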
Abstract:
The control of industrial processes has become increasingly complex due to the variety of factory devices, quality requirements and market competition. Such complexity requires a large amount of data to be handled by the three levels of process control: field devices, control systems and management software. Using data effectively at each of these levels is extremely important to industry. Many of today's industrial computer systems consist of distributed software systems written in a wide variety of programming languages and developed for specific platforms, so more and more companies make significant investments to maintain or even rewrite their systems for different platforms. Furthermore, it is rare for a software system to work in complete isolation. In industrial automation it is common for software to have to interact with other systems on different machines, even systems written in different languages. Thus, interoperability is not just a long-term challenge, but also a requirement of the current context of industrial software production. This work proposes a middleware solution for communication over web services and presents a use case applying the developed solution to an integrated system for industrial data capture, allowing such data to be made available in a simplified, platform-independent way across the network.
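The middleware's actual interface is not given in the abstract; as a stand-in, the sketch below uses Python's standard-library XML-RPC (an HTTP/XML remote-procedure-call style of web service) to show how captured field data could be exposed in a platform-independent way. The service name and fields are hypothetical.

    # Stand-in sketch: expose captured plant-floor data through an XML-RPC web
    # service so clients on any platform or language can store and query it.
    from xmlrpc.server import SimpleXMLRPCServer

    readings = []   # in-memory store of captured field-device data

    def store_reading(device_id, variable, value, timestamp):
        """Called by the data-capture layer for each new measurement."""
        readings.append({"device": device_id, "variable": variable,
                         "value": value, "timestamp": timestamp})
        return True

    def query_readings(device_id):
        """Called by control- or management-level clients, whatever their platform."""
        return [r for r in readings if r["device"] == device_id]

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(store_reading)
    server.register_function(query_readings)
    # server.serve_forever()   # any XML-RPC client (Java, C#, C++...) can now call these methods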
Abstract:
This paper aims to present, through a set of guidelines, how to apply the conservative distributed simulation paradigm (the CMB protocol) to develop efficient applications. Using these guidelines, even a user with little experience in distributed simulation and computer architecture can obtain good performance from distributed simulations using conservative synchronization protocols for parallel processes. The set of guidelines focuses on a specific application domain, the performance evaluation of computer systems, considering models with coarse granularity and few logical processes, running on two platforms: parallel (a high-performance communication environment) and distributed (a low-performance communication environment).
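As background for the guidelines, the sketch below illustrates the conservative (Chandy-Misra-Bryant) rule they build on: a logical process only executes events up to the minimum clock of its input channels, and it sends null messages carrying its clock plus lookahead to keep neighbours from blocking. The process structure, lookahead and event handling are assumptions, not the paper's models.

    # Illustrative sketch of the conservative (CMB) synchronization rule.
    import heapq

    class LogicalProcess:
        def __init__(self, name, lookahead):
            self.name = name
            self.lookahead = lookahead
            self.clock = 0.0
            self.events = []                  # local future-event list (min-heap)
            self.channel_clocks = {}          # last timestamp seen on each input channel

        def receive(self, sender, timestamp, payload=None):
            self.channel_clocks[sender] = timestamp
            if payload is not None:           # null messages carry no payload
                heapq.heappush(self.events, (timestamp, payload))

        def safe_horizon(self):
            """Events up to this time can be executed without causality risk."""
            return min(self.channel_clocks.values(), default=float("inf"))

        def step(self, neighbours):
            horizon = self.safe_horizon()
            while self.events and self.events[0][0] <= horizon:
                self.clock, payload = heapq.heappop(self.events)
                print(f"{self.name} executes {payload!r} at t={self.clock}")
            # when blocked, promise neighbours nothing will arrive before clock + lookahead
            for n in neighbours:
                n.receive(self.name, self.clock + self.lookahead)   # null message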
Abstract:
This paper is based on the development and experimental analysis of a DCM interleaved boost converter suitable for application in the traction systems of electric vehicles driven by electric motors (trolleybuses), which are powered by urban DC or AC distribution networks. This front-end structure is capable of providing significant improvements in trolleybus systems and in the cost and efficiency of the urban distribution network. The architecture of the proposed converter is composed of five boost power cells in an interleaved connection, operating in discontinuous conduction mode. Furthermore, the converter can operate as an AC-DC converter or as a DC-DC converter, providing the proper DC output voltage range required by DC or AC adjustable-speed drives. Therefore, when supplied by single-phase AC distribution networks and operating as an AC-DC converter, it is capable of providing a high power factor and reduced harmonic distortion in the input current, complying with the restrictions imposed by the IEC 61000-3-4 standard. The digital controller has been implemented using a low-cost FPGA and developed entirely in the hardware description language VHDL with fixed-point arithmetic. Two control strategies are evaluated with respect to the input-current restrictions imposed by IEC 61000-3-4: regular PWM modulation and a current-correction PWM modulation. In order to verify the feasibility and performance of the proposed system, experimental results from a 15 kW low-power-scale prototype are presented, operating under DC and AC conditions.
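The interleaved connection implies that the five boost cells are switched with carriers shifted by one fifth of the switching period; the sketch below (in Python rather than the paper's VHDL) illustrates that phase-shifted PWM idea. The counter width and duty value are assumptions, not the paper's figures.

    # Illustrative sketch: gate signals for five interleaved cells from phase-shifted
    # fixed-point PWM counters. Resolution and duty value are hypothetical.
    N_CELLS = 5
    COUNTER_MAX = 2**10          # 10-bit carrier counter, assumed resolution

    def gate_signals(tick, duty):
        """Return the on/off state of each cell's switch at a given clock tick.
        duty is the commanded duty cycle in [0, 1), common to all cells."""
        threshold = int(duty * COUNTER_MAX)
        states = []
        for cell in range(N_CELLS):
            phase = (tick + cell * COUNTER_MAX // N_CELLS) % COUNTER_MAX
            states.append(phase < threshold)
        return states

    # At 40 % duty the five cells conduct in a staggered pattern, which is what
    # reduces the input-current ripple seen by the distribution network.
    print(gate_signals(tick=0, duty=0.4))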
Abstract:
With the increase in processing power, storage and the several kinds of communication available, such as Bluetooth, infrared and wireless networks, mobile devices are no longer single-purpose devices and have become tools with various functionalities. In the business field, the benefits that these kinds of devices can offer are considerable, because their portability allows tasks that previously could only be performed within the work environment to be performed anywhere. In the context of oil exploration companies, mobile applications allow quick action to be taken by petroleum engineers and technicians, using their mobile devices to avoid potential catastrophes such as an unexpected stop or breakdown of important equipment. In general, the configuration of oil extraction equipment is performed in the work environment using desktop computer systems. After the configuration is obtained, an employee goes to the equipment to be configured and applies the modifications obtained on the desktop system. This management process for oil extraction equipment takes a long time and does not guarantee maintenance in time to avoid problems. With mobile devices, management and maintenance of oil extraction equipment can be performed more quickly, since the engineer or technician can perform the configuration at the time and place where the request arises, for example, near the oil well where the equipment is located. The wide variety of mobile devices makes developing mobile applications difficult, since for one application to work on several types of device it must be changed for each specific type of device, which makes development quite costly. This work defines and implements a software product line for designing sucker-rod pumping systems on mobile devices. This software product line, called BMMobile, aims to produce products capable of performing the calculations that determine the possible equipment configurations in sucker-rod pumping design, and to manage the variabilities of the various products that can be generated. In addition, this work performs two evaluations: the first verifies the consistency of the products produced by the software product line, and the second verifies the reuse of some products generated by the developed SPL.
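BMMobile's actual features are not listed in the abstract; the sketch below is a hypothetical illustration of how a software product line can represent variability as a feature model and check a product's validity before generation. All feature names are assumptions.

    # Hypothetical feature model for an SPL: mandatory and optional features plus an
    # alternative group, with a validity check over a selected product.
    MANDATORY = {"pump_design_calculations", "unit_conversion"}
    OPTIONAL = {"well_visualization", "report_export", "offline_mode"}
    ALTERNATIVES = {"target_platform": {"android", "java_me"}}   # exactly one required

    def valid_product(selected):
        """A product must include every mandatory feature, choose exactly one
        alternative per group, and use no feature outside the model."""
        known = MANDATORY | OPTIONAL | set().union(*ALTERNATIVES.values())
        if not MANDATORY <= selected or not selected <= known:
            return False
        return all(len(selected & group) == 1 for group in ALTERNATIVES.values())

    print(valid_product({"pump_design_calculations", "unit_conversion", "android"}))   # True
    print(valid_product({"pump_design_calculations", "android", "java_me"}))           # False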
Abstract:
The importance of non-functional requirements for computer systems is increasing. Satisfying these requirements requires special attention to the software architecture, since an unsuitable architecture introduces greater complexity in addition to the intrinsic complexity of the system. Some studies have shown that, although requirements engineering and software architecture activities act on different aspects of development, they must be performed iteratively and intertwined to produce satisfactory software systems. The STREAM process presents a systematic approach to reduce the gap between requirements and architecture development, emphasizing the functional requirements but using the non-functional requirements in an ad hoc way. However, non-functional requirements typically influence the system as a whole. STREAM uses architectural patterns to refine the software architecture, and these patterns are chosen using non-functional requirements in an ad hoc way. This master's thesis presents a process that improves STREAM by making the choice of architectural patterns systematic, using non-functional requirements to guide the refinement of the software architecture.
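The thesis's selection process is not detailed in the abstract; the sketch below shows one hypothetical way a pattern choice could be made systematic, by scoring candidate architectural patterns against prioritized non-functional requirements. The pattern names, NFRs and weights are illustrative only.

    # Hypothetical sketch: rank architectural patterns by how well they support the
    # stakeholders' prioritized non-functional requirements.
    PATTERN_SUPPORT = {
        "Layers":      {"maintainability": 3, "portability": 2, "performance": 1},
        "Broker":      {"interoperability": 3, "scalability": 2, "performance": 1},
        "Microkernel": {"extensibility": 3, "portability": 2},
    }

    def rank_patterns(nfr_priorities):
        """nfr_priorities: dict NFR -> weight elicited from stakeholders."""
        scores = {
            pattern: sum(nfr_priorities.get(nfr, 0) * s for nfr, s in support.items())
            for pattern, support in PATTERN_SUPPORT.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank_patterns({"interoperability": 5, "performance": 2}))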
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work presents a simplified architecture of a neurofuzzy controller for general-purpose applications that tries to minimize the processing used in the several stages of fuzzy modeling of systems. The basic procedures of fuzzification and defuzzification are simplified as much as possible, while the inference procedures are computed in a dedicated way. The simplified architecture allows fast and easy configuration of the neurofuzzy controller, and the structuring of the rules that define the control actions is automatic. The controller's limits and performance are standardized and the control actions are calculated in advance. As an application, industrial fluid flow control systems are considered.
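The controller's equations are not given in the abstract; the sketch below is a minimal illustration in the spirit described: triangular fuzzification of the error, a small rule base, and a weighted-average defuzzification over pre-computed control actions. All membership limits and action values are assumptions.

    # Minimal sketch of a simplified fuzzy control step: fuzzify the error with
    # triangular sets, then defuzzify as a weighted average of pre-computed actions.
    def tri(x, a, b, c):
        """Triangular membership function with feet at a and c and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def control_action(error):
        # fuzzification: degree of membership in three error sets (assumed limits)
        mu = {
            "negative": tri(error, -2.0, -1.0, 0.0),
            "zero":     tri(error, -1.0,  0.0, 1.0),
            "positive": tri(error,  0.0,  1.0, 2.0),
        }
        # rule base: each fuzzy set maps to a pre-computed crisp action (valve change)
        action = {"negative": -0.5, "zero": 0.0, "positive": +0.5}
        # simplified defuzzification: weighted average of the pre-computed actions
        num = sum(mu[s] * action[s] for s in mu)
        den = sum(mu.values()) or 1.0
        return num / den

    print(control_action(0.4))   # small positive correction to the flow valve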
Abstract:
A method for studying the technical and economic feasibility of absorption refrigeration systems in compact cogenerators is presented. The system studied consists of an internal combustion engine, an electric generator and a heat exchanger to recover residual heat from the refrigeration water and exhaust gases. As an application, a computer program simulates the cogeneration system in a building which already has 75 kW of installed electric power. The maximum electric and refrigeration demands are 45 kW and 76 kW respectively. This study simulates the system performance, utilizing diesel oil, sugar cane alcohol and natural gas as possible fuels. (C) 1997 Elsevier B.V. Ltd.
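The abstract reports maximum demands of 45 kW electric and 76 kW refrigeration; the back-of-the-envelope balance below illustrates the kind of calculation such a feasibility program must perform. The engine efficiency, recoverable-heat fraction and absorption-chiller COP are assumed round numbers, not the paper's results.

    # Illustrative energy balance for a compact cogenerator with an absorption chiller.
    ELECTRIC_DEMAND_KW = 45.0
    REFRIGERATION_DEMAND_KW = 76.0

    ENGINE_ELECTRIC_EFF = 0.30        # assumed fraction of fuel energy converted to electricity
    RECOVERABLE_HEAT_FRACTION = 0.50  # assumed fraction recoverable from cooling water and exhaust
    ABSORPTION_COP = 0.7              # assumed single-effect absorption chiller COP

    fuel_power_kw = ELECTRIC_DEMAND_KW / ENGINE_ELECTRIC_EFF       # fuel input to meet the electric load
    recovered_heat_kw = fuel_power_kw * RECOVERABLE_HEAT_FRACTION  # heat available to the chiller
    cooling_kw = recovered_heat_kw * ABSORPTION_COP                # refrigeration produced

    print(f"Fuel input: {fuel_power_kw:.0f} kW")
    print(f"Cooling from recovered heat: {cooling_kw:.1f} kW of {REFRIGERATION_DEMAND_KW} kW demanded")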
Abstract:
The communication between user and software is a basic stage in any interaction system project. In interactive systems, this communication is established by means of a graphical interface, whose objective is to supply a visual representation of the main entities and functions present in the virtual environment. New ways of interacting with computational systems have been narrowing the gap between man and computer, and therefore enhancing usability. The objective of this paper, therefore, is to present a proposal for a non-conventional user interface library called ARISupport, which gives ARToolKit application developers an opportunity to create simple GUI interfaces, and which provides some of the functionality used in Augmented Reality systems. © Springer-Verlag Berlin Heidelberg 2005.
Abstract:
This paper presents a technique for real-time crowd density estimation based on textures of crowd images. In this technique, the current image from a sequence of input images is classified into a crowd density class. The classification is then corrected by a low-pass filter based on the crowd density classification of the last n images of the input sequence. The technique achieved 73.89% correct classification in a real-time application on a sequence of 9892 crowd images. Distributed processing was used in order to obtain real-time performance. © Springer-Verlag Berlin Heidelberg 2005.
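The texture classifier itself is not reproduced here; the sketch below shows only the temporal correction step described, smoothing the per-frame class with a simple low-pass filter (an average of the last n class indices rounded back to a class). The class names and n are assumptions.

    # Illustrative sketch of the temporal correction over the last n classifications.
    from collections import deque

    CLASSES = ["very_low", "low", "moderate", "high", "very_high"]

    class DensitySmoother:
        def __init__(self, n=5):
            self.history = deque(maxlen=n)     # indices of the last n classifications

        def update(self, raw_class):
            """raw_class: label produced by the texture classifier for the current frame."""
            self.history.append(CLASSES.index(raw_class))
            smoothed_index = round(sum(self.history) / len(self.history))
            return CLASSES[smoothed_index]

    smoother = DensitySmoother(n=5)
    for frame_class in ["high", "high", "very_high", "high", "moderate", "high"]:
        print(smoother.update(frame_class))    # isolated outliers are filtered out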
Abstract:
Despite the abundant availability of protocols and applications for peer-to-peer file sharing, several drawbacks are still present in the field. Among the most notable is the lack of a simple and interoperable way to share information among independent peer-to-peer networks. Another drawback is that shared content can be accessed only by a limited number of compatible applications, making it inaccessible to other applications and systems. In this work we present a new approach to peer-to-peer data indexing, focused on the organization and retrieval of the metadata that describes the shared content. This approach results in a common and interoperable infrastructure, which provides transparent access to data shared on multiple data-sharing networks via a simple API. The proposed approach is evaluated using a case study, implemented as a cross-platform extension to the Mozilla Firefox browser, and demonstrates the advantages of such interoperability over conventional distributed data access strategies. © 2009 IEEE.
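The API is not given in the abstract; the sketch below is a hypothetical illustration of what a simple metadata index spanning several sharing networks might look like: each entry records descriptive metadata plus a network-specific locator, so applications can search across networks without speaking each protocol.

    # Hypothetical sketch of a common metadata index over multiple sharing networks.
    class MetadataIndex:
        def __init__(self):
            self.entries = []

        def publish(self, title, content_hash, network, locator, **extra):
            """Register metadata about a shared item from any participating network."""
            self.entries.append({"title": title, "hash": content_hash,
                                 "network": network, "locator": locator, **extra})

        def query(self, keyword):
            """Transparent search: results may come from different sharing networks."""
            kw = keyword.lower()
            return [e for e in self.entries if kw in e["title"].lower()]

    index = MetadataIndex()
    index.publish("Ubuntu 9.04 ISO", "ab12...", network="BitTorrent", locator="magnet:?xt=urn:btih:...")
    index.publish("Ubuntu 9.04 ISO", "ab12...", network="Gnutella",  locator="gnutella://...")
    print([e["network"] for e in index.query("ubuntu")])    # ['BitTorrent', 'Gnutella']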