890 results for Riemannian metrics
Abstract:
This paper presents a new multi-model identification technique based on ANFIS for nonlinear systems. The technique uses a Takagi-Sugeno fuzzy structure whose consequents are local linear models representing the system at different operating points, and whose antecedents are membership functions adjusted during the learning phase of the neuro-fuzzy ANFIS technique. The models representing the system at the different operating points can be found with linearization techniques such as the Least Squares method, which is robust to noise and simple to apply. The fuzzy system is responsible for indicating, through the membership functions, the proportion of each model that should be used. The membership functions can be adjusted by ANFIS using neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated, defining the contribution of each model for every possible system input. In multi-model approaches, the definition of each model's contribution is known as the metric and, since this work is based on ANFIS, it is here called the ANFIS metric. The ANFIS metric is thus used to interpolate the various models, composing the system to be identified. Unlike traditional ANFIS, the proposed technique represents the system in several well-defined regions by unaltered local models whose weighted activation is given by the membership functions. The selection of regions for applying the Least Squares method is performed manually, from graphical analysis of the system behavior or from the physical characteristics of the plant. This selection serves as the basis for defining the linear models and generating the initial configuration of the membership functions. The experiments are conducted in a teaching tank with multiple sections, designed and built to highlight the characteristics of the technique. The results illustrate the performance achieved in the identification task using different ANFIS configurations, comparing the developed technique with several simple-metric models and with the NNARX technique, also adapted for identification.
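To make the interpolation idea concrete, here is a minimal sketch (not the paper's code) of a Takagi-Sugeno style blend of local linear models, using Gaussian membership functions; the centers, widths and local model parameters are illustrative placeholders standing in for values found by least squares and tuned by ANFIS:

```python
import numpy as np

centers = np.array([0.2, 0.5, 0.8])      # operating regions (assumed)
sigmas  = np.array([0.15, 0.15, 0.15])   # membership widths (assumed)
a = np.array([1.0, 0.6, 0.3])            # local model gains (e.g. from least squares)
b = np.array([0.0, 0.2, 0.5])            # local model offsets

def ts_output(u):
    """Blend local linear models y_i = a_i*u + b_i by membership degree."""
    w = np.exp(-0.5 * ((u - centers) / sigmas) ** 2)  # membership degrees
    w = w / w.sum()                                   # normalized activation
    return np.sum(w * (a * u + b))                    # weighted local models

print(ts_output(0.35))  # output interpolated between the first two regions
```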
Abstract:
Most current solutions for monitoring onshore Oil and Gas environments are based on wireless technology. However, these solutions rely on outdated technological configurations, mainly because they use analog radios and inefficient communication topologies. Solutions based on digital radios, on the other hand, can be more efficient in terms of energy consumption, security and fault tolerance. Thus, this paper evaluates whether Wireless Sensor Networks, a communication technology based on digital radios, are adequate for monitoring onshore Oil and Gas wells. Packet delivery ratio, energy consumption, communication delay and routing techniques applied to a mesh topology are used as metrics to validate the proposal under the different routing techniques, using the NS-2 network simulation tool.
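As an illustration of how such metrics are typically computed from simulation output, here is a small sketch over an assumed per-packet record format (not an actual NS-2 trace parser; the records are fabricated for illustration):

```python
# Each entry: (send_time_s, recv_time_s or None if lost, energy_joules).
packets = [(0.00, 0.12, 0.9e-3), (0.10, None, 1.1e-3), (0.20, 0.31, 0.8e-3)]

delivered = [p for p in packets if p[1] is not None]
pdr = len(delivered) / len(packets)                        # packet delivery ratio
mean_delay = sum(r - s for s, r, _ in delivered) / len(delivered)
energy = sum(e for _, _, e in packets)                     # total consumption

print(f"PDR={pdr:.2%}  delay={mean_delay*1e3:.1f} ms  energy={energy*1e3:.2f} mJ")
```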
Abstract:
Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences include economic losses, environmental damage or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can also be easily adapted to the maintenance and expansion stages of a network. The methodology uses graph theory and the fault tree formalism to automatically create, from a given industrial wireless network topology, an analytical model on which dependability can be evaluated. The supported evaluation metrics are reliability, availability, MTTF (mean time to failure), device importance measures, redundancy aspects and common cause failures. It must be emphasized that the proposal is independent of any particular tool for quantitatively evaluating the target metrics; however, for validation purposes a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm for generating minimal cut sets, originally applied in graph theory, was adapted to the fault tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster and mesh topologies. Scenarios with common cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To verify the scalability requirements, the performance of the methodology was analyzed in different scenarios, and the results show the applicability of the proposal to networks typically found in industrial environments.
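For intuition on how a fault tree expressed through minimal cut sets yields such metrics, here is a hedged sketch; the device names, failure rates and cut sets are illustrative, and independent exponential lifetimes plus a union-bound approximation are assumed:

```python
import math

fail_rate = {"gw": 1e-5, "ap1": 5e-5, "ap2": 5e-5, "s1": 1e-4}  # failures/hour
cut_sets = [{"gw"}, {"ap1", "ap2"}, {"s1"}]  # minimal cut sets (example topology)

def unreliability(t):
    """Probability the system has failed by time t (hours)."""
    q = {d: 1 - math.exp(-lam * t) for d, lam in fail_rate.items()}
    # Rare-event (union bound) approximation over the minimal cut sets.
    return min(1.0, sum(math.prod(q[d] for d in cs) for cs in cut_sets))

print(unreliability(8760))  # probability of system failure within one year
```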
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Several approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method requires only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help choose the correct threshold. Several analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed, and a new one based on the concept of negentropy is proposed. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming at an expression for the probability that a point belongs to each class. Experiments with several values of Na and dt are performed on test sets, and the results are analyzed in order to study the robustness of the method and to derive heuristics for choosing the correct threshold. Aspects of information theory applied to the calculation of divergences are also explored, in particular the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
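A minimal sketch of the core idea follows, assuming k-means as the vector quantizer and Euclidean distance as the dissimilarity (the work itself also studies divergence-based measures); Na, dt and the data are illustrative:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .3, (50, 2)), rng.normal(3, .3, (50, 2))])

Na, dt = 8, 1.0                        # auxiliary clusters and linkage threshold
centroids, _ = kmeans2(X, Na, seed=0, minit="points")

parent = list(range(Na))               # union-find over auxiliary clusters
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i

for i in range(Na):
    for j in range(i + 1, Na):
        if np.linalg.norm(centroids[i] - centroids[j]) < dt:
            parent[find(i)] = find(j)  # link auxiliary clusters closer than dt

classes = {find(i) for i in range(Na)}
print("number of classes found:", len(classes))  # 2 expected for this data
```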
Abstract:
This work presents experimental studies of VoIP connections over WiFi 802.11b networks with handoff. Indoor and outdoor network experiments were carried out to measure the QoS parameters delay, throughput, jitter and packet loss. The performance parameters are obtained with the software tools Ekiga, Iperf and Wimanager, which provide, respectively, VoIP connection simulation, network traffic generation, and acquisition of the throughput, jitter and packet loss metrics. The average delay is obtained from the measured throughput and the concept of packet virtual transmission time. The experimental data are validated against the QoS level for each metric that the specialized literature accepts as adequate.
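For reference, jitter for VoIP traffic is commonly reported with the interarrival-jitter estimator of RFC 3550; a small sketch with illustrative timestamps:

```python
send_times = [0.00, 0.02, 0.04, 0.06]    # packet send instants (s)
recv_times = [0.05, 0.075, 0.09, 0.118]  # packet receive instants (s)

jitter = 0.0
prev_transit = None
for s, r in zip(send_times, recv_times):
    transit = r - s
    if prev_transit is not None:
        d = abs(transit - prev_transit)
        jitter += (d - jitter) / 16.0    # RFC 3550 exponential smoothing
    prev_transit = transit

print(f"jitter ~ {jitter*1e3:.2f} ms")
```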
Abstract:
The evolution of automation in recent years has made possible the continuous monitoring of industrial plant processes. With this advance, the amount of information that automation systems must handle has increased significantly. The alarms generated by monitoring equipment are a major contributor to this increase, and such equipment is usually deployed in industrial plants without a formal methodology, which leads to an increase in the number of alarms generated, overloading the alarm system and, therefore, the operators of these plants. In this context, alarm management work arises with the objective of defining a formal methodology for installing new equipment and detecting problems in existing configurations. This thesis proposes a set of metrics for evaluating already deployed alarm systems, so that the health of a system can be determined by analyzing the proposed indices and comparing them with the parameters defined in alarm management technical standards. In addition, the metrics can track alarm management work, verifying whether it is improving the quality of the alarm system. To validate the proposed metrics, data from actual process plants in the petrochemical industry were used.
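Two indices commonly found in the alarm management standards the text refers to (e.g., EEMUA 191 and ISA-18.2) are the average alarm rate per 10-minute interval and the fraction of intervals in alarm flood; a toy computation with fabricated timestamps:

```python
# Alarm occurrence times (seconds) over a 20-minute log; values are illustrative.
alarm_times = [12, 15, 30, 605, 610, 612, 615, 618, 620, 622, 624, 626, 628, 631]

window, horizon = 600, 1200            # 10-minute windows, 20-minute horizon
counts = [sum(window * k <= t < window * (k + 1) for t in alarm_times)
          for k in range(horizon // window)]

avg_rate = sum(counts) / len(counts)                       # alarms per 10 min
flood_pct = 100 * sum(c > 10 for c in counts) / len(counts)  # flood: > 10/10 min
print(f"avg alarms/10min = {avg_rate:.1f}, time in flood = {flood_pct:.0f}%")
```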
Abstract:
The use of wireless sensor and actuator networks in industry has been increasing over the past few years, bringing multiple benefits compared to wired systems, such as network flexibility and manageability. Such networks consist of a possibly large number of small, autonomous sensor and actuator devices with wireless communication capabilities. The data collected by the sensors are sent, directly or through intermediate nodes, to a base station called the sink node. Data routing in this environment is an essential matter, since it is tightly bound to energy efficiency and therefore to network lifetime. This work investigates the application of a routing technique based on the Q-Learning reinforcement learning algorithm to a wireless sensor network, using an NS-2 simulated environment. Several metrics, such as energy consumption, data packet delivery rate and delay, are used to validate the proposal, comparing it with other solutions from the literature.
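A minimal sketch of how Q-Learning can drive next-hop selection follows (this is not the simulated protocol itself; the topology, reward and hyperparameters are illustrative):

```python
import random

# Q[node][neighbor] estimates the long-run cost of forwarding via that neighbor.
neighbors = {"A": ["B", "C"], "B": ["sink"], "C": ["sink"]}
Q = {n: {m: 0.0 for m in ms} for n, ms in neighbors.items()}
alpha, gamma, eps = 0.5, 0.9, 0.1

def reward(node, nxt):
    return 0.0 if nxt == "sink" else -1.0      # unit energy cost per extra hop

for _ in range(200):                           # learning episodes
    node = "A"
    while node != "sink":
        acts = neighbors[node]
        nxt = random.choice(acts) if random.random() < eps \
              else max(acts, key=Q[node].get)  # epsilon-greedy next hop
        future = max(Q[nxt].values()) if nxt in Q else 0.0
        Q[node][nxt] += alpha * (reward(node, nxt) + gamma * future - Q[node][nxt])
        node = nxt

print(Q["A"])  # learned preference between next hops B and C
```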
Abstract:
This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for unconstrained optimization problems over continuous variables. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events during software execution. It arises primarily from the demand for high computational performance and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are still not suited to parallel architectures. The CSA algorithm consists of a group of Simulated Annealing (SA) optimizers working together to refine the solution, each SA optimizer running in its own thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: execution time; the speedup of the algorithm with respect to an increasing number of processors; and the efficiency in the use of processing elements with respect to the increasing size of the problem. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and an equivalent serial version. Both algorithms were analyzed on 14 benchmark functions, and for each function CSA was evaluated with 2 to 24 optimizers. The results obtained are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency.
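The two scalability metrics mentioned above have the standard definitions S(p) = T(1)/T(p) and E(p) = S(p)/p; a quick sketch of their computation (the timings are illustrative, not the paper's results):

```python
timings = {1: 120.0, 2: 63.0, 4: 34.0, 8: 19.5, 16: 12.0}  # seconds per run

for p, t in timings.items():
    speedup = timings[1] / t        # S(p) = T(1)/T(p)
    efficiency = speedup / p        # E(p) = S(p)/p
    print(f"p={p:2d}  speedup={speedup:5.2f}  efficiency={efficiency:5.2f}")
```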
Abstract:
The increasing demand for high-performance wireless communication systems has exposed the inefficiency of the current model of fixed radio spectrum allocation. In this context, cognitive radio appears as a more efficient alternative, providing opportunistic spectrum access with the maximum possible bandwidth. To meet these requirements, the transmitter must identify transmission opportunities and the receiver must recognize the parameters defined for the communication signal. Techniques based on cyclostationary analysis can be applied both to spectrum sensing and to modulation classification, even in low signal-to-noise ratio (SNR) environments. Despite this robustness, one of the main disadvantages of cyclostationarity is the high computational cost of calculating its functions. This work proposes efficient architectures for obtaining cyclostationary features to be employed in both spectrum sensing and automatic modulation classification (AMC). In the context of spectrum sensing, a parallelized algorithm for extracting cyclostationary features of communication signals is presented, and the performance of this parallelization is evaluated through speedup and parallel efficiency metrics. The spectrum sensing architecture is analyzed for several configurations of false alarm probability, SNR levels and observation times, for BPSK and QPSK modulations. In the context of AMC, the reduced alpha-profile is proposed as a cyclostationary signature calculated over a reduced set of cyclic frequencies. This signature is validated by a modulation classification architecture based on pattern matching. The AMC architecture is investigated with respect to the correct classification rates of AM, BPSK, QPSK, MSK and FSK modulations, considering several scenarios of observation length and SNR levels. The numerical performance results obtained in this work show the efficiency of the proposed architectures.
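As a pointer to what such features look like, here is a hedged sketch of an estimator of the cyclic autocorrelation R_x^alpha(tau) = <x(t) x*(t - tau) e^{-j 2 pi alpha t}>, evaluated on a synthetic BPSK signal; the sampling rate, carrier and symbol rate are illustrative:

```python
import numpy as np

fs, fsym, n = 8000, 400, 4096                   # sample rate, symbol rate, length
t = np.arange(n) / fs
bits = np.repeat(np.sign(np.random.default_rng(1)
                 .standard_normal(n * fsym // fs + 1)), fs // fsym)[:n]
x = bits * np.exp(2j * np.pi * 1000 * t)        # BPSK at a 1 kHz carrier

def cyclic_autocorr(x, alpha, tau):
    """Estimate R_x^alpha(tau) with a circular lag (adequate for a sketch)."""
    lagged = np.roll(x.conj(), tau)
    return np.mean(x * lagged * np.exp(-2j * np.pi * alpha * t))

# A true cyclic frequency (the symbol rate) should dominate an arbitrary one.
print(abs(cyclic_autocorr(x, fsym, 8)), abs(cyclic_autocorr(x, 731.0, 8)))
```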
Abstract:
Bayesian networks are powerful tools, as they represent probability distributions as graphs and can handle the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-Hard problem, so many heuristic algorithms have been created to generate network structures from data, and many of these algorithms use score metrics to build the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two benchmark networks, ASIA and ALARM, were used to carry out the comparison. The results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than score metrics with a weaker such tendency, for both the Heckerman-Geiger and the modified MDL metrics. The Heckerman-Geiger Bayesian score metric works better than MDL on large datasets, while MDL works better than Heckerman-Geiger on small datasets. The modified MDL gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets, with a stronger tendency to select simpler network structures.
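For context, the K2 algorithm greedily adds parents that maximize a score; below is a minimal sketch of the Cooper-Herskovits (K2) log-score for one variable given a candidate parent, under uniform Dirichlet priors, on a tiny illustrative dataset:

```python
from math import lgamma

data = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0)]  # samples of (parent, child)
r = 2                                            # child cardinality

def k2_log_score(data, r):
    score = 0.0
    for pa in {row[0] for row in data}:          # each parent configuration j
        counts = [sum(1 for p, c in data if p == pa and c == k) for k in range(r)]
        n_ij = sum(counts)
        score += lgamma(r) - lgamma(n_ij + r)        # log (r-1)!/(N_ij+r-1)!
        score += sum(lgamma(c + 1) for c in counts)  # log prod_k N_ijk!
    return score

print(k2_log_score(data, r))  # higher is better when comparing parent sets
```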
Abstract:
Macrophytes are important primary producers in aquatic ecosystems, and an imbalance in the environment can trigger their accelerated growth. Surveys of data on submerged macrophytes are therefore important contributions to the management of water bodies. However, sampling this vegetation demands an enormous physical effort, and in this respect the hydroacoustic technique is well suited to the study of submerged macrophytes. The objectives of this work were thus to evaluate the types of data generated by an echosounder and to analyze how these data characterize the vegetation. A BioSonics DT-X echosounder coupled to a GPS was used; the study area is a stretch of the Uberaba River, in Minas Gerais, Brazil. Sampling was performed along transects, navigating from one margin to the other. After processing the data, information was obtained on the occurrence of submerged macrophytes, depth, mean plant height, percentage of vegetation cover, and position. From this data set it was possible to extract two further metrics: biovolume and effective canopy height (ECH). The data were imported into a Geographic Information System, and illustrative maps of the studied variables were generated. In addition, four profiles were selected to analyze the differences between the quantities used to represent macrophytes. The echosounder proved to be an effective tool for mapping submerged macrophytes. Each of the measures - canopy height, ECH or biovolume - characterizes the submerged vegetation differently, so the choice of representation depends on the intended application.
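A small sketch of how the two derived metrics can be computed per sounding, assuming the definitions commonly used in the hydroacoustic literature, biovolume = (plant height / depth) x coverage and ECH = plant height x coverage (the abstract does not fix the formulas, and the values below are fabricated for illustration):

```python
records = [  # (depth_m, mean_plant_height_m, coverage_fraction) per ping
    (2.0, 0.8, 0.6),
    (1.5, 0.5, 0.3),
    (2.4, 1.2, 0.9),
]

for depth, height, cover in records:
    biovolume = 100 * (height / depth) * cover   # % of water column occupied
    ech = height * cover                         # effective canopy height (m)
    print(f"depth={depth} m  biovolume={biovolume:.0f}%  ECH={ech:.2f} m")
```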
Abstract:
In Einstein's theory of General Relativity, the field equations relate the geometry of space-time to its content of matter and energy, the sources of the gravitational field. This content is described by a second-order tensor known as the energy-momentum tensor. On the other hand, the energy-momentum tensors that have physical meaning are not specified by the theory. In the 1970s, Hawking and Ellis established a set of conditions, considered reasonable from a physical point of view, in order to limit the arbitrariness of these tensors. These conditions, which became known as the Hawking-Ellis energy conditions, play important roles in gravitation. They are widely used as powerful analysis tools: from the proof of important theorems on the behavior of gravitational fields and their associated geometries, and the quantum behavior of gravity, to the analysis of cosmological models. In this dissertation we present a rigorous deduction of the several energy conditions currently in vogue in the scientific literature: the Null Energy Condition (NEC), the Weak Energy Condition (WEC), the Strong Energy Condition (SEC), the Dominant Energy Condition (DEC) and the Null Dominant Energy Condition (NDEC). Bearing in mind the most common applications in Cosmology and Gravitation, the deductions were first carried out for the energy-momentum tensor of a generalized perfect fluid and then extended to scalar fields with minimal and non-minimal coupling to the gravitational field. We also present a study of the possible violations of some of these energy conditions. Aiming at the study of the singular nature of some exact solutions of Einstein's General Relativity, in 1955 the Indian physicist Raychaudhuri derived an equation that is today considered fundamental to the study of the gravitational attraction of matter, and which became known as the Raychaudhuri equation. This famous equation is fundamental to the understanding of gravitational attraction in Astrophysics and Cosmology and to the comprehension of the singularity theorems, such as the Hawking-Penrose theorem on the singularity of gravitational collapse. In this dissertation we derive the Raychaudhuri equation, the Frobenius theorem and the focusing theorem for timelike and null congruences of a pseudo-Riemannian manifold, and we discuss the geometric and physical meaning of this equation, its connections with the energy conditions, and some of its several applications.
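For reference, the standard textbook statements of the pointwise conditions and of the equation mentioned above (for an energy-momentum tensor $T_{ab}$, timelike $u^a$, null $k^a$, and a timelike geodesic congruence with expansion $\theta$, shear $\sigma_{ab}$ and rotation $\omega_{ab}$; conventions may differ from the dissertation's):

```latex
\begin{align*}
\text{NEC:}\quad & T_{ab}\,k^a k^b \ge 0 \quad \text{for all null } k^a,\\
\text{WEC:}\quad & T_{ab}\,u^a u^b \ge 0 \quad \text{for all timelike } u^a,\\
\text{SEC:}\quad & \Big(T_{ab}-\tfrac{1}{2}T\,g_{ab}\Big)u^a u^b \ge 0,\\
\text{DEC:}\quad & T_{ab}\,u^a u^b \ge 0 \ \text{ and } -T^{a}{}_{b}\,u^b \ \text{causal},
\end{align*}
% and the Raychaudhuri equation for a timelike geodesic congruence:
\[
\frac{d\theta}{d\tau}
  = -\frac{\theta^{2}}{3}
    - \sigma_{ab}\sigma^{ab}
    + \omega_{ab}\omega^{ab}
    - R_{ab}\,u^{a}u^{b}.
\]
```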
Abstract:
Much has been researched and discussed about the role played by knowledge in organizations. We are witnessing the establishment of the knowledge economy, but this "new economy" carries with it a whole complex system of metrics and evaluations, and cannot be dissociated from it. Given their importance, knowledge management initiatives must be continually assessed with respect to their progress, in order to verify whether they are moving toward their goals. Thus, good measurement practices should include not only how the organization quantifies its knowledge capital, but also how resources are allocated to support its growth. With the aspects above in mind, this paper presents an approach to a model for knowledge extraction using an ERP system, suggesting the establishment of a set of indicators for assessing organizational performance. The objective is to evaluate the implementation of knowledge management projects and thus observe the overall development of the organization.
Abstract:
In this dissertation we present some generalizations of the concept of distance that use more general value spaces, such as fuzzy metrics, probabilistic metrics and generalized metrics. We show how such generalizations may be useful, given that the distance between two objects can carry more information about the objects than it does when represented by a single real number. We also propose another generalization of distance, which encompasses the notion of interval metric and generates a topology in a natural way. Several properties of this generalization are investigated, as well as its links with other existing generalizations.
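As one concrete instance of the value-space generalizations mentioned above, the George-Veeramani axioms for a fuzzy metric (a standard formulation from the literature, not necessarily the dissertation's exact one) read:

```latex
% Fuzzy metric space (X, M, *): * is a continuous t-norm and
% M : X \times X \times (0,\infty) \to [0,1] satisfies
\begin{align*}
&\text{(FM1)}\quad M(x,y,t) > 0,\\
&\text{(FM2)}\quad M(x,y,t) = 1 \iff x = y,\\
&\text{(FM3)}\quad M(x,y,t) = M(y,x,t),\\
&\text{(FM4)}\quad M(x,z,t+s) \ge M(x,y,t) * M(y,z,s),\\
&\text{(FM5)}\quad M(x,y,\cdot) : (0,\infty) \to [0,1] \text{ is continuous,}
\end{align*}
% with M(x,y,t) read as the degree to which the distance from x to y
% is smaller than t -- a distance value richer than a single real number.
```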
Abstract:
Over the years, the use of application frameworks designed for the View and Controller layers of the MVC architectural pattern, adapted to web applications, has become very popular. These frameworks are classified as Action Oriented or Component Oriented, according to the solution strategy adopted by the tool. The choice of strategy leads the system architecture to acquire non-functional characteristics shaped by the way the framework influences the developer to implement the system. Component reusability is one of these characteristics, and it plays a very important role in development activities such as system evolution and maintenance. This dissertation analyzes how reusability can be influenced by the use of web frameworks. To this end, small academic management applications were developed using the latest versions of the Apache Struts and JavaServer Faces frameworks, the main representatives of Java platform web frameworks. The assessment used a software quality model that associates internal attributes, which can be measured objectively, with the characteristics in question. The attributes and metrics defined for the model were based on related work discussed in the document.