Abstract:
A new method, based on linear correlation and phase diagrams, was successfully developed for processes like the sedimentary process, where the deposition phase can have different time durations - represented by repeated values in a series - and where erosion can play an important role by deleting values from a series. The sampling process itself can be the cause of repeated values - a large stratum sampled twice - or of deleted values - a tiny stratum falling between two consecutive samples. We developed a mathematical procedure which, based on the evolution of chemical composition with depth, allows the establishment of the boundaries as well as the periodicity of different sedimentary environments. The basic tool is nothing more than a linear correlation analysis which allows us to detect eventual evolution rules connected with cyclical phenomena within time series (with space assimilated to time), with the final objective of prediction. A very interesting discovery was the phenomenon of repeated sliding windows, which represent quasi-cycles of a series of quasi-periods. An accurate forecast can be obtained if we are inside a quasi-cycle (it is possible to predict the remaining elements of the cycle with a probability related to the number of repeated and deleted points). As this is an innovative methodology, its efficiency is being tested in several case studies, with remarkable results that show its efficacy. Keywords: sedimentary environments, sequence stratigraphy, data analysis, time-series, conditional probability.
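For illustration only, here is a minimal Python sketch of the kind of sliding-window linear correlation scan described above: two windows of a depth-ordered series that correlate strongly are flagged as a candidate quasi-cycle. The window length, correlation threshold, and synthetic signal are assumptions made for the example, not parameters from the study.

```python
import numpy as np

def find_quasi_cycles(series, window=8, threshold=0.95):
    """Scan a depth-ordered series for pairs of sliding windows whose
    linear (Pearson) correlation exceeds a threshold, flagging them as
    candidate quasi-cycles."""
    series = np.asarray(series, dtype=float)
    n = len(series) - window + 1
    matches = []
    for i in range(n):
        for j in range(i + window, n):  # compare non-overlapping windows
            r = np.corrcoef(series[i:i + window], series[j:j + window])[0, 1]
            if r >= threshold:
                matches.append((i, j, r))  # quasi-period ~ (j - i) samples
    return matches

# Example: a noisy periodic signal stands in for a chemical depth profile.
depth_signal = np.sin(np.linspace(0, 6 * np.pi, 90)) + 0.1 * np.random.randn(90)
for i, j, r in find_quasi_cycles(depth_signal)[:5]:
    print(f"windows at {i} and {j} correlate with r = {r:.3f}")
```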
Abstract:
Current manufacturing systems challenges, arising from the international economic crisis, market globalization and e-business trends, incite the development of intelligent systems to support decision making, allowing managers to concentrate on high-level task management while improving decision response and effectiveness towards manufacturing agility. This paper presents a novel negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence (SI) is considered a general aggregation term for several computational techniques which use ideas and inspiration from the social behaviors of insects and other biological systems. This work is primarily concerned with negotiation, where multiple self-interested agents can reach agreement over the exchange of operations on competitive resources. Experimental analysis was performed in order to validate the influence of the negotiation mechanism on system performance and on the SI technique. Empirical results and statistical evidence illustrate that the negotiation mechanism significantly influences the overall system performance and the effectiveness of the Artificial Bee Colony for makespan minimization and machine occupation maximization.
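As a hedged illustration of the SI technique named above, the sketch below implements a toy Artificial Bee Colony loop for makespan minimization on identical parallel machines. The encoding, parameters, and problem instance are all invented and far simpler than the paper's negotiation mechanism; it only shows the employed/onlooker/scout pattern that characterizes ABC.

```python
import random

def makespan(assign, times, machines):
    """Makespan of a job->machine assignment on identical machines."""
    loads = [0.0] * machines
    for job, m in enumerate(assign):
        loads[m] += times[job]
    return max(loads)

def abc_schedule(times, machines, colony=20, limit=10, iters=200):
    """Toy Artificial Bee Colony: each food source is an assignment;
    stagnant sources are abandoned and replaced by scout bees."""
    n = len(times)
    sources = [[random.randrange(machines) for _ in range(n)] for _ in range(colony)]
    trials = [0] * colony
    best = min(sources, key=lambda s: makespan(s, times, machines))
    for _ in range(iters):
        for k in range(colony):
            # employed/onlooker step: try a one-job reassignment neighbor
            cand = sources[k][:]
            cand[random.randrange(n)] = random.randrange(machines)
            if makespan(cand, times, machines) < makespan(sources[k], times, machines):
                sources[k], trials[k] = cand, 0
            else:
                trials[k] += 1
            if trials[k] > limit:  # scout step: abandon stagnant source
                sources[k] = [random.randrange(machines) for _ in range(n)]
                trials[k] = 0
        cur = min(sources, key=lambda s: makespan(s, times, machines))
        if makespan(cur, times, machines) < makespan(best, times, machines):
            best = cur[:]
    return best, makespan(best, times, machines)

jobs = [4, 7, 2, 5, 3, 6, 8, 1]          # invented processing times
assignment, ms = abc_schedule(jobs, machines=3)
print(assignment, ms)
```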
Abstract:
Power law (PL) distributions have been widely reported in the modeling of distinct real phenomena and have been associated with fractal structures and self-similar systems. In this paper, we analyze real data that follows a PL and a double PL behavior and verify the relation between the PL coefficient and the capacity dimension of known fractals. A method is proposed that translates PL coefficients into the capacity dimension of fractals for any real data.
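As a hedged sketch of how a PL coefficient can be estimated and compared against a known capacity dimension, the snippet below fits the exponent of a synthetic Pareto sample by log-log regression on the empirical complementary CDF and compares it with log 2 / log 3, the capacity dimension of the middle-third Cantor set (a standard reference value). The estimation recipe is a common textbook approach, not necessarily the method proposed in the paper.

```python
import numpy as np

def pl_exponent(data):
    """Estimate alpha in P(X >= x) ~ x^(-alpha) by least squares on the
    empirical complementary CDF in log-log coordinates."""
    x = np.sort(np.asarray(data, dtype=float))
    ccdf = 1.0 - np.arange(len(x)) / len(x)          # P(X >= x_i), never zero
    slope, _ = np.polyfit(np.log(x), np.log(ccdf), 1)
    return -slope

# Synthetic Pareto sample with alpha = log(2)/log(3), the capacity
# dimension of the middle-third Cantor set.
alpha_true = np.log(2) / np.log(3)                    # ~0.6309
u = np.random.rand(50_000)
sample = (1.0 - u) ** (-1.0 / alpha_true)             # inverse-CDF sampling
print(f"fitted alpha = {pl_exponent(sample):.3f}, target = {alpha_true:.3f}")
```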
Abstract:
Master's in Sustainable Energies
Abstract:
The positioning of consumers in power systems operation has changed in recent years, namely due to the implementation of competitive electricity markets. Demand response is an opportunity for consumers' participation in electricity markets. Smart grids can give important support to the integration of demand response. The methodology proposed in the present paper aims to create an improved demand response program definition and remuneration scheme for aggregated resources. The consumers are aggregated into a certain number of clusters, each one corresponding to a distinct demand response program, according to the economic impact of the resulting remuneration tariff. The knowledge about the consumers is obtained from their demand price elasticity values. The illustrative case study included in the paper is based on a scenario with 218 consumers.
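As a simplified stand-in for the clustering step described above, the sketch below groups hypothetical consumers by their price-elasticity values with k-means. The paper's criterion (the economic impact of the resulting remuneration tariff) is richer than this one-feature example, and all numbers are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-consumer price-elasticity values (one feature per
# consumer); invented for illustration.
rng = np.random.default_rng(0)
elasticity = rng.uniform(-1.5, -0.1, size=(218, 1))   # 218 consumers

n_programs = 4                                        # one DR program per cluster
labels = KMeans(n_clusters=n_programs, n_init=10, random_state=0).fit_predict(elasticity)

for c in range(n_programs):
    members = elasticity[labels == c, 0]
    print(f"program {c}: {members.size} consumers, "
          f"mean elasticity {members.mean():.2f}")
```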
Abstract:
Charging for the use of distribution networks in the current scenario of high penetration of Distributed Generation (DG) is a problem of great importance. In the competitive environment of electricity markets and smart grids, Demand Response (DR) is also gaining notable impact, with several benefits for the whole system. The work presented in this paper comprises a methodology able to define the cost allocation in distribution networks considering large-scale integration of DG and DR resources. The proposed methodology is divided into three phases and is based on an AC Optimal Power Flow (OPF), including the determination of topological distribution factors and the consequent application of the MW-mile method. The application of the proposed tariff definition methodology is illustrated in a distribution network with 33 buses, 66 DG units, and 32 consumers with DR capacity.
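To make the MW-mile step concrete, here is a minimal sketch with invented numbers: each line's cost is shared among network users in proportion to the MW x km usage they impose on it. In the paper the per-user flows come from topological distribution factors computed on the AC OPF solution; here they are simply example values.

```python
import numpy as np

# Hypothetical example: 3 lines, 4 network users. flow[l, k] is the MW
# flow on line l attributed to user k (values invented for illustration).
flow = np.array([[10.0,  5.0, 0.0, 5.0],
                 [ 2.0,  8.0, 6.0, 4.0],
                 [ 0.0,  3.0, 9.0, 3.0]])
length_km = np.array([30.0, 50.0, 20.0])      # line lengths
cost_per_km = np.array([100.0, 80.0, 120.0])  # annualized cost rate per km

# MW-mile allocation: split each line's cost in proportion to the
# MW x km usage each user imposes on that line.
usage = np.abs(flow) * length_km[:, None]          # MW-km per line and user
share = usage / usage.sum(axis=1, keepdims=True)   # per-line usage shares
charges = share.T @ (cost_per_km * length_km)      # cost allocated per user
print(charges)                                      # one charge per user
```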
Abstract:
The implementation of competitive electricity markets has changed the position of consumers and distributed generation in power systems operation. The use of distributed generation and the participation in demand response programs, namely in smart grids, bring several advantages for consumers, aggregators, and system operators. The present paper proposes a remuneration structure for aggregated distributed generation and demand response resources. A virtual power player aggregates all the resources. The resources are aggregated into a certain number of clusters, each one corresponding to a distinct tariff group, according to the economic impact of the resulting remuneration tariff. The determined tariffs are intended to be used for several months; the aggregator can define the periodicity of the tariff definition. The case study in this paper includes 218 consumers and 66 distributed generation units.
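One plausible reading of a cluster-wide tariff, sketched here with invented numbers, is a payment-preserving weighted average: the cluster's total remuneration divided by its total energy, so that applying a single tariff to the whole group leaves costs neutral. This is an assumption for illustration, not the paper's tariff definition.

```python
import numpy as np

# Hypothetical cluster of aggregated resources: energy delivered (MWh)
# and the individually computed remuneration (EUR) for each member.
energy = np.array([120.0, 75.0, 210.0, 40.0])
payment = np.array([9600.0, 6150.0, 16170.0, 3400.0])

# Payment-preserving group tariff: total remuneration over total energy,
# so one tariff applied to the whole cluster keeps the aggregator neutral.
group_tariff = payment.sum() / energy.sum()   # EUR/MWh
print(f"group tariff: {group_tariff:.2f} EUR/MWh")
```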
Abstract:
The concept of demand response has been drawing attention to active participation in the economic operation of power systems, namely in the context of recent electricity markets and smart grid models and implementations. In these competitive contexts, aggregators are necessary in order to make the participation of small-size consumers and generation units possible. The methodology proposed in the present paper aims to address the demand shifting between periods, considering multi-period demand response events. The focus is on the impact in the subsequent periods. A Virtual Power Player operates the network, aggregating the available resources and minimizing the operation costs. The illustrative case study included is based on a scenario of 218 consumers, including generation sources.
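As a toy illustration of multi-period demand shifting with impact on subsequent periods, the sketch below removes a share of the demand in the event periods and returns it in later payback periods. All values and the payback rule are invented for the example.

```python
import numpy as np

# Hypothetical hourly demand (MW). A DR event shifts part of the load out
# of the event periods; a payback rule returns it in subsequent periods.
demand = np.array([50.0, 55, 60, 80, 85, 70, 60, 55])
event = [3, 4]            # event periods
shift_share = 0.2         # fraction of event demand shifted out
payback = [5, 6]          # subsequent periods receiving the shifted energy

after = demand.copy()
shifted = shift_share * demand[event].sum()
after[event] *= 1 - shift_share
after[payback] += shifted / len(payback)   # impact on subsequent periods
print(after, after.sum() - demand.sum())   # the shift is energy-neutral
```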
Abstract:
6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.
Abstract:
Advances in technology have produced more and more intricate industrial systems, such as nuclear power plants, chemical centers and petroleum platforms. Such complex plants exhibit multiple interactions among smaller units and human operators, raising the potential for disastrous failures that can propagate across subsystem boundaries. This paper analyzes industrial accident data series from the perspective of statistical physics and dynamical systems. Global data is collected from the Emergency Events Database (EM-DAT) for the period from 1903 up to 2012. The statistical distributions of the number of fatalities caused by industrial accidents reveal Power Law (PL) behavior. We analyze the evolution of the PL parameters over time and observe a remarkable increase in the PL exponent during recent years. PL behavior allows prediction by extrapolation over a wide range of scales. In a complementary line of thought, we compare the data using appropriate indices and use different visualization techniques to correlate and to extract relationships among industrial accident events. This study contributes to a better understanding of the complexity of modern industrial accidents and their ruling principles.
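For a flavor of how PL parameters can be tracked over time, the sketch below applies the standard maximum-likelihood (Hill/Clauset-style) exponent estimator to synthetic per-decade samples. The study itself uses EM-DAT records (1903 to 2012), which would replace these invented draws.

```python
import numpy as np

def pl_alpha_mle(x, xmin=10.0):
    """Maximum-likelihood exponent of a continuous power law
    p(x) ~ x^(-alpha) for x >= xmin (the standard Hill/Clauset estimator)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Synthetic stand-in for per-decade fatality samples with a drifting
# exponent; invented numbers, not EM-DAT data.
rng = np.random.default_rng(1)
for i, decade in enumerate(range(1900, 2020, 10)):
    alpha_true = 1.4 + 0.05 * i
    u = 1.0 - rng.random(500)                         # u in (0, 1]
    sample = 10.0 * u ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling
    print(decade, f"alpha = {pl_alpha_mle(sample):.2f}")
```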
Abstract:
Background: Musicians are frequently affected by playing-related musculoskeletal disorders (PRMD). Common solutions used by Western medicine to treat musculoskeletal pain include rehabilitation programs and drugs, but their results are sometimes disappointing. Objective: To study the effects of self-administered exercises based on Tuina techniques on the pain intensity caused by PRMD in professional orchestra musicians, using a numeric visual scale (NVS). Design, setting, participants and interventions: We performed a prospective, controlled, single-blinded, randomized study with musicians suffering from PRMD. Participating musicians were randomly distributed into the experimental (n = 39) and the control (n = 30) groups. After an individual diagnostic assessment, specific Tuina self-administered exercises were developed and taught to the participants. Musicians were instructed to repeat the exercises every day for 3 weeks. Main outcome measures: Pain intensity was measured by NVS before the intervention and after 1, 3, 5, 10, 15 and 20 days of treatment. The procedure was the same for the control group; however, the Tuina exercises were executed at points away from the commonly used acupuncture points. Results: In the treatment group, but not the control group, pain intensity was significantly reduced on days 1, 3, 5, 10, 15 and 20. Conclusion: The results obtained are consistent with the hypothesis that self-administered exercises based on Tuina techniques could help professional musicians control the pain caused by PRMD. Although our results are very promising, further studies are needed employing a larger sample size and double-blinded designs.
Abstract:
The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as the alternative to cope with this growing complexity. It is a holistic approach, in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems where the complexity is high. The number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on these types of applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. The anomalies are detected and pinpointed by means of statistical correlation. The data analysis detects changes in the server response time and analyzes whether those changes are correlated with the workload or are due to a performance anomaly. In the presence of performance anomalies, the data analysis pinpoints the anomaly and SHõWA executes a recovery procedure. We also present a study about the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmarking applications, exercised through dynamic workloads, and different types of anomalies were considered in the study. The results reveal that: (1) SHõWA is able to detect and pinpoint anomalies while the number of end users affected is low; (2) SHõWA was able to detect anomalies without raising any false alarm; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the response time delay was no more than 2 milliseconds).
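Here is a minimal sketch of the correlation idea behind the data analysis, under the assumption that a rank correlation such as Spearman's is used: if response-time growth tracks the workload, it is treated as load-induced; otherwise it is flagged as a performance anomaly. The threshold and the monitoring data are invented, not SHõWA's actual algorithm.

```python
import numpy as np
from scipy.stats import spearmanr

def correlated_with_workload(workload, resp_time, threshold=0.7):
    """If response-time changes track the workload (high rank correlation),
    treat them as load-induced; otherwise flag a performance anomaly."""
    rho, _ = spearmanr(workload, resp_time)
    return rho >= threshold

# Synthetic monitoring windows (requests/s and mean response time in ms).
rng = np.random.default_rng(2)
load = 250 + 100 * np.sin(np.linspace(0, 4 * np.pi, 60)) + rng.normal(0, 5, 60)
healthy = 40 + 0.3 * load + rng.normal(0, 5, 60)                # tracks workload
degraded = 40 + np.linspace(0, 150, 60) + rng.normal(0, 5, 60)  # drifts regardless

print(correlated_with_workload(load, healthy))    # True: load-induced change
print(correlated_with_workload(load, degraded))   # False: likely anomaly
```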