17 results for column classification
Abstract:
The attached document is the post-print version (the version revised by the publisher).
Abstract:
This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample database resulting from a monitoring campaign, Data Mining (DM) techniques are used to discover a set of typical MV consumer load profiles and thereby to extract knowledge about electric energy consumption patterns. In a first stage, several hierarchical clustering algorithms were applied and their clustering performance compared using adequacy measures. In a second stage, a classification model was developed to allow the assignment of new consumers to one of the clusters obtained in the previous stage. Finally, the interpretation of the discovered knowledge is presented and discussed.
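As a minimal sketch of the two-stage approach described above, the snippet below clusters a hypothetical matrix of daily load profiles hierarchically and then trains a classifier to assign new consumers to the discovered clusters. The data, number of clusters, and the specific algorithms (scikit-learn's AgglomerativeClustering and a decision tree) are illustrative assumptions; the paper's actual algorithms and adequacy measures may differ.

```python
# Illustrative two-stage sketch: hierarchical clustering of load profiles,
# then a classification model to assign new consumers to the clusters.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 96))            # hypothetical normalized daily load profiles (96 points each)

# Stage 1: hierarchical clustering into a chosen number of typical profiles
clustering = AgglomerativeClustering(n_clusters=5, linkage="ward")
labels = clustering.fit_predict(X)

# Stage 2: train a classification model on the cluster labels
clf = DecisionTreeClassifier(max_depth=4).fit(X, labels)

# Classify a new consumer's load profile into one of the discovered clusters
new_profile = rng.random((1, 96))
print(clf.predict(new_profile))
```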
Abstract:
The growing importance and influence of new resources connected to power systems has caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation due to both new operational constraints and the amount of data to be processed. Virtual Power Players (VPP) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when the VPP acts as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
Abstract:
This research work focused on the study of gallinaceous feathers, a waste that may be valorised as a sorbent, to remove the dyestuff Dark Blue Astrazon 2RN (DBA) from Dystar. The study addressed the following aspects: optimization of experimental conditions through factorial design methodology, kinetic studies in a continuous stirred tank adsorber (at pH 7 and 20 °C), equilibrium isotherms (at pH 5, 7 and 9, at 20 and 45 °C) and column studies (at 20 °C, at pH 5, 7 and 9). In order to evaluate the influence of the presence of other components on the sorption of the dyestuff, all experiments were performed both for the dyestuff in aqueous solution and in a real textile effluent. The pseudo-first and pseudo-second order kinetic models were fitted to the experimental data, the latter giving the best fit for the aqueous solution of the dyestuff. For the real effluent both models fit the experimental results and there is no statistical difference between them. A Central Composite Design (CCD) was used to evaluate the effects of temperature (15–45 °C) and pH (5–9) on the sorption in aqueous solution. The influence of pH was more significant than that of temperature, and the optimal conditions selected were 45 °C and pH 9. Both the Langmuir and Freundlich models could fit the equilibrium data. In the concentration range studied, the highest sorbent capacity was obtained at the optimal conditions in aqueous solution, corresponding to a maximum capacity of 47 ± 4 mg g-1. The Yoon-Nelson, Thomas and Yan models fitted the column experimental data well. The highest breakthrough time for 50% removal, 170 min, was obtained at pH 9 in aqueous solution. The presence of the dyeing agents in the real wastewater decreased the sorption of the dyestuff, mostly at pH 9, which is the optimal pH; the effect of pH is less pronounced in the real effluent than in aqueous solution. This work shows that feathers can be used as a sorbent in the treatment of textile wastewaters containing DBA.
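For reference, the kinetic and equilibrium models named above are commonly written as follows, with q_t the sorbed amount at time t, q_e the equilibrium sorbed amount and C_e the equilibrium concentration; the exact parameterizations used in the work may differ.

\[
q_t = q_e\left(1-e^{-k_1 t}\right)\ \ \text{(pseudo-first order)}, \qquad
q_t = \frac{k_2\,q_e^2\,t}{1 + k_2\,q_e\,t}\ \ \text{(pseudo-second order)},
\]
\[
q_e = \frac{q_{max}\,K_L\,C_e}{1 + K_L\,C_e}\ \ \text{(Langmuir)}, \qquad
q_e = K_F\,C_e^{1/n}\ \ \text{(Freundlich)}.
\]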
Abstract:
Purpose: To describe and compare the content of instruments that assess environmental factors using the International Classification of Functioning, Disability and Health (ICF). Methods: A systematic search of the PubMed, CINAHL and PEDro databases was conducted using a pre-determined search strategy. The identified instruments were screened independently by two investigators, and meaningful concepts were linked to the most precise ICF category according to published linking rules. Results: Six instruments were included, containing 526 meaningful concepts. Instruments had between 20% and 98% of items linked to categories in Chapter 1. The highest percentage of items from one instrument linked to categories in Chapters 2–5 varied between 9% and 50%. The presence or absence of environmental factors in a specific context is assessed by 3 instruments, while the other 3 assess the intensity of the impact of environmental factors. Discussion: Instruments differ in their content and type of assessment, and have several items linked to the same ICF category. Most instruments primarily assess products and technology (Chapter 1), highlighting the need to deepen the discussion on the theory that supports the measurement of environmental factors. This discussion should be thorough and lead to the development of methodologies and new tools that capture the underlying concepts of the ICF.
Abstract:
The goal of this work was the treatment of polluted waste gases in a bubble column reactor (BCR), in order to determine the maximum value of the removal efficiency (RE) while varying the inlet concentration (Cin) of the pollutants. The gaseous mixtures studied were: (i) air with styrene and (ii) air with styrene and acetone. The liquid phase used to contain the biomass in the reactor was a basal salt medium (BSM), fundamental for the development of the microorganisms. The reactor used in this project consists of a glass column of 620 mm height and 75 mm inside diameter. In all assays, pH, dissolved oxygen and liquid temperature were continuously measured, with temperature and pH controlled (T = 24 °C, 7.0 ≤ pH ≤ 7.7). In all experiments the liquid volume (including the biomass) used in the reactor was kept constant (1.5 L), as was the total gas flow rate (1 L/min). In line with the goal of the work, several parameters were calculated: the organic load (OL), removal efficiency (RE), elimination capacity (EC), biomass concentration (Xf) and dry biomass concentration (Xdw). In a first series of experiments, the gas mixture used was air with styrene, varying its concentration from 191 mg.m-3 to 6500 mg.m-3. It was concluded that the maximum RE value (97%) was obtained for Cin,Sty = 4200 mg.m-3; for the maximum tested value of Cin,Sty, the RE obtained was 20%. In a second step, the gaseous mixture included acetone, varying Cin,Sty between 225 mg.m-3 and 2659 mg.m-3 and Cin,Ac between 153 mg.m-3 and 1389 mg.m-3. The aim of these tests was the determination of the Cin,Ac for which RE was maximum, which yielded Cin,Ac = 750 mg.m-3. A third series of experiments was then performed, in which Cin,Ac was kept at that value and Cin,Sty was varied up to higher values (5422 mg.m-3). The maximum RE values obtained in this last series were 100% for styrene and 40% for acetone. One important conclusion is that the available microorganisms degrade styrene better than acetone. Within the scope of this study, it was possible to identify the species present in the biomass: Xanthobacter autotrophicus Py2, Enterobacter aerogenes, Nocardia, Corynebacterium spp., Rhodococcus rhodochrous and Pseudomonas sp.
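For orientation, the performance parameters referred to above are usually defined as below, with Q the gas flow rate, V the working (liquid) volume and Cin, Cout the inlet and outlet pollutant concentrations; the exact definitions adopted in the work may vary.

\[
\mathrm{OL} = \frac{Q\,C_{in}}{V}, \qquad
\mathrm{RE} = \frac{C_{in}-C_{out}}{C_{in}}\times 100\%, \qquad
\mathrm{EC} = \frac{Q\,(C_{in}-C_{out})}{V}.
\]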
Abstract:
The main goal of this research study was the removal of Cu(II), Ni(II) and Zn(II) from aqueous solutions using peanut hulls. The work focused mainly on the following aspects: chemical characterization of the biosorbent, kinetic studies, study of the pH influence in mono-component systems, equilibrium isotherms and column studies, both in mono- and tri-component systems and with a real industrial effluent from the electroplating industry. The chemical characterization of peanut hulls showed a high cellulose (44.8%) and lignin (36.1%) content, which favours the biosorption of metal cations. The kinetic studies performed indicate that most of the sorption occurs in the first 30 min for all systems. In general, pseudo-second order kinetics was followed, both in mono- and tri-component systems. The equilibrium isotherms were better described by the Freundlich model in all systems. Peanut hulls showed a higher affinity for copper than for nickel and zinc when the three metals are present simultaneously. A pH value between 5 and 6 was the most favourable for all systems. The sorbent capacity in column was 0.028 and 0.025 mmol g-1 for copper, in mono- and tri-component systems respectively. A 50% decrease of the capacity for copper was observed when dealing with the real effluent. The Yoon-Nelson, Thomas and Yan models were fitted to the experimental data, the latter giving the best fit.
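The column (breakthrough) models named above are commonly expressed as below, with C_t/C_0 the normalized outlet concentration, q_0 the sorption capacity, m the sorbent mass, Q the flow rate and τ the time for 50% breakthrough; the Yan model is an empirical alternative of similar sigmoidal shape, and the parameterizations used in the work may differ.

\[
\frac{C_t}{C_0} = \frac{1}{1+\exp\!\left(\dfrac{k_{Th}\,q_0\,m}{Q} - k_{Th}\,C_0\,t\right)}\ \ \text{(Thomas)}, \qquad
\frac{C_t}{C_0} = \frac{1}{1+\exp\!\left[k_{YN}\,(\tau - t)\right]}\ \ \text{(Yoon--Nelson)}.
\]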
Abstract:
This manuscript analyses the data generated by a Zero Length Column (ZLC) diffusion experimental set-up for 1,3-di-isopropylbenzene in a 100% alumina matrix with variable particle size. The time evolution of the phenomena resembles that of fractional-order systems, namely a fast initial transient followed by long and slow tails. The experimental measurements are best fitted by the Harris model, revealing a power-law behavior.
Abstract:
Optimization problems arise in science, engineering, economics, etc., and we need to find the best solution for each situation. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the functions involved are nonlinear and their derivatives are not known or are very difficult to calculate, such methods are scarcer. These kinds of functions are frequently called black box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which do not require any derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used: they transform the original problem into a sequence of other problems, derived from the initial one, all without constraints, and this sequence of unconstrained problems can be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solution of optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods that are typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
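As a generic illustration of the penalty approach described above (one classical choice among the methods classified in the chapter, not necessarily the formulation adopted there), a constrained problem can be replaced by a sequence of unconstrained subproblems:

\[
\min_{x} f(x)\ \ \text{s.t.}\ \ g_i(x) \le 0,\ h_j(x) = 0
\quad\longrightarrow\quad
\min_{x}\ \Phi_k(x) = f(x) + \mu_k\!\left(\sum_i \max\{0, g_i(x)\}^2 + \sum_j h_j(x)^2\right),
\]

with the penalty parameter \(\mu_k\) increased (or adjusted dynamically) from one subproblem to the next, each subproblem being solvable by a derivative-free method such as direct search.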
Abstract:
In practice, robotic manipulators exhibit some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be considered in order to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in order to obtain suitable signals of the vibration phenomena. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented. Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications. Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of logical sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. The recent development of micro-electro-mechanical sensors (MEMS) with wireless communication capabilities enables sensor networks with interesting capabilities. This technology has been applied in several domains (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.
This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, the paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation set-up. Section 3 presents the experimental results. Finally, Section 4 draws the main conclusions and points out future work.
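A minimal sketch of the kind of processing described above is given below, assuming the eighteen acquired signals are rows of a NumPy array sampled at a known rate: magnitude spectra are obtained with the FFT and a correlation matrix between spectra is computed as one possible statistical metric. Sampling rate, array shapes and data are illustrative assumptions; the paper's actual metrics and procedure may differ.

```python
# Sketch: FFT magnitude spectra of the acquired signals and a correlation
# matrix between spectra, as a basis for a sensor classification scheme.
import numpy as np

fs = 1000.0                                # assumed sampling frequency [Hz]
rng = np.random.default_rng(1)
signals = rng.standard_normal((18, 4096))  # 18 hypothetical sensor signals

spectra = np.abs(np.fft.rfft(signals, axis=1))       # magnitude spectra
freqs = np.fft.rfftfreq(signals.shape[1], d=1.0/fs)  # frequency axis

dominant = freqs[np.argmax(spectra[:, 1:], axis=1) + 1]  # dominant frequency per sensor (skip DC)
corr = np.corrcoef(spectra)                              # statistical metric between spectra
print(dominant.shape, corr.shape)                        # (18,), (18, 18)
```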
Abstract:
This chapter analyzes the signals captured during impacts and vibrations of a mechanical manipulator. Eighteen signals are captured and several metrics are calculated between them, such as the correlation, the mutual information and the entropy. A sensor classification scheme based on the multidimensional scaling technique is presented.
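As an illustration of the final step described above, a pairwise metric between the signals (here, one minus the absolute correlation, used as a hypothetical dissimilarity) can be embedded with multidimensional scaling to map and group the sensors. The chapter's actual metrics (mutual information, entropy) and MDS settings may differ.

```python
# Sketch: multidimensional scaling of a pairwise dissimilarity matrix
# between sensor signals; the choice of dissimilarity here is illustrative.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
signals = rng.standard_normal((18, 4096))   # hypothetical sensor signals

corr = np.abs(np.corrcoef(signals))
dissimilarity = 1.0 - corr                  # hypothetical distance between sensors

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)   # 2-D map where similar sensors lie close together
print(coords.shape)                         # (18, 2)
```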
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. To test the impacts, a flexible beam is clamped to the end-effector of a manipulator that is programmed so that the rod moves against a rigid surface. Eighteen signals are captured and their correlations are calculated. A sensor classification scheme based on the multidimensional scaling technique is presented.
Abstract:
STRIPPING is a software application developed for the automatic design of a randomly packed column in which the transfer of volatile organic compounds (VOCs) from water to air can be performed, and for simulating its behaviour at steady state. This software removes any need for experimental work in the selection of the column diameter, and allows an a priori choice of the most convenient hydraulic regime for this type of operation. It also allows the operator to choose the model used for the calculation of some parameters, namely between the Eckert/Robbins and Billet models for estimating the pressure drop of the gaseous phase, and between the Billet and Onda/Djebbar models for the mass transfer. Illustrations of the graphical interface are presented.
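For context, the sizing of such an air-stripping column is commonly based on the transfer-unit concept; this is general background and not necessarily the formulation implemented in STRIPPING. The packed height Z follows from

\[
Z = \mathrm{HTU}\times \mathrm{NTU}, \qquad
\mathrm{NTU} = \frac{S}{S-1}\,\ln\!\left[\frac{(C_{in}/C_{out})(S-1)+1}{S}\right], \qquad
S = \frac{H\,Q_G}{Q_L},
\]

where S is the stripping factor, H the dimensionless Henry's law constant and Q_G, Q_L the gas and liquid flow rates.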
Abstract:
In the present paper we assess the performance of information-theoretic inspired risk functionals in multilayer perceptrons with reference to the two most popular ones, Mean Square Error and Cross-Entropy. The recently proposed information-theoretic inspired risks are: HS and HR2, respectively the Shannon and quadratic Rényi entropies of the error; ZED, a risk reflecting the error density at zero error; and EXP, a generalized exponential risk able to mimic a wide variety of risk functionals, including the information-theoretic ones. The experiments were carried out with multilayer perceptrons on 35 public real-world datasets, all performed according to the same protocol. The statistical tests applied to the experimental results showed that the ubiquitous mean square error was the least interesting risk functional to be used by multilayer perceptrons; namely, mean square error never achieved a significantly better classification performance than the competing risks. Cross-Entropy and EXP were the risks found by several tests to be significantly better than their competitors. Counts of significantly better and worse risks also showed the usefulness of HS and HR2 for some datasets.
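For orientation, common forms of the reference risks and of the error-entropy risks mentioned above are shown below, where e_i = t_i - y_i are the output errors, t and y targets and outputs, and \(\hat f\) a (e.g. Parzen-window) estimate of the error density; the exact definitions and estimators used in the paper may differ.

\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^{2}, \qquad
\mathrm{CE} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{c} t_{ic}\,\ln y_{ic},
\]
\[
H_S = -\int \hat f(e)\,\ln \hat f(e)\,\mathrm{d}e, \qquad
H_{R2} = -\ln\!\int \hat f(e)^{2}\,\mathrm{d}e .
\]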
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. The Fourier transforms of eighteen different signals are calculated and approximated by trendlines based on a power-law formula. A sensor classification scheme based on the frequency spectrum behavior is presented.
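A minimal sketch of the spectrum/trendline idea, under illustrative assumptions (synthetic data, assumed sampling rate): the magnitude spectrum is approximated by a power law |F(f)| ≈ c·f^m fitted by linear regression in log-log coordinates. The paper's actual signals and fitting procedure may differ.

```python
# Sketch: fit a power-law trendline c * f**m to a signal's magnitude spectrum
# via least squares in log-log coordinates. Data and names are illustrative.
import numpy as np

fs = 1000.0                                    # assumed sampling frequency [Hz]
rng = np.random.default_rng(3)
signal = np.cumsum(rng.standard_normal(4096))  # hypothetical signal with a decaying spectrum

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0/fs)

mask = freqs > 0                               # exclude DC for the log-log fit
m, log_c = np.polyfit(np.log10(freqs[mask]), np.log10(spectrum[mask]), 1)
print(f"power-law trendline: |F(f)| ~ {10**log_c:.3g} * f^{m:.2f}")
```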