16 results for product classification
at Instituto Politécnico do Porto, Portugal
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample of databases resulting from a monitoring campaign, Data Mining (DM) techniques are used to discover a set of typical load profiles of MV consumers and, therefore, to extract knowledge regarding electric energy consumption patterns. In the first stage, several hierarchical clustering algorithms were applied and their clustering performance was compared using adequacy measures. In the second stage, a classification model was developed to allow classifying new consumers into one of the clusters obtained in the previous stage. Finally, the interpretation of the discovered knowledge is presented and discussed.
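For illustration, the sketch below follows the two-stage idea described in this abstract: hierarchical clustering of load profiles, then a classifier that assigns new consumers to the discovered clusters. The data, number of clusters and choice of classifier are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's exact pipeline): hierarchical clustering of
# daily load profiles followed by a classifier that assigns new consumers to clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
profiles = rng.random((200, 96))                  # hypothetical: 200 consumers x 96 quarter-hour readings
profiles /= profiles.sum(axis=1, keepdims=True)   # normalize to per-unit daily shape

# Stage 1: hierarchical clustering (Ward linkage is just one of several possible algorithms)
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")   # assume 5 typical load profiles

# Stage 2: classification model that places new consumers into the discovered clusters
clf = DecisionTreeClassifier(max_depth=4).fit(profiles, labels)
new_consumer = rng.random((1, 96))
new_consumer /= new_consumer.sum()
print("assigned load-profile cluster:", clf.predict(new_consumer)[0])
```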
Abstract:
The growing importance and influence of new resources connected to power systems have caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation due to both new operational constraints and the amount of data to be processed. Virtual Power Players (VPP) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when they act as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained from a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
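As a rough sketch of the data-mining step described above, one common choice is a regression tree trained on the scenario database to estimate the DR capacity the CSP could assure. The scenario features, their names and the synthetic target below are illustrative assumptions only.

```python
# Minimal sketch under stated assumptions: a regression tree maps hypothetical scenario
# features to an assured DR capacity; the 27,000 rows mirror the case-study size only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 27_000
X = np.column_stack([
    rng.uniform(0, 1, n),    # hypothetical: DG availability (p.u.)
    rng.uniform(0, 1, n),    # hypothetical: aggregated load level (p.u.)
    rng.integers(0, 24, n),  # hypothetical: hour of day
])
y = 50 * X[:, 1] - 20 * X[:, 0] + rng.normal(0, 2, n)   # synthetic DR capacity (kW)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeRegressor(max_depth=6).fit(X_tr, y_tr)
print("R^2 on held-out scenarios:", round(model.score(X_te, y_te), 3))
```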
Abstract:
Purpose: To describe and compare the content of instruments that assess environmental factors using the International Classification of Functioning, Disability and Health (ICF). Methods: A systematic search of the PubMed, CINAHL and PEDro databases was conducted using a pre-determined search strategy. The identified instruments were screened independently by two investigators, and meaningful concepts were linked to the most precise ICF category according to published linking rules. Results: Six instruments were included, containing 526 meaningful concepts. Instruments had between 20% and 98% of items linked to categories in Chapter 1. The highest percentage of items from one instrument linked to categories in Chapters 2–5 varied between 9% and 50%. The presence or absence of environmental factors in a specific context is assessed by three instruments, while the other three assess the intensity of the impact of environmental factors. Discussion: Instruments differ in their content and type of assessment, and several of their items are linked to the same ICF category. Most instruments primarily assess products and technology (Chapter 1), highlighting the need to deepen the discussion on the theory that supports the measurement of environmental factors. This discussion should be thorough and lead to the development of methodologies and new tools that capture the underlying concepts of the ICF.
Abstract:
Optimization problems arise in science, engineering, economics, etc., and the best solution must be found for each case. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are unknown or very difficult to compute, suitable methods are rarer. Such functions are frequently called black-box functions. To solve these problems without constraints (unconstrained optimization), we can use direct search methods, which do not require derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are also black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints, and this sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow solving optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. Thus, penalty methods can be used as a first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
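The sketch below illustrates the general penalty idea outlined in this abstract, not the chapter's specific method: a constrained black-box problem is replaced by a sequence of unconstrained problems solved with a derivative-free direct search, while the penalty parameter is increased between iterations. The test problem and update rule are assumptions made for the example.

```python
# Quadratic penalty method with a derivative-free inner solver (Nelder-Mead).
# The objective, constraint and penalty update below are illustrative only.
import numpy as np
from scipy.optimize import minimize

def objective(x):                       # black-box objective (assumed evaluable only)
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def constraint(x):                      # equality constraint h(x) = 0: x0 + x1 = 1
    return x[0] + x[1] - 1.0

def penalized(x, mu):                   # unconstrained subproblem: f(x) + mu * h(x)^2
    return objective(x) + mu * constraint(x) ** 2

x, mu = np.array([0.0, 0.0]), 1.0
for _ in range(8):
    res = minimize(penalized, x, args=(mu,), method="Nelder-Mead")
    x = res.x
    mu *= 10.0                          # simple dynamic update of the penalty parameter
print("approximate solution:", x, "constraint violation:", abs(constraint(x)))
```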
Abstract:
In practice, robotic manipulators exhibit some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be exploited to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in order to obtain suitable signals of the vibration phenomena. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented. Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications. Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic, and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logical sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. Recent developments in micro-electro-mechanical sensors (MEMS) with wireless communication capabilities enable sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.
This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, the paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, Section 4 draws the main conclusions and points out future work.
Abstract:
This chapter analyzes the signals captured during impacts and vibrations of a mechanical manipulator. Eighteen signals are captured and several metrics are calculated between them, such as correlation, mutual information and entropy. A sensor classification scheme based on the multidimensional scaling technique is presented.
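A minimal sketch of this kind of scheme is given below: pairwise metrics between the captured signals are turned into a dissimilarity matrix and embedded with multidimensional scaling so that similar sensors land close together. The signals are synthetic and only correlation is used here; the chapter's other metrics (mutual information, entropy) would be computed analogously.

```python
# Sensor grouping via MDS on a correlation-based dissimilarity matrix (illustrative data).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
signals = rng.standard_normal((18, 1000))          # hypothetical: 18 sensor signals, 1000 samples each

corr = np.corrcoef(signals)                        # one of the metrics mentioned (correlation)
dissimilarity = 1.0 - np.abs(corr)                 # simple correlation-based distance

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dissimilarity)    # 2-D map used to group/classify the sensors
print(coords[:3])
```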
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. To test the impacts, a flexible beam is clamped to the end-effector of a manipulator that is programmed so that the rod moves against a rigid surface. Eighteen signals are captured and their correlations are calculated. A sensor classification scheme based on the multidimensional scaling technique is presented.
Abstract:
In the present paper we assess the performance of information-theoretically inspired risk functionals in multilayer perceptrons with reference to the two most popular ones, Mean Square Error and Cross-Entropy. The recently proposed information-theoretic risks are: HS and HR2, respectively the Shannon and quadratic Rényi entropies of the error; ZED, a risk reflecting the error density at zero error; and EXP, a generalized exponential risk able to mimic a wide variety of risk functionals, including the information-theoretic ones. The experiments were carried out with multilayer perceptrons on 35 public real-world datasets. All experiments were performed according to the same protocol. The statistical tests applied to the experimental results showed that the ubiquitous mean square error was the least interesting risk functional to be used by multilayer perceptrons. Namely, mean square error never achieved a significantly better classification performance than competing risks. Cross-entropy and EXP were the risks found by several tests to be significantly better than their competitors. Counts of significantly better and worse risks have also shown the usefulness of HS and HR2 for some datasets.
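As a hedged illustration of one of the risks named above, the sketch below estimates the quadratic Rényi entropy of the error (HR2) with a Gaussian Parzen window, the usual route in error-entropy-based training; the kernel width and the toy error sample are assumptions, not the paper's settings.

```python
# Empirical HR2 estimate: H_R2 = -log V_2, with V_2 the information potential of the errors.
import numpy as np

def renyi2_entropy_of_error(errors, sigma=0.1):
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]                       # all pairwise error differences
    kernel = np.exp(-diff**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
    information_potential = kernel.mean()                # V_2 = (1/N^2) * sum_ij G(e_i - e_j; 2*sigma^2)
    return -np.log(information_potential)

errors = np.random.default_rng(3).normal(0.0, 0.2, size=256)   # toy network errors
print("HR2 estimate:", round(renyi2_entropy_of_error(errors), 4))
```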
Abstract:
Coffee silverskin is a major roasting by-product that could be valued as a source of antioxidant compounds. The effect of the major variables (solvent polarity, temperature and extraction time) affecting the extraction yield of bioactive compounds and the antioxidant activity of silverskin extracts was evaluated. The composition of the extracts varied significantly with the extraction conditions used. A factorial experimental design showed that the use of a hydroalcoholic solvent (50%:50%) at 40 °C for 60 min is a sustainable option to maximize the extraction yield of bioactive compounds and the antioxidant capacity of the extracts. Using this set of conditions it was possible to obtain extracts containing total phenolics (302.5 ± 7.1 mg GAE/L), tannins (0.43 ± 0.06 mg TAE/L), and flavonoids (83.0 ± 1.4 mg ECE/L), exhibiting DPPH radical scavenging activity (326.0 ± 5.7 mg TE/L) and ferric reducing antioxidant power (1791.9 ± 126.3 mg SFE/L). Compared with conditions that were "more effective" for some individual parameters, these conditions allowed a reduction in cost, time and energy.
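For readers unfamiliar with factorial designs, the sketch below shows the general mechanics with a 2^3 full factorial over three coded factors and main effects estimated by contrasting the high and low levels. The factor levels and responses are synthetic assumptions, not the paper's data.

```python
# Generic 2^3 full factorial design with main-effect estimation (synthetic responses).
import itertools
import numpy as np

levels = np.array(list(itertools.product([-1, 1], repeat=3)))   # coded design matrix (8 runs)
factors = ["ethanol fraction", "temperature", "time"]
rng = np.random.default_rng(7)
yield_tp = 250 + 30 * levels[:, 0] + 15 * levels[:, 1] + 5 * levels[:, 2] + rng.normal(0, 3, 8)

for j, name in enumerate(factors):
    effect = yield_tp[levels[:, j] == 1].mean() - yield_tp[levels[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.1f} mg GAE/L")
```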
Abstract:
BACKGROUND: Some studies have reported an inverse association between dairy product (DP) consumption and weight or fat mass loss. OBJECTIVES: The objective of our study was to assess the association between DP intake and abdominal obesity (AO) among Azorean adolescents. SUBJECTS/METHODS: This study was a cross-sectional analysis. A total of 903 adolescents (370 boys) aged 15–16 years was evaluated. Anthropometric measurements were collected (weight, height and waist circumference (WC)) and McCarthy's cut-points were used to categorize WC. AO was defined as WC ≥90th percentile. Adolescent food intake was assessed using a self-administered semiquantitative food frequency questionnaire, and DP intake was categorized as <2 and ≥2 servings/day. Data were analyzed separately for girls and boys, and logistic regression was used to estimate the association between DPs and AO, adjusting for potential confounders. RESULTS: The prevalence of AO was 54.9% (boys: 32.1% and girls: 70.7%, P<0.001). For boys and girls, DP consumption was 2.3±1.9 and 2.1±1.6 servings/day (P=0.185), respectively. In both genders, the proportion of adolescents with WC <90th percentile was higher among individuals who reported a dairy intake of ≥2 servings/day compared with those with an intake <2 servings/day (boys: 71% vs 65% and girls: 36% vs 24%, P<0.05). After adjustment for confounders, two or more DP servings per day were a negative predictor of AO (odds ratio, 0.217; 95% confidence interval, 0.075–0.633) only in boys. CONCLUSION: We found a protective association between DP intake and AO only in boys.
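A minimal sketch of the adjusted logistic-regression step is shown below on synthetic data: the odds ratio and 95% CI for the dairy-intake indicator are recovered from the fitted coefficients. Variable names, confounders and sample values are illustrative assumptions, not the study's data.

```python
# Adjusted odds ratio for >=2 dairy servings/day from a logistic regression (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 370                                            # e.g., the size of the boys' subsample
dairy_ge2 = rng.integers(0, 2, n)                  # 1 if >=2 dairy servings/day
age = rng.integers(15, 17, n)                      # hypothetical confounder
activity = rng.random(n)                           # hypothetical confounder
logit = -0.5 - 1.5 * dairy_ge2 + 0.1 * (age - 15) - 0.3 * activity
ao = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # abdominal obesity indicator

X = sm.add_constant(np.column_stack([dairy_ge2, age, activity]))
fit = sm.Logit(ao, X).fit(disp=False)
or_dairy = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"adjusted OR for >=2 servings/day: {or_dairy:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```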
Abstract:
Sustainable development is one of the great challenges of our time, with numerous consequences in several areas of our society. It is a broad issue, essential for the survival of our way of life as we know it today. Sustainable construction plays a very important role in development, not only at the economic level but also at the social and cultural levels. Although it does not account for embodied energy, life cycle assessment (LCA) is one of the most common methods in the construction sector for evaluating the level of sustainability. This work looks at metals as one of the most promising current responses of the construction sector to the growing concerns regarding sustainable development. Iron and its derivatives are normally the basis of metallic construction, and their potential for reuse and recycling is one of their main sustainability factors. Metal structures have specific characteristics that fit the requirements of sustainable construction and make this type of construction extremely versatile and interesting. In this work, metallic construction is addressed in three parts. The first part consists of a historical introduction to iron and its derivatives, with examples of constructions up to the present day, and of a classification of the various types of metals and metallic alloys. The second part addresses the concept of sustainability and its framing within the construction sector, and introduces the life cycle assessment methodology. The third part addresses a practical example of a metal structure for which three solutions are developed and compared. The diversity of the compared elements stems from the type of steel, the origin of the energy used in its manufacture and the type of technical solution adopted. The objective of this work is to understand the repercussions of the concept of sustainability in the construction sector, and to develop a simplified method for assessing the environmental and economic impacts of metal solutions.
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. The Fourier transforms of eighteen different signals are calculated and approximated by trendlines based on a power-law formula. A sensor classification scheme based on the frequency-spectrum behavior is presented.
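A small sketch of the approach named in this abstract is given below: the magnitude spectrum of a captured signal is approximated by a power-law trendline |F(w)| ≈ c·w^m, with the exponent obtained from a least-squares fit in log-log coordinates. The signal and sampling frequency are synthetic stand-ins.

```python
# Power-law trendline fitted to an FFT magnitude spectrum (synthetic signal).
import numpy as np

fs = 1000.0                                        # assumed sampling frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)
signal = np.random.default_rng(5).standard_normal(t.size)   # stand-in for an accelerometer trace

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
mask = freqs > 0                                   # exclude DC before taking logarithms

m, log_c = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), deg=1)
print(f"power-law trendline: |F(w)| ≈ {np.exp(log_c):.3f} * w^{m:.3f}")
```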
Abstract:
Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2013
Abstract:
The purpose of this work was to develop a reliable alternative method for the determination of the dithiocarbamate pesticide mancozeb (MCZ) in formulations. Furthermore, a method for the analysis of MCZ's major degradation product, ethylenethiourea (ETU), was also proposed. Cyclic voltammetry was used to characterize the electrochemical behavior of MCZ and ETU, and square-wave adsorptive stripping voltammetry (SWAdSV) was employed for MCZ quantification in commercial formulations. It was found that both MCZ and ETU are irreversibly reduced (−0.6 V and −0.5 V vs Ag/AgCl, respectively) at the surface of a glassy carbon electrode in a mainly diffusion-controlled process, presenting maximum peak current intensities at pH 7.0 (in phosphate buffered saline electrolyte). Several parameters of the SWAdSV technique were optimized, and linear relationships between concentration and peak current intensity were established in the ranges 10–90 μmol L⁻¹ and 10–110 μmol L⁻¹ for MCZ and ETU, respectively. The limits of detection were 7.0 μmol L⁻¹ for MCZ and 7.8 μmol L⁻¹ for ETU. The optimized method for MCZ was successfully applied to the quantification of this pesticide in two commercial formulations. The developed procedures provided accurate and precise results and could be interesting alternatives to the established methods for quality control of the studied products, as well as for analysis of MCZ and ETU in environmental samples.
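To illustrate the calibration step behind the reported linear ranges and detection limits, the sketch below fits peak current versus concentration and estimates a detection limit as 3×(residual standard deviation)/slope, one common convention; the calibration points are synthetic and the paper's exact criterion is not restated here.

```python
# Linear calibration of peak current vs. concentration with an LOD estimate (synthetic data).
import numpy as np

conc = np.array([10, 30, 50, 70, 90], dtype=float)                 # µmol/L, within the linear range
current = 0.12 * conc + np.random.default_rng(6).normal(0, 0.05, conc.size)  # µA, synthetic readings

slope, intercept = np.polyfit(conc, current, deg=1)
residual_sd = np.std(current - (slope * conc + intercept), ddof=2)
lod = 3 * residual_sd / slope                                       # 3*s/slope convention
print(f"slope = {slope:.3f} µA·L/µmol, LOD ≈ {lod:.1f} µmol/L")
```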