948 results for COMBINING CLASSIFIERS
Abstract:
The first and second authors would like to thank the support of the PhD grants with references SFRH/BD/28817/2006 and SFRH/PROTEC/49517/2009, respectively, from Fundação para a Ciência e Tecnologia (FCT). This work was partially done in the scope of the project “Methodologies to Analyze Organs from Complex Medical Images – Applications to Female Pelvic Cavity”, with reference PTDC/EEA-CRO/103320/2008, financially supported by FCT.
Abstract:
In the last decade, local image features have been widely used in robot visual localization. To assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image to those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. A deeper insight into the potential of the sum and product combiners is provided by testing two extensions of these algebraic rules: threshold and weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance. The voting method, whilst competitive with the algebraic rules in their standard form, is shown to be outperformed by both of their modified versions.
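As a concrete illustration of the combiners compared above, the sketch below applies the sum and product rules and their weighted and threshold extensions to a small matrix of per-classifier place scores. The score values, weights, and threshold are invented for illustration and are not taken from the paper.

```python
import numpy as np

# scores[i, j]: posterior-like score assigned by classifier i to place j
# (illustrative values, not the paper's data)
scores = np.array([[0.6, 0.3, 0.1],
                   [0.5, 0.4, 0.1],
                   [0.2, 0.7, 0.1]])

def sum_rule(s):
    return s.sum(axis=0)

def product_rule(s):
    return s.prod(axis=0)

def weighted_sum(s, w):
    # w[i]: confidence weight of classifier i (e.g., from validation data)
    return (w[:, None] * s).sum(axis=0)

def threshold_sum(s, t):
    # Suppress low-confidence scores before summing
    return np.where(s >= t, s, 0.0).sum(axis=0)

w = np.array([0.5, 0.3, 0.2])  # assumed classifier weights
for combined in (sum_rule(scores), product_rule(scores),
                 weighted_sum(scores, w), threshold_sum(scores, t=0.3)):
    print(combined.argmax(), combined)  # predicted place is the argmax
```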
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), Vienna, Austria, Nov 10-14, 2013
Abstract:
The detection and tracking of people has a wide variety of applications in computer vision. Although it has been the subject of years of research, it remains an open topic, and obtaining an approach that combines flexibility and accuracy is still a major challenge today. The work presented in this dissertation develops a case study on the automatic detection and tracking of human faces in a meeting-room environment, implemented as a flexible, low-cost system. The proposed system is based on the GNU's Not Unix (GNU) Linux operating system and is divided into four stages: video acquisition, face detection, tracking, and camera repositioning. Acquisition consists of capturing video frames from the three Sony SNC-RZ25P Internet Protocol (IP) cameras installed in the room, over an existing Local Area Network (LAN). This stage supplies the video frames to the detection and tracking stages for processing. Detection uses the object identification algorithm proposed by Viola and Jones, which, by relying on an object's main features, can detect any type of object (in this case human faces) in a generic way and in real time. When a face is successfully identified, the detection stage outputs the coordinates of the face's position within the video frame. The tracking algorithm then uses the coordinates of the detected face to follow it through the subsequent video frames. The tracking stage implements the Continuously Adaptive Mean-SHIFT (Camshift) algorithm, which works by iteratively searching a probability density map for its maximum value. The algorithm returns the coordinates of the face's position and orientation. These coordinates are used to steer the camera so that the face remains as close as possible to the centre of its field of view. The results obtained show that the proposed tracking system is able to recognize and follow moving faces across sequences of video frames, demonstrating its suitability for real-time monitoring applications.
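A minimal sketch of the detection-plus-tracking loop described above, assuming OpenCV's stock Haar cascade for Viola-Jones face detection and its built-in CamShift implementation; the capture source stands in for the IP camera streams, and the camera re-aiming step is only indicated in a comment.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # an IP camera stream URL would be used instead

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = None
roi_hist = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if track_window is None:
        # Viola-Jones detection: returns (x, y, w, h) boxes of candidate faces
        faces = cascade.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if len(faces):
            x, y, w, h = faces[0]
            track_window = (x, y, w, h)
            # Hue histogram of the face region, used for back-projection
            hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
            roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # CamShift: climb the back-projected probability map to its maximum
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        rot_box, track_window = cv2.CamShift(backproj, track_window, term_crit)
        # rot_box carries position and orientation; in the full system these
        # coordinates would drive the pan/tilt commands that re-aim the camera
cap.release()
```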
Abstract:
In this work we employed a hybrid method, combining RF-magnetron sputtering with evaporation, for the deposition of tailor-made metallic precursors with a varying number of Zn/Sn/Cu (ZTC) periods, and compared two approaches to sulphurization. Two series of samples with 1×, 2× and 4× ZTC periods have been prepared. One series of precursors was sulphurized in a tubular furnace directly exposed to a sulphur vapour and N2+5% H2 flux at a pressure of 5.0×10⁴ Pa. A second series of identical precursors was sulphurized in the same furnace but inside a graphite box, where sulphur pellets were evaporated, again in the presence of N2+5% H2 and at the same pressure as in the sulphur flux experiments. The morphological and chemical analyses revealed a small grain structure but good average composition for all three films sulphurized in the graphite box. As for the three films sulphurized in sulphur flux, grain growth was seen with the increase of the number of ZTC periods whilst, in terms of composition, they were slightly Zn poor. The films' crystal structure showed that Cu2ZnSnS4 is the dominant phase; however, in the case of the sulphur flux films, SnS2 was also detected. Photoluminescence spectroscopy studies showed an asymmetric broad band emission which occurs in the range of 1–1.5 eV. Clearly the radiative recombination efficiency is higher in the series of samples sulphurized in sulphur flux. We have found that sulphurization in sulphur flux leads to better film morphology than when the process is carried out in a graphite box under similar thermodynamic conditions. Solar cells have been prepared and characterized, showing a correlation between improved film morphology and cell performance. The best cells achieved an efficiency of 2.4%.
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. Namely, in section 1.1 we overview the methodology of a Data Mining project and its main algorithms. In section 1.2 an introduction to proteins and their supporting file formats is outlined. The chapter concludes with section 1.3, which defines the main problem we intend to address with this work: determining, in a discrete (i.e., not continuous) way, whether an amino acid is exposed or buried in a protein, for five exposure thresholds: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supported this work is presented. Namely, the process of loading data from the Protein Data Bank, DSSP and SCOP is described. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analyzed with statistical significance tests. First, the classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at hand. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility. We then overview the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A has the schema of the database that supported this thesis. Appendix B has a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis, which allows the present work to be reproduced.
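A minimal sketch of the prediction task at the core of this thesis: classifying a residue as buried or exposed at one accessibility threshold from a window of neighbouring amino acids. The one-hot window encoding and scikit-learn's decision tree (standing in for C5.0, which has no standard Python implementation) are illustrative assumptions, and the data below are random stand-ins for features and labels that would come from PDB/DSSP.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
AMINO_ACIDS = 20
WINDOW = 5                       # residues on each side of the target

# Toy stand-in data: windows of residue identities, one-hot encoded,
# with binary labels (1 = exposed above the threshold, 0 = buried)
n = 1000
X = rng.integers(0, AMINO_ACIDS, size=(n, 2 * WINDOW + 1))
X_onehot = np.eye(AMINO_ACIDS)[X].reshape(n, -1)
y = rng.integers(0, 2, size=n)

clf = DecisionTreeClassifier(max_depth=10)
# With random labels this hovers at chance; real DSSP-derived data would not
print(cross_val_score(clf, X_onehot, y, cv=5).mean())
```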
Abstract:
Wireless Sensor Networks (WSN) are being used for a number of applications involving infrastructure monitoring, building energy monitoring and industrial sensing. The difficulty of programming individual sensor nodes and the associated overhead have encouraged researchers to design macro-programming systems which can help program the network as a whole or as a combination of subnets. Most of the current macro-programming schemes do not support multiple users seamlessly deploying diverse applications on the same shared sensor network. As WSNs are becoming more common, it is important to provide such support, since it enables higher-level optimizations such as code reuse, energy savings, and traffic reduction. In this paper, we propose a macro-programming framework called Nano-CF, which, in addition to supporting in-network programming, allows multiple applications written by different programmers to be executed simultaneously on a sensor networking infrastructure. This framework enables the use of a common sensing infrastructure for a number of applications without users having to worry about the applications already deployed on the network. The framework also supports timing constraints and resource reservations using the Nano-RK operating system. Nano-CF is efficient at improving WSN performance by (a) combining multiple user programs, (b) aggregating packets for data delivery, and (c) satisfying timing and energy specifications using Rate-Harmonized Scheduling. Using representative applications, we demonstrate that Nano-CF achieves 90% reduction in Source Lines-of-Code (SLoC) and 50% energy savings from aggregated data delivery.
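Below is an illustrative reconstruction of the period-harmonization idea behind Rate-Harmonized Scheduling, which Nano-CF relies on for its energy savings: requested task periods are rounded down to power-of-two multiples of a base period so that wakeup instants coincide and can be serviced in batches. This is an assumption-laden sketch of the general technique, not Nano-CF's actual code.

```python
import math

def harmonized_periods(periods_ms, base_ms):
    # Round each period down to the nearest power-of-two multiple of the
    # base period; base_ms must not exceed the shortest task period
    assert base_ms <= min(periods_ms)
    return [base_ms * 2 ** int(math.log2(p / base_ms)) for p in periods_ms]

tasks = [100, 250, 900, 1000]          # requested periods (ms), assumed values
print(harmonized_periods(tasks, 100))  # -> [100, 200, 800, 800]
# Harmonic periods mean slower tasks always wake together with faster ones,
# so the node's CPU/radio can service them in one batch and sleep longer.
```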
Abstract:
Stock market indices (SMIs) are important measures of financial and economic performance. Considerable research efforts during the last years have demonstrated that these signals have a chaotic nature and require sophisticated mathematical tools for analyzing their characteristics. Classical methods, such as the Fourier transform, reveal considerable limitations in discriminating different periods of time. This paper studies the dynamics of SMIs by combining the wavelet transform and multidimensional scaling (MDS). Six continuous wavelets are tested for analyzing the information content of the stock signals. In a first phase, the real Shannon wavelet is adopted for evaluating the SMI dynamics, and the comparison between indices is visualized by means of MDS. In a second phase, the other wavelets are also tested, and the corresponding MDS plots are analyzed.
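A minimal sketch of this wavelet-plus-MDS pipeline, assuming PyWavelets for the continuous wavelet transform and scikit-learn for MDS; synthetic random walks stand in for the index series, and the Morlet wavelet substitutes for the real Shannon wavelet used in the paper.

```python
import numpy as np
import pywt
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
signals = [np.cumsum(rng.standard_normal(512)) for _ in range(6)]  # toy SMIs
scales = np.arange(1, 64)

def signature(x):
    # Energy-per-scale signature from the continuous wavelet transform
    coeffs, _ = pywt.cwt(x, scales, "morl")
    return (np.abs(coeffs) ** 2).mean(axis=1)

sigs = np.array([signature(s) for s in signals])

# Pairwise distances between signatures, embedded in 2-D by metric MDS
dist = np.linalg.norm(sigs[:, None, :] - sigs[None, :, :], axis=-1)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords)  # each row: one index's position in the MDS plot
```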
Abstract:
Interactive products are appealing objects in a technology-driven society and the offer in the market is wide and varied. Most of the existing interactive products only provide either light or sound experiences. Therefore, the goal of this project was to develop a product aimed at children that combines both features. The project was developed by a team of four third-year students with different engineering backgrounds and nationalities during the European Project Semester at ISEP (EPS@ISEP) in 2012. This paper presents the process that led to the development of an interactive sound table that combines nine identical interaction blocks, a control block and a sound block. Each interaction block works independently and is composed of four light emitting diodes (LED) and one infrared (IR) sensor. The control is performed by an Arduino microcontroller and the sound block includes a music shield and a pair of loudspeakers. A number of tests were carried out to assess whether the controller, IR sensors, LED, music shield and speakers work together properly and whether the ensemble was a viable interactive light and sound device for children.
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original one, has proven to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from that of barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while filter methods treat optimization problems as bi-objective problems that minimize the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, who used the filter method with derivative-free algorithms, the authors have developed works in which other direct search methods were used, combining their potential with the filter method. More recently, a new variant of these methods was presented, in which alternative ways of aggregating the constraints for the construction of filters were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked rather than treated completely independently.
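A minimal sketch of the mechanism at the heart of any filter method: a trial point, summarized by its objective value f and aggregated constraint violation h, is accepted only if no filter entry dominates it. This illustrates the generic dominance test, not the specific variant or safeguard procedure proposed in this paper.

```python
def dominates(a, b):
    # Entry a = (f_a, h_a) dominates b if it is no worse in both objectives
    return a[0] <= b[0] and a[1] <= b[1]

def try_accept(filter_entries, candidate):
    if any(dominates(e, candidate) for e in filter_entries):
        return False  # rejected: some filter entry is at least as good
    # Accept, and drop entries the candidate now dominates
    filter_entries[:] = [e for e in filter_entries
                         if not dominates(candidate, e)]
    filter_entries.append(candidate)
    return True

flt = []                             # filter: list of (f, h) pairs
print(try_accept(flt, (5.0, 2.0)))   # True: filter is empty
print(try_accept(flt, (6.0, 3.0)))   # False: dominated by (5.0, 2.0)
print(try_accept(flt, (4.0, 0.5)))   # True: also evicts (5.0, 2.0)
```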
Abstract:
A construction project is a group of discernible tasks or activities that are conducted in a coordinated effort to accomplish one or more objectives. Construction projects require varying levels of cost, time and other resources. To plan and schedule a construction project, activities must be defined sufficiently. The level of detail determines the number of activities contained within the project plan and schedule. Finding feasible schedules which efficiently use scarce resources is therefore a challenging task within project management. In this context, the well-known Resource Constrained Project Scheduling Problem (RCPSP) has been studied during the last decades. In the RCPSP the activities of a project have to be scheduled such that the makespan of the project is minimized, observing both the technological precedence constraints and the limited availability of the renewable resources required to accomplish the activities. Once started, an activity may not be interrupted. This problem has been extended to a more realistic model, the multi-mode resource-constrained project scheduling problem (MRCPSP), where each activity can be performed in one out of several modes. Each mode of an activity represents an alternative way of combining different levels of resource requirements with a related duration. Each renewable resource, such as manpower and machines, has a limited availability for the entire project. This paper presents a hybrid genetic algorithm for the multi-mode resource-constrained project scheduling problem, in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme. The quality of the schedules is evaluated, and detailed comparative computational results for the MRCPSP are presented, revealing that this approach is a competitive algorithm.
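A minimal sketch of the serial schedule generation scheme that genetic algorithms for the (M)RCPSP commonly use to decode a chromosome (a priority list of activities) into a schedule. For brevity it is single-mode with one renewable resource; in the MRCPSP the chromosome would additionally carry one mode per activity. The activity data are invented for illustration.

```python
def serial_sgs(order, dur, demand, preds, capacity):
    start, usage = {}, {}          # usage[t] = resource units busy at time t
    for a in order:                # activities in chromosome (priority) order
        # Earliest start respecting precedence constraints
        t = max((start[p] + dur[p] for p in preds[a]), default=0)
        # Shift right until the resource profile admits the activity
        while any(usage.get(t + k, 0) + demand[a] > capacity
                  for k in range(dur[a])):
            t += 1
        for k in range(dur[a]):
            usage[t + k] = usage.get(t + k, 0) + demand[a]
        start[a] = t
    return start

dur    = {1: 3, 2: 2, 3: 4, 4: 2}          # durations (assumed)
demand = {1: 2, 2: 3, 3: 2, 4: 1}          # resource usage per activity
preds  = {1: [], 2: [], 3: [1], 4: [1, 2]} # precedence relations
start = serial_sgs([1, 2, 3, 4], dur, demand, preds, capacity=4)
makespan = max(start[a] + dur[a] for a in start)  # GA fitness to minimize
print(start, makespan)
```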
Abstract:
A mathematical model is proposed for the evolution of temperature, chemical composition, and energy release in bubbles, clouds, and emulsion phase during combustion of gaseous premixtures of air and propane in a bubbling fluidized bed. The analysis begins as the bubbles are formed at the orifices of the distributor, until they explode inside the bed or emerge at the free surface of the bed. The model also considers the freeboard region of the fluidized bed until the propane is thoroughly burned. It is essentially built upon the quasi-global mechanism of Hautman et al. (1981) and the mass and heat transfer equations from the two-phase model of Davidson and Harrison (1963). The focus is not on a new modeling approach, but on combining the classical models of the kinetics and other diffusional aspects to obtain a better insight into the events occurring inside a fluidized bed reactor. Experimental data are obtained to validate the model by testing the combustion of commercial propane, in a laboratory-scale fluidized bed, using four sand particle sizes: 400–500, 315–400, 250–315, and 200–250 µm. The mole fractions of CO2, CO, and O2 in the flue gases and the temperature of the fluidized bed are measured and compared with the numerical results.
Abstract:
With advances in computer science and information technology, computing systems are becoming increasingly complex, with a growing number of heterogeneous components. They are thus becoming more difficult to monitor, manage, and maintain; this process is well known to be labor intensive and error prone. In addition, traditional approaches to system management struggle to keep up with rapidly changing environments. There is a need for automatic and efficient approaches to monitor and manage complex computing systems. In this paper, we propose an innovative framework for scheduling system management that combines the Autonomic Computing (AC) paradigm, Multi-Agent Systems (MAS) and Nature Inspired Optimization Techniques (NIT). Additionally, we consider the resolution of realistic problems: the scheduling of a Cutting and Treatment Stainless Steel Sheet Line is evaluated. Results show that the proposed approach has advantages when compared with other scheduling systems.
Abstract:
OBJECTIVE To estimate the required number of public beds for adults in intensive care units in the state of Rio de Janeiro to meet the existing demand, and to compare the results with the recommendations of the Brazilian Ministry of Health. METHODS The study uses a hybrid model combining time series and queuing theory to predict the demand and estimate the number of required beds. Four patient flow scenarios were considered according to bed requests, percentage of abandonments and average length of stay in intensive care unit beds. The results were plotted against Ministry of Health parameters. Data were obtained from the State Regulation Center from 2010 to 2011. RESULTS There were 33,101 medical requests for 268 regulated intensive care unit beds in Rio de Janeiro. With an average length of stay in regulated ICUs of 11.3 days, 595 active beds would be needed to ensure system stability and 628 beds to ensure a maximum waiting time of six hours. Deducting current abandonment rates due to clinical improvement (25.8%), these figures fall to 441 and 417. With an average length of stay of 6.5 days, the number of required beds would be 342 and 366, respectively; deducting abandonment rates, 254 and 275. The Brazilian Ministry of Health establishes a parameter of 118 to 353 beds. Although the number of regulated beds is within the recommended range, an increase of 122.0% in beds is required to guarantee system stability and of 134.0% for a maximum waiting time of six hours. CONCLUSIONS Adequate bed estimation must consider the reasons for limited timely access and patient flow management in a scenario that associates prioritization of requests with the lowest average length of stay.
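A minimal sketch of the kind of M/M/c queuing calculation that underlies such bed estimates: given a request arrival rate and an average length of stay, find the smallest number of beds that keeps the queue stable and the expected wait below a target. The inputs approximate figures quoted in the abstract, but the sketch is far simpler than the paper's hybrid time-series/queuing model (for instance, it bounds the mean wait rather than a maximum wait), so it will not reproduce the paper's numbers.

```python
import math

def erlang_c(c, a):
    # Probability that an arriving request waits, computed via the
    # numerically stable Erlang B recursion B(k) = a*B(k-1) / (k + a*B(k-1))
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b / (1.0 - (a / c) * (1.0 - b))

def beds_needed(lam_per_day, stay_days, max_mean_wait_days):
    a = lam_per_day * stay_days        # offered load = average busy beds
    c = math.floor(a) + 1              # smallest bed count with c > a
    # Mean wait in an M/M/c queue: W_q = P_wait * stay / (c - a)
    while erlang_c(c, a) * stay_days / (c - a) > max_mean_wait_days:
        c += 1
    return c

lam = 33101 / 730                      # requests/day over 2010-2011
print(beds_needed(lam, stay_days=11.3, max_mean_wait_days=0.25))  # ~6 h
```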
Abstract:
Behavioral biometrics is one of the areas with growing interest within the biosignal research community. A recent trend in the field is ECG-based biometrics, where electrocardiographic (ECG) signals are used as input to the biometric system. Previous work has shown this to be a promising trait, with the potential to serve as a good complement to other existing, and already more established, modalities, due to its intrinsic characteristics. In this paper, we propose a system for ECG biometrics centered on signals acquired at the subject's hand. Our work is based on a previously developed custom, non-intrusive sensing apparatus for data acquisition at the hands, and involved the pre-processing of the ECG signals and the evaluation of two classification approaches targeted at real-time or near real-time applications. Preliminary results show that this system leads to competitive results both for authentication and identification, and further validate the potential of ECG signals as a complementary modality in the toolbox of the biometric system designer.
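A minimal sketch of one simple classification approach suited to real-time ECG biometrics: nearest-template matching over averaged heartbeat waveforms. Beat segmentation is assumed already done, and the enrollment data and distance threshold are illustrative stand-ins, not the system evaluated in the paper.

```python
import numpy as np

def enroll(beats):
    # beats: (n_beats, beat_len) array of aligned heartbeats for one subject
    return beats.mean(axis=0)            # average-beat template

def identify(beat, templates):
    # Return the enrolled subject whose template is closest (Euclidean)
    dists = {sid: np.linalg.norm(beat - t) for sid, t in templates.items()}
    best = min(dists, key=dists.get)
    return best, dists[best]

def authenticate(beat, template, threshold):
    # Accept the claimed identity only if the beat is close enough
    return np.linalg.norm(beat - template) <= threshold

rng = np.random.default_rng(2)
templates = {s: enroll(rng.standard_normal((30, 200)) + s)
             for s in range(3)}          # toy enrollment for 3 subjects
probe = rng.standard_normal(200) + 1     # unlabeled probe beat (subject 1)
print(identify(probe, templates))
print(authenticate(probe, templates[1], threshold=20.0))
```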