859 results for Fuzzy c-means algorithm


Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSN) are a special kind of ad-hoc network, usually deployed over a monitoring field in order to detect some physical phenomenon. Due to the low dependability of individual nodes, small radio coverage and large areas to be monitored, nodes are generally organized into small clusters. Moreover, a large number of WSN nodes are usually deployed in the monitoring area to increase WSN dependability. Therefore, good cluster-head positioning is a desirable characteristic in a WSN. In this paper, we propose a hybrid clustering algorithm based on community detection in complex networks and the traditional K-means clustering technique: the QK-Means algorithm. Simulation results show that QK-Means detects communities and sub-communities, decreasing the lost-message rate and increasing WSN coverage. © 2012 IEEE.
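The K-means stage of the hybrid algorithm described above can be sketched as follows. This is a plain K-means pass over hypothetical node coordinates, not the QK-Means code itself; the community-detection step is omitted and all positions are made up for illustration.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means on 2-D points: assign each point to its nearest
    centroid, then move each centroid to its cluster mean, repeating
    until the centroids stop changing."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        new = [
            tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Two well-separated groups of hypothetical sensor-node positions.
nodes = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(nodes, 2)
```

In a WSN setting the resulting centroids would be candidate cluster-head locations; QK-Means additionally constrains the partition using communities detected in the network graph.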

Relevance:

100.00%

Publisher:

Abstract:

This work addresses the application of the Kalman-Bucy filter (FKB), organized as a deconvolution (FKBD), to extract the reflectivity function from seismic data. The process is thus described as non-stationary stochastic, and corresponds to a generalization of Wiener-Kolmogorov theory. The mathematical description of the FKB preserves its relationship to the Wiener-Hopf filter (FWH), its counterpart for stationary stochastic processes. The strategy for attacking the problem is structured in parts: (a) optimization criterion; (b) a priori knowledge; (c) algorithm; and (d) quality. The a priori knowledge includes the convolutional model and establishes statistics for its components (effective source pulse, reflectivity function, geological and local noise). To demonstrate the versatility, applicability and limitations of the method, we designed systematic deconvolution experiments under various levels of additive noise and various effective source pulses. We demonstrate, first, the need for equalizing filters and, second, that the spectral coherence factor is a good numerical measure of the quality of the process. We also justify the present study for application to real data, as exemplified.
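The Kalman-Bucy filter is continuous-time; its discrete analogue is the familiar predict/update Kalman recursion. The toy sketch below tracks a scalar state under a random-walk model from noisy observations; it illustrates only the recursion, not the seismic deconvolution (FKBD) itself, and all numbers are invented.

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar discrete Kalman filter: q is process-noise variance,
    r is measurement-noise variance, (x0, p0) the initial state
    estimate and its variance."""
    x, p = x0, p0
    out = []
    for z in zs:
        p = p + q               # predict: random-walk state model
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

# Noisy observations of a constant true state of 1.0.
est = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.0], q=0.01, r=0.5)
```

In the deconvolution setting the state would be vector-valued (reflectivity and noise components of the convolutional model) and the recursion written in continuous time.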

Relevance:

100.00%

Publisher:

Abstract:

Ground Delay Programs (GDP) are sometimes cancelled before their planned end time, and for this reason aircraft are delayed when it is no longer needed. Recovering this delay usually leads to extra fuel consumption, since aircraft typically depart after having absorbed their assigned delay on the ground and therefore need to cruise at more fuel-consuming speeds. Past research has proposed a speed reduction strategy aimed at splitting the GDP-assigned delay between ground and airborne delay, while using the same fuel as in nominal conditions. Being airborne earlier, an aircraft can speed up to its nominal cruise speed and recover part of the GDP delay without incurring extra fuel consumption if the GDP is cancelled earlier than planned. In this paper, all GDP initiatives that occurred at San Francisco International Airport during 2006 are studied and characterised by a K-means algorithm into three different clusters. The centroids of these three clusters have been used to simulate three different GDPs at the airport, using a realistic set of inbound traffic and the Future Air Traffic Management Concepts Evaluation Tool (FACET). The amount of delay that can be recovered using this cruise speed reduction technique, as a function of the GDP cancellation time, has been computed and compared with the delay recovered under the current concept of operations. Simulations were conducted in calm wind conditions and without considering a radius of exemption. Results indicate that, in the event the GDP cancels early, aircraft that depart early and fly at the slower speed can recover additional delay compared to current operations, where all delay is absorbed prior to take-off. The amount of extra delay recovered varies, being more significant, in relative terms, for GDPs with a relatively low amount of demand exceeding airport capacity.
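The recovery mechanism described above can be illustrated with a toy model: an aircraft flies its cruise at a reduced speed, absorbing part of the GDP delay airborne; if the GDP is cancelled after some fraction of the cruise, the remaining distance is flown at nominal speed and the corresponding share of the airborne delay is recovered. All distances and speeds below are illustrative, not values from the paper, and fuel, wind and climb/descent effects are ignored.

```python
def recoverable_delay(dist_nm, v_nom, v_red, cancel_frac):
    """Return (airborne delay absorbed, delay recoverable) in minutes
    for a cruise of dist_nm nautical miles at reduced ground speed
    v_red instead of nominal v_nom (knots), when the GDP is cancelled
    after cancel_frac of the distance has been flown."""
    t_nom = dist_nm / v_nom            # nominal cruise time (h)
    t_red = dist_nm / v_red            # reduced-speed cruise time (h)
    airborne_delay = t_red - t_nom     # delay absorbed in the air (h)
    # the un-flown share of the cruise can be sped up again
    recovered = (1 - cancel_frac) * airborne_delay
    return airborne_delay * 60, recovered * 60

# 1000 NM cruise, slowed from 450 kt to 420 kt, GDP cancelled halfway.
absorbed, recovered = recoverable_delay(1000, 450, 420, cancel_frac=0.5)
```

The earlier the cancellation (smaller cancel_frac), the larger the recoverable share, which is the qualitative effect the FACET simulations quantify.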

Relevance:

100.00%

Publisher:

Abstract:

With the increase in load demand across sectors, protection and safety of the network are key factors that must be taken into consideration for the electric grid and distribution network. A Phasor Measurement Unit (PMU) is an intelligent electronic device that collects data in the form of real-time synchrophasors with a precise time tag using GPS (Global Positioning System) and transfers the data to grid command for monitoring and assessment. The measurements made by a PMU have to be very precise to protect relays and measuring equipment according to IEEE 60255-118-1 (2018). Since a hardware PMU is very expensive for researching and developing new functionalities, an alternative is needed. Hence, many open-source virtual libraries are available that replicate the exact function of a PMU in a virtual (software) environment, allowing research on multiple objectives to continue while producing very small errors when verified. In this thesis, I carried out performance and compliance verification of a virtual PMU developed in MATLAB using an I-DFT (Interpolated Discrete Fourier Transform) C-class algorithm. A test environment was developed in MATLAB, and the virtual PMU was tested in both steady and dynamic states to verify compliance with the latest standard (IEEE 60255-118-1).
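The interpolated-DFT idea behind such a virtual PMU can be sketched with the classic two-point rectangular-window frequency estimator (Jain's method): locate the DFT peak bin, then refine the frequency using the ratio of the two largest neighbouring bin magnitudes. This is only an illustration of the principle, not the thesis's compliance-tested C-class MATLAB implementation, and the sample rate and signal below are invented.

```python
import cmath
import math

def ipdft_freq(x, fs):
    """Estimate the frequency of a sampled sinusoid by interpolating
    between the DFT peak bin and its larger neighbour."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(X))
    k0 = max(range(1, n // 2 - 1), key=lambda k: mags[k])
    # fractional bin offset from the two largest adjacent bins
    if mags[k0 + 1] >= mags[k0 - 1]:
        delta = mags[k0 + 1] / (mags[k0] + mags[k0 + 1])
    else:
        delta = -mags[k0 - 1] / (mags[k0] + mags[k0 - 1])
    return (k0 + delta) * fs / n

fs, f_true, n = 640.0, 52.0, 64          # off-bin tone: true bin = 5.2
signal = [math.cos(2 * math.pi * f_true * t / fs) for t in range(n)]
f_est = ipdft_freq(signal, fs)
```

A standard-compliant estimator would additionally compute amplitude, phase and ROCOF, use proper windowing, and be exercised against the steady-state and dynamic test signals the standard prescribes.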

Relevance:

100.00%

Publisher:

Abstract:

Large parity-violating longitudinal single-spin asymmetries A_L(e+) = 0.86 (+0.30/−0.14) and A_L(e−) = 0.88 (+0.12/−0.71) are observed for inclusive high-transverse-momentum electrons and positrons in polarized p+p collisions at a center-of-mass energy of √s = 500 GeV with the PHENIX detector at RHIC. These e± come mainly from the decay of W± and Z0 bosons, and their asymmetries directly demonstrate parity violation in the couplings of the W± to the light quarks. The observed electron and positron yields were used to estimate W± boson production cross sections for the e± channels of σ(pp → W+ X) × BR(W+ → e+ ν_e) = 144.1 ± 21.2(stat) +3.4/−10.3(syst) ± 21.6(norm) pb, and σ(pp → W− X) × BR(W− → e− ν̄_e) = 3.17 ± 12.1(stat) +10.1/−8.2(syst) ± 4.8(norm) pb.

Relevance:

100.00%

Publisher:

Abstract:

Audiometer systems provide enormous amounts of detailed TV watching data. Several relevant and interdependent factors may influence TV viewers' behavior. In this work we focus on the time factor and derive temporal patterns of TV watching based on panel data. The clustering base attributes originate from 1440 binary minute-level attributes capturing the TV watching status (watch/not watch). Since there are around 2500 panel viewers, a data reduction procedure is performed first. The K-Means algorithm is used to obtain daily clusters of viewers. Weekly patterns, which rely on the daily patterns, are then derived. The obtained solutions are tested for consistency and stability. Temporal TV watching patterns provide new insights into Portuguese TV viewers' behavior.
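The attribute construction described above can be sketched directly: a viewer's watching intervals become a 1440-element binary minute vector, which a simple data-reduction step collapses to 24 hourly watching fractions before clustering. The interval data and the hourly reduction are illustrative assumptions, not the paper's actual procedure.

```python
def minutes_vector(intervals):
    """Build the 1440 binary minute attributes from watching intervals
    given as (start_minute, end_minute) pairs within a day."""
    v = [0] * 1440
    for a, b in intervals:
        for m in range(a, b):
            v[m] = 1
    return v

def hourly_profile(v):
    """Reduce 1440 minute attributes to 24 hourly watching fractions,
    a simple stand-in for the data reduction step."""
    return [sum(v[h * 60:(h + 1) * 60]) / 60 for h in range(24)]

viewer = minutes_vector([(20 * 60, 22 * 60)])   # watched 20:00-22:00
profile = hourly_profile(viewer)
```

Daily profiles like this one would then be fed to K-Means to obtain the daily clusters from which weekly patterns are derived.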

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Electrotechnical Engineering, Energy branch

Relevance:

100.00%

Publisher:

Abstract:

Master's degree in Informatics Engineering, specialization area in Architectures, Systems and Networks

Relevance:

100.00%

Publisher:

Abstract:

The goal of this dissertation was to study a set of companies listed on the Lisbon stock exchange in order to identify those that behave similarly over time. For this we used clustering algorithms such as K-Means, PAM, hierarchical models, Fanny and C-Means, with both the Euclidean and the Manhattan distance. To select the best number of clusters identified by each of the tested algorithms, we relied on cluster evaluation/validation indices such as Davies-Bouldin and Calinski-Harabasz, among others.
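The Davies-Bouldin index mentioned above scores a partition by comparing within-cluster scatter to between-cluster separation; lower values indicate better-separated, tighter clusters. A minimal 1-D sketch (toy data, not the dissertation's stock series):

```python
def davies_bouldin(clusters):
    """Davies-Bouldin index for a partition of 1-D values: for each
    cluster, take the worst ratio of summed scatters to centroid
    separation against any other cluster, then average."""
    cents = [sum(c) / len(c) for c in clusters]
    scat = [sum(abs(x - m) for x in c) / len(c) for c, m in zip(clusters, cents)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max(
            (scat[i] + scat[j]) / abs(cents[i] - cents[j])
            for j in range(k) if j != i
        )
    return total / k

tight = [[1.0, 1.1, 0.9], [10.0, 10.1, 9.9]]   # well-separated partition
loose = [[1.0, 4.0], [6.0, 10.0]]              # overlapping, diffuse partition
```

In practice the index is computed for each candidate number of clusters and the count with the lowest value is preferred, which is how it helps choose among the K-Means, PAM, Fanny and C-Means solutions.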

Relevance:

100.00%

Publisher:

Abstract:

The conservative treatment of scoliosis with a TLSO is still controversial. In this bachelor thesis, a concept for mobile activity measurement in TLSO care is therefore developed, which could help resolve this controversy. The goal is the measurement of daily activities such as walking, standing and running, with the highest possible recognition accuracy for each activity. The sensor combination is to be mounted in or on the orthosis and worn by the patient during the prescribed wearing time. To generate a solution approach, a requirements list and the description of a target system are drawn up on the basis of Pahl and Beitz, and selected solution combinations are evaluated. A comparison of common activity-measurement methods will show that methods such as pressure, EMG and ECG are not suitable for activity measurement in TLSO care, that temperature sensors and gyroscopes can be used as a supplement, and that accelerometers are best suited for activity measurement. Their performance, however, depends heavily on the algorithms used for evaluation. With fuzzy c and threshold-based algorithms, recognition accuracies of more than 98% could be achieved. These results are based on figures from the relevant literature and must be substantiated by practical tests.
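A threshold-based activity classifier of the kind referenced above can be sketched in a few lines: activity intensity is summarized as the mean absolute acceleration over a window, and two thresholds separate standing, walking and running. The threshold values and samples are illustrative placeholders, not figures from the cited literature.

```python
def classify_activity(accel_window, stand_thr=0.05, walk_thr=0.6):
    """Classify a window of accelerometer samples (magnitudes in g,
    gravity component assumed removed) by mean absolute intensity.
    Thresholds are hypothetical and would need calibration per patient."""
    mean_abs = sum(abs(a) for a in accel_window) / len(accel_window)
    if mean_abs < stand_thr:
        return "standing"
    if mean_abs < walk_thr:
        return "walking"
    return "running"

print(classify_activity([0.3, -0.25, 0.35]))
```

A fuzzy-c variant would replace the hard thresholds with graded memberships in each activity class, which is one way the >98% accuracies reported in the literature are approached.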

Relevance:

100.00%

Publisher:

Abstract:

The use of the Internet now has a specific purpose: to find information. Unfortunately, the amount of data available on the Internet is growing exponentially, creating what can be considered a nearly infinite and ever-evolving network with no discernible structure. This rapid growth has raised the question of how to find the most relevant information. Many different techniques have been introduced to address information overload, including search engines, the Semantic Web, and recommender systems, among others. Recommender systems are computer-based techniques used to reduce information overload and recommend products likely to interest a user, given some information about the user's profile. This technique is mainly used in e-Commerce to suggest items that fit a customer's purchasing tendencies. The use of recommender systems for e-Government is a research topic intended to improve the interaction among public administrations, citizens, and the private sector by reducing information overload on e-Government services. More specifically, e-Democracy aims to increase citizens' participation in democratic processes through the use of information and communication technologies. In this chapter, an architecture of a recommender system that uses fuzzy clustering methods for e-Elections is introduced. In addition, a comparison with the smartvote system, a Web-based Voting Assistance Application (VAA) used to help voters find the party or candidate most in line with their preferences, is presented.
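The core matching step of a VAA such as smartvote can be illustrated with a simple similarity computation: a voter's answer vector is compared with each party's position vector (answers coded -1/0/1 here) and the best-matching party is recommended. The coding, party names and data are all made up for illustration; the chapter's fuzzy-clustering architecture would group voters first rather than match them directly.

```python
import math

def match_scores(user, candidates):
    """Cosine similarity between a voter's answer vector and each
    candidate's position vector; higher means a closer match."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0
    return {name: cos(user, vec) for name, vec in candidates.items()}

voter = [1, -1, 1, 0, 1]
parties = {"A": [1, -1, 1, 1, 1], "B": [-1, 1, -1, 0, -1]}
scores = match_scores(voter, parties)
```

A fuzzy-clustering recommender would instead assign the voter graded memberships in voter clusters and recommend based on what similar clusters preferred.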

Relevance:

100.00%

Publisher:

Abstract:

This project presents a scientific study of synthetic data generation methods within the area of data privacy. These methods make it possible to control the transfer of sensitive data to third parties and the statistical utility of the synthetically generated data. All the basic concepts needed to situate the reader are introduced, and one of the most widely used existing methods (IPSO) is analysed. Next, a new method for synthetic data generation (FCRM) is proposed, based on Fuzzy c-Regression, which allows the trade-off between information loss and disclosure risk to be controlled through a parameter c.
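The Fuzzy c-Regression family that FCRM builds on fits c regression models simultaneously, alternating residual-based fuzzy membership updates with membership-weighted least-squares fits. Below is a minimal 1-D sketch of that alternation (two linear models, deterministic asymmetric initialization to avoid a symmetric stall); the privacy-specific machinery of the thesis, including the disclosure-risk control via the parameter c, is not modeled, and the data are synthetic toy lines.

```python
def fcrm(xs, ys, m=2.0, iters=30):
    """Fuzzy c-regression with c=2 linear models y = a*x + b:
    alternate weighted least-squares fits and membership updates
    driven by each model's squared residuals."""
    n = len(xs)
    c = 2
    # mild asymmetric initial memberships (rows sum to 1)
    u = [[0.6, 0.4] if i < n // 2 else [0.4, 0.6] for i in range(n)]
    models = []
    for _ in range(iters):
        models = []
        for j in range(c):
            # weighted least-squares line fit with weights u^m
            w = [u[i][j] ** m for i in range(n)]
            sw = sum(w)
            swx = sum(wi * x for wi, x in zip(w, xs))
            swy = sum(wi * y for wi, y in zip(w, ys))
            swxx = sum(wi * x * x for wi, x in zip(w, xs))
            swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
            a = (swxy * sw - swx * swy) / (swxx * sw - swx * swx + 1e-12)
            b = (swy - a * swx) / sw
            models.append((a, b))
        for i in range(n):
            # membership update from squared residuals of each model
            e = [max((ys[i] - a * xs[i] - b) ** 2, 1e-12) for a, b in models]
            u[i] = [1.0 / sum((e[j] / ek) ** (1.0 / (m - 1)) for ek in e)
                    for j in range(c)]
    return models

xs = [float(x) for x in range(6)] * 2
ys = [2.0 * x for x in xs[:6]] + [10.0 - x for x in xs[6:]]  # two lines
models = fcrm(xs, ys)
```

For synthetic data generation, each fitted model (plus a noise term) would then be sampled to produce records that preserve the regression structure of the originals.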

Relevance:

100.00%

Publisher:

Abstract:

Automatic diagnostic discrimination is an application of artificial intelligence techniques that can solve clinical cases based on imaging. Diffuse liver diseases are highly prevalent in the population and follow an insidious course, even early in their progression. Early and effective diagnosis is necessary because many of these diseases progress to cirrhosis and liver cancer. The usual technique of choice for an accurate diagnosis is liver biopsy, an invasive procedure that is not free of contraindications. This project proposes an alternative non-invasive method, free of contraindications, based on liver ultrasonography. The images are digitized and then analyzed using statistical and texture-analysis techniques. The results are validated against the pathology report. Finally, we apply artificial intelligence techniques such as Fuzzy k-Means and Support Vector Machines and compare their significance with the statistical analysis and the clinician's report. The results show that this technique is significantly valid and a promising non-invasive alternative for diagnosing chronic liver disease with diffuse involvement. The artificial intelligence classifiers significantly improve diagnostic discrimination compared to the other statistics.
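The fuzzy clustering step named above can be sketched with a 1-D fuzzy c-means pass: each sample gets a graded membership in every cluster, and cluster centres are membership-weighted means. The "texture scores" here are hypothetical scalar features, not values extracted from real ultrasound images.

```python
def fuzzy_cmeans(xs, c=2, m=2.0, iters=40):
    """1-D fuzzy c-means: alternate membership updates (inverse-distance
    ratios with fuzzifier m) and weighted centre updates."""
    lo, hi = min(xs), max(xs)
    # spread initial centres across the data range
    centres = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [max(abs(x - v), 1e-9) for v in centres]
            u.append([1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
                      for j in range(c)])
        centres = [
            sum(u[i][j] ** m * xs[i] for i in range(len(xs)))
            / sum(u[i][j] ** m for i in range(len(xs)))
            for j in range(c)
        ]
    return centres, u

scores = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]   # hypothetical texture scores
centres, memberships = fuzzy_cmeans(scores)
```

In the diagnostic pipeline, the graded memberships (rather than hard labels) are what allow borderline ultrasound textures to be flagged for further review or compared against the SVM output.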