891 results for Fuzzy C-Means clustering
Abstract:
The objective of this work was to study the sources of variation in the prices of Nelore cattle belonging to selection herds and sold at auction, in order to assess the influence of genetic evaluations and conformation judging on those prices. To that end, the sale prices of 426 animals of the breed were recorded at 12 auctions held in several Brazilian locations (Center-West, North, and Southeast regions) between 2002 and 2005. The average price was R$ 3,325.49, with a minimum of R$ 1,400.00 and a maximum of R$ 10,500.00. These data were entered together with other information presented in the auction catalogues, including each animal's sex, the name of the auction, and the expected progeny differences (DEPs) published in the catalogues. In addition to the influence of the catalogue information, the influence of information on the sires of the animals sold was also evaluated, using their DEPs published in a sire summary for the breed and the scores of their progeny in conformation judging. The statistical methods applied were analysis of variance and cluster analysis (K-means method). Animals that were genetically superior for growth traits, considering both direct and maternal effects, fetched higher prices at auction. In contrast, the sires' scores in conformation judging had no significant influence on the average sale prices of their progeny.
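As a rough illustration of the kind of grouping described above (not the thesis's actual analysis), the sketch below clusters hypothetical auction records by sale price and two invented DEP columns with K-means; all column names and values are assumptions.

```python
# Illustrative sketch only: K-means grouping of hypothetical auction records
# by sale price and growth-trait DEPs (column names and values are invented).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_animals = 426
data = np.column_stack([
    rng.normal(3325.49, 900.0, n_animals),   # sale price (R$)
    rng.normal(2.0, 1.0, n_animals),         # hypothetical DEP, direct growth effect
    rng.normal(0.5, 0.4, n_animals),         # hypothetical DEP, maternal effect
])

# Standardize so price (in R$) does not dominate the Euclidean distance.
scaled = StandardScaler().fit_transform(data)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for k in range(3):
    group = data[labels == k]
    print(f"cluster {k}: n={len(group)}, mean price={group[:, 0].mean():.2f}")
```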
Abstract:
Background: Since establishing universal free access to antiretroviral therapy in 1996, the Brazilian Health System has increased the number of centers providing HIV/AIDS outpatient care from 33 to 540. There had been no formal monitoring of the quality of these services until a survey of 336 AIDS health centers across 7 Brazilian states was undertaken in 2002. Managers of the services were asked to assess their clinics according to parameters of service inputs and service delivery processes. This report analyzes the survey results and identifies predictors of the overall quality of service delivery. Methods: The survey involved completion of a multiple-choice questionnaire comprising 107 parameters of service inputs and processes of delivering care, with responses assessed according to their likely impact on service quality using a 3-point scale. K-means clustering was used to group these services according to their scored responses. Logistic regression analysis was performed to identify predictors of high service quality. Results: The questionnaire was completed by 95.8% (322) of the managers of the sites surveyed. Most sites scored about 50% of the benchmark expectation. K-means clustering analysis identified four quality levels within which services could be grouped: 76 services (24%) were classed as level 1 (best), 53 (16%) as level 2 (medium), 113 (35%) as level 3 (poor), and 80 (25%) as level 4 (very poor). Parameters of service delivery processes were more important than those relating to service inputs in determining the quality classification. Predictors of quality services included larger care sites, specialization in HIV/AIDS, and location within large municipalities. Conclusion: The survey demonstrated highly variable levels of HIV/AIDS service quality across the sites. Many sites were found to have deficiencies in their service delivery processes that could benefit from quality improvement initiatives. These findings could have implications for how HIV/AIDS services are planned in Brazil to achieve quality standards, such as where service sites should be located and what their size and staffing requirements should be. A set of service delivery indicators has been identified that could be used for routine monitoring of HIV/AIDS service delivery in Brazil (and potentially in other similar settings).
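A minimal sketch of the two analysis steps named above (K-means grouping of scored questionnaires, then logistic regression on site characteristics); the data, the "best cluster" rule, and the predictor columns are assumptions for illustration, not the survey's variables.

```python
# Hypothetical sketch: group sites by questionnaire scores with K-means,
# then model membership in the "best" cluster with logistic regression.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_sites = 322
scores = rng.integers(0, 3, size=(n_sites, 107)).astype(float)  # 107 items on a 3-point scale

levels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(scores)

# Call the cluster with the highest mean total score the "best" quality level.
totals = scores.sum(axis=1)
best = int(np.argmax([totals[levels == k].mean() for k in range(4)]))
y = (levels == best).astype(int)

# Hypothetical site characteristics standing in for the survey's predictors.
X = np.column_stack([
    rng.normal(500, 200, n_sites),     # site size, e.g. patients in care (invented scale)
    rng.integers(0, 2, n_sites),       # specialized in HIV/AIDS (0/1)
    rng.integers(0, 2, n_sites),       # located in a large municipality (0/1)
])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("log-odds coefficients:", model.coef_.round(3))
```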
Abstract:
Large parity-violating longitudinal single-spin asymmetries $A_L^{e^+} = 0.86^{+0.30}_{-0.14}$ and $A_L^{e^-} = 0.88^{+0.12}_{-0.71}$ are observed for inclusive high-transverse-momentum electrons and positrons in polarized $p+p$ collisions at a center-of-mass energy of $\sqrt{s} = 500$ GeV with the PHENIX detector at RHIC. These $e^{\pm}$ come mainly from the decay of $W^{\pm}$ and $Z^0$ bosons, and their asymmetries directly demonstrate parity violation in the couplings of the $W^{\pm}$ to the light quarks. The observed electron and positron yields were used to estimate $W^{\pm}$ boson production cross sections for the $e^{\pm}$ channels of $\sigma(pp \to W^+X) \times \mathrm{BR}(W^+ \to e^+\nu_e) = 144.1 \pm 21.2\,(\mathrm{stat})\,^{+3.4}_{-10.3}\,(\mathrm{syst}) \pm 21.6\,(\mathrm{norm})$ pb, and $\sigma(pp \to W^-X) \times \mathrm{BR}(W^- \to e^-\bar{\nu}_e) = 3.17 \pm 12.1\,(\mathrm{stat})\,^{+10.1}_{-8.2}\,(\mathrm{syst}) \pm 4.8\,(\mathrm{norm})$ pb.
Abstract:
Technological and scientific advances in healthcare have brought together fields such as Medicine and Mathematics, and it falls to science to adapt the means of investigation, diagnosis, monitoring and therapy more effectively. The methods developed and the studies presented in this dissertation stem from the need to find answers and solutions to the various challenges identified in the field of anesthesia. The nature of these problems necessarily leads to the application, adaptation and combination of different methods and models from several areas of mathematics. Inducing anesthesia in patients safely and reliably gives rise to an enormous variety of situations that must be taken into account and therefore demands intensive study. Prediction methods and models that allow the dose administered to each patient to be better personalized, and the effect induced by each drug to be monitored with more reliable signals, are therefore fundamental to research and progress in this field. In this context, with the aim of clarifying the use of appropriate statistical treatment in anesthesia studies, I explore different statistical analyses to develop a model that predicts the brain's response to two drugs during sedation. Data obtained from volunteers are used to study the pharmacodynamic interaction between two anesthetic drugs. In a first stage, linear regression models are explored to model the effect of the drugs on the BIS brain signal (the bispectral index of the EEG, an indicator of depth of anesthesia), i.e. to estimate the effect that drug concentrations have on EEG depression (as assessed by BIS). In the second stage, the goal is to identify different interactions using cluster analysis and to validate the resulting model with discriminant analysis, identifying homogeneous groups in the sample through clustering techniques. The number of groups in the sample was obtained, in an exploratory phase, with hierarchical clustering techniques, and the groups identified were characterized with k-means clustering. The reproducibility of the clustering models was tested through discriminant analysis. The main conclusions indicate that the significance test of the linear regression equation showed the model to be highly significant. The propofol and remifentanil variables significantly influence BIS, and the model improves when remifentanil is included. This work also shows that it is possible to build a model that groups drug concentrations according to their effect on the BIS signal, supported by clustering and discriminant techniques. The results clearly demonstrate the pharmacodynamic interaction of the two drugs when Cluster 1 and Cluster 3 are compared: for similar propofol concentrations, the effect on BIS is clearly different depending on the magnitude of the remifentanil concentration. In short, the study clearly shows that when remifentanil is administered with propofol (a hypnotic), the effect of the latter is potentiated, driving the BIS signal to very low values.
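The two-stage approach described above (regression of BIS on drug concentrations, then hierarchical and k-means clustering of the concentration-effect observations) can be sketched as follows; the concentrations, units and response surface are invented, not the volunteer data.

```python
# Illustrative sketch (not the dissertation's data): linear regression of BIS on
# propofol and remifentanil concentrations, then hierarchical and k-means
# clustering of the concentration-effect observations.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(2)
n = 200
propofol = rng.uniform(0.5, 6.0, n)        # hypothetical ug/mL
remifentanil = rng.uniform(0.0, 5.0, n)    # hypothetical ng/mL
bis = 95 - 8 * propofol - 4 * remifentanil + rng.normal(0, 5, n)  # invented response

X = np.column_stack([propofol, remifentanil])
reg = LinearRegression().fit(X, bis)
print("R^2:", round(reg.score(X, bis), 3), "coefficients:", reg.coef_.round(2))

# Exploratory hierarchical clustering first, then k-means to characterize the
# groups, mirroring the two-stage clustering approach described above.
features = np.column_stack([propofol, remifentanil, bis])
hier = AgglomerativeClustering(n_clusters=3).fit_predict(features)
print("hierarchical group sizes:", np.bincount(hier))

km = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(features)
for k in range(3):
    sel = features[km == k]
    print(f"cluster {k}: mean propofol={sel[:, 0].mean():.2f}, "
          f"mean remifentanil={sel[:, 1].mean():.2f}, mean BIS={sel[:, 2].mean():.1f}")
```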
Abstract:
In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector's agents play their roles. In most countries, the deregulation process was conducted in stages, beginning with the clients at higher voltage levels and with larger electricity consumption, and later extended to all electricity consumers. The sector's liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses its safety limits, then cheap but distant generation might have to be replaced by more expensive generation closer to the load to reduce the excess power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power is lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a generating plant. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economic values at nodes or in areas that may provide economic indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
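The bus-grouping step can be sketched as below with synthetic hourly LMP profiles (not the CAISO 2009 database): buses whose price profiles are similar end up in the same k-means "zone".

```python
# Sketch with synthetic data: group network buses into price zones by
# clustering their hourly LMP profiles with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_buses, n_hours = 300, 24
base = rng.choice([35.0, 50.0, 80.0], size=n_buses)            # three underlying price zones
lmp = base[:, None] + rng.normal(0, 3.0, (n_buses, n_hours))   # $/MWh profiles per bus

km = KMeans(n_clusters=3, n_init=10, random_state=3).fit(lmp)
for k in range(3):
    members = np.where(km.labels_ == k)[0]
    print(f"zone {k}: {len(members)} buses, mean LMP = {lmp[members].mean():.1f} $/MWh")
```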
Abstract:
Conservative treatment of scoliosis with a TLSO is still controversial. This bachelor's thesis therefore develops a concept for mobile activity measurement in TLSO treatment that could help resolve this controversy. The goal is to measure daily activities such as walking, standing and running with the highest possible recognition accuracy for each activity. The sensor combination is to be mounted in or on the orthosis and worn by the patient during the prescribed wearing time. To generate a solution approach, a requirements list and a description of the target system are drawn up following Pahl and Beitz, and selected solution combinations are evaluated. A comparison of common activity-measurement methods shows that pressure, EMG and ECG are not suitable for activity measurement in TLSO treatment, that temperature sensors and gyroscopes can be used as a complement, and that accelerometers are best suited for activity measurement. Their performance, however, depends heavily on the algorithms used for evaluation. With fuzzy c-means and threshold-based algorithms, recognition accuracies of more than 98% could be achieved. These results are based on figures from the relevant literature and need to be substantiated by practical tests.
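A minimal from-scratch fuzzy c-means pass over hypothetical accelerometer features (mean and standard deviation of acceleration magnitude per window) illustrates the kind of activity grouping named above; the activity classes, feature values and parameters are assumptions, not results from the thesis.

```python
# Hedged sketch: fuzzy c-means on invented accelerometer window features,
# standing in for the activity-recognition step described above.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)              # memberships of each window sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

rng = np.random.default_rng(4)
standing = rng.normal([1.0, 0.02], [0.02, 0.01], (50, 2))   # [mean g, std g] per window
walking  = rng.normal([1.1, 0.25], [0.05, 0.05], (50, 2))
running  = rng.normal([1.4, 0.80], [0.10, 0.10], (50, 2))
X = np.vstack([standing, walking, running])

centers, u = fuzzy_c_means(X, c=3)
print("cluster centers (mean g, std g):\n", centers.round(2))
print("hard labels of the first five windows:", u.argmax(axis=1)[:5])
```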
Abstract:
This project presents a scientific study of synthetic data generation methods in the field of data privacy. These methods make it possible to control the transfer of sensitive data to third parties and the statistical utility of the synthetically generated data. All the basic concepts needed to orient the reader are introduced, and one of the most widely used existing methods (IPSO) is analysed. A new method for synthetic data generation (FCRM) is then proposed, based on Fuzzy c-Regression, which allows the balance between information loss and disclosure risk to be controlled through a parameter c.
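The FCRM generator itself is not reproduced here, but the fuzzy c-regression clustering it builds on can be sketched as a switching-regression loop; the one-dimensional data and the interpretation of c as the number of regression models are assumptions for the sketch.

```python
# Hedged sketch of fuzzy c-regression clustering (alternating weighted
# least-squares fits and membership updates) on invented 1-D data; this is
# not the FCRM synthetic-data generator itself.
import numpy as np

def fuzzy_c_regression(x, y, c=2, m=2.0, n_iter=100, seed=0):
    X = np.column_stack([np.ones_like(x), x])        # design matrix with intercept
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    betas = np.zeros((c, X.shape[1]))
    for _ in range(n_iter):
        for k in range(c):                           # weighted least-squares fit per model
            w = u[:, k] ** m
            Xw = X * w[:, None]
            betas[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        err = (y[:, None] - X @ betas.T) ** 2 + 1e-12        # squared residual per model
        ratio = err[:, :, None] / err[:, None, :]
        u = 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(axis=2)   # membership update
    return betas, u

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)
group = rng.integers(0, 2, 200)
y = np.where(group == 0, 2.0 * x + 1.0, -1.5 * x + 20.0) + rng.normal(0, 0.5, 200)

betas, u = fuzzy_c_regression(x, y, c=2)
print("fitted [intercept, slope] per regression model:\n", betas.round(2))
```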
Improved diagnosis of diffuse liver disease using artificial intelligence techniques
Abstract:
Automatic diagnostic discrimination is an application of artificial intelligence techniques to solving clinical cases based on imaging. Diffuse liver diseases are highly prevalent in the population and follow an insidious course, often silent early in their progression. Early and effective diagnosis is necessary because many of these diseases progress to cirrhosis and liver cancer. The usual technique of choice for accurate diagnosis is liver biopsy, an invasive procedure that is not free of complications. This project proposes an alternative, non-invasive method free of contraindications based on liver ultrasonography. The images are digitized and then analyzed using statistical and texture-analysis techniques. The results are validated against the pathology report. Finally, artificial intelligence techniques such as Fuzzy k-Means and Support Vector Machines are applied, and their significance is compared with that of the statistical analysis and the clinician's report. The results show that the technique is statistically valid and a promising non-invasive alternative for diagnosing chronic diffuse liver disease. The artificial intelligence classifiers significantly improve diagnostic discrimination compared with the other statistical methods.
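One of the classifiers named above, the Support Vector Machine, can be sketched on invented ultrasound texture statistics as follows; the feature columns, class labels and values are assumptions, not the project's data.

```python
# Illustrative sketch only: SVM discrimination of hypothetical ultrasound
# texture features (GLCM-style statistics per region of interest).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_per_class = 60
healthy = rng.normal([0.60, 5.0, 0.85], [0.05, 0.8, 0.04], (n_per_class, 3))
diffuse = rng.normal([0.70, 6.5, 0.78], [0.05, 0.8, 0.04], (n_per_class, 3))
X = np.vstack([healthy, diffuse])                      # invented texture statistics per ROI
y = np.array([0] * n_per_class + [1] * n_per_class)    # 0 = healthy, 1 = diffuse disease

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```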
Abstract:
We present in this paper the results of the application of several visual analysis methods to a group of sites, dated between the 6th and 1st centuries BC, in the ager Tarraconensis (Tarragona, Spain), the hinterland of the Roman colony of Tarraco. The difficulty of interpreting the diverse results in a combined way has been resolved by means of statistical methods, such as Principal Components Analysis (PCA) and K-means clustering analysis. These methods have allowed us to classify the sites according to the visual structure of the landscape that contains them and the visual relationships that could have existed among them.
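A minimal sketch of the PCA plus K-means combination described above, applied to invented per-site visibility variables (the variable names and values are assumptions, not the GIS results for the ager Tarraconensis sites).

```python
# Sketch: reduce per-site visibility variables with PCA, then classify the
# sites with K-means; all data below are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_sites = 40
visibility = np.column_stack([
    rng.uniform(0, 100, n_sites),   # % of surrounding terrain visible (invented)
    rng.uniform(0, 15, n_sites),    # number of other sites intervisible (invented)
    rng.uniform(0, 25, n_sites),    # visible km of coastline (invented)
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(visibility))
classes = KMeans(n_clusters=3, n_init=10, random_state=7).fit_predict(scores)
print("site classes:", classes)
```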
Abstract:
The aim of this thesis is to find a method or model for sorting a large set of ideas of varying quality and for identifying, within this set, the relevant and best ideas for further development. The empirical material consists of ideas generated in a joint ideation session of two companies operating in different industries. The theoretical part presents the definition of innovation and different ways of classifying innovations. It also discusses the principle of open innovation and its perhaps ever-growing importance, as well as the factors affecting the exploitation of ideas. The perspectives chosen are factors internal to the company and factors arising from its environment, such as the company's competences and its customers and markets. The theoretical part concludes with a short overview of clustering and its different methods, and of portfolio management. In the empirical part, a few ideas were identified from a set of more than 50 that can be implemented in different ways by the companies that took part in the session.
Abstract:
Segmentation has traditionally been a tool of consumer marketing in particular, but the shift from products to services has increased the need for segmentation in industrial markets as well. The aim of this study is to find clearly distinguishable customer groups on the basis of case material provided by the Finnish management consulting company Synocus Group. K-means clustering reveals three potential market segments based on which offering elements 105 selected customers in the Finnish machinery and metal products industry named as most important. The first cluster consists of price-conscious customers who calculate unit costs. The second cluster consists of service-oriented customers who calculate hourly costs and maximize the operating hours of their machine fleet; technical services and maintenance contracts could perhaps be marketed to this target group. The third cluster consists of productivity-oriented customers who are interested in performance improvement and calculate costs per tonne; they aim for lower total costs through increased performance, longer service life and lower maintenance costs.
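The segmentation step can be sketched as k-means on a customer-by-element importance matrix; the element names and 0/1 answers below are invented, not the Synocus Group case material.

```python
# Hedged sketch: k-means segmentation of customers by which offering elements
# they named as most important (all data invented).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
elements = ["unit price", "maintenance contract", "technical service",
            "performance upgrades", "spare-part availability"]
answers = rng.integers(0, 2, size=(105, len(elements))).astype(float)  # 1 = named as important

km = KMeans(n_clusters=3, n_init=10, random_state=8).fit(answers)
for k in range(3):
    profile = answers[km.labels_ == k].mean(axis=0)
    top = elements[int(np.argmax(profile))]
    print(f"segment {k}: {int(np.sum(km.labels_ == k))} customers, most-named element: {top}")
```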
Abstract:
This master's thesis presents a new unsupervised approach for detecting and segmenting urban areas in hyperspectral images. The proposed method requires three steps. First, in order to reduce the computational cost of the algorithm, a color image of the spectral content is estimated. To this end, a nonlinear dimensionality-reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is carried out for the color display of each hyperspectral image. Then, to discriminate urban areas from non-urban ones, the second step extracts a few discriminating (and complementary) features from this color hyperspectral image. We extracted a series of discriminating parameters describing the characteristics of an urban area, which is mainly composed of man-made objects with simple, regular geometric shapes. We used textural features based on gray levels, gradient magnitude or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and on local line-segment detection. To further reduce the computational complexity of the approach and avoid the "curse of dimensionality" that arises when clustering high-dimensional data, the last step classifies each textural or structural feature individually with a simple K-means procedure and then combines these coarse, cheaply obtained segmentations with an efficient segmentation-map fusion model. The experiments reported here show that this strategy is visually effective and compares favorably with other methods for detecting and segmenting urban areas from hyperspectral images.
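A simplified sketch of the last step described above: each feature map is clustered separately with K-means (k = 2), and the coarse label maps are fused by a simple vote. The synthetic feature maps, the vote-based fusion, and the label-alignment rule (higher feature mean = "urban") are assumptions for the sketch, not the thesis's fusion model.

```python
# Simplified sketch: per-feature K-means segmentation followed by vote-based
# fusion of the coarse label maps (all feature maps are synthetic).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
h, w = 64, 64
urban_mask = np.zeros((h, w), dtype=bool)
urban_mask[20:45, 25:55] = True                       # synthetic urban block

def make_feature(noise):
    f = rng.normal(0.2, noise, (h, w))
    f[urban_mask] += 0.6                              # urban pixels score higher
    return f

features = [make_feature(0.15) for _ in range(4)]     # e.g. gradient, co-occurrence stats

label_maps = []
for f in features:
    km = KMeans(n_clusters=2, n_init=10, random_state=9).fit(f.reshape(-1, 1))
    urban_label = int(np.argmax(km.cluster_centers_.ravel()))    # align cluster labels
    label_maps.append((km.labels_.reshape(h, w) == urban_label).astype(int))

fused = (np.mean(label_maps, axis=0) >= 0.5).astype(int)         # vote (ties count as urban)
agreement = (fused == urban_mask.astype(int)).mean()
print(f"agreement with the synthetic ground truth: {agreement:.2%}")
```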
Abstract:
Computational Biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. Data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interaction, their respective contributions to the same pathways, and so on. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; they are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns in gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the original matrix, and biclusters are not disjoint. Computing biclusters is costly because all combinations of columns and rows must be considered in order to find all the biclusters: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more of several constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms and validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With some of the algorithms developed here, biclusters with very high row variance, higher than the row variance obtained by any other algorithm using mean squared residue, are identified in both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
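The mean squared residue used throughout this work has a compact standard definition (Cheng and Church): the average of the squared residues after removing row, column and overall means. A minimal NumPy version, with an invented additive example, is shown below.

```python
# Mean squared residue (MSR) of a submatrix: average of
# (b_ij - row_mean_i - col_mean_j + overall_mean)^2.
import numpy as np

def mean_squared_residue(B):
    row_means = B.mean(axis=1, keepdims=True)
    col_means = B.mean(axis=0, keepdims=True)
    overall = B.mean()
    residue = B - row_means - col_means + overall
    return float((residue ** 2).mean())

# A perfectly coherent (additive) bicluster has MSR 0; noise raises it.
coherent = np.array([[1.0, 2.0, 3.0],
                     [2.0, 3.0, 4.0],
                     [5.0, 6.0, 7.0]])
print(mean_squared_residue(coherent))                                         # 0.0
print(mean_squared_residue(coherent + np.random.default_rng(10).normal(0, 0.5, (3, 3))))
```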
Abstract:
Biclustering is the simultaneous clustering of both rows and columns of a data matrix. A measure called the Mean Squared Residue (MSR) is used to evaluate the coherence of rows and columns within a submatrix simultaneously. In this paper a novel algorithm is developed for biclustering gene expression data using the newly introduced concept of an MSR difference threshold. In the first step, high-quality bicluster seeds are generated using the K-Means clustering algorithm. Then more genes and conditions (nodes) are added to the bicluster. Before a node is added, the MSR of the bicluster, X, is calculated; after the node is added, the MSR, Y, is calculated again. The added node is deleted if Y minus X is greater than the MSR difference threshold, or if Y is greater than the MSR threshold, which depends on the dataset. The MSR difference threshold is different for the gene list and the condition list, and it also depends on the dataset; proper values should be identified through experimentation in order to obtain biclusters of high quality. The results obtained on a benchmark dataset clearly indicate that this algorithm outperforms many of the existing biclustering algorithms.
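The add-and-test step described above can be sketched as follows; the synthetic data, the seed, and the threshold values are invented, and the sketch covers only row addition, not the paper's full algorithm.

```python
# Minimal sketch of the node-addition test: a candidate row is kept only if the
# MSR increase (Y - X) stays below the MSR difference threshold and the new MSR
# Y stays below the dataset's MSR threshold.
import numpy as np

def msr(B):
    r = B - B.mean(axis=1, keepdims=True) - B.mean(axis=0, keepdims=True) + B.mean()
    return float((r ** 2).mean())

def try_add_row(data, rows, cols, candidate, msr_threshold, diff_threshold):
    x = msr(data[np.ix_(rows, cols)])                 # MSR before adding the node (X)
    new_rows = rows + [candidate]
    y = msr(data[np.ix_(new_rows, cols)])             # MSR after adding the node (Y)
    if y - x > diff_threshold or y > msr_threshold:
        return rows                                   # reject: revert to the old bicluster
    return new_rows                                   # accept the enlarged bicluster

rng = np.random.default_rng(11)
data = rng.normal(0, 1, (50, 10))
data[:5, :5] += np.arange(5)[:, None] + np.arange(5)  # plant an additive pattern

rows, cols = [0, 1, 2], [0, 1, 2, 3, 4]               # seed bicluster (e.g. from K-Means)
for cand in range(3, 50):
    rows = try_add_row(data, rows, cols, cand, msr_threshold=1.0, diff_threshold=0.3)
print("rows in the grown bicluster:", rows)
```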
Abstract:
Multispectral analysis is a promising approach to tissue classification and abnormality detection from Magnetic Resonance (MR) images, but instability in the accuracy and reproducibility of the classification results from conventional techniques keeps it far from clinical application. Recent studies proposed Independent Component Analysis (ICA) as an effective method for separating source signals from multispectral MR data. However, it often fails to extract local features such as small abnormalities, especially from dependent real data. A multisignal wavelet analysis prior to ICA is proposed in this work to resolve these issues: the best decorrelated detail coefficients are combined with the input images to give better classification results. The performance improvement of the proposed method over conventional ICA is demonstrated by segmentation and classification using k-means clustering. Experimental results from synthetic and real data strongly confirm the positive effect of the new method, with improved Tanimoto index/sensitivity values of 0.884/93.605 for reproduced small white matter lesions.
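A hedged sketch of the ICA plus k-means stage on synthetic multispectral data (the wavelet preprocessing proposed above is omitted for brevity): the channels are stacked per pixel, unmixed with FastICA, and segmented with k-means. The image sizes, tissue classes and mixing values are assumptions.

```python
# Sketch on synthetic data: per-pixel channel stacking, FastICA unmixing,
# then k-means segmentation of the independent components.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(12)
h, w, n_channels = 64, 64, 3
tissue = rng.integers(0, 3, (h, w))                        # synthetic tissue classes
mixing = rng.uniform(0.5, 1.5, (3, n_channels))            # per-tissue channel intensities
images = mixing[tissue] + rng.normal(0, 0.05, (h, w, n_channels))

pixels = images.reshape(-1, n_channels)                    # one row per pixel
sources = FastICA(n_components=3, random_state=12).fit_transform(pixels)
labels = KMeans(n_clusters=3, n_init=10, random_state=12).fit_predict(sources).reshape(h, w)
print("segmented class sizes:", np.bincount(labels.ravel()))
```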