904 results for Network Analysis Methods


Relevance: 90.00%

Abstract:

Population ageing is forcing society and public health care to change. To enable elderly people to keep living at home, the service system must adapt to the changing situation. The purpose of this Master's thesis is to identify customer-oriented service bundles offered close to the customer. The theoretical framework of the study is built on customer value creation and service offerings. The group under study consists of 60- to 90-year-olds in the South Karelia region, and the data were collected from respondents with a postal questionnaire. The study is exploratory, and quantitative and network analysis methods were used to interpret the results. The key results of the work are the identified customer segments and the service packages formed on the basis of their needs. The results indicate customer needs and are also analysed from the producer's perspective. In addition to the empirical results, the theoretical framework has been developed further so that service-centric theories can be understood from the customer's perspective as well as from the firm's.

Relevance: 90.00%

Abstract:

The papermaking industry has continuously developed intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Thanks to much improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and the forming of a fiber suspension that is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows the computation of fiber length and curl index, which correlate well with the ground truth values. The bubble detection method, the second contribution, was developed to estimate the gas volume at the delignification stage of the pulping process from high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. Together, these four contributions assist in developing integrated, factory-level, vision-based process control.
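As a rough illustration of one of the fiber properties the framework computes, the sketch below evaluates a curl index from a traced fiber centerline using one common definition (contour length over end-to-end distance, minus one); the thesis's actual microscopy pipeline (segmentation and centerline tracing) is assumed to run upstream, and the quarter-circle "fiber" is invented test data.

```python
# Sketch: curl index of a traced fiber centerline, using one common
# definition (contour length / end-to-end distance - 1). The upstream
# segmentation and tracing steps of the thesis are assumed here.
import numpy as np

def curl_index(centerline: np.ndarray) -> float:
    """centerline: (N, 2) array of ordered (x, y) points along the fiber."""
    steps = np.diff(centerline, axis=0)
    contour_len = np.sum(np.linalg.norm(steps, axis=1))   # true fiber length
    end_to_end = np.linalg.norm(centerline[-1] - centerline[0])
    return contour_len / end_to_end - 1.0

# A gently curved fiber gives a curl index > 0; a straight one gives ~0.
t = np.linspace(0, np.pi / 2, 50)
fiber = np.column_stack([np.cos(t), np.sin(t)])   # quarter-circle arc
print(f"curl index: {curl_index(fiber):.3f}")     # about 0.11
```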

Relevance: 90.00%

Abstract:

This research analyses the value and content of local service offerings that enable longer periods of living at home for elderly people. Mobile health care and new distribution services provide an interesting solution in this context. The research aims to shed light on the question, 'How do we bundle services based on different customer needs?' A research process consisting of three main phases was applied for this purpose: elderly customers were segmented, the importance of services was rated, and service offerings were defined. Value creation and service offerings provide the theoretical framework for the research. The target group is 60- to 90-year-old individuals in South Karelia, and the data were acquired via a postal questionnaire. The research was conducted as an exploratory study utilizing the methods of quantitative analysis and social network analysis. The main results of the report are the identified customer segments and the service packages that fit the segments' needs. The results indicate the needs of customers and are additionally analysed from the producer's point of view. In addition to the empirical results, the theoretical framework has been developed further so that service-related theories can be seen from the customer's point of view and not just from the producer's.

Relevance: 90.00%

Abstract:

The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data grows rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as gene microarray studies. Such networks are typically very large and highly connected, so fast algorithms are needed for producing visually pleasant layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within the regular force-directed graph layout algorithm.
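As a minimal illustration of the baseline imputation method mentioned above, the sketch below applies k-NN imputation to a toy gene expression matrix with scikit-learn's KNNImputer; it does not reproduce the thesis's approach of guiding imputation with curated biological information, and all data are synthetic.

```python
# Minimal k-NN missing-value imputation on a toy gene expression
# matrix (rows = genes, columns = conditions) -- a sketch of the
# baseline method discussed above, not the thesis's guided variant.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))           # 100 genes, 8 conditions
mask = rng.random(X.shape) < 0.05       # knock out ~5% of entries
X_missing = X.copy()
X_missing[mask] = np.nan

# Each missing entry is filled from the k nearest genes (rows),
# with distances computed on the observed entries only.
imputer = KNNImputer(n_neighbors=10)
X_imputed = imputer.fit_transform(X_missing)

rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"imputation RMSE on held-out entries: {rmse:.3f}")
```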

Relevance: 90.00%

Abstract:

In the design of electrical machines, efficiency improvements have become very important. However, there are at least two significant cases in which the compactness of electrical machines is critical and tolerance of extremely high losses is valued: vehicle traction, where very high torque density is desired at least temporarily, and direct-drive wind turbine generators, whose mass should be acceptably low. As ever higher torque densities and ever more compact electrical machines are developed for these purposes, thermal issues, i.e. avoidance of over-temperatures and damage in conditions of high heat losses, are becoming of utmost importance. Excessive temperatures of critical machine components, such as insulation and permanent magnets, easily cause failures of the whole electrical equipment. In electrical machines with excitation systems based on permanent magnets, special attention must be paid to the rotor temperature because of the temperature-sensitive properties of permanent magnets. The allowable temperature of NdFeB magnets is usually significantly less than 150 °C. The practical problem is that the part of the machine where the permanent magnets are located should stay cooler than the copper windings, which can easily tolerate temperatures of 155 °C or 180 °C. Therefore, new cooling solutions should be developed to cool permanent magnet electrical machines with high torque density and, consequently, highly concentrated losses in their stators. In this doctoral dissertation, direct and indirect liquid cooling techniques for permanent magnet synchronous machines (PMSM) with high torque density are presented and discussed. The aim of this research is to analyse the thermal behaviour of the machines using the most applicable and accurate thermal analysis methods and to propose new, practical machine designs based on these analyses. Computational fluid dynamics (CFD) simulations of the heat transfer inside the machines and lumped-parameter thermal network (LPTN) simulations, both presented herein, are used for the analyses. Detailed descriptions of the simulated thermal models are also presented. Most of the theoretical considerations and simulations have been verified by experimental measurements on a copper tooth-coil (motorette) and on various prototypes of electrical machines. The indirect liquid cooling systems of a 100 kW axial flux (AF) PMSM and a 110 kW radial flux (RF) PMSM are analysed by means of simplified 3D CFD conjugate thermal models of parts of both machines. In terms of results, a significant temperature drop of 40 °C in the stator winding and 28 °C in the rotor of the AF PMSM was achieved by adding highly thermally conductive materials into the machine: copper bars inserted in the teeth, and potting material around the end windings. In the RF PMSM, the potting material resulted in a temperature decrease of 6 °C in the stator winding and of 10 °C in the rotor-embedded permanent magnets. Two types of unique direct liquid cooling systems for low power machines are analysed to demonstrate the effectiveness of the cooling systems in conditions of highly concentrated heat losses. LPTN analysis and CFD thermal analysis (the latter being particularly useful for unique designs) were applied to simulate the temperature distribution within the machine models. Oil-immersion cooling provided good cooling capability for a 26.6 kW PMSM of a hybrid vehicle. A direct liquid cooling system for the copper winding with inner stainless steel tubes was designed for an 8 MW direct-drive PM synchronous generator. The design principles of this cooling solution are described in detail in this thesis. The thermal analyses demonstrate that the stator winding and rotor magnet temperatures are kept significantly below their critical temperatures with demineralized water flow. A comparison of coolant agents indicates that propylene glycol is more effective than ethylene glycol in arctic conditions.
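To illustrate what an LPTN computation involves at its simplest, the sketch below solves a three-node steady-state thermal network (winding, stator core, frame) for a liquid-cooled machine; the conductances, losses and coolant temperature are invented round numbers, not values from the dissertation.

```python
# Minimal steady-state lumped-parameter thermal network (LPTN) sketch:
# three nodes (winding, stator core, frame) linked by thermal
# conductances, with copper/iron losses injected and the frame cooled
# by coolant. All values are illustrative assumptions.
import numpy as np

G_wc, G_cf, G_fk = 50.0, 100.0, 200.0   # W/K: winding-core, core-frame, frame-coolant
P = np.array([2000.0, 500.0, 0.0])      # W: copper loss, iron loss, frame
T_cool = 40.0                           # coolant temperature, deg C

# Conductance matrix for nodes [winding, core, frame]; the coolant
# enters as a fixed-temperature boundary on the frame node.
G = np.array([[ G_wc,      -G_wc,      0.0       ],
              [-G_wc,  G_wc + G_cf,   -G_cf      ],
              [ 0.0,      -G_cf,   G_cf + G_fk   ]])
b = P + np.array([0.0, 0.0, G_fk * T_cool])

T = np.linalg.solve(G, b)   # -> winding 117.5, core 77.5, frame 52.5 deg C
print(f"winding {T[0]:.1f} C, core {T[1]:.1f} C, frame {T[2]:.1f} C")
```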

Relevance: 90.00%

Abstract:

The purpose of this study was to identify factors that influence an organization's internal knowledge networks and employees' network roles. The research problem was examined through the theoretical framework of the knowledge-based view of knowledge management, social capital, and network research. The study was conducted as a case study in a Finnish industrial organization. The empirical part of the study used both quantitative and qualitative research methods. The quantitative data were collected with a structured questionnaire and analysed with social network analysis. The qualitative data were collected through interviews and analysed abductively using content analysis. According to the results, knowledge networks and network roles are affected by both external and internal factors. External factors are those related to the environment and circumstances; internal factors, in turn, are a person's character traits, competence, motivation and knowledge. According to the results of this study, internal factors explain the differences between employees. Employees' behaviour, motivation, attitudes and competence can be influenced with human resource management practices, whereas personality remains relatively stable. Previous research on knowledge management, knowledge networks and network roles has not, however, examined the effect of personality traits on knowledge transfer.

Relevance: 90.00%

Abstract:

This thesis uses an automatic pattern recognition algorithm and common dual moving average crossover rules to explain the sell-buy imbalance of retail investors trading on the Stuttgart stock exchange, and thereby to answer the question: do retail investors base their trading decisions on technical analysis methods? Based on prior research on investor behaviour and the profitability of technical analysis, the baseline assumption was that retail investors would use technical analysis methods. The empirical study, conducted on data on DAX30 companies from 2009 to 2013, did not produce a sufficiently clear answer to the research question. Weak evidence nevertheless suggests that retail investors shift their trading behaviour in the direction indicated by certain patterns and crossover rules.
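As a minimal illustration of the dual moving average crossover rules the study tests, the sketch below generates crossover signals on a synthetic price series; the 20/50-day windows and the random-walk data are illustrative assumptions.

```python
# Sketch of a dual moving-average crossover rule: long when the short
# MA is above the long MA, short when below; a trading signal fires
# whenever the relation flips. Windows and data are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

short_ma = price.rolling(20).mean()
long_ma = price.rolling(50).mean()

signal = np.sign(short_ma - long_ma)        # +1 long, -1 short
crossings = signal.diff().fillna(0) != 0    # days on which the rule flips
print(f"number of crossover signals: {int(crossings.sum())}")
```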

Relevance: 90.00%

Abstract:

This study presents an understanding of how a U.S.-based international MBA school has been able to achieve competitive advantage within a relatively short period of time. A framework is built to comprehend how the dynamic capability and value co-creation theories are connected, and to understand how dynamic capabilities have enabled value co-creation between the school and its students, leading to such competitive advantage for the school. The data collection method followed a qualitative single-case study with a process perspective. Seven semi-structured interviews were conducted in September and October of 2015; one interviewee was a current employee of the MBA school, and the other six were graduates and/or former employees. In addition, the researcher has worked as a recruiter at the MBA school, which helped to build bridges and form a coherent whole of the empirical findings. Data analysis was conducted by first identifying themes from the interviews, after which a narrative was written and a causal network model was built. Thus, a combination of thematic analysis, narrative and grounded theory was used as the data analysis method. This study finds that value co-creation is enabled by the dynamic capabilities of the MBA school; equally, the capabilities would not be dynamic if value co-creation did not take place. Thus, this study shows that even though the two theories represent different levels of analysis, they are intertwined and together can help to explain competitive advantage. The MBA case school's dynamic capabilities are identified as its sales and marketing capabilities and its international market creation capabilities. The study finds that the MBA school does not only co-create value with existing students (customers) in the school setting; instead, most of the value co-creation happens between the school and the student cohorts (network) already in the recruiting phase. Therefore, as a theoretical implication, the network should be considered part of the context. The main value created seems to lie in the MBA case school's international setting and networks. MBA schools around the world can learn from this study: schools should try to find their own niche and specialize, based on their own values and capabilities. With a differentiating focus and unique, practical content, schools can and should be well marketed and proactively sold in order to receive more student applications and enhance competitive advantage. Even though an MBA school can effectively be treated as a business, as the study shows, the main emphasis should still be on providing quality education. Good content with efficient marketing can be the winning combination for an MBA school.

Relevance: 90.00%

Abstract:

The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches exist for the criticality classification of spare parts. The practical problem in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. In order to find a classification method, a literature review of various analysis methods is required. The requirements of the case company also have to be recognized; this is achieved by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective. This perspective is also relevant for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed for classifying spare parts. The decision tree classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure, and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be utilized as a generic model for recognizing critical spare parts of other similar equipment, according to which spare part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
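A toy sketch of how a decision tree over the five criteria might assign criticality classes is given below; the actual thresholds and branch ordering of the case company's model are not stated in the abstract, so this particular rule structure is an assumption for illustration only.

```python
# Illustrative decision-tree rule over the five criteria named above.
# The branch order and class assignments are invented assumptions,
# not the case company's actual model.
def criticality_class(safety_risk: bool, availability_risk: bool,
                      functionally_critical: bool, failure_predictable: bool,
                      failure_probable: bool) -> int:
    """Return a criticality class from 1 (non-critical) to 5 (highly critical)."""
    if safety_risk:
        return 5                                    # safety always dominates
    if availability_risk:
        return 4 if failure_probable else 3
    if functionally_critical:
        return 2 if failure_predictable else 3      # predictable failures rank lower
    return 2 if failure_probable else 1

print(criticality_class(False, True, True, False, True))   # -> 4
```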

Relevance: 90.00%

Abstract:

IT outsourcing (ITO) refers to the shift of IT/IS activities from inside an organization to external providers. In prior research, the governance of ITO is recognized as having persistent strategic importance for practice, because it is tightly related to ITO success. Under the rapid transformation of the global market, the evolving practice of ITO requires updated knowledge on effective governance. However, research on ITO governance is still underdeveloped due to the lack of integrated theoretical frameworks and the limited variety of empirical settings beyond dyadic client-vendor relationships. In particular, as multi-sourcing has become an increasingly common practice in ITO, its new governance challenges must be attended to by both ITO researchers and practitioners. To address this research gap, this study aims to understand multi-sourcing governance with an integrated theoretical framework incorporating both governance structure and governance mechanisms. The focus is on the emerging deviations among formal, perceived, and practiced governance. With an interpretive perspective, a single case study is conducted with mixed methods: Social Network Analysis (SNA) and qualitative inquiries. The empirical setting comprises one client firm and its two IT suppliers for IT infrastructure services. The empirical material is analyzed at three levels: within one supplier firm, between the client and one supplier, and among all three firms. Empirical evidence at all levels illustrates various deviations in governance mechanisms, through which emerging governance structures are shaped. This dissertation contributes to the understanding of ITO governance in three domains: the governance of ITO in general, the governance of multi-sourcing in particular, and research methodology. For ITO governance in general, this study has identified two research strands, governance structure and governance mechanisms, and integrated both concepts under a unified framework. The composition of four research papers contributes to multi-sourcing research by illustrating the benefits of zooming in and out across the multilateral relationships with different aspects and scopes. Methodologically, the viability and benefits of mixed methods are illustrated and confirmed for both researchers and practitioners.

Relevance: 90.00%

Abstract:

Evolving antimicrobial resistance coupled with a recent increase in incidence highlights the importance of reducing gonococcal transmission. Establishing novel risk factors associated with gonorrhea facilitates the development of appropriate prevention and disease control strategies. Sexual network analysis (NA), a novel research technique used to further understand sexually transmitted infections, was used to identify network-based risk factors in a defined region of Ontario, Canada, experiencing an increase in the incidence of gonorrhea. Linear network structures were identified as important reservoirs of gonococcal transmission. Additionally, a significant association between a central network position and gonorrhea was observed. The central participants were more likely to be younger, report a greater number of risk factors, engage in anonymous sex, have had multiple sex partners in the past six months, and have same-sex partners. The network-based risk factors identified through sexual NA, serving as a method of analyzing local surveillance data, support the development of strategies aimed at reducing gonococcal spread.
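As a small illustration of the network measures such an analysis rests on, the sketch below builds a toy contact network with networkx and ranks members by betweenness centrality, one standard way to quantify a "central network position"; the edge list is invented.

```python
# Sketch: rank members of a toy contact network by betweenness
# centrality, a standard measure of central network position.
# The edge list is invented for illustration.
import networkx as nx

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("c", "e"),
         ("e", "f"), ("b", "g"), ("g", "h")]
G = nx.Graph(edges)

centrality = nx.betweenness_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{person}: betweenness {score:.2f}")
```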

Relevance: 90.00%

Abstract:

Medicine requires fast, simple and noninvasive diagnostic techniques. Several such methods have become possible because of the growth of technology that provides the necessary means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals and to explore their dynamic nature. The purpose of this thesis is to characterize the complexity of pathological voices relative to healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time series methods is analysed. Three groups of samples are used: healthy individuals, subjects with vocal pathologies, and stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" ("Both are good friends" in English) are recorded using a microphone. The recorded audio is converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer, and zero crossing rate (ZCR) were computed, and nonlinear measures, namely the maximum Lyapunov exponent (λmax), correlation dimension (D2), Kolmogorov entropy (K2), and a new measure of entropy, permutation entropy (PE), were evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish the regular and complex nature of a signal and extract information about changes in the dynamics of the process by indicating sudden changes in its value. The results show that nonlinear dynamical methods are a suitable technique for voice signal analysis, owing to the chaotic component of the human voice. Permutation entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability: pathological groups have higher entropy values than the normal group, while the stuttering signals have lower entropy values than the normal signals. PE is effective in characterizing the level of improvement after two weeks of speech therapy in the case of stuttering subjects, and in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive and fast PE algorithm for diagnosis in vocal disorders and stuttering subjects.
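Since permutation entropy is central to the thesis, a minimal implementation following the standard Bandt-Pompe construction is sketched below: the signal is embedded in dimension m, ordinal patterns are counted, and the normalized Shannon entropy of their distribution is returned. The parameters m=3, tau=1 and the sine/noise test signals are illustrative defaults, not the thesis's settings.

```python
# Minimal permutation entropy (Bandt-Pompe): embed the signal in
# dimension m with delay tau, count ordinal patterns, and return the
# normalized Shannon entropy of their distribution.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x: np.ndarray, m: int = 3, tau: int = 1) -> float:
    """Normalized PE in [0, 1]; higher means a more complex signal."""
    n = len(x) - (m - 1) * tau
    patterns = Counter(
        tuple(np.argsort(x[i:i + m * tau:tau])) for i in range(n)
    )
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

t = np.linspace(0, 10 * np.pi, 2000)
print(f"sine : {permutation_entropy(np.sin(t)):.3f}")   # regular, low PE
noise = np.random.default_rng(0).normal(size=2000)
print(f"noise: {permutation_entropy(noise):.3f}")       # complex, PE near 1
```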

Relevance: 90.00%

Abstract:

Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared with the fuller forms and higher values of commercial ships. They normally operate in a higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. Therefore the structural design and analysis methods differ from those for commercial ships. Certain design guidelines have been given in documents such as the Naval Engineering Standards, and one of the new developments in this regard is the introduction of classification society rules for the design of warships. The marine environment imposes subjective and objective uncertainties on ship structures. The uncertainties in loads, material properties, etc. make reliable predictions of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required. A preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities in the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analyses of the three models have been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against the permissible values, and the structure has been found adequate in all cases. The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such agreement was found between the inter-stiffener plating model and the other two models. Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by the NES. Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment (FOSM) method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.
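As a pocket illustration of the FOSM step, the sketch below computes a reliability index and failure probability for a generic limit state g = R - S with independent normal resistance and load; the numbers are invented, not the thesis's hull data (where Young's modulus and shell thickness were the random variables).

```python
# Sketch of a First Order Second Moment (FOSM) reliability estimate
# for a limit state g = R - S (resistance minus load effect).
# Means and standard deviations are illustrative assumptions.
from scipy.stats import norm

mu_R, sigma_R = 450.0, 40.0   # resistance, MPa (mean, std)
mu_S, sigma_S = 300.0, 30.0   # load effect, MPa (mean, std)

beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5   # reliability index
p_f = norm.cdf(-beta)                                     # failure probability
print(f"beta = {beta:.2f}, Pf = {p_f:.2e}")               # beta = 3.00, Pf ~ 1.3e-03
```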

Relevance: 90.00%

Abstract:

Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interactions, their respective contributions to the same pathways, and so on. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns in gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the original matrix, and biclusters are not disjoint. Computing biclusters is costly because all combinations of rows and columns must be considered: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is nevertheless a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue (MSR) to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.

All the algorithms begin the search from tightly coregulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms were applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters were identified by all the algorithms and validated against the Gene Ontology database. The algorithms are compared with other biclustering algorithms and overcome some of the problems associated with existing ones. With the help of some of the algorithms developed in this work, biclusters with very high row variance (higher than achieved by any other algorithm using the mean squared residue) were identified from both the Yeast and Lymphoma data sets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
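The score all ten algorithms threshold against is the mean squared residue of Cheng and Church; a minimal implementation is sketched below. A perfectly additive (coherent) submatrix scores exactly zero, which the toy example demonstrates; the matrices are invented test data.

```python
# Mean squared residue (Cheng & Church) of a candidate bicluster:
# each entry's residue is measured against its row mean, column mean,
# and the overall mean. A perfectly additive submatrix has MSR 0.
import numpy as np

def mean_squared_residue(sub: np.ndarray) -> float:
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

# Rows shifted by constants: additive coherence, so MSR is exactly 0.
coherent = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [0.0, 1.0, 2.0]])
print(mean_squared_residue(coherent))                                 # 0.0
print(mean_squared_residue(np.random.default_rng(0).random((5, 4))))  # > 0
```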

Relevance: 90.00%

Abstract:

Active magnetic bearings make it possible to levitate rotating bodies in magnetic fields without mechanical contact. Due to the nature of the system, the essential signals of machines with active magnetic bearings are available for diagnostic tasks without additional measurement equipment. This work develops a concept that uses these system-inherent signals to diagnose rotating machines with magnetic bearings, enabling not only continuous condition monitoring but also a fast assessment of the machine's state. Faults can be detected early, identified by cause, type and magnitude, and appropriate countermeasures initiated. From the acquired signals, features are extracted with signal-based and model-based methods. For the magnetic bearing control loop, the use of model-based parameter identification methods is investigated, and their applicability is demonstrated for the diagnosis of the controller and the power amplifier. Using simulation models and experiments on test rigs, the feature trajectories are recorded in the normal reference state and under faults, and the results are stored in a knowledge base. This serves as the basis for defining thresholds and rules for monitoring the system and for building knowledge-based diagnosis models. During monitoring, the feature values are checked against thresholds, information about detected faults and operating states is generated, and alarm messages are issued where necessary. Slowly developing faults can be detected by computing feature trends with regression analysis. Going beyond the threshold monitoring hitherto common for active magnetic bearings, the fault diagnosis links the extracted features to identify and localize occurring faults. The diagnosis is performed with rule-based fuzzy logic, which allows linguistic statements in the form of expert knowledge to be incorporated and uncertainty to be taken into account, thus enabling the diagnosis of complex systems. Diagnosis models are built and verified for actuator, sensor and controller faults in the magnetic bearing control loop, as well as for faults caused by external forces and unbalance. It is demonstrated that the developed diagnosis concept delivers correct diagnostic statements at a manageable computational cost. By cascading fuzzy logic modules, the rule base remains transparent and rule evaluation is optimized. The end result is a novel hybrid diagnosis concept that combines signal-based and model-based feature extraction with knowledge-based fault diagnosis methods. The concept is designed to be adaptable to different requirements and applications in rotating machines.
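As a toy illustration of rule-based fuzzy diagnosis of the kind described, the sketch below fuzzifies two symptom features and combines them with min/max inference into fault confidences; the membership functions, feature names and rules are invented, not taken from the work.

```python
# Toy sketch of rule-based fuzzy diagnosis: two symptom features are
# fuzzified with ramp membership functions and combined via min (AND)
# and complement (NOT) inference. All features, thresholds and rules
# are invented assumptions for illustration.
def high(x: float, lo: float, hi: float) -> float:
    """Ramp membership: 0 below lo, 1 above hi, linear in between."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def diagnose(current_ripple: float, position_error: float) -> dict:
    mu_ripple = high(current_ripple, 0.1, 0.5)    # amperes
    mu_pos = high(position_error, 20.0, 100.0)    # micrometers
    return {
        # Rule 1: ripple high AND position error high -> actuator fault
        "actuator_fault": min(mu_ripple, mu_pos),
        # Rule 2: position error high AND ripple NOT high -> sensor fault
        "sensor_fault": min(mu_pos, 1.0 - mu_ripple),
    }

print(diagnose(current_ripple=0.4, position_error=90.0))
# -> {'actuator_fault': 0.75, 'sensor_fault': 0.25}
```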