749 results for Fuzzy Linguistic Controllers
Abstract:
This master's thesis introduces fuzzy tolerance/equivalence relations and their application in cluster analysis. It presents the construction of fuzzy equivalence relations using increasing generators, and investigates the role of increasing generators in the creation of intersection, union, and complement operators. The objective is to develop different varieties of fuzzy tolerance/equivalence relations from different varieties of increasing generators. Finally, we perform a comparative study of the developed varieties of fuzzy tolerance/equivalence relations in their application to a clustering method.
Abstract:
The present study compares the performance of stochastic and fuzzy models for the analysis of the relationship between clinical signs and diagnosis. Data obtained for 153 children concerning diagnosis (pneumonia, other non-pneumonia diseases, absence of disease) and seven clinical signs were divided into two samples, one for analysis and the other for validation. The former was used to derive relations by multi-discriminant analysis (MDA) and by fuzzy max-min compositions (fuzzy), and the latter was used to assess the predictions drawn from each type of relation. MDA and fuzzy were closely similar in terms of prediction, correctly allocating 75.7 to 78.3% of patients in the validation sample and displaying only a single instance of disagreement: a patient with a low level of toxemia was misclassified as not diseased by MDA but correctly identified as somehow ill by fuzzy. Concerning relations, each method provided different information, revealing different aspects of the relations between clinical signs and diagnoses. Both methods agreed in pointing to X-ray, dyspnea, and auscultation as the signs best related to pneumonia, but only fuzzy was able to detect relations of heart rate, body temperature, toxemia, and respiratory rate with pneumonia. Moreover, only fuzzy was able to detect a relationship between heart rate and absence of disease, which allowed the detection of six malnourished children whose diagnoses as healthy are, indeed, disputable. The conclusion is that even though fuzzy set theory might not improve prediction, it certainly does enhance clinical knowledge, since it detects relationships not visible to stochastic models.
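The fuzzy max-min composition at the core of this comparison can be sketched in a few lines. The matrices below are invented for illustration and are not the study's data:

```python
def max_min_compose(R, S):
    """Max-min composition of fuzzy relations R (m x n) and S (n x p):
    T[i][k] = max over j of min(R[i][j], S[j][k])."""
    m, n, p = len(R), len(S), len(S[0])
    return [[max(min(R[i][j], S[j][k]) for j in range(n)) for k in range(p)]
            for i in range(m)]

# One patient (row) x three clinical signs, composed with a hypothetical
# sign -> (pneumonia, healthy) relation matrix.
patient = [[0.8, 0.2, 0.6]]           # memberships of the observed signs
relation = [[0.9, 0.1],
            [0.3, 0.7],
            [0.8, 0.2]]
print(max_min_compose(patient, relation))  # -> [[0.8, 0.2]]
```

The output gives the patient's degree of association with each diagnosis, from which the strongest diagnosis can be read off.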
Abstract:
In view of the importance of anticipating critical situations in medicine, we propose the use of a fuzzy expert system to predict the need for advanced neonatal resuscitation efforts in the delivery room. This system relates maternal medical, obstetric, and neonatal characteristics to the clinical condition of the newborn, providing a risk measure of the need for advanced neonatal resuscitation. It is structured as a fuzzy composition developed on the basis of the subjective perception of danger of nine neonatologists facing 61 antenatal and intrapartum clinical situations, each of which provides a degree of association with the risk of perinatal asphyxia. The resulting relational matrix describes the association between clinical factors and the risk of perinatal asphyxia. Given the presence or absence of all 61 clinical factors as inputs, the system returns the rate of risk of perinatal asphyxia as output. A prospectively collected series of 304 cases of perinatal care was analyzed to ascertain system performance. The fuzzy expert system presented a sensitivity of 76.5% and a specificity of 94.8% in identifying the need for advanced neonatal resuscitation measures, considering a cut-off value of 5 on a scale ranging from 0 to 10. The area under the receiver operating characteristic curve was 0.93. The identification of risk situations plays an important role in the planning of health care. These preliminary results encourage us to develop further studies and refine this model, with the aim of implementing an auxiliary system able to help health care staff make decisions in perinatal care.
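The reported sensitivity and specificity follow from counting classifications on either side of the cut-off. A minimal sketch, with invented scores on the 0-10 risk scale rather than the study's cases:

```python
def sensitivity_specificity(scores, labels, cutoff):
    """Sensitivity and specificity of a risk score at a given cutoff.
    labels: 1 = needed advanced resuscitation, 0 = did not."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

scores = [7.2, 4.1, 8.9, 2.0, 5.5, 1.3]   # hypothetical system outputs
labels = [1,   1,   1,   0,   0,   0]
sens, spec = sensitivity_specificity(scores, labels, cutoff=5)
print(round(sens, 2), round(spec, 2))  # -> 0.67 0.67
```

Sweeping the cutoff over the 0-10 range and plotting the resulting (1 - specificity, sensitivity) pairs yields the ROC curve whose area the abstract reports.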
Abstract:
Coronary artery disease (CAD) is a worldwide leading cause of death. The standard method for evaluating critical partial occlusions is coronary arteriography, a catheterization technique which is invasive, time consuming, and costly. There are noninvasive approaches for the early detection of CAD. The basis for the noninvasive diagnosis of CAD has been laid in a sequential analysis of the risk factors and the results of the treadmill test and myocardial perfusion scintigraphy (MPS). Many investigators have demonstrated that the diagnostic applications of MPS are appropriate for patients who have an intermediate likelihood of disease. Although this information is useful, it is only partially utilized in clinical practice due to the difficulty of properly classifying the patients. Since the seminal work of Lotfi Zadeh, fuzzy logic has been applied in numerous areas. In the present study, we proposed and tested a model to select patients for MPS based on fuzzy set theory. A group of 1053 patients was used to develop the model and another group of 1045 patients was used to test it. Receiver operating characteristic curves were used to compare the performance of the fuzzy model against expert physician opinions, and showed that the performance of the fuzzy model was equal or superior to that of the physicians. Therefore, we conclude that the fuzzy model could be a useful tool to assist the general practitioner in the selection of patients for MPS.
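The ROC comparison rests on the area under the curve, which can be computed directly from scores and outcomes via the rank-sum (Mann-Whitney) identity: the AUC is the probability that a randomly chosen positive case scores above a randomly chosen negative one. The scores below are illustrative only:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    count pairwise wins of positives over negatives (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0 (perfect separation)
```

Applied to the fuzzy model's outputs and the physicians' ratings on the same test group, this single number is what the study's comparison reduces to.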
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a prevention tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use to assess patents arguably fall into four categories, all based solely on conventional discounted cash flow (DCF) analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and DCF's inability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness in that uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation, or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables.
Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, this dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
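The pay-off method values a project from a fuzzy net present value. A minimal sketch for a triangular fuzzy NPV, simplified so that the full possibilistic mean stands in for the mean of the positive part (exact when the whole support is positive); all figures are invented:

```python
def tri_mean(l, m, u):
    """Possibilistic mean of a triangular fuzzy number with support [l, u], peak m."""
    return (l + 4.0 * m + u) / 6.0

def payoff_rov(l, m, u):
    """Sketch of the fuzzy pay-off method for a triangular fuzzy NPV (l, m, u):
    value = (area of the positive part / total area) * mean of the positive part,
    with the full possibilistic mean used as the positive part's mean."""
    total = (u - l) / 2.0                    # area under a height-1 triangle
    if u <= 0:                               # entirely negative: worthless option
        return 0.0
    if l >= 0:                               # entirely positive: plain mean
        return tri_mean(l, m, u)
    if m < 0:
        raise NotImplementedError("peak below zero not handled in this sketch")
    neg = (0 - l) ** 2 / (2.0 * (m - l))     # area left of zero (rising edge)
    return ((total - neg) / total) * tri_mean(l, m, u)

print(payoff_rov(10, 50, 120))   # -> 55.0 (all-positive NPV scenario)
```

A project whose fuzzy NPV partly overlaps negative territory is discounted by the fraction of the possibility mass that lies below zero, which is how the method prices downside scenarios without a probability distribution.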
Abstract:
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized that the length of hospitalization due to pneumonia may depend on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean length of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization at lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
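A rule base of this shape (two membership functions per input, one rule per combination) can be sketched as a zero-order Sugeno system. To stay short the sketch uses only two of the four inputs, and every breakpoint and consequent value is invented, not taken from the paper's fitted model:

```python
from itertools import product

def ramp(x, a, b):
    """Piecewise-linear 'high' membership: 0 below a, rising to 1 at b."""
    return min(1.0, max(0.0, (x - a) / float(b - a)))

def memberships(so2, pm10):
    hi_s, hi_p = ramp(so2, 5, 25), ramp(pm10, 20, 60)
    return {"so2": {"low": 1 - hi_s, "high": hi_s},
            "pm10": {"low": 1 - hi_p, "high": hi_p}}

# One crisp consequent (mean days in hospital) per rule -- values invented
DAYS = {("low", "low"): 3.0, ("low", "high"): 5.0,
        ("high", "low"): 6.0, ("high", "high"): 9.0}

def predict_days(so2, pm10):
    """Weighted average of rule consequents by min-firing strength."""
    mu = memberships(so2, pm10)
    num = den = 0.0
    for s_lab, p_lab in product(["low", "high"], repeat=2):
        w = min(mu["so2"][s_lab], mu["pm10"][p_lab])   # rule firing strength
        num += w * DAYS[(s_lab, p_lab)]
        den += w
    return num / den

print(predict_days(0, 0), predict_days(100, 100))  # -> 3.0 9.0
```

With all four inputs the same `product` loop yields the abstract's 2⁴ = 16 rules.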
Abstract:
Emerging technologies have recently challenged libraries to reconsider their role as mere mediators between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners to interplay and work for shared goals and objectives. In this paper, I will draw a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, and in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages, to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, the genesis and consolidation period of the literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced in the language.
This was the beginning of a renaissance and period of enlightenment (Rueter, 2013). The linguistically oriented population can also find writings to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in the Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating morphological analyzers and online dictionaries at the Universities of Helsinki and Tromsø, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools to support linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research on language technology. The mission is to improve the usage and usability of the digitized content. During the project, we have advanced methods that refine the raw data for further use, especially in linguistic research. How does the library meet these objectives, which appear to be beyond its traditional playground?
The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages out of a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (the OCR output) often contains too many mistakes to be used as such in research, so the mistakes in the OCRed texts must be corrected. To enhance the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. It was necessary to implement this tool, since these rare and peripheral prints often include long-perished characters that are sadly neglected by modern OCR software developers but belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing application is essentially an editor of the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface: the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. The targets in traditional crowdsourcing have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the research point of view, there is a danger that the needs of linguists are not necessarily met. A remarkable downside is also the lack of a shared goal or social affinity: there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012).
There has also been criticism that the digital humanities make the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more evident when you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or longing for revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfying results for linguists. Thus, one has to carefully identify the potential niches to complete the needed tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill is suited for complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation. These communities can correspond to research more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to provide qualitative results. In nichesourcing, we hand out assignments that would precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy.
We have digitized the books in medicine, and we could try to track the words related to human organs by assigning the citizen scientists to edit and collect words with the OCR editor. From the nichesourcing’s perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay, where the language communities would benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary, which is made freely available for the public, so the society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as a servant of ‘two masters’: research and society.
Abstract:
The energy consumption of IT equipment is becoming an issue of increasing importance. In particular, network equipment such as routers and switches is a major contributor to the energy consumption of the Internet. It is therefore important to understand the relationship between input parameters such as bandwidth, number of active ports, traffic load, and hibernation mode and their impact on the energy consumption of a switch. In this paper, the energy consumption of a switch is analyzed in extensive experiments, and a fuzzy rule-based model of switch energy consumption is proposed based on the experimental results. The model can be used to predict the energy savings when deploying new switches, by controlling the parameters to achieve the desired energy consumption and subsequent performance. Furthermore, the model can also be used in further research on energy-saving techniques such as energy-efficient routing protocols and dynamic link shutdown.
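A fuzzy rule-based model of this kind can be sketched as a small Mamdani system with centroid defuzzification. The rules, membership shapes, and wattage scale below are invented for illustration, not the paper's fitted model; the inputs are assumed pre-normalized to [0, 1] and used directly as 'high' memberships:

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Output fuzzy sets over switch power in watts (shapes invented)
def energy_low(w):  return tri(w, 0.0, 10.0, 25.0)
def energy_high(w): return tri(w, 15.0, 30.0, 45.0)

def predict_power(load, ports):
    """Two-rule Mamdani inference with discrete centroid defuzzification."""
    w_low  = min(1 - load, 1 - ports)   # IF load low AND few ports THEN energy low
    w_high = max(load, ports)           # IF load high OR many ports THEN energy high
    axis = [i * 0.5 for i in range(91)]                 # 0..45 W in 0.5 W steps
    agg = [max(min(w_low, energy_low(x)),
               min(w_high, energy_high(x))) for x in axis]   # clip, then aggregate
    return sum(x * m for x, m in zip(axis, agg)) / sum(agg)  # centroid

print(round(predict_power(1.0, 1.0), 1))  # -> 30.0 (fully loaded switch)
```

Fitting the membership breakpoints to measured power draw is what turns such a skeleton into a predictive model.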
Abstract:
The National Library of Finland is implementing the Digitization Project of Kindred Languages in 2012–16. Within the project we will digitize materials in the Uralic languages as well as develop tools to support linguistic research and citizen science. Through this project, researchers will gain access to new corpora to which all users will have open access regardless of their place of residence. Our objective is to make sure that the new corpora are made available for the open and interactive use of both the academic community and the language societies as a whole. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various Uralic languages. The digitization will be completed by early 2015, when the Fenno-Ugrica collection will contain around 200,000 pages of editable text. The researchers cannot spend enough time with the material to retrieve a satisfactory amount of edited words, so the participation of a crowd in the editing work is needed. Often the targets in crowdsourcing have been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the research point of view, there is a danger that the needs of linguistic research are not necessarily met. Also, the number of pages is too high to deal with in this way. A remarkable downside is the lack of a shared goal or social affinity: there is no reward in traditional methods of crowdsourcing. Nichesourcing is a specific type of crowdsourcing where tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill is suited for the complex tasks with high-quality product expectations found in nichesourcing.
Communities have purpose and identity, and their regular interactions engender social trust and reputation. These communities can correspond to research more precisely. Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to provide qualitative results. Some selection must be made, since we are not aiming to correct all 200,000 pages we have digitized, but to give citizen scientists assignments that would precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words in anatomy. We have digitized books in medicine, and we could try to track the words related to human organs by assigning the citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay, where the language communities would benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary, which is made freely available to the public, so society can benefit too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as a servant of "two masters", research and society.
Abstract:
The principle of nationalism, under which the political and the national are congruent, can be of marked importance for the construction of autonomy regimes. Likewise, decentralization and the delegation of powers over language and education (official recognition of languages, language standardization, languages of instruction, and related curricula) allow the shaping of identities within these autonomy regimes. The result is an imperfect circular relationship in which language, community, and political institutions mutually and continuously shape one another: linguistic diversity marks and shapes autonomy arrangements, and vice versa. The legal implications of territorial and non-territorial forms of autonomy are, however, of a different kind. Whereas territorial autonomy builds on the idea of a potentially inclusive homeland for linguistic groups, for whom place of residence is decisive, non-territorial autonomy reinforces the idea of an exclusive community of self-identified members capable of self-government regardless of territorial boundaries. This thesis provides an analysis of such legal implications through comparative and institutional analyses. As a result, it proposes a series of normative and pragmatic recommendations aimed at promoting democratization processes in line with principles of multiculturalism.
Abstract:
In this study, a neuro-fuzzy estimator was developed for the estimation of the biomass concentration of the microalga Synechococcus nidulans from initial batch concentrations, aiming to predict daily productivity. Nine replicate experiments were performed. The growth was monitored daily through the optical density of the culture medium and kept constant up to the end of the exponential phase. The network training followed a full 3³ factorial design, in which the factors were the number of days in the entry vector (3, 5, and 7 days), the number of clusters (10, 30, and 50 clusters), and the internal weight softening parameter Sigma (0.30, 0.45, and 0.60). These factors were confronted with the sum of the quadratic error in the validations. The validations had 24 (A) and 18 (B) days of culture growth. The validations demonstrated that in long-term experiments (Validation A) the use of few clusters and a high Sigma is necessary. However, in short-term experiments (Validation B), Sigma did not influence the result. The optimum occurred with 3 days in the entry vector, 10 clusters, and a Sigma of 0.60, and the mean determination coefficient was 0.95. The neuro-fuzzy estimator proved to be a credible alternative for predicting microalgal growth.
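The full 3³ factorial training design simply enumerates every combination of the three factor levels reported in the abstract:

```python
from itertools import product

# Factor levels from the abstract
days     = [3, 5, 7]           # days in the entry vector
clusters = [10, 30, 50]        # number of clusters
sigmas   = [0.30, 0.45, 0.60]  # internal weight softening parameter

grid = list(product(days, clusters, sigmas))
print(len(grid))               # -> 27 training configurations
print((3, 10, 0.60) in grid)   # -> True: the reported optimum is one of them
```

Each of the 27 configurations is trained and scored by the sum of the quadratic validation error, and the optimum (3 days, 10 clusters, Sigma 0.60) is the configuration minimizing that error.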
Abstract:
The construction of offshore structures, equipment, and devices requires a high level of mechanical reliability in terms of strength, toughness, and ductility. One major site for mechanical failure, the weld joint region, needs particularly careful examination, and weld joint quality has become a major focus of research in recent times. Underwater welding carried out offshore faces specific challenges affecting the mechanical reliability of constructions completed underwater. The focus of this thesis is on improving the weld quality of underwater welding using control theory. This research identifies ways of optimizing the welding process parameters of flux cored arc welding (FCAW) during underwater welding so as to achieve the desired weld bead geometry when welding in a water environment. The weld bead geometry has no known linear relationship with the welding process parameters, which makes it difficult to attain a satisfactory weld quality. However, good weld bead geometry is achievable by controlling the welding process parameters. The doctoral dissertation comprises two sections. The first part introduces the topic of the research, discusses the mechanisms of underwater welding, and examines the effect of the water environment on the weld quality of wet welding. The second part comprises four research papers examining different aspects of underwater wet welding and its control and optimization. Issues considered include the effects of welding process parameters on weld bead geometry, optimization of FCAW process parameters, and the design of a control system for achieving a desired bead geometry that can ensure a high level of mechanical reliability in welded joints of offshore structures. The major control approaches used are artificial neural network systems and a fuzzy logic controller, which are incorporated in the control system design, together with a hybrid of fuzzy and PID controllers.
This study contributes to knowledge of possible solutions for achieving weld quality in underwater wet welding similar to that achieved when welding in air. The study shows that carefully selected steels with a very low carbon equivalent and proper control of the welding process parameters are essential in achieving good weld quality. The study provides a platform for further research in underwater welding. It promotes increased awareness of the need to improve the quality of underwater welding for offshore industries and thus minimize the risk of structural defects resulting from poor weld quality.
Abstract:
There are more than 7000 languages in the world, and many of these have emerged through linguistic divergence. While questions related to the drivers of linguistic diversity have been studied before, including studies with quantitative methods, there is no consensus as to which factors drive linguistic divergence, and how. In the thesis, I have studied linguistic divergence with a multidisciplinary approach, applying the framework and quantitative methods of evolutionary biology to language data. With quantitative methods, large datasets may be analyzed objectively, while approaches from evolutionary biology make it possible to revisit old questions (related to, for example, the shape of the phylogeny) with new methods, and adopt novel perspectives to pose novel questions. My chief focus was on the effects exerted on the speakers of a language by environmental and cultural factors. My approach was thus an ecological one, in the sense that I was interested in how the local environment affects humans and whether this human-environment connection plays a possible role in the divergence process. I studied this question in relation to the Uralic language family and to the dialects of Finnish, thus covering two different levels of divergence. However, as the Uralic languages have not previously been studied using quantitative phylogenetic methods, nor have population genetic methods been previously applied to any dialect data, I first evaluated the applicability of these biological methods to language data. I found the biological methodology to be applicable to language data, as my results were rather similar to traditional views as to both the shape of the Uralic phylogeny and the division of Finnish dialects. I also found environmental conditions, or changes in them, to be plausible inducers of linguistic divergence: whether in the first steps in the divergence process, i.e. dialect divergence, or on a large scale with the entire language family. 
My findings concerning Finnish dialects led me to conclude that the functional connection between linguistic divergence and environmental conditions may arise through human cultural adaptation to varying environmental conditions. This is also one possible explanation on the scale of the Uralic language family as a whole. The results of the thesis bring insights into several different issues in both a local and a global context. First, they shed light on the emergence of the Finnish dialects. If the approach used in the thesis is applied to the dialects of other languages, broader generalizations may be drawn as to the inducers of linguistic divergence. This again brings us closer to understanding the global patterns of linguistic diversity. Secondly, the quantitative phylogeny of the Uralic languages, with estimated times of language divergences, yields another hypothesis as to the shape and age of the language family tree. In addition, the Uralic languages can now be added to the growing list of language families studied with quantitative methods. This will allow broader inferences as to global patterns of language evolution, and more language families can be included in constructing the tree of the world's languages. Studying history through language, however, is only one way to illuminate the human past. Therefore, thirdly, the findings of the thesis, when combined with studies of other language families, and those for example in genetics and archaeology, bring us again closer to an understanding of human history.
Abstract:
Today's international business is highly related to crossing national, cultural, and linguistic borders, making communication and linguistic skills a vital part of trade. The purpose of the study is to understand the role of linguistic skills in trust creation in international business relationships. The sub-objectives are to discuss the importance of linguistic skills in the international business context, to evaluate the strategic value of trust in business relationships, and to analyze the extent to which linguistic skills affect trust formation. The scope is restricted to business-to-business markets. The theoretical background consists of different theories and previous studies related to trust and linguistic skills. Based on the theory, a new LTS framework is created to demonstrate a process model of linguistic skills affecting trust creation in international B2B relationships. This is a qualitative study using interviews as the data collection method. Altogether, eleven interviews were conducted between October 2014 and February 2015. All of the interviewees worked for organizations operating in the field of international business in B2B markets, spoke multiple languages, and had extensive experience in sales and negotiations. This study confirms that linguistic skills are an important part of international business. In many organizations English is used as a lingua franca. However, there are several benefits to speaking the mother tongue of the customer. It makes people feel more relaxed, makes the relationship more intimate, and allows the parties to continue developing it at a more personal level. From the strategic point of view, trust creates competitive advantage for a company, adding strategic value to the business. The data also supported the view that linguistic skills definitely impact the trust formation process, with quickness and ease as the main benefits.
It was seen that trust forms faster because both parties understand each other better and become more open about information sharing within a shorter period of time. These findings and the importance of linguistic skills in trust creation should be acknowledged by organizations, especially with regard to human resource management. Boundary spanners are in key positions, so special attention should be paid to hiring and educating the employees who then take care of the company's relationships. Eventually, these benefits are economic and affect the profitability of the organization.