947 results for Fuzzy analytic hierarchy process
Abstract:
Networked learning happens naturally within the social systems of which we are all part. However, in certain circumstances individuals may want to actively take the initiative to start interacting with others they are not yet regularly in exchange with. This may be the case when external influences and societal changes require innovation of existing practices. This paper proposes a framework with relevant dimensions providing insight into the precipitated characteristics of designed as well as ‘fostered or grown’ networked learning initiatives. Networked learning initiatives are characterized as “goal-directed, interest- or needs-based activities of a group of (at least three) individuals that initiate interaction across the boundaries of their regular social systems”. The proposed framework is based on two existing research traditions, namely 'networked learning' and 'learning networks', comparing, integrating and building upon knowledge from both perspectives. We uncover some interesting differences between definitions, but also similarities in the way they describe what ‘networked’ means and how learning is conceptualized. We think it is productive to combine both research perspectives, since they both study the process of learning in networks extensively, albeit from different points of view, and their combination can provide valuable insights into networked learning initiatives. We uncover important features of networked learning initiatives, characterize the actors and connections of which they are composed, and identify the conditions which facilitate and support them. The resulting framework can be used both for analytic purposes and (partly) as a design framework. The framework acknowledges that not all successful networks have the same characteristics: there is no standard ‘constellation’ of people, roles, rules, tools and artefacts, although there are indications that some network structures work better than others. Interactions of individuals can only be designed and fostered to a certain degree: the type of network and its ‘growth’ (e.g. in terms of the quantity of people involved, or the quality and relevance of co-created concepts, ideas, artefacts and solutions to its ‘inhabitants’) is in the hands of the people involved. Therefore, the framework consists of dimensions on a sliding scale. It introduces a structured and analytic way to look at the precipitation of networked learning initiatives: learning networks. Further research on the application of this framework and feedback from the networked learning community are needed to validate its usability and value for both research and practice.
Abstract:
Motivated by environmental protection concerns, monitoring the flue gas of thermal power plants is now often mandatory to ensure that emission levels stay within safe limits. Optical gas-sensing systems are increasingly employed for this purpose, with regression techniques used to relate gas optical absorption spectra to the concentrations of specific gas components of interest (NOx, SO2, etc.). Accurately predicting gas concentrations from absorption spectra remains a challenging problem due to the presence of nonlinearities in the relationships and the high-dimensional and correlated nature of the spectral data. This article proposes a generalized fuzzy linguistic model (GFLM) to address this challenge. The GFLM is made up of a series of “If-Then” fuzzy rules. The absorption spectra are the input variables in the rule antecedents. The rule consequent is a general nonlinear polynomial function of the absorption spectra. Model parameters are estimated using least squares and gradient descent optimization algorithms. The performance of the GFLM is compared with other traditional prediction models, such as partial least squares, support vector machines, multilayer perceptron neural networks and radial basis function networks, on two real flue gas spectral datasets: one from a coal-fired power plant and one from a gas-fired power plant. The experimental results show that the generalized fuzzy linguistic model has good predictive ability and is competitive with alternative approaches, while having the added advantage of providing an interpretable model.
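To make the rule structure concrete, the sketch below fits a small fuzzy rule-based regressor with polynomial consequents by least squares on synthetic spectra. It illustrates the general idea only; the Gaussian antecedents, rule count, feature construction and data are assumptions, and this is not the authors' GFLM.

```python
# Minimal sketch of a fuzzy "If-Then" rule model with polynomial consequents,
# fitted by least squares on synthetic spectra. This is NOT the authors' GFLM;
# membership shapes, rule count and features are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "absorption spectra": n samples, d wavelength channels (assumed data).
n, d = 200, 20
X = rng.uniform(0.0, 1.0, size=(n, d))
y = 5.0 * X[:, 3] ** 2 + 2.0 * X[:, 7] + rng.normal(0.0, 0.05, n)  # toy concentration

def gaussian_membership(x, center, width):
    """Gaussian membership of each sample to a rule centered in feature space."""
    return np.exp(-np.sum((x - center) ** 2, axis=1) / (2.0 * width ** 2))

# Rule antecedents: a few centers picked from the data (clustering would be typical).
n_rules, width = 3, 0.8
centers = X[rng.choice(n, n_rules, replace=False)]

# Firing strengths, normalized across rules.
W = np.stack([gaussian_membership(X, c, width) for c in centers], axis=1)
W = W / W.sum(axis=1, keepdims=True)

# Rule consequents: quadratic polynomial of the inputs (bias + linear + squared terms).
Phi = np.hstack([np.ones((n, 1)), X, X ** 2])

# Each rule contributes its own polynomial, weighted by its normalized firing
# strength; all coefficients are solved jointly by least squares.
D = np.hstack([W[:, [r]] * Phi for r in range(n_rules)])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

y_hat = D @ coef
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```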
Abstract:
Efficient design of material provisioning processes is a decisive prerequisite for ensuring high availability of materials in assembly. The selection of adequate provisioning strategies must always be guided by the requirements of the material provisioning process. The performance requirements for effective material provisioning are largely determined by the assembly process, and a precisely fitting material provisioning strategy must be matched to these requirements. The performance requirements can be formulated in qualitative or quantitative form. Considering quantitative data alone is insufficient, because at the time of planning reliable quantitative data are often unavailable, and the effort required to obtain them rarely appears justified. Moreover, the conventional methods frequently used for selecting material provisioning strategies have the disadvantage that failing to meet one performance requirement can be compensated by particularly good fulfilment of another (time vs. quality). To select a material provisioning strategy while taking both qualitative and quantitative requirements into account, the Fuzzy Axiomatic Design method is particularly well suited. This method allows the requirements of the material provisioning process to be matched against the suitability of different material provisioning strategies.
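As an illustration of how Fuzzy Axiomatic Design can score the match between a requirement and a candidate strategy, the sketch below compares a triangular "design range" (what is required) with a triangular "system range" (what a strategy delivers) and computes the information content I = log2(system area / common area). The strategy names, scales and numbers are hypothetical and not taken from the paper.

```python
# Minimal sketch of the Fuzzy Axiomatic Design scoring idea with triangular
# fuzzy numbers. All values below are illustrative assumptions.
import numpy as np

def tri_area(a, b, c):
    """Area under a triangular fuzzy number (a, b, c) with peak membership 1."""
    return 0.5 * (c - a)

def common_area(design, system, step=1e-3):
    """Numerically integrate min(mu_design, mu_system) over the union of supports."""
    lo, hi = min(design[0], system[0]), max(design[2], system[2])
    xs = np.arange(lo, hi, step)
    mu_d = np.interp(xs, design, [0.0, 1.0, 0.0], left=0.0, right=0.0)
    mu_s = np.interp(xs, system, [0.0, 1.0, 0.0], left=0.0, right=0.0)
    return float(np.sum(np.minimum(mu_d, mu_s)) * step)

def information_content(design, system):
    ca = common_area(design, system)
    if ca == 0.0:
        return np.inf  # no overlap: the strategy cannot satisfy the requirement
    return np.log2(tri_area(*system) / ca)

# Hypothetical requirement "replenishment time" on a normalized 0..10 scale.
design_range = (2.0, 4.0, 6.0)
strategies = {"kanban": (3.0, 5.0, 7.0), "just-in-sequence": (1.0, 2.0, 3.0)}
for name, system_range in strategies.items():
    print(name, round(information_content(design_range, system_range), 3))
```

Lower information content indicates a better fit; summing it across all requirements gives a ranking of the candidate strategies.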
Abstract:
This work links the theories proposed in two previous studies: the transformation of crisp values into fuzzy values and the construction of fuzzy control charts. The result of this combination is a fuzzy control chart that was applied to a yogurt production process, in which the analyzed variables were color, aroma, consistency, flavor and acidity. These are characteristics that depend on individual perception, so sensory analysis was used to collect information about them. In the analyses, a group of judges individually assigned scores to each yogurt sample on a scale from 0 to 10. These crisp values, the scores assigned by the judges, were then transformed into fuzzy values in the form of triangular fuzzy numbers. From the fuzzy numbers, fuzzy control charts for the mean and range were built. From the crisp values, Shewhart control charts for the mean and range, already well established in the literature, were built. Finally, the results obtained with the traditional charts were compared with those obtained with the fuzzy control charts. It was observed that the fuzzy control chart seems to reflect the reality of the process more closely, since the construction of the fuzzy number takes the variability of the process into account. In addition, it characterizes the production process at several levels, in which the process is not always either fully in control or fully out of control. This is in line with fuzzy theory: if certain results cannot be predicted exactly, it is better to have a margin of acceptance, which leads to a reduction of errors.
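A minimal sketch of the crisp-to-fuzzy step and of a fuzzy mean chart built from it is given below, using synthetic judge scores. The (min, mean, max) triangular construction, the chart constant and the classification thresholds are generic illustrations, not the exact rules used in the work.

```python
# Minimal sketch of a fuzzy X-bar chart built from sensory scores: each subgroup
# of judge scores (crisp, 0-10) becomes a triangular fuzzy number (min, mean, max),
# fuzzy limits are derived with the usual X-bar/R constants, and each sample is
# graded by how much of its triangle lies inside the limits. Assumed data only.
import numpy as np

rng = np.random.default_rng(1)
A2 = 0.577  # X-bar chart constant for subgroups of n = 5 judges

# Assumed data: 25 yogurt samples x 5 judges, scores on a 0-10 scale.
scores = np.clip(rng.normal(7.0, 0.8, size=(25, 5)), 0, 10)

# Crisp -> fuzzy: one triangular number (a, m, b) per sample.
tfn = np.column_stack([scores.min(axis=1), scores.mean(axis=1), scores.max(axis=1)])

# Center line and limits from the midpoints and the subgroup ranges.
center = tfn[:, 1].mean()
r_bar = (tfn[:, 2] - tfn[:, 0]).mean()
ucl, lcl = center + A2 * r_bar, center - A2 * r_bar

def in_control_degree(a, m, b, lcl, ucl):
    """Fraction of the triangle's base that lies within the control limits."""
    inside = max(0.0, min(b, ucl) - max(a, lcl))
    return inside / (b - a) if b > a else float(lcl <= m <= ucl)

for i, (a, m, b) in enumerate(tfn):
    d = in_control_degree(a, m, b, lcl, ucl)
    state = "in control" if d > 0.85 else "partially in control" if d > 0.5 else "out of control"
    print(f"sample {i + 1:02d}: degree={d:.2f} -> {state}")
```

The graded "partially in control" state is what distinguishes the fuzzy chart from the crisp Shewhart decision rule.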
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, i.e. Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the thesis’ aim is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into the three following strands: the investigation of the principle itself, the investigation of its application in articulatory effort and the investigation of its application in phonological processes. The structure of the thesis reflects the division of its broad aims. The first part of the thesis presents its theoretical background (Chapter One and Chapter Two), the second part of the thesis deals with the application of least effort in articulatory effort (Chapter Three and Chapter Four), whereas the third part discusses the principle of least effort in phonological processes (Chapter Five and Chapter Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews various names which denote least effort, explains the origins of the principle and reviews the literature devoted to the principle of least effort in chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. It emphasizes the universal character of the principle with examples from the linguistic field (low-level phonetic processes and language universals) and from non-linguistic fields (physics, biology, psychology and cognitive sciences), demonstrating that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis’ title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the thesis’ theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, overviewing its literature as well as various understandings in a chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. The thesis’ aim is to investigate the principle of least effort on the basis of external evidence; therefore, Chapters Four and Six provide evidence in terms of three experiments, text message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of experiments. It describes the experiments in terms of their predictions and methodology.
In particular, it discusses the adopted measure of effort established by means of the effort parameters as well as their status. The statistical methods of the experiments are also clarified. The chapter reports on the results of the experiments, presenting them in a graphical way and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers’ preferences with reference to articulatory effort (Figures 30, 31). The thesis investigates the principle of least effort in phonological processes, thus Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology as well as the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky’s (1997) contribution to NP with respect to processes in terms of their typology and incorporation of articulatory gestures in the concept of a process. Chapter Six investigates phonological processes. In particular, it identifies the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and their typology. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, which is an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed in the thesis, enumerates the proposals made by the thesis in discussing the concepts and presents some questions for future research which have emerged in the course of investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.
Abstract:
This work presents a proposal to detect the interface in atmospheric oil tanks by installing a differential pressure level transmitter to infer the oil-water interface. The main goal of this project is to maximize the quantity of free water delivered to the drainage line by controlling the interface. A fuzzy controller has been implemented using the interface transmitter as the process variable. Two ladder routines were created to perform the control. One routine computes the error and the error variation. The other implements the fuzzy controller itself. Using rules, the fuzzy controller maps these variables to the output, which is the position variation of the drainage valve. Although the ladder routines were implemented in an Allen-Bradley PLC of the ControlLogix family, they can be implemented in any brand of PLC.
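The sketch below mirrors the two routines described above: one computes the error and error variation from the interface level, the other applies a small fuzzy rule base to produce a drainage-valve position increment. Membership ranges, setpoint, rule table and sign conventions are illustrative assumptions, not plant values or the ladder logic itself.

```python
# Minimal Python sketch of the two control routines: error/error-variation
# calculation plus a fuzzy rule base with weighted-average defuzzification.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(v, span):
    """Memberships to Negative / Zero / Positive over a symmetric span."""
    v = max(-0.999 * span, min(v, 0.999 * span))  # clamp to keep some rule firing
    return {"N": tri(v, -span, -span / 2, 0), "Z": tri(v, -span / 2, 0, span / 2),
            "P": tri(v, 0, span / 2, span)}

# Rule table: (error label, delta-error label) -> crisp valve increment (%).
RULES = {("N", "N"): -4, ("N", "Z"): -2, ("N", "P"): 0,
         ("Z", "N"): -1, ("Z", "Z"): 0, ("Z", "P"): 1,
         ("P", "N"): 0, ("P", "Z"): 2, ("P", "P"): 4}

def fuzzy_step(level, setpoint, prev_error, err_span=20.0, derr_span=5.0):
    error = level - setpoint          # routine 1: positive when interface is high
    d_error = error - prev_error      # ... and its variation since the last scan
    mu_e, mu_de = fuzzify(error, err_span), fuzzify(d_error, derr_span)
    num = den = 0.0
    for (le, lde), out in RULES.items():  # routine 2: rule firing and defuzzification
        w = min(mu_e[le], mu_de[lde])
        num, den = num + w * out, den + w
    return (num / den if den else 0.0), error

# Usage: one control cycle with an assumed interface level of 62% and setpoint 50%.
dv, err = fuzzy_step(level=62.0, setpoint=50.0, prev_error=10.0)
print(f"valve position change: {dv:+.2f} %")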
Abstract:
From their early days, Electrical Submersible Pumping (ESP) units have excelled at lifting much greater liquid rates than most other types of artificial lift, and they have shown good performance in wells with high BSW, in onshore and offshore environments. For every artificial lift system, the lifetime of the equipment and the frequency of interventions are of paramount importance, given the high costs of rigs and equipment, plus the losses caused by a halt in production. The pursuit of a longer system life brings the need to operate with efficiency and safety within the limits of the equipment, which implies the need for periodic adjustments, monitoring and control. As the drive to minimize direct human intervention increases, these adjustments should increasingly be made via automation. An automated system not only provides a longer life, but also greater control over the production of the well. The controller is the brain of most automation systems; it holds the logic and strategies of the work process so that the process operates efficiently. Given the importance of control for any automation system, it is expected that, with a better understanding of the ESP system and the development of research, many controllers will be proposed for this artificial lift method. Once a controller is proposed, it must be tested and validated before it can be accepted as efficient and functional. Using a producing well or a test well could facilitate such testing, but with the serious risk that flaws in the design of the controller could damage oil well equipment, much of it expensive. Given this reality, the main objective of the present work is to present an environment for the evaluation of fuzzy controllers for wells equipped with an ESP system, using a computer simulator representing a virtual oil well, software for designing fuzzy controllers, and a PLC. The use of the proposed environment will allow a reduction in the time required for testing and adjusting the controller, and a rapid diagnosis of its efficiency and effectiveness. The control algorithms are implemented both in a high-level language, through the controller design software, and in a PLC-specific programming language, the Ladder Diagram language.
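To show the shape of such an evaluation loop, the skeleton below runs a candidate controller against a virtual well in software before any real equipment is involved. The first-order well model, the static frequency-pressure characteristic and all tuning numbers are toy assumptions; the actual environment described above uses a dedicated simulator, a fuzzy design tool and a PLC rather than this in-process loop.

```python
# Minimal software-in-the-loop skeleton: a toy well model plus a placeholder
# controller standing in for the fuzzy controller under test. Assumed values only.
class ToyWell:
    """Intake pressure drifts toward a frequency-dependent target (first-order lag)."""
    def __init__(self, pressure=80.0):
        self.pressure = pressure  # bar (assumed)

    def step(self, freq_hz, dt=1.0, tau=30.0):
        target = 120.0 - 1.0 * freq_hz        # assumed static characteristic
        self.pressure += (target - self.pressure) * dt / tau
        return self.pressure

def candidate_controller(pressure, setpoint, freq, gain=0.05, f_min=35.0, f_max=65.0):
    """Placeholder law: replace with the fuzzy controller under evaluation."""
    freq += gain * (pressure - setpoint)      # higher intake pressure -> speed up pump
    return min(max(freq, f_min), f_max)

well, freq, setpoint = ToyWell(), 45.0, 60.0
for t in range(600):                          # 10 simulated minutes, 1 s scan time
    p = well.step(freq)
    freq = candidate_controller(p, setpoint, freq)
    if t % 120 == 0:
        print(f"t={t:4d}s  pressure={p:6.2f} bar  freq={freq:5.2f} Hz")
```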
Abstract:
This study took place at one of the intercultural universities (IUs) of Mexico that serve primarily indigenous students. The IUs are pioneers in higher education despite their numerous challenges (Bertely, 1998; Dietz, 2008; Pineda & Landorf, 2010; Schmelkes, 2009). To overcome educational inequalities among their students (Ahuja, Berumen, Casillas, Crispín, Delgado et al., 2004; Schmelkes, 2009), the IUs have embraced performance-based assessment (PBA; Casillas & Santini, 2006). PBA allows a shared model of power and control related to learning and evaluation (Anderson, 1998). While conducting a review of the PBA strategies of the IUs, the researcher did not find a PBA instrument with valid and reliable estimates. The purpose of this study was to develop a process to create a PBA instrument, an analytic general rubric, with acceptable validity and reliability estimates to assess students’ attainment of competencies in one of the IU’s majors, Intercultural Development Management. The Human Capabilities Approach (HCA) was the theoretical framework and a sequential mixed method (Creswell, 2003; Teddlie & Tashakkori, 2009) was the research design. IU participants created a rubric during two focus groups, and seven Spanish-speaking professors in Mexico and the US piloted it using students’ research projects. The evidence that demonstrates the attainment of competencies at the IU is a complex set of actual, potential and/or desired performances or achievements, also conceptualized as “functional capabilities” (FCs; Walker, 2008), that can be used to develop a rubric. Results indicate that the rubric’s validity and reliability reached acceptable estimates of 80% agreement, surpassing minimum requirements (Newman, Newman, & Newman, 2011). Implications for practice involve the use of PBA within a formative assessment framework and the dynamic inclusion of constituencies. Recommendations for further research include introducing this study’s instrument-development process to other IUs, conducting parallel mixed design studies exploring the intersection between HCA and assessment, and conducting a case study exploring assessment in intercultural settings. Education articulated through the HCA empowers students (Unterhalter & Brighouse, 2007; Walker, 2008). This study aimed to contribute to the quality of student learning assessment at the IUs by providing a participatory process to develop a PBA instrument.
Abstract:
Previous research with the ratio-bias task found larger response latencies for conflict trials, where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success), than for no-conflict trials, where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or because of a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal-ratio trials) affected H but not C. These findings create new challenges for the debate between dual-process and single-process accounts, which are discussed.
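For readers unfamiliar with the process-dissociation procedure, the following is a sketch of the standard estimation logic as it is typically applied to this kind of task; the exact parameterization used in the study may differ. On conflict trials the heuristic response can only win when controlled processing fails, whereas on no-conflict trials the same response follows from either process:

\[ P(\text{heuristic} \mid \text{conflict}) = H(1 - C), \qquad P(\text{heuristic-consistent} \mid \text{no-conflict}) = C + H(1 - C) \]

\[ \Rightarrow \quad C = P_{\text{no-conflict}} - P_{\text{conflict}}, \qquad H = \frac{P_{\text{conflict}}}{1 - C} \]

Solving the two trial-type probabilities in this way is what lets manipulations be attributed separately to C (controlled/analytic) or H (heuristic) processing.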
Abstract:
This work falls within one of the major fields of organizational studies: strategy. The classical perspective in this field promoted the idea that projecting oneself into the future implies designing a plan (a series of deliberate actions). Later advances showed that strategy could be understood in other ways. However, the evolution of the field to some extent privileged the classical view, establishing, for example, multiple models for 'formulating' a strategy while relegating to second place the way in which strategy can 'emerge'. The purpose of this research is therefore to contribute to the current level of understanding of emergent strategies in organizations. To do so, a concept opposed to, yet complementary with, 'planning', and in fact very close in nature to this type of strategy, was considered: improvisation. Since this concept has drawn on valuable contributions from the world of music, the knowledge of that domain was consulted, using 'the metaphor' as a theoretical device to understand it and achieve the proposed objective. The results show that 1) deliberate and emergent strategies coexist and complement each other, 2) improvisation is always present in the organizational context, 3) improvisation is more intense in the 'how' of strategy than in the 'what' and, contrary to the conventional view, 4) a certain amount of preparation is required in order to improvise adequately.
Abstract:
Intelligent systems are currently inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems. Poor software functioning may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis for human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline to support automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keeping systems operational. Logs are characterized as big data assembled in large-flow streams, being unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses. LP provides deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach automatic parsing of system logs. All the methods perform recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN reached (96.17 ± 0.78)%, eGFC obtained (92.48 ± 1.21)%, and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP additionally generates a log grammar and presents a higher level of model interpretability.
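The sketch below illustrates the create/update mechanics shared by evolving granular classifiers: incoming samples either reinforce an existing information granule or, if none fits well enough, spawn a new one. It is a generic illustration of the idea, not an implementation of FBeM, eGNN, eGFC or eLP; the Gaussian granule shape, thresholds and the toy log-feature stream are assumptions.

```python
# Generic evolving granular classifier sketch: one-pass granule creation and
# update on a stream, evaluated prequentially (predict-then-learn). Assumed data.
import numpy as np

class EvolvingGranularClassifier:
    def __init__(self, rho=0.3, sigma0=0.6):
        self.rho = rho          # minimum membership required to reuse a granule
        self.sigma0 = sigma0    # initial spread of a newly created granule
        self.granules = []      # each granule: dict(center, sigma, count, label)

    def _membership(self, x, g):
        d2 = np.sum((x - g["center"]) ** 2)
        return np.exp(-d2 / (2.0 * g["sigma"] ** 2))

    def predict(self, x):
        if not self.granules:
            return None
        return max(self.granules, key=lambda g: self._membership(x, g))["label"]

    def learn(self, x, label):
        """Reuse the best matching granule of the same label or create a new one."""
        x = np.asarray(x, dtype=float)
        best = max(self.granules, key=lambda g: self._membership(x, g), default=None)
        if best is None or self._membership(x, best) < self.rho or best["label"] != label:
            self.granules.append({"center": x.copy(), "sigma": self.sigma0,
                                  "count": 1, "label": label})
        else:
            best["count"] += 1
            best["center"] += (x - best["center"]) / best["count"]  # running mean

# Usage on a toy anomaly-detection stream: label 1 = anomalous window, 0 = normal.
rng = np.random.default_rng(2)
clf = EvolvingGranularClassifier()
hits = total = 0
for _ in range(500):
    label = int(rng.random() < 0.1)
    x = rng.normal(3.0 if label else 0.0, 0.5, size=2)  # assumed 2-D log features
    pred = clf.predict(x)
    hits += int(pred == label)
    total += 1
    clf.learn(x, label)
print(f"prequential accuracy: {hits / total:.2%}")
```

Merging and deleting inactive granules, as the methods above do, would be further recursive steps on top of this create/update core.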
Abstract:
Systemic lupus erythematosus is an autoimmune disease with many psychological repercussions, which have been studied through qualitative research. Such studies are considered relevant because they reveal the breadth of what patients experience. Given this importance, this study aims to map the qualitative production on this theme, derived from studies of the experiences of adult patients of both genders that used semi-structured interviews and/or field observations as tools and applied a saturation criterion to determine the number of participants in each study. The survey was conducted in the PubMed, Lilacs, PsycINFO and Cochrane databases, searching for publications in English and Portuguese published between January 2005 and June 2012. The 19 reviewed papers, which dealt with patients in the acute phase of the disease, presented themes that were categorized into eight topics covering the process experienced at various stages, from the onset of the disease, through the knowledge of the diagnosis, the understanding of the manifestations of the disease, drug treatment and general care, to evolution and prognosis. The collected papers also point to patients' difficulty in understanding what the remission phase consists of, revealing that this is a clinical stage underexplored by psychological studies.