950 results for "Modelos fuzzy set"


Relevance:

30.00%

Publisher:

Abstract:

Final Master's project submitted for the degree of Master in Mechanical Engineering

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT - This study was driven by the need to make the most of the installed capacity and of the technological and human resources available in the Operating Room, and to meet the corresponding demands of adequate performance and effectiveness in these services. Objectives: The project focused on four specific objectives: development of an observation grid for Operating Room management models; in-loco observation of six national Operating Rooms according to that grid; assessment of the quality of management in the selected sample in light of existing models; and creation of a set of indicators for monitoring and evaluating Operating Rooms. Methodology: The observation grid was designed with input from a panel of experts, the available literature and information gathered in interviews. The grid was applied to the six Operating Rooms and the information on each management model was analysed to identify its key distinguishing features. To develop the set of monitoring indicators, we held a meeting using the Nominal Group Technique to establish the level of consensus among the experts. Results: We created an observation grid for Operating Room management models that allows their management characteristics to be compared. Applying it to six Operating Rooms highlighted the main differentiating elements: the incentive system in place; the IT system, covering communication between services and direct charging of expenses; the existence of an Operating Room management team and of risk management; and the importance of weekly surgical planning and of Operating Room regulations. We also designed a panel of monitoring indicators, notably: average downtime due to technical reasons, average downtime due to operational reasons, average time per team and average time per procedure. Final considerations: Operating Rooms should weigh the presence of the most important components of the management models and collect monitoring indicators exhaustively. Future research should address the relationship between monitoring indicators and management models, using benchmarking techniques.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a review of the literature on physics-based modelling of power semiconductor devices, followed by an analysis of the performance of two stochastic methods, Particle Swarm Optimization (PSO) and Simulated Annealing (SA), when used for the efficient identification of the parameters of physics-based power semiconductor device models. Knowing the values of these parameters for each device is essential for an accurate simulation of the semiconductor's dynamic behaviour. The parameters are extracted step by step during transient simulation and play a central role. Another strand of this thesis concerns the modelling methods for power devices that have emerged in recent years, combining high accuracy with low execution time, based on the Ambipolar Diffusion Equation (ADE) for power diodes and implemented in MATLAB within a formal optimization strategy. The ADE is solved numerically under several injection conditions, and the model is developed and implemented as a subcircuit in the IsSpice simulator. Depletion-layer widths, total device area and doping level, among others, are some of the parameters extracted from the model. Parameter extraction is an important part of model development: the goal of extraction and optimization is to determine the model parameter values that minimise the differences between a set of measured characteristics and the results obtained by simulating the device model, a minimisation process often called fitting the model characteristics to the measured data. The implemented algorithm, PSO, is a promising and efficient heuristic optimization technique, proposed by Kennedy and Eberhart and based on social behaviour. The proposed techniques prove to be robust and able to reach a solution that is both accurate and global. The performance of the proposed technique was compared against the previously implemented SA algorithm using experimental data, extracting the parameters of real devices from measured I-V characteristics. To validate the model, results of the developed model are compared with those of a previously developed model.
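
The abstract above carries no code; as a loose, self-contained sketch of PSO-based parameter extraction, a textbook diode law and synthetic data stand in for the thesis's ambipolar-diffusion model (the `model` and `cost` functions and all constants are hypothetical). A global-best PSO fitting the saturation current and ideality factor from an I-V curve might look like this:

```python
import numpy as np

# Hypothetical diode law standing in for the thesis's physics-based model.
Vt = 0.02585  # thermal voltage at ~300 K (volts)

def model(v, p):
    # p = (Is, n): saturation current and ideality factor
    return p[0] * (np.exp(v / (p[1] * Vt)) - 1.0)

rng = np.random.default_rng(1)
v_meas = np.linspace(0.1, 0.7, 25)
i_meas = model(v_meas, (1e-12, 1.8))  # synthetic "measured" I-V characteristic

def cost(p):
    # log-scale squared error, since currents span many decades
    return np.sum((np.log(model(v_meas, p)) - np.log(i_meas)) ** 2)

# Standard global-best PSO (Kennedy & Eberhart) over the two parameters.
lo, hi = np.array([1e-14, 1.0]), np.array([1e-9, 2.5])
n_part, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = lo + (hi - lo) * rng.random((n_part, 2))   # particle positions
vel = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[pcost.argmin()].copy()               # global best

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 2))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, lo, hi)
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[pcost.argmin()].copy()

print("estimated (Is, n):", g)  # should approach the true (1e-12, 1.8)
```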

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Electrical and Computer Engineering at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance:

30.00%

Publisher:

Abstract:

Master's internship report in Mathematics Teaching in the 3rd Cycle of Basic Education and in Secondary Education

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation in Informatics Engineering

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation in Systems Engineering

Relevance:

30.00%

Publisher:

Abstract:

Integrated Master's dissertation in Information Systems Engineering and Management

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation in Management and Public Policy

Relevance:

30.00%

Publisher:

Abstract:

When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories with the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix. In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
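
As a minimal sketch of the fuzzy coding described above, with triangular membership functions over three hinge points (the function name, data and hinge choices are illustrative, not taken from the paper):

```python
import numpy as np

def fuzzy_code(x, hinges):
    # Triangular membership functions over three hinge points z1 < z2 < z3;
    # each observation becomes three memberships in [0, 1] that sum to 1.
    z1, z2, z3 = hinges
    x = np.clip(np.asarray(x, float), z1, z3)
    m = np.zeros((x.size, 3))
    lo = x <= z2
    m[lo, 0] = (z2 - x[lo]) / (z2 - z1)
    m[lo, 1] = 1.0 - m[lo, 0]
    m[~lo, 2] = (x[~lo] - z2) / (z3 - z2)
    m[~lo, 1] = 1.0 - m[~lo, 2]
    return m

temps = np.array([2.0, 8.5, 11.5, 19.0, 24.0])  # hypothetical measurements
hinges = np.percentile(temps, [0, 50, 100])     # e.g. minimum, median, maximum
M = fuzzy_code(temps, hinges)
print(M)              # fuzzy-coded data; crisp coding would round each row to 0/1
print(M.sum(axis=1))  # rows sum to 1 by construction
```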

Relevance:

30.00%

Publisher:

Abstract:

The educational system in Spain is undergoing a reorganization. At present, high-school graduates who want to enroll at a public university must take a set of examinations, the Pruebas de Aptitud para el Acceso a la Universidad (PAAU). A "new formula" (components, weights, type of exam, ...) for university admission is being discussed. The present paper summarizes part of the research done by the author in her PhD thesis. The context for this thesis is the evaluation of large-scale and complex systems of assessment. The main objectives were: to achieve a deep knowledge of the entire university admissions process in Spain, to discover the main sources of uncertainty, and to promote empirical research aimed at the continual improvement of the entire process. Focusing on suitable statistical models and strategies that highlight the imperfections of the system and help reduce them, the paper develops, among other approaches, some applications of multilevel modeling.
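
The abstract names multilevel modeling without detailing it; the following is a purely hypothetical sketch of a two-level random-intercept model of the kind often used for admission data (students nested within schools; all variable names and numbers are invented), using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate students (level 1) nested in schools (level 2): each school shifts
# the admission score by a random amount, on top of a common effect of GPA.
rng = np.random.default_rng(4)
n_schools, n_students = 30, 40
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0.0, 0.8, n_schools)[school]
gpa = rng.normal(6.5, 1.0, school.size)
score = 1.0 + 0.7 * gpa + school_effect + rng.normal(0.0, 0.5, school.size)
df = pd.DataFrame({"score": score, "gpa": gpa, "school": school})

# Random-intercept model: score ~ gpa, with a variance component per school
# quantifying between-school variability (one source of uncertainty).
fit = smf.mixedlm("score ~ gpa", df, groups=df["school"]).fit()
print(fit.summary())
```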

Relevance:

30.00%

Publisher:

Abstract:

A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those that are based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way the quality of fit of the biplot is measured, since it is well known that regular (i.e., crisp) multiple correspondence analysis seriously underestimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.
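
A small sketch of the defuzzification step, assuming the triangular coding shown earlier: the membership-weighted sum of the hinge points recovers the original values exactly inside the coding range, which is what makes the orthogonal variance decomposition (and hence a percentage-of-variance fit measure) possible. In the paper the reconstruction passes through the low-dimensional fuzzy MCA solution; here we only verify the exactness of defuzzification itself, on invented data:

```python
import numpy as np

def fuzzy_code(x, z):
    # Triangular coding with hinge points z[0] < z[1] < z[2], as before.
    x = np.clip(np.asarray(x, float), z[0], z[2])
    m = np.zeros((x.size, 3))
    lo = x <= z[1]
    m[lo, 0] = (z[1] - x[lo]) / (z[1] - z[0]); m[lo, 1] = 1.0 - m[lo, 0]
    m[~lo, 2] = (x[~lo] - z[1]) / (z[2] - z[1]); m[~lo, 1] = 1.0 - m[~lo, 2]
    return m

rng = np.random.default_rng(0)
x = rng.normal(15.0, 5.0, 200)            # hypothetical continuous variable
z = np.percentile(x, [0, 50, 100])        # hinges at min / median / max
x_hat = fuzzy_code(x, z) @ z              # defuzzify: membership-weighted hinges
fit = 100.0 * (1.0 - np.sum((x - x_hat) ** 2) / np.sum((x - x.mean()) ** 2))
print(f"variance explained: {fit:.1f}%")  # 100% here: defuzzification is exact
```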

Relevance:

30.00%

Publisher:

Abstract:

Due to the large number of characteristics, there is a need to extract the most relevant features from the input data so that the amount of information lost is minimal and the classification performed on the projected data remains faithful to the original data. Different statistical techniques, such as principal component analysis (PCA), may be used to achieve this feature extraction. This thesis describes an extension of PCA that allows a finite number of relevant features to be extracted from high-dimensional fuzzy and noisy data. PCA finds linear combinations of the original measurement variables that describe the significant variation in the data. The two proposed methods were compared using postoperative patient data, and the experimental results demonstrate their applicability to complex data. Fuzzy PCA was then used in a classification problem: classification was performed with a similarity classifier whose total-similarity-measure weights were optimized with a differential evolution algorithm. The thesis presents the comparison of classification results based on the data obtained from the fuzzy PCA.
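
For orientation, a minimal sketch of ordinary (crisp) PCA via the singular value decomposition; the thesis's fuzzy extension of this step is not reproduced here, and the data are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))    # hypothetical high-dimensional data
Xc = X - X.mean(axis=0)          # centre each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                            # number of extracted features
scores = U[:, :k] * s[:k]        # projected data, used downstream for classification
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"{explained:.1%} of total variance retained by {k} components")
```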

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a topological approach to studying fuzzy sets by means of modifier operators. Modifier operators are mathematical models, e.g., for hedges, and we briefly present different approaches to studying them. We are interested in compositional modifier operators, modifiers for short, which depend on binary relations. We show that if a modifier depends on a reflexive and transitive binary relation on U, then there exists a unique topology on U such that this modifier is the closure operator of that topology. Moreover, if U is finite then there exists a lattice isomorphism between the class of all reflexive and transitive relations and the class of all topologies on U. We define a topological similarity relation "≈" between L-fuzzy sets in a universe U, and show that the class L^U/≈ is isomorphic to the class of all topologies on U, provided U is finite and L is suitable. We consider finite bitopological spaces as approximation spaces, and we show that lower and upper approximations can be computed by means of α-level sets also in the case of equivalence relations; this means that approximations in the sense of Rough Set Theory can be computed by means of α-level sets. Finally, we present an application to data analysis: we study an approach to detecting dependencies of attributes in database-like systems, called information systems.
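
A small finite sketch of the central result, in its crisp special case (the thesis works with L-fuzzy sets): for a reflexive and transitive relation R on a finite universe U, the compositional modifier satisfies the Kuratowski closure axioms, so it is the closure operator of a topology on U. The universe, relation and direction convention below are illustrative assumptions:

```python
from itertools import chain, combinations

# Finite universe and a reflexive, transitive relation R on it.
U = {0, 1, 2}
R = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)}

def mod(A):
    # Compositional modifier: x is in mod(A) iff x is R-related to some a in A.
    return frozenset(x for x in U for a in A if (x, a) in R)

def subsets(S):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

# Kuratowski closure axioms => mod is the closure operator of a topology on U.
assert mod(frozenset()) == frozenset()
for A in subsets(U):
    assert A <= mod(A)                        # extensive  (R reflexive)
    assert mod(mod(A)) == mod(A)              # idempotent (R transitive)
    for B in subsets(U):
        assert mod(A | B) == mod(A) | mod(B)  # preserves unions
print("mod is a topological closure operator on U")
```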

Relevance:

30.00%

Publisher:

Abstract:

The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change-point detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series using a meaningful representation and a fuzzy inference system. Change-point detection is based on a shape-space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method and then on a financial data set to test its general applicability. A comparison to a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational cost. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts not familiar with data mining.
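
A toy sketch of the idea, not the paper's actual shape-space representation or inference system (data, window length and membership parameters are all invented): fit a local polynomial per sliding window and score its slope against a linguistic term such as "sharp increase":

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "crime count" series with an abrupt level shift at t = 60.
series = np.concatenate([rng.normal(10, 0.5, 60), rng.normal(18, 0.5, 60)])

def sharp_increase(slope, a=0.3, b=1.0):
    # Linguistic term as a membership function on the slope:
    # 0 below a, 1 above b, linear in between.
    return float(np.clip((slope - a) / (b - a), 0.0, 1.0))

w = 10                    # sliding-window length
t = np.arange(w)
for start in range(len(series) - w):
    slope, _ = np.polyfit(t, series[start:start + w], deg=1)  # local shape coordinate
    mu = sharp_increase(slope)
    if mu > 0.8:
        print(f"'sharp increase' near t={start + w // 2} (membership {mu:.2f})")
```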