818 results for fuzzy rule base models


Relevance: 30.00%

Abstract:

This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their application to conventional and intelligent buildings. It concluded that the mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited to modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings along current trends exposes potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by presenting an example of an intelligent building system together with the mathematical models developed for it, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings in the future.

Relevance: 30.00%

Abstract:

Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes retrieved from the CBR per context. It modifies music pieces according to the interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases, and the resulting affect values control the music synthesis process, allowing a level of interactivity that creates an engaging environment in which to experiment with and learn about expression in music.
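
As a rough illustration of the retrieval step, the sketch below matches a PAD reading to the nearest stored octant case and returns that case's affectivisation parameters. The distance metric, case contents and parameter names (tempo_scale, velocity_scale) are illustrative assumptions, not part of the CALLAS framework.

```python
# Minimal sketch of the CBR retrieval step, assuming a Euclidean
# nearest-octant match in PAD space; case contents and parameter
# names are hypothetical, not CALLAS APIs.
from itertools import product
import math

# The eight octants of PAD temperament space serve as base cases;
# each maps to assumed affectivisation parameters.
CASES = {
    octant: {"tempo_scale": 1.0 + 0.2 * octant[1],      # arousal speeds tempo
             "velocity_scale": 1.0 + 0.15 * octant[0]}  # pleasure raises velocity
    for octant in product((-1, 1), repeat=3)            # (P, A, D) signs
}

def retrieve(pad):
    """Return the stored case whose octant is closest to the PAD input."""
    best = min(CASES, key=lambda o: math.dist(o, pad))
    return best, CASES[best]

octant, params = retrieve((0.4, -0.7, 0.2))   # e.g. mildly pleased, calm
print(octant, params)
```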

Relevance: 30.00%

Abstract:

Associative memory networks such as radial basis functions, neurofuzzy and fuzzy logic systems used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computational cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced that constructs optimal piecewise locally linear models over a Delaunay partition of the input space, overcoming the COD and yielding locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is used to search for a globally optimal Delaunay input-space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
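
The sketch below illustrates the core idea of this model class under simplifying assumptions: a fixed random vertex set stands in for the VFSR-optimised partition, the input space is triangulated with a Delaunay scheme, and one linear expert is fitted per simplex. All data and vertex choices are synthetic.

```python
# Piecewise locally linear model on a Delaunay input-space partition
# (a sketch; the paper optimises the partition with VFSR instead of
# using fixed random vertices).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]            # benchmark-style nonlinear target

verts = rng.uniform(-1, 1, size=(10, 2))     # candidate partition vertices
tri = Delaunay(verts)

# Fit one linear expert per simplex using only the samples that fall in it.
experts = {}
simplex_of = tri.find_simplex(X)
for s in range(len(tri.simplices)):
    mask = simplex_of == s
    if mask.sum() >= 3:                      # need enough points for a plane
        A = np.c_[X[mask], np.ones(mask.sum())]
        experts[s] = np.linalg.lstsq(A, y[mask], rcond=None)[0]

def predict(x):
    s = int(tri.find_simplex(x.reshape(1, -1))[0])
    if s in experts:
        return float(np.r_[x, 1.0] @ experts[s])   # local linear model
    return float(y.mean())                         # fallback outside partition

print(predict(np.array([0.2, -0.5])))
```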

Relevance: 30.00%

Abstract:

The statistics of cloud-base vertical velocity simulated by the non-hydrostatic mesoscale model AROME are compared with Cloudnet remote sensing observations at two locations: the ARM SGP site in central Oklahoma and the DWD observatory at Lindenberg, Germany. The results show that, as expected, AROME significantly underestimates the variability of vertical velocity at cloud base relative to observations at their nominal resolution; the standard deviation of vertical velocity in the model is typically 4-6 times smaller than observed, and even more so during winter at Lindenberg. Averaging the observations to the horizontal scale corresponding to the physical grid spacing of AROME (2.5 km) explains 70-80% of the underestimation by the model. Further horizontal averaging of the observations is required to match the model values for the standard deviation of vertical velocity. This indicates an effective horizontal resolution for the AROME model of at least four times the physically defined grid spacing. The results illustrate the need for special treatment of sub-grid-scale variability of vertical velocity in kilometre-scale atmospheric models if processes such as aerosol-cloud interactions are to be included in the future.
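
The averaging test described above can be illustrated with a few lines of code: coarsening a high-resolution vertical-velocity series over progressively larger blocks reduces its standard deviation, mimicking what a coarse model grid can represent. The series below is synthetic white noise, an assumption for illustration only; real cloud-base velocities are autocorrelated, so the reduction with averaging scale is weaker in practice.

```python
# Effect of horizontal averaging on the standard deviation of a
# vertical-velocity series (synthetic white noise, illustrative only).
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, size=86400)        # synthetic cloud-base w samples

def coarse_std(w, block):
    """Std. dev. after averaging non-overlapping blocks of `block` samples."""
    n = len(w) // block * block
    return w[:n].reshape(-1, block).mean(axis=1).std()

for block in (1, 60, 600, 2400):            # nominal to increasingly coarse
    print(block, round(float(coarse_std(w, block)), 3))
```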

Relevance: 30.00%

Abstract:

Government targets for CO2 reductions are being progressively tightened: the Climate Change Act sets the UK target as an 80% reduction by 2050 on 1990 figures, and the residential sector accounts for about 30% of emissions. This paper discusses current modelling techniques in the residential sector, principally top-down and bottom-up. Top-down models work on a macro-economic basis and can be used to consider large-scale economic changes; bottom-up models are rich in detail, making them suited to modelling technological change. Bottom-up models demonstrate what is technically possible, but there are differences between the technical potential and what is likely, given the limited economic rationality of the typical householder. This paper recommends research to better understand individuals' behaviour. Such research needs to include actual choices, stated preferences and opinion research to allow a detailed understanding of the individual end user. This increased understanding can then be used in an agent-based model (ABM). In an ABM, agents model real-world actors and can be given a rule set intended to emulate the actions and behaviours of real people, which can help in understanding how new technologies diffuse. In this way a degree of micro-economic realism can be added to domestic carbon modelling. Such a model should then be of use both for forward projections of CO2 and for analysing the cost-effectiveness of various policy measures.
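
A toy sketch of the kind of agent rule set suggested above: each householder adopts a technology when its payback time falls below a personal threshold, or when enough neighbours have already adopted (social influence). All thresholds, costs and the influence rule are illustrative assumptions, not the paper's model.

```python
# Minimal agent-based diffusion sketch with a hypothetical rule set.
import random

random.seed(2)

class Householder:
    def __init__(self):
        self.payback_threshold = random.uniform(3, 15)   # years tolerated
        self.adopted = False

    def step(self, payback_years, neighbour_share):
        # Adopt on economic grounds or via social influence.
        if not self.adopted and (payback_years < self.payback_threshold
                                 or neighbour_share > 0.5):
            self.adopted = True

agents = [Householder() for _ in range(1000)]
for year in range(10):
    share = sum(a.adopted for a in agents) / len(agents)
    for a in agents:
        a.step(payback_years=12 - year, neighbour_share=share)  # costs fall yearly
    print(year, round(share, 2))
```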

Relevance: 30.00%

Abstract:

The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base, and domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion owing to their higher resolution, but existing bottom-up models are weak at capturing individual green-technology buying behaviour. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods of incorporating buying behaviour within a domestic energy forecast model. Of the three, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested; it demonstrates the feasibility of the agent approach as a means of predicting the effectiveness of various policy measures.
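
For contrast with the agent approach, here is a minimal sketch of the Markov-chain option mentioned above: households move between "unaware", "considering" and "adopted" states according to an assumed annual transition matrix. All probabilities are illustrative.

```python
# Markov-chain technology-diffusion sketch with assumed transitions.
import numpy as np

P = np.array([[0.80, 0.20, 0.00],    # unaware  -> considering
              [0.10, 0.70, 0.20],    # considering -> adopted
              [0.00, 0.00, 1.00]])   # adopted is absorbing

state = np.array([1.0, 0.0, 0.0])    # all households start unaware
for year in range(1, 11):
    state = state @ P                # propagate the population one year
    print(year, state.round(3))
```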

Relevance: 30.00%

Abstract:

Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that merges this information effectively using various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT) and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region-feature maximum, (2) region-feature weighted average, and (3) pixel-value maximum. The support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel-energy maximum and (2) region-feature weighting. Two newly defined features serve as the variables of the fuzzy rules, the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, and trapezoidal membership functions over these features tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from both visual inspection and quantitative evaluation indicate the superiority of the proposed method over its counterparts in the fusion of infrared polarization and intensity images.
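
The sketch below shows the general shape of a trapezoidal-membership combination rule for the low-frequency components: the membership of the low-frequency difference feature in a "large difference" set steers the blend between a weighted-average and a maximum sub-rule. The breakpoints (a, b, c, d) and the two-rule blend are assumptions for illustration, not the paper's exact rule set.

```python
# Fuzzy low-frequency fusion sketch with an assumed trapezoidal membership.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def fuse_lowfreq(lp, li):
    """Fuse low-frequency components of polarization (lp) and intensity (li)."""
    diff = np.abs(lp - li)                       # low-frequency difference feature
    m = trapezoid(diff, 0.05, 0.15, 0.35, 0.50)  # membership in "large difference"
    avg = 0.5 * (lp + li)                        # small difference: average
    mx = np.maximum(lp, li)                      # large difference: take maximum
    return (1.0 - m) * avg + m * mx

lp = np.random.default_rng(3).random((4, 4))
li = np.random.default_rng(4).random((4, 4))
print(fuse_lowfreq(lp, li).round(2))
```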

Relevance: 30.00%

Abstract:

Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.

Relevance: 30.00%

Abstract:

Aims: Over the past decade in particular, formal linguistic work within L3 acquisition has concentrated on hypothesizing and empirically determining the source of transfer from previous languages (L1, L2 or both) in L3 grammatical representations. In view of the progressive concern with more advanced stages, we aim to show that focusing on the L3 initial stages should remain one priority of the field, even (or especially) if the field is ready to shift towards modeling L3 development and ultimate attainment. Approach: We argue that L3 learnability is significantly impacted by initial-stages transfer, since such transfer forms the basis of the initial L3 interlanguage. To illustrate the point, insights from studies using initial- and intermediary-stages L3 data are discussed in light of the developmental predictions that derive from the initial-stages models. Conclusions: Despite a shared desire to understand the process of L3 acquisition as a whole, including the offering of developmental L3 theories, we argue that the field does not yet have, although it is ever closer to, the data basis needed to do so effectively. Originality: This article seeks to convince the readership of the need for conservatism in L3 acquisition theory building, offering a framework for how and why we can most effectively build on the accumulated knowledge of the L3 initial stages in order to make significant, steady progress. Significance: The arguments set out here are meant to provide an epistemological base for a tenable framework of formal approaches to L3 interlanguage development and, eventually, ultimate attainment.

Relevance: 30.00%

Abstract:

Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Because Wikipedia is free and open for anyone to edit, article quality may suffer: people differ in their level of knowledge and in their opinions on a topic, so contributions from different authors can vary considerably. It is therefore important to classify articles so that good-quality articles can be separated from poor-quality ones, and the latter removed from the database. The aim of this study is to classify Wikipedia articles into two classes, class 0 (poor quality) and class 1 (good quality), using Adaptive Neuro-Fuzzy Inference Systems (ANFIS) and data mining techniques. Two ANFIS are built using the Fuzzy Logic Toolbox [1] available in Matlab: the first is based on rules obtained from the J48 classifier in WEKA, while the other is built from expert knowledge. The data used for this research contain 226 article records taken from the German version of Wikipedia. The dataset consists of 19 inputs and one output, and was preprocessed to remove redundant attributes. The input variables relate to the editors, the contributors, the length of the articles and the article lifecycle. Finally, the methods implemented in this research are analysed to compare the performance of each classification method.
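
To make the rule-based inference idea concrete, the sketch below fuzzifies two hypothetical inputs (number of distinct editors, article length) with trapezoids and scores quality with two Sugeno-style rules. The inputs, breakpoints and rules are invented for illustration; they are not the attributes or rules mined from J48 in the study.

```python
# Toy zero-order Sugeno fuzzy inference for article quality (hypothetical rules).
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    return float(np.clip(min((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0))

def quality(editors, length):
    many_editors = trapezoid(editors, 2, 10, 1e9, 2e9)    # open-ended "many"
    long_article = trapezoid(length, 500, 2000, 1e9, 2e9)
    # Rule 1: many editors AND long article -> good (output 1.0)
    # Rule 2: otherwise -> poor (output 0.0)
    w1 = min(many_editors, long_article)                  # firing strengths
    w2 = 1.0 - w1
    return (w1 * 1.0 + w2 * 0.0) / (w1 + w2)

print(quality(editors=15, length=3500))   # likely class 1 (good)
print(quality(editors=2, length=300))     # likely class 0 (poor)
```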

Relevance: 30.00%

Abstract:

The aim of this paper is to evaluate the performance of two divergent methods for delineating commuting regions, also called labour market areas, in a situation where the base spatial units differ greatly in size as a result of an irregular population distribution. Commuting patterns in Sweden have been analysed with geographical information system technology by delineating commuting regions using two regionalization methods. One, a rule-based method, uses one-way commuting flows to delineate local labour market areas in a top-down procedure based on the selection of predefined employment centres. The other, the interaction-based Intramax analysis, uses two-way flows in a bottom-up procedure based on numerical taxonomy principles. A comparison of these methods exposes a number of strengths and weaknesses. The same data source has been used for both methods, and the performance of both has been evaluated for the country as a whole using resident employed population, self-containment levels and job ratios as criteria. A more detailed evaluation has been carried out in the Goteborg metropolitan area by comparing regional patterns with the commuting fields of a number of urban centres in that area. It is concluded that both methods could benefit from the inclusion of additional control measures to identify improper allocations of municipalities.
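
Two of the evaluation criteria named above are easily computed from an origin-destination commuting matrix, as in this short sketch with toy numbers: self-containment is the share of a region's employed residents who also work there, and the job ratio compares jobs to employed residents.

```python
# Self-containment and job ratio from a toy commuting flow matrix.
import numpy as np

# flows[i, j] = commuters living in region i and working in region j
flows = np.array([[120,  30,  10],
                  [ 25, 200,  15],
                  [  5,  20,  60]])

residents = flows.sum(axis=1)                  # employed residents per region
jobs = flows.sum(axis=0)                       # jobs per region
self_containment = np.diag(flows) / residents  # live-and-work share
job_ratio = jobs / residents

for r in range(len(flows)):
    print(f"region {r}: self-containment {self_containment[r]:.2f}, "
          f"job ratio {job_ratio[r]:.2f}")
```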

Relevance: 30.00%

Abstract:

In the seismic analysis of complex structures, the mathematical model employed should include not only the irregular distributions of mass and stiffness but also the three-dimensional nature of the seismic excitation. In practice, the large number of degrees of freedom involved restricts this type of analysis to settings where large computers are available. This work presents a simplified procedure for evaluating the amplification of seismic motion in soil layers. Its application would make it possible to establish criteria for deciding when soil-structure interaction models more complex than those habitually used are required. The proposed procedure has the following characteristics: A) rigid rock motion defined in terms of three orthogonal components, with a vertical direction of propagation; B) a soil constitutive equation that includes nonlinearity, plasticity, load-history dependence, energy dissipation and volume change; C) a soil profile discretized as a system of lumped masses. An incremental formulation of the equations of motion is used, with direct integration in the time domain. The pseudo-elastic properties of the soil are evaluated at each integration interval as a function of the stress state resulting from the simultaneous action of the three components of the excitation. The correct functioning of the proposed procedure is verified through one-dimensional analyses (horizontal excitation), including comparative studies against solutions presented by several authors. Three-dimensional analyses (simultaneous action of the three components of the excitation) using real seismic records are also presented. The influence of the dimensionality of the analysis (one three-dimensional analysis versus three one-dimensional analyses) on the response of soil layers subjected to different levels of excitation is examined; that is, the limits of the principle of superposition of effects.
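
A minimal sketch of the lumped-mass idea follows: a soil column of N masses under horizontal base excitation, integrated explicitly in the time domain with central differences. Linear elastic springs stand in for the nonlinear, load-history-dependent constitutive law of the work, and all parameter values are illustrative.

```python
# Lumped-mass shear column under base excitation (linear, illustrative).
import numpy as np

N, m, k, dt = 10, 1.0e4, 4.0e6, 1e-3   # layers, mass [kg], stiffness [N/m], step [s]
steps = 4000
t = np.arange(steps) * dt
base_acc = 0.5 * 9.81 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz harmonic base input

# Tridiagonal stiffness of a shear column fixed at the base; node N-1 is
# the free surface.
K = 2 * k * np.eye(N) - k * np.eye(N, k=1) - k * np.eye(N, k=-1)
K[-1, -1] = k                                          # free surface

u_prev = np.zeros(N)
u = np.zeros(N)
peak = 0.0
for i in range(steps):
    acc = (-m * base_acc[i] - K @ u) / m               # effective force / mass
    u_next = 2 * u - u_prev + dt**2 * acc              # central difference update
    u_prev, u = u, u_next
    peak = max(peak, abs(u[-1]))

print(f"peak surface displacement: {peak:.3e} m")
```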

Relevance: 30.00%

Abstract:

Flood forecasting systems are adequate when the lead time is sufficient compared with the time needed for preventive or corrective actions. The reliability and accuracy of the forecasts are also fundamentally important. Forecasts of flood levels are always approximations, and confidence intervals are not always applicable, especially under high degrees of uncertainty, which produce very wide confidence intervals; such intervals are problematic in the presence of very high or very low river levels. In this study, flood-level forecasts are produced both in the traditional numerical form and in the form of categories, the latter using an expert system based on fuzzy rules and inference. Methodologies and computational procedures for learning, simulation and querying are devised and then implemented as an application (SELF – Sistema Especialista com uso de Lógica "Fuzzy") for research and operational use. Comparisons between fuzzy expert systems and linear empirical models, from the standpoint of their use for forecasting, reveal a strong analogy despite fundamental theoretical differences. The methodologies are applied to forecasting in the Camaquã river basin (15,543 km²) for lead times between 10 and 48 hours. Practical obstacles to the application are identified, and the resulting solutions constitute advances in knowledge and technique. Forecasts in both numerical and categorical form are carried out successfully with the new resources. The forecasts are evaluated and compared using a new group of statistics derived from the simultaneous frequencies of occurrence of observed and predicted values in the same category during simulation. The effects of varying the gauge network density are analysed, showing that real-time rainfall-river-level forecasting systems are feasible even with a small number of rain gauges when forecasts take the form of fuzzy categories.
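
The verification statistics described above are based on the joint frequencies of observed and predicted categories. The sketch below builds such a contingency table and derives a hit rate from its diagonal; the category labels and series are invented for illustration, not the study's statistics suite.

```python
# Category-based forecast verification sketch (illustrative data).
import numpy as np

categories = ["low", "normal", "high", "flood"]
observed  = np.array([0, 1, 1, 2, 3, 2, 1, 0, 2, 3])
predicted = np.array([0, 1, 2, 2, 3, 1, 1, 0, 2, 2])

table = np.zeros((4, 4), dtype=int)
for o, p in zip(observed, predicted):
    table[o, p] += 1                      # joint frequency of (observed, predicted)

hit_rate = np.trace(table) / table.sum()  # share predicted in the correct category
print(table)
print(f"hit rate: {hit_rate:.2f}")
```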

Relevance: 30.00%

Abstract:

The research aims primarily to devise a protocol that makes it possible to analyse, through a set of indicators, the process of software reuse in the development of information systems that model business objects. The protocol comprises an analytical model and analysis grids to be used in classifying and tabulating empirically obtained data. For initial validation of the analysis protocol, a case study is carried out. The investigation covers one of the first and, at the time, largest projects supplying reusable business-oriented software elements, IBM SANFRANCISCO, as well as the first project developed in Brazil on the basis of what it provides, the Apontamento Universal de Horas system (TIME SHEET System). Regarding its applicability in practice, the protocol proves comprehensive and well suited to understanding the process. As for the case study results, the data analysis reveals a situation in which the researchers' expectations of reuse of business-oriented software elements exceeded what was observed. There was, however, reuse of low-level elements, which provided the infrastructure needed for the development of the project. Set against the developers' expectations of reuse, the results are positive, insofar as the partnership yielded methodological and technological benefits. On the other hand, some aspects proved restrictive for the application developer, owing to arbitrary choices made by the provider of the reusable elements.

Relevance: 30.00%

Abstract:

Guides for mineral exploration are normally based on conceptual deposit models, drawing on geologists' experience, descriptive data and genetic data. Numerical modelling, probabilistic and non-probabilistic, for estimating the occurrence of mineral deposits is a newer procedure whose use and acceptance by the geological community grows by the day. This thesis applies recent methodologies for generating mineral favourability maps. The so-called Ilha Cristalina de Rivera (Rivera Crystalline Island), an erosional window of the Paraná Basin in northern Uruguay, was chosen as the case study for applying the methodologies. The mineral favourability maps were built from the following kinds of data, information and prospecting results: 1) orbital imagery; 2) geochemical prospecting; 3) airborne geophysical prospecting; 4) geo-structural mapping; and 5) altimetry. This information was selected and processed on the basis of a mineral deposit model (conceptual model) developed from the San Gregorio gold mine. The conceptual model (the San Gregorio model) includes the descriptive and genetic characteristics of the San Gregorio Mine, which encompasses the significant characteristic elements of the other known mineral occurrences on the Ilha Cristalina de Rivera. Generating the favourability maps involved building a database, processing the data and integrating the data. The construction and processing stages comprised collecting, selecting and treating the data into so-called Information Layers, which were generated and processed in organised groupings to form the Integration Factors for favourability mapping on the Ilha Cristalina de Rivera. The data were integrated using two different methodologies: 1) Weights of Evidence (data-driven) and 2) Fuzzy Logic (knowledge-driven). The favourability maps resulting from the two integration methodologies were first analysed and interpreted individually, and a comparative analysis was then carried out. Both methodologies succeeded in identifying, as areas of high favourability, the known mineralized areas as well as other areas not yet worked. The maps from the two methodologies coincide with respect to the areas of highest favourability. The Weights of Evidence map is more conservative in areal extent, yet more optimistic in favourability values, than the maps resulting from the Fuzzy Logic methodology. New targets for mineral exploration were identified and should be investigated in detail.
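
The data-driven integration rests on the standard Weights of Evidence calculation, sketched below for a single binary evidence layer: W+ and W- compare the probability of the evidence being present given a deposit against its probability given no deposit. The counts are invented for illustration.

```python
# Weights of Evidence for one binary evidence layer (illustrative counts).
import math

def weights_of_evidence(n_dep_on, n_dep, n_on, n_total):
    """W+ = ln[P(evidence|deposit) / P(evidence|no deposit)]; W- analogous."""
    p_on_dep = n_dep_on / n_dep
    p_on_nodep = (n_on - n_dep_on) / (n_total - n_dep)
    w_plus = math.log(p_on_dep / p_on_nodep)
    w_minus = math.log((1 - p_on_dep) / (1 - p_on_nodep))
    return w_plus, w_minus

# e.g. 8 of 10 deposit cells fall on an anomaly covering 200 of 10,000 cells
w_plus, w_minus = weights_of_evidence(n_dep_on=8, n_dep=10, n_on=200, n_total=10000)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {w_plus - w_minus:.2f}")
```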