818 results for fuzzy rule base models
Abstract:
The purpose of this study is to develop a decision making system to evaluate the risks in E-Commerce (EC) projects. Competitive software businesses have the critical task of assessing risk across the software system development life cycle. This can be conducted on the basis of conventional probabilities, but the appropriate information is often limited, so a complete set of probabilities cannot be obtained. In such problems, where the analysis is highly subjective and related to vague, incomplete, uncertain or inexact information, the Dempster-Shafer (DS) theory of evidence offers a potential advantage. We use a direct way of reasoning in a single step (i.e., extended DS theory) to develop a decision making system to evaluate the risk in EC projects. The system consists of five stages: 1) establishing the knowledge base and setting rule strengths; 2) collecting evidence and data; 3) translating the evidence and rule strengths into a mass distribution for each rule, i.e., the first half of the single-step reasoning process; 4) combining the prior mass with the different rules, i.e., the second half of the single-step reasoning process; and 5) evaluating the belief interval for the best-supported decision on the EC project. We test the system using potential risk factors associated with EC development, and the results indicate that the system is a promising way of assisting an EC project manager in identifying potential risk factors and the corresponding project risks.
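As a rough illustration of the evidential machinery involved, the sketch below implements Dempster's rule of combination and the belief/plausibility bounds that define a belief interval. The two-hypothesis risk frame and the rule masses are invented for the example; this is not the paper's five-stage system.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: masses cannot be combined.")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hypothesis):
    """Bel(A): total mass of focal elements contained in A."""
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Toy frame of discernment: is the EC project risk high (H) or low (L)?
H, L = frozenset("H"), frozenset("L")
theta = H | L
m_rule1 = {H: 0.6, theta: 0.4}          # one rule's evidence, strength 0.6
m_rule2 = {H: 0.5, L: 0.2, theta: 0.3}  # a second, partially conflicting rule
m = combine(m_rule1, m_rule2)
print("belief interval for 'high risk':", (belief(m, H), plausibility(m, H)))
```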
Abstract:
People tend to attribute more regret to a character who has decided to take action and experienced a negative outcome than to one who has decided not to act and experienced a negative outcome. For some decisions, however, this finding is not observed in a between-participants design and thus appears to rely on comparisons between people's representations of action and their representations of inaction. In this article, we outline a mental models account that explains findings from studies that have used within- and between-participants designs, and we suggest that, for decisions with uncertain counterfactual outcomes, information about the consequences of a decision to act causes people to flesh out their representation of the counterfactual states of affairs for inaction. In three experiments, we confirm our predictions about participants' fleshing out of representations, demonstrating that an action effect occurs only when information about the consequences of action is available to participants as they rate the nonactor and when this information about action is informative with respect to judgments about inaction. It is important to note that the action effect always occurs when the decision scenario specifies certain counterfactual outcomes. These results suggest that people sometimes base their attributions of regret on comparisons among different sets of mental models.
Abstract:
This paper studies the dynamic pricing problem of selling a fixed stock of perishable items over a finite horizon, where the decision maker does not have the historical data needed to estimate the distribution of uncertain demand, but has imprecise information about the quantity demanded. We model this uncertainty using fuzzy variables. The dynamic pricing problem based on credibility theory is formulated using three fuzzy programming models, viz. the fuzzy expected revenue maximization model, the α-optimistic revenue maximization model, and the credibility maximization model. Fuzzy simulations for functions with fuzzy parameters are given and embedded into a genetic algorithm to design a hybrid intelligent algorithm that solves these three models. Finally, a real-world example is presented to highlight the effectiveness of the developed models and algorithm.
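For intuition, here is a minimal sketch of the credibilistic expected value of a revenue function of a triangular fuzzy demand, the kind of fuzzy computation such models embed inside a genetic algorithm. The α-cut identity used below holds for monotone functions of the fuzzy variable; the demand figures and the single-period revenue function are invented.

```python
import numpy as np

def triangular_alpha_cut(a, b, c, alpha):
    """alpha-cut [lo, hi] of a triangular fuzzy variable (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fuzzy_expected_value(f, a, b, c, n=10_000):
    """Credibilistic expected value E[f(xi)] for a monotone f of a triangular
    fuzzy variable xi = (a, b, c), via E = 0.5 * int_0^1 (f(lo_a) + f(hi_a)) da."""
    alphas = (np.arange(n) + 0.5) / n
    lo, hi = triangular_alpha_cut(a, b, c, alphas)
    return 0.5 * np.mean(f(lo) + f(hi))

# Toy single-period revenue p * min(demand, stock) with fuzzy demand
# "around 60, between 40 and 90" -> triangular (40, 60, 90).
stock, price = 80.0, 10.0
revenue = lambda d: price * np.minimum(d, stock)
print(fuzzy_expected_value(revenue, 40.0, 60.0, 90.0))
# Sanity check: for f(d) = d the result should be (a + 2b + c) / 4 = 62.5.
print(fuzzy_expected_value(lambda d: d, 40.0, 60.0, 90.0))
```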
Abstract:
Fuzzy-neural-network-based inference systems are well-known universal approximators which can produce linguistically interpretable results. Unfortunately, their dimensionality can be extremely high due to an excessive number of inputs and rules, which raises the need for overall structure optimization. In the literature, various input selection methods are available, but they are applied separately from rule selection, often without considering the fuzzy structure. This paper proposes an integrated framework to optimize the number of inputs and the number of rules simultaneously. First, a method is developed to select the most significant rules, along with a refinement stage to remove unnecessary correlations. An improved information criterion is then proposed to find an appropriate number of inputs and rules to include in the model, leading to a balanced tradeoff between interpretability and accuracy. Simulation results confirm the efficacy of the proposed method.
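The paper's improved criterion is not reproduced here, but a generic AIC-style score illustrates the tradeoff it formalizes: fit improves with more rules and inputs while the complexity penalty grows. The per-rule parameter count below is a common first-order convention for Takagi-Sugeno style models, assumed for illustration.

```python
import numpy as np

def rule_count_criterion(y, y_hat, n_rules, n_inputs, penalty=2.0):
    """AIC-style structure score: log residual variance plus a complexity
    term that grows with rules and inputs. Lower is better. This is an
    illustrative stand-in, not the paper's improved information criterion."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    rss = float(np.sum((y - y_hat) ** 2))
    # Assume each rule carries roughly (n_inputs + 1) free parameters
    # (one antecedent membership per input plus one consequent).
    k = n_rules * (n_inputs + 1)
    return n * np.log(rss / n) + penalty * k

# Usage idea: score every candidate (inputs, rules) structure on validation
# data and keep the structure with the smallest score.
```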
Abstract:
We present results for a suite of 14 three-dimensional, high-resolution hydrodynamical simulations of delayed-detonation models of Type Ia supernova (SN Ia) explosions. This model suite comprises the first set of three-dimensional SN Ia simulations with detailed isotopic yield information. As such, it may serve as a database for Chandrasekhar-mass delayed-detonation model nucleosynthetic yields and for deriving synthetic observables such as spectra and light curves. We employ a physically motivated, stochastic model based on turbulent velocity fluctuations and fuel density to calculate in situ the deflagration-to-detonation transition probabilities. To obtain different strengths of the deflagration phase and thereby different degrees of pre-expansion, we have chosen a sequence of initial models with 1, 3, 5, 10, 20, 40, 100, 150, 200, 300 and 1600 (two different realizations) ignition kernels in a hydrostatic white dwarf with a central density of 2.9 × 10^9 g cm^-3, as well as one high central density (5.5 × 10^9 g cm^-3) and one low central density (1.0 × 10^9 g cm^-3) rendition of the 100 ignition kernel configuration. For each simulation, we determined detailed nucleosynthetic yields by postprocessing 10^6 tracer particles with a 384-nuclide reaction network. All delayed-detonation models result in explosions unbinding the white dwarf, producing a range of 56Ni masses from 0.32 to 1.11 M☉. As a general trend, the models predict that the stable neutron-rich iron-group isotopes are not found at the lowest velocities, but rather at intermediate velocities (~3000–10 000 km s^-1) in a shell surrounding a Ni-rich core. The models further predict relatively low-velocity oxygen and carbon, with typical minimum velocities around 4000 and 10 000 km s^-1, respectively. © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
Abstract:
The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs (WDs) - realizations of explosion models appropriate for two of the most widely discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is still not justified. Observations at late epochs, however, hold promise for discriminating between the explosion scenarios in a straightforward way, as a nucleosynthesis effect leads to differences in the Co production. SN 2011fe is close enough to be followed sufficiently long to study this effect. © 2012 The American Astronomical Society. All rights reserved.
Abstract:
This paper will consider the inter-relationship of a number of overlapping disciplinary theoretical concepts relevant to a strengths-based orientation, including well-being, salutogenesis, sense of coherence, quality of life and resilience. Psychological trauma will be referenced and the current evidence base for interventions with children and young people outlined and critiqued. The relational impact of trauma on family relationships is emphasised, providing a rationale for systemic psychotherapeutic interventions as part of a holistic approach to managing the effects of trauma. The congruence between second-order systemic psychotherapy models and a strengths-based philosophy is noted, with particular reference to solution-focused brief therapy and narrative therapy, and illustrated in two ways: via a description of the process of helping someone move from a victim position to a survivor identity using solution-focused brief therapy, and through a case example applying a narrative therapy approach to a teenage boy who suffered a serious assault. The benefits of a strengths-based approach to psychological trauma for clients and therapists will be summarised and a number of potential pitfalls articulated.
Abstract:
This paper uses fuzzy-set ideal type analysis to assess the conformity of European leave regulations to four theoretical ideal-typical divisions of labour: male breadwinner, caregiver parity, universal breadwinner and universal caregiver. In contrast to the majority of previous studies, the focus of this analysis is on the extent to which leave regulations promote gender equality in the family and the transformation of traditional gender roles. The results of this analysis demonstrate, first, that European countries cluster into five models that only partly coincide with the countries' geographical proximity. Second, none of the countries considered constitutes a universal caregiver model, while the male breadwinner ideal continues to provide the normative reference point for parental leave regulations in a large number of European states. Finally, we witness a growing emphasis at the national and EU levels on the universal breadwinner ideal, which leaves gender inequality in unpaid work unproblematized.
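For readers unfamiliar with the method, fuzzy-set ideal type analysis assigns each case a degree of membership in an ideal type, typically the minimum (fuzzy AND) over its component-set scores, with 1 - x as fuzzy negation. The sketch below uses two invented dimensions and invented scores, not the article's calibrated leave indicators.

```python
# Minimal sketch of fuzzy-set ideal type analysis: a case's membership in
# an ideal type is the minimum (fuzzy AND) of its component-set scores,
# with 1 - x as fuzzy negation. Both dimensions and all scores below are
# hypothetical stand-ins for the article's calibrated leave indicators.
fuzzy_and = min
fuzzy_not = lambda x: 1.0 - x

# Hypothetical calibrated memberships per country: w = "well-paid leave",
# e = "leave design encourages equal sharing between parents".
countries = {"A": (0.9, 0.8), "B": (0.7, 0.1), "C": (0.2, 0.1)}
for name, (w, e) in countries.items():
    membership = {
        "universal caregiver": fuzzy_and(w, e),
        "male breadwinner": fuzzy_and(fuzzy_not(w), fuzzy_not(e)),
    }
    best = max(membership, key=membership.get)
    print(name, membership, "->", best)
```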
Abstract:
Fuzzy answer set programming (FASP) is a generalization of answer set programming to continuous domains. Since it cannot readily take uncertainty into account, however, FASP is not suitable as a basis for approximate reasoning and cannot easily be used to derive conclusions from imprecise information. To cope with this, we propose an extension of FASP based on possibility theory. The resulting framework allows us to reason about uncertain information in continuous domains, and thus also about information that is imprecise or vague. We propose a syntactic procedure, based on an immediate consequence operator, and provide a characterization in terms of minimal models, which allows us to straightforwardly implement our framework using existing FASP solvers.
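A minimal sketch of the core idea, an immediate consequence operator for a positive fuzzy program iterated to its least fixpoint, is given below. The Łukasiewicz t-norm and the (head, body, weight) rule encoding are illustrative choices, not the syntax of the proposed framework or of any existing FASP solver.

```python
# Immediate consequence operator T_P for a positive fuzzy program under
# the Lukasiewicz t-norm, iterated until a fixpoint (the least model).
def t_norm(values):
    """Lukasiewicz conjunction of a list of truth degrees."""
    return max(0.0, sum(values) - (len(values) - 1))

def immediate_consequence(rules, interp):
    """One application of T_P: each atom receives the best support among
    the rules deriving it; facts are rules with an empty body."""
    new = dict(interp)
    for head, body, weight in rules:
        body_val = t_norm([interp[b] for b in body]) if body else 1.0
        new[head] = max(new[head], min(weight, body_val))
    return new

rules = [
    ("wet", [], 0.8),                    # fact: wet holds to degree 0.8
    ("dark", [], 0.6),                   # fact: dark holds to degree 0.6
    ("slippery", ["wet"], 1.0),          # slippery <- wet
    ("danger", ["slippery", "dark"], 1.0),
]
interp = {atom: 0.0 for atom in {"wet", "dark", "slippery", "danger"}}
while True:
    nxt = immediate_consequence(rules, interp)
    if nxt == interp:
        break
    interp = nxt
print(interp)  # least model, e.g. danger = max(0, 0.8 + 0.6 - 1) = 0.4
```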
Abstract:
The relationship between epidemiology, mathematical modelling and computational tools makes it possible to build and test theories about the development and control of a disease. This thesis is motivated by the study of epidemiological models applied to infectious diseases from an Optimal Control perspective, with particular emphasis on dengue. A tropical and subtropical disease transmitted by mosquitoes, dengue affects about 100 million people per year and is considered by the World Health Organization a major public health concern. The mathematical models developed and tested in this work are based on ordinary differential equations that describe the dynamics underlying the disease, namely the interaction between humans and mosquitoes. An analytical study of these models is carried out with respect to their equilibrium points, stability and basic reproduction number. The spread of dengue can be mitigated through measures that control the transmitting vector, such as the use of specific insecticides and educational campaigns. Since the development of a potential vaccine has been a recent worldwide priority, models based on the simulation of a hypothetical vaccination process in a population are proposed. Based on Optimal Control theory, optimal strategies for the use of these controls are analysed, together with their repercussions on the reduction/eradication of the disease during an outbreak in the population, considering a bioeconomic approach. The formulated problems are solved numerically using direct and indirect methods. The former discretize the problem, reformulating it as a nonlinear optimization problem. Indirect methods use Pontryagin's Maximum Principle as a necessary condition to find the optimal curve for the respective control. Several numerical software packages are used in these two strategies. Throughout this work, there was always a compromise between the realism of the epidemiological models and their mathematical tractability.
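To give a flavour of the kind of ODE system the thesis studies, the sketch below simulates a toy host-vector model with a constant vector-control effort. All parameter values and the control formulation are illustrative; the thesis instead optimizes time-varying controls via Pontryagin's principle.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy host-vector model: humans S_h, I_h, R_h; mosquitoes S_m, I_m.
# c is a constant vector-control effort reducing mosquito recruitment.
def dengue(t, y, beta_h, beta_m, gamma, mu_m, c):
    S_h, I_h, R_h, S_m, I_m = y
    N_h = S_h + I_h + R_h
    new_h = beta_h * S_h * I_m / N_h     # mosquito -> human infections
    new_m = beta_m * S_m * I_h / N_h     # human -> mosquito infections
    dS_h = -new_h
    dI_h = new_h - gamma * I_h
    dR_h = gamma * I_h
    dS_m = mu_m * (S_m + I_m) * (1 - c) - new_m - mu_m * S_m
    dI_m = new_m - mu_m * I_m
    return [dS_h, dI_h, dR_h, dS_m, dI_m]

# Illustrative parameters: biting/transmission rates, ~7-day human
# infectious period, ~11-day mosquito lifespan, no control (c = 0).
params = (0.375, 0.375, 1 / 7, 1 / 11, 0.0)
sol = solve_ivp(dengue, (0, 120), [9_990, 10, 0, 30_000, 0], args=params)
print("peak infected humans:", sol.y[1].max())
```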
Abstract:
Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of those conditions are rarely satisfied, and the regression models become ill-posed, making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory in the estimation of ill-posed models, in particular the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows that maximum entropy estimators outperform the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. The use of maximum entropy estimators is expected to contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression, the greatest challenge is the estimation of the ridge parameter. Although numerous procedures are available in the literature, none of them outperforms all the others. In this work, a new estimator of the ridge parameter is proposed, combining ridge trace analysis with maximum entropy estimation. The results obtained in simulation studies suggest that this new estimator is among the best procedures in the literature for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, Shannon entropy and concepts from quantum electrodynamics. This estimator overcomes the main criticism of the generalized maximum entropy estimator, since it dispenses with the supports for the parameters and errors of the regression model. This work presents new contributions to maximum entropy estimation of ill-posed models based on the Leuven maximum entropy estimator, information theory and robust regression. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational codes for maximum entropy estimation are presented, thereby adding to the scarce computational resources currently available.
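To illustrate the ridge trace that the proposed estimator combines with maximum entropy, the sketch below computes coefficient paths β(k) over a grid of ridge parameters on deliberately collinear synthetic data; the maximum entropy step itself is not reproduced.

```python
import numpy as np

# Ridge trace on nearly collinear data: beta(k) = (X'X + kI)^(-1) X'y.
rng = np.random.default_rng(0)
n = 30
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)        # nearly collinear regressor
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + 0.5 * rng.normal(size=n)

ks = np.logspace(-4, 2, 50)
p = X.shape[1]
trace = [np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y) for k in ks]
for k, beta in zip(ks[::10], trace[::10]):
    print(f"k={k:8.4f}  beta={beta}")
# A ridge trace plots these paths; classical heuristics pick the smallest
# k at which the coefficients stabilize.
```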
Abstract:
Maritime transport is the main mode of freight transport worldwide, and fuels and petroleum products account for a large share of the goods carried by sea. As Cape Verde is an archipelago, sea transport plays a highly relevant role in the country's economy. We consider the fuel distribution problem in Cape Verde, where one company is responsible for coordinating the distribution of petroleum products and managing the corresponding inventory levels at each port, so as to satisfy the demand for the various products. The goal is to determine fuel distribution policies that minimize the total distribution cost (transportation and operations) while inventories are kept at the desired levels. For convenience, according to the planning horizon, the problem is divided into two interconnected subproblems: a short-term one and a medium-term one. For the short-term problem, mixed integer programming models are discussed that simultaneously consider a continuous and a discrete measure of time, in order to model multiple time windows and daily varying consumption rates. The models are strengthened with the inclusion of valid inequalities, and the problem is then solved using commercial software. For the medium-term problem, several mixed integer programming models for a short time horizon are first discussed and compared, now assuming a constant consumption rate, and new valid inequalities are introduced. Based on the chosen model, heuristic strategies that combine three well-known heuristics (Rolling Horizon, Feasibility Pump and Local Branching) are compared, in order to generate good feasible solutions for planning horizons of several months. Finally, in order to deal with unforeseen but important situations in maritime transport, such as bad weather and port congestion, we present a stochastic model for a short-term problem in which travel times and waiting times at ports are random. The problem is formulated as a two-stage model: in the first stage, decisions are made on the ship's routes and on the quantities to load and unload, and in the second stage (called the subproblem) the recourse decisions on the scheduling of operations are considered. The problem is solved by a decomposition method that uses an efficient algorithm to separate the inequalities violated in the subproblem.
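As a toy illustration of the medium-term flavour of these models, the sketch below states a single-port, single-product lot-sizing problem with fixed visit costs using the PuLP modeling library; the thesis's actual models add routing, multiple ports and products, time windows and valid inequalities.

```python
# Toy inventory model: choose delivery periods (binary visits) and
# quantities so that stock covers daily consumption at minimum cost.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

T = range(6)                  # planning periods
demand = [40, 40, 50, 60, 40, 50]
cap, s0, smax = 150, 80, 200  # shipment capacity, initial and max stock
fixed, unit = 100.0, 0.5      # cost per port visit and per unit delivered

prob = LpProblem("toy_fuel_distribution", LpMinimize)
visit = [LpVariable(f"visit_{t}", cat="Binary") for t in T]
q = [LpVariable(f"q_{t}", lowBound=0) for t in T]
s = [LpVariable(f"s_{t}", lowBound=0, upBound=smax) for t in T]

prob += lpSum(fixed * visit[t] + unit * q[t] for t in T)
for t in T:
    prob += q[t] <= cap * visit[t]            # deliver only on a visit
    prev = s0 if t == 0 else s[t - 1]
    prob += s[t] == prev + q[t] - demand[t]   # inventory balance
prob.solve()
print([(t, int(value(visit[t])), value(q[t])) for t in T])
```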
Abstract:
Communication and cooperation between billions of neurons underlie the power of the brain. How do complex functions of the brain arise from its cellular constituents? How do groups of neurons self-organize into patterns of activity? These are crucial questions in neuroscience. In order to answer them, it is necessary to have a solid theoretical understanding of how single neurons communicate at the microscopic level, and how cooperative activity emerges. In this thesis we aim to understand how complex collective phenomena can arise in a simple model of neuronal networks. We use a model with balanced excitation and inhibition and complex network architecture, and we develop analytical and numerical methods for describing its neuronal dynamics. We study how interaction between neurons generates various collective phenomena, such as the spontaneous appearance of network oscillations and seizures, and early warnings of these transitions in neuronal networks. Within our model, we show that phase transitions separate various dynamical regimes, and we investigate the corresponding bifurcations and critical phenomena. This permits us to suggest a qualitative explanation of the Berger effect, and to investigate phenomena such as avalanches, band-pass filtering, and stochastic resonance. The role of modular structure in the detection of weak signals is also discussed. Moreover, we find nonlinear excitations that can describe paroxysmal spikes observed in electroencephalograms from epileptic brains. This allows us to propose a method to predict epileptic seizures. Memory and learning are key functions of the brain. There is evidence that these processes result from dynamical changes in the structure of the brain. At the microscopic level, synaptic connections are plastic and are modified according to the dynamics of neurons. Thus, we generalize our cortical model to take synaptic plasticity into account, and we show that the repertoire of dynamical regimes becomes richer. In particular, we find mixed-mode oscillations and a chaotic regime in neuronal network dynamics.
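As a generic illustration of how oscillations can emerge from interacting excitatory and inhibitory populations, the sketch below integrates the classic Wilson-Cowan rate equations; it is a textbook stand-in with illustrative parameters, not the thesis's balanced network model.

```python
import numpy as np

# Wilson-Cowan rate model of coupled excitatory (E) and inhibitory (I)
# populations, integrated with forward Euler.
def S(x, a, theta):
    """Sigmoid response, shifted so that S(0) = 0."""
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0     # E->E, I->E, E->I, I->I couplings
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7  # gain-function parameters
P = 1.25                                    # external drive to E
E, I, dt = 0.1, 0.05, 0.05
trace = []
for _ in range(40_000):
    dE = -E + (1 - E) * S(c1 * E - c2 * I + P, a_e, th_e)
    dI = -I + (1 - I) * S(c3 * E - c4 * I, a_i, th_i)
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)
late = trace[20_000:]
# A persistent gap between late-time min and max suggests a limit cycle
# (sustained network oscillation) rather than a fixed point.
print(min(late), max(late))
```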
Abstract:
This thesis revealed the most important factors shaping the distribution, abundance and genetic diversity of four marine foundation species. Environmental conditions, particularly sea temperatures, nutrient availability and ocean waves, played a primary role in shaping the spatial distribution and abundance of populations, acting on scales varying from tens of meters to hundreds of kilometres. Furthermore, the use of Species Distribution Models (SDMs) with biological records of occurrence and high-resolution oceanographic data allowed predicting species distributions across time. This approach highlighted the role of climate change, particularly when extreme temperatures prevailed during glacial and interglacial periods. These results, when combined with mtDNA and microsatellite genetic variation of populations, allowed inferring the influence of past range dynamics on the genetic diversity and structure of populations. For instance, the Last Glacial Maximum produced important shifts in species ranges, leaving obvious signatures of higher genetic diversities in regions where populations persisted (i.e., refugia). However, it was found that a species' genetic pool is shaped by regions of persistence adjacent to others experiencing expansions and contractions. Contradicting expectations, refugia seem to play a minor role in the (re)colonization process of previously eroded populations. In addition, the available habitat area for expanding populations and the inherent mechanisms of species dispersal in occupying available habitats were also found to be fundamental in shaping the distributions of genetic diversity. However, results suggest that the high levels of genetic diversity in some populations do not rule out that they may have experienced strong genetic erosion in the past, a process here named shifting genetic baselines. Furthermore, this thesis predicted an ongoing retraction at the rear edges and extinctions of unique genetic lineages, which will impoverish the global gene pool, strongly shifting the genetic baselines in the future.
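As a schematic of the SDM workflow described above, the sketch below fits a logistic occurrence model to synthetic presence/absence records and projects it onto a hypothetical cooler (glacial) temperature layer; the data and predictors are invented stand-ins for the thesis's occurrence records and oceanographic layers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic occurrence records with a hump-shaped thermal niche.
rng = np.random.default_rng(1)
n = 500
sst = rng.uniform(5, 25, n)      # sea surface temperature, deg C
waves = rng.uniform(0, 5, n)     # wave-energy proxy
logit = 4 - 0.4 * (sst - 13) ** 2 / 5 - 0.6 * waves
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit an SDM: occurrence as a function of environmental predictors
# (a quadratic temperature term captures the niche optimum).
X = np.column_stack([sst, sst ** 2, waves])
model = LogisticRegression(max_iter=1000).fit(X, presence)

# Project habitat suitability under a hypothetical 3 deg C cooler layer.
sst_glacial = sst - 3.0
X_glacial = np.column_stack([sst_glacial, sst_glacial ** 2, waves])
print("present-day mean suitability:", model.predict_proba(X)[:, 1].mean())
print("glacial mean suitability:   ", model.predict_proba(X_glacial)[:, 1].mean())
```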