942 results for hierarchical Bayesian models
Abstract:
Current software development relies increasingly on non-trivial coordination logic for combining autonomous services, often running on different platforms. As a rule, however, in typical non-trivial software systems, such a coordination layer is tightly woven into the application at the source code level. Therefore, its precise identification becomes a major methodological (and technical) problem whose importance cannot be overestimated in any program understanding or refactoring process. Open access to source code, as granted in OSS certification, provides an opportunity for the development of methods and technologies to extract the relevant coordination information from source code. This paper is a step in this direction, combining a number of program analysis techniques to automatically recover coordination information from legacy code. Such information is then expressed as a model in Orc, a general purpose orchestration language.
Abstract:
A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is not surprising that the issue has attracted growing interest in academic research as well as in the business context. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (Discriminant Analysis, Logit and Probit) and two based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms from the textile and clothing industry over the period 2003–09. Results show that all the models performed well, with an overall correct classification level higher than 90% and a type II error always below 2%. The type I error increases as we move further from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to support decisions by creditors, investors and auditors. Additionally, this research can be of great value to the devisers of national economic policies that aim to reduce industrial unemployment.
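To make the comparison concrete, here is a minimal sketch of one of the statistical approaches (a logit classifier) together with the type I / type II error rates discussed above. The synthetic financial ratios, the train/test split and the scikit-learn implementation are illustrative assumptions, not the paper's actual variables or procedure.

```python
# Minimal sketch of a logit bankruptcy classifier with type I / type II error rates.
# Feature values are synthetic; only the sample sizes come from the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_healthy, n_failed = 420, 125                                     # sizes reported in the abstract
X_healthy = rng.normal(loc=0.5, scale=1.0, size=(n_healthy, 3))    # hypothetical financial ratios
X_failed = rng.normal(loc=-0.5, scale=1.0, size=(n_failed, 3))
X = np.vstack([X_healthy, X_failed])
y = np.concatenate([np.zeros(n_healthy), np.ones(n_failed)])       # 1 = bankrupt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Type I error: bankrupt firm classified as healthy; type II: healthy firm classified as bankrupt.
type_1 = np.mean(pred[y_te == 1] == 0)
type_2 = np.mean(pred[y_te == 0] == 1)
print(f"accuracy: {model.score(X_te, y_te):.3f}, type I: {type_1:.3f}, type II: {type_2:.3f}")
```

Swapping the logistic link for a probit link, or the classifier for a small neural network, would mirror the other techniques compared in the paper.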
Abstract:
The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology to predicting individuals' default. A sample of 555 individuals completed a self-administered questionnaire composed of these psychological variables and scales. Using logistic regression, the following psychological and behavioural characteristics were found to be associated with the group of individuals in default: a) negative dimensions related to money (suffering, inequality and conflict); b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; c) buyers classified as compulsive; d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; e) problems of self-control identified by individuals who drink on average more than four glasses of alcoholic beverages a day.
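As an illustration of the logistic regression methodology referred to above, the sketch below fits a logit model on simulated questionnaire data and reports odds ratios. The variable names (self_efficacy, compulsive_buyer, money_suffering) and the effect sizes are hypothetical placeholders, not the study's actual items or estimates.

```python
# Sketch of a default-vs-psychological-scales logit, reported as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 555                                            # sample size reported in the abstract
df = pd.DataFrame({
    "self_efficacy":    rng.normal(size=n),        # standardized scale score (hypothetical)
    "compulsive_buyer": rng.integers(0, 2, n),     # 0/1 classification (hypothetical)
    "money_suffering":  rng.normal(size=n),        # negative money-meaning dimension (hypothetical)
})
logit_p = -1.0 + 0.6*df["self_efficacy"] + 0.8*df["compulsive_buyer"] + 0.5*df["money_suffering"]
df["default"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

exog = sm.add_constant(df[["self_efficacy", "compulsive_buyer", "money_suffering"]])
model = sm.Logit(df["default"], exog).fit(disp=0)
print(np.exp(model.params))   # odds ratios: values > 1 indicate higher odds of default
```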
Abstract:
This study aims to be a contribution to a theoretical model that explains the effectiveness of the learning and decision-making processes by means of a feedback and mental models perspective. With appropriate mental models, managers should be able to improve their capacity to deal with dynamically complex contexts, in order to achieve long-term success. We present a set of hypotheses about the influence of feedback information and systems thinking facilitation on mental models and management performance. We explore, under controlled conditions, the role of mental models in terms of structure and behaviour. A test based on a simulation experiment with a system dynamics model was performed. Three out of the four hypotheses were confirmed. Causal diagramming positively influences mental model structure similarity, mental model structure similarity positively influences mental model behaviour similarity, and mental model behaviour similarity positively influences the quality of the decision.
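To give a flavour of the kind of simulation experiment referred to above, the sketch below integrates a single goal-seeking (balancing) feedback loop with Euler's method and compares decisions that differ only in the adjustment time. The structure and parameter values are illustrative assumptions, not the study's system dynamics model.

```python
# Toy system dynamics experiment: one stock with a balancing feedback loop.
import numpy as np

def simulate(adjustment_time, target=100.0, stock0=20.0, dt=0.25, horizon=40.0):
    steps = int(horizon / dt)
    stock = np.empty(steps + 1)
    stock[0] = stock0
    for t in range(steps):
        inflow = (target - stock[t]) / adjustment_time   # goal-seeking (balancing) feedback
        stock[t + 1] = stock[t] + dt * inflow             # Euler integration of the stock
    return stock

# The "decision" here is the choice of adjustment_time; faster adjustment closes the gap sooner.
for at in (2.0, 5.0, 10.0):
    print(at, simulate(at)[-1].round(2))
```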
Abstract:
Technological and scientific advances in healthcare have increasingly brought together fields such as Medicine and Mathematics, and it falls to science to make the means of investigation, diagnosis, monitoring and therapy more effective. The methods developed and the studies presented in this dissertation stem from the need to find answers and solutions to the various challenges identified in the field of anaesthesia. The nature of these problems necessarily leads to the application, adaptation and combination of methods and models from different areas of mathematics. Inducing anaesthesia in patients safely and reliably gives rise to an enormous variety of situations that must be taken into account and therefore demands intensive study. Prediction methods and models that allow better personalization of the dose administered to the patient, and better monitoring of the effect induced by each drug using more reliable signals, are thus fundamental for research and progress in this field. In this context, with the aim of clarifying the use of appropriate statistical treatment in anaesthesia studies, I address different statistical analyses to develop a model for predicting the cerebral response to two drugs during sedation. Data collected from volunteers are used to study the pharmacodynamic interaction between two anaesthetic drugs. In a first stage, linear regression models are explored to model the effect of the drugs on the cerebral signal BIS (the bispectral index of the EEG, an indicator of depth of anaesthesia); that is, to estimate the effect that drug concentrations have on the depression of the electroencephalogram (as assessed by the BIS). In the second stage, different interactions are identified with Cluster Analysis and the resulting model is validated with Discriminant Analysis, identifying homogeneous groups in the sample through clustering techniques. The number of groups in the sample was obtained, in an exploratory stage, with hierarchical clustering techniques, and the identified groups were characterized with k-means clustering. The reproducibility of the resulting clustering models was tested through discriminant analysis. The main conclusions indicate that the significance test of the linear regression equation showed the model to be highly significant. The propofol and remifentanil variables significantly influence the BIS, and the model improves with the inclusion of remifentanil. This work further shows that it is possible to build a model that groups drug concentrations according to their effect on the BIS signal, supported by clustering and discriminant techniques. The results clearly demonstrate the pharmacodynamic interaction of the two drugs when Cluster 1 and Cluster 3 are compared: for similar propofol concentrations the effect on the BIS differs markedly depending on the magnitude of the remifentanil concentration. In short, the study clearly shows that when remifentanil is administered together with propofol (a hypnotic), the effect of the latter is potentiated, driving the BIS signal to quite low values.
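The sketch below reproduces, on synthetic data, the two modelling stages described above: a linear regression of BIS on propofol and remifentanil concentrations, followed by k-means grouping of concentration/effect profiles. The concentration ranges, coefficients and number of clusters are illustrative assumptions, not the volunteers' data or the dissertation's fitted model.

```python
# (1) linear regression of BIS on drug concentrations, (2) k-means grouping of profiles.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n = 300
propofol = rng.uniform(0.5, 4.0, n)        # effect-site concentration, hypothetical range
remifentanil = rng.uniform(0.0, 4.0, n)    # effect-site concentration, hypothetical range
bis = 95 - 12*propofol - 5*remifentanil + rng.normal(0, 4, n)   # deeper anaesthesia -> lower BIS

X = np.column_stack([propofol, remifentanil])
reg = LinearRegression().fit(X, bis)
print("coefficients:", reg.coef_, "R^2:", round(reg.score(X, bis), 3))

# Exploratory grouping of (propofol, remifentanil, BIS) profiles, as in the clustering stage.
features = np.column_stack([propofol, remifentanil, bis])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for k in range(3):
    print("cluster", k, "mean BIS:", bis[labels == k].mean().round(1))
```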
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and a time-dependent intensity decay due to the photobleaching effect. This phenomenon is a consequence of a decrease in the radiation emission efficiency of the tagging protein, which occurs because the fluorophore permanently loses its ability to fluoresce, due to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation caused by photobleaching, makes long-term biological observation very difficult. In this paper a denoising algorithm for Poisson data, in which the photobleaching effect is explicitly taken into account, is described. The algorithm is designed in a Bayesian framework where the data fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by photobleaching. The prior term is built with Gibbs priors and log-Euclidean potential functions, suitable to cope with the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm. One example with real data is included to illustrate its application.
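The following toy example illustrates the observation model underlying the algorithm: Poisson counts whose underlying intensity decays exponentially over time. The decay rate and the simple per-pixel maximum-likelihood estimate (which assumes the decay rate is known) are illustrative stand-ins, not the paper's Bayesian estimator with Gibbs priors and log-Euclidean potentials.

```python
# Poisson counts with exponential (photobleaching) intensity decay, one pixel over time.
import numpy as np

rng = np.random.default_rng(3)
x_true = 40.0            # underlying fluorescence intensity of one pixel (hypothetical)
decay = 0.05             # photobleaching rate per frame (hypothetical)
T = 60
t = np.arange(T)
y = rng.poisson(x_true * np.exp(-decay * t))   # observed counts fade over time

# With a known decay rate, the maximum-likelihood intensity estimate is the ratio of
# total counts to the total decayed exposure, sum(y_t) / sum(exp(-decay * t)).
x_hat = y.sum() / np.exp(-decay * t).sum()
print("true intensity:", x_true, "ML estimate:", round(x_hat, 2))
```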
Abstract:
The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.
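For reference, a standard parametrization of the general two-Higgs-doublet scalar potential is sketched below; when a Higgs-family or CP symmetry forbids some of these terms, reintroducing the m12^2 term is the soft breaking referred to above. This is the textbook form of the potential, not notation taken from the paper itself.

```latex
% General two-Higgs-doublet scalar potential (standard parametrization).
\begin{align}
V ={}& m_{11}^{2}\,\Phi_1^\dagger\Phi_1 + m_{22}^{2}\,\Phi_2^\dagger\Phi_2
     - \left( m_{12}^{2}\,\Phi_1^\dagger\Phi_2 + \text{h.c.} \right) \nonumber\\
    &+ \tfrac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^2
     + \tfrac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2
     + \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
     + \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right) \nonumber\\
    &+ \left[ \tfrac{\lambda_5}{2}\left(\Phi_1^\dagger\Phi_2\right)^2
     + \lambda_6\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_1^\dagger\Phi_2\right)
     + \lambda_7\left(\Phi_2^\dagger\Phi_2\right)\left(\Phi_1^\dagger\Phi_2\right)
     + \text{h.c.} \right]
\end{align}
```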
Abstract:
We write down the renormalization-group equations for the Yukawa-coupling matrices in a general multi-Higgs-doublet model. We then assume that the matrices of the Yukawa couplings of the various Higgs doublets to right-handed fermions of fixed quantum numbers are all proportional to each other. We demonstrate that, in the case of the two-Higgs-doublet model, this proportionality is preserved by the renormalization-group running only in the cases of the standard type-I, II, X, and Y models. We furthermore show that a similar result holds even when there are more than two Higgs doublets: the Yukawa-coupling matrices to fermions of a given electric charge remain proportional under the renormalization-group running if and only if there is a basis for the Higgs doublets in which all the fermions of a given electric charge couple to only one Higgs doublet.
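A minimal sketch of the setting, in standard multi-Higgs-doublet notation (not necessarily the paper's own): the down-type Yukawa sector and the proportionality assumption discussed above; analogous matrices exist for up-type quarks and charged leptons.

```latex
% Down-type Yukawa couplings of an n-doublet model and the proportionality assumption.
\begin{equation}
-\mathcal{L}_Y \supset \sum_{k=1}^{n} \bar{Q}_L\, \Gamma_k\, \Phi_k\, d_R + \text{h.c.},
\qquad
\Gamma_k = c_k\, \Gamma \quad (c_k \in \mathbb{C}).
\end{equation}
% Statement above: this proportionality survives renormalization-group running if and only if
% there is a Higgs basis in which all fermions of a given electric charge couple to one doublet.
```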
Abstract:
A package of B-spline finite strip models is developed for the linear analysis of piezolaminated plates and shells. This package is coupled with a global optimization technique in order to enhance the performance of these types of structures, subject to various types of objective functions and/or constraints, with discrete and continuous design variables. The models considered are based on a higher-order displacement field and can be applied to the static, free vibration and buckling analyses of laminated adaptive structures with arbitrary lay-ups, loading and boundary conditions. Genetic algorithms, with either binary or floating-point encoding of the design variables, were considered to find optimal locations of piezoelectric actuators as well as to determine the best voltages applied to them in order to obtain a desired structural shape. These models provide an overall economy of computing effort for static and vibration problems.
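As a rough illustration of the optimization layer, the sketch below runs a binary-encoded genetic algorithm that selects actuator sites. The fitness function is a placeholder; in the paper the objective would be evaluated with the B-spline finite strip model, and the GA settings (selection, crossover, mutation) here are illustrative assumptions.

```python
# Minimal binary-encoded genetic algorithm for choosing actuator locations.
import numpy as np

rng = np.random.default_rng(4)
N_SITES, N_ACTUATORS, POP, GENS = 20, 4, 40, 100
target = rng.random(N_SITES)                      # placeholder "influence on desired shape" per site

def fitness(chrom):
    # Reward placements covering high-influence sites; penalize wrong actuator counts.
    penalty = abs(chrom.sum() - N_ACTUATORS)
    return target[chrom == 1].sum() - penalty

pop = (rng.random((POP, N_SITES)) < N_ACTUATORS / N_SITES).astype(int)
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]              # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_SITES)
        child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
        flip = rng.random(N_SITES) < 0.02                      # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("chosen actuator sites:", np.flatnonzero(best))
```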
Abstract:
We present new population growth models: generalized logistic models that are proportional to beta densities with shape parameters p and 2, where p > 1, with Malthusian parameter r. The complex dynamical behaviour of these models is investigated in the parameter space (r, p), in terms of topological entropy, using explicit methods, as the Malthusian parameter r increases. This parameter space is split into different regions according to the chaotic behaviour of the models.
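Assuming a map of the form f(x) = r x^(p-1)(1-x), which is proportional to a Beta(p, 2) density as stated above, the sketch below iterates the map and estimates a Lyapunov exponent as a crude chaos indicator. The paper works with topological entropy and explicit methods; the Lyapunov exponent here is only a proxy for illustration.

```python
# Iterate x -> r * x**(p-1) * (1-x) and estimate the Lyapunov exponent.
import numpy as np

def lyapunov(r, p, x0=0.3, n_transient=500, n_iter=5000):
    x, acc = x0, 0.0
    for i in range(n_transient + n_iter):
        deriv = r * ((p - 1) * x**(p - 2) * (1 - x) - x**(p - 1))
        if i >= n_transient:
            acc += np.log(abs(deriv) + 1e-300)   # guard against log(0) at superstable points
        x = r * x**(p - 1) * (1 - x)
    return acc / n_iter

# p = 2 reduces to the classical logistic map; positive values indicate chaotic regimes.
for r in (3.2, 3.7, 3.95):
    print(f"p=2, r={r}: lambda = {lyapunov(r, 2.0):.3f}")
```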
Abstract:
Recent literature has shown that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to "pathological meaningless situations", since traders can build sequences of portfolios whose risk level tends to −∞ and whose expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a "good deal". This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided as well. We point out that a critical property is the absence of short sales. In such a case we first construct a "shadow riskless asset" (SRA) without short sales, and then the good deal is given by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
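The toy computation below illustrates the caveat numerically: if the α-quantile of a position's loss is already a gain, leveraging the position drives its VaR to −∞ while its expected return grows without bound. The P&L distribution is an illustrative stand-in for the shadow riskless asset, not the paper's construction.

```python
# Leveraging a position whose 95% worst outcome is still a gain: VaR falls, return rises.
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.95
base_pnl = 0.02 + 0.01 * rng.standard_normal(100_000)   # hypothetical P&L per unit invested

def var(pnl, alpha):
    # VaR as the alpha-quantile of the loss (-pnl); a negative VaR means no loss at that level.
    return np.quantile(-pnl, alpha)

for leverage in (1, 10, 100):
    pnl = leverage * base_pnl
    print(f"leverage {leverage:>3}: E[return] = {pnl.mean():.3f}, VaR_95 = {var(pnl, alpha):.3f}")
```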
Abstract:
The aim of this paper is to analyze the forecasting ability of the CARR model proposed by Chou (2005) using the S&P 500. We extend the data sample, allowing for the analysis of different stock market circumstances, and propose the use of various range estimators in order to analyze their forecasting performance. Our results show that there are two range-based models that outperform the forecasting ability of the GARCH model: the Parkinson model is better for upward trends and for volatilities above and below the mean, while the CARR model is better for downward trends and mean volatilities.
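As one example of the range estimators mentioned above, the sketch below computes the Parkinson volatility estimator, sigma^2 = (ln(High/Low))^2 / (4 ln 2), averaged over days of a simulated intraday price path. The synthetic data and horizon are illustrative; the paper uses S&P 500 ranges.

```python
# Parkinson range-based volatility estimator on simulated intraday log-price paths.
import numpy as np

rng = np.random.default_rng(6)
n_days, true_sigma = 250, 0.01
park_terms = []
for _ in range(n_days):
    steps = true_sigma / np.sqrt(390) * rng.standard_normal(390)   # 390 one-minute log-returns
    path = np.concatenate([[0.0], np.cumsum(steps)])               # intraday log-price, open at 0
    park_terms.append((path.max() - path.min()) ** 2)              # (ln High - ln Low)^2

parkinson_var = np.mean(park_terms) / (4 * np.log(2))
print("Parkinson daily vol estimate:", round(float(np.sqrt(parkinson_var)), 4), "true:", true_sigma)
```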