975 results for Probabilistic graphical models
Abstract:
Despite the strong influence of plant architecture on crop yield, most crop models either ignore it or deal with it in a very rudimentary way. This paper demonstrates the feasibility of linking a model that simulates the morphogenesis and resultant architecture of individual cotton plants with a crop model that simulates the effects of environmental factors on critical physiological processes and resulting yield in cotton. First the varietal parameters of the models were made concordant. Then routines were developed to allocate the flower buds produced each day by the crop model amongst the potential positions generated by the architectural model. This allocation is done according to a set of heuristic rules. The final weight of individual bolls and the shedding of buds and fruit caused by water, N, and C stresses are processed in a similar manner. Observations of the positions of harvestable fruits, both within and between plants, made under a variety of agronomic conditions that had resulted in a broad range of plant architectures were compared to those predicted by the model with the same environmental inputs. As illustrated by comparisons of plant maps, the linked models performed reasonably well, though performance of the fruiting point allocation and shedding algorithms could probably be improved by further analysis of the spatial relationships of retained fruit. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We present two integrable spin ladder models which possess a general free parameter besides the rung coupling J. The models are exactly solvable by means of the Bethe ansatz method and we present the Bethe ansatz equations. We analyze the elementary excitations of the models which reveal the existence of a gap for both models that depends on the free parameter. (C) 2003 American Institute of Physics.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
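A minimal sketch of the back-to-back residual idea described above, assuming the two independent implementations are available as Python callables mapping an input record to an output vector (`reference_model` and `candidate_model` are hypothetical names, not from the paper):

```python
# Minimal sketch of back-to-back residual generation: drive two independent
# implementations with identical inputs and difference their outputs.
import numpy as np


def back_to_back_residuals(reference_model, candidate_model, inputs):
    """Return the per-step difference between the two implementations'
    outputs; the residuals stay at zero (up to round-off) while the codes agree."""
    ref = np.array([reference_model(u) for u in inputs], dtype=float)
    cand = np.array([candidate_model(u) for u in inputs], dtype=float)
    return cand - ref


# Usage with two deliberately different toy models:
inputs = np.linspace(0.0, 1.0, 5)
r = back_to_back_residuals(lambda u: [2.0 * u], lambda u: [2.0 * u + 0.01], inputs)
print(r)  # a persistent non-zero residual reveals a discrepancy
```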
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
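As a hedged illustration of the subspace comparison (the function name and tolerance are assumptions, and the paper's 'definite' classification requires the further subset-testing step not shown here), one can check whether the observed residuals stay inside the column space of a candidate error's feature matrix:

```python
# Illustrative sketch, not the paper's algorithm verbatim: test whether the
# observed residual vectors can be explained by a candidate error's subspace.
import numpy as np


def classify_error(feature_matrix, residual_samples, tol=1e-8):
    """feature_matrix: (n, k) basis of residual directions the error would impose.
    residual_samples: (n, m) observed residual vectors.
    Returns 'impossible' if the residuals leave the error's subspace,
    'possible' otherwise."""
    F = np.asarray(feature_matrix, dtype=float)
    R = np.asarray(residual_samples, dtype=float)
    # Project the residuals onto the column space of F; a large leftover means
    # this error alone cannot explain what was observed.
    coeffs, *_ = np.linalg.lstsq(F, R, rcond=None)
    leftover = np.linalg.norm(R - F @ coeffs)
    return "impossible" if leftover > tol * max(np.linalg.norm(R), 1.0) else "possible"
```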
Abstract:
This thesis set out to investigate the inferential logic of actions and their significations in situations that mobilize the notions of probabilistic composition and chance, as well as the role of signification models in the cognitive functioning of adults. Participants were 12 young adult students from the working class, volunteers of both sexes, attending a technical course integrated with the secondary-level Youth and Adult Education program. Three individual sessions were held, recorded in audio and in a spreadsheet, using two games, Likid Gaz and Lucky Cassino, from the software Missão Cognição (Haddad-Zubel, Pinkas & Pécaut, 2006), and the game Soma dos Dados (Silva, Rossetti & Cristo, 2012). The task procedures were adapted from Silva and Frezza (2011): 1) presentation of the game; 2) playing the game; 3) semi-structured interview; 4) application of three problem situations with intervention following the Clinical Method; 5) a new round of the game; and 6) two further problem situations without Clinical Method intervention. Levels of heuristic analysis, game comprehension and signification models were elaborated from the identification of particular procedures and significations in the games. The first study examined the implications of signification models and prior representations for adult thought, considering that the subject organizes his or her prior representations or schemes regarding an object in the form of signification models, as a function of the degree of complexity and novelty of the task and of its logical-mathematical structure, and that these evolve through the process of equilibration, which requires the demand to signify that aspect of reality. The second study investigated the notion of deducible combination as evidenced in the game Likid Gaz, identifying the role of signification models in the choice of procedures, which implied the rejection of systematization or enumeration strategies. The initial levels of heuristic analysis of the game predominated. The third study examined the notion of probability as observed in the game Lucky Cassino, in which most participants reached an intermediate level of game comprehension, with a greater diversity of signification models than in the other games, although the most elementary ones predominated. The synthesis of the notions of combination, probability and chance was explored in the fourth study through the game Soma dos Dados (Silva, Rossetti & Cristo, 2012), identifying that one limitation to an adequate understanding of the links intertwined in these notions is the signifying implication: if random A, then indeterminate D (notation A → D), with the construction of pseudo-necessities and pseudo-obligations, or even local necessities generalized inappropriately. The resistance or obstacles of the object should provoke perturbations, but the cognitive structure, the social environment and cultural models, and affectivity may interfere in this process.
Abstract:
This paper examines the performance of Portuguese equity funds investing in the domestic and in the European Union market, using several unconditional and conditional multi-factor models. In terms of overall performance, we find that National funds are neutral performers, while European Union funds under-perform the market significantly. These results do not seem to be a consequence of management fees. Overall, our findings are supportive of the robustness of conditional multi-factor models. In fact, Portuguese equity funds seem to be relatively more exposed to small caps and more value-oriented. Also, they present strong evidence of time-varying betas and, in the case of the European Union funds, of time-varying alphas too. Finally, in terms of market timing, our tests suggest that the mutual fund managers in our sample do not exhibit any market timing abilities. Nevertheless, we find some evidence of time-varying conditional market timing abilities, but only at the individual fund level.
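The abstract does not spell out the specification, but a common conditional form in this literature, in the spirit of Ferson and Schadt (1996), lets the alpha and beta of fund p vary linearly with lagged public information variables; a hedged, single-market-factor illustration (the paper's exact factors and instruments are not given above):

```latex
% Hedged illustration only (requires amsmath): conditional alpha and beta
% driven by lagged information z_{t-1}.
\begin{align*}
  r_{p,t} - r_{f,t} &= \alpha_p(z_{t-1}) + \beta_p(z_{t-1})\,(r_{m,t} - r_{f,t}) + \varepsilon_{p,t},\\
  \alpha_p(z_{t-1}) &= \alpha_{0p} + A_p^{\top} z_{t-1}, \qquad
  \beta_p(z_{t-1})  = \beta_{0p} + B_p^{\top} z_{t-1}.
\end{align*}
```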
Abstract:
Graphical user interfaces (GUIs) are critical components of today's software. Given their increased relevance, the correctness and usability of GUIs are becoming essential. This paper describes the latest results in the development of our tool to reverse engineer the GUI layer of interactive computing systems. We use static analysis techniques to generate models of the user interface behaviour from source code. Models help in graphical user interface inspection by allowing designers to concentrate on its more important aspects. One particular type of model that the tool is able to generate is state machines. The paper shows how graph theory can be useful when applied to these models. A number of metrics and algorithms are used in the analysis of aspects of the user interface's quality. The ultimate goal of the tool is to enable the analysis of interactive systems through GUI source code inspection.
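As a toy illustration of the kind of state-machine model and graph-based analysis mentioned above (the window and event names are invented, and this is not the tool's output format), a GUI behaviour model can be treated as a directed graph and checked for windows that can never be reached:

```python
# Toy GUI behaviour model: states are windows, edges are event transitions.
from collections import deque

transitions = {
    "Login":    {"ok": "Main", "cancel": "Login"},
    "Main":     {"open_settings": "Settings", "quit": "Login"},
    "Settings": {"close": "Main"},
    "Help":     {"close": "Main"},  # no edge leads here: a potential quality issue
}


def unreachable_states(transitions, start):
    """Breadth-first search from the start-up window; anything not visited
    can never be shown to the user and deserves inspection."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for target in transitions.get(state, {}).values():
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return sorted(set(transitions) - seen)


print(unreachable_states(transitions, "Login"))  # ['Help']
```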
Abstract:
Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper presents a generic model for language-independent reverse engineering of graphical user interface based applications, and we explore the integration of model-based testing techniques in our approach, thus allowing us to perform fault detection. A prototype tool has been constructed, which is already capable of deriving and testing a user interface behavioral model of applications written in Java/Swing.
Abstract:
Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering an abstract model of a user interface directly from the GUI's legacy code. We also present results from a case study. These results are encouraging and give evidence that the goal of reverse engineering user interfaces can be met with further work on this technique.
Abstract:
Color model representation allows any defined color spectrum of visible light, i.e. with a wavelength between 400 nm and 700 nm, to be characterized in a quantitative manner. To accomplish that, each model, or color space, is associated with a function that maps the spectral power distribution of the visible electromagnetic radiation into a space defined by a set of discrete values that quantify the color components composing the model. Some color spaces are sensitive to changes in lighting conditions. Others ensure the preservation of certain chromatic features, remaining immune to these changes. Therefore, it becomes necessary to identify the strengths and weaknesses of each model in order to justify the adoption of color spaces in image processing and analysis techniques. This chapter addresses the topic of digital imaging and its main standards and formats. Next, we define the mathematical model of the image acquisition sensor response, which enables the various color spaces to be assessed with the aim of determining their invariance to illumination changes.
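A small sketch of the invariance question raised above, using Python's standard `colorsys` module; scaling an RGB triple by a gain is only a crude stand-in for a real change in illumination, not the chapter's sensor model:

```python
# Scale the intensity of an RGB colour and observe that HSV hue is preserved
# while the raw RGB components and the value channel are not.
import colorsys

r, g, b = 0.60, 0.30, 0.10            # some reflectance-like RGB triple
for gain in (0.5, 1.0, 1.5):          # hypothetical illumination gains
    rr, gg, bb = (min(c * gain, 1.0) for c in (r, g, b))
    h, s, v = colorsys.rgb_to_hsv(rr, gg, bb)
    print(f"gain={gain:.1f}  RGB=({rr:.2f},{gg:.2f},{bb:.2f})  hue={h:.3f}  value={v:.3f}")
# The hue column stays constant across gains, illustrating why some colour
# components are (in)sensitive to changes in lighting conditions.
```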
Abstract:
Current software development relies increasingly on non-trivial coordination logic for combining autonomous services, often running on different platforms. As a rule, however, in typical non-trivial software systems, such a coordination layer is strongly weaved within the application at source code level. Therefore, its precise identification becomes a major methodological (and technical) problem whose importance in any program understanding or refactoring process can hardly be overstated. Open access to source code, as granted in OSS certification, provides an opportunity for the development of methods and technologies to extract the relevant coordination information from source code. This paper is a step in this direction, combining a number of program analysis techniques to automatically recover coordination information from legacy code. Such information is then expressed as a model in Orc, a general purpose orchestration language.
Abstract:
Graphical user interfaces (GUIs) are critical components of today's open source software. Given their increased relevance, the correctness and usability of GUIs are becoming essential. This paper describes the latest results in the development of our tool to reverse engineer the GUI layer of interactive computing open source systems. We use static analysis techniques to generate models of the user interface behavior from source code. Models help in graphical user interface inspection by allowing designers to concentrate on its more important aspects. One particular type of model that the tool is able to generate is state machines. The paper shows how graph theory can be useful when applied to these models. A number of metrics and algorithms are used in the analysis of aspects of the user interface's quality. The ultimate goal of the tool is to enable the analysis of interactive systems through GUI source code inspection.
Abstract:
Nowadays, despite improvements in usability and intuitiveness, users still have to adapt to the systems they are offered in order to satisfy their needs. For instance, they must learn how to achieve tasks, how to interact with the system, and how to comply with the system's specifications. This paper proposes an approach to improve this situation by enabling graphical user interface redefinition through virtualization and computer vision, with the aim of increasing the system's usability. To achieve this goal, the approach is based on enriched task models, virtualization and picture-driven computing.
Abstract:
A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in the business context. The main purpose of this study is to compare the predictive ability of five models based on three statistical techniques (Discriminant Analysis, Logit and Probit) and two models based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms belonging to the textile and clothing industry, over the period 2003–09. Results show that all the models performed well, with an overall correct classification level higher than 90%, and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors and auditors. Additionally, this research can be of great value to the devisers of national economic policies that aim to reduce industrial unemployment.
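For concreteness, a minimal sketch of the Logit technique with the type I/type II error bookkeeping used above, on synthetic data (the firm ratios and class distributions are invented, not the paper's dataset):

```python
# Logit bankruptcy classification on synthetic data with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_healthy, n_failed = 420, 125
# Two invented ratios (e.g. liquidity, profitability); failed firms are drawn
# from a shifted distribution so the classes overlap but remain separable.
X = np.vstack([rng.normal([1.5, 0.08], 0.4, size=(n_healthy, 2)),
               rng.normal([0.9, -0.05], 0.4, size=(n_failed, 2))])
y = np.r_[np.zeros(n_healthy), np.ones(n_failed)]   # 1 = bankrupt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

type_i = np.mean(pred[y_te == 1] == 0)   # bankrupt firm classified as healthy
type_ii = np.mean(pred[y_te == 0] == 1)  # healthy firm classified as bankrupt
print(f"accuracy={model.score(X_te, y_te):.2%}  type I={type_i:.2%}  type II={type_ii:.2%}")
```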
Abstract:
This paper presents a catalog of smells in the context of interactive applications. These so-called usability smells are indicators of poor design on an application's user interface, with the potential to hinder not only its usability but also its maintenance and evolution. To eliminate such usability smells, we discuss a set of program/usability refactorings. In order to validate the presented usability smells catalog, and the associated refactorings, we present a preliminary empirical study with software developers in the context of a real open source hospital management application. Moreover, a tool that computes graphical user interface behavior models, given the applications' source code, is used to automatically detect usability smells at the model level.