981 results for Eclipse Modeling Framework (EMF)
Abstract:
A class of multi-process models is developed for collections of time indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
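The central data-augmentation identity from Polson et al. (2013) underlying these samplers can be stated as follows (the moment formula quoted after it is the standard Pólya-Gamma mean; the dissertation's exact approximation scheme may differ):

$$\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} = 2^{-b}\, e^{\kappa \psi} \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\, p(\omega \mid b, 0)\, d\omega, \qquad \kappa = a - \tfrac{b}{2},$$

where $\omega \sim \mathrm{PG}(b, 0)$. Conditional on $\omega$, the likelihood in $\psi$ is Gaussian, so the natural-parameter states can be updated with standard linear-Gaussian machinery. For large $b$ (high counts), $\mathrm{PG}(b, c)$ concentrates around its mean $\mathbb{E}[\omega] = \frac{b}{2c}\tanh(c/2)$, which motivates a moment-matched Gaussian approximation.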
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
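A minimal sketch of this link, assuming a Gaussian random-walk state for each term's natural parameter and a softmax map to multinomial probabilities (dimensions and noise scale are illustrative, not the dissertation's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
V, T = 5, 50                                  # vocabulary size, time points
psi = np.zeros((T, V))                        # natural parameters per term
for t in range(1, T):
    psi[t] = psi[t - 1] + rng.normal(scale=0.3, size=V)       # state evolution
probs = np.exp(psi) / np.exp(psi).sum(axis=1, keepdims=True)  # softmax link
doc_counts = rng.multinomial(100, probs[10])  # a 100-word document at t = 10
```

Autocorrelation in the states induces smoothly drifting term probabilities, which is the mechanism the abstract describes for themes evolving over time.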
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series which record neuron-level firing patterns in rhesus monkeys. Each monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which the neuron’s firing pattern fluctuates between features associated with each sound in isolation.
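One schematic form consistent with this description, stated here as an assumption rather than the dissertation's exact specification, mixes the two isolated-sound firing rates with a GP-driven, sigmoid-squashed weight:

$$y_t \sim \mathrm{Poisson}\bigl(\lambda(t)\bigr), \qquad \lambda(t) = \alpha(t)\,\lambda_A + \bigl(1 - \alpha(t)\bigr)\,\lambda_B, \qquad \alpha(t) = \sigma\bigl(f(t)\bigr),\quad f \sim \mathcal{GP}(0, k),$$

where $\lambda_A$ and $\lambda_B$ are the rates associated with each sound in isolation and the GP prior on $f$ enforces smooth variation of the mixing weight over time.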
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age-specific latent natural ability class and a performance-enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When a player’s performance significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
Petri Nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems and have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand. Therefore, HLPNs are more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied, which are integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities in this framework. For analysis, this framework combines three complementary techniques: simulation, explicit state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method, but it only covers some execution paths in an HLPN model. Explicit state model checking covers all the execution paths but suffers from the state explosion problem. BMC is a tradeoff, as it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools suite developed for this framework in this dissertation integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
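As a point of reference for why simulation is straightforward, here is a minimal sketch of one simulation path over a low-level place/transition net (an assumed toy encoding, not the PIPE+ implementation; an HLPN would carry structured tokens and guard formulas instead of plain counts):

```python
# Transitions with weighted input/output arcs over places p1..p3.
net = {
    "t1": {"in": {"p1": 1}, "out": {"p2": 1}},
    "t2": {"in": {"p2": 2}, "out": {"p3": 1}},
}
marking = {"p1": 2, "p2": 0, "p3": 0}   # initial token counts

def enabled(t, m):
    return all(m[p] >= w for p, w in net[t]["in"].items())

def fire(t, m):
    m = dict(m)
    for p, w in net[t]["in"].items():
        m[p] -= w
    for p, w in net[t]["out"].items():
        m[p] += w
    return m

# One execution path: fire any enabled transition until deadlock.
while True:
    ts = [t for t in net if enabled(t, marking)]
    if not ts:
        break
    marking = fire(ts[0], marking)
print(marking)   # {'p1': 0, 'p2': 0, 'p3': 1}
```

A single run like this covers exactly one path; model checking, by contrast, must explore the whole reachable marking graph, which is where the state explosion problem arises.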
Abstract:
SELECTOR is a software package for studying the evolution of multiallelic genes under balancing or positive selection while simulating complex evolutionary scenarios that integrate demographic growth and migration in a spatially explicit population framework. Parameters can be varied both in space and time to account for geographical, environmental, and cultural heterogeneity. SELECTOR can be used within an approximate Bayesian computation estimation framework. We first describe the principles of SELECTOR and validate the algorithms by comparing its outputs for simple models with theoretical expectations. Then, we show how it can be used to investigate genetic differentiation of loci under balancing selection in interconnected demes with spatially heterogeneous gene flow. We identify situations in which balancing selection reduces genetic differentiation between population groups compared with neutrality and explain conflicting outcomes observed for human leukocyte antigen loci. These results and three previously published applications demonstrate that SELECTOR is efficient and robust for building insight into human settlement history and evolution.
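A minimal sketch of the rejection-ABC loop that a simulator like SELECTOR can plug into; `simulate()` stands in for a SELECTOR run, and the summary statistic, prior, and tolerance are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(s):
    # Toy stand-in simulator: selection coefficient -> summary statistic.
    return rng.normal(loc=1.0 / (1.0 + s), scale=0.05)

observed = 0.62          # observed summary statistic from data
accepted = []
for _ in range(50_000):
    s = rng.uniform(0, 2)                     # draw from the prior
    if abs(simulate(s) - observed) < 0.01:    # tolerance epsilon
        accepted.append(s)
print(f"posterior mean ≈ {np.mean(accepted):.2f} from {len(accepted)} draws")
```

The accepted draws approximate the posterior over the parameter; in practice a full SELECTOR-based analysis would use richer summary statistics and a more efficient ABC scheme than plain rejection.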
Abstract:
The current study builds upon a previous study, which examined the degree to which the lexical properties of students’ essays could predict their vocabulary scores. We expand on this previous research by incorporating new natural language processing indices related to both the surface and discourse levels of students’ essays. Additionally, we investigate the degree to which these NLP indices can be used to account for variance in students’ reading comprehension skills. We calculated linguistic essay features using our framework, ReaderBench, an automated text analysis tool that calculates indices related to the linguistic and rhetorical features of text. University students (n = 108) produced timed (25-minute), argumentative essays, which were then analyzed by ReaderBench. Additionally, they completed the Gates-MacGinitie Vocabulary and Reading comprehension tests. The results of this study indicated that two indices were able to account for 32.4% of the variance in vocabulary scores and 31.6% of the variance in reading comprehension scores. Follow-up analyses revealed that these models further improved when only considering essays that contained multiple paragraphs (R² values = .61 and .49, respectively). Overall, the results of the current study suggest that natural language processing techniques can help to inform models of individual differences among student writers.
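A minimal sketch of the analysis pattern being reported, regressing test scores on a small set of essay indices and reading off the variance explained; the data are simulated and the two index names are placeholders, not ReaderBench's actual labels:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 108
X = rng.normal(size=(n, 2))   # e.g., a word-frequency index, a cohesion index
scores = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.8, size=n)
model = LinearRegression().fit(X, scores)
print(f"R^2 = {model.score(X, scores):.3f}")  # proportion of variance accounted for
```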
Abstract:
Different types of serious games, such as computer games, mobile games, Lego-based games, virtual worlds, and web-based games, have been used to elucidate computer science areas. Different evaluation techniques have been applied, such as questionnaires, interviews, discussions, and tests. Simulation has been widely used in computer science as a motivational and interactive learning tool. This paper aims to evaluate the possibility of successfully implementing simulation in computer programming modules. A framework is proposed to measure the impact of serious games on enhancing students’ understanding of key computer science concepts. Experiments will be held with EEECS students at Queen’s University Belfast to test the framework and attain results.
Abstract:
Planning is an essential process in teams of multiple agents pursuing a common goal. When the effects of actions undertaken by agents are uncertain, evaluating the potential risk of such actions alongside their utility might lead to more rational decisions upon planning. This challenge has recently been tackled for single-agent settings, yet domains with multiple agents that hold diverse viewpoints towards risk still necessitate comprehensive decision making mechanisms that balance the utility and risk of actions. In this work, we propose a novel collaborative multi-agent planning framework that integrates (i) a team-level online planner under uncertainty that extends the classical UCT approximate algorithm, and (ii) a preference modeling and multicriteria group decision making approach that allows agents to find accepted and rational solutions for planning problems, predicated on the attitude each agent adopts towards risk. When utilised in risk-pervaded scenarios, the proposed framework can reduce the cost of reaching the common goal and increase effectiveness by appropriately balancing the risk and utility of actions when making collective decisions.
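A minimal sketch of how a UCT-style selection rule can be made risk-aware, assuming a variance-based risk proxy and a tunable penalty weight `lambda_risk` (illustrative choices, not the paper's exact formulation):

```python
import math

def select_child(children, total_visits, c=1.4, lambda_risk=0.5):
    """Pick the child maximizing a risk-penalized UCB1 score.

    Each child carries cumulative reward `value`, cumulative squared
    reward `sq_value`, and visit count `visits`.
    """
    def score(ch):
        mean = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total_visits) / ch["visits"])
        variance = max(0.0, ch["sq_value"] / ch["visits"] - mean ** 2)
        return mean + explore - lambda_risk * math.sqrt(variance)
    return max(children, key=score)

children = [
    {"value": 9.0,  "sq_value": 9.5,  "visits": 10},  # steady, modest payoff
    {"value": 10.0, "sq_value": 40.0, "visits": 10},  # higher mean, volatile
]
print(select_child(children, total_visits=20))        # picks the steady arm
```

With `lambda_risk = 0` the rule reduces to plain UCB1 and would favour the volatile arm; the penalty term is one simple way to encode a risk-averse attitude in the selection step.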
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, parameter sensitivity, and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which of them are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
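A minimal sketch of a variance-based first-order Sobol estimator in the Saltelli/pick-freeze style, with a cheap analytic function standing in for the meta-model of the transport process (the functional form and parameter count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Toy surrogate over 3 uncertain parameters, standing in for the meta-model.
    return x[:, 0] + 2 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(size=(n, d))       # two independent Monte Carlo matrices
B = rng.uniform(size=(n, d))
fA, fB = g(A), g(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # "pick" column i from B, "freeze" the rest
    S_i = np.mean(fB * (g(ABi) - fA)) / var   # Saltelli-style first-order estimator
    print(f"S_{i+1} ≈ {S_i:.3f}")
```

The first-order index S_i measures the fraction of output variance attributable to parameter i alone; using a meta-model for g keeps the cost of the many required model evaluations manageable, as the abstract describes.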
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This work provides a holistic investigation into the realm of feature modeling within software product lines. The work presented identifies limitations and challenges within current feature modeling approaches. These limitations include, but are not limited to, the dearth of satisfactory cognitive presentation, inconvenience in scalable systems, inflexibility in adapting to changes, lack of predictability of model behavior, and the absence of probabilistic quantification of a model’s implications and of decision support for reasoning under uncertainty. The work in this thesis addresses these challenges by proposing a series of solutions. The first solution is the construction of a Bayesian Belief Feature Model (sketched below), a novel modeling approach capable of quantifying the uncertainty measures in model parameters by incorporating probabilistic modeling into a conventional modeling approach. The Bayesian Belief Feature Model presents a new, enhanced feature modeling approach in terms of truth quantification and visual expressiveness. The second solution addresses the unclear support for reasoning under uncertainty and the challenging constraint satisfaction problem in software product lines. This has been done through the development of a mathematical reasoner, designed to satisfy the model constraints by considering probability weights for all involved parameters and quantifying the actual implications of the problem constraints. The developed Uncertain Constraint Satisfaction Problem approach has been tested and validated through a set of designated experiments. In summary, the main contributions of this thesis include the following:
• Developing a framework for probabilistic graphical modeling to build the proposed Bayesian Belief Feature Model.
• Extending the model to enhance visual expressiveness through the integration of colour degree variation, in which the colour varies with respect to predefined probabilistic weights.
• Enhancing the constraint satisfaction problem with uncertainty measurement of the parameters’ truth assumptions.
• Validating the developed approach against different experimental settings to determine its functionality and performance.
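A toy sketch of the core idea behind a Bayesian belief feature model: an optional feature's inclusion is encoded as a conditional probability given its parent feature, so configuration questions become probabilistic queries. Feature names and probability values are illustrative only, not the thesis's model:

```python
# Parent feature "Payment" is mandatory, so it is present with certainty.
p_parent = {"Payment": 1.0}
# Conditional probabilities of optional child features given the parent.
p_child_given_parent = {"CreditCard": 0.7, "PayPal": 0.5}

def inclusion_prob(child):
    """Marginal probability that `child` appears in a configuration."""
    return p_child_given_parent[child] * p_parent["Payment"]

for feature in p_child_given_parent:
    print(feature, inclusion_prob(feature))
```

Reasoning under uncertainty then amounts to querying such a network, for example for the probability that a configuration satisfies a cross-tree constraint.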
Abstract:
Coronal jets represent important manifestations of ubiquitous solar transients, which may be the source of significant mass and energy input to the upper solar atmosphere and the solar wind. While the energy involved in a jet-like event is smaller than that of “nominal” solar flares and coronal mass ejections (CMEs), jets share many common properties with these phenomena, in particular the explosive, magnetically driven dynamics. Studies of jets could, therefore, provide critical insight for understanding the larger, more complex drivers of solar activity. On the other side of the size spectrum, the study of jets could also supply important clues on the physics of transients close to or at the limit of current spatial resolution, such as spicules. Furthermore, jet phenomena may hint at basic processes for heating the corona and accelerating the solar wind; consequently, their study gives us the opportunity to attack a broad range of solar-heliospheric problems.
Abstract:
Background: Over the last few decades, the prevalence of young adults with disabilities (YAD) has steadily risen as a result of advances in medicine, clinical treatment, and biomedical technology that have enhanced their survival into adulthood. Despite investments in services, family supports, and insurance, they experience poor health status and barriers to successful transition into adulthood. Objectives: We investigated the collective roles of multi-faceted factors at the intrapersonal, interpersonal, and community levels within the social ecological framework on health-related outcomes, including the self-rated health (SRH) of YAD. The three specific aims are: 1) to examine sociodemographic differences and health insurance coverage in adolescence; 2) to investigate the role of social skills in relationships with family and peers developed in adolescence; and 3) to collectively explore the association of sociodemographic characteristics, social skills, and community participation in adolescence on SRH. Methods: Using longitudinal data (N=5,020) from the National Longitudinal Transition Study (NLTS2), we conducted multivariate logistic regression analyses to understand the association between insurance status as well as social skills in adolescence and YAD’s health-related outcomes. Structural equation modeling (SEM) assessed the confluence of multi-faceted factors from the social ecological model that link to health in early adulthood. Results: Compared with YAD who had private insurance, YAD who had public health insurance in adolescence are at higher odds of experiencing poorer health-related outcomes in self-rated health [adjusted odds ratio (aOR)=2.89, 95% confidence interval (CI): 1.16, 7.23], problems with health (aOR=2.60, 95%CI: 1.26, 5.35), and missing social activities due to health problems (aOR=2.86, 95%CI: 1.39, 5.85). At the interpersonal level, overall social skills developed through relationships with family and peers in adolescence do not appear to have an association with health-related outcomes in early adulthood. Finally, at the community level, community participation in adolescence does not have an association with SRH in early adulthood. Conclusions: Having public health insurance coverage does not equate to good health. YAD need additional supports to achieve positive health outcomes. The findings on social skills and community participation suggest other potential factors may be at play for the health-related outcomes of YAD and indicate the need for further investigation.
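A minimal sketch of how adjusted odds ratios of this kind are obtained, by exponentiating logistic-regression coefficients; the data and covariates below are simulated placeholders (the true coefficient is chosen so the recovered OR echoes the aOR ≈ 2.89 reported above), not NLTS2 variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
public_ins = rng.integers(0, 2, n)             # 1 = public insurance
age = rng.normal(18, 2, n)                     # an adjustment covariate
logit = -1.0 + 1.06 * public_ins + 0.02 * age  # log-OR 1.06 -> OR ~ 2.9
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # poor-outcome indicator

X = sm.add_constant(np.column_stack([public_ins, age]))
res = sm.Logit(y, X).fit(disp=0)
print(np.exp(res.params[1]))                   # adjusted odds ratio for insurance
```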
Abstract:
Software engineering focuses on the development of applications from different points of view using diverse approaches; one of them is Model-Driven Software Development (MDSD). Developing solutions under this approach has shown great advantages, such as speed, low cost, and quality in development, but also some disadvantages, such as the difficulty of intervening in the transformations, a lack of expressiveness in the models, and generation towards multiple platforms; the last of these arises because it is not possible to clearly delimit the characteristics of the target platform when specifying the models and transformations that constitute the development process. The present work attempts to mitigate the three difficulties mentioned above through the construction of a Domain-Specific Language (DSL) carrying all the functional information of the application, using UML package and class diagrams and BPMN business process diagrams. This work is part of the larger Metáfora proposal, within which an Eclipse plugin was developed based on the Eclipse Modeling Framework (EMF). The plugin acts as a wizard, guiding the user through the iterative transformation process until source code is produced. The software was developed so that the generation process can be parameterized according to the models and transformations defined by the development analyst with the help of the business analyst. There is complete freedom to configure the transformation sequences and apply them in a given order to a specific set of models in order to generate part of an application.
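A minimal sketch of such a configurable transformation chain, in Python rather than the plugin's actual EMF/Java stack: a functional model (a plain dict standing in for a DSL instance) flows through an ordered list of model-to-model and model-to-text steps. All names are illustrative, not the plugin's API:

```python
model = {"entity": "Invoice", "fields": [("number", "int"), ("total", "float")]}

def add_ids(m):
    # Model-to-model step: inject a key field into the entity.
    return dict(m, fields=[("id", "int")] + m["fields"])

def to_java(m):
    # Model-to-text step: emit a class skeleton from the refined model.
    lines = [f"public class {m['entity']} {{"]
    lines += [f"    private {t} {n};" for n, t in m["fields"]]
    lines.append("}")
    return "\n".join(lines)

pipeline = [add_ids, to_java]   # the analyst chooses the steps and their order
result = model
for step in pipeline:
    result = step(result)
print(result)
```

Reordering or swapping steps in `pipeline` mirrors the freedom the abstract describes: the same source models can be pushed through different transformation sequences to target different outputs.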
Abstract:
Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation to the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app’s Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of a synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions. Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate large-scale models, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that the model is sufficient to allow SymDroid to execute a range of apps.
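A toy sketch of the adaptive concretization idea: randomly concretize an influential unknown and hand the residual problem to a symbolic backend, here stubbed by brute force. The synthesis problem and the influence split are illustrative assumptions, not Pasket's code:

```python
import random
from itertools import product

def symbolic_solve(spec, fixed, free_names, domain):
    """Stand-in for a SAT/SMT call over the remaining unknowns."""
    for values in product(domain, repeat=len(free_names)):
        cand = dict(fixed, **dict(zip(free_names, values)))
        if all(spec(cand, x) for x in range(-5, 6)):   # check on sample inputs
            return cand
    return None

def adaptive_concretize(spec, influential, free, domain, tries=200):
    """Concretize influential unknowns at random; solve the rest symbolically."""
    for _ in range(tries):           # independent sub-problems, parallelizable
        fixed = {n: random.choice(domain) for n in influential}
        sol = symbolic_solve(spec, fixed, free, domain)
        if sol is not None:
            return sol
    return None

# Synthesize coefficients of a*x + b matching the reference behavior 3*x + 1.
spec = lambda h, x: h["a"] * x + h["b"] == 3 * x + 1
print(adaptive_concretize(spec, influential=["a"], free=["b"],
                          domain=list(range(-4, 5))))   # {'a': 3, 'b': 1}
```

Each random concretization yields a smaller, easier symbolic sub-problem, and the independent tries can run in parallel, which is the scaling mechanism the abstract attributes to adaptive concretization.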