985 results for Unified modeling language
Abstract:
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it has no learning hyperparameters that must be tuned using costly cross-validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
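The construction idea can be illustrated, under stated assumptions, with a minimal Python fragment: one tunable Gaussian RBF node with a diagonal covariance, and the closed-form leave-one-out mean square error of a linear-in-the-parameters model (the quantity a PSO search over the node's center and covariance would try to minimize). This is only a sketch of the general technique, not the authors' OFR/PSO implementation; all names are illustrative.

    import numpy as np

    def rbf_node(X, center, diag_cov):
        # One Gaussian RBF node with a tunable center vector and diagonal covariance
        d = (X - center) ** 2 / diag_cov
        return np.exp(-0.5 * d.sum(axis=1))

    def loo_mse(Phi, y):
        # Closed-form leave-one-out MSE for a linear-in-the-parameters model y ~ Phi w,
        # obtained from the hat matrix without retraining the model n times
        H = Phi @ np.linalg.pinv(Phi)            # hat matrix Phi (Phi^T Phi)^-1 Phi^T
        loo_residual = (y - H @ y) / (1.0 - np.diag(H))
        return np.mean(loo_residual ** 2)

In an OFR-style loop, each stage would append a candidate node's response vector as a new column of Phi and let PSO adjust (center, diag_cov) of that node so as to minimize loo_mse of the growing model.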
Abstract:
In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret, and understand the different concepts associated with different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT’s recommended metamodels for business and process design can be analyzed, extended, and then integrated into final solutions based on the proposed unified framework. Also, as an application of the framework, we demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology is called BP3, which stands for Business Process Patterns Perspective. The BP3 methodology uses a question-answer interface to capture different business requirements from designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements, by means of a set of production rules, to complete the inter-process communication among these patterns.
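A minimal sketch of the question-answer, production-rule idea described above (the questions, answers, and pattern names here are hypothetical, not taken from BP3 itself):

    # Each rule pairs a condition over the captured answers with a process pattern.
    RULES = [
        (lambda ans: ans.get("payment_before_delivery") == "yes", "PrepaymentPattern"),
        (lambda ans: ans.get("uses_intermediary") == "yes",       "BrokeredExchangePattern"),
        (lambda ans: True,                                         "DirectExchangePattern"),
    ]

    def select_patterns(answers):
        # Fire every production rule whose condition holds on the captured answers
        # and collect the matching pre-defined process patterns.
        return [pattern for condition, pattern in RULES if condition(answers)]

    print(select_patterns({"payment_before_delivery": "yes", "uses_intermediary": "no"}))
    # -> ['PrepaymentPattern', 'DirectExchangePattern']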
Abstract:
Many seemingly disparate approaches for marginal modeling have been developed in recent years. We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the proposed copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
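As a hedged illustration of the latent-threshold construction (the notation below is assumed for exposition and is not necessarily the paper's): with latent variables that have standard normal margins and copula C_theta, joint probabilities of correlated binary outcomes reduce to the copula evaluated at the marginal means, and probabilities of other response patterns follow by inclusion-exclusion over such terms.

    Y_{ij} = \mathbf{1}\{ Z_{ij} \le \Phi^{-1}(\mu_{ij}) \}, \qquad
    (Z_{i1},\dots,Z_{in_i}) \sim C_{\theta} \ \text{(standard normal margins)},
    \Pr(Y_{i1}=1,\dots,Y_{in_i}=1) = C_{\theta}(\mu_{i1},\dots,\mu_{in_i}), \qquad
    \mu_{ij} = g^{-1}(x_{ij}^{\top}\beta),

where g is a marginal link function, so that the regression coefficients retain a marginal (population-averaged) fixed-effects interpretation.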
Abstract:
Bibliography: p. 53.
Abstract:
Map algebra is a data model and simple functional notation for studying the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra that handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids, enabling spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra that incorporates templates and special processing constructs is described. The programming language is called MapScript. Example program scripts are presented that perform diverse neighborhood analyses for descriptive, model-based, and process-based analysis.
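A minimal Python sketch of the template idea, with the function and variable names assumed here purely for illustration (this shows the windowing mechanism, not MapScript syntax):

    import numpy as np

    def focal_apply(grid, template, func=np.mean, fill=np.nan):
        # Apply `func` over a moving window defined by a boolean template,
        # an illustrative stand-in for a map-algebra template on a grid layer.
        rows, cols = grid.shape
        tr, tc = template.shape
        pr, pc = tr // 2, tc // 2
        padded = np.pad(grid, ((pr, pr), (pc, pc)), constant_values=fill)
        out = np.empty_like(grid, dtype=float)
        for i in range(rows):
            for j in range(cols):
                window = padded[i:i + tr, j:j + tc]
                values = window[template]
                out[i, j] = func(values[~np.isnan(values)])
        return out

    # 3x3 Moore neighborhood, the classic cellular-automaton template
    moore = np.ones((3, 3), dtype=bool)
    smoothed = focal_apply(np.random.rand(10, 10), moore)

The same function with a different template or aggregation function gives, for example, morphological dilation or erosion (max or min over a structuring element) or local statistics such as a focal standard deviation.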
Abstract:
Theory building is one of the most crucial challenges faced by basic, clinical, and population research, which form the scientific foundations of health practices in contemporary societies. The objective of this study is to propose a Unified Theory of Health-Disease as a conceptual tool for modeling health-disease-care in the light of complexity approaches. With this aim, the epistemological basis of theoretical work in the health field and concepts of complexity theory as they relate to health problems are discussed. Secondly, the concepts of model-object, multi-planes of occurrence, modes of health, and the disease-illness-sickness complex are introduced and integrated into a unified theoretical framework. Finally, in the light of recent epistemological developments, the concept of Health-Disease-Care Integrals is updated as a complex reference object fit for modeling health-related processes and phenomena.
Abstract:
This article presents a systematic framework for modeling several classes of illness-sickness-disease, named Holopathogenesis. Holopathogenesis is defined as the processes of over-determination of diseases and related conditions taken as a whole, comprising selected facets of the complex object Health. First, the conceptual background of Holopathogenesis is presented as a series of significant interfaces (biomolecular-immunological, physiopathological-clinical, epidemiological-ecosocial). Second, propositions derived from Holopathogenesis are introduced in order to allow the disease-illness-sickness complex to be drawn as a hierarchical network of networks. Third, a formalization of the intra- and inter-level correspondences, over-determination processes, effects, and links of Holopathogenesis models is proposed. Finally, the Holopathogenesis frame is evaluated as a comprehensive theoretical pathology, taken as a preliminary step towards a unified theory of health-disease.
Abstract:
Most traditional software and database development approaches tend to be serial, not evolutionary, and certainly not agile, especially in their data-oriented aspects. Most of the more commonly used methodologies are strict, meaning they are composed of several stages, each with very specific associated tasks. A clear example is the Rational Unified Process (RUP), divided into Business Modeling, Requirements, Analysis & Design, Implementation, Testing, and Deployment. But what happens when the need for a well-designed and structured plan meets the reality of a small starting company that aims to build an entire user-experience solution? In this environment, resource control and time productivity are vital, requirements are in constant change, and so is the product itself. To succeed, a highly collaborative and evolutionary development approach is mandatory, and constantly changing requirements imply an iterative development process. The project focuses on Data Warehouse development and business modeling. This area is usually a tricky one: business knowledge belongs to the enterprise, and how it works, its goals, and what is relevant for analysis are internal business processes. Throughout this document it is explained why Agile Modeling was chosen, and how an iterative and evolutionary methodology allowed for reasonable planning and documentation while permitting development flexibility, from idea to product. More importantly, it shows how this approach was applied to the development of a retail-focused Data Warehouse: a productized Data Warehouse built on the knowledge of not one but several clients' needs, one that aims not just to store the usual business areas but to create an innovative set of business metrics by joining them with store-environment analysis, converting Business Intelligence into Actionable Business Intelligence.
Abstract:
Intelligent wheelchairs (IW) are technologies that can increase the autonomy and independence of elderly people and patients suffering from some kind of disability. Nowadays, intelligent wheelchairs and human-machine studies are very active research areas. This paper presents a methodology and a Data Analysis System (DAS) that provides a command language adapted to the user of the IW. This command language is a set of input sequences that can be created using inputs from an input device or a combination of the inputs available in a multimodal interface. The results show that there is statistical evidence to affirm that the mean evaluation of the DAS-generated command language is higher than the mean evaluation of the command language recommended by the health specialist (p-value = 0.002), with a sample of 11 cerebral palsy users. This work demonstrates that it is possible to adapt an intelligent wheelchair interface to the user even when users present heterogeneous and severe physical constraints.
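A small illustrative sketch of what such an adapted command language can look like in code; the input symbols and commands below are hypothetical and are not the ones produced by the DAS:

    # Each wheelchair command is triggered by a sequence of multimodal inputs.
    COMMAND_LANGUAGE = {
        ("blow", "blow"):      "move_forward",
        ("head_left",):        "turn_left",
        ("head_right",):       "turn_right",
        ("blow", "head_left"): "stop",
    }

    def interpret(sequence):
        # Resolve an observed input sequence to a wheelchair command, if defined.
        return COMMAND_LANGUAGE.get(tuple(sequence), "no_command")

    print(interpret(["blow", "blow"]))  # -> move_forward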
Abstract:
In this work, an adaptive modeling and spectral estimation scheme based on dual Discrete Kalman Filtering (DKF) is proposed for speech enhancement. Both the speech and noise signals are modeled by an autoregressive structure, which provides an underlying time-frame dependency and improves time-frequency resolution. The model parameters are arranged to obtain a combined state-space model and are also used to calculate instantaneous power spectral density estimates. The speech enhancement is performed by a dual discrete Kalman filter that simultaneously gives estimates for the models and the signals. This approach is particularly useful as a pre-processing module for parametric speech recognition systems that rely on spectral, time-dependent models. The system performance has been evaluated by a set of human listeners and by spectral distances; in both cases, the use of this pre-processing module led to improved results.
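As a minimal sketch of the underlying machinery (not the full dual scheme), the fragment below builds the companion-form state-space of an AR(p) signal and runs one predict/update cycle of a discrete Kalman filter; as the abstract describes, a dual scheme would additionally run a second filter over the model parameters so that signals and models are estimated simultaneously. All names here are illustrative.

    import numpy as np

    def ar_state_space(a, q):
        # Companion-form state-space for an AR(p) signal x_t = sum_k a_k x_{t-k} + w_t
        p = len(a)
        F = np.zeros((p, p))
        F[0, :] = a
        F[1:, :-1] = np.eye(p - 1)
        Q = np.zeros((p, p))
        Q[0, 0] = q                     # driving-noise variance
        H = np.zeros((1, p))
        H[0, 0] = 1.0                   # the current sample is observed (plus noise)
        return F, Q, H

    def kalman_step(x, P, y, F, Q, H, r):
        # One predict/update cycle of a discrete Kalman filter
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + r        # innovation variance
        K = P_pred @ H.T / S            # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new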
Abstract:
Project work presented as a partial requirement for the degree of Master in Statistics and Information Management
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics Engineering
Abstract:
During the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered very time-consuming, error-prone, and complex, involving several participants from different knowledge domains. ETL processes are among the most important components of a data warehousing system, and they are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of a data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling ETL workflows at a more conceptual level, mapping them to real execution primitives through the use of a domain-specific language that allows for the generation of specific instances that can be executed in a commercial ETL tool.
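A hedged sketch of the pattern-to-primitives mapping described above; the pattern names, primitive names, and the mapping itself are hypothetical stand-ins for what the domain-specific language would generate:

    # Hypothetical mapping from conceptual ETL patterns (as they might appear in a
    # BPMN workflow) to execution primitives of an ETL tool.
    PATTERN_PRIMITIVES = {
        "SurrogateKeyPipeline":    ["lookup_dimension_key", "generate_sequence", "insert_row"],
        "SlowlyChangingDimension": ["detect_change", "expire_current_row", "insert_new_version"],
        "DataQualityCheck":        ["validate_nulls", "validate_domain", "route_rejects"],
    }

    def expand_workflow(patterns):
        # Expand a conceptual workflow (an ordered list of patterns) into a flat
        # skeleton of primitives that a commercial ETL tool could execute.
        steps = []
        for name in patterns:
            steps.extend(f"{name}.{p}" for p in PATTERN_PRIMITIVES[name])
        return steps

    skeleton = expand_workflow(["DataQualityCheck", "SurrogateKeyPipeline"])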
Abstract:
Today it is easy to find many tools for defining data migration schemas among different types of information systems. Data migration processes are implemented across a very diverse range of applications, ranging from conventional operational systems to data warehousing platforms. The implementation of a data migration process often involves serious planning, including the development of conceptual migration schemas at early stages. Such schemas help architects and engineers plan and discuss the most adequate way to migrate data between two different systems. In this paper we present and discuss a way of enriching data migration conceptual schemas in BPMN using a domain-specific language, demonstrating how to convert such enriched schemas to a first corresponding physical representation (a skeleton) in a conventional ETL implementation tool like Kettle.
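To make the skeleton idea concrete, the fragment below emits a minimal transformation skeleton as XML. The tag names follow the general shape of a Kettle (.ktr) transformation but are simplified for illustration and are not guaranteed to load unchanged in the tool.

    import xml.etree.ElementTree as ET

    def kettle_skeleton(name, steps):
        # Build a minimal transformation skeleton: one <info> block plus one
        # <step> element per (name, type) pair taken from the conceptual schema.
        root = ET.Element("transformation")
        info = ET.SubElement(root, "info")
        ET.SubElement(info, "name").text = name
        for step_name, step_type in steps:
            step = ET.SubElement(root, "step")
            ET.SubElement(step, "name").text = step_name
            ET.SubElement(step, "type").text = step_type
        return ET.tostring(root, encoding="unicode")

    print(kettle_skeleton("customers_migration",
                          [("read_source", "TableInput"), ("write_target", "TableOutput")]))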
Abstract:
Dissertation for the Integrated Master's degree in Civil Engineering