949 results for Multivariate Lifetime Data
Abstract:
The aim of this paper is to develop models for experimental open-channel water delivery systems and to assess the use of three data-driven modeling tools toward that end. Water delivery canals are nonlinear dynamical systems and should therefore be modeled to meet given operational requirements while capturing all relevant dynamics, including transport delays. Typically, the derivation of first-principles models for open-channel systems is based on the Saint-Venant equations for shallow water, which is a time-consuming task and demands specific expertise. The present paper proposes and assesses the use of three data-driven modeling tools: artificial neural networks, composite local linear models and fuzzy systems. The canal of the Hydraulics and Canal Control Nucleus (Évora University, Portugal) is used as a benchmark: the models are identified using data collected from the experimental facility, and their performance is then assessed against suitable validation criteria. The performance of all models is compared with each other and against the experimental data to show the effectiveness of such tools in capturing all significant dynamics within the canal system and, therefore, in providing accurate nonlinear models that can be used for simulation or control. The models are available upon request from the authors.
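The identification step above can be illustrated with a deliberately small sketch: fitting a single linear ARX submodel, one possible building block of the composite local linear models mentioned, to input-output data by least squares. The system, its parameters and the data below are illustrative assumptions, not the paper's actual canal models.

```python
def fit_arx(u, y, delay):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k-delay] (one toy local
    linear submodel; a stand-in for the paper's identified models)."""
    ks = range(delay, len(y) - 1)
    phi = [(y[k], u[k - delay]) for k in ks]   # regressors
    tgt = [y[k + 1] for k in ks]               # one-step-ahead targets
    # Solve the 2x2 normal equations (phi' phi) [a b]' = phi' tgt directly
    s11 = sum(p * p for p, _ in phi)
    s12 = sum(p * q for p, q in phi)
    s22 = sum(q * q for _, q in phi)
    t1 = sum(p * t for (p, _), t in zip(phi, tgt))
    t2 = sum(q * t for (_, q), t in zip(phi, tgt))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Noise-free synthetic step response of a known system (a=0.9, b=0.5, delay=2),
# mimicking a gate-opening step applied to a canal pool
u = [1.0 if 5 <= k < 20 else 0.0 for k in range(40)]
y = [0.0] * 40
for k in range(2, 39):
    y[k + 1] = 0.9 * y[k] + 0.5 * u[k - 2]
a, b = fit_arx(u, y, delay=2)
```

With noise-free data the estimator recovers the true parameters; real canal data would require validation on a held-out data set, as the abstract describes.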
Abstract:
Most Portuguese historical organs date from the late eighteenth or early nineteenth century. During this period an unusual number of instruments was built in Lisbon and the surrounding areas by António Xavier Machado e Cerveira (1756-1828) and other, less prolific, organ builders. The study of these organs, many of which (restored or not) remain close to their original condition, allows the identification of a type of instrument with a specific morphology, clearly emancipated from the so-called «Iberian organ». However, until very recently, no music was known that suited the idiosyncrasies of those instruments. The recent study of the organ works of José Marques e Silva (1782-1837) has clarified this situation. Well known during his lifetime as an organist and composer, José Marques e Silva was one of the last masters of the Patriarchal Seminary. The importance of his musical output lies not only in a substantial number of works of firmly established authorship (written, for the most part, for mixed choir with obbligato organ accompaniment) but also in the close relationship between his writing and the morphology of the organs built in Portugal during his lifetime. This article emphasises the importance of José Marques e Silva (undoubtedly the most significant Portuguese organ composer of his time), underlining the relevance of his works for solo organ, whose extensive use of idiomatic writing and registration indications makes them one of the most important early nineteenth-century documents on organ practice in Portugal.
Abstract:
Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, using clustering analysis, visualization maps are generated, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps; therefore, the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
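The Gutenberg-Richter fitting step can be sketched with the classical maximum-likelihood (Aki) estimator of the b-value; this is a standard textbook estimator and not necessarily the exact fitting procedure used in the paper. The synthetic catalogue below is invented for illustration.

```python
import math
import random

def gr_b_value(mags, m_c):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value
    for magnitudes at or above the completeness magnitude m_c:
    b = log10(e) / (mean(M) - m_c)."""
    excess = [m - m_c for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(excess) / len(excess))

# Synthetic catalogue: above m_c, magnitudes follow an exponential law
# with rate b * ln(10); true b chosen as 1.0 (a typical global value)
random.seed(0)
m_c, true_b = 4.0, 1.0
mags = [m_c + random.expovariate(true_b * math.log(10)) for _ in range(20000)]
b_hat = gr_b_value(mags, m_c)
```

The per-region b-values obtained this way are one example of the "parameters used to reveal the relationships among regions".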
Abstract:
High-risk recurrence/progression bladder tumours are treated with Bacillus Calmette-Guérin (BCG) immunotherapy after complete resection of the tumour. Approximately 75% of these tumours express the uncommon carbohydrate antigen sialyl-Tn (sTn), a surrogate biomarker of tumour aggressiveness. Such changes in the glycosylation of cell-surface proteins influence the tumour microenvironment and immune responses that may modulate treatment outcome and the course of disease. The aim of this work is to determine the efficiency of BCG immunotherapy against tumours expressing sTn and the sTn-related antigen sialyl-6-T (s6T). METHODS: In a retrospective design, 94 tumours from patients treated with BCG were screened for sTn and s6T expression. In vitro studies were conducted to determine the interaction of BCG with a high-grade bladder cancer cell line overexpressing sTn. RESULTS: Of the 94 cases evaluated, 36 had recurrence after BCG treatment (38.3%). Treatment outcome was influenced by age over 65 years (HR=2.668; (1.344-5.254); P=0.005), maintenance schedule (HR=0.480; (0.246-0.936); P=0.031) and multifocality (HR=2.065; (1.033-4.126); P=0.040). sTn or s6T expression was associated with BCG response (P=0.024; P<0.0001) and with increased recurrence-free survival (P=0.001). Multivariate analyses showed that sTn and/or s6T were independent predictive markers of recurrence after BCG immunotherapy (HR=0.296; (0.148-0.594); P=0.001). In vitro studies demonstrated higher adhesion and internalisation of the bacillus in cells expressing sTn, promoting cell death. CONCLUSION: s6T is described for the first time in bladder tumours. Our data strongly suggest that BCG immunotherapy is efficient against sTn- and s6T-positive tumours. Furthermore, sTn and s6T expression are independent predictive markers of BCG treatment response and may be useful in identifying patients who could benefit most from this immunotherapy.
Abstract:
OBJECTIVE: To assess overall survival of women with cervical cancer and to describe associated prognostic factors. METHODS: A total of 3,341 cases of invasive cervical cancer diagnosed at the Brazilian Cancer Institute, Rio de Janeiro, southeastern Brazil, between 1999 and 2004 were selected. Clinical and pathological characteristics and follow-up data were collected. Survival analysis was performed using Kaplan-Meier curves, and multivariate analysis using the Cox model. RESULTS: Of all cases analyzed, 68.3% had locally advanced disease at the time of diagnosis. The 5-year overall survival was 48%. After multivariate analysis, tumor staging at diagnosis was the single variable significantly associated with prognosis (p<0.001). A dose-response relationship was seen between mortality and clinical staging, ranging from 27.8 to 749.6 per 1,000 case-years in women with stage I and stage IV disease, respectively. CONCLUSIONS: The study showed that early detection through prevention programs is crucial to increase cervical cancer survival.
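The Kaplan-Meier analysis mentioned can be sketched with a short product-limit estimator on toy follow-up data; this is a generic illustration, not the study's actual analysis, and the numbers below are invented.

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.
    events[i] = 1 for an observed event, 0 for a censored observation."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []  # (event time, S(t)) pairs
    for t, grp in groupby(data, key=lambda te: te[0]):
        grp = list(grp)
        deaths = sum(e for _, e in grp)
        if deaths:
            s *= 1.0 - deaths / at_risk   # product-limit update
            curve.append((t, s))
        at_risk -= len(grp)               # events and censorings leave the risk set
    return curve

# Toy follow-up data (time, event indicator): events at t=1, 2, 4;
# censored observations at t=3 and t=5
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Censored subjects leave the risk set without contributing a factor, which is exactly what distinguishes this estimator from a naive survival proportion.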
Abstract:
Research on the problem of feature selection for clustering continues to develop. It is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches focusing on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), in which a set of latent variables indicates the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover the ground truth. An application to real data from official statistics shows its usefulness.
Abstract:
Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are performed simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion to select the number of clusters (Wallace and Boulton, 1968). For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the use of the Bayesian information criterion (BIC) (Schwarz, 1978) and the integrated completed likelihood (ICL) (Biernacki et al., 2000) on synthetic data. The results obtained illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL, since it is faster, which is especially relevant when dealing with large data sets.
Abstract:
Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. To estimate the number of clusters, one often resorts to information criteria such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968) and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data. They use an MML criterion to select the number of clusters and a variant of the EM algorithm to estimate the model parameters. This EM variant seamlessly integrates model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained on synthetic datasets are encouraging. The main advantage of the proposed approach, compared with the criteria referred to above, is its speed of execution, which is especially relevant when dealing with large data sets.
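The EM machinery shared by the three abstracts above can be sketched for a fixed number of components; the MML penalty and the component-annihilation step of Figueiredo and Jain (2002) are deliberately omitted, so this is only a baseline illustration, not the proposed algorithm. The data and all settings are invented.

```python
import math
import random

def em_multinomial_mixture(data, k, n_cats, iters=100, seed=1):
    """EM for a finite mixture of independent multinomial (categorical)
    features. Toy sketch: fixed k, no MML model selection."""
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    pi = [1.0 / k] * k  # mixing weights
    # theta[j][f][c] = P(feature f takes category c | component j), random init
    theta = []
    for _ in range(k):
        comp = []
        for _ in range(d):
            raw = [rng.random() + 0.5 for _ in range(n_cats)]
            z = sum(raw)
            comp.append([p / z for p in raw])
        theta.append(comp)
    for _ in range(iters):
        # E-step: responsibilities r[i][j] of component j for sample i
        r = []
        for x in data:
            w = [pi[j] * math.prod(theta[j][f][x[f]] for f in range(d))
                 for j in range(k)]
            z = sum(w)
            r.append([wi / z for wi in w])
        # M-step: re-estimate weights and category probabilities
        for j in range(k):
            nj = sum(ri[j] for ri in r)
            pi[j] = nj / n
            for f in range(d):
                for c in range(n_cats):
                    cnt = sum(r[i][j] for i in range(n) if data[i][f] == c)
                    theta[j][f][c] = (cnt + 1e-9) / (nj + n_cats * 1e-9)
    return pi, theta

# Two well-separated synthetic "clusters" of categorical observations
data = [(0, 0)] * 30 + [(1, 1)] * 30
pi, theta = em_multinomial_mixture(data, k=2, n_cats=2)
```

In the papers' approach, selecting k would happen inside this same loop via the MML criterion rather than by rerunning EM for each candidate k.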
Abstract:
Consider the problem of disseminating data from an arbitrary source node to all other nodes in a distributed computer system such as a Wireless Sensor Network (WSN). We assume that wireless broadcast is used and that nodes do not know the topology. We propose new protocols that disseminate data faster and use fewer broadcasts than the simple broadcast protocol.
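The simple broadcast (flooding) protocol used as the baseline can be sketched as follows; the topology below is a made-up example, and it is used here only to count broadcasts, since the protocols themselves assume nodes do not know it.

```python
from collections import deque

def flood(adj, source):
    """Simple broadcast protocol: every node rebroadcasts exactly once,
    on first receipt. Returns (informed nodes, number of broadcasts)."""
    informed = {source}
    broadcasts = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        broadcasts += 1            # node u performs one wireless broadcast
        for v in adj[u]:           # all neighbours overhear it
            if v not in informed:
                informed.add(v)
                queue.append(v)
    return informed, broadcasts

# Hypothetical 5-node topology (node: neighbours)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
reached, cost = flood(adj, 0)
```

Flooding always costs one broadcast per node; the abstract's claim is that smarter protocols can inform every node with fewer than n broadcasts.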
Abstract:
Master's degree in Management and Assessment of Health Technologies
Abstract:
Nowadays, given the remarkable growth of the mobile device market, client-server applications must take mobile device limitations into account. In this paper we discuss what may be the most reliable and fastest way to exchange information between a server and an Android mobile application. This is an important issue because, with a responsive application, the user experience is more enjoyable. We present a study that tests and evaluates two data transfer protocols, sockets and HTTP, and three data serialization formats (XML, JSON and Protocol Buffers), using different environments and mobile devices, to determine which is the most practical and fastest to use.
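A small, self-contained way to compare two of the serialization formats studied is to encode the same record with Python's standard library and compare payload sizes (Protocol Buffers needs a third-party package, so it is left out of this sketch; the record fields are invented).

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical record exchanged between server and mobile client
record = {"id": 42, "name": "sensor-A", "lat": 38.7369, "lon": -9.1395}

# JSON: one stdlib call
json_bytes = json.dumps(record).encode("utf-8")

# XML: build an equivalent element tree with one child per field
root = ET.Element("record")
for key, value in record.items():
    child = ET.SubElement(root, key)
    child.text = str(value)
xml_bytes = ET.tostring(root, encoding="utf-8")
```

XML's paired tags make its payload larger for the same data, which is one reason compact formats matter on bandwidth-constrained mobile links; actual speed still has to be measured on devices, as the paper does.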
Abstract:
Scientific dissertation submitted for the degree of Master in Civil Engineering, specialization in Buildings
Abstract:
The goal of this paper is to show that the DGPS data Internet service we designed and developed provides campus-wide real-time access to Differential GPS (DGPS) data and thus supports precise outdoor navigation. First, we describe the developed distributed system in terms of architecture (a three-tier client/server application), services provided (real-time DGPS data transport from remote DGPS sources and campus-wide data dissemination) and transmission modes implemented (raw and frame mode over TCP and UDP). Then we present and discuss the results obtained and, finally, draw some conclusions.
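Frame mode over TCP requires delimiting messages within the byte stream. A common technique, assumed here for illustration only (the paper does not specify its wire format), is a length prefix per frame:

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    """Prefix a correction payload with its 4-byte big-endian length."""
    return struct.pack("!I", len(payload)) + payload

def decode_frames(stream: bytes):
    """Split a received byte stream back into complete frames,
    ignoring any incomplete frame left at the tail."""
    frames, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        if offset + 4 + length > len(stream):
            break                        # incomplete trailing frame
        frames.append(stream[offset + 4: offset + 4 + length])
        offset += 4 + length
    return frames

# Two hypothetical DGPS correction payloads concatenated, as TCP delivers them
stream = encode_frame(b"RTCM-1") + encode_frame(b"RTCM-22")
```

Raw mode, by contrast, would forward the byte stream unframed and leave message boundaries to the consumer.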
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. In section 1.1 we give an overview of the methodology of a data mining project and its main algorithms. In section 1.2 an introduction to proteins and their supporting file formats is outlined. The chapter concludes with section 1.3, which defines the main problem we intend to address in this work: determining whether an amino acid is exposed or buried in a protein, in a discrete way (i.e., not continuous), for five exposure levels: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supported this work is presented. Namely, we describe the process of loading data from the Protein Data Bank, DSSP and SCOP. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. We also introduce the Data Mining Table Creator, a program developed to produce the data mining tables required for this problem. In the third chapter the results obtained are analyzed with statistical significance tests. Initially the several classifiers used (neural networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at stake. We also compare the influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility. Then we summarize the main results achieved and, finally, discuss possible future work. The fifth and last chapter consists of appendices. Appendix A has the schema of the database that supported this thesis.
Appendix B has a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis, which allows the reconstruction of the present work.
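The amino acid window size parameter studied in the third chapter can be illustrated with a sliding-window feature extractor; the sequence, window size and padding symbol below are arbitrary examples, not the thesis's actual encoding.

```python
def windows(sequence, size, pad="X"):
    """Fixed-size windows centred on each residue; pad symbols fill the
    sequence ends. Each window is the feature vector from which the
    central residue would be classified as buried or exposed."""
    half = size // 2
    padded = pad * half + sequence + pad * half
    return [padded[i:i + size] for i in range(len(sequence))]

# Toy six-residue sequence, window size 5
wins = windows("ACDEFG", 5)
```

Larger windows give the classifier more sequence context per residue at the cost of wider, sparser feature tables, which is why window size is worth comparing experimentally.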
Abstract:
Sensor/actuator networks promise to extend automated monitoring and control into industrial processes. Avionics is one of the prominent fields that can gain greatly from dense sensor/actuator deployments. An aircraft with a smart sensing skin would fulfill the vision of affordability and environmental friendliness by reducing fuel consumption. Achieving these properties is possible by providing an approximate representation of the air flow across the body of the aircraft and suppressing the detected aerodynamic drag. To the best of our knowledge, obtaining an accurate representation of the physical entity is one of the most significant challenges that still exist for dense sensor/actuator networks. This paper offers an efficient way to acquire sensor readings from a very large sensor/actuator network located in a small area (a dense network). It presents LIA, a Linear Interpolation Algorithm that provides two important contributions. First, it demonstrates the effectiveness of employing a transformation matrix to mimic the environmental behavior. Second, it renders a smart solution for updating the previously defined matrix through a procedure called the learning phase. Simulation results reveal that the average relative error with the LIA algorithm can be reduced by as much as 60% by exploiting the transformation matrix.
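A toy stand-in for the transformation-matrix idea is a fixed interpolation matrix that maps a few sensor readings to a dense 1-D field estimate; LIA's actual matrix and its learning phase are not reproduced here, and all positions and values below are invented.

```python
def interpolation_matrix(sensor_pos, n):
    """Build an n x m matrix W so that field = W @ readings reconstructs a
    dense 1-D field from m sparse sensors by linear interpolation."""
    m = len(sensor_pos)
    W = [[0.0] * m for _ in range(n)]
    for i in range(n):
        if i <= sensor_pos[0]:
            W[i][0] = 1.0                    # clamp before the first sensor
        elif i >= sensor_pos[-1]:
            W[i][m - 1] = 1.0                # clamp after the last sensor
        else:
            j = max(k for k in range(m) if sensor_pos[k] <= i)
            if sensor_pos[j] == i:
                W[i][j] = 1.0                # position coincides with a sensor
            else:
                w = (i - sensor_pos[j]) / (sensor_pos[j + 1] - sensor_pos[j])
                W[i][j], W[i][j + 1] = 1.0 - w, w
    return W

def apply(W, readings):
    """Plain matrix-vector product: dense field from sparse readings."""
    return [sum(wij * r for wij, r in zip(row, readings)) for row in W]

# Three sensors at positions 0, 4, 8 over a field of 9 positions
W = interpolation_matrix([0, 4, 8], 9)
field = apply(W, [0.0, 4.0, 0.0])
```

Because the matrix is precomputed, each reconstruction is a single cheap matrix-vector product; a learning phase such as LIA's would adjust the matrix entries whenever predictions drift from observed readings.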