931 results for Computational Intelligence in data-driven and hybrid Models and Data Analysis
Abstract:
This dissertation addressed two broad problems in international macroeconomics and conflict analysis. The first chapter looked at the behavior of the exchange rate and its interaction with industry-level tradable goods prices for three countries: the USA, the UK, and Japan. This question has important monetary policy implications. I computed to what extent changes in the exchange rate affected prices of consumer, producer, and export goods, and I also studied the timing of these price changes. My results, based on thirty-four industrial prices for the USA, the UK, and Japan, supported the view that changes in exchange rates significantly affect prices of industrial and consumer goods. They also provided insight into the underlying economic process that led to changes in relative prices. In the second chapter, I explored the predictability of future inflation by incorporating shocks to exchange rates and clearly specifying the transmission mechanisms that link exchange rates to industry-level consumer and producer prices. Employing a variety of linear and state-of-the-art nonlinear models, I also predicted growth rates of future prices. Comparing the levels of inflation obtained from these approaches showed the superiority of the structural model incorporating the exchange rate pass-through effect. The third chapter investigated the economic motives for conflict, manifested by rebellion and civil war, for seventeen Latin American countries. Based on the analytical framework of Garfinkel, Skaperdas and Syropoulos (2004), I employed ordinal regressions and Markov switching for a panel of seventeen countries to identify the trade and openness factors responsible for conflict occurrence and intensity. The results suggested that increased trade openness reduced high-intensity domestic conflicts, but that overdependence on agricultural exports, along with a lack of income-earning opportunities, led to more conflicts. Thereafter, using the Cox proportional hazards model, I studied "conflict duration" and found that over-reliance on agricultural exports explained a major part of the length of conflicts, in addition to various socio-political factors.
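As a hedged illustration of the duration analysis described above, the sketch below fits a Cox proportional hazards model to a hypothetical panel of conflict spells; the file name and the columns (duration, ended, agri_export_share, trade_openness, gdp_per_capita) are assumptions made for the example, not the dissertation's actual variables.

```python
# Sketch: Cox proportional hazards model for conflict duration.
# Assumes a hypothetical CSV with one row per conflict spell.
import pandas as pd
from lifelines import CoxPHFitter

spells = pd.read_csv("conflict_spells.csv")  # hypothetical file
# Expected columns: duration (years), ended (1 = conflict terminated,
# 0 = right-censored), plus covariates such as agri_export_share,
# trade_openness, gdp_per_capita.

cph = CoxPHFitter()
cph.fit(
    spells[["duration", "ended", "agri_export_share",
            "trade_openness", "gdp_per_capita"]],
    duration_col="duration",
    event_col="ended",
)
cph.print_summary()  # hazard ratios for each covariate
```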
Abstract:
This study critically analyzes the historical role and influence of multinational drug corporations, and of multinational corporations in general, the U.S. government, and the Canadian state in negotiating the global recognition of Intellectual Property Rights (IPR) under GATT/NAFTA. This process began in 1969 when the Liberal government, in response to high prices for brand-name drugs, amended the Patent Act to introduce compulsory licensing by reducing monopoly protection from 20 to seven years. Although the financial position of the multinational drug industry was not affected, it campaigned vigorously to change the 1969 legislation. In 1987, the Patent Act was amended to extend protection to 10 years as a condition for free trade talks with the U.S. Nonetheless, the drug industry was not satisfied and accused Canada of providing a bad example to other nations. Therefore, it continued to campaign for global recognition of IPR laws under GATT. Following the conclusion of the GATT Trade-Related Aspects of Intellectual Property Rights agreement (TRIPS) in 1991, the multinational drug industry and the American government, to the surprise of many, were still not satisfied and sought to implement harsher conditions under NAFTA. The Progressive Conservative government readily agreed, without any objections or consideration for the social consequences. As a result, Bill C-91 was introduced; it abandoned compulsory licenses and was made retroactive to December 21, 1991. It is the contention of this thesis that the economic survival of multinational corporations on a global scale depends on the role and functions of the modern state. Similarly, the existence of the state depends on the ideological-political and socioeconomic assistance it gives to multinational corporations on a national and international scale. This dialectical relation between the state and multinational corporations is explored in our theoretical and historical analysis of their role in public policy.
Abstract:
Congresses and conferences
Abstract:
Discusses the approach taken in Phase 1 of the three-phase project Folktales, Facets and FRBR [funded by a grant from OCLC/ALISE]. The project works with the special collection of folktales at the Center for Children's Books (CCB) at the University of Illinois at Urbana-Champaign, and with the scholars who use this collection; it aims to enhance the effectiveness and efficiency of folktale access through a deep understanding of user needs. Phase 1 included facet analysis of the bibliographic records for a sample of 100 folktale books in the CCB, and task analysis of interviews with four CCB-affiliated faculty. Describes the information tasks, information-seeking obstacles, and desired features for a folktale discovery and access tool for this initial group of scholarly users.
Abstract:
This thesis is composed of a collection of works written in the period 2019-2022, whose aim is to find Artificial Intelligence (AI) and Machine Learning methodologies to detect and classify patterns and rules in argumentative and legal texts. We define our approach as "hybrid", since we aimed at designing combinations of symbolic and sub-symbolic AI, involving both "top-down" structured knowledge and "bottom-up" data-driven knowledge. A first group of works is dedicated to the classification of argumentative patterns. Following the Waltonian model of argument and the related theory of Argumentation Schemes, these works focused on the detection of argumentative support and opposition, showing that argumentative evidence can be classified at fine-grained levels without resorting to highly engineered features. To show this, our methods involved not only traditional approaches such as TF-IDF, but also novel methods based on Tree Kernel algorithms. After the encouraging results of this first phase, we explored some emerging methodologies promoted by actors such as Google, which have deeply changed NLP since 2018-19, namely Transfer Learning and language models. These new methodologies markedly improved our previous results, providing us with better-performing NLP tools. Using Transfer Learning, we also performed a Sequence Labelling task to recognize the exact span of argumentative components (i.e., claims and premises), thus connecting portions of natural language to portions of arguments (i.e., to the logical-inferential dimension). The last part of our work was dedicated to the use of Transfer Learning methods for the detection of rules and deontic modalities. In this case, we explored a hybrid approach which combines structured knowledge coming from two LegalXML formats (i.e., Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures.
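As a hedged sketch of the kind of TF-IDF baseline mentioned above (not the thesis's actual pipeline), the following trains a TF-IDF plus linear SVM classifier to label sentences as argumentative support or opposition; the example sentences and labels are invented for illustration.

```python
# Sketch: TF-IDF baseline for support/opposition classification.
# The tiny dataset below is invented; a real setup would load an
# annotated argumentation corpus instead.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

sentences = [
    "This evidence clearly strengthens the main claim.",
    "The cited study directly contradicts the conclusion.",
    "Expert testimony supports the proposed interpretation.",
    "However, the data undermine the author's premise.",
]
labels = ["support", "opposition", "support", "opposition"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
clf.fit(sentences, labels)
print(clf.predict(["New findings reinforce this argument."]))
```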
Abstract:
The present dissertation is entitled "Development and Application of Computational Methodologies in Qualitative Modeling". It encompasses the diverse projects that were undertaken during my time as a PhD student. Rather than a systematic implementation of a framework defined a priori, this thesis should be considered as an exploration of the methods that can help us infer the blueprint of regulatory and signaling processes. This exploration was driven by concrete biological questions rather than theoretical investigation. Even though the projects involved divergent systems (gene regulatory networks of the cell cycle, signaling networks in lung cells) as well as organisms (fission yeast, budding yeast, rat, human), our goals were complementary and coherent. The main project of the thesis is the modeling of the Septation Initiation Network (SIN) in S. pombe. Cytokinesis in fission yeast is controlled by the SIN, a protein kinase signaling network that uses the spindle pole body as a scaffold. In order to describe the qualitative behavior of the system and predict unknown mutant behaviors, we decided to adopt a Boolean modeling approach. In this thesis, we report the construction of an extended Boolean model of the SIN, comprising most SIN components and regulators as individual, experimentally testable nodes. The model uses CDK activity levels as control nodes for the simulation of SIN-related events in different stages of the cell cycle. The model was optimized using single knock-out experiments of known phenotypic effect as a training set, and was able to correctly predict a double knock-out test set. Moreover, the model has made in silico predictions that have been validated in vivo, providing new insights into the regulation and hierarchical organization of the SIN. Another cell cycle related project that is part of this thesis was to create a qualitative, minimal model of cyclin interplay in S. cerevisiae. Clb proteins in budding yeast present a characteristic, sequential activation and decay during the cell cycle, commonly referred to as Clb waves. This event is coordinated with the inverse activation curve of Sic1, which has an inhibitory role in the system. To generate minimal qualitative models that can explain this phenomenon, we selected well-defined experiments and constructed all possible minimal models that, when simulated, reproduce the expected results. The models were filtered using standardized qualitative ODE simulations; only the ones reproducing the wave-like phenotype were kept. The set of minimal models can be used to suggest regulatory relations among the participating molecules, which can subsequently be tested experimentally. Finally, during my PhD I participated in the SBV Improver Challenge. The goal was to infer species-specific (human and rat) networks using phosphoprotein, gene expression and cytokine data, together with a reference network provided as prior knowledge. Our solution to the challenge took third place; the approach used is explained in detail in the final chapter of the thesis.
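A minimal sketch of the kind of synchronous Boolean simulation described above; the three-node network and its update rules are invented placeholders, not the actual SIN model.

```python
# Sketch: synchronous Boolean network simulation.
# The nodes and update rules below are invented placeholders and do
# not correspond to the published SIN model.
def step(state):
    """One synchronous update of all nodes."""
    return {
        # CDK activity is treated as an external control input here.
        "CDK": state["CDK"],
        "KinaseA": state["CDK"] and not state["Inhibitor"],
        "Inhibitor": not state["KinaseA"],
    }

state = {"CDK": True, "KinaseA": False, "Inhibitor": True}
seen = []
while state not in seen:          # iterate until a fixed point or cycle
    seen.append(state)
    state = step(state)
print("attractor reached:", state)
```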
Abstract:
Background. Current models of concomitant and intermittent strabismus, heterophoria, and convergence and accommodation anomalies are either theoretically complex or incomplete. We propose an alternative and more practical way to conceptualize clinical patterns. Methods. In each of three hypothetical scenarios (normal; high AC/A and low CA/C ratios; low AC/A and high CA/C ratios) there can be a disparity-biased or blur-biased "style", despite identical ratios. We calculated a disparity bias index (DBI) to reflect these biases. We suggest how clinical patterns fit these scenarios and provide early objective data from small illustrative clinical groups. Results. Normal adults and children showed disparity bias (adult DBI 0.43, 95% CI 0.36-0.50; child DBI 0.20, 95% CI 0.07-0.31; p=0.001). Accommodative esotropes showed less disparity bias (DBI 0.03). In the high AC/A and low CA/C scenario, early presbyopes had a mean DBI of 0.17 (95% CI 0.06-0.28), compared with a DBI of -0.31 in convergence-excess esotropes. In the low AC/A and high CA/C scenario, near exotropes had a mean DBI of 0.27, while we predict that non-strabismic, non-amblyopic hyperopes with good vision without spectacles will show lower DBIs. Disparity bias ranged between 1.25 and -1.67. Conclusions. Establishing disparity or blur bias, together with knowing whether convergence to target demand exceeds accommodation or vice versa, explains clinical patterns more effectively than AC/A and CA/C ratios alone. Excessive bias, or inflexibility in near-cue use, increases the risk of clinical problems. We suggest clinicians look carefully at the details of accommodation and convergence changes induced by lenses, dissociation and prisms, and use these to plan treatment in relation to the model.
Abstract:
This work is focused on the study of saltwater intrusion in coastal aquifers, and in particular on the development of conceptual schemes to evaluate the associated risk. Saltwater intrusion depends on different natural and anthropogenic factors, both exhibiting strongly random behaviour, that should be considered for optimal management of the territory and its water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique based on Polynomial Chaos Expansion, which provides an accurate description of the model response without a large computational burden. When the assumptions of classical analytical models are not respected, as happens in several applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. A model can then be defined on the basis of connections between the system's state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
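As a hedged illustration of the global sensitivity analysis step, the sketch below estimates first-order Sobol indices by Monte Carlo (a Saltelli-style pick-and-freeze scheme) for a placeholder function standing in for the interface-position model; the toe_position function, its parameters, and the uniform input ranges are assumptions, not the thesis's actual formulation.

```python
# Sketch: Monte Carlo estimate of first-order Sobol indices.
# `toe_position` is a toy stand-in for the sharp-interface model output
# (e.g. the landward position of the saltwater toe).
import numpy as np

rng = np.random.default_rng(0)

def toe_position(x):
    # x columns: recharge, hydraulic conductivity, pumping rate (toy).
    recharge, conductivity, pumping = x.T
    return pumping * conductivity / (recharge + 0.1)

n, d = 20000, 3
A = rng.uniform(0.1, 1.0, size=(n, d))
B = rng.uniform(0.1, 1.0, size=(n, d))
fA, fB = toe_position(A), toe_position(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # swap only the i-th input
    fABi = toe_position(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var   # Saltelli-style estimator
    print(f"first-order Sobol index S_{i}: {S_i:.2f}")
```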
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key areas that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. It becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, in both the experimental and computational realms, and scientists strive to adopt cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and a proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the use of device simulations to train machine learning models; and dealing with scattered experimental data and using them to discover new patterns.
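A hedged sketch of the "features plus machine learning" idea mentioned above: it fits a random-forest regressor on a hypothetical table of molecular descriptors to predict a target property. The file name, descriptor columns and target are invented for the example; the actual featurization used in the work differs.

```python
# Sketch: train a surrogate model on molecular descriptors.
# The CSV, descriptor columns and target property are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

data = pd.read_csv("molecular_descriptors.csv")  # hypothetical file
X = data[["molecular_weight", "n_rings", "dipole_moment", "homo_lumo_gap"]]
y = data["target_property"]                      # e.g. a computed energy

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```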
Abstract:
The arteriovenous fistula (AVF) is characterized by enhanced blood flow and is the most widely used vascular access for chronic haemodialysis (Sivanesan et al., 1998). A large proportion of late AVF failures are related to local haemodynamics (Sivanesan et al., 1999a). As in the AVF, blood flow dynamics plays an important role in the growth, rupture, and surgical treatment of aneurysms. Several techniques have been used to study the flow patterns in simplified models of vascular anastomoses and aneurysms. In the present investigation, Computational Fluid Dynamics (CFD) is used to analyze the flow patterns in the AVF and in an aneurysm, using velocity waveforms obtained from experimental surgeries in dogs (Galego et al., 2000) and intra-operative blood flow recordings of patients with radiocephalic AVF (Sivanesan et al., 1999b) for the former, and physiological pulses (Aires, 1991) for the latter. The flow patterns in the AVF for the dog and patient surgery data are qualitatively similar. Perturbation, recirculation and separation zones appeared during the cardiac cycle and were intensified in the diastolic phase for both the AVF and aneurysm models. The wall shear stress values observed in the AVF and aneurysm models oscillated in a range that can both damage endothelial cells and promote the development of atherosclerosis.
Abstract:
This paper proposes a regression model based on the modified Weibull distribution, which can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the model parameters. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, for different parameter settings, sample sizes and censoring percentages, various simulations are performed, and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and a model check based on the modified deviance residual are performed to select appropriate models.
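As a hedged sketch of censored maximum likelihood for the modified Weibull distribution, the code below maximizes the log-likelihood of observed and right-censored lifetimes, assuming the common parameterization S(t) = exp(-a t^b e^{lambda t}). It illustrates the estimation idea only, not the paper's regression formulation with covariates, and the data are simulated placeholders.

```python
# Sketch: censored MLE for the modified Weibull distribution with
# S(t) = exp(-a * t**b * exp(lam * t))  (a common parameterization;
# the paper's regression model adds covariates on top of this).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = rng.weibull(1.5, size=200) * 2.0        # placeholder lifetimes
delta = rng.random(200) < 0.8               # 1 = observed, 0 = censored

def neg_loglik(params, t, delta):
    log_a, log_b, lam = params              # log-transform keeps a, b > 0
    a, b = np.exp(log_a), np.exp(log_b)
    if np.any(b + lam * t <= 0):            # keep the density well defined
        return np.inf
    log_S = -a * t**b * np.exp(lam * t)
    log_f = np.log(a) + (b - 1) * np.log(t) + np.log(b + lam * t) \
            + lam * t + log_S
    return -(delta * log_f + (1 - delta) * log_S).sum()

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.1], args=(t, delta),
               method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x[:2])
print("a:", a_hat, "b:", b_hat, "lambda:", res.x[2])
```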
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, with the times grouped into k intervals so that ties are eliminated; data modeling is thus performed using discrete lifetime regression models. The model parameters are estimated by maximum likelihood and jackknife methods. To detect influential observations in the proposed models, we use diagnostic measures based on case deletion, termed global influence, and measures based on small perturbations in the data or in the model, referred to as local influence; the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models.
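A hedged sketch of the discrete-time (grouped) survival idea described above: lifetimes are expanded into person-interval records and an interval-specific hazard is fitted with a logistic link via logistic regression. The link choice, variable names and simulated data are illustrative assumptions; the paper additionally considers other link functions and censoring-time effects.

```python
# Sketch: discrete-time survival regression on person-interval data.
# A logit link is used for illustration; probit or complementary
# log-log links are common alternatives for grouped lifetimes.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, k = 300, 5                                  # subjects, intervals
x = rng.normal(size=n)                         # one covariate
interval = rng.integers(1, k + 1, size=n)      # interval of exit
event = rng.random(n) < 0.7                    # 1 = failure, 0 = censored

rows = []
for i in range(n):
    for j in range(1, interval[i] + 1):        # one row per interval at risk
        fail = int(event[i] and j == interval[i])
        rows.append({"interval": j, "x": x[i], "fail": fail})
pp = pd.DataFrame(rows)

# Interval dummies play the role of a baseline hazard.
X = pd.get_dummies(pp["interval"], prefix="int").assign(x=pp["x"])
model = LogisticRegression(max_iter=1000).fit(X, pp["fail"])
print(dict(zip(X.columns, model.coef_[0].round(2))))
```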
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44(2), 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure, and a naive implementation can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
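As a hedged illustration of the binned-data setting, the sketch below runs EM for a two-component bivariate Gaussian mixture on a grid of bin counts, approximating each bin integral by the density at the bin midpoint times the bin area. This coarse quadrature is a simplified stand-in for the exact binned/truncated scheme of McLachlan and Jones, and the grid and data are simulated placeholders.

```python
# Sketch: EM for a two-component bivariate Gaussian mixture fitted to
# binned counts, with bin integrals approximated at bin midpoints.
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 10x10 grid of bin counts over [0, 10) x [0, 10).
rng = np.random.default_rng(3)
edges = np.linspace(0.0, 10.0, 11)
samples = np.vstack([rng.normal([3, 3], 0.8, size=(300, 2)),
                     rng.normal([7, 6], 1.0, size=(200, 2))])
counts, _, _ = np.histogram2d(samples[:, 0], samples[:, 1], bins=[edges, edges])

mids = (edges[:-1] + edges[1:]) / 2
mx, my = np.meshgrid(mids, mids, indexing="ij")
points = np.column_stack([mx.ravel(), my.ravel()])   # bin midpoints
w = counts.ravel()                                   # counts per bin
area = np.diff(edges)[0] ** 2

pi = np.array([0.5, 0.5])                            # initial guesses
mu = np.array([[2.0, 2.0], [8.0, 8.0]])
cov = [np.eye(2), np.eye(2)]

for _ in range(50):
    # E-step: responsibilities of each component for each bin.
    dens = np.column_stack([
        pi[k] * multivariate_normal(mu[k], cov[k]).pdf(points) * area
        for k in range(2)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates using bin counts as weights.
    for k in range(2):
        wk = w * resp[:, k]
        nk = wk.sum()
        mu[k] = (wk[:, None] * points).sum(axis=0) / nk
        diff = points - mu[k]
        cov[k] = (wk[:, None, None] *
                  np.einsum("ni,nj->nij", diff, diff)).sum(axis=0) / nk
        pi[k] = nk / w.sum()

print("estimated means:\n", mu.round(2))
```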
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
To investigate their role in receptor coupling to G(q), we mutated all basic amino acids and some conserved hydrophobic residues of the cytosolic surface of the alpha(1b)-adrenergic receptor (AR). The wild-type and mutated receptors were expressed in COS-7 cells and characterized for their ligand binding properties and their ability to increase inositol phosphate accumulation. The experimental results have been interpreted in the context of both an ab initio model of the alpha(1b)-AR and a new homology model built on the recently solved crystal structure of rhodopsin. Among the twenty-three basic amino acids mutated, only mutations of three (Arg(254) and Lys(258) in the third intracellular loop, and Lys(291) at the cytosolic extension of helix 6) markedly impaired receptor-mediated inositol phosphate production. Additionally, mutations of two conserved hydrophobic residues, Val(147) and Leu(151) in the second intracellular loop, had significant effects on receptor function. The functional analysis of the receptor mutants, in conjunction with the predictions of molecular modeling, supports the hypothesis that Arg(254) and Lys(258), as well as Leu(151), are directly involved in receptor-G protein interaction and/or receptor-mediated activation of the G protein. In contrast, the residues belonging to the cytosolic extensions of helices 3 and 6 play a predominant role in the activation process of the alpha(1b)-AR. These findings contribute to the delineation of the molecular determinants of the alpha(1b)-AR/G(q) interface.