760 results for Lipschitz trivial
Abstract:
Current software development often relies on non-trivial coordination logic for combining autonomous services, possibly running on different platforms. As a rule, however, such a coordination layer is strongly woven into the application at the source-code level. Therefore, its precise identification becomes a major methodological (and technical) problem and a challenge to any program understanding or refactoring process. The approach introduced in this paper resorts to slicing techniques to extract coordination data from source code. Such data are captured in a specific dependency graph structure from which a coordination model can be recovered, either in the form of an Orc specification or as a collection of code fragments corresponding to the identification of typical coordination patterns in the system. Tool support is also discussed.
Abstract:
Clone detection is well established for imperative programs. It works mostly at the statement level and is therefore ill-suited for functional programs, whose main constituents are expressions and types. In this paper we introduce clone detection for functional programs using a new intermediate program representation, dubbed Functional Control Tree. We extend clone detection to the identification of non-trivial functional program clones based on the recursion patterns from the so-called Bird-Meertens formalism.
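The abstract does not reproduce the Functional Control Tree itself, but the core idea of tree-based clone detection can be sketched: fingerprint every subtree of an intermediate representation and report fingerprints seen more than once. The Node class and the string fingerprint below are illustrative assumptions, not the authors' data structure.

```python
from collections import defaultdict

class Node:
    """Illustrative node of a tree-shaped program representation."""
    def __init__(self, label, children=()):
        self.label = label              # e.g. "fold", "map", a variable, ...
        self.children = list(children)

def fingerprint(node):
    # Identical subtrees yield identical strings.
    return "(" + " ".join([node.label] + [fingerprint(c) for c in node.children]) + ")"

def find_clones(roots, min_size=2):
    """Group structurally identical subtrees occurring more than once."""
    buckets = defaultdict(list)
    def visit(node):
        size = 1 + sum(visit(c) for c in node.children)
        if size >= min_size:
            buckets[fingerprint(node)].append(node)
        return size
    for root in roots:
        visit(root)
    return [grp for grp in buckets.values() if len(grp) > 1]

# (fold + (map square xs)) occurs twice, so it is reported as a clone,
# as is its shared subtree (map square xs).
t1 = Node("fold", [Node("+"), Node("map", [Node("square"), Node("xs")])])
t2 = Node("fold", [Node("+"), Node("map", [Node("square"), Node("ys")])])
t3 = Node("fold", [Node("+"), Node("map", [Node("square"), Node("xs")])])
for group in find_clones([t1, t2, t3]):
    print(len(group), fingerprint(group[0]))
```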
Abstract:
Current software development relies increasingly on non-trivial coordination logic for combining autonomous services, often running on different platforms. As a rule, however, in typical non-trivial software systems, such a coordination layer is strongly woven into the application at the source-code level. Therefore, its precise identification becomes a major methodological (and technical) problem whose importance cannot be overestimated in any program understanding or refactoring process. Open access to source code, as granted in OSS certification, provides an opportunity for the development of methods and technologies to extract the relevant coordination information from source code. This paper is a step in this direction, combining a number of program analysis techniques to automatically recover coordination information from legacy code. Such information is then expressed as a model in Orc, a general-purpose orchestration language.
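A minimal sketch of the slicing idea behind the two abstracts above, assuming a program dependency graph is already available (built here with the networkx library) and that calls to remote services mark the coordination layer: the coordination-relevant code is approximated by the backward slice from those calls.

```python
import networkx as nx

# Toy program dependency graph: an edge u -> v means v depends on u.
g = nx.DiGraph()
g.add_edges_from([
    ("read_config", "build_request"),
    ("build_request", "call_serviceA"),   # remote call: coordination
    ("call_serviceB", "merge_results"),   # remote call: coordination
    ("call_serviceA", "merge_results"),
    ("log_locally", "exit"),              # purely local computation
])
COORDINATION = {"call_serviceA", "call_serviceB"}

# Backward slice: the coordination statements plus everything they depend on.
slice_nodes = set(COORDINATION)
for call in COORDINATION:
    slice_nodes |= nx.ancestors(g, call)
print(sorted(slice_nodes))
# ['build_request', 'call_serviceA', 'call_serviceB', 'read_config']
```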
Abstract:
This article addresses the possible contribution of the new economic sociology (NSE) to a sociology of sustainable development. Developed from the early 1980s onwards, the NSE seeks to account for the economy as a social totality, which in itself represents an opening towards sustainable development. Moreover, by considering the economy through the prism of institutions, organizations, networks and forms of governance, it re-embeds the economy within society. This would be trivial if such a conception were already acknowledged by our societies and even by our disciplines. As our documentary research proceeds in two stages, the first part deals with the emergence of the NSE, while the second examines several approaches that propose reconstructing the economy as a sociological object. In conclusion, we ask what can be drawn from these approaches with a view to a better understanding of sustainable development.
Abstract:
Sticky-information monetary models have been used in the macroeconomic literature to explain some of the observed features of inflation dynamics. In this paper, we explore the consequences of relaxing the rational expectations assumption usually adopted in this type of model; in particular, by considering expectations formed through adaptive learning, it is possible to arrive at results other than the trivial convergence to a fixed-point long-term equilibrium. The results involve the possibility of endogenous cyclical motion (periodic and aperiodic), which emerges essentially in scenarios of hyperinflation. In low-inflation settings, the introduction of learning implies a less severe impact of monetary shocks which, nevertheless, tend to last for additional time periods relative to the pure perfect-foresight setup.
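A minimal sketch of the adaptive-learning mechanism in a toy linear expectational model (the paper's sticky-information setup is richer; coefficients here are illustrative): agents revise their inflation forecast with a constant-gain rule, and realized inflation depends on that forecast.

```python
# pi_t = a + b * E_t[pi]                           realized inflation
# E_{t+1}[pi] = E_t[pi] + gain * (pi_t - E_t[pi])  constant-gain learning
a, b, gain = 2.0, 0.5, 0.3
expectation = 0.0
for t in range(15):
    inflation = a + b * expectation
    expectation += gain * (inflation - expectation)
    print(f"t={t:2d}  pi={inflation:6.3f}  E[pi]={expectation:6.3f}")
# With |b| < 1 the path converges to the fixed point a/(1-b) = 4; the
# cyclical dynamics discussed above require nonlinear specifications.
```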
Abstract:
Recent legislation has created the context needed for the widespread implementation of compulsory English language teaching to younger children in our country. Today, an education without languages is an amputated, incomplete education, since generalized multilingualism is Europe's future. Beyond the cultural, social and economic grounds for their inclusion in the curricula, language learning emphasizes the personal and social development fostered by recognizing and valuing other particular ways of interpreting the universal, rather than merely equipping students with a device of professional or touristic utility. Within this perspective, foreign language (FL) learning fully serves the aims of a multicultural and multilingual educational project, and education for intercultural literacy based on FL learning has the role of reconciling school with social life as it really is, complex and plural, without producing or reinforcing phenomena of marginality, xenophobia or exclusion. This requires adequately prepared teachers, that is, teachers with solid linguistic-communicative and pedagogical-didactic training. FL teaching methodologies in the 1st Cycle of Basic Education go beyond a trivial rereading of the legacy of language didactics, venturing into the construction of an admirably multifaceted paradigm and language, in which the high degree of integration of countless components and stimuli redefines the scope of learning a FL, opening up entirely new and surprising perspectives, while observing the absolute need for vertical articulation of learning across the first two cycles of Basic Education.
Abstract:
This work aims to create models, resulting from the application of data mining algorithms and techniques, that can support a decision-support system useful to capital market investors. Capital markets, in this particular case the stock exchange, are systems that generate large volumes of information daily, whose analysis is complex and non-trivial for a non-professional investor. Many variables influence the decision to be made about a stock portfolio (sell, hold, buy), and these decisions aim to maximize profit. Thus, a system that assists investors in the analysis task is a clear asset. A professional's decisions are normally based on two types of analysis: Fundamental and Technical. Fundamental Analysis focuses on indicators of a company's "financial health", based on the information the company discloses. Technical Analysis, on the other hand, which is the focus of this work, rests on the observation of statistical indicators built from the stock's past behaviour. These analyses give investors some information about the stock's trend (rising or falling). Both require good knowledge of the problem domain and considerable time, which makes them barely accessible, especially to small investors. With the aim of democratizing access to this type of investment, this work uses data mining algorithms, such as classification trees and neural networks, as the basis for building a support model, helping to overcome the obstacles that may prevent the ordinary investor from entering the stock exchange, namely the time spent on the analysis and its complexity, among others. To create models capable of meeting expectations, several experiments are carried out with various algorithms and data sets, in search of the best fit for the problem. It should nevertheless be noted that the investment decision always rests with the investor, since the model is only meant to feed a support system.
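As a hedged illustration of the kind of model described above, the sketch below trains a scikit-learn classification tree on two simple technical indicators computed from a synthetic price series. Indicator choice, labels and data are assumptions for illustration, not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 400))   # synthetic price series

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

ma5, ma20 = moving_average(prices, 5), moving_average(prices, 20)
n = len(ma20)
# Features: short/long moving-average ratio and 1-day price change.
X = np.column_stack([ma5[-n:] / ma20, np.diff(prices)[-n:]])
# Label: did the price rise the next day? (buy=1 / sell=0; "hold" omitted)
y = (np.diff(prices, append=prices[-1])[-n:] > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=3).fit(X[:-50], y[:-50])
print("held-out accuracy:", clf.score(X[-50:], y[-50:]))
```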
Abstract:
The World Wide Web was envisioned as a network of interlinked hypertext documents creating an information space where humans and machines could communicate. However, the information on the traditional Web was, and still is, stored in an unstructured way, so that only humans can consume it conveniently. Consequently, searching for information on the syntactic Web is a task mainly performed by humans, and as such it is not always easy to accomplish. In this context, it became essential to evolve towards a more structured and more meaningful Web, where information is given well-defined meaning, enabling cooperation between humans and machines. This Web is usually referred to as the Semantic Web. Moreover, the Semantic Web is fully achievable only if data from different sources are linked, thus creating a repository of Linked Open Data (LOD). With the emergence of this new Web of Linked (Open) Data (i.e. the Semantic Web), new opportunities and challenges have arisen. Question Answering (QA) over semantic information is currently an active research area that tries to take advantage of Semantic Web technologies to improve the question-answering task. The main goal of the World Search project is to exploit the Semantic Web to create mechanisms that support users of specific application domains in answering complex questions based on data from different repositories. However, an evaluation of the state of the art shows that existing applications do not support users in answering complex questions. Accordingly, the work presented in this document focuses on studying and developing methodologies and processes that help users find exact, correct answers to complex questions that cannot be answered with traditional systems. This includes: (i) overcoming users' difficulty in visualizing the schema underlying the knowledge repositories; (ii) bridging the gap between the natural language expressed by users and the (formal) language understood by the repositories; (iii) processing and returning relevant information that appropriately answers users' questions. To this end, a set of functionalities considered necessary to support users in answering complex questions is identified, together with a formal description of those functionalities. The proposal is materialized in a prototype implementing the functionalities previously described. The experiments carried out with the developed prototype show that users effectively benefit from the presented functionalities:
▪ they allow users to navigate efficiently over the information repositories;
▪ the gap between the conceptualizations of the different stakeholders is minimized;
▪ users are able to answer complex questions that they could not answer with traditional systems.
In short, this document presents a proposal that demonstrably allows complex questions over semi-structured repositories to be answered in a user-driven way.
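The prototype is not reproducible from the abstract alone, but the gap it bridges, from a natural-language question to the repository's formal query language, can be illustrated with a hand-written SPARQL query sent through the SPARQLWrapper Python library to the public DBpedia endpoint. The question, vocabulary and endpoint are illustrative assumptions.

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Formal counterpart of a question such as
# "What is the official language of Portugal?"
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?language WHERE { dbr:Portugal dbo:officialLanguage ?language }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["language"]["value"])
```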
Abstract:
Diane Arbus' photographs are mainly about difference. Most of the time she is trying '[…] to suppress, or at least reduce, moral and sensory queasiness' (Sontag 1977: 40) in order to represent a world where the subject of the photograph is not merely the 'other' but also the I. Her technique does not coax her subjects into natural poses. Instead she encourages them to be strange and awkward. By posing for her, the revelation of the self is identified with what is odd. This paper aims at understanding the geography of difference that, at the same time, is also one of resistance, since Diane Arbus reveals what was forcefully hidden, bringing it into light in such a way that it is impossible to ignore. Her photographs display a poetic beauty that is not only of the 'I' but also of the 'eye'. The world that is depicted is one in which we are all the same. She "atomizes" reality by separating each element and 'Instead of showing identity between things which are different […] everybody is shown to look the same.' (Sontag 1977: 47). Furthermore, this paper analyses some of Arbus' photographs so as to explain this point of view, arguing that, between rejecting and reacting against what is standardized, she does not forget the geography of the body, which is also a geography of the self. While creating a new imagetic topos, where what is trivial becomes divine, she also presents the frailty of others as our own.
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.
Abstract:
In the past few years Tabling has emerged as a powerful logic programming model. The integration of concurrency features into the implementation of Tabling systems is demanded by the need to use recently developed tabling applications within distributed systems, where a process has to respond concurrently to several requests. Support for sharing tables among the concurrent threads of a Tabling process is a desirable feature, allowing one of Tabling's virtues, the re-use of computations, to extend to other threads, and allowing efficient usage of available memory. However, the incremental completion of tables which are evaluated concurrently is not a trivial problem. In this dissertation we describe the integration of concurrency mechanisms, by way of multi-threading, in a state-of-the-art Tabling and Prolog system, XSB. We begin by reviewing the main concepts of the formal description of tabled computations, called SLG resolution, and of the implementation of Tabling under the SLG-WAM, the abstract machine supported by XSB. We describe the different scheduling strategies provided by XSB and introduce some new properties of local scheduling, a scheduling strategy for SLG resolution. We proceed to describe our implementation work, first the process of integrating multi-threading in a Prolog system supporting Tabling without addressing the problem of shared tables, along with the trade-offs and implementation decisions involved. We then describe an optimistic algorithm for the concurrent sharing of completed tables, Shared Completed Tables, which allows the sharing of tables without incurring deadlocks under local scheduling. This method relies on the execution properties of local scheduling and includes full support for negation. We provide a theoretical framework and discuss the implementation's correctness and complexity. After that, we describe a method for the sharing of tables among threads that allows parallelism in the computation of inter-dependent subgoals, which we name Concurrent Completion, and we informally argue for its correctness. We give detailed performance measurements of the multi-threaded XSB system over a variety of machines and operating systems, for both the Shared Completed Tables and the Concurrent Completion implementations, focusing on the overhead over the sequential engine and the scalability of the system. We finish with a comparison of XSB with other multi-threaded Prolog systems and a comparison of our approach to concurrent tabling with parallel and distributed methods for the evaluation of tabling. Finally, we identify future research directions.
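As a loose analogy only (none of the SLG-WAM machinery, suspension or incremental completion is modelled), the reuse-of-computation idea behind table sharing can be sketched as several threads consulting and filling one lock-protected memo table:

```python
import threading

# Several threads evaluate the same "subgoal" (here: fib(n)); a shared
# table lets later calls reuse completed answers. The real Shared
# Completed Tables algorithm is far subtler; this shows only the
# reuse-of-computation idea.
table = {}
lock = threading.Lock()

def fib(n):
    with lock:
        if n in table:              # answer already completed by any thread
            return table[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    with lock:
        table[n] = result           # publish the completed answer
    return result

threads = [threading.Thread(target=fib, args=(30,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(fib(30), "table entries:", len(table))
```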
Abstract:
Many-core platforms based on Network-on-Chip (NoC [Benini and De Micheli 2002]) present an emerging technology in the real-time embedded domain. Although the idea of grouping applications previously executed on separate single-core devices and accommodating them on an individual many-core chip offers various options for power savings and cost reductions, and contributes to overall system flexibility, its implementation is a non-trivial task. In this paper we address the issue of application mapping onto a NoC-based many-core platform, considering the fundamentals and trends of current many-core operating systems; specifically, we elaborate on a limited migrative application model encompassing a message-passing paradigm as a communication primitive. As the main contribution, we formulate the problem of real-time application mapping and propose a three-stage process to solve it efficiently. Through analysis it is assured that derived solutions guarantee the fulfilment of the posed time constraints regarding worst-case communication latencies, and at the same time provide an environment in which to perform load balancing, e.g. for thermal, energy, fault-tolerance or performance reasons. We also propose several constraints regarding the topological structure of the application mapping, as well as the inter- and intra-application communication patterns, which efficiently resolve the issues of pessimism and/or intractability when performing the analysis.
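The paper's three-stage process and its latency analysis are not reproduced here; as an illustrative baseline only, the sketch below greedily places communicating tasks on a 2-D mesh NoC so as to minimize traffic-weighted hop (Manhattan) distance to already-placed partners. Mesh size, traffic matrix and the heuristic itself are assumptions.

```python
import itertools

MESH = list(itertools.product(range(3), range(3)))        # 3x3 mesh of cores
traffic = {("A", "B"): 10, ("B", "C"): 5, ("A", "C"): 2}  # messages/period
tasks = ["A", "B", "C"]

def hops(p, q):                       # Manhattan distance on the mesh
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def cost(task, core, placement):      # weighted hops to placed partners
    total = 0
    for (s, d), w in traffic.items():
        other = d if s == task else (s if d == task else None)
        if other in placement:
            total += w * hops(core, placement[other])
    return total

placement, free = {}, list(MESH)
for task in tasks:                    # placing heavy communicators first helps
    best = min(free, key=lambda core: cost(task, core, placement))
    placement[task] = best
    free.remove(best)
print(placement)
```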
Abstract:
Contention on the memory bus in COTS-based multicore systems is becoming a major factor determining the execution time of a task. Analyzing this extra execution time is non-trivial because (i) bus arbitration protocols in such systems are often undocumented and (ii) the times when the memory bus is requested are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. We present a method for finding an upper bound on the extra execution time of a task due to contention on the memory bus in COTS-based multicore systems. This method makes no assumptions about the bus arbitration protocol (other than that it is work-conserving).
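For intuition only, a deliberately coarse bound (not the paper's analysis): with m cores, at most one outstanding request per core, a maximum bus-transaction time L, and at most n cache misses suffered by the task under analysis, a work-conserving arbiter can delay each miss by at most the m-1 requests of the other cores, so the extra execution time satisfies

```latex
\[
  \Delta_{\mathrm{bus}} \;\le\; n \,(m-1)\, L .
\]
```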
Abstract:
Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software. But their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms from the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason the scientific community in the area of real-time systems has produced new algorithms specifically for multiprocessors. Meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs which can be used for implementing new real-time scheduling algorithms; the family of task-splitting algorithms is one of those emphasized in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, we present detailed Ada code that uses the new constructs.
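To give a flavour of task splitting, here is a schematic utilization-based split, not any of the surveyed algorithms nor the paper's Ada code: processors are filled to a utilization cap, and a task that crosses the boundary is split between two consecutive processors.

```python
# Illustrative utilization-based task splitting. Each processor is
# filled up to CAP; a task that does not fit is split into two pieces
# placed on consecutive processors.
CAP = 1.0
utilizations = [0.6, 0.7, 0.5, 0.8]          # one entry per task

processors = [[]]                            # lists of (task_id, share)
load = [0.0]
for tid, u in enumerate(utilizations):
    room = CAP - load[-1]
    if u <= room:
        processors[-1].append((tid, u)); load[-1] += u
    else:                                    # split across the boundary
        if room > 0:
            processors[-1].append((tid, room))
        processors.append([(tid, u - room)]); load.append(u - room)
for p, tasks in enumerate(processors):
    print(f"CPU{p}: {tasks}")
```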
Abstract:
This Thesis describes the application of automatic learning methods to a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied in the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set); Counter-Propagation Neural Networks (CPNNs) gave similar results. Supervised learning techniques improved the results further: correct assignments reached 77% with an ensemble of ten FFNNs and 80% with Random Forests. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms.
Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs, in order to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number levels, respectively. Experiments for the classification of reactions from the main reactants and products were performed with RFs: EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway - a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers. Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate accurate PES for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs.
The NN models showed a remarkable ability to interpolate between distant curves and to accurately reproduce potentials for use in molecular simulations. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems, the development of suitable analytical functions that fit quantum-mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform as well as or even better than analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
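Two of the ideas above lend themselves to small sketches. First, the MOLMAP reaction descriptor is defined as the difference between the MOLMAPs of the products and the reactants; with the maps faked as fixed bond-type count vectors (the real ones come from a Kohonen SOM over bond properties), that difference is simply:

```python
import numpy as np

# Schematic MOLMAP reaction descriptor; dimensions and values are
# illustrative, not real SOM activation patterns.
reactants_molmap = np.array([1, 0, 2, 0, 1], float)   # counts per bond type
products_molmap  = np.array([0, 1, 2, 1, 0], float)
reaction_descriptor = products_molmap - reactants_molmap
print(reaction_descriptor)  # negative = bond type consumed, positive = formed
```

Second, the first PES experiments fit the Lennard-Jones potential with neural networks; in the sketch below a single small scikit-learn MLP stands in for the ensembles (EnsFFNNs) and ASNNs actually used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Lennard-Jones potential in reduced units: V(r) = 4*eps*((s/r)^12 - (s/r)^6)
def lj(r, eps=1.0, sigma=1.0):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_train = np.linspace(0.9, 3.0, 400).reshape(-1, 1)
net = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=5000,
                   random_state=0).fit(r_train, lj(r_train).ravel())

r_test = np.array([[0.95], [1.12], [2.5]])
for r, e_nn in zip(r_test.ravel(), net.predict(r_test)):
    print(f"r={r:.2f}  LJ={lj(r):+.4f}  NN={e_nn:+.4f}")
```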