987 results for Music|Computer Science
Abstract:
Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally with the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
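The node and key logical clocks named above (Bitmapped Version Vectors, Dotted Causal Containers) are specific to this work; as a minimal sketch of the generic mechanism the abstract alludes to, the Python fragment below compares two plain per-key version vectors to decide whether one write causally precedes the other or whether the writes are concurrent. The node identifiers and counters are illustrative assumptions.

```python
# Minimal sketch: classifying the causal relation between two writes using
# plain version vectors (maps from node id to a monotonically increasing
# counter). This illustrates the generic conflict-detection idea only, not
# the paper's Bitmapped Version Vectors or Dotted Causal Containers.

def compare(vv_a, vv_b):
    """Return 'a<b', 'b<a', 'equal', or 'concurrent'."""
    nodes = set(vv_a) | set(vv_b)
    a_le_b = all(vv_a.get(n, 0) <= vv_b.get(n, 0) for n in nodes)
    b_le_a = all(vv_b.get(n, 0) <= vv_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "a<b"        # write a causally precedes write b
    if b_le_a:
        return "b<a"        # write b causally precedes write a
    return "concurrent"     # neither dominates: a genuine write conflict

if __name__ == "__main__":
    # Two replicas updated the same key without coordination.
    print(compare({"n1": 2, "n2": 1}, {"n1": 1, "n2": 3}))  # -> concurrent
    print(compare({"n1": 1}, {"n1": 2, "n2": 1}))           # -> a<b
```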
Abstract:
The development of ubiquitous computing (ubicomp) environments raises several challenges in terms of their evaluation. Ubicomp virtual reality prototyping tools enable users to experience the system to be developed and are of great help in facing those challenges, as they support developers in assessing the consequences of a design decision in the early phases of development. Given the situated nature of ubicomp environments, a particular issue to consider is the level of realism provided by the prototypes. This work presents a case study where two ubicomp prototypes, featuring different levels of immersion (desktop-based versus CAVE-based), were developed and compared. The goal was to determine the cost/benefit relation of both solutions, which provided better user experience results, and whether or not simpler solutions provide the same user experience results as more elaborate ones.
Abstract:
Model finders are very popular for exploring scenarios, helping users validate specifications by navigating through conforming model instances. To be practical, the semantics of such scenario exploration operations should be formally defined and, ideally, controlled by the users, so that they are able to quickly reach interesting scenarios. This paper explores the landscape of scenario exploration operations, by formalizing them with a relational model finder. Several scenario exploration operations provided by existing tools are formalized, and new ones are proposed, namely to allow the user to easily explore very similar (or different) scenarios, by attaching preferences to model elements. As a proof-of-concept, such operations were implemented in the popular Alloy Analyzer, further increasing its usefulness for (user-guided) scenario exploration.
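The operations above are formalized with a relational model finder and implemented in the Alloy Analyzer; the sketch below is not that implementation, but a generic illustration of the same idea using the Z3 SMT solver: the previous scenario is blocked so that the next one must differ, while weighted soft constraints encode a preference for keeping elements unchanged, yielding a maximally similar yet different scenario. The variables, specification and weights are illustrative assumptions.

```python
# Generic "next, but similar, scenario" exploration with blocking and
# preferences, sketched with Z3 (not the Alloy/Kodkod machinery of the paper).
from z3 import Bools, Or, Optimize, sat

a, b, c = Bools("a b c")            # toy "model elements"
spec = Or(a, b)                     # the specification being explored

# Scenario already shown to the user, as (element, value) pairs.
previous = [(a, True), (b, False), (c, False)]

opt = Optimize()
opt.add(spec)
# The next scenario must differ from the previous one in at least one element...
opt.add(Or([v != val for v, val in previous]))
# ...but prefer keeping each element unchanged (weighted soft constraints),
# so the solver returns a most similar, yet different, scenario.
for v, val in previous:
    opt.add_soft(v == val, weight=1)

if opt.check() == sat:
    m = opt.model()
    print({str(v): m[v] for v in (a, b, c)})
```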
Abstract:
Temporal logics targeting real-time systems are traditionally undecidable. Based on a restricted fragment of MTL-R, we propose a new approach for the runtime verification of hard real-time systems. The novelty of our technique is that it is based on incremental evaluation, allowing us to effectively treat duration properties (which play a crucial role in real-time systems). We describe the two levels of operation of our approach: offline simplification by quantifier removal techniques; and online evaluation of a three-valued interpretation for formulas of our fragment. Our experiments show the applicability of this mechanism as well as the validity of the provided complexity results.
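The MTL-R fragment, the quantifier-removal step and the complexity results are not reproduced here; as a rough sketch of what incremental, three-valued online evaluation means in practice, the fragment below monitors a simple bounded-eventually property ("p holds within a deadline") and reports True, False, or Unknown while the trace is still incomplete. The property, class and deadline are illustrative assumptions, not the authors' formalism.

```python
# Illustrative sketch of online, three-valued monitoring (True / False /
# Unknown) for the toy property "p becomes true within a deadline". This is
# an assumption-level example, not the MTL-R fragment of the paper.
from enum import Enum

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"          # the trace seen so far is inconclusive

class BoundedEventually:
    def __init__(self, deadline):
        self.deadline = deadline
        self.verdict = Verdict.UNKNOWN

    def step(self, timestamp, p):
        """Consume one observation (timestamp, truth of p) incrementally."""
        if self.verdict is not Verdict.UNKNOWN:
            return self.verdict                  # verdict already settled
        if p and timestamp <= self.deadline:
            self.verdict = Verdict.TRUE
        elif timestamp > self.deadline:
            self.verdict = Verdict.FALSE
        return self.verdict

if __name__ == "__main__":
    monitor = BoundedEventually(deadline=5)
    for t, p in [(1, False), (3, False), (4, True)]:
        print(t, monitor.step(t, p).value)       # unknown, unknown, true
```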
Abstract:
This paper introduces the metaphorism pattern of relational specification and addresses how specifications following this pattern can be refined into recursive programs. Metaphorisms express input-output relationships which preserve relevant information while at the same time some intended optimization takes place. Text processing, sorting, representation changers, etc., are examples of metaphorisms. The kind of metaphorism refinement proposed in this paper is a strategy known as change of virtual data structure. The paper gives sufficient conditions for such implementations to be calculated using relation algebra and illustrates the strategy with the derivation of quicksort as an example.
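Since quicksort is the running example of a program calculated from a metaphorism by changing the virtual data structure (typically a binary search tree that is never materialized), a plain functional rendering of the end product may help fix ideas; the relation-algebraic derivation itself is of course not reproduced here.

```python
# The outcome of the derivation: ordinary quicksort. The output preserves the
# input's information (it is a permutation) while realizing the intended
# optimization (it is ordered); the partition below corresponds to the
# virtual tree of the change-of-virtual-data-structure strategy.

def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x <= pivot]    # virtual left subtree
    larger = [x for x in rest if x > pivot]      # virtual right subtree
    return quicksort(smaller) + [pivot] + quicksort(larger)

assert quicksort([3, 1, 4, 1, 5]) == [1, 1, 3, 4, 5]
```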
Abstract:
"Lecture notes in computer science series", ISSN 0302-9743, vol. 9121
Abstract:
We consider implicit signatures over finite semigroups determined by sets of pseudonatural numbers. We prove that, under relatively simple hypotheses on a pseudovariety V of semigroups, the finitely generated free algebra for the largest such signature is closed under taking factors within the free pro-V semigroup on the same set of generators. Furthermore, we show that the natural analogue of the Pin-Reutenauer descriptive procedure for the closure of a rational language in the free group with respect to the profinite topology holds for the pseudovariety of all finite semigroups. As an application, we establish that a pseudovariety enjoys this property if and only if it is full.
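For orientation, the classical Pin-Reutenauer procedure referred to above computes the closure of a rational language in the free group, with respect to the profinite topology, compositionally from a rational expression; the rendering below is of that classical free-group statement, not of the pro-V semigroup analogue established in the paper.

```latex
% Classical Pin-Reutenauer procedure over the free group (shown for
% orientation; the paper proves an analogue relative to free pro-V semigroups):
\begin{align*}
  \overline{K \cup L} &= \overline{K} \cup \overline{L},\\
  \overline{K L}      &= \overline{K}\,\overline{L},\\
  \overline{L^{*}}    &= \langle\, \overline{L} \,\rangle,
\end{align*}
% where $\overline{X}$ denotes the closure of $X$ in the profinite topology of
% the free group and $\langle Y \rangle$ the subgroup generated by $Y$.
```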
Abstract:
Under the framework of constraint-based modeling, genome-scale metabolic models (GSMMs) have been used for several tasks, such as metabolic engineering and phenotype prediction. More recently, their application in health-related research has spanned drug discovery, biomarker identification and host-pathogen interactions, targeting diseases such as cancer, Alzheimer's disease, obesity or diabetes. In recent years, the development of novel techniques for genome sequencing and other high-throughput methods, together with advances in Bioinformatics, has allowed the reconstruction of GSMMs for human cells. Considering the diversity of cell types and tissues present in the human body, it is imperative to develop tissue-specific metabolic models. Methods to automatically generate these models, based on generic human metabolic models and a plethora of omics data, have been proposed. However, their results have not yet been adequately and critically evaluated and compared. This work presents a survey of the most important tissue- or cell-type-specific metabolic model reconstruction methods, which use literature, transcriptomics, proteomics and metabolomics data, together with a global template model. As a case study, we analyzed the consistency between several omics data sources and reconstructed distinct metabolic models of hepatocytes using different methods and data sources as inputs. The results show that omics data sources overlap poorly and, in some cases, are even contradictory. Additionally, the hepatocyte metabolic models generated are in many cases not able to perform metabolic functions known to be present in the liver tissue. We conclude that reliable methods for a priori omics data integration are required to support the reconstruction of complex models of human cells.
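As background for the constraint-based modeling framework mentioned at the start of this abstract, phenotype prediction with a GSMM typically reduces to flux balance analysis, i.e. a linear program that maximizes an objective flux subject to steady-state mass balance and flux bounds. The toy network and numbers below are illustrative assumptions, not taken from the surveyed hepatocyte reconstructions.

```python
# Toy flux balance analysis (FBA) sketch: maximize c^T v subject to S v = 0
# and lb <= v <= ub. The tiny stoichiometric matrix is made up for
# illustration only.
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 (uptake of A), R2 (A -> B), R3 (B -> biomass)
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])
lb = [0, 0, 0]
ub = [10, 10, 10]
c = np.array([0, 0, 1])            # maximize flux through R3 (biomass)

res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lb, ub)), method="highs")
print("optimal biomass flux:", -res.fun)   # -> 10.0 for this toy network
```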
Abstract:
Master's degree internship report in Informatics Teaching (Ensino de Informática)
Abstract:
Higher-dimensional automata constitute a very expressive model for concurrent systems. In this paper, we discuss "topological abstraction" of higher-dimensional automata, i.e., the replacement of HDAs by smaller ones that can be considered equivalent from the point of view of both computer science and topology. By definition, topological abstraction preserves the homotopy type, the trace category, and the homology graph of an HDA. We establish conditions under which cube collapses yield topological abstractions of HDAs.
Abstract:
One of the central topics of the project concerns the nature of computer science. The recent emergence of this discipline, together with its hybrid origin as a formal science and a technological discipline, means that its characterization is not yet complete, let alone agreed upon among the scientists in the field. In the paper Three Paradigms of Computer Science, A. Eden presents three admittedly exaggerated positions on how to understand the object of study (ontology), the methods of work (methodology), and the structure of theory and the justification of computing knowledge (epistemology): the so-called rationalist position, based on the idea that programs are logical formulas and that the way of working is deductive; the technocratic position, which presents computer science as an engineering discipline; and the one there called scientific, which assimilates computing to the empirical sciences. Some of the problems of computer science are related to questions in the philosophy of mathematics, in particular the relation between abstract entities and the world. However, the prescriptive character of the axioms and theorems of programming theories may allow alternative interpretations and would strongly question the possibility of regarding computer science as an empirical science, at least in the traditional sense. On the other hand, it is possible that the kind of analysis of computer science proposed in this project will contribute new ideas for thinking about problems in the philosophy of mathematics. An example of these possible contributions can be seen in Arkoudas's paper Computers, Justification, and Mathematical Knowledge, which sheds new light on the problem of the meaning of mathematical proofs. The objectives of the project are: to characterize the field of computer science; to evaluate the ontological, epistemological and methodological foundations of current computer science; and to analyze the relations between the different heuristic and epistemic perspectives and the practices of programming.
Abstract:
Visualistics, computer science, picture syntax, picture semantics, picture pragmatics, interactive pictures
Abstract:
The project falls within the Institutional Environmental Plan (PAI) of the Universidad Michoacana de San Nicolás de Hidalgo (UMSNH), Mexico, concerning waste management, and its purpose is to analyze the typology and composition of the waste generated in some areas of Ciudad Universitaria (CU). To this end, a door-to-door, non-selective waste collection methodology was carried out, structured in two phases: the first aimed at obtaining all the information on the number and type of spaces in the buildings in order to then design and carry out the waste sampling, and the second focused on the computerized capture and management of the corresponding weights. From the data obtained it was concluded that the waste with the greatest sampled weight was paper, organic matter, cardboard and transparent glass, while the waste with the greatest per capita generation was paper, printer cartridges, CDs and diskettes. Finally, it is concluded that UMSNH does not treat its waste, which, being deposited in the open air, pollutes its environment. Recycling it could yield not only environmental but also economic benefits, which would lower the cost of recycling by returning the waste to the productive cycle.
Abstract:
This project is presented as a solution to the problem that arose with the deployment of the e-CAP clinical workstation in the primary care centres of the primary care service of the city of l'Hospitalet del Llobregat. The solution developed is a Web application that provides the functionality required to carry out the control of the computer equipment, as well as the management of incidents. This Web application is aimed at the user-support staff of the aforementioned centres, under the supervision of the Information Systems department of each primary care service.
Abstract:
Study carried out during a research stay at the Computer Science and Artificial Intelligence Lab of the Massachusetts Institute of Technology, between 2006 and 2008. The research developed in this project focuses on machine learning methods for the syntactic analysis of language. As a starting point, we establish that the complexity of language demands not only understanding the computational processes associated with language but also understanding how the knowledge needed to carry out these processes can be learned automatically.