66 results for citation


Relevance:

10.00%

Publisher:

Abstract:

The high performance and capacity of current FPGAs make them suitable for use as acceleration co-processors. This article studies the implementation, for such accelerators, of the floating-point power function x^y as defined by the C99 and IEEE 754-2008 standards, generalized here to arbitrary exponent and mantissa sizes. Last-bit accuracy at the smallest possible cost is obtained thanks to a careful study of the various subcomponents: a floating-point logarithm, a modified floating-point exponential, and a truncated floating-point multiplier. A parameterized architecture generator in the open-source FloPoCo project is presented in detail and evaluated.
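The decomposition this kind of power unit is built on can be illustrated in software (a Python sketch of the mathematical identity only, not the FloPoCo generator, which produces hardware descriptions):

```python
import math

def pow_via_exp_log(x: float, y: float) -> float:
    """Evaluate x**y as 2**(y * log2(x)), the classical decomposition
    behind floating-point power units (x > 0 assumed in this sketch)."""
    return 2.0 ** (y * math.log2(x))

# The intermediate product y * log2(x) is itself rounded, so a naive
# software evaluation can be off by more than one ulp; a hardware unit
# aiming at last-bit accuracy must carry extra internal precision
# through the logarithm, multiplier and exponential stages.
```

For example, `pow_via_exp_log(2.0, 10.0)` returns exactly `1024.0`, but for general inputs the rounded intermediate makes last-bit accuracy non-trivial, which is precisely what the article's subcomponent analysis addresses.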

Abstract:

Many data streaming applications produce massive amounts of data that must be processed in a distributed fashion due to the resource limitations of a single machine. We propose a distributed data stream clustering protocol. Theoretical analysis shows preliminary results on the quality of the discovered clustering. In addition, we present results on the protocol's ability to reduce time complexity with respect to the centralized approach.
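The protocol itself is defined in the paper; as a hedged illustration of the general two-level scheme such protocols often follow, each node can condense its partition into weighted centroids and a coordinator can cluster those summaries (all function names below are illustrative, not from the paper):

```python
from math import dist  # Euclidean distance, Python 3.8+
from typing import List, Tuple

Point = Tuple[float, float]

def local_summary(points: List[Point]) -> Tuple[Point, int]:
    """Each node condenses its partition into (centroid, weight)."""
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n), n

def merge(summaries: List[Tuple[Point, int]], k: int) -> List[Point]:
    """Coordinator: weighted k-means over the local centroids.
    Deterministic seeding: greedily pick the k most spread-out centroids."""
    seeds = [summaries[0][0]]
    while len(seeds) < k:
        seeds.append(max((s[0] for s in summaries),
                         key=lambda c: min(dist(c, p) for p in seeds)))
    centers = seeds
    for _ in range(10):
        groups = [[] for _ in range(k)]
        for c, w in summaries:
            i = min(range(k), key=lambda j: dist(c, centers[j]))
            groups[i].append((c, w))
        new = []
        for i, g in enumerate(groups):
            if not g:
                new.append(centers[i])
                continue
            tw = sum(w for _, w in g)
            new.append((sum(c[0] * w for c, w in g) / tw,
                        sum(c[1] * w for c, w in g) / tw))
        centers = new
    return centers
```

Only the small summaries travel over the network, which is where the reduction in time complexity relative to a centralized approach comes from.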

Abstract:

The potential shown by Lean in different domains has aroused interest in the software industry. However, it remains unclear how Lean can be effectively applied in a domain such as software development that is fundamentally different from manufacturing. This study explores how Lean principles are implemented in software development companies and the challenges that arise when applying Lean Software Development. For that purpose, a case study was conducted at Ericsson R&D Finland, which successfully adopted Scrum in 2009 and subsequently started a comprehensive transition to Lean in 2010. Focus groups were conducted with company representatives to help devise a questionnaire supporting the creation of a Lean mindset in the company (Team Amplifier). Afterwards, the questionnaire was used in 16 teams based in Finland, Hungary and China to evaluate the status of the transformation. By using Lean thinking, Ericsson R&D Finland has made important improvements to the quality of its products, customer satisfaction and transparency within the organization. Moreover, build times have been reduced more than tenfold and the number of commits per day has increased roughly fivefold. The study makes two main contributions to research. First, the main factors that have enabled Ericsson R&D's achievements are analysed. Elements such as the 'network of product owners', 'continuous integration', 'work in progress limits' and 'communities of practice' have been identified as being of fundamental importance. Second, three categories of challenges in using Lean Software Development were identified: 'achieving flow', 'transparency' and 'creating a learning culture'.

Abstract:

In this article, we argue that there is a growing number of linked datasets in different natural languages, and that there is a need for guidelines and mechanisms to ensure the quality and organic growth of this emerging multilingual data network. However, we have little knowledge regarding the actual state of this data network, its current practices, and the open challenges that it poses. Questions regarding the distribution of natural languages, the links that are established across data in different languages, or how linguistic features are represented, remain mostly unanswered. Addressing these and other language-related issues can help to identify existing problems, propose new mechanisms and guidelines or adapt the ones in use for publishing linked data including language-related features, and, ultimately, provide metrics to evaluate quality aspects. In this article we review, discuss, and extend current guidelines for publishing linked data by focusing on those methods, techniques and tools that can help RDF publishers to cope with language barriers. Whenever possible, we will illustrate and discuss each of these guidelines, methods, and tools on the basis of practical examples that we have encountered in the publication of the datos.bne.es dataset.
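One of the language-related features the guidelines revolve around is the RDF language-tagged literal. A minimal sketch of how such literals are serialized in N-Triples (the resource identifier below is invented for illustration; real datos.bne.es IRIs follow the dataset's own scheme):

```python
def ntriples_literal(value, lang=None):
    """Serialize an RDF literal in N-Triples syntax. A BCP 47 language
    tag (e.g. "es", "en-GB") turns it into a language-tagged string."""
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"@{lang}' if lang else f'"{escaped}"'

def triple(s, p, o):
    """One N-Triples statement with IRI subject and predicate."""
    return f"<{s}> <{p}> {o} ."

# Hypothetical example in the spirit of datos.bne.es: a label in Spanish.
stmt = triple("http://datos.bne.es/resource/example",
              "http://www.w3.org/2000/01/rdf-schema#label",
              ntriples_literal("Biblioteca Nacional de España", "es"))
```

Consistently tagging literals this way is exactly the kind of practice that makes the distribution of natural languages in the data network measurable.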

Abstract:

An effective Distributed Denial of Service (DDoS) defense mechanism must guarantee legitimate users access to an Internet service while masking the effects of possible attacks. That is, it must be able to detect threats and discard malicious packets in an online fashion. Given that emerging data streaming technology can enable such mitigation in an effective manner, in this paper we present STONE, a stream-based DDoS defense framework that integrates anomaly-based DDoS detection and mitigation with scalable data streaming technology. With STONE, the traffic of potential targets is analyzed via continuous data streaming queries that maintain information used for both attack detection and mitigation. STONE causes minimal degradation of legitimate users' traffic during DDoS attacks and also handles flash crowds effectively. Our preliminary evaluation, based on an implemented prototype and conducted with real legitimate and malicious traffic traces, shows that STONE provides fast detection and precise mitigation of DDoS attacks by leveraging scalable data streaming technology.
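STONE's actual queries and detection logic are described in the paper; as a hedged sketch of the general idea (per-target anomaly detection over windowed traffic counts), a simple baseline flags a window whose packet count deviates too far from recent history:

```python
from collections import deque
from statistics import mean, pstdev

class RateDetector:
    """Illustrative per-target detector (not STONE's algorithm): flags a
    time window whose packet count exceeds the recent mean by more than
    k standard deviations."""
    def __init__(self, history=10, k=3.0):
        self.window_counts = deque(maxlen=history)
        self.k = k

    def observe(self, count):
        """Feed the packet count of the latest window; True = anomaly."""
        alarm = False
        if len(self.window_counts) >= 3:
            mu = mean(self.window_counts)
            sigma = pstdev(self.window_counts)
            alarm = count > mu + self.k * max(sigma, 1.0)
        if not alarm:
            # Do not let attack traffic poison the baseline; a real system
            # would also trigger mitigation (packet filtering) on alarm.
            self.window_counts.append(count)
        return alarm
```

A flash crowd grows more gradually than the step change above, which is one reason distinguishing the two cases needs richer state than a single threshold, as the framework maintains.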

Abstract:

We derive, by program transformation, Pierre Crégut's full-reducing Krivine machine KN from the structural operational semantics of the normal order reduction strategy in a closure-converted pure lambda calculus. We thus establish the correspondence between the strategy and the machine, and showcase our technique for deriving full-reducing abstract machines. In fact, the machine we obtain is a slightly optimised version that can work with open terms and may be used in implementations of proof assistants.
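For readers unfamiliar with the starting point: KN extends the plain Krivine machine, which only computes weak-head normal forms. A minimal sketch of that plain machine, with de Bruijn indices (this is the standard textbook machine, not the derived KN):

```python
from dataclasses import dataclass

# Lambda terms with de Bruijn indices.
@dataclass
class Var:
    n: int

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fun: object
    arg: object

def krivine(term, env=(), stack=None):
    """Weak-head call-by-name reduction. State = (term, environment of
    closures, stack of closures). KN adds levels and readback on top of
    this loop to reach full normal forms, including under binders."""
    env = list(env)
    stack = [] if stack is None else stack
    while True:
        if isinstance(term, App):              # push the argument closure
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam) and stack:  # bind top of stack
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var):            # look up the closure
            term, env = env[term.n]
            env = list(env)
        else:                                  # weak-head normal form
            return term, env, stack
```

Running the identity applied to the identity, `krivine(App(Lam(Var(0)), Lam(Var(0))))`, yields the closure for `Lam(Var(0))` with an empty stack.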

Abstract:

Olivier Danvy and others have shown the syntactic correspondence between reduction semantics (a small-step semantics) and abstract machines, as well as the functional correspondence between reduction-free normalisers (a big-step semantics) and abstract machines. The correspondences are established by program transformation (so-called interderivation) techniques. A reduction semantics and a reduction-free normaliser are interderivable when the abstract machine obtained from them is the same. However, the correspondences fail when the underlying reduction strategy is hybrid, i.e., relies on another sub-strategy. Hybridisation is an essential structural property of full-reducing and complete strategies. Hybridisation is unproblematic in the functional correspondence, but in the syntactic correspondence the refocusing and inlining-of-iterate-function steps become context sensitive, preventing the refunctionalisation of the abstract machine. We show how to solve the problem and showcase the interderivation of normalisers for normal order, the standard, full-reducing and complete strategy of the pure lambda calculus. Our solution makes it possible to interderive, rather than contrive, full-reducing abstract machines. As expected, the machine we obtain is a variant of Pierre Crégut's full Krivine machine KN.

Abstract:

UML is widely accepted as the standard for representing the various software artifacts generated by a development process. For this reason, there have been attempts to use this language to represent the software architecture of systems as well. Unfortunately, these attempts have ended in the same representations (boxes and lines) already criticized by the software architecture community. In this work we propose an extension to the UML metamodel that is able to represent the syntax and semantics of the C3 architectural style. This style is derived from C2; the modifications that define C3 are described in section 4. This proposal is innovative among UML extensions for software architectures, since previous proposals were based on lightweight extensions to the UML metamodel, while we propose a heavyweight extension of the metamodel. On the other hand, this proposal is less ambitious than previous ones, since we do not aim to represent every architectural style in UML, but only one: C3.
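The C3 rules themselves are given in the paper (section 4); as an illustration of the kind of well-formedness constraint a heavyweight metamodel extension can enforce, and assuming only the C2-family rule that components never link directly to each other, a toy check might look like:

```python
class Element:
    def __init__(self, name):
        self.name = name

class Component(Element):
    pass

class Connector(Element):
    pass

class Architecture:
    """Toy C2-style well-formedness (an assumption for illustration,
    not C3's actual rule set): a link is legal only if at least one
    endpoint is a connector."""
    def __init__(self):
        self.links = []

    def attach(self, a, b):
        if isinstance(a, Component) and isinstance(b, Component):
            raise ValueError(f"{a.name}--{b.name}: components cannot be "
                             "linked directly; route through a connector")
        self.links.append((a, b))
```

A heavyweight extension encodes such a rule in the metamodel itself, so illegal diagrams cannot even be expressed, instead of checking them after the fact as this sketch does.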

Abstract:

On 29 January 2000 Francisco Javier Sáenz de Oíza gave his last lecture in the auditorium of the BBVA headquarters on the Paseo de la Castellana in Madrid. The lecture opened the series The Architect Shows His Work (El arquitecto enseña su obra), organised by the COAM Official College of Architects and the ETSAM Architecture School of Madrid. The specific works that Oíza 'shows' in this lecture are Torres Blancas (Madrid, 1961-1968) and Banco de Bilbao (Madrid, 1971-1980). Before the lecture, waving a folder in his hand, Oíza says: "I have brought the texts I have always used, (...) I haven't written a line, I've only underlined certain passages, and as far as possible I would like to read some of them. But there is about five hours of reading here, so I don't know what to do. I can cite them, yes, and anyone who wants to delve a little into my knowledge can look at the citations I make here, the passages, books or articles I recommend." The folder in Oíza's hand contains twenty-four files typed by the architect himself. The files consist of text passages, most of them literary or poetic, and include a bibliographic reference with the book title, edition and page number. In each file, the passages Oíza intends to read are highlighted. Before starting his lecture Oíza says: "I have here citations of readings I would recommend to anyone who wants to understand what this building is like." During the lecture Oíza does not seem to talk about the works he is supposed to show; instead, he devotes himself almost exclusively to reading texts to the audience. This thesis arises from Oíza's own suggestion. Its broad aim, following his own words, is to 'delve a little into' his knowledge through the citations and reading recommendations he made when discussing architecture.
The central hypothesis is that through his readings Oíza does talk about architecture, and that starting from 'his' texts it is possible to 'understand', at least in part, what a building is like, Torres Blancas and Banco de Bilbao in particular. Moreover, it is hypothesised that Oíza, a Socratic teacher reluctant to write down his ideas, nevertheless built a sufficiently systematic architectural 'theoretical discourse'. As a result, even though he left no written 'theory', it is possible to reconstruct it to a certain degree from his speech and from an understanding of his readings. The primary sources of this thesis are: a) the readings Oíza recommends in his lecture of January 2000; b) Torres Blancas and Banco de Bilbao; c) the public readings made by Oíza in lectures, courses, debates, panel discussions, television programmes, etc. The subject under investigation is the relationship between the texts Oíza recommends and his architecture, and the guiding question of the research is how and to what extent these texts can contribute to an understanding of Oíza's thought, discourse and work. Torres Blancas and Banco de Bilbao are the two main works used throughout the thesis for observation and for testing the results of the theoretical research as it progresses. The theoretical and methodological framework of the thesis is hermeneutics, the discipline concerned with the interpretation and correct understanding of texts and works, in particular the hermeneutic philosophy of Hans-Georg Gadamer as set out in Truth and Method. The relevance, originality and possible contribution of this thesis lie in its approach to Oíza and his architecture through his readings, a perspective that has not yet been systematically researched.

Abstract:

One of the most pressing needs in cloud computing is for scalable and highly available databases. One way to address this need is to leverage the scalable replication techniques developed over the last decade, which increase both the availability and scalability of databases. Many replication protocols have been proposed in that time; the main research challenge has been how to scale under the eager replication model, the one that provides consistency across replicas. In this paper, we examine three eager database replication systems available today (Middle-R, C-JDBC and MySQL Cluster) using the TPC-W benchmark. We analyze their architectures and replication protocols, and compare their performance both in the absence of failures and when failures occur.
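The property all three systems share can be sketched abstractly (a toy model of the eager idea, not any of the three systems' actual protocols): if every replica applies the same writesets in the same total order before transactions commit, all copies converge to the same state.

```python
class Replica:
    """Toy eager replica: applies writesets in delivery order."""
    def __init__(self):
        self.state = {}

    def apply(self, writeset):
        self.state.update(writeset)

def total_order_deliver(replicas, writesets):
    """Stand-in for a total-order broadcast: every replica sees the
    same sequence of writesets, so the eager model's one-copy
    consistency follows by construction in this sketch."""
    for ws in writesets:          # the agreed global order
        for r in replicas:        # eager: applied at every replica
            r.apply(ws)

reps = [Replica() for _ in range(3)]
total_order_deliver(reps, [{"x": 1}, {"y": 2}, {"x": 3}])
```

The engineering difficulty the paper evaluates is how each system achieves this ordering and applies writesets efficiently enough to scale, and what happens to throughput when a replica fails.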

Abstract:

Machine learning and scientometrics are the scientific disciplines covered in this dissertation. Machine learning deals with the construction and study of algorithms that can learn from data, whereas scientometrics is mainly concerned with the analysis of science from a quantitative perspective. Nowadays, advances in machine learning provide the mathematical and statistical tools to work properly with the vast amount of scientometric data stored in bibliographic databases. In this context, the use of novel machine learning methods in scientometric applications is the focus of this dissertation, which proposes new machine learning contributions that shed light on the scientometrics area. These contributions are divided into three parts. First, several supervised cost-(in)sensitive models are learned to predict the scientific success of articles and researchers. Cost-sensitive models are not interested in maximizing classification accuracy, but in minimizing the expected total cost derived from classification errors. In this context, publishers of scientific journals could have a tool capable of predicting the citation count of an article before it is published, whereas promotion committees could predict the annual increase of a researcher's h-index within the first few years. These predictive models would pave the way for new assessment systems. Second, several probabilistic graphical models are learned to exploit and discover new relationships among the vast number of existing bibliometric indices. In this context, the scientific community could measure how some indices influence others in probabilistic terms, and perform evidence propagation and abductive inference to answer bibliometric questions. The scientific community could also uncover which bibliometric indices have the highest predictive power. This is a multi-output regression problem in which the role of each variable, predictor or response, is unknown beforehand. The resulting indices could be very useful for prediction: when their values are known, knowledge of any other index value provides no additional information for predicting other bibliometric indices. Third, a scientometric study of Spanish computer science research is performed under the publish-or-perish culture. This study is based on a cluster analysis methodology that characterizes research activity in terms of productivity, visibility, quality, prestige and international collaboration, and also analyzes the effects of collaboration on productivity and visibility under different circumstances.
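The cost-sensitive decision rule the first part relies on can be stated compactly: instead of predicting the most probable class, predict the class with minimum expected cost. A sketch with a hypothetical cost matrix (the numbers below are illustrative, not from the dissertation):

```python
def min_cost_class(probs, costs):
    """Bayes-optimal cost-sensitive decision.
    probs[i]    -- estimated P(true class = i)
    costs[i][j] -- cost of predicting class j when the truth is i."""
    k = len(costs[0])
    expected = [sum(probs[i] * costs[i][j] for i in range(len(probs)))
                for j in range(k)]
    return min(range(k), key=expected.__getitem__)

# Hypothetical setting: class 1 = "will be highly cited". Missing a
# future highly cited paper (predict 0 when truth is 1) costs 10;
# the opposite mistake costs 1.
probs = [0.7, 0.3]            # plain argmax would predict class 0
costs = [[0, 1], [10, 0]]
decision = min_cost_class(probs, costs)   # expected costs: 3.0 vs 0.7
```

Here the decision flips to class 1 despite class 0 being more probable, which is exactly why such models do not maximize raw accuracy.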

Abstract:

Empirical Software Engineering (ESE) replication researchers need to store and manipulate experimental data for several purposes, in particular analysis and reporting. Current research needs also call for the sharing and preservation of experimental data. In a previous work, we analyzed Replication Data Management (RDM) needs and proposed a novel concept, called the Empirical Ecosystem, to solve current deficiencies in RDM approaches. The Empirical Ecosystem provides replication researchers with a common framework that transparently integrates heterogeneous local data sources. A typical situation where it is applicable is when several members of a research group, or several collaborating research groups, need to share and access each other's experimental results. However, to apply the Empirical Ecosystem concept and deliver all its promised benefits, it is necessary to analyze the software architectures and tools that can properly support it.
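One common architectural pattern for transparently integrating heterogeneous local sources is a mediator over per-format adapters. A hedged sketch of the idea (the interface names are illustrative, not the paper's design):

```python
import csv
import io
import json

class CSVSource:
    """Adapter exposing a CSV export of experimental results as dict rows."""
    def __init__(self, text):
        self.text = text

    def rows(self):
        return list(csv.DictReader(io.StringIO(self.text)))

class JSONSource:
    """Adapter exposing a JSON export under the same interface."""
    def __init__(self, text):
        self.text = text

    def rows(self):
        return json.loads(self.text)

def query(sources, predicate):
    """The ecosystem layer: one query over all registered sources,
    regardless of each source's native format."""
    return [row for src in sources for row in src.rows() if predicate(row)]
```

With both adapters registered, one researcher's CSV results and another's JSON results answer the same query, which is the transparency the ecosystem concept asks of the supporting architecture.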

Abstract:

Context: The software engineering community is becoming more aware of the need for experimental replications. In spite of the importance of this topic, there is still much inconsistency in the terminology used to describe replications. Objective: Understand the perspectives of empirical researchers on the various terms used to characterize replications, and propose a consistent taxonomy of terms. Method: A survey followed by a plenary discussion during the 2013 International Software Engineering Research Network meeting. Results: We propose a taxonomy that consolidates the disparate terminology. This taxonomy had a high level of agreement among workshop attendees. Conclusion: Consistent terminology is important for any field to progress. This work is the first step in that direction; additional study and discussion are still necessary.

Abstract:

We describe a domain ontology development approach that extracts domain terms from folksonomies and enriches them with data and vocabularies from the Linked Open Data cloud. As a result, we obtain lightweight domain ontologies that combine the emergent knowledge of social tagging systems with formal knowledge from ontologies. To illustrate the feasibility of our approach, we have produced an ontology in the financial domain from tags available in Delicious, using DBpedia, OpenCyc and UMBEL as additional knowledge sources.
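The term-extraction step can be sketched with a simple co-occurrence heuristic (a hedged illustration only; the paper's actual extraction and the enrichment against DBpedia, OpenCyc and UMBEL are richer, and the bookmarks below are made up):

```python
from collections import Counter

def cooccurring_terms(taggings, seed, top=3):
    """Rank tags by how often they co-occur with `seed` across
    bookmarks; the top terms are candidate domain concepts to look
    up in external knowledge sources such as DBpedia."""
    counts = Counter()
    for tags in taggings:
        if seed in tags:
            counts.update(t for t in tags if t != seed)
    return [t for t, _ in counts.most_common(top)]

# Hypothetical Delicious-style bookmarks, each a set of tags.
bookmarks = [{"finance", "bank", "loan"},
             {"finance", "bank", "interest"},
             {"finance", "stock"},
             {"python", "code"}]
```

For the seed tag "finance", the strongest co-occurring term here is "bank", which would then be matched to formal concepts in the external vocabularies to build the lightweight ontology.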

Abstract:

Cross-platform development frameworks for mobile applications promise important advantages in cost cutting and ease of maintenance, making them a very good option for organizations interested in designing mobile applications for several platforms. Given that platform conventions are especially important for the user experience (UX) of mobile applications, using a framework in which the same code defines the behavior of the app on different platforms could have a negative impact on the UX. This paper describes a study in which two independent teams designed two versions of a mobile application, one using a framework that generates Android and iOS versions automatically, and the other using native tools. The alternative versions for each platform were evaluated with 37 users through a combination of a laboratory usability test and a longitudinal study. The results show that the differences are minimal on the Android platform; on iOS, however, even though a UX-conscious design team can obtain a reasonably good UX with the framework, a higher level of UX is obtained by developing directly with a native tool.