945 results for Open Computing Language
Abstract:
Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reuse potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication.
Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Provenance plays a pivotal role in tracing the origin of something and determining how and why it occurred. With the emergence of the cloud and the benefits it brings, services have rapidly proliferated across commercial and government sectors. However, trust and security concerns about such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, when customers and providers are left pointing fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying events that occurred in the past to understand how and why they took place. However, due to the complexity of cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required so that operators and users can define policies for compliance purposes. Current policy standards do not cater to this requirement. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and to express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance, and provided a translator that converts cProvl policies and requests into XACML form.
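As a rough sketch of what such a translation step might look like (the abstract does not show cProvl's concrete syntax, so the input rule below and all names in it are hypothetical), a minimal emitter of an XACML 3.0 policy skeleton could be written as:

```python
import xml.etree.ElementTree as ET

XACML_NS = "urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"

# Hypothetical intermediate form of one provenance rule; the real cProvl
# syntax is not given in the abstract, so this dict is purely illustrative.
rule = {
    "id": "trace-vm-image-derivation",
    "effect": "Permit",
    "description": "Allow auditors to query derivation chains of VM images.",
}

def to_xacml(r):
    """Emit a minimal XACML 3.0 <Policy> wrapping a single <Rule>."""
    policy = ET.Element(f"{{{XACML_NS}}}Policy", {
        "PolicyId": f"{r['id']}-policy",
        "Version": "1.0",
        "RuleCombiningAlgId":
            "urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides",
    })
    ET.SubElement(policy, f"{{{XACML_NS}}}Description").text = r["description"]
    ET.SubElement(policy, f"{{{XACML_NS}}}Target")  # empty Target: applies to all requests
    ET.SubElement(policy, f"{{{XACML_NS}}}Rule",
                  {"RuleId": r["id"], "Effect": r["effect"]})
    return ET.tostring(policy, encoding="unicode")

print(to_xacml(rule))
```

A real translator would additionally have to map provenance predicates (e.g., which activity generated which entity) into XACML Target and Condition elements; defining that mapping is precisely the role of the paper's cProvl-to-XACML component.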
Abstract:
In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems that encompasses a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), the Intel Xeon Phi, and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations like this, written in the same descriptive language for different architectures, are very important for realistically assessing the potential of heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
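The portability argument is easy to see in miniature. The sketch below is not the authors' HYCA code, just a toy kernel using the pyopencl bindings with illustrative names; the same OpenCL source runs unchanged on whichever device the platform exposes, be it CPU, GPU, FPGA or Xeon Phi:

```python
import numpy as np
import pyopencl as cl

# List every OpenCL platform/device pair visible on this machine.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name.strip(), "->", device.name.strip())

ctx = cl.create_some_context()   # picks a device (interactively or via env vars)
queue = cl.CommandQueue(ctx)

# Toy stand-in for a per-pixel processing step: scale a vector on the device.
x = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

program = cl.Program(ctx, """
__kernel void scale(__global const float *x, __global float *y) {
    int gid = get_global_id(0);
    y[gid] = 2.0f * x[gid];   /* same source runs on CPU, GPU, FPGA, ... */
}
""").build()

program.scale(queue, x.shape, None, x_buf, y_buf)
y = np.empty_like(x)
cl.enqueue_copy(queue, y, y_buf)   # note the host<->device transfer cost
assert np.allclose(y, 2.0 * x)
```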
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users' context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term "Proactive Context-aware Computing". Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users' current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users' intent and behavior and act without an explicit request. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly, the most significant sources of information about users today are smartphones. A large amount of users' context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users' interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important elements of users' context, we have developed 'Locus', an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users' context include the activities that they are engaged in. To this end, we have developed 'SenseMe', a system that leverages the smartphone and its multiple sensors in order to perform multidimensional context and activity recognition for users. As part of the 'SenseMe' project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users' situations, we have developed 'TellMe', a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users' preferences from their social network profiles and activities.
For recommending new information to users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users' behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine-learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users' semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device-charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
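The abstract does not reproduce the dissertation's factorization algorithm, but the general idea of multi-dimensional collaborative recommendation via tensor factorization can be sketched as follows (a minimal CP-decomposition example using tensorly; the data, dimensions and rank are invented for illustration):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical (user x location x hour-of-day) interaction counts.
counts = np.random.poisson(1.0, size=(50, 20, 24)).astype(float)
X = tl.tensor(counts)

# Rank-8 CP decomposition: X ~ sum_r  u_r (outer) l_r (outer) t_r
weights, (users, locations, hours) = parafac(X, rank=8, init="random")

# Predicted affinity of user 3 for location 7 around 18:00; unobserved
# cells of the reconstructed tensor serve as recommendation scores.
score = float(np.sum(weights * users[3] * locations[7] * hours[18]))
print(f"predicted affinity: {score:.3f}")
```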
Abstract:
COSTA, Umberto Souza; MOREIRA, Anamaria Martins; MUSICANTE, Martin A.; SOUZA NETO, Plácido A. JCML: A specification language for the runtime verification of Java Card programs. Science of Computer Programming. [S.l.]: [s.n.], 2010.
Abstract:
This thesis demonstrates a new methodology for the linguistic analysis of literature drawing on the data within The Historical Thesaurus of the Oxford English Dictionary (2009). Developing ideas laid out by Carol McGuirk in her book Robert Burns and the Sentimental Era (1985), this study offers a novel approach to the cultural connections present in the sentimental literature of the eighteenth century, with specific reference to Robert Burns. In doing so, it responds to the need to "stop reading Burns through glossaries and start reading him through dictionaries, thesauruses and histories", as called for by Murray Pittock (2012). Beginning by situating the methodology in linguistic theory, this thesis goes on, firstly, to illustrate the ways in which such an approach can be deployed to assess existing literary-critical ideas. The first chapter does this testing by examining McGuirk's book, while simultaneously grounding the study in the necessary contextual background. Secondly, this study investigates, in detail, two aspects of Burns's sentimental persona construction. Beginning with his open letter 'The Address of the Scotch Distillers' and its sentimental use of the language of the Enlightenment, and moving on to one of Burns's personas in his letters to George Thomson, this section illustrates the importance of persona construction in Burns's sentimental ethos. Finally, a comprehensive, evidence-based comparison of linguistic trends examines the extent to which similar sentimental language is used by Burns and Henry Mackenzie, Laurence Sterne, William Shenstone and Samuel Richardson. This thesis shows how this new methodology is a valuable tool for those involved in literary scholarship. For the first time in any comprehensive way, the Historical Thesaurus can be harnessed to make new arguments in literary criticism.
Abstract:
In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
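For orientation, the standard Dual-Weighted-Residual estimate that the article generalizes can be written, in its generic textbook form (not the paper's bifurcation-specific bound), as:

```latex
% u: exact solution, u_h: Galerkin approximation,
% z: dual (adjoint) solution associated with the target functional J,
% z_h: a discrete approximation of z, rho: the primal residual functional.
\[
  J(u) - J(u_h) \;\approx\; \rho(u_h)(z - z_h)
  \;=\; \sum_{K \in \mathcal{T}_h} \eta_K ,
\]
% The local indicators eta_K steer adaptive mesh refinement; here the
% target functional would encode the critical Reynolds number at which
% the pitchfork or Hopf bifurcation occurs.
```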
Abstract:
This project examines the currently available work on explicit and implicit parallelization of the R scripting language and reports experimental findings toward a model that predicts, based upon input data size and function complexity, the points at which automatic parallelization becomes effective. After finding or creating a series of custom benchmarks, we identified an interval, based on data size and time complexity, in which replacement becomes a viable option; specifically, between O(N) and O(N³), exclusive. As data size increases, the benefits of parallel processing become more apparent, and a point is reached where those benefits outweigh the costs in memory transfer time. Based on our observations, this point can be predicted with a fair amount of accuracy using regression on a sample of approximately ten data sizes spread evenly between a system-determined minimum and maximum size.
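A minimal sketch of that prediction step (with invented timing numbers; the project's actual benchmarks and model are not reproduced in this abstract) might look like:

```python
import numpy as np

# Hypothetical timings (seconds) for one benchmarked function, sampled at
# ten data sizes spread evenly between a system-determined min and max.
sizes = np.linspace(1e4, 1e6, 10)
serial_t = 3.0e-10 * sizes**1.5             # stand-in serial measurements
parallel_t = 0.08 + 0.6e-10 * sizes**1.5    # transfer overhead + faster compute

# Fit a low-degree polynomial to the serial/parallel gap and locate the
# crossover: the data size beyond which parallel execution wins.
gap = np.polyfit(sizes, serial_t - parallel_t, 2)
roots = np.roots(gap)
crossover = min(r.real for r in roots if r.real > 0 and abs(r.imag) < 1e-12)
print(f"predicted crossover near N = {crossover:,.0f}")
```

Note that the illustrative complexity here, roughly O(N^1.5), falls inside the O(N) to O(N³) interval the project identifies as viable for replacement.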
Abstract:
Developing Cyber-Physical Systems requires methods and tools to support simulation and verification of hybrid (both continuous and discrete) models. The Acumen modeling and simulation language is an open source testbed for exploring the design space of what rigorous-but-practical next-generation tools can deliver to developers of Cyber-Physical Systems. Like verification tools, a design goal for Acumen is to provide rigorous results. Like simulation tools, it aims to be intuitive, practical, and scalable. However, it is far from evident whether these two goals can be achieved simultaneously. This paper explains the primary design goals for Acumen, the core challenges that must be addressed in order to achieve these goals, the "agile research method" taken by the project, the steps taken to realize these goals, the key lessons learned, and the emerging language design.
Abstract:
Code patterns, including programming patterns and design patterns, are good references for programming-language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns from a collection of open source projects using NiPAT. Because design patterns are most closely associated with object-oriented languages, we choose Java and Python projects for our study. The tool we use for detecting patterns is NiPAT, a pattern-detection tool based on the NiCad clone detector and originally developed for the TXL programming language. We extend NiPAT to the Java and Python programming languages. Then, we identify all the patterns from the pattern report and classify them into several different categories. At the end of the study, we analyze all the patterns and compare the differences between Java and Python patterns.
Abstract:
This work is a description of Tajio, a Western Malayo-Polynesian language spoken in Central Sulawesi, Indonesia. It covers the essential aspects of Tajio grammar without being exhaustive. Tajio has a medium-sized phoneme inventory consisting of twenty consonants and five vowels. The language does not have lexical (word) stress; rather, it has a phrasal accent. This phrasal accent regularly occurs on the penultimate syllable of an intonational phrase, rendering this syllable auditorily prominent through a pitch rise. Possible syllable structures in Tajio are (C)V(C). CVN structures are allowed as closed syllables, but CVN syllables in word-medial position are not frequent. As in other languages of the area, the only consonant sequences allowed in native Tajio words are sequences of a nasal followed by a homorganic obstruent. These homorganic nasal-obstruent sequences can occur word-initially and word-medially, but never in word-final position. As in many Austronesian languages, word class classification in Tajio is not straightforward. The classification of words in Tajio must be carried out on two levels: the morphosyntactic level and the lexical level. The open word classes in Tajio consist of nouns and verbs. Verbs are further divided into intransitive verbs (dynamic intransitive verbs and statives) and dynamic transitive verbs. Based on their morphological potential, lexical roots in Tajio fall into three classes: single-class roots, dual-class roots and multi-class roots. There are two basic transitive constructions in Tajio: Actor Voice (AV) and Undergoer Voice (UV), in which the actor or the undergoer argument, respectively, serves as subject. Tajio shares many characteristics with symmetrical voice languages, yet it is not fully symmetrical, as arguments in AV and UV are not equally marked. Neither subjects nor objects are marked in AV constructions. In UV constructions, however, subjects are unmarked while objects are marked either by prefixation or cliticization. Evidence from relativization, control and raising constructions supports the analysis that AV and UV are in fact transitive, with subject arguments and object arguments behaving alike in both voices. Only the subject can be relativized, controlled, raised or function as the implicit subject of subjectless adverbial clauses. In contrast, the objects of AV and UV constructions do not exhibit these features. Tajio is a predominantly head-marking language with basic A-V-O constituent order. V and O form a constituent, and the subject can either precede or follow this complex. Thus, basic word order is S-V-O or V-O-S. Subject as well as non-subject arguments may be omitted when contextually specified. Verbs are marked for voice and mood, the latter of which is obligatory. The two mood values distinguished are realis and non-realis. Depending on the type of predicate involved in clause formation, three clause types can be distinguished: verbal clauses, existential clauses and non-verbal clauses. Tajio has a small number of multi-verbal structures that appear to qualify as serial verb constructions (SVCs). SVCs in Tajio always include a motion verb or a directional.
Abstract:
Knowledge organization (KO) research is a field of scholarship concerned with the design, study and critique of the processes of organizing and representing documents that societies see as worthy of preserving (Tennis, 2008). In this context we are concerned with the relationship between language and action.

On the one hand, we are concerned with what language can and does do for our knowledge organization systems (KOS). For example, how do the words NEGRO or INDIAN work in historical and contemporary indexing languages? In relation to this, we are also concerned with how we know about knowledge organization (KO) and its languages. On the other hand, we are concerned with how to act given this knowledge. That is, how do we carry out research, and how do we design, implement, and evaluate KO systems?

It is important to consider these questions in the context of our work because we are delegated by society to disseminate cultural memory. We are endowed with a perspective, prepared by an education, and granted positions whereby society asks us to ensure that documentary material is accessible to future generations. There is a social value in our work, and as such there is a social imperative to our work. We must act with good conscience, and use language judiciously, for the memory of the world is a heavy burden.

In this paper, I explore these two weights of language and action that bear down on KO researchers. I first summarize what the extant literature says about the knowledge claims we make with regard to KO practices and systems. To make it clear what it is that I think we know, I create a schematic that links claims (language) to actions in advising, implementing, or evaluating information practices and systems.

I will then contrast this with what we do not know, that is, what the unanswered questions might be (Gnoli, 2008; Dahlberg, 2011), and I will discuss them in relation to the two weights in our field of KO.

Further, I will try to provide a systematic overview of possible ways to address these open questions in KO research. I will draw on the concept of elenchus, that is, the forms of epistemology, theory, and methodology in KO (Tennis, 2008), and on framework analysis, which covers the structures, work practices, and discourses of KO systems (Tennis, 2006). In so doing, I will argue for a Neopragmatic stance on the weight of language and action in KO (Rorty, 1982; 2000). I will close by addressing the lacuna left in Neopragmatic thought: the ethical imperative to use language and action in a particularly good and moral way. That is, I will address the ethical imperative of KO given its weights, epistemologies, theories, and methods. To do this, I will review a sample of relevant work on deontology in both western and eastern philosophical schools (e.g., Harvey, 1995).

The perspective I want to communicate in this section is that the good in carrying out KO research may begin with epistemic stances (cf. language), but ultimately stands on ethical actions. I will present an analysis describing the micro and macro ethical concerns in relation to KO research and its advice on practice. I hope this demonstrates that the direction of epistemology, theory, and methodology in KO, while burdened with the dual weights of language and action, is clear when provided an ethical sounding board. We know how to proceed when we understand how our work can benefit the world.

KO is an important, if not always understood, division of labor in a society that values its documentary heritage and memory institutions.
Being able to do good requires us to understand how to balance the weights of language and action. We must understand where we stand and be able to chart a path forward, one that does not cause harm, but adds value to the world and to those who want to access recorded knowledge.