214 results for computer science, information systems
Abstract:
This paper gives an insight into cognitive computing for smart cities, resulting in cognitive cities. Research on cognitive cities and cognitive computing, with the underlying concepts of knowledge graphs and fuzzy cognitive maps (FCMs), is presented and supported by existing tools (e.g., IBM Watson and Google Now) and an envisioned tool (a meta-app). The paper illustrates the FCM as a suitable instrument to represent information and knowledge in a city environment driven by human-technology interaction, reinforcing the concept of cognitive cities. A proposed paper prototype combines the findings of the paper and shows the next step in the implementation of the proposed meta-app.
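To make the role of fuzzy cognitive maps concrete, the following is a minimal sketch of an FCM iteration; the city concepts, influence weights and sigmoid squashing are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Hypothetical city concepts and influence weights (rows influence columns).
    concepts = ["traffic", "air_quality", "public_transport_use", "citizen_satisfaction"]
    W = np.array([
        [ 0.0, -0.6, 0.0, -0.4],   # more traffic -> worse air, lower satisfaction
        [ 0.0,  0.0, 0.0,  0.5],   # better air -> higher satisfaction
        [-0.7,  0.3, 0.0,  0.2],   # more transit use -> less traffic, better air
        [ 0.0,  0.0, 0.4,  0.0],   # satisfied citizens -> more transit use
    ])

    def step(state, W):
        """One FCM iteration: previous activation plus weighted influences,
        squashed to (0, 1) by a sigmoid."""
        return 1.0 / (1.0 + np.exp(-(state + state @ W)))

    state = np.array([0.8, 0.3, 0.4, 0.5])   # initial activation levels
    for _ in range(20):                      # iterate until the map stabilizes
        state = step(state, W)
    print(dict(zip(concepts, state.round(2))))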
Abstract:
Information-centric networking (ICN) has been proposed to cope with the drawbacks of the Internet Protocol, namely scalability and security. The majority of research efforts in ICN have focused on routing and caching in wired networks, while little attention has been paid to optimizing communication and caching efficiency in wireless networks. In this work, we study the application of Raptor codes to Named Data Networking (NDN), a popular ICN architecture, in order to minimize the number of transmitted messages and accelerate content retrieval times. We propose RC-NDN, an NDN-compatible Raptor code architecture. In contrast to other coding-based NDN solutions that employ network codes, RC-NDN takes the security architecture inherent to NDN into account. Moreover, unlike existing network coding based solutions for NDN, RC-NDN does not require significant computational resources, which renders it appropriate for low-cost networks. We evaluate RC-NDN in scenarios with high mobility. Evaluations show that RC-NDN significantly outperforms the original NDN. RC-NDN is particularly efficient in dense environments, where retrieval times can be reduced by 83% and the number of Data transmissions by 84.5% compared to NDN.
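RC-NDN itself is not reproduced here; the following is a heavily simplified, fountain-code-style sketch of the rateless encoding idea (fixed block size, pseudo-random XOR combinations identified by a seed), not the actual Raptor code or the RC-NDN packet format.

    import os, random

    BLOCK = 1024  # bytes per source block (assumed segment size)

    def split_blocks(content: bytes, block=BLOCK):
        """Split a content object into fixed-size source blocks (zero-padded)."""
        padded = content + b"\x00" * (-len(content) % block)
        return [padded[i:i + block] for i in range(0, len(padded), block)]

    def encode_symbol(blocks, seed):
        """Produce one rateless coded symbol: XOR of a pseudo-random block subset.
        The seed would travel with the Data packet so any node can reproduce
        the combination."""
        rng = random.Random(seed)
        degree = rng.randint(1, len(blocks))
        chosen = rng.sample(range(len(blocks)), degree)
        symbol = bytearray(len(blocks[0]))
        for i in chosen:
            for j, b in enumerate(blocks[i]):
                symbol[j] ^= b
        return bytes(symbol), chosen

    blocks = split_blocks(os.urandom(4000))            # a dummy 4 KB content object
    symbols = [encode_symbol(blocks, seed) for seed in range(8)]
    # A consumer collects slightly more symbols than source blocks and decodes
    # by peeling/Gaussian elimination; symbols arriving over different paths
    # remain useful instead of being duplicates.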
Abstract:
Time-based indoor localization has been investigated for several years, but the accuracy of existing solutions is limited by several factors, e.g., imperfect synchronization, signal bandwidth and the indoor environment. In this paper, we compare two time-based localization algorithms for narrow-band signals: multilateration and fingerprinting. First, we develop a new Linear Least Square (LLS) algorithm for Differential Time Difference Of Arrival (DTDOA). Second, since fingerprinting is among the most successful approaches to indoor localization and typically relies on collecting signal strength measurements over the area of interest, we propose an alternative that constructs fingerprints from fine-grained time information of the radio signal. We offer comprehensive analytical discussions on the feasibility of the approaches, backed up by evaluations in a software defined radio based IEEE 802.15.4 testbed. Our work contributes to research on localization with narrow-band signals. The results show that our proposed DTDOA-based LLS algorithm clearly improves localization accuracy compared to the traditional TDOA-based LLS algorithm, but the accuracy is still limited by the complex indoor environment. Furthermore, we show that time-based fingerprinting is a promising alternative to power-based fingerprinting.
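The paper's DTDOA formulation is not reproduced here; as a reference point, the sketch below shows the classical linear least-squares solution for plain TDOA measurements, with the range to the reference anchor kept as an extra unknown. The anchor layout and the noiseless measurements are illustrative.

    import numpy as np

    C = 299_792_458.0  # speed of light in m/s

    def tdoa_lls(anchors, tdoas):
        """Linear least-squares position estimate from TDOAs w.r.t. anchor 0.

        anchors: (N, 2) anchor coordinates in metres
        tdoas:   (N-1,) time differences of arrival (anchor i vs. anchor 0), seconds
        """
        p0 = anchors[0]
        d = C * np.asarray(tdoas)                     # range differences to anchor 0
        A = np.column_stack([2 * (anchors[1:] - p0), 2 * d])
        b = np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2) - d ** 2
        # Unknowns: [x, y, r0] where r0 is the range to the reference anchor.
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[:2]

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    target = np.array([3.0, 4.0])
    ranges = np.linalg.norm(anchors - target, axis=1)
    tdoas = (ranges[1:] - ranges[0]) / C              # noiseless TDOAs for the demo
    print(tdoa_lls(anchors, tdoas))                   # ~ [3.0, 4.0]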
Abstract:
Clock synchronization on the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods were developed for the more relaxed needs of network operation, and their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization, we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Our method to evaluate the synchronization accuracy is inspired by signal processing algorithms and relies on fine-grained time information. The method is able to calculate the clock offset and skew between devices with nanosecond accuracy in real time. It was implemented using software defined radio technology. We demonstrate that GPS-based synchronization suffers from a remaining clock offset in the range of a few hundred nanoseconds, but that the clock skew is negligible. Finally, we determine a corresponding lower bound on the expected positioning error.
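As an illustration of offset and skew estimation (not the paper's signal-processing method), the sketch below fits a first-order clock model to the timestamps two devices would record for the same sequence of reference events; all numbers are synthetic.

    import numpy as np

    def offset_and_skew(t_a, t_b):
        """Estimate relative clock skew and offset between two receivers.

        t_a, t_b: timestamps (seconds) at which the two devices observed the
        same reference events. A first-order clock model
        t_b = (1 + skew) * t_a + offset is fitted by least squares.
        """
        slope, offset = np.polyfit(t_a, t_b, 1)
        return slope - 1.0, offset

    # Synthetic example: 100 ppb skew, 250 ns offset, 5 ns timestamp jitter.
    rng = np.random.default_rng(0)
    t_a = np.arange(0.0, 10.0, 0.1)
    t_b = (1 + 100e-9) * t_a + 250e-9 + rng.normal(0, 5e-9, t_a.size)
    skew, offset = offset_and_skew(t_a, t_b)
    print(f"skew ~ {skew*1e9:.0f} ppb, offset ~ {offset*1e9:.0f} ns")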
Abstract:
In this work, we present a passive location monitoring system for IEEE 802.15.4 signal emitters. The system adopts software defined radio techniques to passively overhear IEEE 802.15.4 packets and to extract power information from the baseband signals. In our system, we provide a new ranging model based on nonlinear regression. After obtaining distance information, a Weighted Centroid (WC) algorithm is adopted to locate users. In WC, each weight is inversely proportional to the nth power of the propagation distance, and the exponent n is obtained from initial measurements. We evaluate our system in a 16 m x 18 m area with complex indoor propagation conditions and achieve a median error of 2.1 m with only 4 anchor nodes.
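The weighted centroid step can be sketched as follows; the anchor layout, the ranged distances and the exponent are made up, and the distances are assumed to come from a ranging model such as the one described above.

    import numpy as np

    def weighted_centroid(anchors, distances, n=2.0):
        """Weighted-centroid position estimate: each anchor is weighted by
        1 / d**n, so nearer anchors dominate. The exponent n would be
        calibrated from initial measurements."""
        anchors = np.asarray(anchors, float)
        w = 1.0 / np.power(np.asarray(distances, float), n)
        return (w[:, None] * anchors).sum(axis=0) / w.sum()

    anchors = [(0, 0), (16, 0), (0, 18), (16, 18)]   # 4 anchors, metres
    distances = [5.0, 13.0, 14.0, 18.5]              # ranged distances
    print(weighted_centroid(anchors, distances, n=2.0))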
Abstract:
Software dependencies play a vital role in program comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests, yet they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be used efficiently by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
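A minimal sketch of the underlying idea, with hypothetical domain artefacts and entities (none of this data comes from the case study): entity pairs that frequently co-occur in domain-level artefacts are predicted to be coupled in the implementation.

    from collections import Counter
    from itertools import combinations

    # Hypothetical domain-level artefacts (e.g. business processes or screens)
    # and the domain entities each one touches.
    artefacts = {
        "create_order":  {"Customer", "Order", "Product"},
        "invoice_order": {"Order", "Invoice", "Customer"},
        "ship_order":    {"Order", "Shipment", "Product"},
        "register_user": {"Customer", "Account"},
    }

    cooccurrence = Counter()
    for entities in artefacts.values():
        for a, b in combinations(sorted(entities), 2):
            cooccurrence[(a, b)] += 1

    # Pairs that co-occur often are predicted to be coupled in the source code
    # and/or the database, without reading any code.
    for pair, count in cooccurrence.most_common(5):
        print(pair, count)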
Abstract:
Software architecture is the result of a design effort aimed at ensuring a certain set of quality attributes. As we show, quality requirements are commonly specified in practice but are rarely validated using automated techniques. In this paper we analyze and classify commonly specified quality requirements after interviewing professionals and running a survey. We report on the tools used to validate those requirements and comment on the obstacles practitioners encounter when performing such validation (e.g., insufficient tool support, poor understanding of users' needs). Finally, we discuss opportunities for increasing the adoption of automated tools based on the information we collected during our study (e.g., using a business-readable notation for expressing quality requirements, or increasing awareness by monitoring non-functional aspects of a system).
Abstract:
The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as an input to a parsing function, we use a parsing context that provides access to the stream and possibly to other context-sensitive information. At the same time, we keep the context-free formalism, so a grammar definition stays simple without mind-blowing context-sensitive rules. We show that our approach can be used for various purposes, such as indent-sensitive parsing, high-precision island parsing, or parsing of XML with arbitrary element names. We demonstrate our solution with PetitParser, a parsing-expression-grammar based, top-down, parser combinator framework written in Smalltalk.
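PetitParser is a Smalltalk framework; the following Python-flavoured sketch only illustrates the core idea of handing a parsing context (stream position plus extra state, here an indentation stack) to a parsing function instead of a bare input stream.

    class Context:
        """Parsing context: the input stream plus extra context-sensitive state
        (here, a stack of indentation levels)."""
        def __init__(self, text):
            self.text = text
            self.pos = 0
            self.indents = [0]

    def parse_indent(ctx):
        """Succeed only if the current line is indented deeper than the
        enclosing block, then push the new indentation level."""
        start = ctx.pos
        while ctx.pos < len(ctx.text) and ctx.text[ctx.pos] == " ":
            ctx.pos += 1
        level = ctx.pos - start
        if level > ctx.indents[-1]:
            ctx.indents.append(level)
            return True
        ctx.pos = start          # backtrack on failure
        return False

    ctx = Context("    body of an indented block")
    print(parse_indent(ctx), ctx.indents)   # True [0, 4]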
Abstract:
Software developers are often unsure of the exact name of the API method they need to use to invoke the desired behavior. Most state-of-the-art documentation browsers present API artefacts in alphabetical order. Albeit easy to implement, alphabetical order does not help much: if the developer knew the name of the required method, they could have just searched for it in the first place. In a context where multiple projects use the same API and their source code is available, we can improve the API presentation by organizing the elements in the order in which they are most likely to be used by the developer. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, and this data is then used to improve tools. We present a preliminary study on the potential of this approach to improve the API presentation by reducing the time it takes to find the method that implements a given feature. We also briefly present our experience with two proof-of-concept tools implemented for Smalltalk and Java.
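A minimal sketch of the frequency-based ordering, with a made-up two-file corpus and a crude regular-expression call-site extractor standing in for real ecosystem analysis:

    from collections import Counter
    import re

    # Hypothetical source files from other projects in the same ecosystem.
    corpus = [
        "stream.collect(each); stream.detect(pred); stream.collect(other)",
        "list.collect(block); list.size(); list.detect(pred)",
    ]

    usage = Counter()
    for source in corpus:
        usage.update(re.findall(r"\.(\w+)\(", source))   # crude call-site extraction

    api_methods = ["asArray", "collect", "detect", "inject", "size"]
    # Present the API ordered by how often the ecosystem actually uses each
    # method, falling back to alphabetical order for unused ones.
    ranked = sorted(api_methods, key=lambda m: (-usage[m], m))
    print(ranked)   # ['collect', 'detect', 'size', 'asArray', 'inject']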
Abstract:
Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously, in order to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard libraries, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easy to identify. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
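A minimal sketch of the augmentation idea, with invented message-send statistics standing in for the mined ecosystem data: candidate types are ranked by how often the ecosystem sends the messages observed on the variable.

    from collections import defaultdict

    # Illustrative ecosystem statistics: type -> {message: frequency}.
    ecosystem = {
        "OrderedCollection": {"add:": 900, "do:": 700, "size": 650, "removeFirst": 120},
        "Dictionary":        {"at:put:": 800, "at:": 760, "keys": 300, "do:": 240},
        "Stream":            {"nextPut:": 400, "next": 380, "atEnd": 300},
    }

    def rank_types(observed_messages):
        """Score each candidate type by the ecosystem frequency of the messages
        actually sent to the variable in the system under analysis."""
        scores = defaultdict(int)
        for type_name, freqs in ecosystem.items():
            for msg in observed_messages:
                scores[type_name] += freqs.get(msg, 0)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # Messages sent to an instance variable in the analysed project:
    print(rank_types({"add:", "do:", "size"}))   # OrderedCollection ranks first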
Abstract:
The finite depth of field of a real camera can be used to estimate the depth structure of a scene. The distance of an object from the plane in focus determines the size of the defocus blur, while the shape of the blur depends on the shape of the aperture. The blur shape can be designed by masking the main lens aperture; in fact, aperture shapes different from the standard circular aperture improve the accuracy of depth estimation from defocus blur. We introduce an intuitive criterion to design aperture patterns for depth from defocus. The criterion is independent of any specific depth estimation algorithm. We formulate our design criterion by imposing constraints directly in the data domain and optimize the amount of depth information carried by blurred images. Our criterion is a quadratic function of the aperture transmission values; as such, it can be evaluated numerically to estimate optimized aperture patterns quickly. The proposed mask optimization procedure is applicable to different depth estimation scenarios. We use it for depth estimation from two images with different focus settings, for depth estimation from two images with different aperture shapes, as well as for depth estimation from a single coded-aperture image. In this work, we show masks obtained with this new evaluation criterion and test their depth discrimination capability using a state-of-the-art depth estimation algorithm.
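Because the criterion is a quadratic form in the transmission values, candidate masks are cheap to screen; the sketch below illustrates this with a random symmetric matrix as a placeholder for the actual criterion matrix, which in the paper would be derived from the data-domain constraints.

    import numpy as np

    rng = np.random.default_rng(1)
    K = 16                      # number of aperture cells (4x4 mask, illustrative)

    # Placeholder for the quadratic form of the design criterion; here it is
    # just a random symmetric matrix to make the sketch runnable.
    Q = rng.normal(size=(K, K))
    Q = Q + Q.T

    def criterion(mask):
        """Quadratic design criterion as a function of transmission values."""
        a = mask.ravel().astype(float)
        return a @ Q @ a

    # Because the criterion is cheap to evaluate, many candidate binary masks
    # can be screened quickly and the best one kept.
    masks = rng.integers(0, 2, size=(5000, K))
    best = max(masks, key=criterion)
    print(best.reshape(4, 4), criterion(best))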
Abstract:
Indoor localization systems are becoming increasingly interesting to researchers because of attractive business cases in various application fields. A WiFi-based passive localization system can provide user location information to third-party providers of positioning services. However, indoor localization techniques are prone to multipath and Non-Line-Of-Sight (NLOS) propagation, which lead to significant performance degradation. To overcome these problems, we provide a passive localization system for WiFi targets with several improved localization algorithms. Using Software Defined Radio (SDR) techniques, we extract Channel Impulse Response (CIR) information at the physical layer. The CIR is then used to mitigate the multipath fading problem. We propose a Nonlinear Regression (NLR) method to relate the filtered power information to propagation distance, which significantly improves the ranging accuracy compared to the commonly used log-distance path loss model. To mitigate the influence of ranging errors, a new trilateration algorithm is designed by combining Weighted Centroid and Constrained Weighted Least Square (WC-CWLS) algorithms. Experiment results show that our algorithm is robust against ranging errors and outperforms both the linear least square algorithm and the weighted centroid algorithm.
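The NLR ranging step can be sketched as follows; the exponential model form, the calibration numbers and the query power are assumptions for illustration only, not the model fitted in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def nlr_model(power_dbm, a, b):
        """Illustrative nonlinear regression model mapping filtered received
        power (dBm) to propagation distance (m); the actual model form and
        coefficients would come from calibration measurements."""
        return a * np.exp(b * power_dbm)

    # Calibration data: received power at known distances (made-up numbers).
    power = np.array([-40.0, -48.0, -55.0, -61.0, -66.0, -70.0])
    dist = np.array([1.0, 2.0, 4.0, 7.0, 10.0, 14.0])

    params, _ = curve_fit(nlr_model, power, dist, p0=(0.01, -0.1), maxfev=10000)
    print("estimated distance at -58 dBm:", nlr_model(-58.0, *params))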
Abstract:
The unprecedented success of social networking sites (SNSs) has recently been overshadowed by concerns about privacy risks. As SNS users grow weary of privacy breaches and develop distrust, they may restrict or even terminate their platform activities. In the long run, these developments endanger SNS platforms' financial viability and undermine their ability to create individual and social value. By applying a justice perspective, this study aims to understand the means at the disposal of SNS providers to address the privacy concerns and trusting beliefs of their users, two important determinants of user participation on SNSs. Considering that SNSs have a global appeal, empirical tests assess the effectiveness of justice measures for three culturally distinct countries: Germany, Russia and Morocco. The results indicate that these measures are particularly suited to addressing the trusting beliefs of the SNS audience. Specifically, in all examined countries, procedural justice and the awareness dimension of informational justice improve perceptions of trust in the SNS provider. Privacy concerns, however, are not as easy to manage, because the impact of justice-based measures on privacy concerns is not universal. Beyond its theoretical value, this research offers valuable practical insights into the use of justice-based measures to promote trust and mitigate privacy concerns in a cross-cultural setting.