23 results for Digital communication models


Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
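The firing discipline described in this abstract — a node fires only when its input queues hold enough tokens, and nodes communicate solely through those queues — can be illustrated in plain Python. This is a minimal sketch with a naive dynamic scheduler, not RVC-CAL or the thesis' quasi-static approach; the `Actor` class, its firing rule, and the toy two-node graph are assumptions made for the example.

```python
from collections import deque

class Actor:
    """A dataflow node: fires only when enough tokens sit on its input queues."""
    def __init__(self, name, inputs, outputs, rule, action):
        self.name = name
        self.inputs = inputs      # list of deques acting as input queues
        self.outputs = outputs    # list of deques acting as output queues
        self.rule = rule          # tokens required per input queue (the firing rule)
        self.action = action      # consumed tokens -> produced tokens

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.rule))

    def fire(self):
        tokens = [q.popleft()
                  for q, n in zip(self.inputs, self.rule)
                  for _ in range(n)]
        for q, t in zip(self.outputs, self.action(tokens)):
            q.append(t)

# Toy graph: q0 -> double -> q1 -> inc -> q2
q0, q1, q2 = deque([1, 2, 3]), deque(), deque()
double = Actor("double", [q0], [q1], [1], lambda ts: [ts[0] * 2])
inc = Actor("inc", [q1], [q2], [1], lambda ts: [ts[0] + 1])

# Naive dynamic scheduler: repeatedly fire any actor whose firing rule is met.
# A quasi-static scheduler would instead pre-compute most of this ordering.
actors = [double, inc]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()

print(list(q2))  # -> [3, 5, 7]
```

Because the queues are the only communication channel, the two actors could equally well be fired from different threads; the explicit dependencies are what make the parallelism visible to a scheduler.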

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates how mobile technology usage could help to bring information and communication technologies (ICT) to people in developing countries. Some people in developing countries have access to ICT while others do not. This digital divide is present in many developing countries where computers and the Internet are difficult to access. The Internet provides information that can increase productivity and enable markets to function more efficiently; it reduces information travel time and provides more efficient ways for firms and workers to operate. ICT and the Internet can provide opportunities for economic growth and productivity in developing countries, which indicates that it is very important to bridge the digital divide and increase Internet connections in developing countries. The purpose of this thesis is to investigate how mobile technology and mobile services can help to bridge the digital divide in developing countries. The theoretical background of this thesis consists of a collection of articles and reports, gathered by going through literature on the digital divide, mobile technology and mobile application development. The empirical research was conducted by sending a questionnaire by email to a selection of application developers located in developing countries. The questionnaire’s purpose was to gather qualitative information concerning mobile application development in developing countries. The main result of this thesis suggests that mobile phones and mobile technology usage can help to bridge the digital divide in developing countries. This study finds that mobile technology provides one of the best tools for bridging the digital divide, as it can bring affordable ICT to people who do not have access to computers.
Smartphones can provide Internet connections, mobile services and mobile applications to a rapidly growing number of mobile phone users in developing countries. New low-cost smartphones give people in developing countries access to information through the Internet. Mobile technology has the potential to help bridge the digital divide in developing countries, where a vast number of people own mobile phones.

Relevance:

30.00%

Publisher:

Abstract:

Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as other fields of science including linguistics and the behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention, as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. This is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support, which are the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representing the meaning of linguistic terms, computing with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis.
We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language, and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide the background, context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. The uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty’s analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly the integration of peer review into the evaluation of R&D outputs. In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference.
Finally, the last case study is from the area of the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of data quality in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented that reflects the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the practical applications considered in the second part of the thesis.
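To make the idea of retranslating a mathematical output into natural language concrete, the toy sketch below maps a crisp model output back to a linguistic term by maximum membership over triangular fuzzy sets. The scale, the term names and the max-membership rule are illustrative assumptions only; actual linguistic approximation, as studied in the thesis, compares whole fuzzy outputs against the term set rather than a single crisp value.

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy number (a, b, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical linguistic scale on [0, 1]; endpoints are nudged so the
# border terms reach full membership at 0 and 1.
terms = {
    "poor":      triangular(-0.01, 0.0, 0.5),
    "average":   triangular(0.0, 0.5, 1.0),
    "excellent": triangular(0.5, 1.0, 1.01),
}

def linguistic_approximation(x):
    """Retranslate a numeric model output into the best-fitting term."""
    return max(terms, key=lambda t: terms[t](x))

print(linguistic_approximation(0.72))  # -> average
```

Note how a score of 0.72 is reported as "average" rather than "excellent" (membership 0.56 vs. 0.44): the choice of term set and of the approximation rule directly shapes what the user is told, which is exactly the loss-of-meaning concern raised above.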

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to explore a different kind of digital content management model and to propose a process for properly managing the content on an organization’s website. This process also briefly defines the roles and responsibilities of the different actors involved. In order to create this process, the thesis has been divided into two parts. First, the theoretical analysis identifies the two main content management models: content management standardization and content management adaptation, also called content management localization. Each of these models has been analyzed through a SWOT model in order to identify its particularities and which of them is the best option according to particular organizational objectives. In the empirical part, this thesis measures organizational website performance by comparing two main sources of data. On the one hand, the international website is analyzed in order to identify the results of content management standardization. On the other hand, content management adaptation, also called content management localization, is analyzed by looking at the key measures of the Dutch page from the same organization. The resulting output is a process model for localization as well as recommendations on how to proceed when creating a digital content management strategy. However, more research is recommended to provide more comprehensive managerial solutions.

Relevance:

30.00%

Publisher:

Abstract:

Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage is an indicator of the high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that we can separate the modelling of one node from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B.
Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. Our modelling and proving in this thesis is, for the most part, tool-based. This demonstrates the maturity of formal methods as well as their increased reliability, and thus advocates for their more widespread usage in the future.
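The thesis models its streaming adaptation of BitTorrent piece selection in Event-B; purely as an illustration of the underlying idea, the Python sketch below shows one plausible windowed policy. Pieces just ahead of the playhead are fetched in order so playback never stalls, while pieces further out fall back to BitTorrent-style rarest-first. The function name, the window size and the exact policy are assumptions for this example, not the thesis' verified algorithm.

```python
import random

def select_piece(missing, playback_pos, rarity, window=8):
    """Pick the next piece to request for on-demand streaming.

    missing      -- set of piece indices we still need
    playback_pos -- index of the piece the player will consume next
    rarity       -- map: piece index -> number of peers holding it
    window       -- pieces this close to the playhead are fetched in order
    """
    # High priority: the sliding window just ahead of playback, in order,
    # so the player always has the next pieces available.
    urgent = [p for p in missing if playback_pos <= p < playback_pos + window]
    if urgent:
        return min(urgent)
    # Otherwise fall back to rarest-first (as in plain BitTorrent) to keep
    # rare pieces replicated across the swarm.
    rest = [p for p in missing if p >= playback_pos]
    if not rest:
        return None
    least = min(rarity.get(p, 0) for p in rest)
    return random.choice([p for p in rest if rarity.get(p, 0) == least])

missing = {5, 6, 20, 30}
rarity = {20: 3, 30: 1}
print(select_piece(missing, playback_pos=4, rarity=rarity))  # -> 5
```

The tension this policy balances is exactly what makes the adaptation non-trivial: strict in-order download maximises playback smoothness but destroys the piece diversity that keeps a peer-to-peer swarm healthy.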

Relevance:

30.00%

Publisher:

Abstract:

User experience is a crucial element in interactive storytelling, and as such it is important to recognize the different aspects of a positive user experience in an interactive story. Towards that goal, in the first half of this thesis, we go through the different elements that make up the user experience, with a strong focus on agency. Agency can be understood as the user’s ability to affect the story, or the world in which the story is told, through interesting and satisfying choices. The freedoms granted by agency are not completely compatible with traditional storytelling, and as such we also go through some of the issues of agency-centric design philosophies and explore alternative schools of thought. The core purpose of this thesis is to determine the most important aspects of agency with regard to a positive user experience, and to attempt to find ways for authors to improve the overall quality of user experience in interactive stories. The latter half of this thesis deals with the research conducted on this matter. This research was carried out by analyzing data from an online survey coupled with data gathered by the interactive storytelling system specifically made for this research (Regicide). The most important aspects of this research deal with influencing perceived agency, facilitating an illusion of agency in different ways, and comparing user experiences in these different test environments. The most important findings of this research include the importance of context-controlled and focused agency, of the settings in which the agency takes place, and of ensuring user competency within an interactive storytelling system. Another essential conclusion boils down to communication between the user and the system: the goal of influencing perceived agency should primarily be to ensure that the user is aware of all the theoretical agency they possess.

Relevance:

30.00%

Publisher:

Abstract:

Current research describes digital innovation largely as similar to product innovation. Digital innovation is seen as an object of coherent activities; in reality, however, digital innovation results from the convergence of variant technologies and of the related actors with versatile business goals. To account for the dynamic nature of digital innovation, this study applies a service perspective to digital innovation. The purpose of the study is to understand how digital innovation emerges within a service ecosystem for autonomous shipping. The sub-objectives of this study are to 1) identify what factors motivate and demotivate actors to integrate resources for autonomous shipping, 2) explore the key technology areas to be integrated to realise the autonomous shipping concept, and 3) suggest how the technology areas are combined for mutual value creation within a service ecosystem for autonomous shipping. Insights from autonomous driving were also included. This study draws on the literatures on service innovation and service-dominant logic. The research was conducted as a qualitative exploratory case study. The data comprise interviews with 18 marine and automotive industry experts, 4 workshops, 4 seminars, and observations, as well as various secondary data sources. The findings revealed that the key actors have versatile motivations regarding autonomous shipping, varying from opportunities for single applications to occupying a central role in an autonomous technology platform. Thus, autonomous shipping can be seen as an umbrella concept comprising multiple levels. In technical terms, the development of the autonomous shipping concept is largely based on combining existing technology solutions, which are gradually integrated into more systemic entities comprising areas of the autonomous shipping concept. This study argues that a service perspective embraces the inherently complex and dynamic nature of digital innovation.
This is captured in the developed research framework, which describes digital innovation emerging on different levels of interaction: 1. strategic relationships for new solutions, 2. new local networks for technology platforms, and 3. global networks for new markets. The framework shows how the business models and motivations of digital innovation actors feed the emergence of digital innovation in overlapping service ecosystems that together comprise an innovation ecosystem for autonomous technologies. Digital innovation managers will benefit from seeing their businesses as part of a larger ecosystem of value co-creating actors. In orchestrating digital innovation within a service ecosystem, it is suggested that managers consider the resources, roles and institutions within the ecosystem. Finally, as autonomous shipping is in its infancy, the topic provides a number of interesting avenues for future research.