773 results for Data transmission systems.
Abstract:
Deploying wireless networks in networked control systems (NCSs) has become increasingly popular in recent years. As a typical type of real-time control system, an NCS is sensitive to long and nondeterministic time delays and to packet losses. However, the nature of the wireless channel has the potential to degrade the performance of NCS networks in many respects, particularly with regard to time delay and packet losses. Transport layer protocols could play an important role in providing a transmission service that is both reliable and fast enough to fulfill an NCS's real-time transmission requirements. Unfortunately, none of the existing transport protocols, including the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), was designed for real-time control applications. Moreover, periodic data and sporadic data are two types of real-time data traffic with different priorities in an NCS. Owing to the lack of support for prioritized transmission, the real-time performance of periodic and sporadic data in an NCS network is often degraded significantly, particularly under congested network conditions. To address these problems, a new transport layer protocol called the Reliable Real-Time Transport Protocol (RRTTP) is proposed in this thesis. As a UDP-based protocol, RRTTP inherits UDP's simplicity and fast transmission. To improve reliability, retransmission and acknowledgement mechanisms are designed in RRTTP to compensate for packet losses. They avoid unnecessary retransmission of out-of-date packets in NCSs, make collisions unlikely, and keep transmission delay small. Moreover, a prioritized transmission mechanism is also designed in RRTTP to improve the real-time performance of NCS networks under congested traffic conditions. Furthermore, the proposed RRTTP is implemented in Network Simulator 2 for comprehensive simulations. The simulation results demonstrate that RRTTP outperforms TCP and UDP in terms of real-time transmission in an NCS over wireless networks.
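The abstract does not specify RRTTP's packet format or algorithms; the following is only a minimal, hypothetical Python sketch of the two ideas it describes, namely priority-tagged sending and dropping retransmissions of packets that have become too old to be useful to the controller. The header layout, field names and freshness deadline are assumptions, not the thesis's specification.

```python
# Hypothetical sketch of two RRTTP-like mechanisms: (1) a priority field so
# sporadic (alarm) packets pre-empt periodic samples, and (2) skipping the
# retransmission of packets that are already out of date. Packet layout and
# the freshness deadline are illustrative assumptions.
import heapq
import socket
import struct
import time

HEADER = struct.Struct("!BIQ")  # priority (1 B), sequence no. (4 B), timestamp in ns (8 B)

class RrttpLikeSender:
    def __init__(self, addr, freshness_ms=50):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.addr = addr
        self.freshness = freshness_ms / 1000.0
        self.seq = 0
        self.unacked = {}   # seq -> (send_time, payload)
        self.queue = []     # (priority, seq, payload); lower value = higher priority

    def enqueue(self, payload, priority):
        # Sporadic data would use priority 0, periodic samples priority 1.
        heapq.heappush(self.queue, (priority, self.seq, payload))
        self.seq += 1

    def pump(self):
        # Transmit queued packets, highest priority first.
        while self.queue:
            priority, seq, payload = heapq.heappop(self.queue)
            header = HEADER.pack(priority, seq, time.time_ns())
            self.sock.sendto(header + payload, self.addr)
            self.unacked[seq] = (time.time(), payload)

    def handle_ack(self, seq):
        self.unacked.pop(seq, None)

    def retransmit_missing(self):
        # Only resend packets that are still fresh enough to be useful;
        # stale control data is dropped instead of being retransmitted.
        now = time.time()
        for seq, (sent_at, payload) in list(self.unacked.items()):
            del self.unacked[seq]
            if now - sent_at <= self.freshness:
                heapq.heappush(self.queue, (1, seq, payload))
```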
Abstract:
This project was a step forward in developing intrusion detection systems for distributed environments such as web services. It investigates a new detection approach based on so-called "taint-marking" techniques and introduces a theoretical framework along with its implementation in the Linux kernel.
Abstract:
Digital human modeling (DHM) systems have undergone significant development in recent years. They have gained steadily growing importance in the fields of ergonomic workplace design, product development, product usability, ergonomic research, ergonomic education, audiovisual marketing and the entertainment industry. They help to design ergonomic products as well as healthy and safe socio-technical work systems. In the domain of scientific DHM systems, no industry-specific standard interfaces are defined that could facilitate the exchange of 3D solid body data, anthropometric data or motion data. The focus of this article is to provide an overview of the requirements for reliable data exchange between different DHM systems in order to identify suitable file formats. Examples from the literature are discussed in detail. Methods: As a first step, a literature review is conducted on existing studies and file formats for exchanging data between different DHM systems. The file formats found can be structured into different categories: static 3D solid body data exchange, anthropometric data exchange, motion data exchange and comprehensive data exchange. Each file format is discussed, and its advantages and disadvantages for the DHM context are pointed out. Case studies are furthermore presented which show first approaches to exchanging data between DHM systems. Lessons learnt are briefly summarized. Results: A selection of suitable file formats for data exchange between DHM systems is determined from the literature review.
Abstract:
Performance of urban transit systems may be quantified and assessed using transit capacity and productive capacity in planning, design and operational management activities. Bunker (4) defines important productive performance measures of an individual transit service and transit line, which are extended in this paper to quantify the efficiency and operating fashion of transit services and lines. Comparison of a hypothetical bus line's operation during a morning peak hour and a daytime hour demonstrates the usefulness to the operator of productiveness efficiency, passenger transmission efficiency, passenger churn and average proportion of line length traveled in understanding their services' and lines' productive performance, operating characteristics and quality of service. Productiveness efficiency can flag potential pass-up activity under high load conditions, as well as ineffective resource deployment. Proportion of line length traveled can directly measure operating fashion. These measures can be used to compare lines/routes and, within a given line, various operating scenarios and time horizons in order to target improvements. The next research stage is to investigate within-line variation using smart card passenger data and field observation of pass-ups. Insights will be used to further develop practical guidance for operators.
Abstract:
Analytically or computationally intractable likelihood functions can arise in complex statistical inference problems, making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been applied predominantly to parameter estimation problems and less to model choice problems, owing to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28), in which the posterior mean of the model parameters, estimated through regression, formed the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step, and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity and thereby ensure thorough exploration of the model space. The algorithm is applied to a validation example to demonstrate its robustness across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates the utility of the algorithm in inferring which transmission models are preferred for the pathogens.
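As a rough illustration of the idea (not the paper's actual algorithm, which additionally uses stepwise selection and a reversible jump MCMC step), the sketch below follows the semi-automatic ABC spirit: a pilot run fits a multinomial logistic regression of the model indicator on raw summaries, and the fitted model probabilities then serve as low-dimensional summary statistics in a simple rejection-ABC step. The toy models, priors and tolerance are illustrative assumptions.

```python
# Minimal sketch: model probabilities estimated by logistic regression become
# the summary statistics for ABC model choice between two toy count models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(model, n=50):
    """Two toy 'transmission' models: Poisson vs. negative-binomial counts."""
    if model == 0:
        data = rng.poisson(lam=3.0, size=n)
    else:
        data = rng.negative_binomial(n=2, p=0.4, size=n)
    return np.array([data.mean(), data.var(), (data == 0).mean()])  # raw summaries

# Pilot run: learn model probabilities as a function of the raw summaries.
models = rng.integers(0, 2, size=5000)                 # uniform prior over models
pilot_stats = np.vstack([simulate(m) for m in models])
clf = LogisticRegression(max_iter=1000).fit(pilot_stats, models)

# Rejection ABC using the fitted probabilities as summary statistics.
observed = simulate(1)                                 # pretend these are the data
s_obs = clf.predict_proba(observed.reshape(1, -1))

accepted = []
for _ in range(20000):
    m = rng.integers(0, 2)
    s_sim = clf.predict_proba(simulate(m).reshape(1, -1))
    if np.linalg.norm(s_sim - s_obs) < 0.05:           # tolerance is an assumption
        accepted.append(m)

post = np.bincount(accepted, minlength=2) / max(len(accepted), 1)
print("approximate posterior model probabilities:", post)
```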
Abstract:
This thesis presents a novel program parallelization technique incorporating dynamic and static scheduling. It utilizes a problem-specific pattern developed from prior knowledge of the targeted problem abstraction. Suitable for solving complex parallelization problems such as memory-constrained, data-intensive all-to-all comparison, the technique delivers more robust and faster task scheduling than state-of-the-art techniques. The technique achieves good performance in data-intensive bioinformatics applications.
Abstract:
Focus groups are a popular qualitative research method for information systems researchers. However, compared with the abundance of research articles and handbooks on planning and conducting focus groups, there is surprisingly little research on how to analyse focus group data. Moreover, the few articles that specifically address focus group analysis are all in fields other than information systems and offer little specific guidance for information systems researchers. Further, even the studies that exist in other fields do not provide a systematic and integrated procedure for analysing both focus group ‘content’ and ‘interaction’ data. As the focus group is a valuable method for answering the research questions of many IS studies (in business, government and society contexts), we believe that more attention should be paid to this method in IS research. This paper offers a systematic and integrated procedure for qualitative focus group data analysis in information systems research.
Abstract:
Distributed systems are widely used for solving large-scale and data-intensive computing problems, including all-to-all comparison (ATAC) problems. However, when used for ATAC problems, existing computational frameworks such as Hadoop focus on load balancing for allocating comparison tasks, without careful consideration of data distribution and storage usage. While Hadoop-based solutions provide users with simplicity of implementation, their inherent MapReduce computing pattern does not match the ATAC pattern. This leads to load imbalances and poor data locality when Hadoop's data distribution strategy is used for ATAC problems. Here we present a data distribution strategy which considers data locality, load balancing and storage savings for ATAC computing problems in homogeneous distributed systems. A simulated annealing algorithm is developed for data distribution and task scheduling. Experimental results show a significant performance improvement for our approach over Hadoop-based solutions.
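The paper's exact formulation is not reproduced in the abstract; the sketch below illustrates, under assumed cost weights, cooling schedule and problem size, how a simulated-annealing search of the kind described can assign comparison tasks to nodes so that data locality holds by construction (each node stores exactly the datasets its tasks need) while storage replication and load imbalance are traded off in the cost function.

```python
# Simulated-annealing sketch for ATAC data distribution and task scheduling.
import itertools
import math
import random

random.seed(1)
NUM_DATASETS, NUM_NODES = 12, 4
TASKS = list(itertools.combinations(range(NUM_DATASETS), 2))  # all-to-all pairs

def cost(assignment):
    loads = [0] * NUM_NODES
    stored = [set() for _ in range(NUM_NODES)]
    for (i, j), node in zip(TASKS, assignment):
        loads[node] += 1
        stored[node].update((i, j))
    storage = sum(len(s) for s in stored)               # total dataset copies
    imbalance = max(loads) - min(loads)                 # spread of comparison load
    return storage + 5 * imbalance                      # weight is an assumption

def anneal(steps=20000, temp=5.0, cooling=0.9995):
    state = [random.randrange(NUM_NODES) for _ in TASKS]
    best, best_cost = state[:], cost(state)
    current_cost = best_cost
    for _ in range(steps):
        cand = state[:]
        cand[random.randrange(len(TASKS))] = random.randrange(NUM_NODES)
        c = cost(cand)
        # Accept improvements always, worse moves with a temperature-dependent probability.
        if c < current_cost or random.random() < math.exp((current_cost - c) / temp):
            state, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= cooling
    return best, best_cost

assignment, total_cost = anneal()
print("best cost (storage + weighted imbalance):", total_cost)
```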
Abstract:
While enhanced cybersecurity options, mainly based around cryptographic functions, are needed, the overall speed and performance of a healthcare network may take priority in many circumstances. As such, the security and performance metrics of those cryptographic functions in their embedded context need to be understood, and understanding those metrics has been the main aim of this research. This research reports on an implementation of one network security technology, Internet Protocol Security (IPSec), to assess its security performance. It simulates sensitive healthcare information being transferred over networks and then measures data delivery times with selected security parameters for various communication scenarios on Linux-based and Windows-based systems. Based on our test results, this research has revealed a number of network security metrics that need to be considered when designing and managing network security for healthcare-specific or non-healthcare-specific systems from security, performance and manageability perspectives. This research proposes practical recommendations, based on the test results, for the effective selection of network security controls to achieve an appropriate balance between network security and performance.
Abstract:
The world has experienced a large increase in the amount of available data, which requires better and more specialized tools for data storage, retrieval and information privacy. Recently, Electronic Health Record (EHR) systems have emerged to fulfill this need in health systems. They play an important role in medicine by granting access to information that can be used in medical diagnosis. Traditional systems focus on the storage and retrieval of this information, usually leaving issues related to privacy in the background. Doctors and patients may have different objectives when using an EHR system: patients try to restrict sensitive information in their medical records to avoid its misuse, while doctors want to see as much information as possible to ensure a correct diagnosis. One solution to this dilemma is the Accountable e-Health model, an access protocol model based on the Information Accountability Protocol. In this model, patients are warned when doctors access their restricted data, while authenticated doctors are given non-restrictive access. In this work we use FluxMED, an EHR system, and augment it with aspects of the Information Accountability Protocol to address these issues. The implementation of the Information Accountability Framework (IAF) in FluxMED provides ways for both patients and physicians to have their privacy and access needs met. Issues related to storage and data security are handled by FluxMED, which contains mechanisms to ensure security and data integrity. The effort required to develop a platform for the management of medical information is mitigated by FluxMED's workflow-based architecture: the system is flexible enough to allow the type and amount of information to be altered without the need to change its source code.
Abstract:
Earlier studies have shown that the speed of information transmission developed radically during the 19th century. The fast development was mainly due to the change from sailing ships and horse-drawn coaches to steamers and railways, as well as to the telegraph. The speed of information transmission has normally been measured by calculating the duration between writing and receiving a letter, or between an important event and the time when the news was published elsewhere. As overseas mail was generally carried by ships, the history of communications and maritime history are closely related. This study also brings a postal-historical aspect to the academic discussion. It adds a further new aspect as well: in business enterprises, information flows generally consisted of multiple transactions. Although fast one-way information was often crucial, e.g. news of a changing market situation, at least equally important was the possibility to react rapidly. To examine the development of business information transmission, the duration of mail transport has been measured with a systematic and commensurable method, using consecutive information circles per year as the principal tool of measurement. The study covers a period of six decades, several of the world's most important trade routes and different mail-carrying systems operated by merchant ships, sailing packets and several nations' steamship services. The main sources have been the sailing data of mail-carrying ships and the correspondence of several merchant houses in England. As the world's main trade routes had their specific historical backgrounds, with different businesses, interests and needs, the systems for information transmission did not develop similarly or simultaneously. It was a process lasting several decades, initiated by the idea of organizing sailings in a regular line system. The evolution generally proceeded as follows: originally there was a more or less irregular system, then a regular system and finally a more frequent regular system of mail services. The trend was from sail to steam, but both means of communication improved following the same scheme. Faster sailings alone did not radically improve the number of consecutive information circles per year if the communication was not frequent enough. Neither did improved frequency advance the circulation of information if the trip was very long or if the sailings overlapped instead of complementing each other. The speed of information transmission could be improved by speeding up the voyage itself (technological improvements, minimizing the waiting time at ports of call, etc.), but especially by organizing sailings so that the recipients had the possibility to reply to arriving mail without unnecessary delay. It took two to three decades before the mail-carrying shipping companies were able to organize their sailings in an optimal way. Strategic shortcuts across isthmuses (e.g. Panama, Suez), together with cooperation between steamships and railways, enabled the most effective improvements in global communications before the introduction of the telegraph.
Abstract:
This thesis introduced two novel reputation models to generate accurate item reputation scores using ratings data and the statistics of the dataset. It also presented an innovative method that incorporates reputation awareness in recommender systems by employing voting system methods to produce more accurate top-N item recommendations. Additionally, this thesis introduced a personalisation method for generating reputation scores based on users' interests, where a single item can have different reputation scores for different users. The personalised reputation scores are then used in the proposed reputation-aware recommender systems to enhance the recommendation quality.
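The abstract does not define the thesis's two reputation models; as a generic illustration only, the sketch below computes a shrinkage-style item reputation score from ratings data and a naive personalised variant that reweights ratings by an assumed user-similarity measure. Function names, the prior weight and the default similarity are assumptions, not the thesis's formulation.

```python
# Generic rating-based item reputation scores (illustrative only).
from collections import defaultdict

def item_reputation(ratings, prior_weight=10.0):
    """ratings: list of (user, item, score). Returns item -> reputation score.

    Items with few ratings are shrunk toward the global mean so that a single
    extreme rating cannot dominate the score.
    """
    global_mean = sum(s for _, _, s in ratings) / len(ratings)
    sums, counts = defaultdict(float), defaultdict(int)
    for _, item, score in ratings:
        sums[item] += score
        counts[item] += 1
    return {
        item: (sums[item] + prior_weight * global_mean) / (counts[item] + prior_weight)
        for item in sums
    }

def personalised_reputation(ratings, similarity, target_user, prior_weight=10.0):
    """similarity: dict (target_user, rater) -> weight in [0, 1].

    Ratings from users with interests similar to the target user count more,
    so the same item can receive different reputation scores for different users.
    """
    global_mean = sum(s for _, _, s in ratings) / len(ratings)
    sums, weights = defaultdict(float), defaultdict(float)
    for rater, item, score in ratings:
        w = similarity.get((target_user, rater), 0.5)   # default weight is an assumption
        sums[item] += w * score
        weights[item] += w
    return {
        item: (sums[item] + prior_weight * global_mean) / (weights[item] + prior_weight)
        for item in sums
    }
```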
Abstract:
Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance degrades even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed, along with an algorithmic implementation, to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
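As a hedged illustration of the dynamic, data-aware dispatching idea (not the paper's algorithm), the sketch below sends each comparison task only to a node that already stores both of its datasets and, among those, picks the node expected to finish earliest given its relative speed. Node speeds, data placements and unit task costs are assumptions.

```python
# Data-aware dispatching on a heterogeneous cluster: perfect data locality by
# construction, with load balancing based on each node's relative throughput.
import itertools

datasets = list(range(6))
tasks = list(itertools.combinations(datasets, 2))      # all-to-all comparisons
node_speed = {"n0": 1.0, "n1": 2.0, "n2": 4.0}         # relative throughputs (assumed)
placement = {                                          # node -> datasets it stores (assumed)
    "n0": {0, 1, 2, 3},
    "n1": {2, 3, 4, 5},
    "n2": {0, 1, 4, 5},
}

finish_time = {n: 0.0 for n in node_speed}
schedule = []
for i, j in tasks:
    eligible = [n for n, held in placement.items() if i in held and j in held]
    if not eligible:
        raise RuntimeError(f"no node holds both datasets {i} and {j}")
    # Pick the eligible node that would complete this unit-cost task earliest.
    node = min(eligible, key=lambda n: finish_time[n] + 1.0 / node_speed[n])
    finish_time[node] += 1.0 / node_speed[node]
    schedule.append(((i, j), node))

print("makespan:", max(finish_time.values()))
```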
Abstract:
This workshop aims at discussing alternative approaches to resolving the problem of health information fragmentation, which partially results from the difficulty complex health systems have in interacting semantically at the information level. In principle, we challenge the current paradigm of keeping medical records where they were created and discuss an alternative approach in which an individual's health data can be maintained by new entities whose sole responsibility is the sustainability of individual-centric health records. In particular, we will discuss the unique characteristics of the European health information landscape. This workshop is also a business meeting of the IMIA Working Group on Health Record Banking.
Abstract:
Vapor-liquid equilibrium data for the systems diisopropyl ether-n-heptane and diisopropyl ether-carbon tetrachloride have been reported at pressures of 760, 1520, and 2280 mm Hg. The systems form ideal mixtures over the pressure range studied.