75 results for End-to-side neurorrhaphy
Abstract:
The term “post-war violence” has been with us for much of the twentieth century, but the issue itself has existed for centuries. The study of violence in post-war societies has been explored by philosophers (Erasmus), statesmen (Sir Thomas More) and sociologists (Emile Durkheim). The cessation of war and the signing of peace accords do not always mean an end to the violence. This book examines in considerable detail the causes and purposes of post-conflict violence and argues that the features which constrain or encourage violence accumulate in such a manner as to create distinct and different types of post-war environments...
Abstract:
The continuing need for governments to radically improve the delivery of public services has led to a new, holistic government reform strategy labeled “Transformational Government” that strongly emphasizes customer-centricity. Attention has turned to online portals as a cost-effective front-end for delivering services and engaging customers, and to the corresponding organizational approaches for the back-end that decouple the service interface from departmental structures. The research presented in this paper makes three contributions. Firstly, a systematic literature review of approaches to the evaluation of online portal models in the public sector is presented. Secondly, the findings of a usability study comparing the online presences of the Queensland Government, the UK Government and the South Australian Government are reported, and the relative strengths and weaknesses of the different approaches are discussed. Thirdly, the limitations of the usability study in the context of a broader “Transformational Government” approach are identified, and service bundling is suggested as an innovative solution to further improve online service delivery.
Abstract:
Deploying wireless networks in networked control systems (NCSs) has become increasingly popular in recent years. As a typical type of real-time control system, an NCS is sensitive to long and nondeterministic time delays and to packet losses, yet the nature of the wireless channel can degrade the performance of NCS networks in precisely these respects. Transport layer protocols could play an important role in providing a transmission service that is both reliable and fast enough to meet an NCS's real-time requirements. Unfortunately, none of the existing transport protocols, including the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), was designed for real-time control applications. Moreover, periodic data and sporadic data are two types of real-time traffic with different priorities in an NCS. Owing to the lack of support for prioritized transmission, the real-time performance for periodic and sporadic data in an NCS network is often degraded significantly, particularly under congested network conditions. To address these problems, a new transport layer protocol called the Reliable Real-Time Transport Protocol (RRTTP) is proposed in this thesis. As a UDP-based protocol, RRTTP inherits UDP's simplicity and fast transmission. To improve reliability, retransmission and acknowledgement mechanisms are designed in RRTTP to compensate for packet losses; they avoid unnecessary retransmission of out-of-date packets, make collisions unlikely, and keep transmission delay small. A prioritized transmission mechanism is also designed in RRTTP to improve the real-time performance of NCS networks under congested traffic conditions. The proposed RRTTP is implemented in Network Simulator 2 for comprehensive simulations, and the results demonstrate that RRTTP outperforms TCP and UDP in terms of real-time transmission in an NCS over wireless networks.
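As a rough illustration of the kind of mechanism this abstract describes (not the actual RRTTP specification), the sketch below shows a UDP-style sender that tags packets with a priority and sequence number, retransmits unacknowledged packets, and drops any packet whose freshness deadline has passed so that out-of-date data is never re-sent. All class, field and parameter names are hypothetical.

```python
import heapq
import time
from dataclasses import dataclass, field

PERIODIC, SPORADIC = 1, 0   # sporadic (alarm) traffic gets the smaller, i.e. higher, priority

@dataclass(order=True)
class Packet:
    priority: int                               # lower value = sent first
    seq: int = field(compare=False)
    deadline: float = field(compare=False)      # packet is useless after this time
    payload: bytes = field(compare=False)

class RrttpLikeSender:
    """Toy sender: priority queue plus ACK-driven retransmission that skips stale data."""
    def __init__(self, retx_timeout=0.05):
        self.queue = []                          # min-heap ordered by priority
        self.unacked = {}                        # seq -> (Packet, last_send_time)
        self.retx_timeout = retx_timeout
        self.next_seq = 0

    def submit(self, payload: bytes, priority: int, lifetime: float):
        pkt = Packet(priority, self.next_seq, time.time() + lifetime, payload)
        self.next_seq += 1
        heapq.heappush(self.queue, pkt)

    def on_ack(self, seq: int):
        self.unacked.pop(seq, None)              # stop retransmitting acknowledged packets

    def poll(self, now: float):
        """Return the next packet to transmit, or None."""
        # Retransmit the oldest unacked packet whose timer expired and that is still fresh.
        for seq, (pkt, sent_at) in sorted(self.unacked.items()):
            if now - sent_at > self.retx_timeout:
                if now > pkt.deadline:           # out of date: drop instead of resending
                    del self.unacked[seq]
                    continue
                self.unacked[seq] = (pkt, now)
                return pkt
        # Otherwise send the highest-priority fresh packet waiting in the queue.
        while self.queue:
            pkt = heapq.heappop(self.queue)
            if now > pkt.deadline:
                continue                         # stale periodic sample, skip it
            self.unacked[pkt.seq] = (pkt, now)
            return pkt
        return None
```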
Abstract:
This paper presents a model for the generation of a MAC tag using a stream cipher. The input message is used indirectly to control segments of the keystream that form the MAC tag. Several recent proposals can be considered as instances of this general model, as they all perform message accumulation in this way. However, they use slightly different processes in the message preparation and finalisation phases. We examine the security of this model for different options and against different types of attack, and conclude that the indirect injection model can be used to generate MAC tags securely for certain combinations of options. Careful consideration is required at the design stage to avoid combinations of options that result in susceptibility to forgery attacks. Additionally, some implementations may be vulnerable to side-channel attacks if used in Authenticated Encryption (AE) algorithms. We give design recommendations to provide resistance to these attacks for proposals following this model.
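A minimal, deliberately simplified sketch of the "indirect injection" idea described above: the message never enters the accumulator directly; instead each message bit decides whether the corresponding keystream segment is folded into the tag. The toy keystream generator and all names are assumptions for illustration, not any of the reviewed proposals.

```python
import hashlib

def toy_keystream(key: bytes, nonce: bytes, n_bytes: int) -> bytes:
    """Toy counter-mode PRG built from SHA-256; stands in for a real stream cipher."""
    out, counter = bytearray(), 0
    while len(out) < n_bytes:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

def indirect_injection_mac(key: bytes, nonce: bytes, message: bytes, tag_len: int = 16) -> bytes:
    """Message bits select which keystream segments are accumulated (XORed) into the tag."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    # One tag-sized keystream segment per message bit, plus one segment for finalisation.
    stream = toy_keystream(key, nonce, (len(bits) + 1) * tag_len)
    acc = bytearray(tag_len)
    for i, bit in enumerate(bits):
        if bit:  # indirect injection: the bit only controls whether this segment is accumulated
            seg = stream[i * tag_len:(i + 1) * tag_len]
            acc = bytearray(a ^ s for a, s in zip(acc, seg))
    # Finalisation: mask the accumulator with a fresh keystream segment.
    final = stream[len(bits) * tag_len:]
    return bytes(a ^ f for a, f in zip(acc, final))

tag = indirect_injection_mac(b"\x01" * 16, b"nonce123", b"hello world")
```

As the abstract notes, the security of such a construction hinges on the preparation and finalisation choices; this toy makes no claim to forgery resistance.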
Abstract:
For industrial wireless sensor networks, maintaining the routing path for a high packet delivery ratio is one of the key objectives in network operations. It is important both to provide a high data delivery rate and to guarantee timely delivery of data packets at the sink node. Most proactive routing protocols for sensor networks are based on simple periodic updates to distribute the routing information. A faulty link causes packet loss and retransmission at the source until periodic route update packets are issued and the link has been identified as broken. We propose a new proactive route maintenance process in which the periodic update is backed up by a secondary layer of local updates issued at shorter periods for timely discovery of broken links. The proposed route maintenance scheme improves the reliability of the network by decreasing packet loss due to delayed identification of broken links. We show by simulation that the proposed mechanism performs better than popular existing routing protocols (AODV, AOMDV and DSDV) in terms of end-to-end delay, routing overhead and packet reception ratio.
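A hedged sketch of the two-tier maintenance idea: a slow global route update backed by a much faster local probe so that a broken link is detected well before the next global update. Period lengths, thresholds and names are invented for illustration, not taken from the paper.

```python
GLOBAL_UPDATE_PERIOD = 30.0   # full proactive route update (seconds), hypothetical value
LOCAL_PROBE_PERIOD = 3.0      # secondary layer: short-period local link probes
PROBE_LOSS_THRESHOLD = 3      # consecutive missed probes before a link is declared broken

class RouteMaintainer:
    """Toy two-tier maintenance: periodic route updates plus faster local link probing."""
    def __init__(self, neighbours):
        self.missed = {n: 0 for n in neighbours}
        self.last_global = 0.0
        self.last_probe = 0.0

    def tick(self, now, probe_replies):
        """probe_replies: set of neighbour IDs heard since the last probe round."""
        events = []
        if now - self.last_global >= GLOBAL_UPDATE_PERIOD:
            self.last_global = now
            events.append("broadcast_full_route_update")
        if now - self.last_probe >= LOCAL_PROBE_PERIOD:
            self.last_probe = now
            for n in self.missed:
                if n in probe_replies:
                    self.missed[n] = 0
                else:
                    self.missed[n] += 1
                    if self.missed[n] >= PROBE_LOSS_THRESHOLD:
                        # Link declared broken long before the next global update,
                        # so upstream nodes can reroute instead of losing packets.
                        events.append(f"local_repair:{n}")
                        self.missed[n] = 0
        return events

rm = RouteMaintainer(["B", "C"])
# At t=31 s both timers have expired; C missed one probe but is not yet declared broken.
print(rm.tick(now=31.0, probe_replies={"B"}))
```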
Abstract:
Software development settings provide a great opportunity for CSCW researchers to study collaborative work. In this paper, we explore a specific work practice called bug reproduction that is a part of the software bug-fixing process. Bug reproduction is a highly collaborative process by which software developers attempt to locally replicate the ‘environment’ within which a bug was originally encountered. Customers, who encounter bugs in their everyday use of systems, play an important role in bug reproduction as they provide useful information to developers, in the form of steps for reproduction, software screenshots, trace logs, and other ways to describe a problem. Bug reproduction, however, poses major hurdles in software maintenance as it is often challenging to replicate the contextual aspects that are at play at the customers’ end. To study the bug reproduction process from a human-centered perspective, we carried out an ethnographic study at a multinational engineering company. Using semi-structured interviews, a questionnaire and half-a-day observation of sixteen software developers working on different software maintenance projects, we studied bug reproduction. In this paper, we present a holistic view of bug reproduction practices from a real-world setting and discuss implications for designing tools to address the challenges developers face during bug reproduction.
Abstract:
Service-oriented architectures and Web services have matured and become more widely accepted and used by industry. This growing adoption has increased the demand for new ways of using Web service technology. Users have started re-combining and mediating other providers’ services in ways that were not anticipated by their original providers. Within organisations and cross-organisational communities, discoverable services are organised in repositories providing convenient access to adaptable end-to-end business processes. This idea is captured in the term Service Ecosystem. This paper addresses the question of how quality management can be performed in such service ecosystems. Service quality management is a key challenge when services are composed of a dynamic set of heterogeneous sub-services from different service providers. This paper contributes to this important area by developing a reference model of quality management in service ecosystems. We illustrate the application of the reference model in an exploratory case study. With this case study, we show how the reference model helps to derive requirements for the implementation and support of quality management in an exemplary service ecosystem in public administration.
Acceptability-based QoE management for user-centric mobile video delivery: a field study evaluation
Abstract:
Effective Quality of Experience (QoE) management for mobile video delivery, which optimizes overall user experience while adapting to heterogeneous use contexts, remains a major challenge to date. This paper proposes a mobile video delivery system that uses acceptability as the main indicator of QoE to manage the end-to-end factors in delivering mobile video services. The first contribution is a novel framework for a user-centric mobile video system based on acceptability-based QoE (A-QoE) prediction models, which were derived from comprehensive subjective studies. The second contribution is a set of results from a field study that evaluates the user experience of the proposed system under realistic usage circumstances, addressing the impacts of perceived video quality, loading speed, interest in content, viewing locations, network bandwidth, display devices, and different video coding approaches, including region-of-interest (ROI) enhancement and center zooming.
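The paper's actual A-QoE models come from its subjective studies; the sketch below only illustrates how such a predictor could drive encoding decisions. The logistic form, coefficients and feature names are invented for illustration.

```python
import math

def predicted_acceptability(bitrate_kbps, loading_s, on_small_screen):
    """Hypothetical logistic A-QoE predictor; coefficients are illustrative only."""
    score = (-4.0
             + 2.2 * math.log10(bitrate_kbps)        # higher bitrate -> higher acceptability
             - 0.4 * loading_s                       # long start-up delay hurts acceptability
             + 0.5 * (1 if on_small_screen else 0))  # small screens tolerate lower quality
    return 1.0 / (1.0 + math.exp(-score))

def choose_bitrate(available_kbps, ladder=(400, 800, 1500, 3000, 6000),
                   target_acceptability=0.8, on_small_screen=True):
    """Pick the lowest bitrate in the ladder that meets the acceptability target."""
    feasible = [b for b in ladder if b <= available_kbps]
    for b in feasible:
        loading_s = 2.0 * b / available_kbps         # crude start-up delay estimate
        if predicted_acceptability(b, loading_s, on_small_screen) >= target_acceptability:
            return b
    return feasible[-1] if feasible else ladder[0]

print(choose_bitrate(available_kbps=2500))
```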
Abstract:
This thesis introduces a method of applying Bayesian Networks to combine information from a range of data sources for effective decision support systems. It develops a set of techniques for the development, validation, visualisation and application of Complex Systems models, with a working demonstration in an Australian airport environment. These end-to-end techniques are applied to the development of model-based dashboards to support operators and decision makers in the multi-stakeholder airport environment, providing flexible, informative and applicable interpretations of a system's behaviour under uncertain conditions, together with measures of confidence in those interpretations.
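The thesis's airport models combine many data sources; as a minimal, self-contained illustration of the underlying Bayesian Network mechanics, the sketch below hand-rolls a three-node network and infers one posterior by enumeration. The node names and all probability values are invented for illustration.

```python
from itertools import product

# Toy three-node network: Weather -> FlightDelay -> SecurityQueue.
P_weather = {"good": 0.8, "bad": 0.2}
P_delay_given_weather = {("good", True): 0.1, ("good", False): 0.9,
                         ("bad", True): 0.5, ("bad", False): 0.5}
P_queue_given_delay = {(True, "long"): 0.7, (True, "short"): 0.3,
                       (False, "long"): 0.2, (False, "short"): 0.8}

def joint(weather, delay, queue):
    return (P_weather[weather]
            * P_delay_given_weather[(weather, delay)]
            * P_queue_given_delay[(delay, queue)])

def posterior_delay_given_queue(queue_obs):
    """P(FlightDelay | SecurityQueue = queue_obs) by brute-force enumeration."""
    scores = {True: 0.0, False: 0.0}
    for weather, delay in product(P_weather, [True, False]):
        scores[delay] += joint(weather, delay, queue_obs)
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

# Observing a long security queue raises the probability that flights are delayed
# from the prior of 0.18 to roughly 0.43 with these toy numbers.
print(posterior_delay_given_queue("long"))
```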
Abstract:
Underwater wireless sensor networks (UWSNs) have recently attracted considerable research attention owing to their capacity to explore underwater areas and to support applications in marine discovery and oceanic surveillance. One of the main objectives of each deployed underwater network is discovering an optimized path over the sensor nodes to transmit the monitored data to an onshore station. Transmitting data consumes energy at each node, and energy is limited in UWSNs, so energy efficiency is a central challenge in underwater wireless sensor networks. Dual-sink vector-based forwarding (DS-VBF) takes both residual energy and location information into consideration as priority factors to discover an optimized routing path that saves energy in underwater networks. The modified routing protocol employs dual sinks on the water surface, which improves network lifetime. With dual sinks deployed, the packet delivery ratio and the average end-to-end delay are improved. In our simulations, compared with VBF, the average end-to-end delay was reduced by more than 80%, the remaining energy increased by 10%, and the packet reception ratio increased by about 70%.
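A sketch of how a forwarding decision could weigh residual energy against proximity to the routing vector toward the nearer of two surface sinks, in the spirit of the priority factors described above. The weights, geometry helpers and example coordinates are invented for illustration and are not the DS-VBF specification.

```python
import math

def distance_to_segment(p, a, b):
    """Distance from point p to the routing vector (line segment a -> b) in 3-D."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    denom = sum(c * c for c in ab) or 1e-9
    t = max(0.0, min(1.0, sum(c * d for c, d in zip(ab, ap)) / denom))
    closest = tuple(ai + t * ci for ai, ci in zip(a, ab))
    return math.dist(p, closest)

def choose_next_hop(source, neighbours, sinks, w_energy=0.5, w_vector=0.5):
    """Score each neighbour by residual energy and closeness to the vector
    toward its nearest surface sink; weights are illustrative only."""
    best, best_score = None, -1.0
    for node_id, pos, residual, capacity in neighbours:
        sink = min(sinks, key=lambda s: math.dist(pos, s))   # dual sinks: pick the nearer one
        vector_dev = distance_to_segment(pos, source, sink)
        score = (w_energy * (residual / capacity)
                 + w_vector * (1.0 / (1.0 + vector_dev)))
        if score > best_score:
            best, best_score = node_id, score
    return best

# Example: two candidate relays, two sinks on the surface (z = 0).
sinks = [(0.0, 0.0, 0.0), (500.0, 0.0, 0.0)]
neighbours = [("n1", (40.0, 10.0, -80.0), 70.0, 100.0),
              ("n2", (60.0, 90.0, -80.0), 95.0, 100.0)]
print(choose_next_hop((50.0, 50.0, -120.0), neighbours, sinks))
```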
Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are one of the key foundations of smart grids. The Distributed Network Protocol version 3 (DNP3) is a standard SCADA protocol designed to facilitate communications in substations and smart grid nodes. The protocol is embedded with a security mechanism called Secure Authentication (DNP3-SA), which ensures that end-to-end communication security is provided in substations. This paper presents a formal model for the behavioural analysis of DNP3-SA using Coloured Petri Nets (CPN). Our DNP3-SA CPN model is capable of testing and verifying various attack scenarios (modification, replay, spoofing and combined complex attacks) as well as mitigation strategies. Using the model has revealed a previously unidentified flaw in the DNP3-SA protocol that can be exploited by an attacker with access to the network interconnecting DNP3 devices: the attacker can launch a successful attack on an outstation without possessing the pre-shared keys by replaying a previously authenticated command with arbitrary parameters. We propose an update to the DNP3-SA protocol that removes the flaw and prevents such attacks. The update is validated and verified using our CPN model, demonstrating the effectiveness of the model and the importance of formal protocol analysis.
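To make the class of flaw concrete, the toy below shows a deliberately flawed challenge-response check in which the MAC binds the challenge and function code but not the command parameters, so a captured authentication can be replayed with arbitrary parameters. This is an illustrative toy only, not the DNP3-SA message format or the paper's CPN model.

```python
import hmac, hashlib, os

KEY = os.urandom(16)   # pre-shared key known only to master and outstation

def authenticate(challenge: bytes, function_code: bytes) -> bytes:
    # Flawed toy design: the MAC binds the challenge and function code
    # but NOT the command parameters.
    return hmac.new(KEY, challenge + function_code, hashlib.sha256).digest()

def outstation_accepts(challenge, function_code, parameters, mac) -> bool:
    expected = authenticate(challenge, function_code)
    return hmac.compare_digest(expected, mac)   # parameters are never checked

# 1. Legitimate exchange observed on the wire by an attacker.
challenge = os.urandom(8)
fc, params = b"OPERATE", b"relay=3;state=open"
mac = authenticate(challenge, fc)
assert outstation_accepts(challenge, fc, params, mac)

# 2. Replay with arbitrary parameters: accepted without knowing the key.
evil_params = b"relay=ALL;state=open"
assert outstation_accepts(challenge, fc, evil_params, mac)

# Fix sketch: also bind the parameters (and a freshness value) into the MAC.
def authenticate_fixed(challenge, function_code, parameters, nonce):
    return hmac.new(KEY, challenge + function_code + parameters + nonce,
                    hashlib.sha256).digest()
```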
Abstract:
Climate change is one of the most important issues confronting the sustainable supply of seafood, with projections suggesting major effects on wild and farmed fisheries worldwide. While climate change has been a consideration for Australian fisheries and aquaculture management, emphasis in both research and adaptation effort has been at the production end of supply chains—impacts further along the chain have been overlooked to date. A holistic biophysical and socio-economic system view of seafood industries, as represented by end-to-end supply chains, may lead to an additional set of options in the face of climate change, thus maximizing opportunities for improved fishery profitability, while also reducing the potential for maladaptation. In this paper, we explore Australian seafood industry stakeholder perspectives on potential options for adaptation along seafood supply chains based on future potential scenarios. Stakeholders, representing wild capture and aquaculture industries, provided a range of actions targeting different stages of the supply chain. Overall, proposed strategies were predominantly related to the production end of the supply chain, suggesting that greater attention in developing adaptation options is needed at post-production stages. However, there are chain-wide adaptation strategies that can present win–win scenarios, where commercial objectives beyond adaptation can also be addressed alongside direct or indirect impacts of climate. Likewise, certain adaptation strategies in place at one stage of the chain may have varying implications on other stages of the chain. These findings represent an important step in understanding the role of supply chains in effective adaptation of fisheries and aquaculture industries to climate change.
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the widening gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much more computing resource contention with simulations, and such contention severely damages simulation performance on HEC platforms. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transmission bandwidth and improves application end-to-end transfer performance.
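The actual FlexAnalytics placement logic is not reproduced here; the sketch below only illustrates the kind of trade-off such a framework must evaluate, comparing estimated output-path time for compressing in-situ versus shipping raw data. The cost model and all parameter values are illustrative assumptions.

```python
def transfer_time_s(size_gb, bandwidth_gbps):
    return size_gb * 8 / bandwidth_gbps

def end_to_end_time(size_gb, bandwidth_gbps, placement,
                    compress_ratio=4.0, compress_gbps=2.0):
    """Estimated output-path time for one snapshot under a given analytics placement.
    Parameters are illustrative assumptions, not measured FlexAnalytics numbers."""
    if placement == "in_situ":     # compress on the compute nodes, ship reduced data
        compute = size_gb / compress_gbps
        return compute + transfer_time_s(size_gb / compress_ratio, bandwidth_gbps)
    if placement == "offline":     # ship raw data, analyse at the destination
        return transfer_time_s(size_gb, bandwidth_gbps)
    raise ValueError(placement)

def choose_placement(size_gb, bandwidth_gbps):
    options = ("in_situ", "offline")
    return min(options, key=lambda p: end_to_end_time(size_gb, bandwidth_gbps, p))

# Narrow WAN link: compressing in-situ wins; fast local link: shipping raw may win.
print(choose_placement(size_gb=100, bandwidth_gbps=1))    # likely 'in_situ'
print(choose_placement(size_gb=100, bandwidth_gbps=100))  # likely 'offline'
```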
Abstract:
Introduction: Electronic medication administration record (eMAR) systems are promoted as a potential intervention to enhance medication safety in residential aged care facilities (RACFs). The purpose of this study was to conduct an in-practice evaluation of an eMAR being piloted in one Australian RACF before its roll out, and to provide recommendations for system improvements. Methods: A multidisciplinary team conducted direct observations of workflow (n=34 hours) in the RACF site and the community pharmacy. Semi-structured interviews (n=5) with RACF staff and the community pharmacist were conducted to investigate their views of the eMAR system. Data were analysed using a grounded theory approach to identify challenges associated with the design of the eMAR system. Results: The current eMAR system does not offer an end-to-end solution for medication management. Many steps, including prescribing by doctors and communication with the community pharmacist, are still performed manually using paper charts and fax machines. Five major challenges associated with the design of the eMAR system were identified: limited interactivity; inadequate flexibility; problems related to information layout and semantics; the lack of relevant decision support; and system maintenance issues. We suggest recommendations to improve the design of the eMAR system and to optimize existing workflows. Discussion: Immediate value can be achieved by improving system interactivity, reducing inconsistencies in data entry design and offering dedicated organisational support to minimise connectivity issues. Longer-term benefits can be achieved by adding decision support features and establishing system interoperability requirements with stakeholder groups (e.g. community pharmacies) prior to system roll out. In-practice evaluations of technologies like the eMAR system have great value in identifying design weaknesses that inhibit optimal system use.
Abstract:
Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using “end-to-end connectivity” as part of its design, allowing nodes of the network to send packets to all other nodes of the network without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a “dumb” network, with “intelligent” devices (such as personal computers) at the end or “last mile” of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will firstly elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. The use of deep packet inspection has already created legal problems involving fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment of the conformity of the use of deep packet inspection with law will be made. The paper will concentrate on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. It will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
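A minimal illustration of the technical distinction drawn above between header-only ("dumb network") inspection and deep packet inspection that also reads the payload. The packet layout and payload signatures are invented for illustration; real DPI equipment reconstructs flows and uses far richer signature sets.

```python
# Toy packets: in a real network these would be parsed from raw bytes on the wire.
packets = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 443,
     "payload": b"\x16\x03\x01..."},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "dst_port": 6881,
     "payload": b"\x13BitTorrent protocol"},
]

def shallow_inspect(pkt):
    """Header-only view: an intermediate node routes/filters on addresses and ports alone."""
    return {"dst": pkt["dst"], "dst_port": pkt["dst_port"]}

PAYLOAD_SIGNATURES = {b"BitTorrent protocol": "bittorrent",   # illustrative signatures only
                      b"\x16\x03": "tls_handshake"}

def deep_inspect(pkt):
    """DPI view: the intermediate node also reads the payload to classify the traffic."""
    for signature, label in PAYLOAD_SIGNATURES.items():
        if signature in pkt["payload"]:
            return label
    return "unclassified"

for pkt in packets:
    print(shallow_inspect(pkt), deep_inspect(pkt))
```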