864 results for communication networks
Abstract:
Suppose two parties, holding vectors A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n) respectively, wish to know whether a_i > b_i for all i, without disclosing any private input. This problem is called the vector dominance problem, and it is closely related to the well-studied problem of securely comparing two numbers (Yao's millionaires problem). In this paper, we propose several protocols for this problem, which improve upon existing protocols in round complexity or communication/computation complexity.
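For reference, the (non-private) functionality computed by such protocols is just the conjunction of coordinate-wise comparisons; below is a minimal Python sketch of the plaintext check, while the protocols in the paper compute the same predicate without either party revealing its vector.

    # Plaintext vector dominance predicate: True iff a_i > b_i for every coordinate i.
    # The paper's protocols compute this jointly without disclosing A or B.
    def dominates(a, b):
        if len(a) != len(b):
            raise ValueError("vectors must have equal length")
        return all(x > y for x, y in zip(a, b))

    # Example: (3, 5, 7) dominates (1, 4, 6) but not (1, 6, 6).
    assert dominates([3, 5, 7], [1, 4, 6])
    assert not dominates([3, 5, 7], [1, 6, 6])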
Abstract:
The first-generation e-passport standard has been shown to be insecure and prone to various attacks. To strengthen it, the European Union (EU) has proposed an Extended Access Control (EAC) mechanism for e-passports that intends to provide better security in protecting the biometric information of the e-passport bearer. However, our analysis shows that the EU proposal fails to address many security and privacy issues that are paramount in implementing a strong security mechanism. In this paper we propose an on-line authentication mechanism for electronic passports that addresses the weaknesses in the existing implementations of both the International Civil Aviation Organisation (ICAO) and the EU. Our proposal utilises the ICAO PKI implementation, thus requiring very few modifications to the existing infrastructure, which is already well established.
Abstract:
Background & Research Focus: Managing knowledge for innovation and organisational benefit has been extensively investigated in studies of large firms (Smith, Collins & Clark, 2005; Zucker et al., 2007), but there is comparatively little research on small- and medium-sized enterprises (SMEs). There are some investigations of SMEs in knowledge management research, but the question of where the potential challenges lie for managing knowledge more effectively within these firms remains open. Effective knowledge management (KM) processes and systems lead to improved performance in pursuing distinct capabilities that contribute to firm-level innovation (Nassim 2009; Zucker et al. 2007; Verona and Ravasi 2003). Managing internal and external knowledge in a way that links it closely to the innovation process can assist the creation and implementation of new products and services. KM is particularly important in knowledge-intensive firms where the knowledge requirements are highly specialised, diverse and often emergent. However, to a large extent the KM processes of small firms, which are often the source of new knowledge and an important element of the value networks of larger companies, have not been closely studied. To address this gap, which is of increasing importance with the growing number of small firms, we need to further investigate knowledge management processes and the ways that firms find, capture, apply and integrate knowledge from multiple sources for their innovation process. This study builds on the previous literature, applies existing frameworks, and takes the process and activity view of knowledge management as its point of departure (see among others Kraaijenbrink, Wijnhoven & Groen, 2007; Enberg, Lindkvist, & Tell, 2006; Lu, Wang & Mao, 2007). The paper attempts to develop a better understanding of the challenges of knowledge management within the innovation process in small knowledge-oriented firms, and aims to explore knowledge management processes and practices in firms engaged in new product/service development programs. Consistent with the exploratory character of the study, the research question is: how is knowledge integrated, sourced and recombined from internal and external sources for innovation and new product development?
Research Method: The research took an exploratory case study approach and developed a theoretical framework to investigate the knowledge situation of knowledge-intensive firms. Equipped with this conceptual foundation, the research adopted a multiple case study method investigating four diverse Australian knowledge-intensive firms from the IT, biotechnology, nanotechnology and biochemistry industries. The multiple case study method allowed us to document in some depth the knowledge management experience of these firms. Case study data were collected through a review of company published data and semi-structured interviews with managers, using an interview guide to ensure uniform coverage of the research themes. The interview guide was developed after the framework had been established and after a review of the methodologies and issues covered by similar studies in other countries, and it shared some questions with those studies. It was framed to gather data around knowledge management activity within the business, focusing on the identification, acquisition and utilisation of knowledge, but collecting a range of information about the subject as well. The focus of the case studies was on the use of external and internal knowledge to support the firms' knowledge-intensive products and services.
Key Findings: First, a conceptual and strategic knowledge management framework was developed. The knowledge determinants relate to the nature of knowledge, the organisational context, and the mechanism of the linkages between internal and external knowledge. Overall, a number of key observations derive from this study, demonstrating the challenges of managing knowledge and how important KM is as a management tool for the innovation process in knowledge-oriented firms. To summarise, the findings suggest that the knowledge management process in these firms is very much project focused, is not embedded within the overall organisational routines, and is mainly based on ad hoc and informal processes. Our findings highlighted the lack of formal knowledge management processes within our sampled firms. This points to the need for more specialised capabilities in knowledge management for these firms. We observed the need for an effective knowledge transfer support system to facilitate knowledge sharing and, in particular, the capturing and transferring of tacit knowledge from one team member to another. In sum, our findings indicate that building effective and adaptive IT systems to manage and share knowledge in the firm is one of the biggest challenges for these small firms. Also, there is little explicit strategy in small knowledge-intensive firms that is targeted at systematic KM at either the strategic or the operational level. Therefore, a strategic approach to managing knowledge for innovation, as well as leadership and management, is essential to achieving effective KM. In particular, the research findings demonstrate that gathering tacit knowledge, internal and external to the organisation, and applying processes to ensure the availability of knowledge to innovation teams, drives down the risks and cost of innovation. KM activities and tools, such as KM systems, environmental scanning, benchmarking, intranets, firm-wide databases and communities of practice to acquire knowledge and make it accessible, were elements of KM.
Practical Implications: The case study method used in this study provides practical insight into the knowledge management process within Australian knowledge-intensive firms. It also provides useful lessons which can be used by other firms in managing knowledge more effectively in the innovation process. The findings should be helpful for small firms searching for a practical method for managing and integrating their specialised knowledge. Using the results of this exploratory study, and to address the challenges of knowledge management, this study proposes five practices, discussed in the paper, for managing knowledge more efficiently to improve innovation: (1) knowledge-based firms must be strategic in knowledge management processes for innovation, (2) leadership and management should encourage various practices for knowledge management, (3) capturing and sharing tacit knowledge is critical and should be managed, (4) team knowledge integration practices should be developed, (5) knowledge management and integration through communication networks and technology systems should be encouraged and strengthened. In sum, the main managerial contribution of the paper is the recognition of knowledge determinants and processes, and their effects on effective knowledge management within the firm. This may serve as a useful benchmark in the strategic planning of the firm as it utilises new and specialised knowledge.
Abstract:
Unified Communication (UC) is the integration of two or more real-time communication systems into one platform. Integrating core communication systems into one overall enterprise-level system delivers more than just cost savings. These real-time interactive communication services and applications over Internet Protocol (IP) have become critical in boosting employee accessibility and efficiency, improving customer support and fostering business agility. However, some small and medium-sized businesses (SMBs) are far from implementing this solution due to the high cost of initial deployment and ongoing support. In this paper, we discuss and demonstrate an open source UC solution, viz. “Asterisk”, for use by SMBs, and report on some performance tests using SIPp. The contribution of this research is the provision of technical advice to SMBs on deploying UC in a way that is manageable in terms of cost, ease of deployment and support.
Abstract:
We present some improved analytical results as part of the ongoing work on the analysis of the Fugue-256 hash function, a second round candidate in NIST’s SHA-3 competition. First we improve Aumasson and Phan’s integral distinguisher on 5.5 rounds of the final transformation of Fugue-256 to 16.5 rounds. Next we improve the designers’ meet-in-the-middle preimage attack on Fugue-256 from 2^480 time and memory to 2^416. Finally, we comment on possible methods to obtain free-start distinguishers and free-start collisions for Fugue-256.
Abstract:
Preface The 9th Australasian Conference on Information Security and Privacy (ACISP 2004) was held in Sydney, 13–15 July, 2004. The conference was sponsored by the Centre for Advanced Computing – Algorithms and Cryptography (ACAC), Information and Networked Security Systems Research (INSS), Macquarie University and the Australian Computer Society. The aims of the conference are to bring together researchers and practitioners working in areas of information security and privacy from universities, industry and government sectors. The conference program covered a range of aspects including cryptography, cryptanalysis, systems and network security. The program committee accepted 41 papers from 195 submissions. The reviewing process took six weeks and each paper was carefully evaluated by at least three members of the program committee. We appreciate the hard work of the members of the program committee and external referees who gave many hours of their valuable time. Of the accepted papers, there were nine from Korea, six from Australia, five each from Japan and the USA, three each from China and Singapore, two each from Canada and Switzerland, and one each from Belgium, France, Germany, Taiwan, The Netherlands and the UK. All the authors, whether or not their papers were accepted, made valued contributions to the conference. In addition to the contributed papers, Dr Arjen Lenstra gave an invited talk, entitled Likely and Unlikely Progress in Factoring. This year the program committee introduced the Best Student Paper Award. The winner of the prize for the Best Student Paper was Yan-Cheng Chang from Harvard University for his paper Single Database Private Information Retrieval with Logarithmic Communication. We would like to thank all the people involved in organizing this conference. In particular we would like to thank members of the organizing committee for their time and efforts, Andrina Brennan, Vijayakrishnan Pasupathinathan, Hartono Kurnio, Cecily Lenton, and members from ACAC and INSS.
Abstract:
Unified communications as a service (UCaaS) can be regarded as a cost-effective model for on-demand delivery of unified communications services in the cloud. However, addressing security concerns has been seen as the biggest challenge to the adoption of IT services in the cloud. This study set up a cloud system via the VMware suite in a laboratory environment to emulate hosting unified communications (UC) services, i.e. the integration of two or more real-time communication systems, in the cloud. An Internet Protocol Security (IPSec) gateway was also set up to provide network-level security for UCaaS against possible security exposures. The study aimed to analyse an implementation of UCaaS over IPSec and to evaluate the latency of UC traffic while that traffic is being protected by encryption. Our test results show no noticeable added latency when IPSec is implemented with the G.711 audio codec. However, with an IPSec implementation the performance of the G.722 audio codec affects the overall performance of the UC server. These results provide technical advice and guidance to those responsible for UC security controls on premises as well as in the cloud.
Abstract:
The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems, as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows subscribers to express their interests very accurately. In order to implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network; it is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking, and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
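As a rough illustration of content-based subscriptions (not of the poset routing-table algorithm proposed in the work), a subscription can be modelled as a conjunction of attribute predicates that a notification must satisfy; a minimal Python sketch with hypothetical attribute names follows.

    # Hypothetical sketch: a subscription is a set of per-attribute predicates; a broker
    # forwards a notification to a subscriber only if every predicate is satisfied.
    def matches(subscription, notification):
        return all(attr in notification and predicate(notification[attr])
                   for attr, predicate in subscription.items())

    subscription = {"type": lambda v: v == "stock-quote", "price": lambda v: v < 100.0}
    notification = {"type": "stock-quote", "symbol": "XYZ", "price": 97.5}
    assert matches(subscription, notification)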
Abstract:
We study a scheduling problem in a wireless network where vehicles are used as store-and-forward relays, a situation that might arise, for example, in practical rural communication networks. A fixed source node wants to transfer a file to a fixed destination node, located beyond its communication range. In the absence of any infrastructure connecting the two nodes, we consider the possibility of communication using vehicles passing by. Vehicles arrive at the source node at renewal instants and are known to travel towards the destination node with average speed v sampled from a given probability distribution. The source node communicates data packets (or fragments) of the file to the destination node using these vehicles as relays. We assume that the vehicles communicate with the source node and the destination node only, and hence every packet communication involves two hops. In this setup, we study the source node's sequential decision problem of transferring packets of the file to vehicles as they pass by, with the objective of minimizing delay in the network. We study both the finite file size case and the infinite file size case. In the finite file size case, we aim to minimize the expected file transfer delay, i.e. the expected value of the maximum of the packet sojourn times. In the infinite file size case, we study the average packet delay minimization problem as well as the optimal tradeoff achievable between the average queueing delay at the source node buffer and the average transit delay in the relay vehicle.
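In notation introduced here purely for illustration (not the paper's), if the file consists of N packets and D_i denotes the sojourn time of packet i, the finite-file-size objective is

    \min \; \mathbb{E}\big[\, \max_{1 \le i \le N} D_i \,\big],

while the infinite-file-size case concerns the long-run average of the packet delays and the tradeoff between their queueing and transit components.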
Abstract:
Security in a mobile communication environment is always a matter for concern, even after deploying many security techniques at the device, network, and application levels. End-to-end security for mobile applications can be made robust by developing dynamic schemes at the application level which make use of existing security techniques varying in terms of space, time, and attack complexities. In this paper we present a security techniques selection scheme for mobile transactions, called the Transactions-Based Security Scheme (TBSS). The TBSS uses intelligence to study and analyze the security implications of transactions under execution, based on certain criteria such as user behaviors, transaction sensitivity levels, credibility factors computed over the users' previous transactions, network vulnerability, and device characteristics. The TBSS identifies a suitable level of security techniques from a repository, which consists of symmetric and asymmetric security algorithms arranged in three complexity levels, covering various encryption/decryption techniques, digital signature schemes, and hashing techniques. From this identified level, one of the techniques is deployed randomly. The results show that there is a considerable reduction in security cost compared to static schemes, which employ pre-fixed security techniques to secure the transaction data.
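A rough Python sketch of the selection step described above follows; it is illustrative only, and the criteria weights, level thresholds and repository contents are hypothetical, not those of the TBSS.

    import random

    # Hypothetical repository: security techniques arranged in three complexity levels.
    REPOSITORY = {
        1: ["AES-128", "SHA-256"],
        2: ["AES-256", "RSA-2048 signature", "SHA-384"],
        3: ["RSA-4096 signature", "ECDSA-P521", "SHA-512"],
    }

    def select_technique(sensitivity, vulnerability, credibility):
        # Combine transaction criteria (all in [0, 1]) into one risk score; weights are illustrative.
        risk = 0.5 * sensitivity + 0.3 * vulnerability + 0.2 * (1.0 - credibility)
        level = 1 if risk < 0.4 else 2 if risk < 0.7 else 3
        # Deploy one technique from the identified level at random, as in the scheme's final step.
        return random.choice(REPOSITORY[level])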
Abstract:
We propose certain discrete parameter variants of well known simulation optimization algorithms. Two of these algorithms are based on the smoothed functional (SF) technique while two others are based on the simultaneous perturbation stochastic approximation (SPSA) method. They differ from each other in the way perturbations are obtained and also the manner in which projections and parameter updates are performed. All our algorithms use two simulations and two-timescale stochastic approximation. As an application setting, we consider the important problem of admission control of packets in communication networks under dependent service times. We consider a discrete time slotted queueing model of the system and consider two different scenarios - one where the service times have a dependence on the system state and the other where they depend on the number of arrivals in a time slot. Under our settings, the simulated objective function appears ill-behaved with multiple local minima and a unique global minimum characterized by a sharp dip in the objective function in a small region of the parameter space. We compare the performance of our algorithms on these settings and observe that the two SF algorithms show the best results overall. In fact, in many cases studied, SF algorithms converge to the global minimum.
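For orientation, the standard two-simulation SPSA gradient estimate on a continuous parameter (the usual textbook form, not the discrete-parameter variants proposed in the paper) perturbs all coordinates simultaneously with independent random signs; a minimal Python sketch with an arbitrary objective:

    import numpy as np

    def spsa_step(theta, objective, delta=0.1, step=0.01, rng=np.random.default_rng(0)):
        # Random +/-1 perturbation of every coordinate; only two objective evaluations per update.
        perturbation = rng.choice([-1.0, 1.0], size=theta.shape)
        y_plus = objective(theta + delta * perturbation)
        y_minus = objective(theta - delta * perturbation)
        # Simultaneous-perturbation estimate of the gradient.
        grad_estimate = (y_plus - y_minus) / (2.0 * delta * perturbation)
        return theta - step * grad_estimate

    # Example: a few steps on a simple quadratic objective.
    theta = np.array([2.0, -1.5])
    for _ in range(100):
        theta = spsa_step(theta, lambda x: float(np.sum(x ** 2)))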
Abstract:
We propose two algorithms for Q-learning that use the two-timescale stochastic approximation methodology. The first updates the Q-values of all feasible state–action pairs at each instant, while the second updates the Q-values of states with actions chosen according to the 'current' randomized policy updates. A proof of convergence of the algorithms is given. Finally, numerical experiments using the proposed algorithms on an application of routing in communication networks are presented for a few different settings.
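For context, the ordinary synchronous tabular Q-learning update on which such algorithms build is shown below in a standard textbook form (not the paper's two-timescale recursions).

    # Standard tabular Q-learning update for one observed transition (s, a, r, s_next).
    # Q is a dict mapping (state, action) pairs to values, e.g. initialised to 0.0.
    def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])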
Abstract:
We propose several stochastic approximation implementations for related algorithms in flow control of communication networks. First, a discrete-time implementation of Kelly's primal flow-control algorithm is proposed. Convergence with probability 1 is shown, even in the presence of communication delays and stochastic effects seen in link congestion indications. This ensues from an analysis of the flow-control algorithm using the asynchronous stochastic approximation (ASA) framework. Two relevant enhancements are then pursued: a) an implementation of the primal algorithm using second-order information, and b) an implementation where edge routers rectify misbehaving flows. Next, discrete-time implementations of Kelly's dual algorithm and primal-dual algorithm are proposed. Simulation results a) verifying the proposed algorithms and b) comparing their stability properties are presented.
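For reference, Kelly's primal flow-control algorithm in its usual continuous-time form (the paper works with a discrete-time stochastic-approximation version, with a diminishing step size in place of the constant gain) is

    \frac{d}{dt} x_r(t) = \kappa \Big( w_r - x_r(t) \sum_{j \in r} \mu_j(t) \Big), \qquad \mu_j(t) = p_j\Big( \sum_{s : j \in s} x_s(t) \Big),

where x_r is the sending rate of flow r, w_r its willingness to pay, and p_j the congestion-indication (price) function of link j.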
Abstract:
Due to their non-stationarity, finite-horizon Markov decision processes (FH-MDPs) have one probability transition matrix per stage. Thus the curse of dimensionality affects FH-MDPs more severely than infinite-horizon MDPs. We propose two parametrized 'actor-critic' algorithms to compute optimal policies for FH-MDPs. Both algorithms use the two-timescale stochastic approximation technique, thus simultaneously performing gradient search in the parametrized policy space (the 'actor') on a slower timescale and learning the policy gradient (the 'critic') via a faster recursion. This is in contrast to methods where critic recursions learn the cost-to-go proper. We show convergence w.p. 1 to a set satisfying the necessary conditions for constrained optima. The proposed parametrization is for FH-MDPs with compact action sets, although certain exceptions can be handled. Further, a third algorithm for stochastic control of stopping-time processes is presented. We explain why current policy evaluation methods do not work as the critic for the proposed actor recursion. Simulation results from flow control in communication networks attest to the performance advantages of all three algorithms.
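The two-timescale structure referred to here typically relies on actor and critic step-size sequences a(n) and b(n) satisfying the standard conditions (stated in their usual textbook form, not necessarily the paper's exact assumptions)

    \sum_n a(n) = \sum_n b(n) = \infty, \qquad \sum_n \big( a(n)^2 + b(n)^2 \big) < \infty, \qquad \frac{a(n)}{b(n)} \to 0,

so that from the actor's perspective the critic recursion appears to have already converged.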
Abstract:
We develop extensions of the Simulated Annealing with Multiplicative Weights (SAMW) algorithm, which was proposed as a solution method for finite-horizon Markov decision processes (FH-MDPs). The extensions developed are in three directions: a) use of the dynamic programming principle in the policy update step of SAMW, b) a two-timescale actor-critic algorithm that uses simulated transitions alone, and c) extension of the algorithm to the infinite-horizon discounted-reward scenario. In particular, a) reduces the storage required from exponential to linear in the number of actions per stage-state pair. On the faster timescale, a 'critic' recursion performs policy evaluation while on the slower timescale an 'actor' recursion performs policy improvement using SAMW. We give a proof outlining convergence w.p. 1 and show experimental results on two settings: semiconductor fabrication and flow control in communication networks.