998 results for servers


Relevance:

10.00%

Publisher:

Abstract:

This chapter discusses an action research study towards the development of a decision framework to support a fully integrated, multidisciplinary Building Information Model (BIM) using a Model Server. The framework was proposed to facilitate multidisciplinary collaborative BIM adoption through informed selection of a project-specific BIM approach and tools, contingent upon project collaborators' readiness, tool capabilities and workflow dependencies. The aim of the research was to explore the technical concerns relating to Model Servers in supporting multidisciplinary model integration and collaboration; however, it became clear that both technical and non-technical issues needed consideration. The evidence also suggests that there are varying levels of adoption, which impacts upon further diffusion of the technologies. The need for a decision framework was therefore identified, based on the findings of an exploratory study conducted to investigate industry expectations.

The study revealed that even the market leaders who are early technology adopters in the Australian industry in many cases have varying degrees of practical experiential knowledge of BIM, and hence at times low levels of confidence in the future diffusion of BIM technology throughout the industry. The study did not focus on the benefits of BIM implementation: the industry partners involved are market leaders and early adopters of the technology and did not need convincing of the benefits, and various past studies have already contributed to the 'benefits' debate. Numerous factors affecting BIM adoption were identified and grouped into two main areas: technical tool functional requirements and needs, and non-technical strategic issues. The exploratory study evidenced the need for guidance on where to start, what tools are available, and how to work through the legal, procurement and cultural challenges. A BIM decision framework was therefore initiated based upon these industry concerns.

Eight case studies informed the development of the framework, and a summary of the key findings is presented. Primary and secondary case studies from firms that have adopted a structured approach to technology adoption are presented. The Framework consists of four interrelated key elements: a strategic purpose and scoping matrix, work process mapping, technical requirements for BIM tools and Model Servers, and a framework implementation guide. The draft Framework was presented again to key industry stakeholders and considered against current best-practice BIM adoption to further validate it; no changes to any part of the Framework were requested. Validation is nonetheless an ongoing process, and the Framework will be presented to industry again through the various project partners. It may be refined within the boundaries of the action research process as an ongoing activity, as more experiential knowledge is incorporated.

Relevance:

10.00%

Publisher:

Abstract:

Functional observers estimate a linear function of the state vector directly, without having to estimate all the individual states. In the past, various observer structures have been employed to design such functional estimates. In this paper we discuss the generality of those observer structures and prove the conditions under which they are unified. The paper also highlights and clarifies the need to remove the self-convergent states from the system, and from the functions to be estimated, before proceeding with the design of a functional observer; otherwise, incorrect conclusions regarding the existence of functional observers can be drawn.
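
For context, a commonly used Luenberger-type functional observer structure and its standard sufficient conditions are sketched below; this is background on the general technique, not necessarily the exact structure analysed in the paper.

```latex
% Plant, measured output, and the linear function to be estimated
\dot{x} = A x + B u, \qquad y = C x, \qquad z = L x
% A common Luenberger-type functional observer structure
\dot{w} = N w + J y + H u, \qquad \hat{z} = D w + E y
% Sufficient conditions for \hat{z}(t) \to z(t):
%   N Hurwitz, \quad T A - N T = J C, \quad H = T B, \quad L = D T + E C
% With these conditions the error e = w - T x satisfies \dot{e} = N e,
% so e \to 0 and \hat{z} - z = D e \to 0.
```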

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a novel approach to authentication and privacy in RFID systems based on the minimum disclosure property and in conformance with EPC Class-1 Gen-2 specifications. We take into account the computational constraints of EPC Class-1 Gen-2 passive RFID tags and employ only the cyclic redundancy check (CRC) and pseudo-random number generator (PRNG) functions that such passive tags are capable of. A detailed security analysis of our scheme shows that it offers robust security properties in terms of tag anonymity and tag untraceability, while at the same time being robust against replay, tag impersonation and desynchronisation attacks. Simulation results are also presented to study the scalability of the proposed scheme and its impact on authentication delay.
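
To make the constraint concrete, the sketch below illustrates the kind of challenge-response message a C1G2 passive tag could compute using nothing but CRC and PRNG primitives. It is an assumption-laden illustration only: the message layout, key handling and function names are hypothetical and are not the protocol proposed in the paper.

```python
# Illustrative sketch only: a tag answers a reader challenge using just a
# 16-bit CRC and a PRNG, the two primitives available on EPC C1G2 tags.
# NOT the paper's protocol; layout and names are hypothetical.
import binascii
import secrets

def crc16(data: bytes) -> int:
    # CRC-CCITT used here as a stand-in for the tag's CRC-16 circuit.
    return binascii.crc_hqx(data, 0xFFFF)

def prng16() -> int:
    # Stand-in for the tag's on-board 16-bit pseudo-random number generator.
    return secrets.randbits(16)

def tag_response(tag_key: int, reader_nonce: int) -> tuple[int, int]:
    """Tag-side computation: one PRNG draw and one CRC over XORed values."""
    tag_nonce = prng16()
    msg = (reader_nonce ^ tag_nonce ^ tag_key).to_bytes(2, "big")
    return tag_nonce, crc16(msg)

# Reader/server side: recompute the CRC with the shared key and compare.
key = 0x3A7C
n_r = secrets.randbits(16)
n_t, resp = tag_response(key, n_r)
expected = crc16((n_r ^ n_t ^ key).to_bytes(2, "big"))
print(resp == expected)  # True for a legitimate tag holding the shared key
```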

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a novel secure tag ownership transfer scheme for closed-loop RFID systems. An important property of our method is that the ownership transfer is guaranteed to be atomic and that the scheme is protected against desynchronisation leading to permanent DoS. Further, it is suited to the computational constraints of EPC Class-1 Gen-2 passive RFID tags, as it uses only the CRC and PRNG functions that such passive tags are capable of. We provide a detailed security analysis to show that our scheme satisfies the required security properties of tag anonymity, tag location privacy, forward secrecy and forward untraceability, while being resistant to replay, desynchronisation and server impersonation attacks. Performance comparisons show that our scheme is practical and can be implemented on passive low-cost RFID tags.

Relevance:

10.00%

Publisher:

Abstract:

Radio Frequency Identification (RFID) is a technological revolution that is expected to soon replace barcode systems. One of the important features of an RFID system is its ability to search for a particular tag among a group of tags, a task that is quite common in applications where RFID systems play a vital role. To our knowledge, little work has been done on secure search in RFID, and most of the existing work does not comply with the C1G2 standard. Our work aims to fill that gap by proposing a protocol based on the quadratic residue property that uses neither expensive hash functions nor complex encryption schemes, yet achieves full compliance with the industry standard while meeting the security requirements.
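
The quadratic residue property the abstract refers to can be illustrated with a Rabin-style toy example: a tag only needs a single modular squaring, while inverting that squaring requires the secret factorisation held by the server. The sketch below is a minimal illustration of this asymmetry, with made-up toy parameters, and is not the paper's actual search protocol.

```python
# Toy illustration of the quadratic-residue asymmetry (Rabin-style), not the
# paper's protocol. Squaring mod n is cheap for a tag; recovering the square
# root requires the secret primes p and q known only to the server.
p, q = 19, 23          # toy secret primes, both congruent to 3 (mod 4)
n = p * q              # public modulus n = 437

def tag_blind(value: int) -> int:
    """Tag-side operation: a single modular squaring."""
    return (value * value) % n

def server_roots(c: int) -> list[int]:
    """Server uses knowledge of the factorisation to invert the squaring.
    Brute force is fine at toy scale; a real server would use CRT with
    p, q congruent to 3 (mod 4) to compute roots by exponentiation."""
    return [x for x in range(n) if (x * x) % n == c]

c = tag_blind(123)
print(123 in server_roots(c))  # True: only the server can recover the value
```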

Relevance:

10.00%

Publisher:

Abstract:

Facebook disseminates messages for billions of users every day. Though log files are stored on central servers, law enforcement agencies outside the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups using a random graph model. Our aim is to help investigators quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size from the number of immediate friends and the number of extended friends, which are usually publicly accessible. We plot and examine UML diagrams to describe Facebook functions. Our experimental results show that asymmetric Facebook friendship satisfies the assumptions required for applying random graph models.
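
As a hedged sketch of the general modelling idea (not the estimator used in this work), one way to relate the two observable counts to a group size is to assume an Erdős–Rényi random graph, predict the expected number of friends-of-friends for a candidate group size, and pick the size whose prediction best matches the observed count. All function names and parameters below are illustrative assumptions.

```python
# Hedged illustration: fit a group size N to the observed number of immediate
# friends (f1) and extended friends / friends-of-friends (f2) under an
# Erdos-Renyi G(N, p) assumption with p estimated as f1 / (N - 1).
def expected_f2(N: int, f1: int) -> float:
    """Expected number of distance-2 contacts of a node in G(N, p)."""
    p = f1 / (N - 1)
    return (N - 1 - f1) * (1.0 - (1.0 - p) ** f1)

def estimate_group_size(f1: int, f2: int, n_max: int = 1_000_000) -> int:
    """Pick the candidate size whose model prediction best matches f2."""
    best_n, best_err = f1 + f2 + 1, float("inf")
    for n in range(f1 + f2 + 1, n_max, max(1, n_max // 5000)):
        err = abs(expected_f2(n, f1) - f2)
        if err < best_err:
            best_n, best_err = n, err
    return best_n

print(estimate_group_size(f1=200, f2=30_000))  # rough estimate of group size
```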

Relevance:

10.00%

Publisher:

Abstract:

Tracking services play a fundamental role in the smartphone ecosystem. While their primary purpose is to give smartphone users the ability to regulate how much private information is shared with external parties, these services can also be misused by advertisers to boost revenues. In this paper, we investigate tracking services on the Android and iOS smartphone platforms. We present a simple and effective way to monitor the traffic generated by tracking services between the smartphone and external servers. To evaluate our work, we dynamically execute a set of Android and iOS applications collected from their respective official markets. Our empirical results indicate that even if the user disables or limits tracking services on the smartphone, applications can bypass those settings and, consequently, leak private information to external parties. On the other hand, when testing with the location setting turned on, we notice that location is generally not tracked.
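
The monitoring setup is described only at a high level; as a hedged illustration of one common way to capture traffic between a handset and external servers, the device can be routed through an intercepting proxy. The sketch below uses mitmproxy's addon API with a hypothetical list of tracker hosts, under the assumption that the phone is configured to use the proxy; it is not the authors' tooling.

```python
# Hedged illustration: a mitmproxy addon that logs requests an app sends to a
# (hypothetical) list of known tracking/advertising hosts.
# Run with: mitmproxy -s tracker_log.py  (phone's proxy set to this machine)
from mitmproxy import http

# Hypothetical examples; a real study would use a curated tracker list.
TRACKER_HOSTS = {"ads.example-tracker.com", "analytics.example-tracker.net"}

class TrackerLogger:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if host in TRACKER_HOSTS:
            # Log the endpoint and any identifiers carried in the query string.
            print(f"[tracker] {host} {flow.request.path}")

addons = [TrackerLogger()]
```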

Relevance:

10.00%

Publisher:

Abstract:

Many web servers contain dangerous pages (which we call eigenpages) that can reveal their vulnerabilities. Worms such as Santy therefore locate their targets by searching for these eigenpages through search engines with well-crafted queries. In this paper, we focus on the modeling and containment of these special worms targeting web applications. We propose a containment system based on honeypots: search engines randomly insert, among the results returned for arriving queries, a few honey pages that lead visitors to pre-established honeypots, and infected hosts can then be detected and reported to the search engines when their malicious scans hit the honeypots. We find that the Santy worm can be effectively stopped by inserting no more than two honey pages in every one hundred search results. We also address the challenging issue of dynamically generating matching honey pages for dynamically arriving queries. Finally, a prototype is implemented to demonstrate the technical feasibility of the system. © 2013 by CESER Publications.
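
As a hedged sketch of the honey-page idea (not the generator described in the paper), a matching honey page for an arriving query only needs to contain the query terms, so that a worm's crafted search will retrieve it, while its links point at a pre-established honeypot. The URL and function below are hypothetical.

```python
# Hedged illustration: build a honey page that matches an arriving query and
# funnels automated visitors to a honeypot where scans can be detected.
HONEYPOT_URL = "http://honeypot.example.org/trap"   # hypothetical address

def make_honey_page(query: str) -> str:
    """Return minimal HTML containing the query terms and a honeypot link."""
    terms = " ".join(query.split())
    return (
        "<html><head><title>{t}</title></head>"
        "<body><p>{t}</p><a href='{u}'>{t}</a></body></html>"
    ).format(t=terms, u=HONEYPOT_URL)

# Example: a Santy-style query targeting phpBB pages.
print(make_honey_page("viewtopic.php powered by phpBB"))
```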

Relevance:

10.00%

Publisher:

Abstract:

The use of tracking settings in smartphones facilitates the provision of tailored services to users by allowing service providers access to unique identifiers stored on the smartphones. In this paper, we investigate the 'tracking off' settings on the Blackberry 10 and Windows Phone 8 platforms. To determine if they work as claimed, we set up a test bed suitable for both operating systems to capture traffic between the smartphone and external servers. We dynamically execute a set of similar Blackberry 10 and Windows Phone 8 applications, downloaded from their respective official markets. Our results indicate that even if users turn off tracking settings in their smartphones, some applications leak unique identifiers without their knowledge. © 2014 Springer International Publishing Switzerland.

Relevance:

10.00%

Publisher:

Abstract:

High Performance Computing (HPC) clouds have started to change the way research in science, in particular medicine and genomics (bioinformatics), is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. However, most HPC clouds are provided at the Infrastructure as a Service (IaaS) level: users are presented with a set of virtual servers that must be assembled into HPC environments through time-consuming resource management and software configuration tasks, which makes them practically unusable for discipline specialists without a computing background. In response, there is a new trend to expose cloud applications as services in order to simplify access and execution on clouds. This paper first examines commonly used cloud-based genomic analysis services (Tuxedo Suite, Galaxy and Cloud Bio Linux). As a follow-up, we propose two new solutions (HPCaaS and Uncinus) that aim to automate aspects of the service development and deployment process. By comparing and contrasting these five solutions, we identify the key mechanisms of service creation, execution and access that are required to support genomic research on the SaaS cloud, in particular by discipline specialists. © 2014 IEEE.
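
To illustrate the IaaS-level burden referred to above (this is not code from the paper, and the provider, AMI id and instance sizes are placeholder assumptions), requesting raw virtual servers through a cloud SDK still leaves all cluster assembly and software configuration to the researcher:

```python
# Hedged illustration of the IaaS burden: a cloud SDK (AWS boto3 used here as
# an example provider) returns bare virtual servers; turning them into a
# usable HPC/genomics environment is still the user's job.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
nodes = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image id
    InstanceType="c5.4xlarge",
    MinCount=4,
    MaxCount=4,
)
# After this point the researcher must still install a scheduler, mount shared
# storage, and deploy the analysis pipeline on every node -- exactly the kind
# of work that SaaS-level genomic services aim to automate.
```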

Relevance:

10.00%

Publisher:

Abstract:

Three significant events at the start of 2015 have put freedom of speech firmly on the global agenda. The first was the carry-over from the December 2014 illegal entry into the Sony Corporation's file servers by anonymous hackers, believed to be linked to the North Korean regime. The second was the horrible attack on journalists, editors, and cartoonists at the French satirical magazine Charlie Hebdo on 7 January. The third was the election of the left-wing anti-austerity party Syriza in Greece on 25 January.

While each event is different in scope and size, they are important to scholars of the political economy of communication because they all speak to ongoing debates about freedom of expression, freedom of speech and freedom of the press. I name each of these concepts separately because, despite popular confusion, they are not the same thing (Patching and Hirst, 2014). Freedom of expression is the right to individual self-expression through any means; it is an inalienable human right. Freedom of speech refers to the right (and the physical ability) to utter political speech, to say what others wish to repress and to demand a voice with which to express a range of social and political thoughts. Freedom of the press is a very particular version of freedom of expression that is intimately bound up with the political economy of speech and of the printing press. Freedom of the press is impossible without the press and, despite its theoretical availability to all of us, this principle is impossible to exercise without the material means (usually money) to actually deploy a printing press (or the electronic means of broadcasting and publishing).

Freedom of expression is immutable; freedom of speech is subject to legal, ethical and ideological restriction (for better or worse); and freedom of the press is peculiar to bourgeois society in that it entails the freedom to own and operate a press, not the right to say or publish on a level playing field. Access to freedom of the press is determined in the marketplace and is subject to the unequal power relationships that such determination implies.

It is fitting to start with the Charlie Hebdo massacre, because the loss of 17 lives makes it the most chilling of the three events and demands that it be given prominence in any analysis. No lives have been lost as a result of the hacking of Sony's computers, and the election of Syriza has not (yet) led to mass deaths in Greece.

Relevance:

10.00%

Publisher:

Abstract:

Web servers are usually located in well-organized data centers, where they connect to the outside Internet directly through backbones. Meanwhile, application-layer distributed denial of service (AL-DDoS) attacks are critical threats to the Internet, particularly to business web servers. There are currently some methods designed to handle AL-DDoS attacks, but most of them cannot be used on heavy backbones. In this paper, we propose a new method to detect AL-DDoS attacks. Our work distinguishes itself from previous methods by considering AL-DDoS attack detection in heavy backbone traffic. Moreover, AL-DDoS detection is easily misled by flash crowd traffic. To overcome this problem, our proposed method constructs a Real-time Frequency Vector (RFV) and characterizes the traffic in real time as a set of models. By examining the entropy of AL-DDoS attacks and flash crowds, these models can be used to recognize real AL-DDoS attacks. We integrate the above detection principles into a modularized defense architecture consisting of a head-end sensor, a detection module and a traffic filter. With a swift detection speed, the filter lets legitimate requests through while attack traffic is stopped. In the experiments, we use episodes of real traffic from Sina and Taobao to evaluate our AL-DDoS detection method and architecture. Compared with previous methods, the results show that our approach is very effective in defending against AL-DDoS attacks at backbones. © 2013 Elsevier B.V. All rights reserved.
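
As a hedged sketch of the entropy idea mentioned above (not the paper's RFV construction), one can compute the Shannon entropy of the request distribution inside a time window: scripted bot traffic concentrated on a few request patterns tends to produce a different entropy profile than a flash crowd of independent users spread over many pages.

```python
# Hedged illustration: compare the Shannon entropy of per-URL request
# frequencies in a time window for bot-like vs. flash-crowd-like traffic.
import math
from collections import Counter

def window_entropy(requests: list[str]) -> float:
    """Shannon entropy (bits) of the request-frequency distribution."""
    counts = Counter(requests)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: bots hammering one URL vs. a crowd spread over many articles.
bot_window = ["/login"] * 95 + ["/index"] * 5
crowd_window = [f"/article/{i % 40}" for i in range(100)]
print(window_entropy(bot_window), window_entropy(crowd_window))
```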

Relevance:

10.00%

Publisher:

Abstract:

Cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses abundant resources, over which it has full control and which it can allocate dynamically. Therefore, the cloud offers us the potential to overcome DDoS attacks. However, individual cloud-hosted servers are still vulnerable to DDoS attacks if they are run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim, in order to quickly filter out attack packets while simultaneously guaranteeing the quality of service for benign users. We establish a mathematical model, based on queueing theory, to approximate the resource investment required. Through careful system analysis and experiments on real-world data sets, we conclude that we can defeat DDoS attacks in a cloud environment. © 2013 IEEE.
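
The abstract describes the sizing model only as queueing-theoretic; as a hedged, simplified illustration (not the paper's model), if requests arrive at aggregate rate λ during an attack and each cloned intrusion-prevention server filters at rate μ, an M/M/n view requires λ < nμ for stability, so at least ⌈λ/μ⌉ filtering servers are needed, plus some utilisation headroom. The rates and headroom below are illustrative assumptions.

```python
# Hedged, simplified sizing sketch: minimum number of cloned intrusion
# prevention servers so that the M/M/n stability condition holds with margin.
import math

def servers_needed(lam: float, mu: float, target_utilisation: float = 0.8) -> int:
    """Smallest n such that lam / (n * mu) <= target_utilisation."""
    return math.ceil(lam / (mu * target_utilisation))

# e.g. 50,000 req/s of attack+legitimate traffic, 4,000 req/s per filter node.
print(servers_needed(lam=50_000, mu=4_000))  # -> 16 filtering servers
```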

Relevance:

10.00%

Publisher:

Abstract:

Recently, fields with substantial computing requirements have turned to cloud computing for economical, scalable, and on-demand provisioning of required execution environments. However, current cloud offerings focus on providing individual servers, while tasks such as application distribution and data preparation are left to cloud users. This article presents a new form of cloud called the HPC Hybrid Deakin (H2D) cloud: an experimental hybrid cloud capable of utilising both local and remote computational services for large embarrassingly parallel applications. As well as supporting execution, H2D also provides a new service, called DataVault, which provides transparent data management services so that all cloud-hosted clusters have the required datasets before commencing execution.

Relevance:

10.00%

Publisher:

Abstract:

Transparent computing is an emerging computing paradigm in which users can enjoy any kind of service over the network, on demand and on any device, without caring about the underlying deployment details. In transparent computing, all software resources (even the OS) are stored on remote servers, from which clients can request the resources for local execution in a block-streaming fashion. This paradigm has many benefits, including a cross-platform experience, user orientation, and platform independence. However, due to its fundamental features, e.g., the separation of computation (in clients) from storage (on servers) and block-streaming-based scheduling and execution, transparent computing faces many new security challenges that may become its biggest obstacle. In this paper, we propose a Transparent Computing Security Architecture (TCSA), which builds user-controlled security for transparent computing by allowing users to configure the desired security environments on demand. We envision that TCSA, which allows users to take the initiative in protecting their own data, is a promising solution for data security in transparent computing. © 2014 IEEE.