807 results for Level of service.


Relevance:

100.00%

Publisher:

Abstract:

Clinical experience plays an important role in the development of expertise, particularly when coupled with reflection on practice. There is debate, however, regarding the amount of clinical experience required to become an expert: lengths of practice suggested as suitable for determining expertise range from five to 15 years. This study aimed to investigate the association between length of experience and therapists’ level of expertise in the field of cerebral palsy with upper limb hypertonicity, using an empirical procedure named Cochrane–Weiss–Shanteau (CWS). The methodology involved re-analysis of quantitative data collected in two previous studies. In Study 1, 18 experienced occupational therapists made hypothetical clinical decisions on 110 case vignettes, while in Study 2, 29 therapists considered 60 case vignettes drawn randomly from those used in Study 1. A CWS index was calculated for each participant’s case decisions. Then, in each study, Spearman’s rho was calculated to assess the correlation between duration of experience and level of expertise. There was no significant association between these two variables in either study. These analyses corroborate previous findings of no association between length of experience and judgemental performance. Therefore, length of experience may not be an appropriate criterion for determining level of expertise in relation to cerebral palsy practice.
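A minimal sketch of how such an analysis could be reproduced, assuming the CWS index is computed in the usual way as the ratio of discrimination (variance of a judge's mean ratings across cases) to inconsistency (variance of repeated ratings of the same case); the data layout, rating scale, and number of repeats are illustrative assumptions, not details of the original studies.

```python
# Illustrative only: computes a CWS-style index per therapist and correlates it
# with years of experience using Spearman's rho. Data shapes and values are assumed.
import numpy as np
from scipy.stats import spearmanr

def cws_index(judgements: np.ndarray) -> float:
    """judgements: array of shape (n_cases, n_repeats) for one therapist.

    CWS = discrimination / inconsistency: how well the judge separates different
    cases relative to how inconsistently they re-judge the same case.
    """
    case_means = judgements.mean(axis=1)
    discrimination = case_means.var(ddof=1)                  # between-case variance
    inconsistency = judgements.var(axis=1, ddof=1).mean()    # mean within-case variance
    return discrimination / inconsistency

# Hypothetical inputs: one (cases x repeats) judgement matrix per therapist,
# plus each therapist's years of clinical experience.
rng = np.random.default_rng(0)
judgement_sets = [rng.integers(1, 8, size=(60, 2)).astype(float) for _ in range(29)]
years_experience = rng.integers(2, 30, size=29)

cws_scores = [cws_index(j) for j in judgement_sets]
rho, p_value = spearmanr(years_experience, cws_scores)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```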

Relevance:

100.00%

Publisher:

Abstract:

The loosely-coupled and dynamic nature of web services architectures has many benefits, but also leads to an increased vulnerability to denial of service attacks. While many papers have surveyed and described these vulnerabilities, such accounts are often theoretical, lack experimental data to validate them, and assume an obsolete state of web services technology. This paper describes experiments involving several denial of service vulnerabilities in well-known web services platforms, including Java Metro, Apache Axis, and Microsoft .NET. The results confirm the presence of some of the most well-known vulnerabilities in web services technologies and refute others. Specifically, major web services platforms appear to cope well with attacks that target memory exhaustion, whereas attacks targeting CPU-time exhaustion are still effective, regardless of the victim’s platform.
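A rough sketch of the kind of measurement underlying such experiments (not the paper's actual test harness): timing how long a parser spends on a deeply nested payload versus a merely large one, as a proxy for CPU-time pressure versus memory pressure. The element names, payload sizes, and use of Python's standard-library XML parser are assumptions for illustration only.

```python
# Illustrative benchmark: compares parse time for a deeply nested payload
# (CPU-oriented cost) with a single large text node (mainly a memory cost).
import time
import xml.etree.ElementTree as ET

def nested_doc(depth: int) -> str:
    """Many small, deeply nested elements -- parsing cost grows with depth."""
    return "<a>" * depth + "x" + "</a>" * depth

def bulky_doc(n_chars: int) -> str:
    """One element with a very large text node."""
    return "<a>" + "x" * n_chars + "</a>"

def parse_seconds(doc: str) -> float:
    start = time.perf_counter()
    ET.fromstring(doc)
    return time.perf_counter() - start

if __name__ == "__main__":
    for depth in (1_000, 10_000, 30_000):
        print(f"nested depth={depth:>6}: {parse_seconds(nested_doc(depth)):.4f} s")
    for size in (1_000_000, 10_000_000):
        print(f"bulky chars={size:>9}: {parse_seconds(bulky_doc(size)):.4f} s")
```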

Relevance:

100.00%

Publisher:

Abstract:

Analysis by enzyme-linked immunosorbent assay showed that Rice tungro bacilliform virus (RTBV) accumulated in a cyclic pattern from early to late stages of infection in the tungro-susceptible variety Taichung Native 1 (TN1) and the resistant variety Balimau Putih, whether singly infected with RTBV or co-infected with RTBV+Rice tungro spherical virus (RTSV). These changes in virus accumulation resulted in differences in RTBV levels and in the incidence of infection. Virus levels were expressed relative to those of the susceptible variety, and the incidence of infection was assessed at different weeks after inoculation. At any given time point, RTBV levels in TN1 or Balimau Putih singly infected with RTBV were not significantly different from the virus level in plants co-infected with RTBV+RTSV. The relative RTBV levels in Balimau Putih, either singly infected with RTBV or co-infected with RTBV+RTSV, were significantly lower than those in TN1. The incidence of RTBV infection varied over time in Balimau Putih but not in TN1, so to determine the actual incidence of infection, the number of plants that became infected at least once during the 4-week observation period was considered. Taking these changes in RTBV accumulation into account, new parameters for analyzing RTBV resistance were established. Based on these parameters, Balimau Putih was characterized as having resistance to virus accumulation, although the actual incidence of infection was >75%.
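A small illustrative calculation of the two parameters described: relative RTBV level (the level in the test variety expressed as a fraction of the level in the susceptible variety at the same time point) and actual incidence (the proportion of plants that tested positive at least once during the observation period). The readings, positivity threshold, and plant counts below are invented for illustration; they are not data from the study.

```python
# Hypothetical ELISA readings (arbitrary absorbance units), for illustration only.
import numpy as np

# rows = plants, columns = weeks 1-4 after inoculation
balimau_putih = np.array([[0.2, 0.0, 0.3, 0.1],
                          [0.0, 0.4, 0.0, 0.2],
                          [0.0, 0.0, 0.0, 0.0]])
tn1           = np.array([[1.1, 0.9, 1.3, 1.0],
                          [0.8, 1.2, 1.0, 0.9],
                          [1.0, 1.1, 0.9, 1.2]])
threshold = 0.1   # assumed ELISA positivity cut-off

# Relative RTBV level: mean level in the test variety divided by the mean level
# in the susceptible variety (TN1) at each week.
relative_level = balimau_putih.mean(axis=0) / tn1.mean(axis=0)

# Actual incidence: fraction of plants positive at least once over the 4 weeks.
incidence = (balimau_putih > threshold).any(axis=1).mean()

print("relative RTBV level by week:", np.round(relative_level, 2))
print(f"actual incidence of infection: {incidence:.0%}")
```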

Relevance:

100.00%

Publisher:

Abstract:

The Denial of Service Testing Framework (dosTF), being developed as part of the joint India-Australia research project ‘Protecting Critical Infrastructure from Denial of Service Attacks’, allows for the construction, monitoring and management of emulated Distributed Denial of Service attacks using modest hardware resources. The purpose of the testbed is to study the effectiveness of different DDoS mitigation strategies and to allow for the testing of defence appliances. Experiments are saved and edited in XML as abstract descriptions of an attack/defence strategy that is only mapped to real resources at run-time. The framework also provides a web-application portal interface that can start, stop and monitor an attack remotely. Rather than monitoring a service under attack indirectly by observing traffic and general system parameters, the target application is monitored directly in real time via a customised SNMP agent.
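A minimal sketch of what direct monitoring over SNMP can look like, using the pysnmp library to poll a single OID from an agent on the target host. The host address, community string, and OID below are placeholders; the customised agent's actual MIB is not described in the abstract and is not reproduced here.

```python
# Illustrative SNMP poll of a monitored target (not dosTF's actual agent or MIB).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll(host: str, oid: str, community: str = "public"):
    """Fetch one value from an SNMP agent running on the target host."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),          # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),
    ))
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return var_binds[0][1]

if __name__ == "__main__":
    # sysUpTime used as a stand-in for an application-level metric that a
    # customised agent might expose (hypothetical target address).
    print(poll("192.0.2.10", "1.3.6.1.2.1.1.3.0"))
```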

Relevance:

100.00%

Publisher:

Abstract:

Studies indicate that project success should be viewed from the different perspectives of the individual stakeholders. Project managers act as the owner’s agents. To allow early corrective action when a project deviates from plan, it is essential that project managers accurately report the success perceived by stakeholders, yet there has been little systematic research in this area. The aim of this paper is to report the findings of an empirical study that compares the level of alignment between project managers and key stakeholders on a list of project performance indicators. A telephone survey involving 18 managers of complex projects and various key project stakeholder groups was conducted. Krippendorff’s alpha reliability test was used to assess the level of alignment between project managers and stakeholders. Although the overall level of agreement between project managers and stakeholders was only moderate, the results identified 12 performance indicators with a significant level of agreement between the two groups.
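A brief sketch of an agreement check of this kind using the krippendorff Python package, treating each rater's scores on the indicators as one row of the reliability matrix. The ratings, rating scale, and choice of an ordinal level of measurement are assumptions for illustration, not the study's data or exact procedure.

```python
# Illustrative reliability check: agreement between a project manager and
# stakeholders on a set of performance indicators (invented ratings).
import numpy as np
import krippendorff

# rows = raters (manager + stakeholders), columns = performance indicators,
# values = rating on a 1-5 scale, np.nan where a rater gave no answer.
ratings = np.array([
    [4, 5, 3, 4, 2, 5, 4, 3],        # project manager
    [4, 4, 3, 5, 2, 5, 3, 3],        # stakeholder A
    [5, 4, 2, 4, np.nan, 4, 4, 3],   # stakeholder B
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```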

Relevance:

100.00%

Publisher:

Abstract:

Client puzzles are meant to act as a defense against denial of service (DoS) attacks by requiring a client to solve some moderately hard problem before being granted access to a resource. However, recent client puzzle difficulty definitions (Stebila and Ustaoglu, 2009; Chen et al., 2009) do not ensure that solving n puzzles is n times harder than solving one puzzle. Motivated by examples of puzzles for which solving n puzzles is not n times harder than solving one, we present stronger definitions of difficulty for client puzzles that are meaningful in the context of adversaries with more computational power than is required to solve a single puzzle. A protocol using strong client puzzles may still not be secure against DoS attacks if the puzzles are not used in a secure manner. We therefore describe a security model for analyzing the DoS resistance of any protocol in the context of client puzzles and give a generic technique for combining any protocol with a strong client puzzle to obtain a DoS-resistant protocol.
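A minimal sketch of the kind of hash-based client puzzle such difficulty definitions apply to: the server issues a random challenge, and the client must find a suffix whose hash begins with k zero bits, which costs roughly 2^k hash evaluations per puzzle while verification costs one. This is the generic construction, shown for illustration; it is not the specific puzzles or definitions analysed in the paper.

```python
# Generic hash-based client puzzle (illustration only).
import hashlib
import os
from itertools import count

def has_k_zero_bits(digest: bytes, k: int) -> bool:
    """True if the digest starts with at least k zero bits."""
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - k) == 0

def make_puzzle(k: int) -> tuple[bytes, int]:
    """Server side: a fresh random challenge plus the difficulty parameter k."""
    return os.urandom(16), k

def solve(challenge: bytes, k: int) -> int:
    """Client side: brute force a solution; expected cost is about 2**k hashes."""
    for candidate in count():
        digest = hashlib.sha256(challenge + candidate.to_bytes(8, "big")).digest()
        if has_k_zero_bits(digest, k):
            return candidate

def verify(challenge: bytes, k: int, solution: int) -> bool:
    """Server side: a single hash evaluation, regardless of k."""
    digest = hashlib.sha256(challenge + solution.to_bytes(8, "big")).digest()
    return has_k_zero_bits(digest, k)

challenge, k = make_puzzle(k=16)
solution = solve(challenge, k)
assert verify(challenge, k, solution)
```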

Relevance:

100.00%

Publisher:

Abstract:

Gradual authentication is a principle proposed by Meadows as a way to tackle denial-of-service attacks on network protocols by gradually increasing the confidence in clients before the server commits resources. In this paper, we propose an efficient method that allows a defending server to authenticate its clients gradually with the help of some fast-to-verify measures. Our method integrates hash-based client puzzles along with a special class of digital signatures supporting fast verification. Our hash-based client puzzle provides finer granularity of difficulty and is proven secure in the puzzle difficulty model of Chen et al. (2009). We integrate this with the fast-verification digital signature scheme proposed by Bernstein (2000, 2008). These schemes can be up to 20 times faster for client authentication compared to RSA-based schemes. Our experimental results show that, in the Secure Sockets Layer (SSL) protocol, fast verification digital signatures can provide a 7% increase in connections per second compared to RSA signatures, and our integration of client puzzles with client authentication imposes no performance penalty on the server since puzzle verification is a part of signature verification.
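A sketch of the ordering that gradual authentication relies on: the server spends a few cheap hash operations checking the puzzle solution and only then performs the comparatively expensive public-key signature verification, so bogus requests are rejected at the cheap step. Ed25519 from the cryptography package is used below purely as a stand-in for a fast-verification signature scheme; the paper's actual construction uses Bernstein's scheme and its own puzzle, which are not reproduced here.

```python
# Gradual authentication sketch: cheap puzzle check first, costly signature
# verification only if the puzzle solution is valid.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

K = 16  # puzzle difficulty: ~2**K hashes for the client, one hash for the server

def puzzle_ok(challenge: bytes, solution: bytes) -> bool:
    digest = hashlib.sha256(challenge + solution).digest()
    return int.from_bytes(digest, "big") >> (256 - K) == 0

def handle_request(challenge: bytes, solution: bytes, message: bytes,
                   signature: bytes, client_key: Ed25519PublicKey) -> bool:
    if not puzzle_ok(challenge, solution):      # cheap: one hash
        return False                            # reject before doing real work
    try:
        client_key.verify(signature, message)   # the more expensive step
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    sk = Ed25519PrivateKey.generate()
    challenge, message = b"server-nonce", b"hello"
    solution = next(i.to_bytes(8, "big") for i in range(1 << 24)
                    if puzzle_ok(challenge, i.to_bytes(8, "big")))
    print(handle_request(challenge, solution, message,
                         sk.sign(message), sk.public_key()))
```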

Relevance:

100.00%

Publisher:

Abstract:

The interoperable and loosely-coupled web services architecture, while beneficial, can be resource-intensive and is thus susceptible to denial of service (DoS) attacks, in which an attacker can use a relatively insignificant amount of resources to exhaust a web service’s computational capacity. We investigate the effectiveness of defending web services from DoS attacks using client puzzles, a cryptographic countermeasure that provides a form of gradual authentication by requiring the client to solve some computationally difficult problems before access is granted. In particular, we describe a mechanism for integrating a hash-based puzzle into existing web services frameworks and analyze the effectiveness of the countermeasure using a variety of scenarios on a network testbed. Client puzzles are an effective defence against flooding attacks. They can also mitigate certain types of semantic-based attacks, although they may not be the optimal solution.
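A simplified sketch of the integration point: a WSGI-style middleware that refuses to hand a request to the (expensive) service unless the request carries a valid solution to a previously issued puzzle. The header names, difficulty, and in-memory challenge store are invented for illustration; the paper integrates the puzzle into web services frameworks (SOAP message handling), which is not reproduced here.

```python
# Illustrative WSGI middleware: admit a request to the costly handler only if it
# presents a valid puzzle solution; otherwise issue a fresh challenge.
import hashlib
import os

K = 16                      # puzzle difficulty (leading zero bits)
ISSUED = set()              # challenges handed out and not yet consumed

def issue_challenge() -> str:
    challenge = os.urandom(16).hex()
    ISSUED.add(challenge)
    return challenge

def solution_ok(challenge: str, solution: str) -> bool:
    digest = hashlib.sha256(bytes.fromhex(challenge) + solution.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - K) == 0

class PuzzleGate:
    """Wraps an existing WSGI app; rejects requests without a valid solution."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        challenge = environ.get("HTTP_X_PUZZLE_CHALLENGE", "")
        solution = environ.get("HTTP_X_PUZZLE_SOLUTION", "")
        if challenge in ISSUED and solution_ok(challenge, solution):
            ISSUED.discard(challenge)            # one-time use
            return self.app(environ, start_response)
        start_response("429 Too Many Requests",
                       [("X-Puzzle-Challenge", issue_challenge())])
        return [b"solve the puzzle and retry"]
```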

Relevance:

100.00%

Publisher:

Abstract:

Distributed Denial-of-Service (DDoS) attacks continue to be one of the most pernicious threats to the delivery of services over the Internet. Not only are DDoS attacks present in many guises, they are also continuously evolving as new vulnerabilities are exploited. Hence, accurate detection of these attacks remains a challenging problem and a necessity for ensuring high-end network security. An intrinsic challenge in addressing this problem is to effectively distinguish these Denial-of-Service attacks from similar-looking Flash Events (FEs) created by legitimate clients. A considerable overlap between the general characteristics of FEs and DDoS attacks makes it difficult to precisely separate these two classes of Internet activity. In this paper we propose parameters which can be used to explicitly distinguish FEs from DDoS attacks and analyse two real-world publicly available datasets to validate our proposal. Our analysis shows that even though FEs appear very similar to DDoS attacks, there are several subtle dissimilarities which can be exploited to separate these two classes of events.
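A toy sketch of the kind of per-interval traffic features one might compute when trying to separate the two event classes, such as the fraction of previously unseen source addresses and the mean number of requests per source. These particular features are common in the literature but are assumptions here; the abstract does not state the paper's actual parameters.

```python
# Toy feature extraction over request logs (timestamp, source IP); illustrative
# features only -- not the parameters proposed in the paper.
from collections import Counter, defaultdict

def interval_features(requests, interval=60):
    """requests: iterable of (unix_timestamp, source_ip) pairs."""
    seen = set()
    buckets = defaultdict(list)
    for ts, src in sorted(requests):
        buckets[int(ts // interval)].append(src)
    features = []
    for bucket in sorted(buckets):
        sources = buckets[bucket]
        counts = Counter(sources)
        new_fraction = sum(1 for s in counts if s not in seen) / len(counts)
        per_source_rate = len(sources) / len(counts)
        seen.update(counts)
        features.append({"interval": bucket,
                         "new_source_fraction": new_fraction,
                         "requests_per_source": per_source_rate})
    return features

# During a flash event most sources are typically ones seen before and
# per-source rates stay modest; many DDoS attacks instead show a surge of new
# sources and/or unusually high per-source request rates.
```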