830 results for "Theory proposed by Habermas"


Relevance:

100.00%

Publisher:

Abstract:

Gradual authentication is a principle proposed by Meadows as a way to tackle denial-of-service attacks on network protocols by gradually increasing the confidence in clients before the server commits resources. In this paper, we propose an efficient method that allows a defending server to authenticate its clients gradually with the help of some fast-to-verify measures. Our method integrates hash-based client puzzles along with a special class of digital signatures supporting fast verification. Our hash-based client puzzle provides finer granularity of difficulty and is proven secure in the puzzle difficulty model of Chen et al. (2009). We integrate this with the fast-verification digital signature scheme proposed by Bernstein (2000, 2008). These schemes can be up to 20 times faster for client authentication compared to RSA-based schemes. Our experimental results show that, in the Secure Sockets Layer (SSL) protocol, fast verification digital signatures can provide a 7% increase in connections per second compared to RSA signatures, and our integration of client puzzles with client authentication imposes no performance penalty on the server since puzzle verification is a part of signature verification.
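The abstract does not spell out the puzzle construction, but a generic hash-based client puzzle of the kind it builds on can be sketched in a few lines. The sketch below (hash a server nonce with a brute-forced counter until the digest falls below a difficulty-dependent target) is a standard illustration, not the authors' exact scheme; `make_puzzle`, `solve_puzzle` and `verify_puzzle` are hypothetical names.

```python
import hashlib
import os

def make_puzzle(difficulty_bits):
    """Server side: issue a fresh random nonce and a difficulty level."""
    return os.urandom(16), difficulty_bits

def solve_puzzle(nonce, difficulty_bits):
    """Client side: brute-force a counter until the hash falls below the
    target; expected work is about 2**difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return counter
        counter += 1

def verify_puzzle(nonce, difficulty_bits, solution):
    """Server side: a single hash evaluation, cheap even under attack."""
    digest = hashlib.sha256(nonce + solution.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce, bits = make_puzzle(8)       # low difficulty so the demo is fast
solution = solve_puzzle(nonce, bits)
assert verify_puzzle(nonce, bits, solution)
```

Note the asymmetry: the client performs roughly 2**difficulty_bits hash evaluations while the server verifies with one, which is what makes puzzles useful against denial of service. An integer threshold target also gives finer difficulty granularity than whole leading-zero bytes.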


Background The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Due to the problems caused by the uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment cannot be directly applied to whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has been successfully applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results In this paper, the DL method is used to analyze the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions The present method provides a new way of recovering the phylogeny of large dsDNA viruses and parvoviruses, and also offers some insights into the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used for recovering the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses, whose genomes are much smaller.
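The DL method itself is not detailed in the abstract; the minimal sketch below only illustrates the general alignment-free idea it belongs to, comparing sequences through k-mer frequency profiles rather than alignments. The sequences and the choice of cosine distance are illustrative assumptions, not the DL method.

```python
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=3):
    """Normalised k-mer frequency vector of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_distance(p, q):
    """1 - cosine similarity between two sparse frequency vectors."""
    dot = sum(p[w] * q.get(w, 0.0) for w in p)
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / (norm_p * norm_q)

a = "ATGGCGTACGTTAGCATGGCGT"   # made-up sequences for illustration
b = "ATGGCGTACGTTAGCATGGCAT"   # close to a: one substitution
c = "TTTTAAAACCCCGGGGTTTTAA"   # very different composition

# Alignment-free comparison: the close pair is nearer than the distant one.
assert cosine_distance(kmer_profile(a), kmer_profile(b)) \
     < cosine_distance(kmer_profile(a), kmer_profile(c))
```

Because no alignment is needed, such profile distances can be computed between whole genomes of very different sizes, which is the property the abstract exploits.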


The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral–texture feature descriptor is proposed by incorporating spectral vegetation indices into statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used, with a support vector machine as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images, and different feature vectors are extracted to represent each tree crown. The experimental results show that the proposed spectral moment features outperform, or at least match, state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic space analysis further demonstrates the strength of the proposed feature descriptors.
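As a toy illustration of combining a spectral vegetation index with statistical moment features, the sketch below computes NDVI over hypothetical per-crown reflectance samples and summarises it with moments. The band values and the choice of NDVI are assumptions for illustration; the paper's actual descriptor may differ.

```python
from math import sqrt

def ndvi(red, nir):
    """Normalised Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def spectral_moments(red_band, nir_band):
    """Mean, standard deviation and skewness of NDVI values inside one
    segmented tree crown: a toy spectral moment descriptor."""
    values = [ndvi(r, n) for r, n in zip(red_band, nir_band)]
    n = len(values)
    mean = sum(values) / n
    std = sqrt(sum((v - mean) ** 2 for v in values) / n)
    skew = sum((v - mean) ** 3 for v in values) / n / std ** 3 if std else 0.0
    return [mean, std, skew]

# Hypothetical reflectance samples (red, near-infrared) for one crown:
red = [0.08, 0.10, 0.07, 0.09, 0.11]
nir = [0.52, 0.48, 0.55, 0.50, 0.45]
features = spectral_moments(red, nir)
assert len(features) == 3 and 0.0 < features[0] < 1.0
```

Each segmented crown would yield one such feature vector, which is then fed to the support vector machine classifier.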


Aims. This article reports a study conducted to identify how renal nurses experience information about renal care and the information practices they use to support everyday practice. Background. What counts as nursing knowledge remains a contested area in the discipline, yet little research has been undertaken. Information practice encompasses a range of activities such as seeking, evaluating and sharing information. The ability to make informed judgements depends on nurses being able to identify relevant sources of information that inform their practice, and those sources may in turn reveal what knowledge is important to nursing practice. Method. The study was philosophically framed from a practice perspective, informed by Habermas and Schatzki, and employed qualitative research techniques. Using purposive sampling, six registered nurses working in two regional renal units were interviewed during 2009, and the data were thematically analysed. Findings. The information practices of renal nurses involved mapping an information landscape in which they drew on information obtained from epistemic, social and corporeal sources. They also used coupling, a process of drawing together information from a range of sources, to enable them to practise. Conclusion. Exploring how nurses engage with information, and the role information plays in situating and enacting epistemic, social and corporeal knowledge in everyday nursing practice, is instructive because it indicates that nurses must engage with all three modalities in order to perform effectively, efficiently and holistically in the context of patient care. © 2011 The Authors. Journal of Advanced Nursing © 2011 Blackwell Publishing Ltd.


Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains; continuous state, observation and control spaces; multiple agents; higher-order derivatives; and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. ©2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
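The core GPOMDP update can be sketched on a toy problem: a single eligibility trace z is discounted by β, accumulates score-function gradients, and the running average of reward-weighted traces estimates the average-reward gradient. The two-action bandit below is an illustrative stand-in for a POMDP, not the paper's experiments; note the storage is exactly two vectors of the size of the parameter vector, matching the storage claim.

```python
import math
import random

def grad_log_softmax(theta, action):
    """Score function: gradient of log pi(action | theta) for a
    two-action softmax policy with one parameter per action."""
    e0, e1 = math.exp(theta[0]), math.exp(theta[1])
    p = [e0 / (e0 + e1), e1 / (e0 + e1)]
    return [(1.0 if i == action else 0.0) - p[i] for i in (0, 1)]

def gpomdp_gradient(theta, beta=0.9, steps=20000, seed=0):
    """Biased estimate of the average-reward gradient built from a single
    discounted eligibility trace."""
    rng = random.Random(seed)
    z = [0.0, 0.0]        # eligibility trace
    delta = [0.0, 0.0]    # running gradient estimate
    for t in range(1, steps + 1):
        e0, e1 = math.exp(theta[0]), math.exp(theta[1])
        action = 1 if rng.random() < e1 / (e0 + e1) else 0
        reward = 1.0 if action == 1 else 0.0   # toy problem: action 1 pays 1
        g = grad_log_softmax(theta, action)
        z = [beta * z[i] + g[i] for i in (0, 1)]
        delta = [delta[i] + (reward * z[i] - delta[i]) / t for i in (0, 1)]
    return delta

grad = gpomdp_gradient([0.0, 0.0])
assert grad[1] > 0.0 > grad[0]   # raising theta[1] raises average reward
```

Larger β reduces the bias of the estimate but increases its variance, which is the trade-off the abstract ties to the mixing time of the controlled POMDP.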


A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner's long-term goal is to minimize regret. Zinkevich proposed an algorithm with provably low regret when f is assumed to be convex, and Hazan et al. when f is assumed to be strongly convex. We consider these two settings and analyze such games from a minimax perspective, proving minimax strategies and lower bounds in each case. These results show that the existing algorithms are essentially optimal.
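Zinkevich's strategy for the convex case is online gradient descent with step size proportional to 1/sqrt(t). A minimal one-dimensional sketch, with quadratic losses and targets made up for illustration:

```python
from math import sqrt

def ogd(loss_grads, radius=1.0):
    """Online gradient descent over [-radius, radius] with step 1/sqrt(t):
    Zinkevich's strategy for convex losses."""
    x, plays = 0.0, []
    for t, grad in enumerate(loss_grads, start=1):
        plays.append(x)                      # commit to a prediction
        x -= grad(x) / sqrt(t)               # descend on the revealed loss
        x = max(-radius, min(radius, x))     # project back onto the set
    return plays

# The environment plays quadratic losses f_t(x) = (x - z_t)^2 with
# alternating targets.
targets = [0.5 if t % 2 == 0 else 0.3 for t in range(200)]
plays = ogd([lambda x, z=z: 2.0 * (x - z) for z in targets])

best_fixed = sum(targets) / len(targets)     # best single point in hindsight
regret = sum((x - z) ** 2 for x, z in zip(plays, targets)) \
       - sum((best_fixed - z) ** 2 for z in targets)
assert 0.0 < regret < 20.0                   # sublinear in T = 200
```

Regret here is measured against the best fixed prediction in hindsight; the minimax analysis in the abstract shows that the O(sqrt(T)) rate this strategy achieves for convex f cannot be improved in general.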


We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis, with four key differences: OLP is simpler; it does not require knowledge of the supports of the transition probabilities; the proof of its regret bound is simpler; however, our regret bound is a constant factor larger than that of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner, but OLP is simpler and its regret bound has a better dependence on the size of the MDP.
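The optimistic inner step, maximizing expected value over transition vectors close to the empirical estimate, has a simple greedy solution when "close" means an L1 ball: shift mass toward the best next state and away from the worst ones. The sketch below shows only that inner maximization, with made-up numbers; it is a simplified stand-in for the linear programs OLP actually solves, not the OLP algorithm.

```python
def optimistic_transition(p_hat, values, radius):
    """Maximise sum_i p[i] * values[i] over probability vectors p whose
    L1 distance from the estimate p_hat is at most radius. The greedy
    solution moves up to radius/2 of probability mass onto the
    highest-value next state and takes it from the lowest-value ones."""
    p = list(p_hat)
    best = max(range(len(p)), key=lambda i: values[i])
    add = min(radius / 2.0, 1.0 - p[best])
    p[best] += add
    for i in sorted(range(len(p)), key=lambda i: values[i]):
        if i == best or add <= 0.0:
            continue
        take = min(p[i], add)                # remove mass from bad states
        p[i] -= take
        add -= take
    return p

p_hat = [0.5, 0.3, 0.2]      # empirical transition estimate (made up)
values = [1.0, 5.0, 2.0]     # estimated value of each next state (made up)
p_opt = optimistic_transition(p_hat, values, radius=0.2)

assert abs(sum(p_opt) - 1.0) < 1e-12
assert sum(p * v for p, v in zip(p_opt, values)) \
    >= sum(p * v for p, v in zip(p_hat, values))
```

Because the feasible set is a polytope and the objective is linear, this greedy solution coincides with the solution of the corresponding linear program.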


As the service-oriented architecture (SOA) paradigm has become ever more popular, different standardization efforts have been proposed by various consortia to enable interaction among heterogeneous environments through this paradigm. This chapter overviews the most prevalent of these SOA efforts. It first shows how technical services can be described, how they can interact with each other, and how they can be discovered by users. Next, the chapter presents different standards that facilitate service composition and the design of service-oriented environments in light of a universal understanding of service orientation. The chapter concludes with a summary and a discussion of the limitations of the reviewed standards regarding their ability to describe service properties. This paves the way for the following chapters, where the USDL standard, which aims to lift these limitations, is presented.


Between 2001 and 2005, the US airline industry faced financial turmoil. At the same time, the European airline industry entered a period of substantive deregulation. These combined events created opportunities for low-cost carriers to become more competitive in the market. To help assess airline performance in their aftermath, this paper provides new evidence of technical efficiency for 42 national and international airlines in 2006, using the data envelopment analysis (DEA) bootstrap approach first proposed by Simar and Wilson (J Econ, 136:31-64, 2007). In the first stage, technical efficiency scores are estimated using a bootstrap DEA model. In the second stage, a truncated regression is employed to quantify the economic drivers underlying measured technical efficiency. The results highlight the key role played by non-discretionary inputs in measures of airline technical efficiency.


We consider time-space fractional reaction diffusion equations in two dimensions. This equation is obtained from the standard reaction diffusion equation by replacing the first order time derivative with the Caputo fractional derivative, and the second order space derivatives with the fractional Laplacian. Using the matrix transfer technique proposed by Ilic, Liu, Turner and Anh [Fract. Calc. Appl. Anal., 9:333--349, 2006] and the numerical solution strategy used by Yang, Turner, Liu, and Ilic [SIAM J. Scientific Computing, 33:1159--1180, 2011], the solution of the time-space fractional reaction diffusion equations in two dimensions can be written in terms of a matrix function vector product $f(A)b$ at each time step, where $A$ is an approximate matrix representation of the standard Laplacian. We use the finite volume method over unstructured triangular meshes to generate the matrix $A$, which is therefore non-symmetric. However, the standard Lanczos method for approximating $f(A)b$ requires that $A$ is symmetric. We propose a simple and novel transformation in which the standard Lanczos method is still applicable to find $f(A)b$, despite the loss of symmetry. Numerical results are presented to verify the accuracy and efficiency of our newly proposed numerical solution strategy.
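The abstract does not give the transformation, but one standard trick of this kind can be sketched: if the non-symmetric matrix has the form A = D^{-1} S with D diagonal and S symmetric (a structure finite volume discretisations can produce), then B = D^{1/2} A D^{-1/2} is symmetric and f(A)b can be recovered from f(B), so symmetric Lanczos still applies. The sketch below assumes that structure and uses f(X) = X^2 as a stand-in matrix function on a 2x2 example; it is an illustration of the idea, not necessarily the authors' exact transformation.

```python
from math import sqrt

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(len(v))) for i in range(len(X))]

# A non-symmetric matrix of the assumed form A = D^{-1} S, with D diagonal
# and S symmetric (made-up 2x2 numbers).
D = [2.0, 0.5]
S = [[3.0, 1.0], [1.0, 2.0]]
A = [[S[i][j] / D[i] for j in range(2)] for i in range(2)]
assert A[0][1] != A[1][0]                    # A itself is not symmetric

# B = D^{1/2} A D^{-1/2} is symmetric, so symmetric Lanczos applies to B.
B = [[sqrt(D[i]) * A[i][j] / sqrt(D[j]) for j in range(2)] for i in range(2)]
assert abs(B[0][1] - B[1][0]) < 1e-12

# Recover f(A) b from f(B), here with f(X) = X^2 standing in for the
# matrix function of the fractional diffusion solver.
b = [1.0, -1.0]
direct = matvec(matmul(A, A), b)             # f(A) b computed directly
w = [sqrt(D[i]) * b[i] for i in range(2)]    # D^{1/2} b
u = matvec(matmul(B, B), w)                  # f(B) D^{1/2} b
via_B = [u[i] / sqrt(D[i]) for i in range(2)]
assert all(abs(x - y) < 1e-12 for x, y in zip(direct, via_B))
```

Since B is similar to A, f(A) = D^{-1/2} f(B) D^{1/2} holds for any matrix function defined on the spectrum, which is what lets a symmetric Lanczos approximation of f(B) stand in for f(A).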


Boundaries are an important field of study because they mediate almost every aspect of organizational life. They are becoming increasingly important as organizations change more frequently and yet, despite the endemic use of the boundary metaphor in common organizational parlance, they are poorly understood. Organizational boundaries are under-theorized, and researchers in related fields often simply assume their existence without defining them. The literature on organizational boundaries is fragmented, with no unifying theoretical basis. As a result, when it is recognized that an organizational boundary is "dysfunctional", there is little recourse to models on which to base remediating action. This research sets out to develop just such a theoretical model and is guided by the general question: "What is the nature of organizational boundaries?" It is argued that organizational boundaries can be conceptualised through elements of both social structure and social process. Elements of structure include objects, coupling, properties and identity. Social processes include objectification, identification, interaction and emergence. All of these elements are integrated by a core category, or basic social process, called boundary weaving. An organizational boundary is a complex system of objects and emergent properties that are woven together by people as they interact, objectifying the world around them, identifying with these objects and creating couplings of varying strength and polarity, as well as their own fragmented identity. Organizational boundaries are characterised by a multiplicity of interconnections, a particular domain of objects, varying levels of embodiment and patterns of interaction. The theory developed in this research emerged from an exploratory, qualitative research design employing grounded theory methodology.
The field data was collected from the training headquarters of the New Zealand Army using semi-structured interviews and follow-up observations. The unit of analysis is an organizational boundary. Only one research context was used because of the richness and multiplicity of organizational boundaries present within it. The model arose, grounded in the data collected, through a process of theoretical memoing and constant comparative analysis. Academic literature was used as a source of data to aid theory development and the saturation of some central categories. The final theory is classified as middle range, being substantive rather than formal, and is generalizable across medium to large organizations in low-context societies. The main limitation of the research arose from its breadth, with multiple lines of inquiry spanning several academic disciplines and some relevant areas, such as the role of identity and complexity, addressed at a necessarily high level. The organizational boundary theory developed by this research replaces the typology approaches typical of previous theory on organizational boundaries and reconceptualises the nature of groups in organizations as well as the role of "boundary spanners". It also has implications for any theory that relies on the concept of boundaries, such as general systems theory. The main contribution of this research is the development of a holistic model of organizational boundaries, including an explanation of the multiplicity of boundaries: no organization has a single definable boundary. A significant aspect of this contribution is the integration of aspects of complexity theory and identity theory to explain the emergence of higher-order properties of organizational boundaries and of organizational identity. The core category of "boundary weaving" is a powerful new metaphor that significantly reconceptualises the way organizational boundaries may be understood in organizations.
It invokes secondary metaphors such as the weaving of an organization's "boundary fabric", and provides managers with other metaphorical perspectives, such as the management of boundary friction, boundary tension, boundary permeability and boundary stability. Opportunities for future research reside in formalising and testing the theory, as well as in developing analytical tools that would enable managers in organizations to apply the theory in practice.


Cold-formed steel stud walls are a major component of Light Steel Framing (LSF) building systems used in commercial, industrial and residential buildings. In conventional LSF stud wall systems, thin steel studs are protected from fire by placing one or two layers of plasterboard on both sides, with or without cavity insulation. However, there is very limited data on the structural and thermal performance of stud wall systems, and past research has shown contradictory results, for example about the benefits of cavity insulation. This research was therefore conducted to improve the knowledge and understanding of the structural and thermal performance of cold-formed steel stud wall systems (both load bearing and non-load bearing) under fire conditions, and to develop new improved stud wall systems, including reliable and simple methods to predict their fire resistance rating. Full scale fire tests of cold-formed steel stud wall systems formed the basis of this research. This research proposed an innovative LSF stud wall system in which a composite panel, made of two plasterboards with insulation between them, was used to improve the fire rating. Hence fire tests included both conventional steel stud walls, with and without cavity insulation, and the new composite panel system. A propane-fired gas furnace was specially designed and constructed first. The furnace was designed to deliver heat in accordance with the standard time-temperature curve proposed by AS 1530.4 (SA, 2005). A compression loading frame capable of loading the individual studs of a full scale steel stud wall system was also designed and built for the load-bearing tests. Fire tests included comprehensive time-temperature measurements across the thickness and along the length of all the specimens using K-type thermocouples. They also included measurements of the load-deformation characteristics of stud walls until failure.
The first phase of fire tests included 15 small scale fire tests of gypsum plasterboards and composite panels using different types of insulating material of varying thickness and density. The fire performance of single and multiple layers of gypsum plasterboards was assessed, including the effect of the interfaces between adjacent plasterboards on thermal performance. The effects of insulation materials such as glass fibre, rock fibre and cellulose fibre were also determined, and the tests provided important data on the temperature at which fall-off of the external plasterboards occurred. In the second phase, nine small scale non-load bearing wall specimens were tested to investigate the thermal performance of conventional and innovative steel stud wall systems. The effects of single and multiple layers of plasterboards, with and without vertical joints, were investigated. The new composite panels were seen to offer greater thermal protection to the studs in comparison to the conventional panels. In the third phase of fire tests, nine full scale load bearing wall specimens were tested to study the thermal and structural performance of the load bearing wall assemblies. A full scale test was also conducted at ambient temperature. These tests showed that the use of cavity insulation led to inferior fire performance of walls, and provided good explanations and supporting research data to overcome incorrect industry assumptions about cavity insulation. They demonstrated that the use of insulation externally in a composite panel enhanced the thermal and structural performance of stud walls and increased their fire resistance rating significantly. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. This research also included steady state tensile tests at ambient and elevated temperatures to address the lack of reliable mechanical properties for high grade cold-formed steels at elevated temperatures.
Suitable predictive equations were developed for calculating the yield strength and elastic modulus at elevated temperatures. In summary, this research has developed comprehensive experimental thermal and structural performance data for both the conventional and the proposed non-load bearing and load bearing stud wall systems under fire conditions. Idealized hot flange temperature profiles have been developed for non-insulated, cavity insulated and externally insulated load bearing wall models along with suitable equations for predicting their failure times. A graphical method has also been proposed to predict the failure times (fire rating) of non-load bearing and load bearing walls under different load ratios. The results from this research are useful to both fire researchers and engineers working in this field. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF walls under fire conditions, and developed an innovative LSF wall system with increased fire rating. It has clearly demonstrated the detrimental effects of using cavity insulation, and has paved the way for Australian building industries to develop new wall panels with increased fire rating for commercial applications worldwide.


Just Fast Keying (JFK) is a simple, efficient and secure key exchange protocol proposed by Aiello et al. (ACM TISSEC, 2004). JFK is well known for its novel design features, notably its resistance to denial-of-service (DoS) attacks. Using Meadows’ cost-based framework, we identify a new DoS vulnerability in JFK. The JFK protocol is claimed secure in the Canetti-Krawczyk model under the Decisional Diffie-Hellman (DDH) assumption. We show that security of the JFK protocol, when reusing ephemeral Diffie-Hellman keys, appears to require the Gap Diffie-Hellman (GDH) assumption in the random oracle model. We propose a new variant of JFK that avoids the identified DoS vulnerability and provides perfect forward secrecy even under the DDH assumption, achieving the full security promised by the JFK protocol.
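At its core JFK runs an authenticated Diffie-Hellman exchange, and the subtlety identified above concerns reusing one ephemeral exponent across sessions. The toy sketch below shows only the basic ephemeral exchange, over a Mersenne-prime group that is far too small and structured for real use; it is context for the abstract, not the JFK protocol itself.

```python
import secrets

# Toy Diffie-Hellman group: P = 2**127 - 1 is a Mersenne prime, fine for a
# demo but far too small and structured for real deployments.
P = (1 << 127) - 1
G = 3

def keypair():
    """Fresh ephemeral key: a new secret exponent per session gives forward
    secrecy. Reusing the exponent across sessions (a performance option in
    JFK) ties sessions together, which is where the stronger Gap
    Diffie-Hellman assumption enters."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b        # both sides derive the same secret
```

The exponentiations above are exactly the expensive operations a DoS attacker tries to force a server to perform prematurely, which is why JFK defers them until the client has shown some evidence of good faith.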


For millennia, humans have sought, organized, and used information as they learned and evolved patterns of information behavior to resolve their problems and survive. However, despite the current focus on living in an "information age," we have a limited evolutionary understanding of human information behavior. In this article the authors examine the three current interdisciplinary approaches to conceptualizing how humans have sought information: (a) the everyday life information seeking-sense-making approach, (b) the information foraging approach, and (c) the problem-solution perspective on information seeking. In addition, because of the lack of clarity regarding the role of information use in information behavior, a fourth approach is provided based on a theory of information use. The proposed use theory starts from the evolutionary psychology notion that humans are able to adapt to their environment and survive because of their modular cognitive architecture. Finally, the authors begin the process of conceptualizing these diverse approaches, and their various aspects and elements, within an integrated model with consideration of information use. An initial integrated model of these different approaches with information use is proposed.


Client puzzles are moderately hard cryptographic problems, neither trivially easy nor impossible to solve, that can be used as a countermeasure against denial-of-service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial-of-service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than the time-lock puzzle of Rivest et al.
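The time-lock puzzle of Rivest, Shamir and Wagner, mentioned above as a baseline, illustrates why modular exponentiation gives non-parallelisable, deterministic-time puzzles: the solver must perform t sequential squarings, while the creator, who knows the factorisation of the modulus, shortcuts via Euler's theorem. A toy sketch with deliberately small primes (real puzzles use large moduli):

```python
# Time-lock puzzle of Rivest, Shamir and Wagner. The solver must perform t
# sequential modular squarings (non-parallelisable, deterministic time); the
# creator, knowing the factorisation of n, shortcuts via Euler's theorem.
p, q = 10007, 10009
n = p * q
phi = (p - 1) * (q - 1)

def solve_slow(x, t):
    """Solver's side: t sequential squarings mod n."""
    y = x % n
    for _ in range(t):
        y = y * y % n
    return y

def create_fast(x, t):
    """Creator's side: reduce the exponent 2^t modulo phi(n) first, turning
    t squarings into one small modular exponentiation."""
    return pow(x, pow(2, t, phi), n)

t, x = 10000, 123456789          # x must be coprime to n (it is here)
assert solve_slow(x, t) == create_fast(x, t)
```

The squarings cannot be parallelised because each depends on the previous result, which is exactly the non-parallelisability property the abstract's puzzle preserves while making verification far cheaper.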