14 results for pattern matching protocols

at Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

Regular expressions are used to parse textual data to match patterns and extract variables. They have been implemented in a vast number of programming languages, with a significant quantity of research devoted to improving their operational efficiency. However, regular expressions are limited to finding linear matches. Little research has been done on producing object-oriented results, which would allow textual or binary data to be converted to multi-layered objects. This is significantly relevant as many of today's data formats are object-based. This paper extends our previous work by detailing an algorithmic approach to perform object-oriented parsing, and provides an initial benchmark study of the contributed algorithms.
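As a loose illustration of the idea (the input grammar, helper names, and nested-dict encoding here are invented for this sketch, not taken from the paper), a flat regular-expression match can be combined with a recursive step to turn text into multi-layered objects:

```python
import re

# A name followed by a parenthesised body; plain regexes only see this one
# "linear" layer, so recursion supplies the object-oriented structure.
TOKEN = re.compile(r"(\w+)\((.*)\)$")

def parse(text):
    """Parse 'name(child,child,...)' strings into nested dicts."""
    m = TOKEN.match(text.strip())
    if not m:                      # leaf value: no parentheses
        return text.strip()
    name, body = m.groups()
    children, depth, start = [], 0, 0
    for i, ch in enumerate(body):  # split the body on top-level commas only
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            children.append(parse(body[start:i]))
            start = i + 1
    if body:
        children.append(parse(body[start:]))
    return {name: children}

parse("book(title(Dune),author(Herbert))")
# → {'book': [{'title': ['Dune']}, {'author': ['Herbert']}]}
```

The regex finds the outermost match; the depth counter and recursion do the layering that a regex alone cannot express.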

Relevance:

90.00%

Publisher:

Abstract:

Database query verification schemes attempt to provide authenticity, completeness, and freshness guarantees for queries executed on untrusted cloud servers. A number of such schemes currently exist in the literature, allowing verification for queries based on matching whole values (such as numbers, dates, etc.) or on keyword matching. However, there is a notable gap in the research with regard to verification schemes for pattern-matching queries. Our contribution here is such a verification scheme, providing correctness guarantees for pattern-matching queries executed on the cloud. We describe a trivial scheme and show how it fails to provide completeness guarantees, then describe our scheme, based on efficient primitives such as cryptographic hashing and Merkle hash trees along with suffix arrays. We also provide experimental results from a working prototype to show the practicality of our scheme.
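A minimal sketch of one primitive the abstract names, the Merkle hash tree. The pairing convention (duplicating the last node on odd levels) is an assumption of this sketch; the paper's exact construction, and how it couples to suffix arrays, is not reproduced here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold hashed leaves pairwise up to a single root digest."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"a", b"b", b"c", b"d"])
# Tampering with any leaf changes the root, which is what lets a client
# check that the server's answer is authentic and complete.
assert merkle_root([b"a", b"b", b"c", b"x"]) != root
```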

Relevance:

80.00%

Publisher:

Abstract:

Provisioning of real-time multimedia sessions over wireless cellular networks poses unique challenges due to frequent handoff and rerouting of a connection. For this reason, wireless networks with a cellular architecture require efficient user mobility estimation and prediction. This paper proposes using a robust extended Kalman filter (REKF) as a location-heading-altitude estimator of the mobile user for next-cell prediction, in order to improve the connection reliability and bandwidth efficiency of the underlying system. Through analysis we demonstrate that our algorithm reduces system complexity (compared to the existing approach using pattern matching and a Kalman filter), as it requires only two base station measurements, or only the measurement from the closest base station. Further, the technique is robust against system uncertainties due to the inherent deterministic nature of the mobility model. Through simulation, we show the accuracy and implementation simplicity of our prediction algorithm.

Relevance:

80.00%

Publisher:

Abstract:

Provisioning of real-time multimedia sessions over wireless cellular networks poses unique challenges due to frequent handoff and rerouting of a connection. For this reason, wireless networks with a cellular architecture require efficient user mobility estimation and prediction. This paper proposes using a robust extended Kalman filter (REKF) as a location-heading-altitude estimator of the mobile user for next-cell prediction, in order to improve the connection reliability and bandwidth efficiency of the underlying system. Through analysis we demonstrate that our algorithm reduces system complexity (compared to the existing approach using pattern matching and a Kalman filter), as it requires only two base station measurements, or only the measurement from the closest base station. Further, the technique is robust against system uncertainties due to the inherent deterministic nature of the mobility model, and is more effective than the standard Kalman filter.

Relevance:

80.00%

Publisher:

Abstract:

There are two statistical decision-making questions regarding statistically detecting signs of denial-of-service flooding attacks. One is how to represent the distributions of the detection probability, false alarm probability, and miss probability. The other is how to quantitatively express a decision region within which one may make a decision that has a high detection probability, a low false alarm probability, and a low miss probability. This paper gives answers to both questions. In addition, a case study is presented.
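An illustrative sketch, not the paper's model: assume the monitored traffic statistic is Gaussian with mean `mu0` under normal load and shifts to `mu1` under a flooding attack, with a shared standard deviation `sigma` (all hypothetical). A detection threshold `t` then fixes all three probabilities at once:

```python
import math

def gauss_tail(x, mu, sigma):
    """P(X > x) for X ~ N(mu, sigma^2)."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def detector_probs(t, mu0, mu1, sigma):
    p_fa = gauss_tail(t, mu0, sigma)   # false alarm: normal traffic flagged
    p_d = gauss_tail(t, mu1, sigma)    # detection: attack correctly flagged
    p_miss = 1 - p_d                   # miss: attack not flagged
    return p_d, p_fa, p_miss

# A decision region in the abstract's sense is then the set of thresholds
# meeting the requirements, e.g. all t with p_d >= 0.95 and p_fa <= 0.05.
```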

Relevance:

80.00%

Publisher:

Abstract:

Background: The organizational context in which healthcare is delivered is thought to play an important role in mediating the use of knowledge in practice. Additionally, a number of potentially modifiable contextual factors have been shown to make an organizational context more amenable to change. However, understanding of how these factors operate to influence organizational context and knowledge use remains limited. In particular, research to understand knowledge translation in the long-term care setting is scarce. Further research is therefore required to provide robust explanations of the characteristics of organizational context in relation to knowledge use.
Aim: To develop a robust explanation of the way organizational context mediates the use of knowledge in practice in long-term care facilities.
Design: This is longitudinal, in-depth qualitative case study research using exploratory and interpretive methods to explore the role of organizational context in influencing knowledge translation. The study will be conducted in two phases. In phase one, comprehensive case studies will be conducted in three facilities. Following data analysis and proposition development, phase two will continue with focused case studies to elaborate emerging themes and theory. Study sites will be purposively selected. In both phases, data will be collected using a variety of approaches, including non-participant observation, key informant interviews, family perspectives, focus groups, and documentary evidence (including, but not limited to, policies, notices, and photographs of physical resources). Data analysis will comprise an iterative process of identifying convergent evidence within each case study and then examining and comparing the evidence across multiple case studies to draw conclusions from the study as a whole. Additionally, findings that emerge through this project will be compared and considered alongside those that are emerging from project one. In this way, pattern matching based on explanation building will be used to frame the analysis and develop an explanation of organizational context and knowledge use over time. An improved understanding of the contextual factors that mediate knowledge use will inform future development and testing of interventions to enhance knowledge use, with the ultimate aim of improving the outcomes for residents in long-term care settings.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a hierarchical pattern matching and generalisation technique, which is applied to the problem of locating the correct speaker of quoted speech found in fiction books. Patterns from a training set are generalised to create a small number of rules, which can be used to identify items of interest within the text. The pattern matching technique is applied to finding the Speech-Verb, Actor and Speaker of quotes found in fiction books. The technique performs well over the training data, resulting in rule-sets many times smaller than the training set while providing very high accuracy. While the rule-set generalised from one book is less effective when applied to different books than an approach based on hand-coded heuristics, performance is comparable when testing on data closely related to the training set.
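A toy sketch of pattern generalisation (the token patterns and wildcard scheme are invented for illustration, not the paper's rule format): two training patterns are merged by wildcarding the positions where they differ, which is how many specific patterns collapse into a few general rules.

```python
def generalise(a, b):
    """Merge two equal-length token patterns into one with '*' wildcards."""
    if len(a) != len(b):
        return None                 # incompatible patterns stay separate
    return tuple(x if x == y else "*" for x, y in zip(a, b))

p1 = ("``", "QUOTE", "''", "said", "Alice")
p2 = ("``", "QUOTE", "''", "said", "Bob")
generalise(p1, p2)   # → ('``', 'QUOTE', "''", 'said', '*')
```

Applied repeatedly over a training set, each surviving generalised pattern becomes one rule, matching any speaker name in the wildcarded slot.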

Relevance:

30.00%

Publisher:

Abstract:

Existing texture synthesis-from-example strategies for polygon meshes typically make use of three components: a multi-resolution mesh hierarchy that allows the overall nature of the pattern to be reproduced before filling in detail; a matching strategy that extends the synthesized texture using the best fit from a texture sample; and a transfer mechanism that copies the selected portion of the texture sample to the target surface. We introduce novel alternatives for each of these components. Use of √2-subdivision surfaces provides the mesh hierarchy and allows fine control over the surface complexity. Adaptive subdivision is used to create an even vertex distribution over the surface. Use of the graph defined by a surface region for matching, rather than a regular texture neighbourhood, provides for flexible control over the scale of the texture and allows simultaneous matching against multiple levels of an image pyramid created from the texture sample. We use graph cuts for texture transfer, adapting this scheme to the context of surface synthesis. The resulting surface textures are realistic, tolerant of local mesh detail and are comparable to results produced by texture neighbourhood sampling approaches.

Relevance:

30.00%

Publisher:

Abstract:

Graph matching is an important class of methods in pattern recognition. Typically, a graph representing an unknown pattern is matched against a database of models. If the database of model graphs is large, an additional factor is introduced into the overall complexity of the matching process. Various techniques for reducing the influence of this additional factor have been described in the literature. In this paper we propose to extract simple features from a graph and use them to eliminate candidate graphs from the database. The most powerful set of features, and a decision tree useful for candidate elimination, are found by means of the C4.5 algorithm, which was originally proposed for inductive learning of classification rules. Experimental results are reported demonstrating that efficient candidate elimination can be achieved by the proposed procedure.
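A minimal sketch of feature-based candidate elimination. The C4.5-learned decision tree of the paper is replaced here by a plain comparison of cheap graph invariants; the feature choice (node count, edge count, degree sequence) is an assumption of this sketch.

```python
def features(graph):
    """Cheap invariants of an undirected graph given as {node: set(neighbours)}."""
    n = len(graph)
    m = sum(len(nbrs) for nbrs in graph.values()) // 2   # each edge counted twice
    degrees = tuple(sorted(len(nbrs) for nbrs in graph.values()))
    return n, m, degrees

def candidates(query, database):
    """Keep only models whose invariants match the query's; expensive
    graph matching then runs only on the survivors."""
    q = features(query)
    return [g for g in database if features(g) == q]

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
path = {1: {2}, 2: {1, 3}, 3: {2}}
candidates(triangle, [triangle, path])   # the path is eliminated cheaply
```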

Relevance:

30.00%

Publisher:

Abstract:

The selection of two high performance liquid chromatography (HPLC) columns with vastly different retention mechanisms is vital for performing effective two-dimensional (2D-) HPLC. This paper reports on a systematic method to select a pair of HPLC columns that provide the most different separations for a given sample. This was completed with the aid of an HPLC simulator that predicted retention profiles on the basis of real experimental data, which is difficult when the contents of sample matrices are largely or completely unknown. Peaks from the same compounds must first be matched between chromatograms to compare the retention profiles and optimise 2D-HPLC column selection. In this work, two methods of matching peaks between chromatograms were explored and an optimal pair of chromatography columns was selected for 2D-HPLC. First, a series of 17 antioxidants were selected as an analogue for a coffee extract. The predicted orthogonality of the standards was 39%, according to the fractional surface coverage 'bins' method, which was close to the actual space utilisation of the standard mixture, 44%. Moreover, the orthogonality for the 2D-HPLC of coffee matched the predicted value of 38%. The second method employed a complex sample matrix of urine to optimise the column selections. Seven peaks were confidently matched between chromatograms by comparing relative peak areas of two detection strategies: UV absorbance and potassium permanganate chemiluminescence. It was found that the optimal combinations had a predicted orthogonality of 35% while the actual value was closer to 30%.
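A rough sketch of a fractional-surface-coverage 'bins' orthogonality measure: matched peak pairs are plotted in the normalised 2D retention plane and the fraction of grid cells they occupy is counted. The grid size and normalisation here are assumptions; published variants of the bins method differ in both.

```python
def orthogonality(rt1, rt2, bins=10):
    """Fraction of a bins x bins grid occupied by matched peak pairs,
    after scaling each dimension's retention times to [0, 1]."""
    def scale(rts):
        lo, hi = min(rts), max(rts)
        return [(t - lo) / (hi - lo) for t in rts]
    x, y = scale(rt1), scale(rt2)
    occupied = {
        (min(int(a * bins), bins - 1), min(int(b * bins), bins - 1))
        for a, b in zip(x, y)
    }
    return len(occupied) / bins**2
```

Perfectly correlated columns leave the peaks on the diagonal (few occupied cells); a well-chosen, dissimilar column pair spreads the peaks over the plane and raises the coverage.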

Relevance:

30.00%

Publisher:

Abstract:

The past decade has seen a great deal of research on statistics-based network protocol identification using machine learning techniques. Prior studies have shown promising results in terms of high accuracy and fast classification speed. However, most works have embodied an implicit assumption that all protocols are known in advance and present in the training data, which is unrealistic since real-world networks constantly witness emerging traffic patterns as well as unknown protocols in the wild. In this paper, we revisit the problem by proposing a learning scheme with unknown pattern extraction for statistical protocol identification. The scheme is designed with a more realistic setting, where the training dataset contains labeled samples from a limited number of protocols, and the goal is to tell these known protocols apart from each other and from potential unknown ones. Preliminary results derived from real-world traffic are presented to show the effectiveness of the scheme.
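A toy open-set sketch of the core difficulty: a nearest-centroid classifier over flow statistics that rejects far-away samples as 'unknown'. The features (mean packet size, a duty-cycle-like ratio), distance, and threshold are all assumptions of this sketch; the paper's unknown-pattern extraction scheme is more sophisticated.

```python
import math

def centroid(samples):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(x, centroids, threshold):
    """Return the closest protocol label, or 'unknown' if nothing is near."""
    best, dist = None, float("inf")
    for label, c in centroids.items():
        d = math.dist(x, c)
        if d < dist:
            best, dist = label, d
    return best if dist <= threshold else "unknown"

# Hypothetical training flows: [mean packet size, activity ratio]
centroids = {
    "http": centroid([[1500, 0.9], [1400, 0.8]]),
    "dns":  centroid([[80, 0.1], [90, 0.2]]),
}
classify([1450, 0.85], centroids, threshold=200)   # → 'http'
classify([700, 0.5], centroids, threshold=200)     # → 'unknown'
```

The reject threshold is what turns a closed-world classifier into one that can flag traffic from protocols absent from the training data.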