61 results for Norden Metric


Relevance:

10.00%

Publisher:

Abstract:

Anonymous web browsing is an emerging topic with many potential applications in privacy and security. However, research on low-latency anonymous communication, such as web browsing, remains limited; one reason is the intolerable delay introduced by the currently dominant dummy-packet padding strategy, which makes it hard to satisfy perfect anonymity and bounded delay at the same time. In this paper, we extend our previous proposal to use prefetched web pages as cover traffic to achieve perfect anonymity for web browsing, and we explore further aspects of this direction. Based on Shannon's perfect secrecy theory, we formally establish a mathematical model of the problem and define a metric to measure the cost of achieving perfect anonymity. Experiments on a real-world data set demonstrate that the proposed strategy can reduce delay by more than a factor of ten compared with dummy-packet padding, confirming its substantial potential.

Relevance:

10.00%

Publisher:

Abstract:

Anonymous web browsing is a hot topic with many potential privacy applications. The currently dominant strategy for achieving anonymity is packet padding with dummy packets as cover traffic, but this method introduces extra bandwidth cost and extra delay, making it impractical for web browsing applications. To solve this problem, we propose using the web pages that users are predicted to access next as the cover traffic instead of dummy packets. We define anonymity level as a metric for the degree of anonymity, establish a mathematical model of anonymity systems, and transform the anonymous communication problem into an optimization problem, allowing users to trade off anonymity level against cost. With the proposed model, we can describe and compare our proposal and previous schemes in a theoretical manner. Preliminary experiments on a real data set show the large potential of the proposed strategy in terms of resource savings.

Relevance:

10.00%

Publisher:

Abstract:

Rationale, aims and objectives A person's beliefs about their illness may contribute to recovery and prognosis. Some degree of acceptance of an illness and its impact is necessary to integrate a chronic disorder into one's lifestyle and adhere to the necessary components of illness management; however, some individuals can become 'stuck' and have difficulty adjusting out of the sick role. Inventories exist to measure illness cognitions, attitudes and behaviours as they relate to hypochondria and psychosomatic illness, but there is no extant measure of sick-role inertia. We describe the psychometric properties of a new scale, the Illness Cognitions Scale (ICS), a metric of investment in the sick role.

Methods The ICS was administered to 97 individuals with bipolar or schizoaffective disorder, and the psychometric properties of the scale were measured. Dimensionality was assessed using principal components analysis with oblimin rotation.

Results The scale has strong internal consistency, with a Cronbach's alpha of 0.858. A factor analysis suggested the presence of one main factor, with three smaller related sub-factors capturing aspects of maladaptive illness beliefs.
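
Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-score matrix. The sketch below uses invented toy data, not the ICS responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 4 respondents x 3 items with broadly consistent response patterns.
scores = np.array([[2, 3, 3],
                   [4, 4, 5],
                   [1, 2, 1],
                   [5, 5, 4]], dtype=float)
alpha = cronbach_alpha(scores)  # high, since the items move together
```

Values close to 1 indicate that the items measure a common construct, as with the 0.858 reported for the 17-item ICS.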

Conclusion The ICS is a 17-item, internally validated scale measuring difficulty adjusting out of the sick role. The scale predominantly measures a single construct. Further research on external validity of the ICS is required as well as determination of the clinical significance and patient acceptability of the scale.

Relevance:

10.00%

Publisher:

Abstract:

Objective The Clinical Global Impression Scale (CGI) is established as a core metric in psychiatric research. This study aims to test the validity of CGI as a clinical outcome measure suitable for routine use in a private inpatient setting.

Methods The CGI was added to a standard battery of routine outcome measures in a private psychiatric hospital. Data were collected on consecutive admissions over a period of 24 months, which included clinical diagnosis, demographics, service utilization and four routine measures (CGI, HoNOS, MHQ-14 and DASS-21) at both admission and discharge. Descriptive and comparative data analyses were performed.

Results Of 786 admissions in total, there were 624 and 614 CGI-S ratings completed at the point of admission and discharge, respectively, and 610 completed CGI-I ratings. The admission and discharge CGI-S scores were correlated (r = 0.40), and the indirect improvement measures obtained from their differences were highly correlated with the direct CGI-I scores (r = 0.71). The CGI results reflected similar trends seen in the other three outcome measures.

Conclusions The CGI is a valid clinical outcome measure suitable for routine use in an inpatient setting. It offers a number of advantages, including its established utility in psychiatric research, sensitivity to change, quick and simple administration, utility across diagnostic groupings, and reliability in the hands of skilled clinicians.

Relevance:

10.00%

Publisher:

Abstract:

This empirical study of tourists' cultural experiences aims to advance theory by developing a measurement model of attitudes towards attending cultural experiences, for a sample of international tourists visiting Melbourne, Australia. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were used to cross-validate the underlying dimensional structure of cultural experience attitudes in the model. A five-factor model was extracted from the EFA, and some further modifications were required to establish discriminant validity. A four-factor model was retained in the CFA, comprising three factors based on liking for different types of cultural experiences and one factor indicating that social interaction was the most liked socio-psychological attitude towards attending cultural experiences. Although the sample consisted entirely of English-speaking international tourists, cross-cultural validation of the model was also examined for configural and metric invariance of the measurement model, as the sample contained three different groups of international tourists: North Americans, New Zealanders, and tourists from the United Kingdom and Ireland. The measurement structure was found to be relatively invariant in its factor loadings across the three groups.

Relevance:

10.00%

Publisher:

Abstract:

Modeling first-dimension retention of peaks as a function of modulation phase and period allows reliable prediction of the modulated peak distributions generated in the comprehensive two-dimensional chromatography experiment. By applying the inverse process, it is also possible to use the profile of the modulated peaks (their heights or areas) to predict the shape and parameters of the original input chromatographic band (retention time, standard deviation, area) on the primary column. This allows an accurate derivation of the first-dimension retention time (RSD 0.02%), equal to that of the non-modulated experiment, rather than relying on the retention time of the major modulated peak generated by the modulation process (RSD 0.16%). The latter metric can produce a retention time that differs by at least the modulation period employed in the experiment, and it displays a discontinuity in the plot of retention time versus modulation phase at the point of 180° out-of-phase modulation. In contrast, the new procedure proposed here gives a result that is essentially independent of modulation phase and period, permitting an accurate value to be assigned to the first-dimension retention. The proposed metric accounts for the time on the second dimension, the phase of the distribution, and the hold-up time for which the sampled solute is retained in the modulating interface. The approach may also be based on the three largest modulated peaks rather than all modulated peaks; this simplifies the task of assigning the retention time with little loss of precision in band standard deviation or retention time, provided that these peaks are not all overloaded in the first or second dimension.
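
The inverse procedure can be illustrated with a much-simplified sketch: given the areas of a train of modulated peaks, an area-weighted mean and standard deviation recover the input band's parameters, independently of where the modulation windows fall. All peak times and band parameters below are invented, and this moment-based estimate is only a stand-in for the paper's actual fitting procedure:

```python
import numpy as np

# Hypothetical modulated peak train sampled from a Gaussian input band
# (true retention time 612 s, sigma 4 s) with a 6 s modulation period.
period = 6.0
times = np.arange(594.0, 631.0, period)   # release times of modulated peaks
true_rt, sigma = 612.0, 4.0
areas = np.exp(-0.5 * ((times - true_rt) / sigma) ** 2)

# Area-weighted mean of the modulated peak positions estimates the
# first-dimension retention time; the weighted spread estimates the
# band's standard deviation.
rt_est = np.sum(times * areas) / np.sum(areas)
sigma_est = np.sqrt(np.sum(areas * (times - rt_est) ** 2) / np.sum(areas))
```

Shifting `times` by a fraction of the period (a phase change) leaves the estimate nearly unchanged, which is the property the abstract emphasizes.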

Relevance:

10.00%

Publisher:

Abstract:

This paper presents part-of-speech tagging as a first step towards an autonomous text-to-scene conversion system. It categorizes several freely available taggers according to the techniques each uses to automatically identify word classes, and verifies the performance of each tagger experimentally. The SUSANNE corpus is used for testing and reveals the complexity of working with different tagsets, resulting in substantially lower accuracies in our tests than those reported by the developers of each tagger. The taggers are then grouped to form a voting system in an attempt to raise accuracy, but in no case do the combined results improve on the individual accuracies. Additionally, a new metric, agreement, is tentatively proposed as an indication of confidence in the output of a group of taggers where that output cannot be validated.
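
The agreement metric can be sketched as the fraction of token positions on which all taggers concur, alongside a simple majority-vote combiner. The tag sequences below are invented examples, and this is only one plausible reading of the metric, not the paper's exact definition:

```python
from collections import Counter

def agreement(tag_sequences):
    """Fraction of token positions where every tagger assigns the same tag.

    `tag_sequences` is a list of equal-length tag lists, one per tagger.
    """
    agree = sum(1 for tags in zip(*tag_sequences) if len(set(tags)) == 1)
    return agree / len(tag_sequences[0])

def majority_vote(tag_sequences):
    """Per-position majority vote over the taggers' outputs."""
    return [Counter(tags).most_common(1)[0][0]
            for tags in zip(*tag_sequences)]

# Three hypothetical taggers labelling the same five-token sentence.
t1 = ["DT", "NN", "VBZ", "JJ", "NN"]
t2 = ["DT", "NN", "VBZ", "RB", "NN"]
t3 = ["DT", "NN", "NNS", "JJ", "NN"]
score = agreement([t1, t2, t3])       # taggers agree on 3 of 5 positions
voted = majority_vote([t1, t2, t3])
```

High agreement signals that the ensemble's output can be trusted even without a gold standard, which is the use case the abstract proposes.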

Relevance:

10.00%

Publisher:

Abstract:

Texture synthesis employs neighbourhood matching to generate appropriate new content. Terrain synthesis has the added constraint that new content must be geographically plausible. The profile recognition and polygon breaking algorithm (PPA) [Chang et al. 1998] provides a robust mechanism for characterizing terrain as systems of valley and ridge lines in digital elevation maps. We exploit this to create a terrain characterization metric that is robust, efficient to compute and is sensitive to terrain properties.

Terrain regions are characterized by a minimum spanning tree derived from a graph built over the sample points of the elevation map, with elevations encoded as weights in the edges of the graph. This formulation allows us to provide a single consistent feature definition that is sensitive to the pattern of ridges and valleys in the terrain. Alternative formulations of these weights provide richer characteristic measures, and we give examples of alternative definitions based on curvature and contour measures.

We show that the measure is robust, with a significant portion derived directly from information local to the terrain sample. Global terrain characteristics introduce the issue of over- and under-connected valley/ridge lines when working with sub-regions. This is addressed by providing two graph construction strategies, which respectively provide an upper bound on connectivity, as a single spanning tree, and a lower bound, as a forest of trees.

Efficient minimum spanning tree algorithms are adapted to the context of terrain data and are shown to provide substantially better performance than previous PPA implementations. In particular, these are able to characterize valley and ridge behaviour at every point even in large elevation maps, providing a measure sensitive to terrain features at all scales.

The resulting graph based formulation provides an efficient and elegant algorithm for characterizing terrain features. The measure can be calculated efficiently, is robust under changes of neighbourhood position, size and resolution and the hybrid measure is sensitive to terrain features both locally and globally.
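
A minimal sketch of the graph formulation described above: a 4-connected graph is built over the elevation samples, each edge is weighted by the mean elevation of its endpoints (one of several possible weight definitions, and an assumption here), and Kruskal's algorithm extracts the minimum spanning tree, which preferentially follows valley floors. The elevation patch is invented:

```python
def grid_mst(elev):
    """Minimum spanning tree over a 2-D elevation grid (4-connected).

    Edge weight = mean elevation of the two endpoints, so the MST tends
    to trace valley lines (a PPA-style characterization).
    Returns MST edges as ((r1, c1), (r2, c2), weight).
    """
    rows, cols = len(elev), len(elev[0])
    edges = []
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                edges.append(((elev[r][c] + elev[r + 1][c]) / 2, (r, c), (r + 1, c)))
            if c + 1 < cols:
                edges.append(((elev[r][c] + elev[r][c + 1]) / 2, (r, c), (r, c + 1)))
    edges.sort()  # Kruskal: consider cheapest (lowest) edges first

    parent = {(r, c): (r, c) for r in range(rows) for c in range(cols)}
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            mst.append((a, b, w))
    return mst

# Tiny 3x3 elevation patch with a valley down the middle column.
elev = [[5, 1, 5],
        [6, 0, 6],
        [5, 1, 5]]
tree = grid_mst(elev)  # 8 edges; includes the two valley-floor edges
```

Swapping the weight definition (curvature, contour measures) changes the characterization without changing the algorithm, which is the flexibility the text describes.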

Relevance:

10.00%

Publisher:

Abstract:

The folding of proteins is usually studied in dilute aqueous solutions of controlled pH, but it has recently been demonstrated that reversible unfolding can occur in other media. Particular stability is conferred on the protein (folded or unfolded) when the process occurs in 'protic ionic liquids' (pILs) of controlled proton activity. This activity ('effective pH') is determined by the acid and base components of the pIL and is characterized in the present study by the chemical shift of the N–H proton. Here we propose a 'refoldability' or 'refolding index' (RFI) metric for assessing the stability of folded biomolecules in different solvent media, and demarcate high-RFI zones in hydrated pIL media using ribonuclease A and hen egg white lysozyme as examples. We then show that, unexpectedly, the same high RFIs can be obtained in pIL media that are 90% inorganic in character (simple ammonium salts). This leads us to a conjecture concerning the objections raised against 'primordial soup' theories of biogenesis, objections based on the observation that all the bonds involved in biomacromolecule formation are hydrolyzed in ordinary aqueous solutions unless specifically protected. The ingredients for primitive ionic liquids (NH3, CO, HCN, CO2 and water) were abundant in the early Earth atmosphere, and many experiments have shown how amino acids could also form from them. Cyclical concentration in evaporating inland seas could easily produce the kind of ambient-temperature, non-hydrolyzing media that we have demonstrated here may be hospitable to biomolecules, and may even encourage biopolymer assembly. A plausible variant of the conventional 'primordial soup' model of biogenesis is thus suggested.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present a document clustering framework incorporating instance-level knowledge in the form of pairwise constraints and attribute-level knowledge in the form of keyphrases. We first initialize feature weights by metric learning with the pairwise constraints, then learn from both kinds of knowledge simultaneously by combining the distance-based and constraint-based approaches, and finally evaluate and select a clustering result based on the degree of user satisfaction. The experimental results demonstrate the effectiveness and potential of the proposed method.
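
As a rough illustration of the first step, weight initialization by metric learning from pairwise constraints, the sketch below learns a diagonal feature weighting that shrinks dimensions along which must-link pairs differ and grows dimensions along which cannot-link pairs differ. This is a simplified stand-in, not the paper's actual formulation; all data and parameters are invented:

```python
import numpy as np

def learn_diagonal_metric(X, must_link, cannot_link, lr=0.1, epochs=50):
    """Toy diagonal metric learning from pairwise constraints.

    Decrease the weight of features along which must-link pairs differ,
    increase it along which cannot-link pairs differ, then renormalize.
    """
    w = np.ones(X.shape[1])
    for _ in range(epochs):
        for i, j in must_link:
            w -= lr * (X[i] - X[j]) ** 2
        for i, j in cannot_link:
            w += lr * (X[i] - X[j]) ** 2
        w = np.clip(w, 1e-3, None)      # keep weights positive
    return w / w.sum() * len(w)         # preserve overall scale

# Hypothetical 2-feature documents: feature 0 is noise, feature 1 is topical.
X = np.array([[0.9, 0.1], [0.1, 0.2], [0.8, 0.9], [0.2, 0.8]])
w = learn_diagonal_metric(X, must_link=[(0, 1)], cannot_link=[(0, 2)])
```

The learned weights then define a weighted distance for the subsequent distance-based clustering step.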

Relevance:

10.00%

Publisher:

Abstract:

Static detection of polymorphic malware variants plays an important role in improving system security. Control flow has been shown to be an effective characteristic for representing polymorphic malware instances. In our research, we propose a similarity search over malware using novel distance metrics on malware signatures, where a signature is the set of control flow graphs the malware contains. We propose two approaches and use the first for pre-filtering. The first is a distance metric based on the distance between feature vectors, where the feature vector is a decomposition of the set of graphs into either fixed-size k-subgraphs or q-gram strings of the high-level source after decompilation. The second is a more effective but computationally more expensive distance metric based on the minimum matching distance, which uses the string edit distances between the programs' decompiled flow graphs and solves a linear sum assignment problem to construct a minimum-sum-weight matching between the two sets of graphs. We implement these distance metrics in a complete malware variant detection system. The evaluation shows that our approach is highly effective in terms of a limited false positive rate, and our system detects more malware variants than other algorithms.
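
The minimum matching distance can be sketched for two equal-size signature sets: Levenshtein distances between string-encoded flow graphs feed a minimum-sum-weight matching. The matching is brute-forced over permutations here; a real system would solve the linear sum assignment problem (e.g. with the Hungarian algorithm). The graph strings are invented:

```python
from itertools import permutations

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def min_matching_distance(sigs_a, sigs_b):
    """Minimum-sum-weight matching between two equal-size sets of
    string-encoded flow graphs (brute force over permutations)."""
    return min(sum(edit_distance(a, b) for a, b in zip(sigs_a, perm))
               for perm in permutations(sigs_b))

# Hypothetical string-encoded control flow graphs of two malware variants.
variant1 = ["ABAB", "CDCD", "EFG"]
variant2 = ["ABAC", "CDCD", "EEFG"]
dist = min_matching_distance(variant1, variant2)  # small => likely variants
```

A small distance between two programs' signatures suggests they are polymorphic variants of the same malware family.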

Relevance:

10.00%

Publisher:

Abstract:

Network traffic classification is an essential component of network management and security systems. To address the limitations of traditional port-based and payload-based methods, recent studies have focused on alternative approaches. One promising direction is applying machine learning techniques to classify traffic flows based on packet- and flow-level statistics; in particular, previous work has shown that clustering can achieve high accuracy and discover unknown application classes. In this work, we present a novel semi-supervised learning method using constrained clustering algorithms. The motivation is that in the network domain a great deal of background information is available in addition to the data instances themselves. For example, we might know that flows ƒ1 and ƒ2 use the same application protocol because they visit the same host address at the same port simultaneously; ideally, ƒ1 and ƒ2 should then be grouped into the same cluster. We therefore express such correlations as pairwise must-link constraints and incorporate them into the clustering process. We apply three constrained variants of the K-Means algorithm, which perform hard or soft constraint satisfaction and metric learning from constraints. A number of real-world traffic traces are used to show the availability of constraints and to test the proposed approach. The experimental results indicate that incorporating constraints in the course of clustering significantly improves overall accuracy and cluster purity.
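
The must-link construction in the example above can be sketched as grouping flows by destination (host, port); the timing-overlap condition is omitted for brevity, and all flow identifiers and addresses are invented:

```python
from collections import defaultdict
from itertools import combinations

def must_link_constraints(flows):
    """Derive must-link pairs: flows contacting the same (host, port)
    are assumed to use the same application protocol.

    `flows` maps a flow id to its (dest_host, dest_port) key.
    """
    groups = defaultdict(list)
    for flow_id, key in flows.items():
        groups[key].append(flow_id)
    pairs = []
    for members in groups.values():
        pairs.extend(combinations(sorted(members), 2))
    return pairs

# Hypothetical flows: f1 and f2 hit the same server on the same port.
flows = {
    "f1": ("10.0.0.5", 443),
    "f2": ("10.0.0.5", 443),
    "f3": ("10.0.0.9", 53),
}
links = must_link_constraints(flows)  # only (f1, f2) are must-linked
```

The resulting pairs are then fed to a constrained K-Means variant, which either forbids (hard) or penalizes (soft) assignments that split a must-linked pair.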

Relevance:

10.00%

Publisher:

Abstract:

Researchers have been endeavoring to discover concise sets of episode rules in sequences rather than complete sets. Existing approaches, however, are unable to process complex sequences and cannot guarantee the accuracy of the resulting sets, because they violate the anti-monotonicity of the frequency metric. In some real applications, episode rules need to be extracted from complex sequences in which multiple items may appear in a single time slot. This paper investigates the discovery of concise episode rules in complex sequences. We define a concise representation called non-derivable episode rules and formalize the mining problem. Adopting a novel anti-monotonic frequency metric, we then develop a fast approach to discover non-derivable episode rules in complex sequences. Experimental results demonstrate that the proposed approach substantially reduces the number of rules while achieving fast processing.

Relevance:

10.00%

Publisher:

Abstract:

Subsequence frequency measurement is a basic and essential problem in knowledge discovery in single sequences. Frequency-based knowledge discovery in single sequences tends to be unreliable, since different resulting sets may be obtained from the same sequence when different frequency metrics are adopted. In this chapter, we investigate subsequence frequency measurement and its impact on the reliability of knowledge discovery in single sequences. We analyse seven existing frequency metrics, identify their inherent inaccuracies, and explore their impact on two kinds of knowledge discovered from single sequences: frequent episodes and episode rules. We further give three suggestions for frequency metrics and introduce a new frequency metric that improves reliability. Empirical evaluation reveals the inaccuracies and verifies our findings.
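
The sensitivity to the choice of frequency metric can be demonstrated in a few lines: two simple metrics, counting all (possibly overlapping) occurrences versus greedily counting disjoint occurrences, disagree even for a trivial pattern in a trivial sequence. The disjoint count is one anti-monotone choice; the example is contrived, and these two metrics stand in for the seven analysed in the chapter:

```python
def count_overlapping(seq, pattern):
    """All occurrences of `pattern` as a contiguous run, overlaps allowed."""
    n, m = len(seq), len(pattern)
    return sum(1 for i in range(n - m + 1) if seq[i:i + m] == pattern)

def count_disjoint(seq, pattern):
    """Greedy left-to-right count of non-overlapping occurrences."""
    count, i, m = 0, 0, len(pattern)
    while i <= len(seq) - m:
        if seq[i:i + m] == pattern:
            count += 1
            i += m          # skip past the matched occurrence
        else:
            i += 1
    return count

seq = list("AAAA")
overlapping = count_overlapping(seq, list("AA"))  # counts positions 0, 1, 2
disjoint = count_disjoint(seq, list("AA"))        # counts positions 0 and 2
```

Any threshold between the two counts would classify the pattern as frequent under one metric and infrequent under the other, which is exactly the reliability problem the chapter examines.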