299 results for assortative matching
Abstract:
A security system based on the recognition of the iris of human eyes using the wavelet transform is presented. The zero-crossings of the wavelet transform are used to extract the unique features obtained from the grey-level profiles of the iris. The recognition process is performed in two stages. The first stage consists of building a one-dimensional representation of the grey-level profiles of the iris, followed by obtaining the wavelet transform zero-crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, thus making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to possible changes in the camera-to-face distance. The technique has been tested on real images in both noise-free and noisy conditions. The technique is being investigated for real-time implementation, as a stand-alone system, for access control to high-security areas.
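As an illustration of the zero-crossing representation described above, the following is a minimal sketch in Python, assuming a 1D grey-level profile sampled along a circular band of the normalised iris; the Gaussian scales, the nearest-crossing dissimilarity, and all names are illustrative choices rather than the authors' exact implementation.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def zero_crossings(profile, scales=(2, 4, 8)):
    """Zero-crossing positions of the second derivative of the profile,
    smoothed at a few intermediate scales (a Marr/Mallat-style representation)."""
    crossings = {}
    for s in scales:
        smoothed = gaussian_filter1d(np.asarray(profile, float), sigma=s)
        d2 = np.diff(smoothed, 2)               # discrete second derivative
        crossings[s] = np.where(np.diff(np.sign(d2)) != 0)[0]
    return crossings

def dissimilarity(c1, c2, length):
    """Average distance from each crossing in c1 to its nearest crossing in c2."""
    total, count = 0.0, 0
    for s in c1:
        a, b = c1[s], c2[s]
        if len(a) == 0 or len(b) == 0:
            total, count = total + length, count + 1
            continue
        for p in a:
            total += np.min(np.abs(b - p))
            count += 1
    return total / max(count, 1)

# Accept an identity claim when the dissimilarity falls below a tuned threshold.
rng = np.random.default_rng(1)
enrolled = rng.random(256)
probe = enrolled + rng.normal(scale=0.01, size=256)
print(dissimilarity(zero_crossings(enrolled), zero_crossings(probe), 256))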
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale or planar scenes that do not challenge new algorithms, or they require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high-resolution video, computing an accurate sparse 3D reconstruction, culling and downsampling video frames, and selecting test cases. The evaluation process consists of applying a test 2-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
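As a sketch of the per-test-case comparison step, the Python below assumes ground-truth relative pose (R_gt, t_gt) taken from the sparse 3D reconstruction and an essential matrix produced by the 2-view method under test; the rotation-angle and translation-direction error measures are common choices rather than the paper's specific metrics, and the synthetic data at the end is only a sanity check.

import numpy as np
import cv2

def pose_errors(E, pts1, pts2, K, R_gt, t_gt):
    """Rotation and translation-direction errors (degrees) of a 2-view estimate.
    Translation scale is unobservable from two views, so only direction is scored."""
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    cos_r = (np.trace(R @ R_gt.T) - 1.0) / 2.0          # residual rotation angle
    rot_err = np.degrees(np.arccos(np.clip(cos_r, -1.0, 1.0)))
    t_est = t.ravel() / np.linalg.norm(t)
    t_ref = t_gt.ravel() / np.linalg.norm(t_gt)
    cos_t = abs(np.dot(t_est, t_ref))                   # tolerate the sign ambiguity
    return rot_err, np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Synthetic check: a known pose, E = [t]x R, and points projected into both views.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 250], [0, 500.0, 250], [0, 0, 1.0]])
R_gt, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))
t_gt = np.array([1.0, 0.0, 0.0])
X = rng.uniform([-1, -1, 4], [1, 1, 8], (50, 3))
p1, _ = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, None)
p2, _ = cv2.projectPoints(X, cv2.Rodrigues(R_gt)[0], t_gt, K, None)
tx = np.array([[0, -t_gt[2], t_gt[1]], [t_gt[2], 0, -t_gt[0]], [-t_gt[1], t_gt[0], 0]])
print(pose_errors(tx @ R_gt, p1.reshape(-1, 2), p2.reshape(-1, 2), K, R_gt, t_gt))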
Abstract:
The support for typically out-of-vocabulary query terms such as names, acronyms, and foreign words is an important requirement of many speech indexing applications. However, to date many unrestricted-vocabulary indexing systems have struggled to balance a good detection rate with fast query speeds. This paper presents a fast and accurate unrestricted-vocabulary speech indexing technique named Dynamic Match Lattice Spotting (DMLS). The proposed method augments the conventional lattice spotting technique with dynamic sequence matching, together with a number of other novel algorithmic enhancements, to obtain a system that is capable of searching hours of speech in seconds while maintaining excellent detection performance.
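A sketch in Python of the dynamic sequence matching idea, assuming phone sequences have already been read out of the lattice and that a phone-confusion substitution cost table is available; the costs and the example below are placeholders, not DMLS's trained parameters.

import numpy as np

def dynamic_match(query, candidate, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Minimum-edit-distance alignment between a query phone sequence and a
    candidate phone sequence from the lattice; low scores suggest putative hits."""
    n, m = len(query), len(candidate)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * del_cost
    D[0, :] = np.arange(m + 1) * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if query[i - 1] == candidate[j - 1]:
                sub = 0.0                                   # exact phone match
            else:
                sub = sub_cost.get((query[i - 1], candidate[j - 1]), 1.0)
            D[i, j] = min(D[i - 1, j - 1] + sub,            # substitute / match
                          D[i - 1, j] + del_cost,           # delete from query
                          D[i, j - 1] + ins_cost)           # insert into query
    return D[n, m]

# Confusable phones substitute cheaply, so near-matches still score well; hits
# are emitted when the score falls below a tuned threshold.
print(dynamic_match(["k", "ae", "t"], ["k", "eh", "t"], {("ae", "eh"): 0.3}))  # 0.3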
Abstract:
This work aims to contribute to the reliability and integrity of the perceptual systems of unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated specifically with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
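A minimal Python sketch of the gating idea, assuming a scalar quality metric computed from image statistics; the grey-level entropy metric and its threshold are illustrative stand-ins for the quality metric evaluated in the paper.

import cv2
import numpy as np

def quality(gray):
    """Simple quality proxy: grey-level entropy drops as airborne dust or smoke
    washes out image contrast."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def match_if_usable(img1, img2, threshold=6.0):
    """Evaluate both inputs before running the perception algorithm; skip SIFT
    entirely when degradation is anticipated."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    if min(quality(g1), quality(g2)) < threshold:
        return None
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(g1, None)
    _, d2 = sift.detectAndCompute(g2, None)
    if d1 is None or d2 is None:
        return []
    return cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)

rng = np.random.default_rng(0)
clear = rng.integers(0, 255, (240, 320, 3), dtype=np.uint8)
smoky = np.full_like(clear, 128)                  # contrast destroyed by smoke
print(match_if_usable(clear, clear) is not None)  # True: data judged usable
print(match_if_usable(clear, smoky) is None)      # True: degradation anticipated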
Abstract:
Importance Approximately one-third of patients with peripheral artery disease experience intermittent claudication, with consequent loss of quality of life. Objective To determine the efficacy of ramipril for improving walking ability, patient-perceived walking performance, and quality of life in patients with claudication. Design, Setting, and Patients Randomized, double-blind, placebo-controlled trial conducted among 212 patients with peripheral artery disease (mean age, 65.5 [SD, 6.2] years), initiated in May 2008, completed in August 2011, and conducted at 3 hospitals in Australia. Intervention Patients were randomized to receive 10 mg/d of ramipril (n = 106) or matching placebo (n = 106) for 24 weeks. Main Outcome Measures Maximum and pain-free walking times were recorded during a standard treadmill test. The Walking Impairment Questionnaire (WIQ) and Short-Form 36 Health Survey (SF-36) were used to assess walking ability and quality of life, respectively. Results At 6 months, relative to placebo, ramipril was associated with a 75-second (95% CI, 60-89 seconds) increase in mean pain-free walking time (P < .001) and a 255-second (95% CI, 215-295 seconds) increase in maximum walking time (P < .001). Relative to placebo, ramipril improved the WIQ median distance score by 13.8 (Hodges-Lehmann 95% CI, 12.2-15.5), speed score by 13.3 (95% CI, 11.9-15.2), and stair climbing score by 25.2 (95% CI, 25.1-29.4) (P < .001 for all). The overall SF-36 median Physical Component Summary score improved by 8.2 (Hodges-Lehmann 95% CI, 3.6-11.4; P = .02) in the ramipril group relative to placebo. Ramipril did not affect the overall SF-36 median Mental Component Summary score. Conclusions and Relevance Among patients with intermittent claudication, 24-week treatment with ramipril resulted in significant increases in pain-free and maximum treadmill walking times compared with placebo. This was associated with a significant increase in the physical functioning component of the SF-36 score. Trial Registration ClinicalTrials.gov Identifier: NCT00681226
Abstract:
Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of the arbitrary thresholds, limit the general applicability of these systems. In this paper we present a Bayesian probability model for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
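A Python sketch of how raw whole-image descriptor differences can be turned into match probabilities, assuming per-environment Gaussian models of matching and non-matching descriptor distances updated online; this is a simplified stand-in for the probability model in the paper, not CAT-Graph's exact estimator.

import numpy as np

class OnlineDescriptorModel:
    """Gaussian models of whole-image descriptor distance for 'same place' vs
    'different place', updated online so no prior training phase is needed."""
    def __init__(self):
        self.match, self.nonmatch = [], []

    def update(self, distance, is_match):
        (self.match if is_match else self.nonmatch).append(distance)

    def p_match(self, distance, prior=0.01):
        like_m = self._pdf(distance, self.match)
        like_n = self._pdf(distance, self.nonmatch)
        # Bayes' rule: posterior probability that the images show the same place.
        return like_m * prior / (like_m * prior + like_n * (1 - prior))

    @staticmethod
    def _pdf(x, samples):
        mu, sigma = np.mean(samples), np.std(samples) + 1e-9
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

model = OnlineDescriptorModel()
for d in (0.10, 0.15, 0.12):
    model.update(d, True)            # distances between images of the same place
for d in (0.80, 0.90, 0.70, 0.85):
    model.update(d, False)           # distances between images of distinct places
print(model.p_match(0.14), model.p_match(0.75))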
Abstract:
In hypercompetitive markets, agile firms, those that sense and respond better to customer requirements, tend to be more successful and achieve supernormal profits. Despite the widely accepted importance of customer agility, research on this construct is limited, and what little exists has predominantly focused on the firm's perspective of agility. We propose, however, that customers are better positioned to determine how well a firm is responding to their requirements (that is, a firm's customer agility). Taking the customers' standpoint, we address the issue of sense-and-respond alignment from two perspectives: matching and mediating. Based on data collected from customers in a field study, we tested hypotheses pertaining to the two forms of alignment using polynomial regression and response surface methodology. The results provide a good explanation of the role of both forms of alignment in customer satisfaction. Implications for research and practice are discussed.
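A Python sketch of the analysis style named above, assuming sensing (S) and responding (R) scores collected from customers; the quadratic polynomial with an interaction term is the standard response-surface specification, fitted here with ordinary least squares on synthetic data.

import numpy as np

def fit_response_surface(S, R, satisfaction):
    """Fit satisfaction = b0 + b1*S + b2*R + b3*S^2 + b4*S*R + b5*R^2.
    The fitted surface along the S = R line (matching) versus away from it
    can then be inspected to compare the alignment perspectives."""
    X = np.column_stack([np.ones_like(S), S, R, S**2, S * R, R**2])
    coefs, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
    return coefs

rng = np.random.default_rng(0)
S = rng.normal(size=200)
R = S + rng.normal(scale=0.5, size=200)                    # imperfect alignment
sat = 3 - (S - R) ** 2 + rng.normal(scale=0.3, size=200)   # misfit lowers satisfaction
print(fit_response_surface(S, R, sat))   # expect b3 ~ -1, b4 ~ +2, b5 ~ -1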
Abstract:
We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. The de-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data, and we study its effectiveness on data of different types and quality by evaluating the approach on scanned pathology reports from an Australian institution. This data contains optical character recognition errors, as well as linguistic conventions that differ from those in the i2b2 dataset, for example different date formats. The findings suggest that our approach is comparable to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is robust to variations in training size, data type and quality, provided sufficient training data is available.
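A Python sketch of the feature design described above, assuming the third-party sklearn-crfsuite package as the CRF implementation; the features shown (token shape plus pattern-matching flags for dates and phone numbers) and the toy training pair are representative examples, not the paper's full feature set.

import re
import sklearn_crfsuite

DATE = re.compile(r"\d{1,4}[-/.]\d{1,2}[-/.]\d{1,4}")
PHONE = re.compile(r"\(?\d{2,4}\)?[ -]?\d{3,4}[ -]?\d{3,4}")

def token_features(tokens, i):
    """Linguistic, lexical and pattern-matching features for one token."""
    t = tokens[i]
    return {
        "lower": t.lower(),
        "is_title": t.istitle(),          # capitalised tokens often start names
        "is_digit": t.isdigit(),
        "looks_like_date": bool(DATE.fullmatch(t)),
        "looks_like_phone": bool(PHONE.fullmatch(t)),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Toy training pair; real training uses many sentences with BIO-style PHI labels.
sentences = [["Seen", "by", "Dr", "Smith", "on", "12/03/2007"]]
labels = [["O", "O", "O", "B-NAME", "O", "B-DATE"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))   # predicted labels; flagged tokens are then removed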
Abstract:
We present two unconditionally secure protocols for private set disjointness tests. To provide intuition for our protocols, we give a naive example that applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the intersection cardinality; more specifically, it discloses its lower bound. By using Lagrange interpolation, we provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries. In this protocol, a verification test is applied to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first to have been designed without a generic secure function evaluation, and, more importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
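A plaintext Python sketch of the algebra behind the naive construction, assuming each set is encoded as the monic polynomial whose roots are its elements over a prime field; the resultant, i.e. the determinant of the Sylvester matrix, vanishes exactly when the polynomials share a root, that is, when the sets intersect. All of the protocols' privacy machinery is omitted here.

P = 2**31 - 1   # a prime modulus

def set_poly(s):
    """Monic polynomial (descending coefficients mod P) with the set as roots."""
    coeffs = [1]
    for a in s:
        nxt = coeffs + [0]
        for i in range(1, len(nxt)):
            nxt[i] = (nxt[i] - a * coeffs[i - 1]) % P
        coeffs = nxt
    return coeffs

def sylvester(f, g):
    """Sylvester matrix: deg(g) shifted copies of f over deg(f) copies of g."""
    m, n = len(f) - 1, len(g) - 1
    M = [[0] * (m + n) for _ in range(m + n)]
    for i in range(n):
        for j, c in enumerate(f):
            M[i][i + j] = c % P
    for i in range(m):
        for j, c in enumerate(g):
            M[n + i][i + j] = c % P
    return M

def det_mod_p(M):
    """Determinant mod P by Gaussian elimination over the prime field."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % P), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % P
        inv = pow(M[col][col], P - 2, P)          # Fermat inverse
        for r in range(col + 1, n):
            factor = M[r][col] * inv % P
            for c in range(col, n):
                M[r][c] = (M[r][c] - factor * M[col][c]) % P
    return det % P

# Nonzero resultant <=> disjoint sets; a naive protocol built directly on this
# leaks information about the intersection cardinality, as noted in the abstract.
print(det_mod_p(sylvester(set_poly({1, 2, 3}), set_poly({4, 5}))) != 0)   # True
print(det_mod_p(sylvester(set_poly({1, 2, 3}), set_poly({3, 5}))) != 0)   # False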
Abstract:
This paper describes a novel obstacle detection system for autonomous robots in agricultural field environments that uses a novelty detector to inform stereo matching. Stereo vision alone erroneously detects obstacles in environments with ambiguous appearance and ground plane, such as broad-acre crop fields with harvested crop residue. The novelty detector estimates the probability density in image descriptor space and incorporates image-space positional understanding to identify potential regions for obstacle detection using dense stereo matching. The results demonstrate that the system is able to detect obstacles typical of a farm during both day and night. The system was successfully used as the sole means of obstacle detection for an autonomous robot performing a long-term, two-hour coverage task travelling 8.5 km.
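A Python sketch of the gating logic, assuming a kernel density estimate over simple patch descriptors from obstacle-free training imagery, with normalised image position appended to make the model position-aware; low-density (novel) regions then receive dense stereo matching, here via OpenCV's StereoSGBM as a generic stand-in for the paper's stereo pipeline.

import cv2
import numpy as np
from sklearn.neighbors import KernelDensity

def patch_descriptors(gray, size=16):
    """Mean/std patch descriptors with normalised image-space position appended."""
    h, w = gray.shape
    feats, cells = [], []
    for y in range(0, h - size, size):
        for x in range(0, w - size, size):
            p = gray[y:y + size, x:x + size].astype(float)
            feats.append([p.mean(), p.std(), y / h, x / w])
            cells.append((y, x))
    return np.array(feats), cells

def novelty_mask(gray, kde, size=16, log_density_threshold=-10.0):
    """Flag patches whose appearance is improbable under the trained density."""
    feats, cells = patch_descriptors(gray, size)
    mask = np.zeros(gray.shape, np.uint8)
    for logp, (y, x) in zip(kde.score_samples(feats), cells):
        if logp < log_density_threshold:
            mask[y:y + size, x:x + size] = 255
    return mask

# Stand-in imagery: flat field texture, one bright 'obstacle', a crude stereo pair.
rng = np.random.default_rng(0)
train_gray = rng.integers(90, 110, (240, 320), dtype=np.uint8)
left_gray = train_gray.copy()
left_gray[100:140, 150:190] = 250
right_gray = np.roll(left_gray, -4, axis=1)

kde = KernelDensity(bandwidth=0.5).fit(patch_descriptors(train_gray)[0])
mask = novelty_mask(left_gray, kde)
disparity = cv2.StereoSGBM_create(numDisparities=64, blockSize=9).compute(left_gray, right_gray)
disparity[mask == 0] = 0   # keep dense stereo output only where novelty was flagged
print(int(mask.sum() / 255), "patch pixels flagged for stereo obstacle checking")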
Abstract:
"The focus of this chapter is on context-resonant systems perspectives in career theory and their implications for practice in diverse cultural and contextual settings. For over two decades, the potential of systems theory to offer a context-resonant approach to career development has been acknowledged. Career development theory and practice, however, have been dominated for most of their history by more narrowly defined theories informed by a trait-and-factor tradition of matching the characteristics of individuals to occupations. In contrast, systems theory challenges this parts-in-isolation approach and offers a response that can accommodate the complexity of both the lives of individuals and the world of the 21st century by taking a more holistic approach that considers individuals in context. These differences in theory and practice may be attributed to the underlying philosophies that inform them. For example, the philosophy informing the trait-and-factor theoretical position, logical positivism, places value on: studying individuals in isolation from their environments; content over process; facts over feelings; objectivity over subjectivity; and views individual behavior as observable, measurable, and linear. In practice, this theory base manifests in expert-driven practices founded on the assessment of personal traits such as interests, personality, values, or beliefs which may be matched to particular occupations. The philosophy informing more recent theoretical positions, constructivism, places value on: studying individuals in their contexts; making meaning of experience through the use of subjective narrative accounts; and a belief in the capacity of individuals known as agency. In practice, this theory base manifests in practices founded on collaborative relationships with clients, narrative approaches, and a reduced emphasis on expert-driven linear processes. Thus, the tenets of constructivism which inform the systems perspectives in career theory are context-resonant. Systems theory stresses holism where the interconnectedness of all elements of a system is considered. Systems may be open or closed. Closed systems have no relationship with their external environment whereas open systems interact with their external environment and are open to external influence which is necessary for regeneration. Congruent with general systems theory, the systems perspectives emerging within career theory are based on open systems. Such systems are complex and dynamic and comprise many elements and subsystems which recursively interact with each other as well as with influences from the surrounding environment. As elements of a system should not be considered in isolation, a systems approach is holistic. Patterns of behavior are found in the relationships between the elements of dynamic systems. Because of the multiplicity of relationships within and between elements of subsystems, the possibility of linear causal explanations is reduced. Story is the mechanism through which the relationships and patterns within systems are recounted by individuals. Thus the career guidance practices emanating from theories informed by systems perspectives are inherently narrative in orientation. Narrative career counseling encourages career development to be understood from the subjective perspective of clients. The application of systemic thinking in practice takes greater account of context. 
In so doing, practices informed by systems theory may facilitate relevance to a diverse client group in diverse settings. In a world that has become increasingly global and diverse, it seems that context-resonant systems perspectives in career theory are essential to ensure the future of career development. Translating context-resonant systems perspectives into practice offers important possibilities for methods and approaches that are respectful of diversity."--publisher website
Abstract:
Objective Evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they would require minimal intervention to guarantee high effectiveness. Methods and Materials The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, the source of the data, and their quality, with one of the datasets containing optical character recognition errors. Results Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion The findings show that Anonym is comparable to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality, given sufficient training data.
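For reference, a brief Python sketch of how recall and precision figures like those quoted above are typically computed at the token level; the PHI label scheme here is illustrative, not Anonym's.

def phi_precision_recall(gold, predicted, phi="PHI"):
    """Token-level precision and recall for personal health identifiers."""
    tp = sum(g == phi and p == phi for g, p in zip(gold, predicted))
    fp = sum(g != phi and p == phi for g, p in zip(gold, predicted))
    fn = sum(g == phi and p != phi for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Two PHI tokens in the gold standard, one found: precision 1.0, recall 0.5.
print(phi_precision_recall(["PHI", "O", "PHI", "O"], ["PHI", "O", "O", "O"]))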
Abstract:
The porosity and pore size distribution of coals determine many of their properties, from gas release to their behavior on carbonization, and yet most methods of determining pore size distribution can only examine a restricted size range. Even then, only accessible pores can be investigated with these methods. Small-angle neutron scattering (SANS) and ultra-small-angle neutron scattering (USANS) are increasingly used to characterize the size distribution of all of the pores non-destructively. Here we have used USANS/SANS to examine 24 well-characterized bituminous and subbituminous coals: three from the eastern US, two from Poland, one from New Zealand and the rest from the Sydney and Bowen Basins in Eastern Australia, and determined the relationships of the scattering intensity corresponding to different pore sizes with other coal properties. The range of pore radii examinable with these techniques is 2.5 nm to 7 μm. We confirm that there is a wide range of pore sizes in coal. The pore size distribution was found to be strongly affected by both rank and type (expressed as either hydrogen or vitrinite content) in the size ranges 250 nm to 7 μm and 5 to 10 nm, but only weakly in the intermediate region. The results suggest that different mechanisms control coal porosity on different scales. Contrast-matching USANS and SANS were also used to determine the size distribution of the fraction of the pores in these coals that are inaccessible to deuterated methane, CD4, at ambient temperature. In some coals most of the small (~10 nm) pores were found to be inaccessible to CD4 on the time scale of the measurement (~30 min to 16 h). This inaccessibility suggests that in these coals a considerable fraction of inherent methane may be trapped for extended periods of time, thus reducing the effectiveness of methane release from (or sorption by) these coals. Although the number of small pores was lower in higher rank coals, the fraction of total pores that was inaccessible was not rank dependent. In the Australian coals, at the 10 nm to 50 nm size scales the pores in inertinites appeared to be completely accessible to CD4, whereas the pores in the vitrinite were about 75% inaccessible. Unlike the results for total porosity, which showed no regional effects on relationships between porosity and coal properties, clear regional differences were found in the relationships between the fraction of closed porosity and coal properties. The 10 to 50 nm pores of the inertinites of the US and Polish coals examined appeared less accessible to methane than those of the inertinites of Australian coals. This difference in pore accessibility in inertinites may explain why empirical relationships between fluidity and coking properties developed using Carboniferous coals do not apply to Australian coals.
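A small worked conversion behind the quoted size range, assuming the rule of thumb r ≈ 2.5/Q that is often used for polydisperse spherical pores in coal SANS studies, and a combined USANS/SANS scattering-vector range of roughly 3.5e-5 to 0.1 1/Å; both the constant and the Q range are indicative assumptions, not values taken from the paper.

def pore_radius_nm(q_inv_angstrom):
    """Approximate probed pore radius (nm) for scattering vector Q (1/Angstrom),
    using the polydisperse-sphere rule of thumb r ~ 2.5/Q."""
    return 2.5 / q_inv_angstrom / 10.0   # Angstrom -> nm

print(pore_radius_nm(0.1))      # ~2.5 nm: smallest pores probed
print(pore_radius_nm(3.5e-5))   # ~7100 nm, i.e. ~7 um: largest pores probed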
Abstract:
Contrast-matching ultra-small-angle neutron scattering (USANS) and small-angle neutron scattering (SANS) techniques were used for the first time to determine both the total pore volume and the fraction of the pore volume that is inaccessible to deuterated methane, CD4, in four bituminous coals, in the range of pore sizes between ∼10 Å and ∼5 μm. Two samples originated from the Illinois Basin in the U.S.A., and the other two were commercial Australian bituminous coals from the Bowen Basin. The total and inaccessible porosity were determined for each coal using both the Porod invariant and polydisperse spherical particle (PDSP) model analyses of the scattering data, acquired from coals both in vacuum and at the pressure of CD4 at which the scattering length density of the pore-saturating fluid equals that of the solid coal matrix (the zero-average-contrast pressure). The total porosity of the coals studied ranged from 7 to 13%, and the volume of pores inaccessible to CD4 varied from ∼13 to ∼36% of the total pore volume. The volume fraction of inaccessible pores shows no correlation with the maceral composition; however, it increases with decreasing total pore volume. In situ measurements of the structure of one coal saturated with CO2 and CD4 were conducted as a function of pressure in the range of 1 to 400 bar. The neutron scattering intensity from small pores with radii less than 35 Å in this coal increased sharply immediately after fluid injection for both gases, which demonstrates strong condensation and densification of the invading subcritical CO2 and supercritical methane in small pores.
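A Python sketch of the zero-average-contrast condition, assuming standard coherent scattering lengths for C and D and an illustrative coal-matrix scattering length density of 3.5e-6 Å⁻² (in practice this is measured for each coal); the code solves for the CD4 density at which the fluid's scattering length density equals that of the matrix, which would then be converted to the match pressure via a CD4 equation of state.

from scipy.optimize import brentq

N_A = 6.022e23                      # molecules per mol
B_C, B_D = 6.646e-13, 6.671e-13     # coherent scattering lengths of C and D, cm
M_CD4 = 20.07                       # molar mass of CD4, g/mol
SLD_COAL = 3.5e-6                   # assumed matrix SLD, 1/Angstrom^2

def sld_cd4(rho):
    """Scattering length density of CD4 (1/Angstrom^2) at density rho (g/cm^3)."""
    number_density = rho * N_A / M_CD4                 # molecules per cm^3
    return number_density * (B_C + 4 * B_D) * 1e-16    # 1/cm^2 -> 1/Angstrom^2

# Zero average contrast: the CD4 density at which accessible pores stop scattering.
rho_match = brentq(lambda r: sld_cd4(r) - SLD_COAL, 0.01, 1.0)
print(f"contrast-match CD4 density ~ {rho_match:.3f} g/cm^3")
# Pores that still scatter at this condition are inaccessible to the fluid.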
Abstract:
We present efficient protocols for private set disjointness tests. We start from an intuitive construction that applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the cardinality of the intersection; more specifically, it discloses its lower bound. By using Lagrange interpolation we provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries, which applies a verification test to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first to have been designed without a generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.