935 results for "surrogate pair"
Abstract:
Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching: identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area-based, transform-based, feature-based, phase-based, hybrid, relaxation-based, dynamic programming and object-space methods. A number of area-based matching metrics, as well as the rank and census transforms, were implemented in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, it was also the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion while retaining low computational complexity, making them prime candidates for the matching algorithm of a real-time stereo sensor for mining applications.
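As a rough illustration of why zero-mean and ordering-based metrics resist radiometric distortion, the sketch below compares SAD, ZSAD and a rank transform on a patch with a brightness offset. The window size, pixel values and offset are arbitrary assumptions, not the report's test data:

```python
import numpy as np

def sad(a, b):
    # Sum of Absolute Differences: cheap, but sensitive to brightness offsets
    return np.abs(a - b).sum()

def zsad(a, b):
    # Zero Mean SAD: subtracting each window's mean cancels additive offsets
    return np.abs((a - a.mean()) - (b - b.mean())).sum()

def rank_transform(a):
    # Rank transform: count of pixels in the window below the centre pixel;
    # depends only on intensity ordering, hence robust to radiometric distortion
    centre = a[a.shape[0] // 2, a.shape[1] // 2]
    return (a < centre).sum()

rng = np.random.default_rng(0)
left = rng.integers(0, 100, size=(5, 5)).astype(float)
right = left + 10.0   # same patch with a radiometric (brightness) offset

print(sad(left, right) > 1e-6)                        # SAD is fooled by the offset
print(np.isclose(zsad(left, right), 0.0))             # ZSAD cancels it
print(rank_transform(left) == rank_transform(right))  # ordering is unchanged
```

In a full matcher these per-window costs would be evaluated along the epipolar line for each candidate disparity, and the disparity with the lowest cost retained.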
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
Abstract:
Much of the research on the delivery of advice by professionals such as physicians, health workers and counsellors, both on the telephone and in face-to-face interaction more generally, has focused on the theme of client resistance and the consequent need for professionals to adopt particular formats to assist in the uptake of the advice. In this paper we consider one setting, Kid’s Helpline, the national Australian counselling service for children and young people, where there is an institutional mandate not to give explicit advice, in accordance with the values of self-direction and empowerment. The paper examines one practice, the use of script proposals by counsellors, which appears to offer a way of providing support that is consistent with these values. Script proposals entail counsellors packaging their advice as something that the caller might say – at some future time – to a third party such as a friend, teacher, parent, or partner, and involve the counsellor adopting the speaking position of the caller in what appears as a rehearsal of a forthcoming strip of interaction. Although the core feature of a script proposal is the counsellor’s use of direct reported speech, script proposals appear to be delivered not so much as exact words to be followed but as the type of conversation that the client needs to have with the third party. Script proposals, in short, provide models of what to say as well as alluding to how these could be emulated by the client. In their design, script proposals invariably incorporate one or more of the most common rhetorical formats for maximising the persuasive force of an utterance, such as a three-part list or a contrastive pair. Script proposals, moreover, stand in a complex relation to the prior talk, and one of their functions appears to be to summarise, respecify or expand upon the client’s own ideas or suggestions for problem solving that have emerged in these preceding sequences.
Abstract:
This paper proposes the use of the Bayes Factor as a distance metric for speaker segmentation within a speaker diarization system. The proposed approach uses a pair of constant-sized sliding windows to compute the value of the Bayes Factor between the adjacent windows over the entire audio. Results obtained on the 2002 Rich Transcription Evaluation dataset show improved segmentation performance compared to previous approaches reported in the literature using the Generalized Likelihood Ratio. When applied in a speaker diarization system, this approach results in a 5.1% relative improvement in the overall Diarization Error Rate compared to the baseline.
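For context, the sliding-window idea behind the Generalized Likelihood Ratio baseline mentioned above can be sketched with single-Gaussian segment models. The window lengths and the synthetic one-dimensional features below are illustrative assumptions, not the paper's configuration (a real system would use multivariate acoustic features and, for the Bayes Factor, priors over model parameters):

```python
import numpy as np

def gaussian_loglik(x):
    # Maximised log-likelihood of a segment under a single 1-D Gaussian
    var = x.var() + 1e-10
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)

def glr(left, right):
    # GLR distance between adjacent windows: large when two separate models
    # fit the data much better than one merged model (a likely change point)
    merged = np.concatenate([left, right])
    return gaussian_loglik(left) + gaussian_loglik(right) - gaussian_loglik(merged)

rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 1.0, 200)   # toy features for speaker A
speaker_b = rng.normal(3.0, 1.0, 200)   # toy features for speaker B

# The distance across a true speaker change exceeds the within-speaker distance
print(glr(speaker_a, speaker_b) > glr(speaker_a[:100], speaker_a[100:]))
```

Sliding this pair of windows over the audio and picking peaks in the distance curve yields candidate speaker change points.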
Abstract:
Objective: To assess the accuracy of data linkage across the spectrum of emergency care in the absence of a unique patient identifier, and to use the linked data to examine service delivery outcomes in an emergency department setting. Design: Automated data linkage and manual data linkage were compared to determine their relative accuracy. Data were extracted from three separate health information systems (ambulance, ED and hospital inpatients), then linked to provide information about the emergency journey of each patient. The linking was done manually through physical review of records and automatically using a data linking tool (Health Data Integration) developed by the CSIRO. Match rate and quality of the linking were compared. Setting: 10,835 patient presentations to a large, regional teaching hospital ED over a two-month period (August-September 2007). Results: Comparison of the manual and automated linkage outcomes for each pair of linked datasets demonstrated a sensitivity of between 95% and 99%; a specificity of between 75% and 99%; and a positive predictive value of between 88% and 95%. Conclusions: Our results indicate that automated linking provides a sound basis for health service analysis, even in the absence of a unique patient identifier. The use of an automated linking tool yields accurate data suitable for planning and service delivery purposes and enables the data to be linked regularly to examine service delivery outcomes.
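The accuracy figures above come from a standard confusion-matrix calculation over record pairs, with the manual linkage treated as ground truth. The counts below are made-up values for illustration, not the study's data:

```python
def linkage_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value for an
    automated-vs-manual comparison of linked record pairs."""
    sensitivity = tp / (tp + fn)   # true links the automated tool found
    specificity = tn / (tn + fp)   # true non-links correctly left unlinked
    ppv = tp / (tp + fp)           # automated links that are genuine matches
    return sensitivity, specificity, ppv

# Hypothetical counts for one pair of linked datasets
sens, spec, ppv = linkage_accuracy(tp=950, fp=60, fn=30, tn=400)
print(round(sens, 3), round(spec, 3), round(ppv, 3))
```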
Abstract:
Analytical and closed form solutions are presented in this paper for the vibration response of an L-shaped plate under a point force or a moment excitation. Inter-relationships between wave components of the source and receiving plates are clearly defined. Explicit expressions are given for quadratic quantities such as the input power, energy flow and kinetic energy distributions of the L-shaped plate. Applications of the statistical energy analysis (SEA) formulation to the prediction of the vibration response of finite coupled plate structures under a single deterministic forcing are examined and quantified. It is found that the SEA method can be employed to predict the frequency-averaged vibration response and energy flow of coupled plate structures under a deterministic force or moment excitation when the structural system satisfies the following conditions: (1) the coupling loss factors of the coupled subsystems are known; (2) the source location is more than a quarter of the plate bending wavelength away from the source plate edges in the point force excitation case, or more than a quarter wavelength away from the pair of source plate edges perpendicular to the moment axis in the moment excitation case, owing to the directional characteristic of moment excitations. SEA overestimates the response of the L-shaped plate when the source location is less than a quarter bending wavelength away from the respective plate edges, owing to the wave coherence effect at the plate boundary.
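The SEA prediction referred to above rests on the steady-state power balance between coupled subsystems; for two plates it reduces to a 2x2 linear system in the subsystem energies. The frequency, loss factors and input power below are arbitrary illustrative values, not results from the paper:

```python
import numpy as np

# Two-subsystem SEA power balance at angular frequency omega:
#   P_in = omega * ((eta1 + eta12) * E1 - eta21 * E2)   (source plate)
#   0    = omega * (-eta12 * E1 + (eta2 + eta21) * E2)  (receiving plate)
omega = 2 * np.pi * 500.0     # rad/s (assumed analysis frequency)
eta1, eta2 = 0.01, 0.01       # internal loss factors (assumed)
eta12, eta21 = 0.005, 0.004   # coupling loss factors (assumed)
p_in = 1.0                    # power injected into the source plate, W

A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12,       eta2 + eta21]])
E1, E2 = np.linalg.solve(A, [p_in, 0.0])
print(E1 > E2 > 0)   # the source plate holds more energy than the receiver
```

The frequency-averaged plate velocities would then follow from these subsystem energies and the plate masses.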
Abstract:
Urban water quality can be significantly impaired by the build-up of pollutants such as heavy metals and volatile organics on urban road surfaces due to vehicular traffic. Any control strategy for the mitigation of traffic-related build-up of heavy metals and volatile organic pollutants should be based on knowledge of their build-up processes. In the study discussed in this paper, the outcomes of a detailed experimental investigation into the build-up processes of heavy metals and volatile organics are presented. It was found that traffic parameters such as average daily traffic, volume-over-capacity ratio and surface texture depth had similarly strong correlations with the build-up of heavy metals and volatile organics. Multicriteria decision analyses revealed that the 1-74 µm particulate fraction of total suspended solids (TSS) could be regarded as a surrogate indicator for particulate heavy metals in build-up, and that the same fraction of total organic carbon could be regarded as a surrogate indicator for particulate volatile organics build-up. In terms of pollutant affinity, TSS was found to be the predominant parameter for particulate heavy metals build-up, and total dissolved solids was found to be the predominant parameter for the potential dissolved fraction in heavy metals build-up. It was also found that land use did not play a significant role in the build-up of traffic-generated heavy metals and volatile organics.
Abstract:
Suburbanisation has been a major international phenomenon in recent decades. Suburb-to-suburb routes are nowadays the most common road journeys, and this has increased the distances travelled, particularly on faster suburban highways. The design of highways tends to over-simplify the driving task, which can result in decreased alertness. Driving behaviour is consequently impaired, and drivers are then more likely to be involved in road crashes. This is particularly dangerous on highways where the speed limit is high. While effective countermeasures to this decrement in alertness do not currently exist, the development of in-vehicle sensors opens avenues for monitoring driving behaviour in real time. The aim of this study is to evaluate the driver's level of alertness in real time through surrogate measures that can be collected from in-vehicle sensors. Slow EEG activity is used as a reference to evaluate driver alertness. Data are collected in a driving simulator instrumented with an eye tracking system, a heart rate monitor and an electrodermal activity device (N=25 participants). Four different types of highways (driving scenarios of 40 minutes each) are implemented through variation of the road design (amount of curves and hills) and the roadside environment (amount of buildings and traffic). We show with neural networks that reduced alertness can be detected in real time with an accuracy of 92% using lane positioning, steering wheel movement, head rotation, blink frequency, heart rate variability and skin conductance level. These results show that it is possible to assess driver alertness with surrogate measures. This methodology could be used to warn drivers of their alertness level through an in-vehicle device that monitors driver behaviour on highways in real time, and could therefore result in improved road safety.
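A much-simplified stand-in for the classifier described above: logistic regression trained by gradient descent on synthetic surrogate-measure features. The feature values, seed and model are illustrative assumptions, not the study's data or its neural network:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 6-feature vectors standing in for lane position variability,
# steering movement, head rotation, blink frequency, HRV and skin conductance
alert = rng.normal(0.0, 0.5, size=(200, 6))    # label 0
drowsy = rng.normal(2.0, 0.5, size=(200, 6))   # label 1
X = np.vstack([alert, drowsy])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic regression via batch gradient descent on the log loss
w, b = np.zeros(6), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(accuracy > 0.9)
```

On these well-separated synthetic classes the simple model performs nearly perfectly; the study's harder problem of real physiological data is what motivates the neural-network approach.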
Abstract:
A number of pictorial-based texts for children use animals as models for displaying or approaching aspects of childhood. Although authors and illustrators utilise various tactics for including anthropomorphic animals in their books, those used as 'surrogate' children can be seen to focus mainly on issues of behaviour, socialisation and maturity - issues that reflect the everyday life of the growing child. This paper aims to explore three pictorial texts that specifically utilise the pig character as a child model, giving authors/illustrators the opportunity to deal with examples of childhood experience. The paper also tentatively examines how such roles might encourage a reassessment of other, more stereotypical associations some audiences have historically or culturally formed about the pig.
Abstract:
An initialisation process is a key component in modern stream cipher design. A well-designed initialisation process should ensure that each key-IV pair generates a different key stream. In this paper, we analyse two ciphers, A5/1 and Mixer, for which this does not happen due to state convergence. We show how the state convergence problem occurs and estimate the effective key-space in each case.
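State convergence can be illustrated with a toy register whose update discards information. The 8-bit cipher below is purely illustrative and unrelated to the internals of A5/1 or Mixer; it merely shows how a non-injective state-update function shrinks the effective key space during initialisation:

```python
def update(state):
    # Toy non-injective update: the top bit is shifted out while the feedback
    # bit depends only on bits 7 and 6, so distinct states can merge
    feedback = (state >> 7) & (state >> 6) & 1
    return ((state << 1) | feedback) & 0xFF

def init(key, rounds=16):
    # Load an 8-bit "key" into the register and clock the init process
    state = key
    for _ in range(rounds):
        state = update(state)
    return state

# Ideally each of the 256 keys would yield a distinct initial state,
# but states converge, so several keys produce the same keystream generator
reachable = {init(k) for k in range(256)}
print(len(reachable) < 256)
```

For instance, states 0x00 and 0x80 already collide after one clock, since the differing top bit is discarded and contributes nothing to the feedback when bit 6 is zero.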
Abstract:
In this paper we extend the concept of speaker annotation within a single recording, or speaker diarization, to a collection-wide approach we call speaker attribution. Accordingly, speaker attribution is the task of clustering the expectedly homogeneous inter-session clusters obtained using diarization according to common cross-recording identities. The result of attribution is a collection of spoken audio across multiple recordings attributed to speaker identities. In this paper, an attribution system is proposed using mean-only MAP adaptation of a combined-gender UBM to model clusters from a perfect diarization system, as well as a JFA-based system with session variability compensation. The normalized cross-likelihood ratio is calculated for each pair of clusters to construct an attribution matrix, and the complete-linkage algorithm is employed to cluster the inter-session clusters. A matched cluster purity and coverage of 87.1% was obtained on the NIST 2008 SRE corpus.
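The complete-linkage step can be sketched on a toy distance matrix. The distances and stopping threshold below are made-up; in the attribution system the entries would be derived from the normalized cross-likelihood ratios between cluster models:

```python
def complete_linkage(dist, threshold):
    # Agglomerative clustering: repeatedly merge the pair of clusters whose
    # farthest members (complete linkage) are closest, until no pair of
    # clusters is within the threshold
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist[a][b] for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:
            break
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Four inter-session clusters: 0 and 1 share a speaker, as do 2 and 3
dist = [[0, 1, 9, 9],
        [1, 0, 9, 9],
        [9, 9, 0, 1],
        [9, 9, 1, 0]]
print(complete_linkage(dist, threshold=2))   # → [[0, 1], [2, 3]]
```

Complete linkage is a conservative choice here: a merge happens only if every pair of segments across the two clusters looks like the same speaker.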
Abstract:
In this paper, I would like to outline the approach we have taken to mapping and assessing integrity systems and how this has led us to see integrity systems in a new light. Indeed, it has led us to a new visual metaphor for integrity systems – a bird’s nest rather than a Greek temple. This was the result of a pair of major research projects completed in partnership with Transparency International (TI). One worked on refining and extending the measurement of corruption. This, the second, looked at what was then the emerging institutional means for reducing corruption – ‘national integrity systems’.
Abstract:
Columns are among the key load-bearing elements that are highly susceptible to vehicle impacts. The resulting severe damage to columns may lead to failure of the supporting structure, which can be catastrophic in nature. However, the columns in existing structures are seldom designed for impact, owing to inadequacies of design guidelines. The impact behaviour of columns designed for gravity loads and actions other than impact is therefore of interest. A comprehensive investigation is conducted on reinforced concrete columns, with a particular focus on investigating the vulnerability of exposed columns and on implementing mitigation techniques under low- to medium-velocity car and truck impacts. The investigation is based on non-linear explicit computer simulations of impacted columns, followed by a comprehensive validation process. The impact is simulated using force pulses generated from full-scale vehicle impact tests. A material model capable of simulating triaxial loading conditions is used in the analyses. Circular columns of adequate capacity for five- to twenty-storey buildings, designed according to Australian standards, are considered in the investigation. The crucial parameters associated with routine column designs and the different load combinations applied at the serviceability stage on the typical columns are considered in detail. Axially loaded columns are examined at the initial stage, and the investigation is extended to analyse the impact behaviour under single-axis bending and biaxial bending. The impact capacity reduction under varying axial loads is also investigated. Effects of the various load combinations are quantified, and the residual capacity of the impacted columns based on the status of the damage, together with mitigation techniques, is also presented.
In addition, the contribution of individual parameters to the failure load is scrutinised, and analytical equations are developed to identify the critical impulses in terms of the geometrical and material properties of the impacted column. In particular, an innovative technique was developed and introduced to improve the accuracy of the equations where other techniques fail due to the shape of the error distribution. Above all, the equations can be used to quantify the critical impulse for three consecutive points (load combinations) located on the interaction diagram for one particular column. Consequently, linear interpolation can be used to quantify the critical impulse for loading points located in between on the interaction diagram. Given a known force and impulse pair for an average impact duration, this method can be extended to assess the vulnerability of columns for a general vehicle population, based on an analytical method that can quantify the critical peak forces under different impact durations. The contribution of this research is therefore not limited to producing simplified yet rational design guidelines and equations; it also provides a comprehensive solution for quantifying impact capacity, while delivering new insight to the scientific community for dealing with impacts.
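The interpolation step described above can be sketched numerically. The axial loads and critical impulses below are invented numbers for illustration, not values from the analyses:

```python
import numpy as np

# Critical impulse computed (via the developed equations) at three
# consecutive load combinations on a column's interaction diagram
axial_loads = np.array([1000.0, 2000.0, 3000.0])   # kN (hypothetical)
critical_impulse = np.array([12.0, 9.5, 6.0])      # kN·s (hypothetical)

# Linear interpolation gives the critical impulse at an in-between load point
query_load = 2500.0
impulse = np.interp(query_load, axial_loads, critical_impulse)
print(impulse)   # midway between 9.5 and 6.0, i.e. 7.75
```

A column subjected to that axial load would then be assessed as vulnerable to any impact whose impulse exceeds the interpolated critical value.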
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
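The starting point for such bounds - that a convex surrogate dominates the 0-1 loss pointwise in the margin - can be checked numerically for the hinge loss used by the support vector machine. The margin grid here is arbitrary:

```python
import numpy as np

def zero_one(margin):
    # 0-1 loss as a function of the margin z = y * f(x)
    return (margin <= 0).astype(float)

def hinge(margin):
    # Convex surrogate minimised by the support vector machine
    return np.maximum(0.0, 1.0 - margin)

margins = np.linspace(-3, 3, 601)
# The surrogate upper-bounds the 0-1 loss pointwise, so surrogate risk
# upper-bounds 0-1 risk; the variational transform in the paper sharpens
# this into a bound on the *excess* risk
print(np.all(hinge(margins) >= zero_one(margins)))
```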
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space - classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm: using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
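The implicit embedding mentioned above can be made explicit: any symmetric positive semidefinite kernel matrix factors as K = XXᵀ, yielding coordinates whose pairwise inner products reproduce the kernel entries. This sketch uses arbitrary toy data and a linear kernel (the SDP learning of K itself would require a convex solver and is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(6, 3))
K = data @ data.T   # a valid (PSD) linear-kernel matrix for 6 points

# Eigendecomposition recovers an explicit embedding X with X @ X.T == K
eigvals, eigvecs = np.linalg.eigh(K)
eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative round-off
X = eigvecs * np.sqrt(eigvals)          # scale each eigenvector column

print(np.allclose(X @ X.T, K))          # inner products match the kernel
```

This is why specifying K amounts to specifying the geometry of the embedding space: the coordinates X are determined by K up to rotation.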
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
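The critical values alluded to above arise from the Bayes-optimal rule for the 0-1 loss with reject cost d (Chow's rule): one rejects exactly when the conditional probability lies between d and 1 - d. A small sketch with an assumed reject cost:

```python
def decide(eta, d):
    """Bayes rule for binary classification with a reject option.
    eta: conditional probability P(Y=1|X=x); d: cost of rejecting (d < 1/2).
    Classifying costs min(eta, 1 - eta) in expectation, so it is optimal to
    reject exactly when that exceeds d, i.e. when eta is close to 1/2."""
    if min(eta, 1.0 - eta) > d:
        return "reject"
    return 1 if eta >= 0.5 else -1

d = 0.2   # assumed reject cost
print(decide(0.9, d), decide(0.1, d), decide(0.5, d))   # → 1 -1 reject
```

The decision boundary in eta sits at the critical values d and 1 - d, which is why rates improve when P(Y=1|X) is unlikely to fall near them.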