244 results for Set-Valued Functions


Abstract:

Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features, and a scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. Given only a few initial putative correspondences, it allows the extraction of large numbers of additional accurate correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. The new method succeeds in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
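
To make the Hessian-based affine adaptation idea concrete, the following minimal Python sketch estimates an elliptical local shape from the Hessian of Gaussian derivatives at an assumed interest point. The function name, the toy blob image and the chosen scale are illustrative assumptions; this is not the thesis's WiDense implementation.

```python
# Minimal sketch: estimating the local affine shape of a blob-like feature from
# the Hessian of Gaussian-smoothed intensities, analogous to how the second
# moment matrix is used in classical affine adaptation.
import numpy as np
from scipy import ndimage


def hessian_shape(image, r, c, sigma=2.0):
    """Return a 2x2 shape-normalising transform from the local Hessian at (r, c)."""
    img = image.astype(float)
    # Second-order Gaussian derivatives, scale-normalised by sigma**2.
    Lrr = ndimage.gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Lcc = ndimage.gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Lrc = ndimage.gaussian_filter(img, sigma, order=(1, 1)) * sigma**2

    H = np.array([[Lrr[r, c], Lrc[r, c]],
                  [Lrc[r, c], Lcc[r, c]]])

    # The Hessian's eigen-structure describes an elliptical blob; the inverse
    # square root of |H| maps that ellipse to a circle (shape normalisation).
    vals, vecs = np.linalg.eigh(H)
    vals = np.abs(vals) + 1e-12           # guard against a singular Hessian
    A = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return A / np.sqrt(np.linalg.det(A))  # keep unit determinant


# Toy usage: an anisotropic Gaussian blob yields an elongated normalising transform.
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-(((y - 32) / 4.0) ** 2 + ((x - 32) / 9.0) ** 2))
print(hessian_shape(blob, 32, 32, sigma=3.0))
```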

Abstract:

This paper presents the outcomes of a research project that focused on developing a set of surrogate parameters to evaluate urban stormwater quality using simulated rainfall. The use of surrogate parameters has the potential to enhance the rapid generation of urban stormwater quality data from on-site measurements and thereby reduce resource-intensive laboratory analysis. The samples collected from rainfall simulations were tested for a range of physico-chemical parameters which are key indicators of nutrients, solids and organic matter. The analysis identified [total dissolved solids (TDS) and dissolved organic carbon (DOC)], [total solids (TS) and total organic carbon (TOC)], [turbidity (TTU)], [electrical conductivity (EC)], and [TTU and EC] as appropriate surrogate parameters for dissolved total nitrogen (DTN), total phosphorus (TP), total suspended solids (TSS), TDS and TS, respectively. The relationships obtained for DTN-TDS, DTN-DOC, and TP-TS demonstrated good portability potential. The portability of the relationship developed for TP and TOC was found to be unsatisfactory, and the relationships developed for TDS-EC and TS-EC also demonstrated poor portability.
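
As a rough illustration of the kind of surrogate relationship evaluated in the study, the sketch below fits a simple least-squares line predicting DTN from TDS. The numbers are made-up placeholders, not data from the paper.

```python
# Illustrative sketch only: a linear surrogate relationship (DTN predicted from TDS).
import numpy as np

tds = np.array([40.0, 55.0, 72.0, 90.0, 120.0, 150.0])   # hypothetical TDS (mg/L)
dtn = np.array([0.6, 0.8, 1.0, 1.3, 1.7, 2.1])           # hypothetical DTN (mg/L)

slope, intercept = np.polyfit(tds, dtn, 1)                # least-squares line
pred = slope * tds + intercept
r2 = 1 - np.sum((dtn - pred) ** 2) / np.sum((dtn - dtn.mean()) ** 2)

print(f"DTN ~ {slope:.4f} * TDS + {intercept:.3f}  (R^2 = {r2:.3f})")
# "Portability" would then be assessed by applying the fitted relationship to an
# independent site's data and checking the prediction error.
```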

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts – variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural and other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from factors that might help to explain unaccounted-for variation in crashes across sites.
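
A minimal sketch of the model family discussed here, assuming the common NB2 parameterisation with a covariate-dependent dispersion parameter: mu_i = exp(x_i' beta), phi_i = exp(z_i' gamma), Var(y_i) = mu_i + mu_i^2 / phi_i. It only writes the log-likelihood that an MCMC sampler or a maximum-likelihood routine would use; it is not the authors' code, and the toy data are invented.

```python
# Negative binomial crash-frequency model with covariate-dependent dispersion.
import numpy as np
from scipy.special import gammaln


def nb_loglik(beta, gamma, y, X, Z):
    """Log-likelihood of an NB2 model whose dispersion varies with covariates Z."""
    mu = np.exp(X @ beta)        # expected crash count per site
    phi = np.exp(Z @ gamma)      # site-specific dispersion parameter
    return np.sum(
        gammaln(y + phi) - gammaln(phi) - gammaln(y + 1)
        + phi * np.log(phi / (phi + mu))
        + y * np.log(mu / (phi + mu))
    )


# Toy check with made-up data: 5 sites, log-AADT as the only covariate.
rng = np.random.default_rng(0)
X = Z = np.column_stack([np.ones(5), np.log(rng.uniform(1e3, 2e4, 5))])
y = rng.poisson(5, size=5).astype(float)
print(nb_loglik(np.array([-3.0, 0.5]), np.array([0.5, 0.0]), y, X, Z))
```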

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states – perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in such data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
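
The following simulation sketch follows the spirit of the experiment described above, under our own assumed exposure levels and crash probabilities rather than the paper's exact design: Poisson trials with low exposure yield a share of zero-crash sites exceeding what a single-mean Poisson fit implies, with no dual-state mechanism involved.

```python
# Excess zeros from low exposure and unmodelled heterogeneity, not a dual state.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 2000
exposure = rng.integers(5, 400, size=n_sites)     # uneven, often short observation windows
p = rng.uniform(5e-4, 5e-3, size=n_sites)         # small, unequal per-trial crash probabilities

crashes = rng.binomial(exposure, p)               # observed crash counts per site (Poisson trials)

observed_zero_share = np.mean(crashes == 0)
poisson_zero_share = np.exp(-crashes.mean())      # zeros implied by a single-mean Poisson fit

print(f"observed share of zero-crash sites: {observed_zero_share:.3f}")
print(f"share implied by a Poisson fit:     {poisson_zero_share:.3f}")
# The observed share exceeds the Poisson-implied share: apparent "excess" zeros
# arise from low exposure and heterogeneity, not from a safe/unsafe dual state.
```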

Abstract:

In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
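
A small numerical sketch of why fixed proportions fail, using two hypothetical type-specific SPFs of the common power form in major and minor AADT; the coefficients are invented for illustration, not those estimated in the paper.

```python
# Crash-type proportions implied by two hypothetical type-specific SPFs.
import numpy as np

def spf(aadt_major, aadt_minor, b0, b1, b2):
    """Expected annual crash frequency for one crash type (assumed SPF form)."""
    return np.exp(b0) * aadt_major**b1 * aadt_minor**b2

aadt_major = np.array([2_000, 10_000, 40_000, 82_000], dtype=float)
aadt_minor = 400.0   # fixed minor-road AADT, as in the example above

intersecting = spf(aadt_major, aadt_minor, b0=-9.0, b1=0.55, b2=0.60)
same_dir     = spf(aadt_major, aadt_minor, b0=-13.0, b1=1.05, b2=0.25)

share_intersecting = intersecting / (intersecting + same_dir)
for a, s in zip(aadt_major, share_intersecting):
    print(f"major AADT {a:>7.0f}: intersecting-direction share = {s:.2f}")
# Because the AADT exponents differ between crash types, the proportion is not
# constant in AADT, so applying fixed proportions to a total-crash SPF
# misallocates crashes by type.
```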

Abstract:

Railway signaling facilitates two main functions, namely train detection and train control, in order to maintain safe separation between trains. Track circuits, based on simple open/closed-circuit principles, are the most commonly used means of train detection; the subsequent adoption of axle counters further allows trains to be detected under adverse track conditions. However, with electrification and power-electronic traction drive systems aggravating the electromagnetic interference in the vicinity of the signaling system, railway engineers often find unstable or even faulty operation of track circuits and axle-counting systems, which inevitably jeopardizes the safe operation of trains. A new means of train detection, completely free from electromagnetic interference, is therefore required for the modern railway signaling system. This paper presents a novel optical fiber sensor signaling system. The sensor operation, field setup, axle detection solution set, and test results of a trial installation on a busy suburban railway line are given.

Abstract:

In Australian universities, journalism educators usually come to the academy from the journalism profession and consequently place a high priority on leading students to develop a career-focussed skill set. The changing nature of the technological, political and economic environments and the professional destinations of journalism graduates place demands on journalism curricula and educators alike. The profession is diverse, such that the better description is of many ‘journalisms’ rather than one ‘journalism’, with consequential pressure on curricula to extend beyond the traditional skill set, where practical ‘writing’ and ‘editing’ skills dominate, to incorporate critical theory and the social construction of knowledge. A parallel set of challenges faces academic staff operating in a higher education environment where change is the only constant and research takes precedence over curriculum development. In this paper, three educators at separate universities report on their attempts to implement curriculum change to imbue graduates with better skills and attributes, such as enhanced teamwork, problem solving and critical thinking, to operate in the divergent environment of 21st century journalism. The paper uses narrative case study to illustrate the different approaches. Data collected from formal university student evaluations inform the narratives, along with rich but less formal qualitative data including anecdotal student comments and student reflective assessment presentations. Comparison of the three approaches illustrates the dilemmas academic staff face when teaching in disciplines that are impacted by rapid changes in technology requiring new pedagogical approaches. Recommendations for future directions are considered against the background of learning purpose.

Abstract:

A SNP genotyping method was developed for E. faecalis and E. faecium using the 'Minimum SNPs' program. SNP sets were interrogated using allele-specific real-time PCR. SNP-typing sub-divided clonal complexes 2 and 9 of E. faecalis and 17 of E. faecium, members of which cause the majority of nosocomial infections globally.

Abstract:

This paper reads season 1 of the critically acclaimed Canadian television series “Slings & Arrows” (2003). This six-episode series is set in a fictionalised version of the Stratford Festival, and tells the story of a plagued production of Shakespeare’s Hamlet. It follows the play’s rehearsal after the death of the festival’s artistic director; Geoffrey Tennant (himself a plagued Hamlet) takes over the role of director, and must face his past in order to produce a Hamlet that will save the festival, redeem his reputation, and repair his interpersonal relationships. Drawing on popular and theatrical understandings of Shakespeare’s play, the series negotiates tropes of metatheatre, filiality, cultural production and consumption, in order to demonstrate the ongoing relevance and legitimacy of “Shakespeare” in the twenty-first century. The “Slings & Arrows” narrative revolves around the doubled plot of Hamlet and the experiences of the company mounting Hamlet. In quite obvious ways, the show thus thematises ways in which Shakespeare can be used to read one’s own life and world. In the broader sense, however, the show also offers theatre/performance as a catalyst for affect. In doing so, the show functions as a relatively straight adaptation of Hamlet, and a metatheatrical/metafictional commentary on the functions of Hamlet within contemporary culture. In Shakespeare’s play, the production of “The Mouse-Trap” proves, both to Hamlet and the audience, the legitimacy of the ghost’s claims. Similarly, in “Slings & Arrows”, the successful performance of Hamlet legitimises Geoffrey’s position as artistic director of the festival, and affirms for the viewer the value of Shakespearean production in contemporary culture. In each text, theatre/performance enables and legitimises a son carrying out a dead father’s wishes in order to restore or reproduce socio-cultural order. The metatheatrics of these gestures engage the reader/viewer in a self-reflexive process whereby the ‘value’ of theatre is thematised and performed, and the consumer is positioned as the arbiter and agent of that value: complicit in its production even as they are the site of its consumption.

Abstract:

In computational linguistics, information retrieval and applied cognition, words and concepts are often represented as vectors in high-dimensional spaces computed from a corpus of text. These high-dimensional spaces are often referred to as Semantic Spaces. We describe a novel and efficient approach to computing these semantic spaces via the use of complex-valued vector representations. We report on the practical implementation of the proposed method and some associated experiments. We also briefly discuss how the proposed system relates to previous theoretical work in Information Retrieval and Quantum Mechanics, and how the notions of probability, logic and geometry are integrated within a single Hilbert space representation. In this sense the proposed system has more general application and gives rise to a variety of opportunities for future research.
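
As an illustration of what a complex-valued semantic space can look like (an assumed construction for exposition, not necessarily the representation proposed in the paper), the sketch below assigns random unit-modulus phase vectors to context words, accumulates them into target-word vectors, and scores similarity with a Hermitian inner product.

```python
# Toy complex-valued distributional vectors in a Hilbert-space-style setting.
import numpy as np

rng = np.random.default_rng(7)
DIM = 512

def random_phasor(dim=DIM):
    """A random unit-modulus complex vector (one per context word)."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, dim))

# Tiny made-up corpus: (target, context) co-occurrence pairs.
contexts = {w: random_phasor() for w in ["bank", "money", "river", "water", "loan"]}
cooccurrences = [
    ("deposit", "bank"), ("deposit", "money"), ("deposit", "loan"),
    ("stream", "river"), ("stream", "water"), ("stream", "bank"),
]

semantic = {}
for target, ctx in cooccurrences:
    semantic[target] = semantic.get(target, np.zeros(DIM, complex)) + contexts[ctx]

def similarity(u, v):
    """Real part of the normalised Hermitian inner product <u, v>."""
    return float(np.real(np.vdot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Targets sharing a context ("bank") receive a positive similarity score.
print("deposit ~ stream:", round(similarity(semantic["deposit"], semantic["stream"]), 3))
```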

Abstract:

These cards are designed as a resource for implementing participatory action research (PAR) in social programs. Each card covers one of the five key stages of PAR as outlined in the manual 'On PAR – Using Participatory Action Research to Improve Early Intervention' (Crane and O'Regan 2010).

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the model order, overfitting the corneal surface, because of the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose to use the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
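
A sketch of EDC-style model-order selection on a generic one-dimensional Legendre basis standing in for the Zernike expansion. The criterion below uses the generic form N*ln(RSS/N) + k*C_N with C_N = sqrt(N*ln N) as one penalty satisfying the EDC growth conditions; this choice is an assumption for illustration, not necessarily the exact penalty used in the paper.

```python
# Model-order selection with an EDC-style penalty on a stand-in polynomial basis.
import numpy as np
from numpy.polynomial import legendre


def select_order(x, y, max_order=20):
    """Return the polynomial order minimising the EDC-style criterion."""
    n = len(y)
    c_n = np.sqrt(n * np.log(n))                     # one admissible EDC penalty
    scores = []
    for k in range(1, max_order + 1):
        basis = legendre.legvander(x, k)             # N x (k+1) design matrix
        coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
        rss = np.sum((y - basis @ coef) ** 2)
        scores.append(n * np.log(rss / n) + (k + 1) * c_n)
    return int(np.argmin(scores)) + 1


# Toy surface: a 5th-order profile plus measurement noise; the criterion should
# typically recover order 5 rather than overfitting.
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 400)
y = legendre.legval(x, [0.0, 0.2, 0.0, 0.5, 0.0, 0.8]) + rng.normal(0, 0.05, x.size)
print("selected order:", select_order(x, y))
```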