290 results for Effective rainfall
Abstract:
As the popularity of video as an information medium rises, the amount of video content that we produce and archive keeps growing. This creates a demand for shorter representations of videos to assist the task of video retrieval. The traditional solution is to have humans watch these videos and write textual summaries based on what they saw. This summarisation process, however, is time-consuming, and much of the useful audio-visual information contained in the original video can be lost. Video summarisation aims to turn a full-length video into a more concise version that preserves as much information as possible. The central problem of video summarisation is managing the trade-off between how concise and how representative a summary is, and a summarisation scheme must also address usability concerns. To solve these problems, this research creates an automatic video summarisation framework that combines and improves on existing video summarisation techniques, with a focus on practicality and user satisfaction. We also investigate the need for different summarisation strategies for different kinds of videos, for example news, sports, or TV series. Finally, we develop a video summarisation system based on the framework, which is validated by subjective and objective evaluation. The evaluation results show that the proposed framework is effective for creating video skims, producing a high user satisfaction rate while having reasonably low computing requirements. We also demonstrate that the techniques presented in this research can be used to visualise video summaries as web pages showing useful information drawn both from the video itself and from external sources.
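As a hedged illustration of one common building block of such frameworks (not the thesis's actual pipeline), shot boundaries can be detected by measuring the colour-histogram distance between consecutive frames, keeping the first frame of each shot as a keyframe for the skim; the threshold below is illustrative, not tuned:

    import cv2

    def keyframes(path, threshold=0.4):
        """Minimal video-skim sketch: detect shot boundaries by
        colour-histogram distance between consecutive frames and
        keep the first frame index of each shot."""
        cap = cv2.VideoCapture(path)
        keep, prev, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                             [0, 256, 0, 256, 0, 256])
            h = cv2.normalize(h, h).flatten()
            # A large histogram distance marks a likely shot boundary.
            if prev is None or cv2.compareHist(
                    prev, h, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                keep.append(idx)
            prev, idx = h, idx + 1
        cap.release()
        return keep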
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that all of them can identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
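For reference, a sketch of standard multivariate forms of these loss functions (the thesis's exact specifications may differ), where H_t is the forecast covariance matrix, Σ̂_t a volatility proxy such as the realised covariance or the outer product of daily returns, and ι a vector of ones:

    L_{\mathrm{MSE}}(\hat{\Sigma}_t, H_t)   = \operatorname{tr}\!\left[ (\hat{\Sigma}_t - H_t)'(\hat{\Sigma}_t - H_t) \right]
    L_{\mathrm{QLIKE}}(\hat{\Sigma}_t, H_t) = \log\lvert H_t \rvert + \operatorname{tr}\!\left( H_t^{-1} \hat{\Sigma}_t \right)
    L_{\mathrm{PV}}(\hat{\Sigma}_t, H_t)    = w_t' \hat{\Sigma}_t w_t, \qquad w_t = \frac{H_t^{-1} \iota}{\iota' H_t^{-1} \iota}

The portfolio variance loss evaluates the minimum-variance weights implied by each forecast against the proxy, so a forecast closer to the true covariance yields a lower realised portfolio variance.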
Abstract:
Individuals, community organisations and industry have always been involved to varying degrees in efforts to address the Queensland road toll. Traditionally, road crash prevention efforts have been led by state and local government organisations. While community and industry groups have sometimes become involved (e.g. the Driver Reviver campaign), their efforts have largely been uncoordinated and under-resourced. A common strength of these initiatives lies in the energy, enthusiasm and persistence of community-based efforts; a common weakness has been a lack of knowledge, awareness or prioritisation of evidence-based interventions, or of the capacity to build on collaborative efforts. In 2000, the Queensland University of Technology's Centre for Accident Research and Road Safety – Queensland (CARRS-Q) identified this issue as an opportunity to bridge practice and research, and began acknowledging a selection of these initiatives, in partnership with the RACQ, through the Queensland Road Safety Awards program. After nine years it became apparent that this connection needed strengthening, and the Centre established a Community Engagement Workshop in 2009 as part of the overall Awards program. Aiming to give community participants opportunities to see, hear and discuss the experiences of others, the event was further developed, and in 2010 a stand-alone Queensland Road Safety Awards Community Engagement Workshop was held in collaboration with the Queensland Department of Transport and Main Roads, the RACQ, the Queensland Police Service and Leighton Contractors Pty Ltd. Each collaborating organisation recognised a need to mobilise the community through effective information and knowledge sharing, and recognised that learning and discussion can produce lasting behaviour change and action in this often emotive, yet not always evidence-based, area. This free event featured speakers representing successful projects from around Australia and overseas. Attendees were encouraged to interact with the speakers, to ask questions and, most importantly, to make connections with other attendees, building a 'community road safety army' working throughout Australia on projects underpinned by evaluated research. The workshop facilitated the integration of research, policy and grass-roots action, enhancing the success of community road safety initiatives. For the collaboration partners, the event enabled them to transfer their knowledge through an engaged, more personal communication process. An analysis of the event's success factors identified openness to community groups and individuals, relevance of content to local initiatives, generous provision of online materials, and ongoing communication with key staff members as critical. This supports the view that the university can directly provide both the leadership and the research needed for effective and credible community-based initiatives to address injury and death on the roads.
Abstract:
Evasive change-of-direction manoeuvres (agility skills) are fundamental in rugby union. In this study, we explored the attributes of agility skill execution as they relate to effective attacking strategies in rugby union. Seven Super 14 games were coded using variables that assessed team patterns and individual movement characteristics during attacking ball carries. The results indicated that tackle-breaks are a key determinant of try-scoring ability and team success in rugby union. Tackle-breaks were promoted when the attacking ball carrier received the ball at high speed, at least two body lengths from the defence line, and against an isolated defender. Furthermore, executing a side-step evasive manoeuvre at a change-of-direction angle of 20–60° and a distance of one to two body lengths from the defence, and then straightening the running line after the initial direction change at an angle of 20–60°, was associated with tackle-breaks. This study provides critical insight into the attributes of agility skill execution that are associated with effective ball carries in rugby union.
Abstract:
Microbial pollution in water periodically affects human health in Australia, particularly in times of drought and flood, and there is an increasing need to control waterborne microbial pathogens. Methods that allow the origin of faecal contamination in water to be determined are generally referred to as Microbial Source Tracking (MST). Various approaches have been evaluated as indicators of microbial pathogens in water samples, including the detection of different microorganisms and various host-specific markers. To date, however, no universal MST method can reliably determine the source (human or animal) of faecal contamination, so the use of multiple approaches is frequently advised. MST is currently recognised as a research tool rather than something to be included in routine practice. The main focus of this research was to develop novel and universally applicable methods to meet the demand for MST methods in routine testing of water samples. Escherichia coli was chosen initially as the object organism for our studies because, historically and globally, it is the standard indicator of microbial contamination in water. In this thesis, three approaches are described: single nucleotide polymorphism (SNP) genotyping, clustered regularly interspaced short palindromic repeats (CRISPR) screening using high resolution melt analysis (HRMA), and phage detection based on CRISPR types. The advantage of combining SNP genotyping with CRISPR screening is discussed in this study. For the first time, highly discriminatory single nucleotide polymorphism interrogation of an E. coli population was applied to identify host-specific clusters. Six human-specific and one animal-specific SNP profiles were revealed. SNP genotyping was successfully applied in field investigations of the Coomera watershed, South-East Queensland, Australia. Four human-specific profiles ([11], [29], [32] and [45]) and one animal-specific SNP profile ([7]) were detected in water, and two human-specific profiles ([29] and [11]) were prevalent in the samples over a period of years. Rainfall (24 and 72 hours), tide height and time, general land use (rural, suburban), season, distance from the river mouth and salinity showed no relationship with the diversity of SNP profiles present in the Coomera watershed (p values > 0.05). Nevertheless, the SNP genotyping method can identify and distinguish between human- and non-human-specific E. coli isolates in water sources within one day. In some samples, only mixed profiles were detected. To further investigate host-specificity in these mixed profiles, a CRISPR screening protocol was developed and applied to the set of E. coli isolates previously analysed for SNP profiles. CRISPR loci, which record previous attacks by DNA coliphages, were considered a promising tool for detecting host-specific markers in E. coli. Spacers in CRISPR loci could also reveal the dynamics of virulence in E. coli, as well as in other waterborne pathogens. Although host-specificity was not observed in the set of E. coli analysed, CRISPR alleles were shown to be useful in detecting the geographical site of sources. HRMA allows 'different' and 'same' CRISPR alleles to be determined and can be introduced into water monitoring as a cost-effective and rapid method.
Overall, we show that the identified human-specific SNP profiles [11], [29], [32] and [45] can be useful as marker genotypes globally for the identification of human faecal contamination in water. The SNP typing approach developed in the current study can be used in water monitoring laboratories as an inexpensive, high-throughput and easily adapted protocol. A unique approach based on E. coli spacers was developed to search for unknown phages and to examine host-specificity in phage sequences. Preliminary experiments on recombinant plasmids showed that this method can recover phage sequences. Future studies will determine the host-specificity of DNA phage genotyping as soon as the first reliable sequences are acquired. Ultimately, only the application of multiple MST approaches will allow the character of microbial contamination to be identified with greater confidence and reliability.
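As a minimal illustration of how such marker genotypes could be used in a routine screening script (the profile numbers follow the abstract; the function and data structures are hypothetical):

    # Human- and animal-specific SNP profile numbers reported above.
    HUMAN_PROFILES = {11, 29, 32, 45}
    ANIMAL_PROFILES = {7}

    def classify_isolate(snp_profile_id):
        """Assign a faecal-source category to an E. coli isolate
        from its SNP profile number."""
        if snp_profile_id in HUMAN_PROFILES:
            return "human"
        if snp_profile_id in ANIMAL_PROFILES:
            return "animal"
        # Mixed or unknown profiles need CRISPR/HRMA follow-up.
        return "unresolved"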
Abstract:
In natural estuaries, scalar diffusion and dispersion are driven by turbulence. In the present study, detailed turbulence measurements were conducted in a small subtropical estuary with semi-diurnal tides under neap tide conditions. Three acoustic Doppler velocimeters were installed mid-estuary at fixed locations close together, and the units were sampled simultaneously and continuously at relatively high frequency for 50 h. The results illustrated the influence of tidal forcing in the small estuary, although low-frequency longitudinal velocity oscillations were observed and believed to be induced by external resonance. The boundary shear stress data implied that the turbulent shear in the lower flow region was one order of magnitude larger than the boundary shear itself. The observations differed from turbulence data in laboratory channels; a key feature of the natural estuary flow was the significant three-dimensional effects associated with strong secondary currents, including transverse shear events. The velocity covariances and triple correlations, as well as the backscatter intensity and its covariances, were calculated for the entire field study. The covariances of the longitudinal velocity component showed some tidal trend, while the covariances of the transverse horizontal velocity component exhibited trends reflecting changes in secondary current patterns between ebb and flood tides. The triple correlation data tended to show some differences between ebb and flood tides. The acoustic backscatter intensity data were characterised by large fluctuations throughout the study, with a dimensionless fluctuation intensity I′b/Ib between 0.46 and 0.54. An unusual feature of the field study was some moderate rainfall prior to and during the first part of the sampling period: visual observations showed surface scars and marked channels, and some small transient fronts were observed.
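A minimal sketch of the velocity statistics described here, computed from ADV component time series (despiking and coordinate rotation, which a field study would require, are omitted):

    import numpy as np

    def turbulence_stats(u, v, w):
        """Velocity covariances (kinematic Reynolds stresses) and triple
        correlations from ADV velocity components u, v, w (1-D arrays)."""
        up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
        cov = {
            "u'u'": np.mean(up * up),
            "v'v'": np.mean(vp * vp),
            "u'w'": np.mean(up * wp),  # vertical momentum flux term
        }
        triple = {
            "u'u'w'": np.mean(up * up * wp),
            "u'w'w'": np.mean(up * wp * wp),
        }
        return cov, triple

In practice these statistics would be computed over sliding windows much shorter than the tidal period, so that ebb/flood trends like those reported above can be resolved.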
Abstract:
Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate), the default query model was replaced by the stationary distribution of the query. Modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
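A minimal sketch of the core idea, assuming a sliding-window co-occurrence count and uniform smoothing to make the chain ergodic (the paper's actual estimation details may differ):

    import numpy as np

    def stationary_term_distribution(term_ids, vocab_size, window=5, tol=1e-10):
        """Estimate the stationary distribution of a term co-occurrence
        Markov chain built from one text (a list of term ids)."""
        # Count co-occurrences within a sliding window.
        C = np.zeros((vocab_size, vocab_size))
        for i, t in enumerate(term_ids):
            for u in term_ids[max(0, i - window):i + window + 1]:
                C[t, u] += 1
        # Small uniform smoothing keeps the chain irreducible and
        # aperiodic (ergodic), guaranteeing a unique stationary law.
        P = C + 1e-6
        P /= P.sum(axis=1, keepdims=True)
        # Power iteration: pi = pi P at the fixed point.
        pi = np.full(vocab_size, 1.0 / vocab_size)
        while True:
            nxt = pi @ P
            if np.abs(nxt - pi).sum() < tol:
                return nxt
            pi = nxt

The resulting distribution replaces the maximum-likelihood unigram model of the query (or document) in an otherwise standard language-modeling ranker.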
Abstract:
Building an efficient and effective search engine is a challenging task. In this paper, we present the efficiency and effectiveness of our search engine at the INEX 2009 Efficiency and Ad Hoc Tracks. We have developed a simple and effective pruning method for fast query evaluation, and used a two-step process for Ad Hoc retrieval. The overall results from both tracks show that our search engine performs very competitively in terms of both efficiency and effectiveness.
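The pruning method itself is not detailed in this abstract; as an illustrative sketch only (not necessarily the authors' approach), a classic way to speed up term-at-a-time query evaluation is to cap the number of score accumulators:

    import heapq
    from collections import defaultdict

    def topk_pruned(query_terms, postings, idf, k=10, max_accumulators=10000):
        """Term-at-a-time evaluation with a cap on score accumulators.
        `postings[t]` maps doc ids to term frequencies; `idf[t]` weights
        terms. Rare (high-idf) terms are processed first so the most
        promising documents claim accumulators early."""
        acc = defaultdict(float)
        for t in sorted(query_terms, key=lambda t: -idf[t]):
            for doc, tf in postings[t].items():
                # Once the budget is spent, only existing candidates
                # may gain score; no new documents are admitted.
                if doc in acc or len(acc) < max_accumulators:
                    acc[doc] += idf[t] * tf
        return heapq.nlargest(k, acc.items(), key=lambda x: x[1])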
Abstract:
Most recommendation methods employ item-item similarity measures or use ratings data to generate recommendations. These methods use traditional two-dimensional models to find interrelationships between similar users and products. This paper proposes a novel recommendation method that uses a multi-dimensional model, the tensor, to group similar users based on common search behaviour, and then finds associations within such groups for making effective inter-group recommendations. Web log data is multi-dimensional. Unlike vector-based methods, tensors can capture strong correlations and latent relationships between such similar instances, consisting of users and searches. Non-redundant rules derived from these user-search associations are then used to make recommendations to users.
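A hedged sketch of the general tensor idea (a HOSVD-style unfolding rather than the paper's specific decomposition; all shapes and names are illustrative):

    import numpy as np

    # Hypothetical user x query-term x clicked-item count tensor from web logs.
    users, terms, items = 50, 40, 30
    T = np.random.default_rng(0).poisson(0.1, size=(users, terms, items)).astype(float)

    # Mode-1 (user) unfolding: each row is a user's full term-item profile.
    X = T.reshape(users, terms * items)

    # Truncated SVD of the unfolding gives a low-rank latent user space,
    # the first step of a HOSVD-style decomposition.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rank = 5
    user_factors = U[:, :rank] * s[:rank]

    def similar_users(u, topn=5):
        """Users close in the latent space share search behaviour; their
        groups can then be mined for association rules."""
        d = np.linalg.norm(user_factors - user_factors[u], axis=1)
        return np.argsort(d)[1:topn + 1]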
Abstract:
With the growth of the Web, e-commerce activities are also becoming popular. Product recommendation is an effective way of marketing a product to potential customers. Based on a user's previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Too many irrelevant recommendations, moreover, worsen the information overload problem for the user. This happens because models based on vectors and matrices are unable to find the latent relationships that exist between users and searches. Identifying user behaviour is a complex process that usually involves comparing the searches a user has made. In most cases, traditional vector- and matrix-based methods are used to find the prominent features a user has searched for. In this research we employ tensors to find the relevant features searched by users, and these features are then used for making recommendations. Evaluation on real datasets shows the effectiveness of such recommendations over vector- and matrix-based methods.