935 results for Byrsonima basiloba extract
Abstract:
Research in the early years places increasing importance on participatory methods to engage children. The playback of video-recording to stimulate conversation is a research method that enables children's accounts to be heard and attends to a participatory view. During video-stimulated sessions, participants watch an extract of video-recording of a specific event in which they were involved, and then account for their participation in that event. Using an interactional perspective, this paper draws distinctions between video-stimulated accounts and a similar research method, popular in education, that of video-stimulated recall. Reporting upon a study of young children's interactions in a playground, video-stimulated accounts are explicated to show how the participants worked toward the construction of events in the video-stimulated session. This paper discusses how the children account for complex matters within their social worlds, and manage the accounting of others in the video-stimulated session. When viewed from an interactional perspective and used alongside fine-grained analytic approaches, video-stimulated accounts are an effective method to provide the standpoint of the children involved and further the competent child paradigm.
Abstract:
Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage. This is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view covariant local feature extractors. The second area involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over the existing methods. It allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency.
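The scale-selection idea above can be contrasted with the classical baseline it improves on: scale-normalised Laplacian (blob) scale selection. The sketch below is not the thesis's primal-sketch method; it uses a closed-form response for a 1-D Gaussian blob, and the blob width `s0` and the sampled scale grid are illustrative assumptions.

```python
import math

def log_response(sigma, s0):
    # scale-normalised Laplacian response at the centre of a 1-D Gaussian
    # blob of width s0, smoothed at scale sigma (closed form)
    return sigma ** 2 * s0 / (s0 ** 2 + sigma ** 2) ** 1.5

s0 = 4.0                                       # assumed blob width
scales = [1.0 + 0.25 * k for k in range(40)]   # sampled scale space
best = max(scales, key=lambda s: log_response(s, s0))
# scale-space theory predicts the peak at sigma = sqrt(2) * s0
```

Classical selection picks the scale maximising this response over a fixed grid; the primal-sketch approach described above instead allows more freedom in how the scale space is sampled.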
Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
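The Hessian-based shape estimation can be sketched on an idealised, axis-aligned Gaussian blob (the blob, its widths and the finite-difference step are assumptions for illustration; a real detector computes the Hessian over image intensities at a detected feature):

```python
import math

def blob(x, y, a=2.0, b=4.0):
    # idealised anisotropic Gaussian blob standing in for an image patch
    return math.exp(-(x * x / (2 * a * a) + y * y / (2 * b * b)))

h = 1e-3  # finite-difference step
fxx = (blob(h, 0) - 2 * blob(0, 0) + blob(-h, 0)) / (h * h)
fyy = (blob(0, h) - 2 * blob(0, 0) + blob(0, -h)) / (h * h)
fxy = (blob(h, h) - blob(h, -h) - blob(-h, h) + blob(-h, -h)) / (4 * h * h)
# for this axis-aligned blob the Hessian is diagonal (fxy ~ 0), so its
# eigenvalues are fxx and fyy, and the blob's anisotropy is recovered as
anisotropy = math.sqrt(fxx / fyy)  # ~ b / a = 2
```

For a rotated blob one would eigen-decompose the full 2x2 Hessian; its eigenvectors and eigenvalues give the axes of the adapted affine region.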
Abstract:
An episodic recreation of Hibberd's Stretch of the Imagination presented as a performance extract as part of the Enter the New Wave Symposium at Melbourne University September 2007.
Abstract:
Expert panels have been used extensively in the development of the "Highway Safety Manual" to extract research information from highway safety experts. While the panels have been used to recommend agendas for new and continuing research, their primary role has been to develop accident modification factors—quantitative relationships between highway safety and various highway safety treatments. Because the expert panels derive quantitative information in a “qualitative” environment and because their findings can have significant impacts on highway safety investment decisions, the expert panel process should be described and critiqued. This paper is the first known written description and critique of the expert panel process and is intended to serve professionals wishing to conduct such panels.
Abstract:
Since its debut in 2001, Wikipedia has attracted the attention of many researchers in different fields. In recent years, researchers in the area of ontology learning have realised the huge potential of Wikipedia as a source of semi-structured knowledge, and several systems have used it as their main source of knowledge. However, the techniques used to extract semantic information vary greatly, as do the resulting ontologies. This paper introduces a framework to compare ontology learning systems that use Wikipedia as their main source of knowledge. Six prominent systems are compared and contrasted using the framework.
Abstract:
The burden of rising health care expenditures has created a demand for information regarding the clinical and economic outcomes associated with complementary and alternative medicines. Meta-analyses of randomized controlled trials have found Hypericum perforatum preparations to be superior to placebo and as effective as standard antidepressants in the acute treatment of mild to moderate depression. A clear advantage over antidepressants has been demonstrated in terms of reduced frequency of adverse effects, lower treatment withdrawal rates and good compliance, key variables affecting the cost-effectiveness of a given form of therapy. The most important risk associated with use is potential interactions with other drugs, but this may be mitigated by using extracts with low hyperforin content. As the indirect costs of depression are greater than five times direct treatment costs, and given the rising cost of pharmaceutical antidepressants, the comparatively low cost of Hypericum perforatum extract makes it worthy of consideration in the economic evaluation of mild to moderate depression treatments.
Abstract:
The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, due to revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Therefore, economiser repair decisions are critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure. If left unfixed, a leak in a tube can cause additional leaks in adjacent tubes which will need more time to repair. In addition, when choosing repair times, one also needs to consider a number of other uncertain inputs such as future electricity market prices and demands. Although many different decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a Decision Tree based method to meet this need. An industrial case study is presented to demonstrate the application of our method.
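As a toy illustration of the decision-tree idea (the branch probabilities and costs below are invented, not the paper's case-study figures), each repair option can be scored by its expected cost over the uncertain outcomes:

```python
# each repair option maps to outcome branches of (probability, total cost in $)
options = {
    "repair_now": [(1.0, 2.0e6)],             # planned outage, known cost
    "defer_repair": [(0.6, 1.2e6),            # prices stay low, leak contained
                     (0.4, 4.5e6)],           # leak spreads during a price spike
}

def expected_cost(branches):
    # fold a decision-tree chance node into a single expected value
    return sum(p * c for p, c in branches)

best = min(options, key=lambda name: expected_cost(options[name]))
```

With these hypothetical numbers, deferring has an expected cost of $2.52M against $2.0M for an immediate repair, so the tree recommends repairing now.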
Abstract:
A better understanding of the behaviour of prepared cane and bagasse during the crushing process is believed to be an essential prerequisite for further improvements to the crushing process. Improvements could be made, for example, in throughput, sugar extraction, and bagasse moisture. The ability to model the mechanical behaviour of bagasse as it is squeezed in a milling unit to extract juice would help identify how to improve the current process to reduce final bagasse moisture. However, an adequate mechanical model for bagasse is currently not available. Previous investigations have proven with certainty that juice flow through bagasse obeys Darcy's permeability law, that the grip of the rough surface of the grooves on the bagasse can be represented by the Mohr-Coulomb failure criterion for soils, and that the internal mechanical behaviour of the bagasse is critical state behaviour similar to that for sand and clay. Current Finite Element Models (FEM) available in commercial software have adequate permeability models. However, the same commercial software does not contain an adequate mechanical model for bagasse. Progress has been made in the last ten years towards implementing a mechanical model for bagasse in finite element software code. This paper builds on that progress and carries out a further step towards obtaining an adequate material model.
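The Mohr-Coulomb criterion mentioned above has a simple closed form for the shear strength of the groove-bagasse interface; the normal stress, cohesion and friction-angle values below are hypothetical placeholders, not measured bagasse parameters:

```python
import math

def shear_strength(sigma_n, cohesion, friction_angle_deg):
    # Mohr-Coulomb failure criterion: tau_f = c + sigma_n * tan(phi)
    return cohesion + sigma_n * math.tan(math.radians(friction_angle_deg))

# hypothetical normal stress (kPa), cohesion (kPa) and friction angle (deg)
tau = shear_strength(sigma_n=100.0, cohesion=10.0, friction_angle_deg=30.0)
```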
Abstract:
In recent years, ocean scientists have started to employ many new forms of technology as integral pieces in oceanographic data collection for the study and prediction of complex and dynamic ocean phenomena. One area of technological advancement in ocean sampling is the use of Autonomous Underwater Vehicles (AUVs) as mobile sensor platforms. Currently, most AUV deployments execute a lawnmower-type pattern or repeated transects for surveys and sampling missions. An advantage of these missions is that the regularity of the trajectory design generally makes it easier to extract the exact path of the vehicle via post-processing. However, if the deployment region for the pattern is poorly selected, the AUV can entirely miss collecting data during an event of specific interest. Here, we consider an innovative technology toolchain to assist in determining the deployment location and executed paths for AUVs to maximize scientific information gain about dynamically evolving ocean phenomena. In particular, we provide an assessment of computed paths based on ocean model predictions designed to put AUVs in the right place at the right time to gather data related to the understanding of algal and phytoplankton blooms.
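Choosing where to deploy from a model forecast can be caricatured as picking the transect with the highest predicted signal. The grid values and the gain proxy below are invented for illustration; the paper's toolchain uses full ocean-model predictions rather than a static grid:

```python
# toy ocean-model forecast: predicted chlorophyll concentration per grid cell
forecast = [
    [0.1, 0.3, 0.2, 0.1],
    [0.2, 0.9, 0.7, 0.2],   # a predicted bloom sits along this row
    [0.1, 0.4, 0.8, 0.3],
]

def transect_gain(grid, row):
    # crude information-gain proxy: total predicted concentration
    # along one east-west lawnmower transect
    return sum(grid[row])

best_row = max(range(len(forecast)), key=lambda r: transect_gain(forecast, r))
```

Deploying the lawnmower pattern along `best_row` places the AUV where the model expects the bloom, rather than in an arbitrarily chosen region.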
Abstract:
This paper presents a general methodology for learning articulated motions that, despite having non-linear correlations, are cyclical and have a defined pattern of behavior. Using conventional algorithms to extract features from images, a Bayesian classifier is applied to cluster and classify features of the moving object. Clusters are then associated in different frames, and structure learning algorithms for Bayesian networks are used to recover the structure of the motion. This framework is applied to human gait analysis and tracking, but applications include any coordinated movement, such as multi-robot behavior analysis.
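The clustering/classification step can be sketched with a one-dimensional maximum-likelihood Gaussian classifier (the cluster names, means and variances are invented; the paper's features are multi-dimensional image descriptors and the classifier is part of a larger Bayesian framework):

```python
import math

# assumed per-cluster Gaussian parameters: (mean, variance)
clusters = {"torso": (0.8, 0.05), "leg": (0.2, 0.05)}

def log_likelihood(x, mean, var):
    # log density of a 1-D Gaussian
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify(x):
    # maximum-likelihood assignment under equal cluster priors
    return max(clusters, key=lambda c: log_likelihood(x, *clusters[c]))
```

Associating such cluster labels across frames is what allows the structure-learning stage to recover the motion's dependency graph.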
Abstract:
This paper presents an overview of the experiments conducted using the Hybrid Clustering of XML documents using Constraints (HCXC) method for the clustering task in the INEX 2009 XML Mining track. This technique utilises frequent subtrees generated from the structure to extract the content for clustering the XML documents. It also presents the experimental study using several data representations such as the structure-only, content-only and using both the structure and the content of XML documents for the purpose of clustering them. Unlike previous years, this year the XML documents were marked up using the Wiki tags and contain categories derived by using the YAGO ontology. This paper also presents the results of studying the effect of these tags on XML clustering using the HCXC method.
Abstract:
For many decades, correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence, which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond the autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most biomedical signals are non-linear, non-stationary and non-Gaussian in nature, and therefore it can be more advantageous to analyze them with HOS compared to the use of second order correlations and power spectra. In this paper, we discuss the application of HOS to different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal, and applications to other signals are reviewed.
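A minimal sketch of one higher-order spectrum: the direct bispectrum estimate B(k1, k2) = X(k1) X(k2) X*(k1+k2), evaluated on a synthetic signal exhibiting quadratic phase coupling (the signal, its length and the frequency bins are illustrative assumptions, not an HRV recording):

```python
import cmath
import math

N = 64
k1, k2 = 6, 10
# quadratically phase-coupled signal: components at bins k1, k2 and k1 + k2
x = [math.cos(2 * math.pi * k1 * t / N)
     + math.cos(2 * math.pi * k2 * t / N)
     + math.cos(2 * math.pi * (k1 + k2) * t / N) for t in range(N)]

def dft(sig):
    # plain discrete Fourier transform (O(n^2), fine for a sketch)
    n = len(sig)
    return [sum(sig[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

X = dft(x)
# bispectrum at one bifrequency: B(k1, k2) = X[k1] * X[k2] * conj(X[k1 + k2])
B_coupled = X[k1] * X[k2] * X[k1 + k2].conjugate()   # large: coupling present
B_uncoupled = X[7] * X[10] * X[17].conjugate()       # ~0: no such components
```

The power spectrum alone cannot distinguish coupled components from independent ones at the same frequencies; it is the bispectrum's sensitivity to phase relations that detects the coupling.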
Abstract:
Road surface macro-texture is an indicator used to determine the skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest in image processing techniques as a quantifier of macro-texture has arisen, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on pictures obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals from a height of 80 cm above the ground. The results obtained from image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R2) exceeding 0.8 are obtained when up to 10% of outliers are removed.
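The autocorrelation-based quantifier can be illustrated on a synthetic 1-D surface profile (the paper works on 2-D pavement images; the profile below, and the use of the 1/e correlation length as a proxy for texture coarseness, are assumptions for this sketch):

```python
import math

N = 256
# synthetic profile: coarse texture (period 32 samples) plus a fine ripple
profile = [math.sin(2 * math.pi * 8 * i / N)
           + 0.3 * math.sin(2 * math.pi * 40 * i / N) for i in range(N)]

mean = sum(profile) / N
dev = [p - mean for p in profile]
var = sum(d * d for d in dev) / N

def autocorr(lag):
    # biased, normalised sample autocorrelation
    return sum(dev[i] * dev[i + lag] for i in range(N - lag)) / (N * var)

# correlation length: first lag where the autocorrelation drops below 1/e,
# a simple proxy for texture coarseness
corr_len = next(l for l in range(1, N) if autocorr(l) < 1 / math.e)
```

A coarser texture decorrelates more slowly, giving a longer correlation length; such a statistic can then be regressed against sensor-measured texture depth.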
Abstract:
A novel and comprehensive testing approach to examine the performance of gross pollutant traps (GPTs) was developed. A proprietary GPT with internal screens for capturing gross pollutants—organic matter and anthropogenic litter—was used as a case study. This work is the first investigation of its kind and provides valuable practical information for the design, selection and operation of GPTs and also the management of street waste in an urban environment. It used a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The results showed that the GPT operated efficiently until at least 68% of the screens were blocked, particularly at high flow rates. At lower flow rates, the high capture/retention performance trend was reversed. It was also found that a raised inlet GPT offered a better capture/retention performance. This finding indicates that cleaning operations could be more effectively planned in conjunction with the deterioration in the GPT's capture/retention performance.
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
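The final aggregation step can be caricatured as follows; the metric names, their values and the unweighted-mean aggregation are all hypothetical stand-ins for the model's actual hierarchy of metrics and mappings:

```python
# illustrative low-level metric scores for two versions of a program,
# each normalised to [0, 1] with higher meaning more secure
metrics_v1 = {"attribute_encapsulation": 0.6,
              "classified_write_exposure": 0.4,
              "least_privilege": 0.5}
metrics_v2 = {"attribute_encapsulation": 0.8,
              "classified_write_exposure": 0.7,
              "least_privilege": 0.6}

def security_index(metrics):
    # collapse the metric vector into a single comparable index
    return sum(metrics.values()) / len(metrics)

# versions of the same program can now be ranked for relative security
```

The point of the single index is exactly this comparability: two versions of one program, or two programs for the same task, reduce to numbers that can be ordered.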