986 results for Camera parameters


Relevance: 20.00%

Abstract:

Previous work on pattern-forming dynamics of team sports has investigated sub-phases of basketball and rugby union by focussing on one-versus-one (1v1) attacker-defender dyads. This body of work has identified the role of candidate control parameters, interpersonal distance and relative velocity, in predicting the outcomes of team player interactions. These two control parameters have been described as functioning in a nested relationship where relative velocity between players comes to the fore within a critical range of interpersonal distance. The critical influence of constraints on the intentionality of player behaviour has also been identified through the study of 1v1 attacker-defender dyads. This thesis draws from previous work adopting an ecological dynamics approach, which encompasses both Dynamical Systems Theory and Ecological Psychology concepts, to describe attacker-defender interactions in 1v1 dyads in association football. Twelve male youth association football players (average age 15.3 ± 0.5 yrs) performed as both attackers and defenders in 1v1 dyads in three field positions in an experimental manipulation of the proximity to goal and the role of players. Player and ball motion was tracked using TACTO 8.0 software (Fernandes & Caixinha, 2003) to produce two-dimensional (2D) trajectories of players and the ball on the ground. Significant differences were found for player-to-ball interactions depending on proximity to goal manipulations, indicating how key reference points in the environment such as the location of the goal may act as a constraint that shapes decision-making behaviour. Results also revealed that interpersonal distance and relative velocity alone were insufficient for accurately predicting the outcome of a dyad in association football. 
Instead, combined values of interpersonal distance, ball-to-defender distance, attacker-to-ball distance, attacker-to-ball relative velocity and relative angles were found to indicate the state of dyad outcomes.
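The combined state variables described above can be computed directly from the 2D trajectories. The sketch below is a minimal illustration with hypothetical coordinates and function names, not the thesis's actual processing pipeline:

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points on the pitch."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dyad_state(attacker, defender, ball, prev_attacker, prev_ball, dt):
    """Candidate state variables for one frame of a 1v1 dyad.

    Points are (x, y) ground-plane coordinates; dt is the frame interval
    in seconds.  Attacker-to-ball relative velocity is approximated by a
    finite difference of the attacker-to-ball distance between frames.
    """
    d_now = dist(attacker, ball)
    d_prev = dist(prev_attacker, prev_ball)
    return {
        "interpersonal_distance": dist(attacker, defender),
        "ball_to_defender": dist(ball, defender),
        "attacker_to_ball": d_now,
        "attacker_ball_rel_velocity": (d_now - d_prev) / dt,
    }

state = dyad_state(attacker=(2.0, 3.0), defender=(5.0, 7.0), ball=(2.5, 3.0),
                   prev_attacker=(1.5, 3.0), prev_ball=(2.5, 3.0), dt=0.04)
print(state["interpersonal_distance"])  # 5.0
```

Relative angles would follow the same pattern, using `math.atan2` on the coordinate differences.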

Relevance: 20.00%

Abstract:

Road surface macrotexture is identified as one of the factors contributing to the surface's skid resistance. Existing methods of quantifying surface macrotexture, such as the sand patch test and the laser profilometer test, are either expensive or intrusive, requiring traffic control. High-resolution cameras have made it possible to acquire good-quality images of roads for automated analysis of texture depth. In this paper, a granulometric method based on image processing is proposed to estimate the coarseness distribution of road surface texture from edge profiles. More than 1300 images were acquired from two different sites, extending to a total of 2.96 km. The images were acquired using camera orientations of 60 and 90 degrees. The road surface is modeled as a texture of particles, and the size distribution of these particles is obtained from chord lengths across edge boundaries. The mean size from each distribution is compared with the sensor-measured texture depth obtained using a laser profilometer. By tuning the edge detector parameters, a coefficient of determination of up to R2 = 0.94 between the proposed method and the laser profilometer method was obtained. The high correlation is also confirmed by robust calibration parameters that enable the method to be used on unseen data, once it has been calibrated on road surface data with similar surface characteristics and under similar imaging conditions.
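The chord-length idea can be illustrated with a small sketch. Assuming a binarised edge image, particle size is approximated by the spacing between successive edge pixels along each row; the function names and data below are illustrative, not the paper's implementation:

```python
def chord_lengths(edge_row):
    """Distances between successive edge pixels along one image row
    (1 = edge pixel, 0 = background)."""
    positions = [i for i, v in enumerate(edge_row) if v]
    return [b - a for a, b in zip(positions, positions[1:])]

def mean_particle_size(edge_rows):
    """Mean chord length over all rows -- a simple proxy for the
    coarseness (texture depth) of the surface."""
    chords = [c for row in edge_rows for c in chord_lengths(row)]
    return sum(chords) / len(chords) if chords else 0.0

# Toy 6-pixel rows of a binarised edge image.
rows = [[1, 0, 0, 1, 0, 1],
        [0, 1, 0, 0, 1, 0]]
print(chord_lengths(rows[0]))              # [3, 2]
print(round(mean_particle_size(rows), 3))  # 2.667
```

In the paper, the distribution of these chord lengths (rather than just the mean) characterises the surface, and the edge detector's parameters control the scale at which chords are measured.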

Relevance: 20.00%

Abstract:

Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined, which are predominantly coarse or blob-like. The set of features that characterise particulate textures is different from that which characterises classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to the more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for the computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis.
Results from the application of the developed algorithm to road surface macro-texture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination R2 exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
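The coefficient of determination used to compare the imaging estimates against the laser profilometer SMTD values is the standard R2 statistic. A minimal sketch with hypothetical data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between reference and estimated values."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical texture depths: laser profilometer reference vs imaging estimate.
smtd = [1.0, 2.0, 3.0, 4.0]
estimated = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(smtd, estimated), 4))  # 0.98
```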

Relevance: 20.00%

Abstract:

CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment such as an airport, where there are hundreds if not thousands of people, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people in order to calculate operational measures such as the time taken to travel from point to point.
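One plausible reading of the "average soft biometric" is that a person's distinctiveness is measured as their distance from the population-average feature vector. The sketch below is an assumption-laden illustration, not the paper's actual model:

```python
def average_biometric(population):
    """Element-wise mean of the population's soft-biometric feature vectors."""
    n = len(population)
    return [sum(person[i] for person in population) / n
            for i in range(len(population[0]))]

def distinctiveness(person, avg):
    """Euclidean distance from the population average; larger = more distinct."""
    return sum((a - b) ** 2 for a, b in zip(person, avg)) ** 0.5

def select_distinct(population, threshold):
    """Indices of people distinct enough to track across a sparse network."""
    avg = average_biometric(population)
    return [i for i, person in enumerate(population)
            if distinctiveness(person, avg) > threshold]

# Toy 2-D feature vectors (e.g. clothing-colour descriptors; illustrative only).
people = [[0.0, 0.0], [0.0, 0.0], [10.0, 10.0]]
print(select_distinct(people, threshold=5.0))  # [2]
```

Only the people flagged as distinct would then be matched across disjoint camera views to estimate point-to-point travel times.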

Relevance: 20.00%

Abstract:

Micro aerial vehicles (MAVs) are a rapidly growing area of research and development in robotics. For autonomous robot operations, localization has typically been calculated using GPS, external camera arrays, or onboard range or vision sensing. In cluttered indoor or outdoor environments, onboard sensing is the only viable option. In this paper we present an appearance-based approach to visual SLAM on a flying MAV using only low-quality vision. Our approach consists of a visual place recognition algorithm that operates on 1000-pixel images, a lightweight visual odometry algorithm, and a visual expectation algorithm that improves the recall of place sequences and the precision with which they are recalled as the robot flies along a similar path. Using data gathered from outdoor datasets, we show that the system is able to perform visual recognition with low-quality, intermittent visual sensory data. By combining the visual algorithms with the RatSLAM system, we also demonstrate how the algorithms enable successful SLAM.
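Appearance-based place recognition on such tiny images is often done by direct template comparison. A hedged sketch using the sum of absolute differences (the paper's actual matching algorithm may differ):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length intensity vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_match(query, templates, threshold):
    """Index of the most similar stored place template, or None when no
    template is close enough (i.e. the query looks like a new place)."""
    scores = [sad(query, t) for t in templates]
    best = min(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] <= threshold else None

# Toy 4-pixel "images" (real templates would have ~1000 pixels).
places = [[0, 0, 0, 0], [90, 100, 110, 100]]
print(best_match([92, 99, 108, 101], places, threshold=20))    # 1
print(best_match([200, 200, 200, 200], places, threshold=20))  # None
```

A visual expectation mechanism would bias the search toward templates adjacent to the last recognised place, improving recall along repeated flight paths.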

Relevance: 20.00%

Abstract:

Zeolite-based technology can provide a cost-effective solution for stormwater treatment for the removal of toxic heavy metals under the increasing demand for safe water from alternative sources. This paper reviews the currently available knowledge relating to the effect of properties of zeolites, such as pore size, surface area and Si:Al ratio, and the physico-chemical conditions of the system, such as pH, temperature, initial metal concentration and zeolite concentration, on heavy metal removal performance. The primary aims are to consolidate available knowledge and to identify knowledge gaps. It was established that an in-depth understanding of operational issues such as diffusion of metal ions into the zeolite pore structure, pore clogging, zeolite surface coverage by particulates in stormwater, and the effect of pH on stormwater quality in the presence of zeolites is essential for developing a zeolite-based technology for the treatment of polluted stormwater. The optimum zeolite concentration to treat typical volumes of stormwater and initial heavy metal concentrations in stormwater should also be considered as operational issues in this regard. Additionally, leaching of aluminium and sodium ions from the zeolite structure into solution was identified as a key issue requiring further research in the effort to develop cost-effective solutions for the removal of heavy metals from stormwater.

Relevance: 20.00%

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or as a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is to be carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current.
Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the renowned and advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level in the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to that of a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are improved owing to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked well by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in the voltage at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
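As an illustration of the estimation problem addressed throughout the thesis, a Least Error Square fit can recover the amplitude and phase of a sinusoid sampled at a known frequency. The sketch below is a generic LES formulation, not the thesis's enhanced version:

```python
import math

def les_estimate(samples, freq, fs):
    """Least Error Square fit of s[n] = a*cos(w*n/fs) + b*sin(w*n/fs) at a
    known frequency.  Returns (amplitude, phase) of M*cos(w*t + phi)."""
    w = 2.0 * math.pi * freq
    basis = [(math.cos(w * n / fs), math.sin(w * n / fs))
             for n in range(len(samples))]
    # Solve the 2x2 normal equations of the least-squares problem.
    scc = sum(c * c for c, s in basis)
    scs = sum(c * s for c, s in basis)
    sss = sum(s * s for c, s in basis)
    yc = sum(y * c for y, (c, s) in zip(samples, basis))
    ys = sum(y * s for y, (c, s) in zip(samples, basis))
    det = scc * sss - scs * scs
    a = (yc * sss - ys * scs) / det
    b = (ys * scc - yc * scs) / det
    return math.hypot(a, b), math.atan2(-b, a)

fs, f = 1000.0, 50.0  # 50 Hz signal sampled at 1 kHz
samples = [2.0 * math.cos(2.0 * math.pi * f * n / fs + 0.5) for n in range(100)]
amp, phase = les_estimate(samples, f, fs)
print(round(amp, 6), round(phase, 6))  # 2.0 0.5
```

In the thesis, estimates of this kind feed the generation of DG reference currents; the Kalman-filter variants additionally track the frequency itself and cope with noise and harmonics.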

Relevance: 20.00%

Abstract:

Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood, and will often ignore any uncertainty around estimated values. This can be problematic, however, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an Approximate Bayesian Computation (ABC) method for the estimation of the parameters of a stochastic cellular automaton (CA). We use as an example a CA constructed to simulate a range expansion such as might occur after a biological invasion, making parameter estimates using only count data such as could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, with accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long-distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but has until now proved difficult to estimate accurately.
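The core ABC rejection loop can be sketched as follows, with a toy stochastic growth model standing in for the invasion CA (the model, prior and summary statistic here are illustrative assumptions, not the paper's):

```python
import random

def simulate(rate, steps, seed):
    """Toy stochastic growth model standing in for the invasion CA: each
    step the population grows by a noisy increment with mean rate*count."""
    rng = random.Random(seed)
    count = 10
    for _ in range(steps):
        count += max(0, round(rng.gauss(rate * count, count ** 0.5)))
    return count

def abc_rejection(observed_final, n_draws, tolerance, seed=0):
    """ABC rejection: draw the growth rate from a Uniform(0, 1) prior,
    simulate, and keep draws whose summary statistic (final count) lies
    within `tolerance` of the observed value."""
    rng = random.Random(seed)
    accepted = []
    for i in range(n_draws):
        rate = rng.uniform(0.0, 1.0)
        if abs(simulate(rate, steps=10, seed=i) - observed_final) <= tolerance:
            accepted.append(rate)
    return accepted

observed = simulate(rate=0.3, steps=10, seed=42)
posterior = abc_rejection(observed, n_draws=500, tolerance=0.25 * observed)
print(len(posterior), "draws accepted")
```

The accepted draws approximate the posterior of the growth rate; because no likelihood is ever evaluated, the same loop works for models where the likelihood is intractable.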

Relevance: 20.00%

Abstract:

Twin studies offer the opportunity to determine the relative contribution of genes versus environment in traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygotic (MZ) twins with identical genetic make-up. We investigate whether using twins as compared to a control population reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influences. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
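Intra-pair correlation of a structural measure can be illustrated with a plain Pearson correlation computed across twin pairs (the data below are hypothetical regional volumes, not the study's measurements):

```python
def pearson(xs, ys):
    """Pearson correlation between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical regional volumes (cm^3): twin 1 vs twin 2 of each MZ pair.
twin_1 = [10.2, 11.5, 9.8, 12.1, 10.9]
twin_2 = [10.4, 11.2, 9.9, 12.3, 10.6]
r = pearson(twin_1, twin_2)
```

A higher `r` in MZ pairs than in matched control pairs is evidence of genetic influence on that measure.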

Relevance: 20.00%

Abstract:

Purpose: Experimental measurements have been made to investigate the meaning of the change in voltage for the pulsed gas metal arc welding (GMAW-P) process operating under different drop transfer modes. Design/methodology/approach: Welding experiments with different values of pulsing parameters, together with simultaneous recording of high-speed camera pictures and welding signals (such as current and voltage), were used to identify different drop transfer modes in GMAW-P. The investigation is based on the synchronization of the welding signals and the high-speed camera to study the behaviour of the voltage signal under different drop transfer modes. Findings: The results reveal that the welding arc is significantly affected by molten droplet detachment. In fact, the results indicate that the sudden increase and drop in voltage just before and after drop detachment can be used to characterize the voltage behaviour of the different drop transfer modes in GMAW-P. Research limitations/implications: The results show that the voltage signal carries rich information about the different drop transfers occurring in GMAW-P; hence it is possible to detect different drop transfer modes. Future work should concentrate on the development of filters for the detection of different drop transfer modes. Originality/value: Determination of the drop transfer mode in GMAW-P is crucial for the appropriate selection of pulse welding parameters. As a change in drop transfer mode results in poor weld quality in GMAW-P, understanding the voltage behaviour of the different drop transfer modes will be useful for estimating the working parameters and ensuring stable GMAW-P. However, in the case of GMAW-P hardly any attempt has been made to analyse the behaviour of the voltage signal for different drop transfer modes. This paper analyses the voltage signal behaviour of the different drop transfer modes in GMAW-P.
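The reported voltage signature, a sudden rise just before detachment followed by a sharp drop just after, suggests a simple peak-based detector. The sketch below is illustrative only; the thresholds and signal shape are assumptions, not values from the paper:

```python
def detect_detachments(voltage, rise_threshold, drop_threshold):
    """Flag sample indices where the voltage first rises sharply and then
    falls sharply on the next step -- the signature reported for droplet
    detachment in GMAW-P."""
    events = []
    for i in range(1, len(voltage) - 1):
        rise = voltage[i] - voltage[i - 1]
        drop = voltage[i] - voltage[i + 1]
        if rise >= rise_threshold and drop >= drop_threshold:
            events.append(i)
    return events

# Hypothetical arc-voltage samples (V) around two detachment events.
v = [24, 24, 25, 31, 24, 24, 25, 30, 23, 24]
print(detect_detachments(v, rise_threshold=4, drop_threshold=4))  # [3, 7]
```

Counting such events per pulse cycle would then distinguish, say, one-drop-per-pulse from multiple-drop transfer.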

Relevance: 20.00%

Abstract:

The 31st TTRA conference was held in California's San Fernando Valley, home of Hollywood and Burbank's movie and television studios. The twin themes of Hollywood and the new Millennium promised and delivered "something old, yet something new". The meeting offered a historical summary, not only of the year in review but also of many features of travel research since the first literature in the field appeared in the 1970s. The millennium theme also set the scene for some stimulating and forward-thinking discussions. The Hollywood location offered an opportunity to ponder the value of movie-induced tourism for Los Angeles, at a time when Hollywood Boulevard was in the midst of a much-needed redevelopment programme. Hollywood Chamber of Commerce speaker Oscar Arslanian acknowledged that the face of the famous district had become tired, and that its ability to continue to attract visitors in the future lay in redeveloping its past heritage. In line with the Hollywood theme, a feature of the conference was a series of six special sessions with "Stars of Travel Research". These sessions featured Clare Gunn, Stanley Plog, Charles Goeldner, John Hunt, Brent Ritchie, Geoffrey Crouch, Peter Williams, Douglas Frechtling, Turgut Var, Robert Christie-Mill and John Crotts. Delegates were indeed privileged to hear from many of the pioneers of tourism research. Clare Gunn, Charles Goeldner, Turgut Var and Stanley Plog, for example, traced the history of different aspects of the tourism literature and, in line with the millennium theme, offered some thought-provoking discussion of the future challenges facing tourism. These included the commoditisation of airlines and destinations, airport and traffic congestion, responsibility for environmental sustainability, and the looming burst of the baby-boomer bubble. Included in the conference proceedings are four papers presented by five of the "Stars".
Brent Ritchie and Geoffrey Crouch discuss the critical success factors for destinations, Clare Gunn shares his concerns about tourism being a smokestack industry, Doug Frechtling provides forecasts of outbound travel from 20 countries, and Charles Goeldner, who has attended all 31 TTRA conferences, reflects on the changes that have taken place in tourism research over 35 years...

Relevance: 20.00%

Abstract:

Colour is one of the most important parameters of sugar quality, and its presence in raw sugar plays a key role in the marketing strategy of sugar industries worldwide. This study investigated the degradation of a mixture of colour precursors using the Fenton oxidation process. These colour precursors are caffeic acid, p-coumaric acid and ferulic acid, which are present in cane juice. Results showed that with a Fe(II) to H2O2 molar ratio of 1:15 in an aqueous system at 25 °C, 77% of the total phenolic acid content was removed at pH 4.72. However, in a synthetic juice solution which contained 13 mass % sucrose (35 °C, pH 5.4), only 60% of the total phenolic acid content was removed.

Relevance: 20.00%

Abstract:

Motorcycles are particularly vulnerable in right-angle crashes at signalized intersections. The objective of this study is to explore how variations in roadway characteristics, environmental factors, traffic factors, maneuver types, human factors and driver demographics influence the right-angle crash vulnerability of motorcycles at intersections. The problem is modeled using a mixed logit model with a binary choice formulation to differentiate how an at-fault vehicle collides with a not-at-fault motorcycle in comparison to other collision types. The mixed logit formulation allows randomness in the parameters and hence takes into account the underlying heterogeneities potentially inherent in driver behavior and other unobserved variables. A likelihood ratio test reveals that the mixed logit model is indeed better than the standard logit model. Night-time riding shows a positive association with the vulnerability of motorcyclists. Moreover, motorcyclists are particularly vulnerable on single-lane roads, on the curb and median lanes of multi-lane roads, and on one-way and two-way roads relative to divided highways. Drivers who deliberately run red lights, as well as those who are careless towards motorcyclists, especially when making turns at intersections, increase the vulnerability of motorcyclists. Drivers appear more restrained when there is a passenger on board, and this decreases the crash potential with motorcyclists. The presence of red light cameras also significantly decreases the right-angle crash vulnerability of motorcyclists. The findings of this study should be helpful in developing more targeted countermeasures for traffic enforcement, driver/rider training and/or education, and safety awareness programs to reduce the vulnerability of motorcyclists.
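The likelihood ratio test comparing the standard and mixed logit models can be sketched as follows. The log-likelihood values are hypothetical; with one random coefficient the test has one degree of freedom:

```python
import math

def likelihood_ratio_test(ll_restricted, ll_full):
    """LR test of a standard logit (restricted) against a mixed logit with
    one extra random-coefficient standard deviation (1 degree of freedom).
    Returns the test statistic and its chi-square(1) p-value."""
    stat = 2.0 * (ll_full - ll_restricted)
    # Chi-square(1) survival function: P(X > x) = erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical log-likelihoods at convergence.
stat, p = likelihood_ratio_test(ll_restricted=-520.4, ll_full=-512.1)
print(round(stat, 1), p < 0.05)  # 16.6 True
```

A small p-value rejects the standard logit in favour of the mixed logit, i.e. the random-parameter heterogeneity is statistically supported.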

Relevance: 20.00%

Abstract:

Changing sodium intake from 70 to 200 mmol/day elevates blood pressure in normotensive volunteers by 6/4 mmHg. Older people, people with reduced renal function on a low-sodium diet, and people with a family history of hypertension are more likely to show this effect. The rise in blood pressure was associated with a fall in plasma volume, suggesting that plasma volume changes do not initiate hypertension. In normotensive individuals, the most common abnormality in membrane sodium transport induced by an extra sodium load was an increased permeability of the red cell to sodium. Some normotensive individuals also had an increase in the level of a plasma inhibitor of Na-K ATPase; these individuals also appeared to have a rise in blood pressure. Sodium intake and blood pressure are related. The relationship differs between individuals and is probably controlled by the genetically inherited capacity of the systems involved in membrane sodium transport.

Relevance: 20.00%

Abstract:

A total histological grade does not necessarily distinguish between different manifestations of cartilage damage or degeneration. An accurate and reliable histological assessment method is required to separate normal and pathological tissue within a joint during treatment of degenerative joint conditions, and to sub-classify the latter in meaningful ways. The Modified Mankin method may be adaptable for this purpose. We investigated how much detail may be lost by assigning one composite score/grade to represent the different degenerative components of the osteoarthritic condition. We used four ovine injury models (sham surgery, anterior cruciate ligament/medial collateral ligament instability, simulated anatomic anterior cruciate ligament reconstruction, and meniscal removal) to induce different degrees and potentially different 'types' (mechanisms) of osteoarthritis. Articular cartilage was systematically harvested, prepared for histological examination and graded in a blinded fashion using a Modified Mankin grading method. Results showed that the possible permutations of cartilage damage were far more varied than current composite grading is intended to capture. Of 1352 cartilage specimens graded, 234 different manifestations of potential histological damage were observed across 23 potential individual grades of the Modified Mankin grading method. The results presented here show that composite histological grading may conceal information that could potentially discern different stages or mechanisms of cartilage damage and degeneration in a sheep model. This approach may be applicable to other grading systems.