202 results for Direct Sum of Cyclics


Relevance: 100.00%

Abstract:

This study examines whether voluntary national governance codes have a significant effect on company disclosure practices. Two direct effects of the codes are expected: 1) an overall improvement in company disclosure practices, which is greater when the codes have a greater emphasis on disclosure; and 2) a leveling out of disclosure practices across companies (i.e., larger improvements in companies that were previously poorer disclosers) due to the codes' new comply-or-explain requirements. The codes are also expected to have an indirect effect on disclosure practices through their effect on company governance practices. The results show that the introduction of the codes in eight East Asian countries has been associated with lower analyst forecast error and a leveling out of disclosure practices across companies. The codes are also found to have an indirect effect on company disclosure practices through their effect on board independence. This study shows that a regulatory approach to improving disclosure practices is not always necessary. Voluntary national governance codes are found to have both a significant direct effect and a significant indirect effect on company disclosure practices. In addition, the results indicate that analysts in Asia do react to changes in disclosure practices, so there is an incentive for small companies and family-owned companies to further improve their disclosure practices.

Relevance: 100.00%

Abstract:

Pooled serum samples collected from 8132 residents in 2002/03 and 2004/05 were analyzed to assess human polybrominated diphenyl ether (PBDE) concentrations from specified strata of the Australian population. The strata were defined by age (0−4 years, 5−15 years, <16 years, 16−30 years, 31−45 years, 46−60 years, and >60 years); region; and gender. For both time periods, infants and older children had substantially higher PBDE concentrations than adults. For samples collected in 2004/05, the mean ± standard deviation ΣPBDE (sum of the homologue groups for the mono-, di-, tri-, tetra-, penta-, hexa-, hepta-, octa-, nona-, and deca-BDEs) concentrations for 0−4 and 5−15 years were 73 ± 7 and 29 ± 7 ng g⁻¹ lipid, respectively, while for all adults >16 years, the mean concentration was lower at 18 ± 5 ng g⁻¹ lipid. A similar trend was observed for the samples collected in 2002/03, with the mean ΣPBDE concentration for children <16 years being 28 ± 8 ng g⁻¹ lipid and for the adults >16 years, 15 ± 5 ng g⁻¹ lipid. No regional or gender specific differences were observed. Measured data were compared with a model that we developed to incorporate the primary known exposure pathways (food, air, dust, breast milk) and clearance (half-life) data. The model was used to predict PBDE concentration trends and indicated that the elevated concentrations in infants were primarily due to maternal transfer and breast milk consumption, with inhalation and ingestion of dust making a comparatively lower contribution.
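The exposure model is not specified in the abstract. As a rough, hypothetical illustration of the intake-clearance approach it describes, the sketch below uses a one-compartment model with first-order elimination; all numerical values are invented for demonstration only.

```python
import numpy as np

# Minimal one-compartment sketch (assumed form, not the authors' model):
# body burden B(t) under a constant daily intake I (ng/day) and first-order
# clearance with half-life t_half (days), i.e. dB/dt = I - k*B, k = ln2/t_half.

def body_burden(intake_ng_per_day, t_half_days, days, b0=0.0):
    k = np.log(2) / t_half_days
    t = np.arange(days + 1)
    # analytical solution of dB/dt = I - k*B with B(0) = b0
    return b0 * np.exp(-k * t) + (intake_ng_per_day / k) * (1 - np.exp(-k * t))

# Hypothetical numbers: an infant with breast-milk-dominated intake vs. an
# adult with lower dietary intake, both with a multi-year half-life.
infant = body_burden(intake_ng_per_day=100.0, t_half_days=3 * 365, days=2 * 365)
adult = body_burden(intake_ng_per_day=20.0, t_half_days=3 * 365, days=20 * 365)
print(infant[-1], adult[-1])  # burdens in ng; divide by body lipid mass for ng/g
```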

Relevance: 100.00%

Abstract:

Background: Polybrominated diphenyl ethers (PBDEs) are used as flame retardants in many products and have been detected in human samples worldwide. Limited data show that concentrations are elevated in young children. Objectives: We investigated the association between PBDEs and age with an emphasis on young children from Australia in 2006–2007. Methods: We collected human blood serum samples (n = 2,420), which we stratified by age and sex and pooled for analysis of PBDEs. Results: The sum of BDE-47, -99, -100, and -153 concentrations (Σ4PBDE) increased from 0–0.5 years (mean ± SD, 14 ± 3.4 ng/g lipid) to peak at 2.6–3 years (51 ± 36 ng/g lipid; p < 0.001) and then decreased until 31–45 years (9.9 ± 1.6 ng/g lipid). We observed no further significant decrease among ages 31–45, 45–60 (p = 0.964), or > 60 years (p = 0.894). The mean Σ4PBDE concentration in cord blood (24 ± 14 ng/g lipid) did not differ significantly from that in adult serum at ages 15–30 (p = 0.198) or 31–45 years (p = 0.140). We found no temporal trend when we compared the present results with Australian PBDE data from 2002–2005. PBDE concentrations were higher in males than in females; however, this difference reached statistical significance only for BDE-153 (p = 0.05). Conclusions: The observed peak concentration at 2.6–3 years of age is later than the period when breast-feeding is typically ceased. This suggests that in addition to the exposure via human milk, young children have higher exposure to these chemicals and/or a lower capacity to eliminate them. Key words: Australia, children, cord blood, human blood serum, PBDEs, polybrominated diphenyl ethers. Environ Health Perspect 117:1461–1465 (2009). doi:10.1289/ehp.0900596

Relevance: 100.00%

Abstract:

The (dis)orientation of thought in its encounter with art can be understood as the direct result of an encounter with indeterminacy as a lack in meaning. As an artist I am aware of how this indeterminacy impacts on the perceived value and authority of the artistic voice, and in particular its value as a research voice. This paper explores this indeterminacy of meaning as a profound and disturbing unknowing characteristic of the sublime, and argues for its value to advanced thought and to any methodological understanding of practice-led research. Lyotard described the sublime as an ‘understanding’ through which art and its associated practices may be able to resist an all too easy assimilation by the public as just a consumer commodity. His thought represents an attempt to understand, both politically and philosophically, art’s, and particularly abstract painting’s, affect as a state of profound and positive unknowing. To talk of the sublime in art is to speak of the suspension of any comfortable certainty in being and instead to engage with the real as a limit to meaning and knowing. It is to talk of the presentation of the unpresentable as a momentary but significant dissolution of representation. This understanding of the sublime is then further explored through the cultural phenomenon of the monochrome painting and applied to the work of two contemporary artists, Franz Erhard Walther and Günter Umberg. Initially the monochrome was understood as an attempt to go beyond traditional representation and present the unpresentable. In the one hundred years or so since that initial move this understanding has broadened. The monochrome now presents itself as a genre or even project within visual art, but it still has much to teach us. In the concretely abstract and performative artworks of Franz Erhard Walther and Günter Umberg, traces of this ambition remain, and their work can be seen to pose questions probing our understandings and experiences of artistic meaning, its value and the real.

Relevance: 100.00%

Abstract:

Several brain imaging studies have assumed that response conflict is present in Stroop tasks. However, this has not been demonstrated directly. We examined the time-course of stimulus and response conflict resolution in a numerical Stroop task by combining single-trial electromyography (EMG) and event-related brain potentials (ERP). EMG enabled the direct tracking of response conflict, and the peak latency of the P300 ERP wave was used to index stimulus conflict. In correctly answered trials of the incongruent condition, EMG detected robust incorrect response hand activation which appeared consistently in single trials. In 50–80% of the trials correct and incorrect response hand activation coincided temporally, while in 20–50% of the trials incorrect hand activation preceded correct hand activation. The EMG data provide robust direct evidence for response conflict. However, congruency effects also appeared in the peak latency of the P300 wave, which suggests that stimulus conflict also played a role in the Stroop paradigm. Findings are explained by the continuous flow model of information processing: partially processed task-irrelevant stimulus information can result in stimulus conflict and can prepare incorrect response activity. A robust congruency effect appeared in the amplitude of incongruent vs. congruent ERPs between 330 and 400 ms; this effect may be related to the activity of the anterior cingulate cortex.
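The abstract does not give the EMG onset-detection procedure; the following sketch shows one common threshold-based approach (all parameters hypothetical): rectify the signal, smooth it, and flag activation where it exceeds the pre-stimulus baseline by a set number of standard deviations. Applying it separately to the correct-hand and incorrect-hand channels yields the onset latencies whose ordering is reported above.

```python
import numpy as np

# Generic single-trial EMG onset detector (illustrative, not the authors'
# exact pipeline). Returns onset time in seconds, or None if no activation.
def emg_onset(emg, fs, baseline_ms=100, smooth_ms=10, n_sd=3.0):
    rectified = np.abs(emg - np.mean(emg))
    win = max(1, int(fs * smooth_ms / 1000))
    smoothed = np.convolve(rectified, np.ones(win) / win, mode="same")
    n_base = int(fs * baseline_ms / 1000)          # pre-stimulus samples
    threshold = smoothed[:n_base].mean() + n_sd * smoothed[:n_base].std()
    above = np.where(smoothed[n_base:] > threshold)[0]
    return None if above.size == 0 else (above[0] + n_base) / fs
```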

Relevance: 100.00%

Abstract:

The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long range planning issues. This thesis addresses the optimal long term capacity expansion of an interconnected power system. The aim of this study has been to derive a new, long term planning model which recognises the regional differences which exist in energy demand and which are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of additional power plants and transmission lines which are required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems, and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
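As a schematic illustration of the master/subproblem interaction described above (a toy, not the thesis's algorithm: the master here is brute-force enumeration rather than a mixed integer program, and no dual information is passed back), the sketch below generates trial build configurations, evaluates each with a closed-form dispatch subproblem, and prunes trials whose build cost alone already exceeds the incumbent upper bound. All plant data are invented.

```python
import itertools

DEMAND = 120.0  # MW of regional demand (hypothetical)
PLANTS = {"A": {"cap": 100.0, "build": 500.0, "run": 2.0},
          "B": {"cap": 80.0, "build": 300.0, "run": 3.0}}

def operate(config):
    """Continuous subproblem: cheapest dispatch; None if demand unmet."""
    built = sorted(config, key=lambda p: PLANTS[p]["run"])  # merit order
    remaining, cost = DEMAND, 0.0
    for p in built:
        used = min(remaining, PLANTS[p]["cap"])
        cost += used * PLANTS[p]["run"]
        remaining -= used
    return None if remaining > 1e-9 else cost

# "Master": propose trial configurations; keep the best feasible total cost.
best, upper = None, float("inf")
for config in itertools.chain.from_iterable(
        itertools.combinations(PLANTS, r) for r in range(len(PLANTS) + 1)):
    build_cost = sum(PLANTS[p]["build"] for p in config)
    if build_cost >= upper:      # lower bound already exceeds the incumbent
        continue
    op_cost = operate(config)
    if op_cost is not None and build_cost + op_cost < upper:
        best, upper = config, build_cost + op_cost
print(best, upper)  # -> ('A', 'B') 1060.0 for these made-up numbers
```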

Relevance: 100.00%

Abstract:

This paper introduces a novel technique to directly optimise the Figure of Merit (FOM) for phonetic spoken term detection. The FOM is a popular measure of STD accuracy, making it an ideal candidate for use as an objective function. A simple linear model is introduced to transform the phone log-posterior probabilities output by a phone classifier to produce enhanced log-posterior features that are more suitable for the STD task. Direct optimisation of the FOM is then performed by training the parameters of this model using a non-linear gradient descent algorithm. Substantial FOM improvements of 11% relative are achieved on held-out evaluation data, demonstrating the generalisability of the approach.
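The paper's exact model and FOM gradient are not reproduced here. The sketch below illustrates the general idea on synthetic data with a sigmoid-smoothed pairwise ranking objective, a common differentiable stand-in for rank-based metrics such as the FOM, and it collapses the linear enhancement and scoring steps into a single weight vector for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X_hits = rng.normal(0.5, 1.0, (200, 40))   # synthetic true-hit features
X_fas = rng.normal(-0.5, 1.0, (800, 40))   # synthetic false-alarm features
w = np.zeros(40)                           # linear model parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Objective: mean over all (hit, false alarm) pairs of sigmoid(score margin);
# pushing hit scores above false-alarm scores approximates raising the FOM.
lr = 0.05
for step in range(200):
    m = (X_hits @ w)[:, None] - (X_fas @ w)[None, :]   # pairwise margins
    g = sigmoid(m) * (1.0 - sigmoid(m))                # d(sigmoid)/d(margin)
    grad = (g.sum(axis=1) @ X_hits - g.sum(axis=0) @ X_fas) / g.size
    w += lr * grad                                     # gradient ascent

final = sigmoid((X_hits @ w)[:, None] - (X_fas @ w)[None, :]).mean()
print(final)  # smoothed ranking score; rises from 0.5 toward 1 with training
```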

Relevance: 100.00%

Abstract:

This paper discusses the areawide Dynamic ROad traffic NoisE (DRONE) simulator, and its implementation as a tool for noise abatement policy evaluation. DRONE involves integrating a road traffic noise estimation model with a traffic simulator to estimate road traffic noise in urban networks. An integrated traffic simulation-noise estimation model provides an interface for direct input of traffic flow properties from simulation model to noise estimation model that in turn estimates the noise on a spatial and temporal scale. The output from DRONE is linked with a geographical information system for visual representation of noise levels in the form of noise contour maps.
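DRONE's emission equations are not given in the abstract; the sketch below only illustrates the simulation-to-noise interface, using a hypothetical relation of the common form L = a + b*log10(Q) + c*log10(v), where noise grows with the logarithms of link flow Q (veh/h) and mean speed v (km/h). Coefficients are invented.

```python
import math

# Hypothetical per-link noise emission (illustrative form and coefficients,
# not DRONE's actual model).
def segment_noise_dba(flow_veh_h, speed_km_h, a=40.0, b=10.0, c=20.0):
    return (a + b * math.log10(max(flow_veh_h, 1.0))
              + c * math.log10(max(speed_km_h, 1.0)))

# Per-link flow/speed as a traffic simulator might report them each interval:
for link, (q, v) in {"link_1": (1200, 60), "link_2": (300, 40)}.items():
    print(link, round(segment_noise_dba(q, v), 1), "dB(A)")
```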

Relevance: 100.00%

Abstract:

Landscape scale environmental gradients present variable spatial patterns and ecological processes caused by climate, topography and soil characteristics and, as such, offer candidate sites to study environmental change. Data are presented on the spatial pattern of dominant species, biomass, and carbon pools and the temporal pattern of fluxes across a transitional zone shifting from Great Basin Desert scrub, up through pinyon-juniper woodlands and into ponderosa pine forest, and the ecotones between each vegetation type. The mean annual temperature (MAT) difference across the gradient is approximately 3 degrees C from bottom to top (MAT 8.5-5.5) and annual precipitation averages from 320 to 530 mm/yr, respectively. The stems of the dominant woody vegetation approach a random spatial pattern across the entire gradient, while the canopy cover shows a clustered pattern. The size of the clusters increases with elevation according to available soil moisture, which in turn affects available nutrient resources. The total density of woody species declines with increasing soil moisture along the gradient, but total biomass increases. Belowground carbon and nutrient pools change from a heterogeneous to a homogeneous distribution on either side of the woodlands. Although temperature controls the seasonal patterns of carbon efflux from the soils, soil moisture appears to be the primary driving variable, but the response differs underneath the different dominant species. Similarly, decomposition of dominant litter occurs faster at the cooler and more moist sites, but differs within sites due to litter quality of the different species. The spatial pattern of these communities provides information on the direction of future changes. The ecological processes that we documented are not statistically different in the ecotones as compared to the adjoining communities, but do differ between sites above the woodland and those below it. We speculate that an increase in MAT will have a major impact on C pools and C sequestering and release processes in these semiarid landscapes. However, the impact will be primarily related to moisture availability rather than direct effects of an increase in temperature. (C) 1998 Elsevier Science B.V.

Relevance: 100.00%

Abstract:

Purpose – The purpose of this paper is to examine the role of three strategies - organisational, business and information system - in the post-implementation of technological innovations. The findings reported in the paper are that improvements in operational performance can only be achieved by aligning technological innovation effectiveness with operational effectiveness. Design/methodology/approach – A combination of qualitative and quantitative methods was used to apply a two-stage methodological approach. Unstructured and semi-structured interviews, based on the findings of the literature, were used to identify key factors used in the survey instrument design. Confirmatory factor analysis (CFA) was used to examine structural relationships between the set of observed variables and the set of continuous latent variables. Findings – Initial findings suggest that organisations looking for improvements in operational performance through adoption of technological innovations need to align these with the operational strategies of the firm. The impacts of operational effectiveness and technological innovation effectiveness are related directly and significantly to improved operational performance. Perception of an increase in operational effectiveness is positively and significantly correlated with improved operational performance. The findings suggest that technological innovation effectiveness is also positively correlated with improved operational performance. However, the study found that there is no direct influence of the three strategies - organisational, business and information systems (IS) - on improvement of operational performance. Improved operational performance is the result of interactions between the implementation of strategies and the related outcomes of both technological innovation and operational effectiveness. Practical implications – Some organisations are using technological innovations such as enterprise information systems to innovate through improvements in operational performance. However, they often focus strategically only on the effectiveness of technological innovation or on operational effectiveness. Such a focus will be detrimental to the enterprise in the long term. This research demonstrated that it is not possible to achieve maximum returns through technological innovations alone, as dimensions of operational effectiveness need to be aligned with technological innovations to improve operational performance. Originality/value – No single technological innovation implementation can deliver a sustained competitive advantage; rather, an advantage is obtained through the capacity of an organisation to exploit technological innovations’ functionality on a continuous basis. To achieve sustainable results, technology strategy must be aligned with organisational and operational strategies. This research proposes the key performance objectives and dimensions that organisations should focus on to achieve a strategic alignment. Research limitations/implications – The principal limitation of this study is that the findings are based on the investigation of a small sample size. There is a need to explore the appropriateness of the influence of scale prior to generalizing the results of this study.

Relevance: 100.00%

Abstract:

One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly due to war injuries, has been increasing. Typically the trauma amputee population, including war-related amputees, is otherwise healthy, active and desires to return to employment and a usual lifestyle. Consequently there is a growing need to restore long-term mobility and limb function to this population. Traditionally transfemoral amputees are provided with an artificial or prosthetic leg that consists of a fabricated socket, knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb. These include pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. A solution to this is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur and a second component connects between the implant and the prosthesis. A period of time is required to allow the implant to become fully attached to the bone, called osseointegration (OI), and be able to withstand applied load; then the prosthesis can be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and to complete the rehabilitation exercises, it can take up to twelve months after implant insertion for an amputee to be able to bear load and walk unaided. The long rehabilitation time is a significant disadvantage of TFOI and may be impeding the wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method were capable of determining the progression of TFOI and assessing when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure. Changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently experimental vibration analysis has been used in TFOI. However further work is needed to assess the potential of the technique and fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and use these models to investigate the feasibility of vibration analysis to detect the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models.
The femur-implant models, representing stages one to four of OI development, were excited and the modal parameters obtained over the range 0-5 kHz. The results indicated the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes. However, higher modes were able to track chronological changes in interface condition through changes in natural frequency, although no single modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three-dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model were found to match the experimental frequencies within 20%, and the FE and experimental mode shapes were similar. Therefore the FE model was shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter due to changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). Therefore the FE model provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression. The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time, and consequently it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Therefore further development of the modal analysis technique would require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
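For intuition about the frequency ranges involved (a closed-form check, not the thesis's FE model), the bending natural frequencies of a uniform cantilever beam follow f_n = (λ_n² / 2π)·√(EI / ρAL⁴), which shows how frequency depends on the stiffness and mass of the structure. All material and geometric values below are hypothetical stand-ins for a synthetic femur.

```python
import math

E = 16e9      # Pa, elastic modulus (assumed value)
rho = 1600.0  # kg/m^3, density (assumed value)
L = 0.45      # m, beam length (assumed value)
d = 0.025     # m, diameter of a solid circular cross-section (assumed value)

A = math.pi * d**2 / 4        # cross-sectional area
I = math.pi * d**4 / 64       # second moment of area
lams = [1.875, 4.694, 7.855]  # fixed-free (cantilever) eigenvalue roots

for n, lam in enumerate(lams, 1):
    f = (lam**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
    print(f"bending mode {n}: {f:.0f} Hz")  # ~55, ~342, ~958 Hz here
```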

Relevance: 100.00%

Abstract:

Particulate pollution has been widely recognised as an important risk factor to human health. In addition to increases in respiratory and cardiovascular morbidity associated with exposure to particulate matter (PM), WHO estimates that urban PM causes 0.8 million premature deaths globally and that 1.5 million people die prematurely from exposure to indoor smoke generated from the combustion of solid fuels. Despite the availability of a huge body of research, the underlying toxicological mechanisms by which particles induce adverse health effects are not yet entirely understood. Oxidative stress caused by generation of free radicals and related reactive oxygen species (ROS) at the sites of deposition has been proposed as a mechanism for many of the adverse health outcomes associated with exposure to PM. In addition to particle-induced generation of ROS in lung tissue cells, several recent studies have shown that particles may also contain ROS. As such, they present a direct cause of oxidative stress and related adverse health effects. Cellular responses to oxidative stress have been widely investigated using various cell exposure assays. However, for rapid screening of the oxidative potential of PM, less time-consuming and less expensive cell-free assays are needed. The main aim of this research project was to investigate the application of a novel profluorescent nitroxide probe, synthesised at QUT, as a rapid screening assay for the oxidative potential of PM. Considering that this was the first time a profluorescent nitroxide probe had been applied to investigating the oxidative stress potential of PM, the proof of concept regarding the detection of PM-derived ROS by such probes needed to be demonstrated and a sampling methodology needed to be developed. Sampling through an impinger containing profluorescent nitroxide solution was chosen as the means of particle collection, as it allowed particles to react with the profluorescent nitroxide probe during sampling, thereby avoiding chemical changes resulting from delays between the sampling and the analysis of the PM. Among several profluorescent nitroxide probes available at QUT, bis(phenylethynyl)anthracene-nitroxide (BPEAnit) was found to be the most suitable, mainly due to its relatively long excitation and emission wavelengths (λex = 430 nm; λem = 485 and 513 nm). These wavelengths are long enough to avoid overlap with the background fluorescence of light-absorbing compounds which may be present in PM (e.g. polycyclic aromatic hydrocarbons and their derivatives). Given that combustion, in general, is one of the major sources of ambient PM, this project aimed at gaining an insight into the oxidative stress potential of combustion-generated PM, namely cigarette smoke, diesel exhaust and wood smoke PM. During the course of this research project, it was demonstrated that the BPEAnit probe based assay is sufficiently sensitive and robust to be applied as a rapid screening test for PM-derived ROS detection. Considering that the same assay was applied to all three aerosol sources (i.e. cigarette smoke, diesel exhaust and wood smoke), the results presented in this thesis allow direct comparison of the oxidative potential measured for all three sources of PM. In summary, it was found that there was a substantial difference between the amounts of ROS per unit of PM mass (ROS concentration) for particles emitted by different combustion sources.
For example, particles from cigarette smoke were found to have up to 80 times less ROS per unit of mass than particles produced during logwood combustion. For both diesel and wood combustion it was demonstrated that the type of fuel significantly affects the oxidative potential of the particles emitted. Similarly, the operating conditions of the combustion source were also found to affect the oxidative potential of particulate emissions. Moreover, this project demonstrated a strong link between semivolatile (i.e. organic) species and ROS, and therefore clearly highlights the importance of semivolatile species in particle-induced toxicity.

Relevance: 100.00%

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
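As a concrete sketch of the census transform and Hamming-distance matching singled out above (illustrative only, not the report's implementation), the following computes 3x3 census codes and a winner-take-all disparity along one scanline.

```python
import numpy as np

def census_3x3(img):
    """8-bit census code per pixel: compare each pixel with its 3x3
    neighbours (borders wrap here; a real implementation would pad)."""
    code = np.zeros(img.shape, dtype=np.uint8)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code |= (shifted < img).astype(np.uint8) << bit
            bit += 1
    return code

def disparity_row(left_codes, right_codes, max_disp):
    """Winner-take-all disparity for one scanline of census codes,
    using the Hamming distance between codes as the matching cost."""
    disp = np.zeros(left_codes.shape[0], dtype=int)
    for x in range(left_codes.shape[0]):
        costs = [bin(int(left_codes[x]) ^ int(right_codes[x - d])).count("1")
                 for d in range(min(max_disp, x) + 1)]
        disp[x] = int(np.argmin(costs))
    return disp

# Synthetic check: shift an image 3 pixels; recovered disparities should be ~3.
left = np.random.default_rng(1).integers(0, 255, (8, 32)).astype(np.uint8)
right = np.roll(left, -3, axis=1)
print(disparity_row(census_3x3(left)[4], census_3x3(right)[4], max_disp=8))
```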