110 results for 3D object detection

at University of Queensland eSpace - Australia


Relevance: 100.00%

Abstract:

Choice of the operational frequency is one of the most critical parts of any radar design process. The parameters of radars for buried object detection (BOD) are very sensitive to both the carrier frequency and the ranging-signal bandwidth. Such radars operate in a specific propagation environment with strong frequency-dependent attenuation and, as a result, a short operational range. This fact dictates some features of the radar's parameters: a wideband signal to provide high range resolution (fractions of a meter) and a low carrier frequency (tens or hundreds of megahertz) for deeper penetration. The requirements of a wideband ranging signal and a low carrier frequency are partly in contradiction. As a result, low-frequency (LF) ultrawide-band (UWB) signals are used. The major goal of this paper is to examine the influence of the frequency-band choice on radar performance and to develop relevant methodologies for BOD radar design and optimization. Here, highly efficient continuous wave (CW) signals with advanced stepped frequency (SF) modulation are considered; however, the main conclusions can be applied to any kind of ranging signal.
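
As a rough illustration of the trade-off described above, the sketch below computes the in-soil range resolution c/(2·B·√εr) for a few carrier/bandwidth pairs together with a toy frequency-proportional attenuation figure. The permittivity and the attenuation coefficient are illustrative assumptions, not values from the paper.

```python
import math

C = 3e8  # speed of light in vacuum, m/s

def range_resolution(bandwidth_hz, rel_permittivity=9.0):
    """In-soil range resolution: delta_R = c / (2 * B * sqrt(eps_r))."""
    return C / (2.0 * bandwidth_hz * math.sqrt(rel_permittivity))

def one_way_loss_db(carrier_hz, depth_m, alpha_db_per_m_per_ghz=30.0):
    """Toy frequency-proportional soil loss; the coefficient is illustrative only."""
    return alpha_db_per_m_per_ghz * (carrier_hz / 1e9) * depth_m

for f0, bw in [(100e6, 80e6), (300e6, 200e6), (1e9, 800e6)]:
    print(f"f0 = {f0 / 1e6:5.0f} MHz, B = {bw / 1e6:5.0f} MHz: "
          f"resolution = {range_resolution(bw):.2f} m, "
          f"one-way loss at 1 m = {one_way_loss_db(f0, 1.0):.1f} dB")
```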

Relevance: 90.00%

Abstract:

Extraction and reconstruction of rectal wall structures from ultrasound images is helpful for surgeons in rectal clinical diagnosis and in the 3-D reconstruction of rectal structures. The primary task is to extract the boundary of the muscular layers of the rectal wall. However, due to the low SNR of ultrasound imaging and the thin muscular layer structure of the rectum, this boundary detection task remains a challenge. An active contour model is an effective high-level model that has been used successfully for object representation and recognition in many image-processing applications. We present a novel multigradient field active contour algorithm with an extended ability for multiple-object detection, which overcomes some limitations of ordinary active contour models ("snakes"). The core of the algorithm is the proposed multigradient vector fields, which replace the image forces in the kinetic function as alternative constraints on the deformation of the active contour, thereby partially solving the initialization limitation of active contours for rectal wall boundary detection. An adaptive expanding force is also added to the model to help the active contour pass through homogeneous regions in the image. The efficacy of the model is explained and tested on boundary detection for a ring-shaped image, a synthetic image, and an ultrasound image. The experimental results show that the proposed multigradient field active contour is feasible for multilayer boundary detection of the rectal wall.
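
The sketch below shows the general shape of one explicit active-contour update, combining internal smoothness forces, an external image-gradient force and an outward expansion term of the kind the adaptive expanding force plays here. It is a generic snake step under assumed parameter values, not the paper's multigradient formulation.

```python
import numpy as np
from scipy import ndimage

def snake_step(pts, image, alpha=0.1, beta=0.05, gamma=1.0, balloon=0.2):
    """One explicit update of a closed contour, pts shaped (N, 2) as (row, col)."""
    # External force: gradient of the gradient magnitude, so edges attract the contour.
    gmag = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=2.0)
    fy, fx = np.gradient(gmag)
    ext = np.stack([
        ndimage.map_coordinates(fy, [pts[:, 0], pts[:, 1]], order=1),
        ndimage.map_coordinates(fx, [pts[:, 0], pts[:, 1]], order=1),
    ], axis=1)
    # Internal forces: elasticity (second difference) and rigidity (fourth difference).
    prev, nxt = np.roll(pts, 1, axis=0), np.roll(pts, -1, axis=0)
    elastic = prev + nxt - 2.0 * pts
    rigid = -(np.roll(pts, 2, axis=0) - 4.0 * prev + 6.0 * pts
              - 4.0 * nxt + np.roll(pts, -2, axis=0))
    # Expansion ("balloon") force along a perpendicular to the local tangent,
    # standing in for the adaptive expanding force described in the abstract.
    tangent = nxt - prev
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    normal = normal / (np.linalg.norm(normal, axis=1, keepdims=True) + 1e-9)
    return pts + gamma * (alpha * elastic + beta * rigid + ext + balloon * normal)
```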

Relevance: 30.00%

Abstract:

Axial X-ray computed tomography (CT) scanning provides a convenient means of recording the three-dimensional form of soil structure. The technique has been used for nearly two decades, but initial development concentrated on qualitative description of images. More recently, increasing effort has been put into quantifying the geometry and topology of macropores likely to contribute to preferential flow in soils. Here we describe a novel technique for tracing connected macropores in CT scans. After object extraction, three-dimensional mathematical morphological filters are applied to quantify the reconstructed structure. These filters consist of sequences of so-called erosions and/or dilations of a 32-face structuring element to describe object distances and volumes of influence. The tracing and quantification methodologies were tested on a set of undisturbed soil cores collected in a Swiss pre-alpine meadow, where a new earthworm species (Aporrectodea nocturna) was accidentally introduced. Given the small number of samples analysed in this study, the results presented only illustrate the potential of the method to reconstruct and quantify macropores. Our results suggest that the introduction of the new species induced very limited change to the soil structure; for example, no difference in total macropore length or mean diameter was observed. However, in the zone colonised by the new species, individual macropores tended to have a longer average length, be more vertical, and be further apart at some depths. Overall, the approach proved well suited to the analysis of the three-dimensional architecture of macropores. It provides a framework for the analysis of complex structures, which are less satisfactorily observed and described using 2D imaging. (C) 2002 Elsevier Science B.V. All rights reserved.
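
As a simplified illustration of three-dimensional morphological filtering of a binary macropore volume, the sketch below applies openings of increasing structuring-element size with scipy.ndimage. The opening-based size distribution and the cross-shaped structuring element are stand-ins, not the paper's 32-face element or its distance and volume-of-influence measures.

```python
import numpy as np
from scipy import ndimage

def opening_size_distribution(pores, max_radius=3):
    """Voxels surviving a 3-D morphological opening of increasing radius."""
    counts = []
    for r in range(1, max_radius + 1):
        se = ndimage.iterate_structure(ndimage.generate_binary_structure(3, 1), r)
        opened = ndimage.binary_opening(pores, structure=se)
        counts.append((r, int(opened.sum())))
    return counts

# Toy volume: one 30-voxel-long vertical "macropore" with a 3 x 3 voxel cross-section.
vol = np.zeros((40, 20, 20), dtype=bool)
vol[5:35, 9:12, 9:12] = True
print(opening_size_distribution(vol))
```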

Relevance: 30.00%

Abstract:

Beyond the inherent technical challenges, current research into the three-dimensional surface correspondence problem is hampered by a lack of uniform terminology, an abundance of application-specific algorithms, and the absence of a consistent model for comparing existing approaches and developing new ones. This paper addresses these challenges by presenting a framework for analysing, comparing, developing, and implementing surface correspondence algorithms. The framework uses five distinct stages to establish correspondence between surfaces. It is general, encompassing a wide variety of existing techniques, and flexible, facilitating the synthesis of new correspondence algorithms. This paper presents a review of existing surface correspondence algorithms and shows how they fit into the correspondence framework. It also shows how the framework can be used to analyse and compare existing algorithms and to develop new algorithms using the framework's modular structure. Six algorithms, four existing and two new, are implemented using the framework. Each implemented algorithm is used to match a number of surface pairs. Results demonstrate that the framework implementations are faithful to the existing algorithms and that powerful new surface correspondence algorithms can be created. (C) 2004 Elsevier Inc. All rights reserved.
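
A minimal sketch of how such a staged, modular framework could be expressed in code is given below; the stage shown is a placeholder, since the abstract does not name the five stages.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

Stage = Callable[[Dict[str, Any]], Dict[str, Any]]

@dataclass
class CorrespondencePipeline:
    stages: List[Stage]  # executed in order; each stage reads and extends a shared state

    def run(self, surface_a: Any, surface_b: Any) -> Dict[str, Any]:
        state: Dict[str, Any] = {"A": surface_a, "B": surface_b}
        for stage in self.stages:
            state = stage(state)
        return state

def extract_features(state: Dict[str, Any]) -> Dict[str, Any]:
    """Placeholder stage: a real stage would compute surface descriptors here."""
    state["features_A"], state["features_B"] = [], []
    return state

# Swapping a single stage yields a new correspondence algorithm while reusing the rest.
pipeline = CorrespondencePipeline(stages=[extract_features])
result = pipeline.run(surface_a=[], surface_b=[])
```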

Relevance: 30.00%

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, how they are oriented, or how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives between 10 and 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
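
To make the combinatorial point concrete, the toy sketch below enumerates every assignment of a tiny one-dimensional "scene" that is consistent with two end-on observations; it is a drastically simplified stand-in for the paper's voxel and first-order-logic formulation.

```python
from itertools import product

COLOURS = ("red", "blue")

def render(scene, from_left=True):
    """A pinhole 'camera' that sees only the colour of the first occupied cell."""
    cells = scene if from_left else scene[::-1]
    return next((c for c in cells if c is not None), None)

def consistent_scenes(n_cells, left_obs, right_obs):
    """Enumerate every occupancy/colour assignment consistent with both views."""
    for scene in product((None,) + COLOURS, repeat=n_cells):
        if render(scene, True) == left_obs and render(scene, False) == right_obs:
            yield scene

solutions = list(consistent_scenes(5, "red", "blue"))
print(len(solutions))  # many distinct scenes reproduce the same pair of observations
```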

Relevance: 30.00%

Abstract:

Capacity limits in visual attention have traditionally been studied using static arrays of elements from which an observer must detect a target defined by a certain visual feature or combination of features. In the current study we use this visual search paradigm, with accuracy as the dependent variable, to examine attentional capacity limits for different visual features undergoing change over time. In Experiment 1, detectability of a single changing target was measured under conditions where the type of change (size, speed, colour), the magnitude of change, the set size and the homogeneity of the unchanging distractors were all systematically varied. Psychometric function slopes were calculated for the different experimental conditions, and 'change thresholds' extracted from these slopes were used in Experiment 2, in which multiple supra-threshold changes were made, simultaneously, either to a single stimulus element or to two or three different stimulus elements. These experiments provide an objective psychometric paradigm for measuring changes in visual features over time. The results favour object-based accounts of visual attention and show consistent differences in the allocation of attentional capacity to different perceptual dimensions.
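
The sketch below shows one conventional way of extracting a change threshold by fitting a logistic psychometric function to accuracy data; the function form, the 0.5 guessing floor and the 75%-correct criterion are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Accuracy rising from a 0.5 guessing floor towards 1.0 with change magnitude x."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (x - threshold)))

magnitudes = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # change magnitude (arbitrary units)
accuracy   = np.array([0.52, 0.61, 0.78, 0.93, 0.99])  # proportion of correct detections

(threshold, slope), _ = curve_fit(psychometric, magnitudes, accuracy, p0=[2.0, 1.0])
# With this parameterisation, `threshold` is the magnitude yielding 75% correct.
print(f"fitted change threshold: {threshold:.2f}  (slope: {slope:.2f})")
```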

Relevance: 30.00%

Abstract:

Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. Their primary drawback is that they are sensitive to initialisation, with accurate and robust results often requiring initialisation close to the true object in the image. Automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are thin elastic tissues that cover the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterises the progression of osteoarthritis. The state of the art in cartilage segmentation is 2D semi-automated algorithms, which require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that automatically provides a good initialisation for deformable models of the patella cartilage by exploiting the strong spatial relationship of the cartilage to the underlying bone.
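
A minimal sketch of this kind of bone-driven initialisation is given below: a thin shell of voxels just outside a segmented patella mask is taken as the starting region for the cartilage model. The use of simple binary dilation and the shell thickness are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from scipy import ndimage

def cartilage_init_shell(bone_mask, thickness_vox=3):
    """Boolean shell of `thickness_vox` voxels hugging the outside of the bone surface."""
    dilated = ndimage.binary_dilation(bone_mask, iterations=thickness_vox)
    return dilated & ~bone_mask  # voxels near, but outside, the bone

# Toy example: a spherical "patella" inside a 50^3 volume.
zz, yy, xx = np.mgrid[:50, :50, :50]
bone = (zz - 25) ** 2 + (yy - 25) ** 2 + (xx - 25) ** 2 < 15 ** 2
print(int(cartilage_init_shell(bone).sum()), "candidate cartilage voxels")
```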

Relevance: 20.00%

Abstract:

Results of two experiments are reported that examined how people respond to rectangular targets of different sizes in simple hitting tasks. If a target moves in a straight line and a person is constrained to move along a linear track oriented perpendicular to the target's motion, then the length of the target along its direction of motion constrains the temporal accuracy and precision required to make the interception. The dimensions of the target perpendicular to its direction of motion place no constraints on performance in such a task. In contrast, if the person is not constrained to move along a straight track, the target's dimensions may constrain the spatial as well as the temporal accuracy and precision. The experiments reported here examined how people responded to targets of different vertical extent (height): the task was to strike targets that moved along a straight, horizontal path. In experiment 1 participants were constrained to move along a horizontal linear track to strike targets and so target height did not constrain performance. Target height, length and speed were co-varied. Movement time (MT) was unaffected by target height but was systematically affected by length (briefer movements to smaller targets) and speed (briefer movements to faster targets). Peak movement speed (Vmax) was influenced by all three independent variables: participants struck shorter, narrower and faster targets harder. In experiment 2, participants were constrained to move in a vertical plane normal to the target's direction of motion. In this task target height constrains the spatial accuracy required to contact the target. Three groups of eight participants struck targets of different height but of constant length and speed, hence constant temporal accuracy demand (different for each group, one group struck stationary targets = no temporal accuracy demand). On average, participants showed little or no systematic response to changes in spatial accuracy demand on any dependent measure (MT, Vmax, spatial variable error). The results are interpreted in relation to previous results on movements aimed at stationary targets in the absence of visual feedback.

Relevance: 20.00%

Abstract:

Studies concerning the processing of natural scenes using eye movement equipment have revealed that observers retain surprisingly little information from one fixation to the next. Other studies, in which fixation remained constant while elements within the scene were changed, have shown that, even without refixation, objects within a scene are surprisingly poorly represented. Although this effect has been studied in some detail in static scenes, there has been relatively little work on scenes as we would normally experience them, namely dynamic and ever changing. This paper describes a comparable form of change blindness in dynamic scenes, in which detection is performed in the presence of simulated observer motion. The study also describes how change blindness is affected by the manner in which the observer interacts with the environment, by comparing detection performance of an observer as the passenger or driver of a car. The experiments show that observer motion reduces the detection of orientation and location changes, and that the task of driving causes a concentration of object analysis on or near the line of motion, relative to passive viewing of the same scene.

Relevance: 20.00%

Abstract:

A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme an optical frequency dither is applied to the laser. In optical heterodyne experiments, this dither is transferred to the RF beat signal, reducing the spectral power density and hence the signal-to-noise ratio relative to that in the absence of dither. We removed the dither by mixing the raw beat signal with a dithered local oscillator signal. When the dither waveform is matched to that of the reference laser, the output signal from the mixer is rendered dither free. Application of this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal detection model is developed which predicts similar threshold reductions.
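
The sketch below reproduces the cancellation principle numerically: a beat signal and a local oscillator carrying the same frequency dither are multiplied, and the difference-frequency term emerges dither-free. All signal parameters are illustrative, not values from the experiment.

```python
import numpy as np

fs = 200e6                              # sample rate, Hz
t = np.arange(int(2e-3 * fs)) / fs      # 2 ms of signal
f_beat, f_lo = 20e6, 18e6               # dithered beat signal and dithered local oscillator
f_dither, deviation = 6e3, 3e6          # dither rate and peak frequency deviation

dither_phase = (deviation / f_dither) * np.sin(2 * np.pi * f_dither * t)
beat = np.cos(2 * np.pi * f_beat * t + dither_phase)   # RF beat carrying the laser dither
lo = np.cos(2 * np.pi * f_lo * t + dither_phase)       # LO carrying the matched dither

mixed = beat * lo   # difference term at f_beat - f_lo is dither-free; sum term is not
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)
print(f"dominant mixer output: {freqs[np.argmax(spectrum)] / 1e6:.2f} MHz")
```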

Relevance: 20.00%

Abstract:

While a number of studies have shown that object-extracted relative clauses are more difficult to understand than subject-extracted counterparts for second language (L2) English learners (e.g., Izumi, 2003), less is known about why this is the case and how they process these complex sentences. This exploratory study examines the potential applicability of Gibson's (1998, 2000) Syntactic Prediction Locality Theory (SPLT), a theory proposed to predict first language (L1) processing difficulty, to L2 processing and considers whether the theory might also account for the processing difficulties of subject- and object-extracted relative clauses encountered by L2 learners. Results of a self-paced reading time experiment from 15 Japanese learners of English are mainly consistent with the reading time profile predicted by the SPLT and thus suggest that the L1 processing theory might also be able to account for L2 processing difficulty.

Relevance: 20.00%

Abstract:

This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
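
The sketch below indicates the shape of such a detector: a small feedforward network fed the six station measurements for interval t, with a variable decision threshold applied to its output. The hidden-layer size, activation and (untrained) weights are illustrative assumptions, not the reported model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 6, 10    # speed, flow, occupancy at each of the two (dual) stations
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(1, n_hidden)), np.zeros(1)

def incident_score(x):
    """Forward pass: sigmoid(W2 . tanh(W1 x + b1) + b2), read as incident likelihood."""
    h = np.tanh(W1 @ x + b1)
    z = (W2 @ h + b2).item()
    return 1.0 / (1.0 + np.exp(-z))

def detect(x, threshold=0.5):
    """Raising `threshold` trades detection rate against false alarm rate."""
    return incident_score(x) >= threshold

# One interval of (speed, flow, occupancy) at the upstream and downstream stations.
x_t = np.array([88.0, 1450.0, 9.5, 62.0, 1100.0, 14.0])
print(detect(x_t, threshold=0.7))
```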

Relevance: 20.00%

Abstract:

A technique based on the polymerase chain reaction (PCR) for the specific detection of Phytophthora medicaginis was developed using nucleotide sequence information of the ribosomal DNA (rDNA) regions. The complete IGS 2 region between the 5S gene of one rDNA repeat and the small subunit of the adjacent repeat was sequenced for P. medicaginis and related species. The entire nucleotide sequence length of the IGS 2 of P. medicaginis was 3566 bp. A pair of oligonucleotide primers (PPED04 and PPED05), which allowed amplification of a specific fragment (364 bp) within the IGS 2 of P. medicaginis using the PCR, was designed. Specific amplification of this fragment from P. medicaginis was highly sensitive, detecting template DNA as low as 4 ng and in a host-pathogen DNA ratio of 1,000,000:1. Specific PCR amplification using PPED04 and PPED05 was successful in detecting P. medicaginis in lucerne stems infected under glasshouse conditions and in field-infected lucerne roots. The procedures developed in this work have application to improved identification and detection of a wide range of Phytophthora spp. in plants and soil.
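
As an in-silico illustration of what the primer design implies, the sketch below locates a forward primer and the reverse complement of a reverse primer in a template and reports the predicted amplicon length. The sequences used are placeholders, not the actual PPED04 and PPED05 primers.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def amplicon_length(template, fwd_primer, rev_primer):
    """Predicted product length, or None if either primer site is absent."""
    start = template.find(fwd_primer)
    rev_site = template.find(reverse_complement(rev_primer))
    if start == -1 or rev_site == -1 or rev_site < start:
        return None
    return rev_site + len(rev_primer) - start

# Placeholder primers and template (NOT the real PPED04 / PPED05 sequences).
fwd, rev = "ACGTACGTAC", "TTGGCCAATT"
template = "GG" + fwd + "AAAACCCCGGGGTTTT" + reverse_complement(rev) + "GG"
print(amplicon_length(template, fwd, rev))  # the real assay predicts a 364 bp product
```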

Relevance: 20.00%

Abstract:

This communication describes an improved one-step solid-phase extraction method for the recovery of morphine (M), morphine-3-glucuronide (M3G), and morphine-6-glucuronide (M6G) from human plasma with reduced coextraction of endogenous plasma constituents, compared to that of the authors' previously reported method. The magnitude of the peak caused by endogenous plasma components in the chromatogram that eluted immediately before the retention time of M3G has been reduced significantly (p < 0.01), by approximately 80%, while achieving high extraction efficiencies for the compounds of interest, viz. morphine, M6G, and M3G (93.8 +/- 2.5, 91.7 +/- 1.7, and 93.1 +/- 2.2%, respectively). Furthermore, when the improved solid-phase extraction method was used, the extraction cartridge-derived late-eluting peak (retention time 90 to 100 minutes) reported in our previous method was no longer present in the plasma extracts. Therefore the combined effect of reducing the recovery of the endogenous components of plasma that chromatographed just before the retention time of M3G and the removal of the late-eluting, extraction cartridge-derived peak has resulted in a decrease in the chromatographic run-time to 20 minutes, thereby increasing the sample throughput by up to 100%.