77 results for SEGMENTED POLYNOMIALS


Relevance: 10.00%

Publisher:

Abstract:

Recent research on bicycle helmets and concerns about how public bicycle hire schemes will function in the context of compulsory helmet wearing laws have drawn media attention. This monograph presents the results of research commissioned by the Queensland Department of Transport and Main Roads to review the national and international literature regarding the health outcomes of cycling and bicycle helmets and examine crash and hospital data. It also includes critical examinations of the methodology used by Voukelatos and Rissel (2010), and estimates the likely effects of possible segmented approaches to bicycle helmet wearing legislation. The research concludes that current bicycle helmet wearing rates are halving the number of head injuries experienced by Queensland cyclists. Helmet wearing legislation discouraged people from cycling when it was first introduced but there is little evidence that it continues to do so. Cycling has significant health benefits and should be encouraged in ways that reduce the risk of the most serious injuries. Infrastructure and speed management approaches to improving the safety of cycling should be undertaken as part of a Safe System approach, but protection of the individual by simple and cost-effective methods such as bicycle helmets should also be part of an overall package of measures.

Relevance: 10.00%

Publisher:

Abstract:

This paper presents a method for measuring the in-bucket payload volume on a dragline excavator for the purpose of estimating the material's bulk density in real time. Knowledge of the payload's bulk density can provide feedback to mine planning and scheduling to improve blasting and therefore produce a more uniform bulk density across the excavation site. This allows a single optimal bucket size to be used for maximum overburden removal per dig, in turn reducing costs and emissions in dragline operation and maintenance. The proposed solution uses a range-bearing laser to locate and scan full buckets between the lift and dump stages of the dragline cycle. The bucket is segmented from the scene using cluster analysis, and the pose of the bucket is calculated using the Iterative Closest Point (ICP) algorithm. Payload points are identified using a known model and subsequently converted into a height grid for volume estimation. Results from both scaled and full-scale implementations show that this method can achieve an accuracy above 95%.
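
The final step, converting payload points to a height grid and integrating it, lends itself to a short illustration. Below is a minimal numpy sketch of that volume estimate, assuming the payload points are already segmented and expressed in a bucket-aligned frame; the grid resolution and the toy point cloud are illustrative only.

```python
import numpy as np

def payload_volume(points, cell=0.1):
    """Estimate payload volume from 3D points (bucket frame, z up).

    points : (N, 3) array of payload surface points, metres.
    cell   : height-grid resolution, metres.
    Volume is approximated as the sum of per-cell maximum heights
    times the cell footprint area.
    """
    x, y, z = points.T
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    grid = np.zeros((ix.max() + 1, iy.max() + 1))
    # keep the highest point seen in each cell
    np.maximum.at(grid, (ix, iy), z - z.min())
    return grid.sum() * cell * cell

# toy example: a 1 m x 1 m x 0.5 m block of points -> roughly 0.5 m^3
pts = np.random.rand(20000, 3) * [1.0, 1.0, 0.5]
print(payload_volume(pts))
```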

Relevance: 10.00%

Publisher:

Abstract:

In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task due to the complex nature of the images. The brain has a particularly complicated structure and its precise segmentation is very important for detecting tumors, edema, and necrotic tissue in order to prescribe appropriate therapy. Magnetic Resonance Imaging is an important diagnostic imaging technique utilized for early detection of abnormal changes in tissues and organs. It possesses good contrast resolution for different tissues and is therefore preferred over Computerized Tomography for brain studies; consequently, the majority of research in medical image segmentation concerns MR images. At the core of this research, a set of MR images has been segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. Subsequently, the resultant images from the different segmentation techniques were compared with each other and analyzed by professional radiologists to find the most accurate segmentation technique. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance Image.
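
As a point of reference for the winning technique, the following is a generic Otsu-thresholding sketch using scikit-image, not the authors' exact pipeline; the test image, the small-object clean-up, and the largest-region selection are illustrative assumptions, since the abstract does not describe any preprocessing.

```python
import numpy as np
from skimage import data, filters, morphology, measure

# stand-in image: any 2D grayscale slice would do here
image = data.camera().astype(float)

# Otsu's method picks the threshold that maximises between-class variance
t = filters.threshold_otsu(image)
mask = image > t

# small clean-up: drop speckles and keep the largest connected region
mask = morphology.remove_small_objects(mask, min_size=64)
labels = measure.label(mask)
if labels.max() > 0:
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    mask = labels == largest

print(f"threshold = {t:.1f}, segmented pixels = {mask.sum()}")
```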

Relevance: 10.00%

Publisher:

Abstract:

The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral–texture feature descriptor is proposed by incorporating spectral vegetation indices into statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used, and a support vector machine is employed as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images, and different feature vectors are extracted to represent each tree crown. The experimental results showed that the proposed spectral moment features outperform, or at least compare with, state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic space analysis further demonstrates the strength of the proposed feature descriptors.
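
To make the idea of a spectral-moment descriptor concrete, here is a minimal sketch: a vegetation index is computed per crown pixel and summarised by its first four statistical moments, which then feed a support vector machine. The particular index (an excess-green style ratio), the choice of four moments, and the toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def crown_features(rgb, mask):
    """Spectral-moment features for one segmented tree crown.

    rgb  : (H, W, 3) image, float in [0, 1].
    mask : (H, W) boolean crown mask.
    Uses a simple greenness index as the spectral vegetation index
    (an assumption; the abstract does not fix a particular index)
    and summarises it with its first four statistical moments.
    """
    r, g, b = rgb[..., 0][mask], rgb[..., 1][mask], rgb[..., 2][mask]
    gi = (2 * g - r - b) / (2 * g + r + b + 1e-6)   # excess-green style index
    return np.array([gi.mean(), gi.std(), stats.skew(gi), stats.kurtosis(gi)])

# toy usage: random "crowns" with made-up labels, then an SVM benchmark classifier
rng = np.random.default_rng(0)
X = np.stack([crown_features(rng.random((64, 64, 3)), np.ones((64, 64), bool))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```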

Relevance: 10.00%

Publisher:

Abstract:

Stem cells have attracted tremendous interest in recent times due to their promise in providing innovative new treatments for a great range of currently debilitating diseases. This is due to their potential ability to regenerate and repair damaged tissue, and hence restore lost body function, in a manner beyond the body's usual healing process. Bone marrow-derived mesenchymal stem cells, or bone marrow stromal cells, are one type of adult stem cell of particular interest. Since they are derived from a living human adult donor, they do not carry the ethical issues associated with the use of human embryonic stem cells. They can also be taken from a patient or other donors with relative ease and then grown readily in the laboratory for clinical application.

Despite the attractive properties of bone marrow stromal cells, there is presently no quick and easy way to determine the quality of a sample of such cells. Presently, a sample must be grown for weeks and subjected to various time-consuming assays, under the direction of an expert cell biologist, to determine whether it will be useful. Hence there is a great need for innovative new ways to assess the quality of cell cultures for research and potential clinical application.

The research presented in this thesis investigates the use of computerised image processing and pattern recognition techniques to provide a quicker and simpler method for the quality assessment of bone marrow stromal cell cultures. In particular, the aim of this work is to determine whether it is possible, through the use of image processing and pattern recognition techniques, to predict the growth potential of a culture of human bone marrow stromal cells at early stages, before it is readily apparent to a human observer.

With the above aim in mind, a computerised system was developed to classify the quality of bone marrow stromal cell cultures based on phase contrast microscopy images. Our system was trained and tested on mixed images of both healthy and unhealthy bone marrow stromal cell samples taken from three different patients. This system, when presented with 44 previously unseen bone marrow stromal cell culture images, outperformed human experts in the ability to correctly classify healthy and unhealthy cultures. The system correctly classified the health status of an image 88% of the time, compared to an average of 72% of the time for human experts. Extensive training and testing of the system on a set of 139 normal-sized images and 567 smaller image tiles showed an average performance of 86% and 85% correct classifications, respectively.

The contributions of this thesis include demonstrating the applicability and potential of computerised image processing and pattern recognition techniques for the quality assessment of bone marrow stromal cell cultures. As part of this system, an image normalisation method has been suggested and a new segmentation algorithm has been developed for locating cell regions of irregularly shaped cells in phase contrast images. Importantly, we have validated the efficacy of both the normalisation and segmentation methods by demonstrating that both quantitatively improve the classification performance of subsequent pattern recognition algorithms in discriminating between cell cultures of differing health status. We have shown that the quality of a cell culture of bone marrow stromal cells may be assessed without the need either to segment individual cells or to use time-lapse imaging. Finally, we have proposed a set of features that, when extracted from the cell regions of segmented input images, can be used to train current state-of-the-art pattern recognition systems to predict the quality of bone marrow stromal cell cultures earlier and more consistently than human experts.
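
The pipeline described (normalise, locate cell regions, extract features, classify) can be sketched end to end. The block below is only a toy stand-in under stated assumptions: the thesis's own normalisation and segmentation algorithms are not given in the abstract, so intensity rescaling and a local threshold act as placeholders, and the images and labels are synthetic.

```python
import numpy as np
from skimage import exposure, filters, morphology
from sklearn.svm import SVC

def cell_region_features(image):
    """Toy stand-in for the thesis pipeline on one phase-contrast image.

    Intensity rescaling and a local threshold are generic placeholders
    for the thesis's normalisation and segmentation algorithms.
    """
    norm = exposure.rescale_intensity(image, out_range=(0.0, 1.0))
    mask = norm > filters.threshold_local(norm, block_size=51)
    mask = morphology.remove_small_objects(mask, min_size=30)
    region = norm[mask] if mask.any() else norm.ravel()
    # simple intensity/coverage summary of the detected cell regions
    return np.array([mask.mean(), region.mean(), region.std()])

# usage sketch: features from labelled culture images train a classifier
rng = np.random.default_rng(1)
images = rng.random((30, 128, 128))          # placeholder images
labels = rng.integers(0, 2, size=30)         # 0 = unhealthy, 1 = healthy
X = np.stack([cell_region_features(im) for im in images])
clf = SVC().fit(X, labels)
print(clf.predict(X[:3]))
```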

Relevance: 10.00%

Publisher:

Abstract:

An application of image processing techniques to the recognition of hand-drawn circuit diagrams is presented. The scanned image of a diagram is pre-processed to remove noise and converted to bilevel. Morphological operations are applied to obtain a clean, connected representation using thinned lines. The diagram comprises nodes, connections and components. Nodes and components are segmented using appropriate thresholds on a spatially varying object pixel density. Connection paths are traced using a pixel-stack. Nodes are classified using syntactic analysis. Components are classified using a combination of invariant moments, scalar pixel-distribution features, and vector relationships between straight lines in polygonal representations. A node recognition accuracy of 82% and a component recognition accuracy of 86% were achieved on a database comprising 107 nodes and 449 components. This recogniser can be used for layout “beautification” or to generate input code for circuit analysis and simulation packages.
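
The invariant moments mentioned for component classification are typically Hu's seven moment invariants; a minimal scikit-image sketch for one segmented component bitmap is shown below. The toy "resistor outline" input and the skeletonisation step are illustrative assumptions.

```python
import numpy as np
from skimage import measure, morphology

def hu_features(component_mask):
    """Seven Hu invariant moments for one segmented component bitmap.

    These are rotation-, scale- and translation-invariant shape features
    of the kind the recogniser combines with pixel-distribution and
    line-relationship features (only the moment part is sketched here).
    """
    img = component_mask.astype(float)
    m = measure.moments(img)
    centroid = (m[1, 0] / m[0, 0], m[0, 1] / m[0, 0])
    mu = measure.moments_central(img, center=centroid)
    nu = measure.moments_normalized(mu)
    return measure.moments_hu(nu)

# toy component: a thinned rectangle standing in for a component outline
comp = np.zeros((40, 80), bool)
comp[10:30, 10:70] = True
comp = morphology.skeletonize(comp)
print(np.round(hu_features(comp), 4))
```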

Relevance: 10.00%

Publisher:

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals' seldom-studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it becomes habitual and involves minimal intention or decision making. Key variables investigated are the activity initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers' spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, so that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals' highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
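
For a feel of what a VB-fitted GMM does with such data, here is a short sketch using scikit-learn's BayesianGaussianMixture on synthetic "spiky" 2D usage locations. Note this is not the thesis's splitting algorithm: the Dirichlet-process variant instead starts from a generous upper bound on the number of components and switches off the ones it does not need, but it illustrates the same end result of automatically determining the effective number of components.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic "spiky" spatial usage: a few tight clusters of cell-tower
# positions surrounded by sparse background activity.
rng = np.random.default_rng(42)
centres = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
dense = rng.normal(centres[rng.integers(0, 3, size=300)], 0.2)
sparse = rng.uniform(-2, 11, size=(60, 2))
X = np.vstack([dense, sparse])

# Variational Bayesian GMM: a Dirichlet-process prior lets the model switch
# off unneeded components, so only an upper bound on K must be supplied.
vbgmm = BayesianGaussianMixture(
    n_components=15,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

active = vbgmm.weights_ > 0.02
print("components retained:", active.sum())
print("mixture weights:", np.round(vbgmm.weights_[active], 3))
```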

Relevance: 10.00%

Publisher:

Abstract:

Gait recognition approaches continue to struggle with challenges including view invariance, low-resolution data, robustness to unconstrained environments, and fluctuating gait patterns due to subjects carrying goods or wearing different clothes. Although computationally expensive, model-based techniques offer promise over appearance-based techniques for these challenges, as they gather gait features and interpret gait dynamics in skeleton form. In this paper, we propose a fast 3D ellipsoid-based gait recognition algorithm using a 3D voxel model derived from multi-view silhouette images. This approach directly addresses the limitations of view dependency and self-occlusion in existing ellipse-fitting model-based approaches. Voxel models are segmented into four components (left and right legs, above and below the knee), and ellipsoids are fitted to each region using eigenvalue decomposition. Features derived from the ellipsoid parameters are modeled using a Fourier representation to retain the temporal dynamic pattern for classification. We demonstrate the proposed approach using the CMU MoBo database and show that an improvement of 15-20% can be achieved over a 2D ellipse-fitting baseline.
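
The ellipsoid-fitting step via eigenvalue decomposition is straightforward to illustrate: the eigenvectors of the voxel covariance matrix give the ellipsoid axes, and the eigenvalues give the semi-axis lengths up to a scale factor. The sketch below uses a toy voxel set; the scaling by the square root of five times the eigenvalue assumes a uniform solid ellipsoid, which is an assumption rather than something stated in the paper.

```python
import numpy as np

def fit_ellipsoid(voxels):
    """Fit an ellipsoid to a voxel point set by eigenvalue decomposition.

    voxels : (N, 3) array of voxel centre coordinates for one body segment.
    Returns the centroid, the semi-axis lengths and the axis directions.
    The axes come from the eigenvectors of the covariance matrix; the
    sqrt(5 * eigenvalue) scaling matches a uniform solid ellipsoid.
    """
    c = voxels.mean(axis=0)
    cov = np.cov((voxels - c).T)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    semi_axes = np.sqrt(5.0 * evals)            # solid-ellipsoid assumption
    return c, semi_axes[::-1], evecs[:, ::-1]   # largest axis first

# toy segment: points filling an axis-aligned ellipsoid of semi-axes 1, 2, 4
rng = np.random.default_rng(0)
p = rng.uniform(-1, 1, size=(200000, 3))
p = p[(p ** 2).sum(axis=1) <= 1.0] * [1.0, 2.0, 4.0]
centre, axes, dirs = fit_ellipsoid(p)
print(np.round(axes, 2))   # expect roughly [4, 2, 1]
```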

Relevance: 10.00%

Publisher:

Abstract:

Object segmentation is one of the fundamental steps for a number of robotic applications such as manipulation, object detection, and obstacle avoidance. This paper proposes a visual method for incorporating colour and depth information from sequential multi-view stereo images to segment objects of interest from complex and cluttered environments. Rather than segmenting objects using information from a single frame in the sequence, we incorporate information from neighbouring views to increase the reliability of the information and improve the overall segmentation result. Specifically, dense depth information of a scene is computed using multiple-view stereo. Depths from neighbouring views are reprojected into the reference frame to be segmented, compensating for imperfect depth computations in individual frames. The multiple depth layers are then combined with colour information from the reference frame to create a Markov random field that models the segmentation problem. Finally, graph-cut optimisation is employed to infer the pixels belonging to the object to be segmented. The segmentation accuracy is evaluated over images from an outdoor video sequence, demonstrating the viability of automatic object segmentation for mobile robots using monocular cameras as a primary sensor.
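
The final graph-cut inference can be illustrated with OpenCV's GrabCut, used here as an off-the-shelf stand-in: the paper's MRF combines colour with reprojected multi-view depth, whereas this sketch uses colour only, a rough bounding-box prior, and a toy image with a coloured disc in place of the reference frame.

```python
import numpy as np
import cv2

# Toy "reference frame": a coloured disc on a dark, mildly noisy background.
image = np.full((240, 320, 3), 60, np.uint8)
cv2.circle(image, (160, 120), 60, (40, 180, 220), -1)
image += np.random.randint(0, 20, image.shape, dtype=np.uint8)

mask = np.zeros(image.shape[:2], np.uint8)
rect = (70, 30, 180, 180)                  # rough box around the object
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# graph-cut inference over a colour-only MRF (depth fusion omitted here)
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# pixels the graph cut labels as (probable) foreground
object_mask = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
print("object pixels:", int(object_mask.sum()))
```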

Relevance: 10.00%

Publisher:

Abstract:

Purpose – This study examines the nature of consumers’ perceptions of the value they derive from the everyday experiential consumption of mobile phones and how mobile marketing (m-marketing) can potentially enhance these value perceptions.
Methodology – Q methodology is used to examine how consumers’ subjective perceptions and opinions are shared at a collective level. Forty participants undertook two Q sorts and the data were analysed using PQ-method.
Findings – The first Q sort identified three profiles of perceived value: the Mobile Pragmatists, the Mobile Connectors and the Mobile Revellers. The second Q sort identified two profiles of perceived value of m-marketing: one emerging from the shared opinions of the Mobile Pragmatists and the Mobile Connectors, and the second from the Mobile Revellers.
Implications/limitations – The findings show how consumers can be segmented based on their contextualised perceived value of consuming mobile phones, and how the potential of m-marketing is perceived in ways that can enhance these value perceptions. Limitations relate to deriving statements for the Q sorts and the generalisability of the results.
Practical implications – The findings highlight ways to tailor m-marketing strategies to complement consumers’ perceptions of the value offered through their mobile phones.
Originality/value of paper – The study contributes to the literature by using Q methodology to examine two subjective areas of consumer behaviour: experiential consumption and consumer perceived value.
Keywords – mobile phones, mobile phone marketing, consumer perceived value, Q methodology, experiential consumption
Classification – Research paper
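
For readers unfamiliar with the analysis step, the defining move of Q methodology is to correlate people rather than variables and to factor the resulting by-person correlation matrix. The sketch below uses principal components as a simplified stand-in for the centroid extraction and rotation performed by the PQ-method software; the sort counts and random data are purely illustrative.

```python
import numpy as np

# Minimal sketch of the Q-methodology analysis step (not the PQ-method
# software itself): participants' Q sorts are correlated with one another
# and factors are extracted from that by-person correlation matrix.
rng = np.random.default_rng(3)
n_participants, n_statements = 40, 30          # counts are illustrative
sorts = rng.integers(-4, 5, size=(n_participants, n_statements)).astype(float)

# correlate people, not statements: the defining move of Q methodology
corr = np.corrcoef(sorts)

# principal-component extraction as a simple stand-in for the centroid
# extraction plus rotation used in PQ-method
evals, evecs = np.linalg.eigh(corr)
order = np.argsort(evals)[::-1]
loadings = evecs[:, order[:3]] * np.sqrt(evals[order[:3]])

# each participant is assigned to the factor they load on most strongly
profiles = np.abs(loadings).argmax(axis=1)
print("participants per profile:", np.bincount(profiles, minlength=3))
```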

Relevance: 10.00%

Publisher:

Abstract:

Purpose: Investigations of foveal aberrations assume circular pupils. However, the pupil becomes increasingly elliptical with increasing visual field eccentricity. We address this and other issues concerning the specification of peripheral aberrations.
Methods: One approach uses an elliptical pupil similar to the actual pupil shape, stretched along its minor axis to become a circle so that Zernike circular aberration polynomials may be used. Another approach uses a circular pupil whose diameter matches either the larger or the smaller dimension of the elliptical pupil. Pictorial presentation of aberrations, the influence of wavelength on aberrations, sign differences between aberrations for fellow eyes, and referencing position to either the visual field or the retina are considered.
Results: Examples show differences between the two approaches. Each has its advantages and disadvantages, but there are ways to compensate for most disadvantages. Two representations of the data are pupil aberration maps at each position in the visual field and maps showing the variation in individual aberration coefficients across the field.
Conclusions: Based on simplicity of use, adequacy of approximation, possible departures of off-axis pupils from ellipticity, and ease of understanding by clinicians, the circular pupil approach is preferable to the stretched elliptical approach for studies involving field angles up to 30 deg.
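
The "stretch the elliptical pupil into a circle" idea amounts to a simple coordinate transform before evaluating the Zernike circle polynomials. Below is a small numpy sketch under stated assumptions: the pupil's major axis is taken along x, the off-axis ellipse is modelled as a cosine foreshortening of the circular pupil, and only the defocus term Z(2,0) is evaluated as an example.

```python
import numpy as np

def stretch_to_unit_circle(x, y, a, b):
    """Map points on an elliptical pupil onto the unit circle.

    x, y : pupil coordinates (major axis along x, an assumption here).
    a, b : semi-major and semi-minor axes of the elliptical pupil.
    The ellipse is stretched along its minor axis so that standard
    Zernike circle polynomials can be evaluated on the result.
    """
    return x / a, (y * (a / b)) / a

# example: a pupil viewed 30 degrees off-axis shrinks by cos(30 deg) in
# one direction, so b = a * cos(30 deg)
a = 2.0                                   # mm, illustrative semi-major axis
b = a * np.cos(np.radians(30))
xe, ye = 1.0, 0.8                         # a point inside the elliptical pupil
u, v = stretch_to_unit_circle(xe, ye, a, b)
rho, theta = np.hypot(u, v), np.arctan2(v, u)

# Zernike defocus term Z(2, 0) evaluated at the stretched coordinate
z_defocus = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
print(round(rho, 3), round(z_defocus, 3))
```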

Relevance: 10.00%

Publisher:

Abstract:

Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling has been proposed for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis.

For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate the road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After the road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's method. The final road model is generated by superimposing the lane markings on the road surfaces, and the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model.

As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed for feature classification and road detection in the aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. Then the support vector machine (SVM) algorithm is applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles.

The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the Bundaberg datasets, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, while the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
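
As an illustration of the urban pipeline's image-analysis side (the LiDAR intensity mask and nDSM filtering are omitted), the sketch below chains OpenCV's mean-shift filtering, a connected-component grouping into segments, and a scikit-learn SVM over simple per-segment colour features. The synthetic tile, the colour quantisation used to form segments, and the dummy training labels are placeholders, not the thesis's settings.

```python
import numpy as np
import cv2
from skimage import measure
from sklearn.svm import SVC

# Synthetic "aerial tile": a grey road strip through a uniform background.
rng = np.random.default_rng(0)
tile = np.full((200, 200, 3), 120, np.uint8)
tile[:, 80:120] = (90, 90, 95)
tile = np.clip(tile + rng.integers(-15, 15, tile.shape), 0, 255).astype(np.uint8)

# mean-shift filtering smooths each region toward a common colour
shifted = cv2.pyrMeanShiftFiltering(tile, sp=15, sr=25)

# group mean-shifted pixels into connected segments of similar colour
codes = (shifted // 32).astype(np.int32)
segments = measure.label(codes[..., 0] * 64 + codes[..., 1] * 8 + codes[..., 2])

# one feature vector (mean colour + size) per segment
ids = np.unique(segments)
feats = np.array([np.r_[tile[segments == s].mean(axis=0), (segments == s).sum()]
                  for s in ids])

# road / not-road labels would come from annotated training tiles;
# alternating dummy labels merely stand in to show the SVM step
labels = np.arange(len(ids)) % 2
clf = SVC().fit(feats, labels)
print("segments predicted as road:", ids[clf.predict(feats) == 1])
```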

Relevance: 10.00%

Publisher:

Abstract:

Despite the significant contribution of the transport sector to the global economy and society, it is one of the largest sources of global energy consumption, greenhouse gas emissions and environmental pollution. A complete look at the whole-of-life environmental inventory of this sector helps to generate a holistic understanding of the contributory factors causing emissions. Previous studies were mainly based on segmental views which mostly compare the environmental impacts of different modes of transport, but very few consider impacts other than those of the operational phase. Ignoring the impacts of non-operational phases, e.g., manufacture, construction and maintenance, may not accurately reflect the total contributions to emissions. Moreover, an integrated study covering all motorized modes of road transport is also needed to achieve a holistic estimation. The objective of this study is to develop a component-based life cycle inventory model which considers the impacts of both the operational and non-operational phases of the whole life as well as the different transport modes. In particular, the whole life cycle of road transport has been segmented into vehicle, infrastructure, fuel and operational components, and inventories have been conducted for each component. The inventory model has been demonstrated using the road transport of Singapore. Results show that total life cycle greenhouse gas emissions from the road transport sector of Singapore are 7.8 million tons per year, of which the operational phase contributes about 55% and the non-operational phases about 45%. The total amounts of criteria air pollutants are 46, 8.5, 33.6, 13.6 and 2.6 thousand tons per year for CO, SO2, NOx, VOC and PM10, respectively. From the findings, it can be deduced that stringent government policies on emission control measures have a significant impact on reducing environmental pollution. In combating global warming and environmental pollution, the promotion of public transport over private modes is an effective and sustainable policy.
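
The component-based roll-up itself is just a structured sum over modes and life-cycle components. The sketch below reproduces the abstract's 7.8 Mt/yr total and its roughly 55/45 operational versus non-operational split; the per-mode and per-component figures inside the dictionary are illustrative placeholders, not values from the study.

```python
# Component-based roll-up of life-cycle GHG emissions (all figures except
# the 7.8 Mt total and the ~55/45 split quoted in the abstract are
# illustrative placeholders, in million tonnes CO2-e per year).
inventory = {
    "operational":    {"car": 2.4, "bus": 0.9, "truck": 1.0},      # fuel burned in use
    "fuel":           {"car": 0.5, "bus": 0.2, "truck": 0.3},      # production and supply
    "vehicle":        {"car": 0.8, "bus": 0.2, "truck": 0.4},      # manufacture and upkeep
    "infrastructure": {"road construction": 0.7, "maintenance": 0.4},
}

component_totals = {c: sum(v.values()) for c, v in inventory.items()}
total = sum(component_totals.values())
operational_share = component_totals["operational"] / total

print(f"total: {total:.1f} Mt/yr")                      # ~7.8 Mt/yr in the study
print(f"operational share: {operational_share:.0%}")    # ~55% vs ~45% non-operational
```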

Relevance: 10.00%

Publisher:

Abstract:

Many construction industry decision-makers believe there is a lack of off-site manufacture (OSM) adoption for non-residential construction in Australia. Identification of construction business processes was considered imperative in order to assist decision-makers to increase OSM utilisation. The premise that domain knowledge can be re-used to provide an intervention point in the construction process led a team of researchers to construct simple base-line process models for the complete construction process, segmented into six phases. Sixteen industry experts with domain knowledge were asked to review the construction phase base-line models to answer the question “Where in the process illustrated by this base-line model phase is an OSM task?”. Through an iterative and generative process, a number of off-site manufacture intervention points were identified and integrated into the process models. The re-use of industry expert domain knowledge provided suggestions for new ways to do basic tasks, thus facilitating changes to current practice. It is expected that implementation of the new processes will lead to systemic industry change and thus a growth in productivity due to increased adoption of OSM.

Relevance: 10.00%

Publisher:

Abstract:

Real-world AI systems have recently been deployed which can automatically analyze the plans and tactics of tennis players. As the game state is updated regularly at short intervals (i.e., point level), a library of a player's successful and unsuccessful plans can be learnt over time. Given the relative strengths and weaknesses of a player's plans, a set of proven plans or tactics from the library that characterize the player can be identified. For low-scoring, continuous team sports like soccer, such analysis for multi-agent teams does not exist, as the game is not segmented into “discretized” plays (i.e., plans), making it difficult to obtain a library that characterizes a team's behavior. Additionally, as player tracking data is costly and difficult to obtain, we only have partial team tracings in the form of ball actions, which makes this problem even more difficult. In this paper, we propose a method to overcome these issues by representing team behavior via play-segments, which are spatio-temporal descriptions of ball movement over fixed windows of time. Using these representations we can characterize team behavior through entropy maps, which give a measure of the predictability of team behaviors across the field. We show the efficacy and applicability of our method on the 2010-2011 English Premier League soccer data.
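
A simplified version of the entropy-map idea can be sketched directly: discretise the pitch into a grid and, for each cell, compute the Shannon entropy of where the ball goes next. Low entropy marks areas from which a team's ball movement is highly predictable. The grid size, the next-cell transition formulation and the toy trajectory below are assumptions standing in for the paper's play-segment representation.

```python
import numpy as np

def entropy_map(ball_xy, grid=(6, 4), eps=1e-12):
    """Simplified entropy map over a discretised pitch.

    ball_xy : (T, 2) ball positions in pitch coordinates [0, 1] x [0, 1].
    For each grid cell the Shannon entropy of the distribution of the
    *next* cell the ball moves to is computed; low entropy means the
    team's ball movement from that area is highly predictable. This is
    a simplified stand-in for the paper's play-segment representation.
    """
    cells = (np.clip((ball_xy * grid).astype(int), 0, np.array(grid) - 1)
             @ np.array([grid[1], 1]))                 # flatten (gx, gy) to an id
    n = grid[0] * grid[1]
    counts = np.zeros((n, n))
    for a, b in zip(cells[:-1], cells[1:]):
        counts[a, b] += 1
    p = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    ent = -(p * np.log2(p + eps)).sum(axis=1)
    return ent.reshape(grid)

# toy trajectory standing in for one match's ball-action tracings
rng = np.random.default_rng(7)
traj = np.cumsum(rng.normal(0, 0.05, size=(500, 2)), axis=0) % 1.0
print(np.round(entropy_map(traj), 2))
```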