1000 results for Road extraction


Relevance:

20.00%

Publisher:

Abstract:

There is little discussion of fatalism in the road safety literature, and limited research on it. Yet fatalism is a potential barrier to participation in health-promoting behaviours, particularly among the populations of developing countries and, to some extent, in developed countries. Many people in different parts of the world still believe in divine discretion and magical powers as causes of road crashes. Fatalistic beliefs, beliefs in mystical powers, and superstition appear to influence perceptions of crash risk and consequently lead people to take risks and neglect safety measures. Fatalistic beliefs may cause individuals to be resigned to risks because they believe nothing can be done to reduce them.

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission, and storage costs. This structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. The fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the decision rests on the effect of each subband on the reconstructed image under the mean-square criterion as well as on human visual sensitivities. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit-allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used; in this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and the scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
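The shape-parameter fitting step described above can be illustrated with a simpler moment-matching variant: for a generalized Gaussian, the ratio E|X|/sqrt(E[X^2]) is a monotone function of the shape parameter, so matching the sample ratio pins it down by bisection. This is a minimal stdlib sketch under that assumption, not the thesis's actual least-squares formulation:

```python
import math
import random

def ggd_ratio(beta):
    """Moment ratio E|X| / sqrt(E[X^2]) of a generalized Gaussian with shape beta."""
    return math.gamma(2.0 / beta) / math.sqrt(math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_shape(samples, lo=0.1, hi=10.0, tol=1e-8):
    """Estimate the GGD shape parameter by matching the sample moment ratio.

    ggd_ratio is increasing in beta (heavy tails -> small ratio), so a
    simple bisection on it converges.
    """
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n          # sample E|X|
    m2 = sum(x * x for x in samples) / n           # sample E[X^2]
    target = m1 / math.sqrt(m2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check on synthetic data: Gaussian samples correspond to beta = 2.
random.seed(0)
gaussian = [random.gauss(0.0, 1.0) for _ in range(200_000)]
beta_hat = estimate_shape(gaussian)
```

For beta = 2 the ratio reduces to sqrt(2/pi), the familiar Gaussian mean-absolute-deviation ratio, which makes a convenient correctness check.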
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
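To give lattice vector quantization a concrete flavour: finding the nearest point of the D_n lattice (all integer vectors with even coordinate sum; D4 is a common choice in image coding) takes only two rounding passes, via Conway and Sloane's fast quantizing rule. A hedged sketch of just that core step; the scaling factor and truncation level discussed above would be applied around it:

```python
def nearest_Dn(x):
    """Nearest point of the D_n lattice (integer vectors with even coordinate
    sum) to a real vector x, using the Conway-Sloane two-pass rounding rule."""
    f = [round(v) for v in x]          # nearest point of Z^n
    if sum(f) % 2 == 0:
        return f                       # already in D_n
    # Otherwise re-round the coordinate with the largest rounding error in
    # the "wrong" direction; this flips the parity of the coordinate sum
    # while moving the point as little as possible.
    k = max(range(len(x)), key=lambda i: abs(x[i] - f[i]))
    f[k] += 1 if x[k] > f[k] else -1
    return f

q = nearest_Dn([0.9, 0.4, -0.2, 0.1])   # the D4 point [1, 1, 0, 0]
```

In an LVQ codec, each source vector would first be divided by the scaling factor, snapped to the lattice as above, and clipped to the truncation shell that bounds the codebook.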
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
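The "local ridge dominant directions" above are conventionally estimated from image gradients with the doubled-angle averaging trick, so that opposite gradient directions reinforce rather than cancel. A minimal sketch on a synthetic block; the thesis's own algorithm operates on enhanced grey-level foreground areas, which this does not model:

```python
import math

def ridge_orientation(block):
    """Dominant ridge direction (radians) of a grey-level block, estimated
    from central-difference gradients via the classical doubled-angle average."""
    h, w = len(block), len(block[0])
    gxx = gyy = gxy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (block[y][x + 1] - block[y][x - 1]) / 2.0
            gy = (block[y + 1][x] - block[y - 1][x]) / 2.0
            gxx += gx * gx
            gyy += gy * gy
            gxy += gx * gy
    # Angle of the dominant gradient direction; ridges run perpendicular to it.
    theta = 0.5 * math.atan2(2.0 * gxy, gxx - gyy)
    return theta + math.pi / 2.0

# Synthetic block: sinusoidal "ridges" running vertically
# (intensity varies with x only), so the expected direction is pi/2.
block = [[math.cos(2 * math.pi * x / 8.0) for x in range(16)] for y in range(16)]
theta = ridge_orientation(block)
```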

Relevance:

20.00%

Publisher:

Abstract:

Artificial neural network (ANN) learning methods provide a robust, non-linear approach to approximating the target function for many classification, regression and clustering problems, and ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN, and the inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explaining the decision process of ANNs in the form of symbolic rules (predicate rules with variables); and (2) providing explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules and subsequently allows user interaction by interfacing with a knowledge-based reasoner. The performance of GYAN is demonstrated on a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived that describes the overall behaviour of the ANN with high accuracy and fidelity; and (2) a concise explanation is given (in terms of the rules, facts and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
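The idea of mapping a trained network's behaviour into symbolic rules can be shown in miniature with a query-based extraction over Boolean inputs. This is a pedagogical sketch, not GYAN itself, and the "network" weights are hand-picked for illustration rather than learned:

```python
import math
import itertools

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def network(x1, x2):
    """A tiny fixed 'trained' network (hand-picked weights, for illustration)
    that behaves like logical AND on {0, 1} inputs."""
    return sigmoid(10.0 * x1 + 10.0 * x2 - 15.0)

def extract_rules(net, n_inputs=2):
    """Query the black-box network on every Boolean input combination and
    emit one conjunctive propositional rule per positively classified case."""
    rules = []
    for combo in itertools.product([0, 1], repeat=n_inputs):
        if net(*combo) > 0.5:
            lits = [f"x{i+1}" if v else f"NOT x{i+1}" for i, v in enumerate(combo)]
            rules.append("IF " + " AND ".join(lits) + " THEN class=1")
    return rules

rules = extract_rules(network)   # a symbolic summary of the network's behaviour
```

Exhaustive querying only scales to a handful of Boolean inputs; methods like the one the thesis describes exist precisely because real networks need structured, non-exhaustive extraction into richer (here, restricted first-order) rule languages.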

Relevance:

20.00%

Publisher:

Abstract:

Young drivers are at higher risk of crashes than other drivers when carrying passengers. Graduated Driver Licensing has demonstrated effectiveness in reducing fatalities; however, there is considerable potential for additional strategies to complement the approach. A survey of 276 young adults (aged 17-25 years, 64% female) was conducted to examine the potential and importance of strategies delivered via the Internet, and of potential strategies for passengers. Strategies delivered via the Internet offer an opportunity for widespread dissemination and greater reach to young people at times convenient to them. The current study found some significant differences between males and females in the ways the Internet is used to obtain road safety information and in the components valued in trusted road safety sites. There were also significant differences between males and females in the kinds of strategies used as passengers to promote driver safety and the contexts in which they occurred, with females tending to adopt more proactive strategies than males. In sum, young people see value in Internet delivery of passenger safety information (80% agreed/strongly agreed), and more than 90% thought it was important to intervene while a passenger of a risky driver. Thus, while tailoring of Internet road safety strategies to young people may differ for males and females, there is considerable potential for a passenger focus in strategies aimed at reducing young driver crashes.

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the areawide Dynamic ROad traffic NoisE (DRONE) simulator and its implementation as a tool for noise abatement policy evaluation. DRONE integrates a road traffic noise estimation model with a traffic simulator to estimate road traffic noise in urban networks. The integrated traffic simulation-noise estimation model provides an interface for direct input of traffic flow properties from the simulation model to the noise estimation model, which in turn estimates noise on spatial and temporal scales. The output from DRONE is linked with a geographical information system for visual representation of noise levels in the form of noise contour maps.
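Whatever emission model such a tool uses, the receiver-side arithmetic is energy-based: dB levels from individual sources add on a power scale, not arithmetically. A generic free-field sketch of that step, not the DRONE model itself; the spherical-spreading law with its 11 dB constant is textbook acoustics, while DRONE's emission levels come from its integrated traffic data:

```python
import math

def level_at_receiver(lw, r):
    """Sound pressure level (dB) at distance r (metres) from a point source
    with sound power level lw (dB), assuming free-field spherical spreading:
    Lp = Lw - 20*log10(r) - 11."""
    return lw - 20.0 * math.log10(r) - 11.0

def combine_levels(levels):
    """Energy (not arithmetic) sum of individual dB levels:
    L_total = 10*log10(sum 10^(Li/10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels))

# Two vehicles (each with a nominal 100 dB power level) at 10 m and 20 m:
levels = [level_at_receiver(100.0, 10.0), level_at_receiver(100.0, 20.0)]
total = combine_levels(levels)
```

A useful sanity check built into the arithmetic: two equal sources raise the combined level by exactly 10*log10(2), i.e. about 3 dB, which is why doubling traffic volume does not double perceived loudness.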

Relevance:

20.00%

Publisher:

Abstract:

A road traffic noise prediction model (ASJ MODEL-1998) has been integrated with a road traffic simulator (AVENUE) to produce the Dynamic areawide Road traffic NoisE simulator (DRONE). This integrated traffic-noise-GIS tool is upgraded here to predict noise levels in built-up areas. The integration of traffic simulation with a noise model provides dynamic access to traffic flow characteristics and hence automated, detailed predictions of traffic noise, on a temporal as well as a spatial scale. The linkage with GIS gives a visual representation of noise pollution in the form of dynamic areawide traffic noise contour maps. An application of DRONE to a real-world built-up area is also presented.
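The contour maps mentioned above amount to binning predicted receiver-grid levels into discrete bands before the GIS renders them. A trivial sketch of that binning step; the 5 dB band width and 40 dB floor here are arbitrary illustrative choices, not DRONE's:

```python
def noise_band(level, band_width=5.0, floor=40.0):
    """Assign a dB level to a contour band index (0 = below the quietest
    band), the way a noise contour map colour-codes its cells."""
    if level < floor:
        return 0
    return int((level - floor) // band_width) + 1

# A small grid of predicted receiver levels (dB) and its band indices:
grid = [[52.0, 58.0, 61.0],
        [49.0, 55.0, 67.0]]
bands = [[noise_band(l) for l in row] for row in grid]
```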