927 results for feature inspection method
Abstract:
Masks are widely used in different industries, for example the traditional metal industry, hospitals and the semiconductor industry. Quality is a critical issue in the mask industry as it is related to public health and safety. Traditional quality practices for the manufacturing process have some limitations when implemented in mask industries. This paper investigates the suitability of the Six Sigma quality control method for the manufacturing process in the mask industry, with the aim of providing high-quality products, enhancing process capability, and reducing the defects and returned goods arising in a selected mask manufacturing company. This paper also suggests the modifications necessary in the Six Sigma method for effective implementation in the mask industry.
Abstract:
Wide-angle images exhibit significant distortion for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The required scale-space images for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector, based on this principle, is developed and several computational approaches to the convolution are investigated in both the spatial and frequency domain. In particular, a close approximation is developed that has comparable computation time to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
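As a point of reference for the recall versus 1-precision evaluation mentioned above, the following is a minimal sketch of one plausible way to sweep a match-acceptance threshold over candidate matches with known ground truth; the function name and inputs are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def recall_vs_1_precision(distances, is_correct, thresholds):
    """Compute (1-precision, recall) pairs over a sweep of match-acceptance
    thresholds. `distances` (float array) and `is_correct` (boolean array)
    are parallel arrays over candidate matches; `is_correct` marks matches
    that agree with the ground-truth correspondences."""
    distances = np.asarray(distances, dtype=float)
    is_correct = np.asarray(is_correct, dtype=bool)
    total_correspondences = is_correct.sum()          # ground-truth positives
    curve = []
    for t in thresholds:
        accepted = distances <= t
        true_pos = np.logical_and(accepted, is_correct).sum()
        false_pos = np.logical_and(accepted, ~is_correct).sum()
        recall = true_pos / max(total_correspondences, 1)
        one_minus_precision = false_pos / max(true_pos + false_pos, 1)
        curve.append((one_minus_precision, recall))
    return np.array(curve)
```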
Abstract:
In this paper, we develop the switching controller presented by Lee et al. for the pose control of a car-like vehicle to allow the use of an omnidirectional vision sensor. To this end we incorporate an extension to a hypothesis on the navigation behaviour of the desert ant, Cataglyphis bicolor, which leads to a correspondence-free, landmark-based vision technique. The method we present allows positioning to a learnt location based on feature bearing angle and range discrepancies between the robot's current view of the environment and that at the learnt location. We present simulations and experimental results, the latter obtained using our outdoor mobile platform.
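The sketch below illustrates homing from bearing and range discrepancies between a current view and a learnt view. It pairs landmarks by index purely for clarity, whereas the technique described above is correspondence-free, so this is an illustration of the underlying idea rather than the method itself.

```python
import numpy as np

def homing_vector(current_bearings, current_ranges, learnt_bearings, learnt_ranges):
    """Illustrative homing estimate: average, over landmarks, of the displacement
    implied by the bearing and range discrepancies between the current view and
    the view stored at the learnt (goal) location.
    Bearings are in radians in the robot frame; ranges are in metres."""
    cb, cr = np.asarray(current_bearings), np.asarray(current_ranges)
    lb, lr = np.asarray(learnt_bearings), np.asarray(learnt_ranges)
    # Landmark positions implied by each view, with the robot at the origin.
    cur = np.stack([cr * np.cos(cb), cr * np.sin(cb)], axis=1)
    goal = np.stack([lr * np.cos(lb), lr * np.sin(lb)], axis=1)
    # The mean displacement that would move each landmark back to its learnt
    # position approximates the robot's offset from the goal location.
    return (cur - goal).mean(axis=0)
```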
Abstract:
Axial shortening in vertical load-bearing elements of reinforced concrete high-rise buildings is caused by the time-dependent effects of shrinkage, creep and elastic shortening of concrete under loads. This phenomenon has to be predicted at the design stage and then updated during and after construction of the buildings in order to provide mitigation against the adverse effects of differential axial shortening among the elements. Existing measuring methods for updating previous predictions of axial shortening pose problems. With this in mind, an innovative procedure with a vibration-based parameter called the axial shortening index is proposed to update axial shortening of vertical elements based on variations in the vibration characteristics of the buildings. This paper presents the development of the procedure and illustrates it through a numerical example of an unsymmetrical high-rise building with two outrigger and belt systems. Results indicate that the method has the capability to capture the influence of different tributary areas, shear walls of outrigger and belt systems, as well as the geometric complexity of the building.
Abstract:
A point interpolation method with locally smoothed strain field (PIM-LS2) is developed for mechanics problems using a triangular background mesh. In the PIM-LS2, the strain within each sub-cell of a nodal domain is assumed to be the average strain over the adjacent sub-cells of the neighboring element sharing the same field node. We prove theoretically that the energy norm of the smoothed strain field in PIM-LS2 is equivalent to that of the compatible strain field, and then prove that the solution of PIM-LS2 converges to the exact solution of the original strong form. Furthermore, the softening effects of PIM-LS2 on the system, and the effects of the number of sub-cells participating in the smoothing operation on the convergence of PIM-LS2, are investigated. Intensive numerical studies verify the convergence, softening effects and bound properties of PIM-LS2, and show that very “tight” lower and upper bound solutions can be obtained using PIM-LS2.
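One plausible reading of the smoothing rule described above, with notation assumed purely for illustration: the smoothed strain in a sub-cell is an area-weighted average of the compatible strains over the adjacent sub-cells sharing the same field node.

```latex
% Assumed notation (not taken from the paper):
%   \mathcal{S}_k : adjacent sub-cells sharing the field node of sub-cell k
%   A_j           : area of sub-cell j
%   \boldsymbol{\varepsilon}_j : compatible strain in sub-cell j
\bar{\boldsymbol{\varepsilon}}_k
  = \frac{\sum_{j \in \mathcal{S}_k} A_j\, \boldsymbol{\varepsilon}_j}
         {\sum_{j \in \mathcal{S}_k} A_j}
```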
Abstract:
Differential distortion, comprising axial shortening and consequent rotation, in concrete buildings is caused by the time-dependent effects of shrinkage, creep and elastic deformation. Reinforcement content, variable concrete modulus, the volume-to-surface-area ratio of elements and environmental conditions influence these distortions, and their detrimental effects escalate with increasing height and geometric complexity of the structure and non-vertical load paths. Differential distortion has a significant impact on building envelopes, building services, secondary systems and the lifetime serviceability and performance of a building. Existing methods for quantifying these effects are unable to capture the complexity of such time-dependent effects. This paper develops a numerical procedure that can accurately quantify the differential axial shortening that contributes significantly to total distortion in concrete buildings by taking into consideration (i) the construction sequence and (ii) time-varying values of the Young's modulus of reinforced concrete, together with creep and shrinkage. Finite element techniques are used with time history analysis to simulate the response to staged construction. This procedure is discussed herein and illustrated through an example.
Abstract:
Background Some dialysis patients fail to comply with their fluid restriction causing problems due to volume overload. These patients sometimes blame excessive thirst. There has been little work in this area and no work documenting polydipsia among peritoneal dialysis (PD) patients. Methods We measured motivation to drink and fluid consumption in 46 haemodialysis patients (HD), 39 PD patients and 42 healthy controls (HC) using a modified palmtop computer to collect visual analogue scores at hourly intervals. Results Mean thirst scores were markedly depressed on the dialysis day (day 1) for HD (P<0.0001). The profile for day 2 was similar to that of HC. PD generated consistently higher scores than HD day 1 and HC (P = 0.01 vs. HC and P<0.0001 vs HD day 1). Reported mean daily water consumption was similar for HD and PD with both significantly less than HC (P<0.001 for both). However, measured fluid losses were similar for PD and HC whilst HD were lower (P<0.001 for both) suggesting that the PD group may have underestimated their fluid intake. Conclusion Our results indicate that HD causes a protracted period of reduced thirst but that the population's thirst perception is similar to HC on the interdialytic day despite a reduced fluid intake. In contrast, the PD group recorded high thirst scores throughout the day and were apparently less compliant with their fluid restriction. This is potentially important because the volume status of PD patients influences their survival.
Abstract:
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false-match and false non-match likelihoods, using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan's Fourier-Mellin based hashing method, achieving at least a 1% EER improvement under noise, scaling and sharpening.
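A simplified sketch of the kind of pipeline described above, assuming scikit-image and NumPy: Radon projections, a phase-only bispectrum slice per projection (non-invertible), and a key-dependent permutation. The specific feature choices and the final binarisation step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from skimage.transform import radon

def radon_hos_hash(image, key, n_angles=90, n_bins=32):
    """Illustrative Radon + higher-order-spectra hash: project the image at
    several angles, take a diagonal bispectrum slice of each projection,
    keep only its phase, then apply a key-dependent permutation."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image.astype(float), theta=angles, circle=False)
    features = []
    for col in sinogram.T:                        # one projection per angle
        spectrum = np.fft.rfft(col - col.mean())
        half = len(spectrum) // 2
        # Diagonal bispectrum slice B(f) = X(f) X(f) X*(2f); keep phase only.
        bispec = spectrum[:half] * spectrum[:half] * np.conj(spectrum[:2 * half:2])
        features.append(np.angle(bispec[:n_bins]))
    feature_vec = np.concatenate(features)
    rng = np.random.default_rng(key)              # integer key -> permutation
    permuted = feature_vec[rng.permutation(feature_vec.size)]
    return (permuted > 0).astype(np.uint8)        # coarse binary hash
```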
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series variables alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (Dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR (10.1 and 10.28), while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-variable-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
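A minimal sketch of the evaluation setup described above, assuming scikit-learn: majority-class under-sampling followed by misclassification rate, Kappa and AUC for a single illustrative classifier. The dataset handling and the choice of logistic regression here are placeholders, not the thesis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

def undersample_majority(X, y, seed=0):
    """Balance classes by randomly discarding majority-class rows."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    keep = np.concatenate([minor, rng.choice(major, size=len(minor), replace=False)])
    return X[keep], y[keep]

def evaluate(X, y):
    """Report misclassification rate (MR), Kappa and AUC for a simple
    logistic-regression model on a held-out split of the balanced data."""
    Xb, yb = undersample_majority(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(Xb, yb, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred, prob = model.predict(X_te), model.predict_proba(X_te)[:, 1]
    return {"MR": float(np.mean(pred != y_te)),
            "Kappa": cohen_kappa_score(y_te, pred),
            "AUC": roc_auc_score(y_te, prob)}
```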
Abstract:
In this paper, both Distributed Generators (DGs) and capacitors are allocated and sized optimally to reduce line losses and improve reliability. The objective function is composed of the investment cost of DGs and capacitors along with loss and reliability costs, which are converted into genuine dollar terms. The bus voltages and line currents are considered as constraints which must be satisfied during the optimization procedure. Hybrid Particle Swarm Optimization, a heuristic-based technique, is used as the optimization method. The IEEE 69-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate that the lowest-cost plan is found by optimizing both DGs and capacitors in distribution networks.
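For illustration, a generic particle swarm optimisation loop over box-bounded DG/capacitor sizes is sketched below; the hybrid variant used in the paper, the cost model and the network constraints are not reproduced, and the `cost` callable is a placeholder for the dollar-valued objective.

```python
import numpy as np

def pso_minimise(cost, lower, upper, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic PSO over box-bounded decision variables (e.g. DG and capacitor
    sizes at candidate buses). `cost` maps a decision vector to total cost."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)           # enforce size limits
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()
```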
Abstract:
This research project examines the application of the Suzuki Actor Training Method (the Suzuki Method) within the work of Tadashi Suzuki's company in Japan, the Shizuoka Performing Arts Complex (SPAC), within the work of Brisbane theatre company Frank:Austral Asian Performance Ensemble (Frank:AAPE), and as related to the development of the theatre performance Surfacing. These three theatrical contexts have been studied from the viewpoint of a "participant-observer". The researcher has trained in the Suzuki Method with Frank:AAPE and SPAC, performed with Frank:AAPE, and was the solo performer and collaborative developer in the performance Surfacing (directed by Leah Mercer). Observations of these three groups are based on a phenomenological definition of the "integrated actor", an actor who is able to achieve a totality or unity between the body and the mind, and between the body and the voice, through a powerful sense of intention. The term "integrated actor" has been informed by the philosophy of Merleau-Ponty and his concept of the "lived body". Three main hypotheses are presented in this study: that the Suzuki Method focuses on actors learning through their body; that the Suzuki Method presents a holistic approach to the body and the voice; and that the Suzuki Method develops actors with a strong sense of intention. These three aspects of the Suzuki Method are explored in relation to the stylistic features of the work of SPAC, Frank:AAPE and the performance Surfacing.
Abstract:
This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. Essential processes considered in this research encompass efficient characterization of the evolutionary configuration of the vocal tract to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over the time trajectories of spectral parameters using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, hence it allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process. Speech excitation in this work is characterized based on a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivity of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
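A minimal sketch of event-based reconstruction of a spectral-parameter trajectory: the thesis interpolates between steady points using a new basis function and shaped event-approximating functions, whereas the sketch below uses plain linear interpolation purely to illustrate recovering frame-rate trajectories from sparse event parameters.

```python
import numpy as np

def reconstruct_from_events(event_frames, event_targets, n_frames):
    """Rebuild a spectral-parameter trajectory (e.g. one LSF vector per frame)
    from a sparse set of event locations and their target vectors, by
    interpolating between successive events. `event_frames` must be an
    increasing sequence of frame indices; `event_targets` has shape
    (n_events, dim)."""
    event_frames = np.asarray(event_frames, dtype=float)
    event_targets = np.asarray(event_targets, dtype=float)
    frames = np.arange(n_frames, dtype=float)
    dim = event_targets.shape[1]
    traj = np.empty((n_frames, dim))
    for d in range(dim):
        # Linear interpolation stands in for the thesis's event functions.
        traj[:, d] = np.interp(frames, event_frames, event_targets[:, d])
    return traj
```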
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion, which is significant since radiometric distortion is a problem that commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint. The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
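For reference, the rank and census transforms discussed above can be written compactly as below; border pixels are handled by wrap-around here purely for brevity, and a census-based matcher would then compare windows by the Hamming distance of the bit strings.

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: each pixel becomes the count of neighbours in the
    window whose intensity is below the centre pixel."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += (shifted < img).astype(np.int32)
    return out

def census_transform(img, win=5):
    """Census transform: each pixel becomes a bit string recording, for every
    neighbour in the window, whether it is below the centre pixel (a 5x5
    window gives 24 bits, which fits in uint64)."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out
```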