11 results for Dimensional Accuracy

in Deakin Research Online - Australia


Relevance: 60.00%

Abstract:

A vision-based approach for computing accurate 3D models of objects is presented. Industrial visual inspection systems capable of accurate 3D depth estimation generally rely on extra hardware such as laser scanners or light-pattern projectors. These tools improve the accuracy of depth estimation but also make the vision system costly and cumbersome. In the proposed algorithm, the depth and dimensional accuracy of the produced 3D depth model depend on an existing reference model instead of information from extra hardware. The proposed algorithm is a simple, cost-effective, software-based approach that achieves accurate 3D depth estimation with minimal hardware involvement. The matching process uses the well-known coarse-to-fine strategy: matching points are calculated at the coarsest level and then refined up to the finest level. Vector coefficients of the wavelet transform modulus are used as matching features, where the wavelet transform modulus maxima define shift-invariant high-level features whose phase points to the normal of the feature surface. The technique addresses the estimation of optimal corresponding points and the corresponding 2D disparity maps, leading to an accurate depth-perception model.


Relevance: 60.00%

Abstract:

Titanium alloys are in great demand in the aerospace and biomedical industries. Most titanium products are either cast or sintered to the required shape and then finish-machined to obtain a surface texture that meets the design requirements. Ti-6Al-4V is often referred to as the workhorse of the titanium alloys because of its heavy use in the aerospace industry. This paper investigates and seeks to improve the machining performance of Ti-6Al-4V. Thin-wall machining is an advanced machining technique, used especially in machining turbine blades, which can be performed both in a conventional way and with a special technique known as trochoidal milling. The experimental design consists of trials combining cutting parameters: cutting speed (vc) of 90 and 120 m/min; feed per tooth (fz) of 0.25 and 0.35 mm/tooth; step-over (ae) of 0.3 and 0.2; at a constant depth of cut (ap) of 20 mm, using coolant. A preliminary assessment of the machinability of Ti-6Al-4V during thin-wall machining using trochoidal milling is presented, and a correlation is established between cutting force, surface texture and dimensional accuracy.
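The trial plan above is a full-factorial combination of the stated cutting parameters. As a small illustrative sketch (variable names are ours, and the step-over unit is not given in the abstract), the eight trials can be enumerated with `itertools.product`:

```python
from itertools import product

cutting_speed_vc = [90, 120]       # m/min
feed_per_tooth_fz = [0.25, 0.35]   # mm/tooth
step_over_ae = [0.3, 0.2]          # unit not stated in the abstract
depth_of_cut_ap = 20.0             # mm, held constant; coolant used throughout

# 2 x 2 x 2 = 8 trochoidal-milling trials
trials = [
    {"vc": vc, "fz": fz, "ae": ae, "ap": depth_of_cut_ap}
    for vc, fz, ae in product(cutting_speed_vc, feed_per_tooth_fz, step_over_ae)
]
```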

Relevance: 60.00%

Abstract:

This article correlates laboratory-based understanding of the machining of titanium alloys with industry-based outputs and identifies possible solutions to improve the machining efficiency of the titanium alloy Ti-6Al-4V. The machining outputs are explained in terms of the chip-formation mechanism and the practical issues industries face during titanium machining. This study also analyses and links methods that effectively improve the machinability of titanium alloys. It is found that the deformation mechanism during machining of titanium alloys is complex and causes fundamental challenges such as sawtooth chips, high temperature, high stress on the cutting tool, high tool wear and undercut parts. These challenges are correlated and affect each other: sawtooth chips cause variation in cutting forces, which results in high cyclic stress on cutting tools, while the low thermal conductivity of titanium alloys causes high temperatures. Together these create a favourable environment for high tool wear. Thus, improvements in machining titanium alloy depend mainly on overcoming the complexities associated with the alloy's inherent properties. Vibration analysis kits, high-pressure coolant, cryogenic cooling, thermally enhanced machining, hybrid machining, and the use of highly conductive cutting tools and tool holders improve the machinability of titanium alloys.

Relevance: 30.00%

Abstract:

Microarray data provides quantitative information about the transcription profile of cells. Machine-learning methodology has increasingly attracted bioinformatics researchers for analyzing microarray datasets, and several machine-learning approaches are widely used to classify and mine biological datasets. However, many gene expression datasets have extremely high dimensionality, so traditional machine-learning methods cannot be applied effectively and efficiently. This paper proposes a robust algorithm to find rule groups for classifying gene expression datasets. Unlike most classification algorithms, which select dimensions (genes) heuristically to form rule groups that identify classes such as cancerous and normal tissues, our algorithm guarantees finding the best-k dimensions (genes), those most discriminative for separating samples in different classes, to form rule groups for the classification of expression datasets. Our experiments show that the rule groups obtained by our algorithm have higher accuracy than other classification approaches.
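As a rough, hypothetical stand-in for the best-k selection step (the paper's algorithm guarantees optimality; this sketch only shows the overall shape), genes can be ranked by a simple two-class separation score and the top k retained:

```python
from statistics import mean, stdev

def gene_scores(samples, labels):
    """Score each gene by |mean difference| / pooled spread between classes 0 and 1."""
    n_genes = len(samples[0])
    scores = []
    for g in range(n_genes):
        a = [s[g] for s, y in zip(samples, labels) if y == 0]
        b = [s[g] for s, y in zip(samples, labels) if y == 1]
        spread = (stdev(a) if len(a) > 1 else 0.0) + (stdev(b) if len(b) > 1 else 0.0)
        scores.append(abs(mean(a) - mean(b)) / (spread + 1e-9))  # avoid divide-by-zero
    return scores

def best_k_genes(samples, labels, k):
    """Indices of the k most discriminative genes under the score above."""
    scores = gene_scores(samples, labels)
    return sorted(range(len(scores)), key=lambda g: -scores[g])[:k]
```

A heuristic filter like this is exactly what the paper contrasts itself against: it is fast, but nothing guarantees the chosen genes are jointly the best k for forming classification rules.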

Relevance: 30.00%

Abstract:

Background: The existence of exons and introns has been known for thirty years. Despite this, there has been little formal research into the categorization of exons. The exon taxonomies used by researchers tend to be selected ad hoc or based on an information-poor de facto standard. Exons have been shown to have specific properties and functions based on, among other things, their location and order. These factors should play a role in naming, to increase specificity about which exon type(s) are in question.

Results: POEM (Protein Oriented Exon Monikers) is a new taxonomy focused on protein-proximal exons. It integrates three dimensions of information (Global Position, Regional Position and Region), so its exon categories are based on known statistical exon features. Applying POEM to two congruent untranslated-exon datasets yields the following statistical properties. Using the POEM taxonomy, previous wide-ranging estimates of initial 5' untranslated region exons are resolved: according to our datasets, 29–36% of genes have wholly untranslated first exons. Sequences containing untranslated exons are shown to have consistently up to six times more 5' untranslated exons than 3' untranslated exons. Finally, three exon patterns are determined which account for 70% of untranslated-exon genes.

Conclusion: We describe a thorough three-dimensional exon taxonomy called POEM, which is biologically and statistically relevant. No previous taxonomy provides such fine-grained information while still including all valid information dimensions. The use of POEM will improve the accuracy of genefinder comparisons and analysis by providing a common taxonomy, and its fine granularity will facilitate unambiguous communication.

Relevance: 30.00%

Abstract:

Microarray data provides quantitative information about the transcription profile of cells. Machine-learning methodology has increasingly attracted bioinformatics researchers for analysing microarray datasets, and several machine-learning approaches are widely used to classify and mine biological datasets. However, many gene expression datasets have extremely high dimensionality, so traditional machine-learning methods cannot be applied effectively and efficiently. This paper proposes a robust algorithm to find rule groups for classifying gene expression datasets. Unlike most classification algorithms, which select dimensions (genes) heuristically to form rule groups that identify classes such as cancerous and normal tissues, our algorithm guarantees finding the best-k dimensions (genes) to form rule groups for the classification of expression datasets. Our experiments show that the rule groups obtained by our algorithm have higher accuracy than other classification approaches.

Relevance: 30.00%

Abstract:

Two-dimensional Principal Component Analysis (2DPCA) is a robust method in face recognition. Much recent research shows that 2DPCA is more reliable than the well-known PCA method at recognising human faces. However, in many cases it tends to overfit the sample data. In this paper we propose a novel method named random subspace two-dimensional PCA (RS-2DPCA), which combines 2DPCA with the random subspace (RS) technique. RS-2DPCA inherits the advantages of both 2DPCA and the RS technique, so it can avoid the overfitting problem and achieve high recognition accuracy. Experimental results on three benchmark face datasets - the ORL database, the Yale face database and the extended Yale face database B - confirm our hypothesis that RS-2DPCA is superior to 2DPCA itself.
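The random subspace idea at the core of RS-2DPCA can be sketched generically: each base learner sees only a random subset of feature columns, and predictions are combined by majority vote. In this hypothetical illustration a nearest-centroid learner stands in for the 2DPCA projection and classifier; it is not the paper's method, only the ensemble shape.

```python
import random
from collections import Counter
from statistics import mean

def nearest_centroid(train, labels, cols):
    """Base learner: classify by nearest class centroid over the given columns."""
    cents = {}
    for y in set(labels):
        rows = [x for x, lab in zip(train, labels) if lab == y]
        cents[y] = [mean(r[c] for r in rows) for c in cols]
    def predict(x):
        sub = [x[c] for c in cols]
        return min(cents, key=lambda y: sum((a - b) ** 2 for a, b in zip(sub, cents[y])))
    return predict

def rs_ensemble(train, labels, n_learners=15, subspace=2, seed=0):
    """Random-subspace ensemble: each learner gets a random feature subset; majority vote."""
    rng = random.Random(seed)
    n_feat = len(train[0])
    learners = [nearest_centroid(train, labels, rng.sample(range(n_feat), subspace))
                for _ in range(n_learners)]
    def predict(x):
        return Counter(p(x) for p in learners).most_common(1)[0][0]
    return predict
```

Because each learner only ever sees part of the feature space, no single learner can memorise the full training patterns, which is how the random subspace technique counters overfitting.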

Relevance: 30.00%

Abstract:

Precision edge-feature extraction is a very important step in vision. Researchers mainly use step edges to model an edge at the subpixel level. In this paper we describe a new technique for two-dimensional edge-feature extraction to subpixel accuracy using a general edge model. Using six basic edge types to model edges, the edge parameters at the subpixel level are extracted by fitting a model to the image signal with a least-squared-error fitting technique.
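As a much-simplified, hedged illustration of subpixel edge localisation (the paper fits six parametric edge models by least-squared error; here a parabola is fitted to the three gradient samples around the peak, the exact least-squares fit through those points):

```python
def subpixel_edge(signal):
    """Subpixel position of the strongest step edge in a 1D intensity profile.

    Assumes the gradient peak is interior to the profile.
    """
    # Central-difference gradient; index k corresponds to sample k + 1.
    grad = [signal[i + 1] - signal[i - 1] for i in range(1, len(signal) - 1)]
    i = max(range(len(grad)), key=lambda k: abs(grad[k]))
    gm, g0, gp = abs(grad[i - 1]), abs(grad[i]), abs(grad[i + 1])
    denom = gm - 2 * g0 + gp
    # Vertex of the parabola through the three gradient samples.
    offset = 0.0 if denom == 0 else 0.5 * (gm - gp) / denom
    return (i + 1) + offset   # +1 compensates for the gradient's border crop
```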

Relevance: 30.00%

Abstract:

High-dimensional problem domains pose significant challenges for anomaly detection. The presence of irrelevant features can conceal the presence of anomalies. This problem, known as the 'curse of dimensionality', is an obstacle for many anomaly detection techniques. Building a robust anomaly detection model for use in high-dimensional spaces requires combining an unsupervised feature extractor with an anomaly detector. While one-class support vector machines are effective at producing decision surfaces from well-behaved feature vectors, they can be inefficient at modelling the variation in large, high-dimensional datasets. Architectures such as deep belief networks (DBNs) are a promising technique for learning robust features. We present a hybrid model in which an unsupervised DBN is trained to extract generic underlying features, and a one-class SVM is trained on the features learned by the DBN. Since a linear kernel can be substituted for nonlinear ones in our hybrid model without loss of accuracy, the model is scalable and computationally efficient. The experimental results show that our proposed model yields anomaly detection performance comparable to a deep autoencoder while reducing its training and testing time by a factor of 3 and 1000, respectively.
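The hybrid pipeline has a simple shape: an unsupervised feature extractor feeds a one-class detector trained only on normal data. In the hedged sketch below, a fixed random linear projection stands in for the trained DBN and a distance-to-centroid threshold stands in for the linear one-class SVM; neither is the paper's actual model, they only show how the two stages compose.

```python
import random
from statistics import mean

def make_extractor(n_in, n_out, seed=0):
    """Stand-in for the DBN: a fixed random linear projection to n_out features."""
    rng = random.Random(seed)
    w = [[rng.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
    def extract(x):
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return extract

def fit_detector(normal_data, extract, quantile=0.95):
    """Stand-in for the one-class SVM: threshold distance to the feature centroid."""
    feats = [extract(x) for x in normal_data]
    centroid = [mean(col) for col in zip(*feats)]
    def dist(f):
        return sum((a - b) ** 2 for a, b in zip(f, centroid)) ** 0.5
    radii = sorted(dist(f) for f in feats)
    threshold = radii[int(quantile * (len(radii) - 1))]
    return lambda x: dist(extract(x)) > threshold   # True means anomaly
```

The division of labour mirrors the paper's argument: the extractor absorbs the high-dimensional structure, so the detector only has to draw a simple (linear-kernel-style) boundary in feature space.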

Relevance: 30.00%

Abstract:

The effect of secondary (anticlastic) curvature and of the stress state on the measurement of material properties in a free bending test is studied in order to improve the accuracy of the test. Experiments and numerical analysis are conducted on a medium-strength 304L stainless steel and on high-strength dual-phase steels, DP780 and DP1000. The dependence of the secondary curvature on sample geometry is analysed, and correction factors are introduced to improve the accuracy of the calculated material properties when using two-dimensional plane-strain or uniaxial-stress assumptions. A free bending test procedure is proposed to characterize material behaviour close to yield, allowing quick and simple analysis of material properties for bending-dominated forming processes such as roll forming.

Relevance: 30.00%

Abstract:

We are currently witnessing an era in which interaction with computers is no longer limited to conventional methods (i.e. keyboard and mouse). Human-Computer Interaction (HCI), as a progressive field of research, has opened up alternatives to traditional interaction techniques. Embedded infrared (IR) sensors, accelerometers and RGBD cameras have become common inputs for devices to recognize gestures and body movements. These sensors are vision-based, and as a result the devices that incorporate them are reliant on the presence of light. Ultrasonic sensors, on the other hand, do not suffer this limitation, as they utilize properties of sound waves. These sensors, however, have mainly been used for distance detection rather than in HCI devices. This paper presents our approach to developing a multi-dimensional interaction input method and tool, Ultrasonic Gesture-based Interaction (UGI), that utilizes ultrasonic sensors. We demonstrate how these sensors can detect object movements and recognize gestures, present our approach to building the device, and demonstrate sample interactions with it. We have also conducted a user study to evaluate the tool and its distance and micro-gesture detection accuracy. This paper reports these results and outlines our future work in the area.
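The distance detection that UGI builds on is plain time-of-flight ranging: the sensor emits an ultrasonic ping and times the echo, and the one-way distance is the speed of sound times half the round-trip time. A minimal sketch (the constant and function name are ours, not from the paper):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_time_to_distance(echo_seconds):
    """Convert a round-trip ultrasonic echo time to a one-way distance in metres."""
    # Divide by 2 because the ping travels out to the object and back.
    return SPEED_OF_SOUND * echo_seconds / 2.0
```

Gesture recognition then reduces to tracking how this distance (from one or more sensors) changes over time, which is why no light source is needed.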