916 results for Morphing Alteration Detection Image Warping


Relevance: 20.00%

Publisher:

Abstract:

Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately rigid points on the human body (i.e., the torso and hips). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely sensitive to noise. In this paper, we propose an alternative solution to weak-perspective SFM based on a convex relaxation of graph rigidity. We demonstrate the success of our algorithm on both synthetic and real-world data, allowing for much improved solutions to markerless MoCap problems on human bodies. Finally, we propose an approach to solving the two-fold ambiguity over bone direction using a k-nearest-neighbour kernel density estimator.
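The failure mode described above is easiest to see against the canonical baseline. Below is a minimal sketch (not the paper's convex graph-rigidity relaxation, which is not reproduced here) of a Tomasi-Kanade-style rank-3 factorisation of a centred weak-perspective measurement matrix; all names are illustrative. When the tracked points are near-coplanar, the third singular value approaches the noise floor and the recovered motion and shape become unstable.

```python
import numpy as np

def factorise_rigid(W):
    """Rank-3 factorisation of a 2F x P centred measurement matrix W.

    W stacks the row-centred 2D tracks of P points over F frames. For rigid,
    noise-free data W has rank 3; near-coplanar points shrink the third
    singular value, making the factorisation extremely noise-sensitive.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # 2F x 3 motion (camera) matrix
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x P shape matrix
    return M, S

# Synthetic example: 4 frames (8 image rows), 6 rigid points.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 6))
M, S = factorise_rigid(W - W.mean(axis=1, keepdims=True))
```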

Relevance: 20.00%

Publisher:

Abstract:

The true stress-strain curve of railhead steel is required to investigate the behaviour of the railhead under wheel loading through elasto-plastic finite element (FE) analysis. To reduce the rate of wear, the railhead material is hardened through annealing and quenching. Australian standard rail sections are not fully hardened and hence exhibit a non-uniform distribution of material properties; using average properties in FE modelling can potentially introduce error into the predicted plastic strains. Coupons obtained at varying depths of the railhead were therefore tested under axial tension, and the strains were measured using strain gauges as well as an image analysis technique known as particle image velocimetry (PIV). The head-hardened steel exhibits three distinct zones of yield strength. Expressed as ratios of the average yield strength given in the standard (σ_yr = 780 MPa), with depth given as a fraction of the head-hardened zone along the axis of symmetry, the zones are: (1.17σ_yr, 0-20%), (1.06σ_yr, 20-80%) and (0.71σ_yr, >80%). The stress-strain curves exhibit a limited plastic zone, with fracture occurring at strains less than 0.1.
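The three zones reported above translate directly into a piecewise-constant material model for FE input. The sketch below encodes them; the function name is our own, and treating the zone boundaries as sharp steps at exactly 20% and 80% is an assumption.

```python
SIGMA_YR = 780.0  # MPa, average yield strength from the Australian standard

def railhead_yield_strength(depth_ratio):
    """Return yield strength (MPa) at a normalised depth in the hardened zone.

    depth_ratio is depth along the axis of symmetry divided by the depth of
    the head-hardened zone (0.0 = running surface, 1.0 = end of the zone).
    """
    if not 0.0 <= depth_ratio <= 1.0:
        raise ValueError("depth_ratio must lie in [0, 1]")
    if depth_ratio <= 0.20:
        return 1.17 * SIGMA_YR   # ~913 MPa near the running surface
    if depth_ratio <= 0.80:
        return 1.06 * SIGMA_YR   # ~827 MPa in the middle zone
    return 0.71 * SIGMA_YR       # ~554 MPa in the deepest zone

print(railhead_yield_strength(0.5))  # 826.8
```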

Relevance: 20.00%

Publisher:

Abstract:

Virtual environments can provide, through digital games and online social interfaces, extremely engaging forms of interactive entertainment. Because of their capability to display and manipulate information in natural and intuitive ways, such environments have found extensive application in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content falls entirely on developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used this mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed using the techniques we have developed. We expect these techniques to find application in large 3D game and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
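As a concrete (and much simpler) illustration of quantifying visual correctness than the connectionist approaches developed in the thesis, a rendered frame can be scored against a trusted reference frame, for example with the structural similarity index; the threshold below is an arbitrary assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity

def frame_is_consistent(rendered, reference, threshold=0.95):
    """Score two greyscale frames (float arrays in [0, 1]) against each other."""
    score = structural_similarity(rendered, reference, data_range=1.0)
    return score, score >= threshold  # low scores suggest a rendering bug

reference = np.random.rand(64, 64)
rendered = np.clip(reference + np.random.normal(0, 0.01, reference.shape), 0, 1)
score, ok = frame_is_consistent(rendered, reference)
```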

Relevance: 20.00%

Publisher:

Abstract:

We describe the population pharmacokinetics of an acepromazine (ACP) metabolite, 2-(1-hydroxyethyl)promazine (HEPS), in horses, for the estimation of likely detection times in plasma and urine. Acepromazine (30 mg) was administered to 12 horses, and blood and urine samples were taken at frequent intervals for chemical analysis. A Bayesian hierarchical model was fitted to describe the concentration-time data and cumulative urine amounts for HEPS. The metabolite HEPS was modelled separately from the parent ACP, as the half-life of the parent was considerably shorter than that of the metabolite. The clearance ($Cl/F_{PM}$) and volume of distribution ($V/F_{PM}$), scaled by the fraction of parent converted to metabolite, were estimated as 769 L/h and 6874 L, respectively. For a typical horse in the study, after receiving 30 mg of ACP, the upper limit of the detection time was 35 hours in plasma and 100 hours in urine, assuming an arbitrary limit of detection of 1 $\mu$g/L and a small ($\approx 0.01$) probability of detection. The derived model allowed the probability of detection to be estimated at the population level. This analysis was conducted on data collected from only 12 horses, but we assume these are representative of the wider population.
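For orientation, the reported population estimates imply an elimination half-life for the metabolite (this derived figure is our arithmetic, not a value reported in the abstract):

```latex
\[
  t_{1/2} = \frac{\ln 2 \cdot V/F_{PM}}{Cl/F_{PM}}
          = \frac{0.693 \times 6874\ \mathrm{L}}{769\ \mathrm{L/h}}
          \approx 6.2\ \mathrm{h},
\]
```

a slow elimination consistent with the long detection times quoted above.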

Relevance: 20.00%

Publisher:

Abstract:

Many user studies of Web information searching have found significant effects of task type on search strategies. However, little attention has been given to Web image-searching strategies, especially query reformulation, even though this is a crucial part of Web image searching. In this study, we investigated the effects of topic domain and task type on users' image-searching behavior and query reformulation strategies. Significant differences in task specificity and initial concepts were identified among the topic domains. Task type was also found to influence participants' result-reviewing behavior and query reformulation strategies.

Relevance: 20.00%

Publisher:

Abstract:

This paper illustrates the damage identification and condition assessment of a three-storey bookshelf structure using a new frequency response function (FRF) based damage index and artificial neural networks (ANNs). A major obstacle to using measured frequency response function data is the large number of input variables this presents to the ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, are used to extract damage information such as damage locations and severities from measured FRFs. Simple neural network models, trained by back-propagation (BP), are developed to associate the FRFs with the damaged or undamaged locations and the severity of the damage to the structure. Finally, the effectiveness of the proposed method is illustrated and validated using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based artificial neural network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
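A hedged sketch of the pipeline described above, using scikit-learn as a stand-in (the paper's network and data are not reproduced; the shapes, labels and hyperparameters here are assumptions): measured FRF vectors are reduced with PCA and fed to a small back-propagation network.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
frf_features = rng.standard_normal((120, 2048))  # 120 measured FRF vectors
damage_labels = rng.integers(0, 4, size=120)     # e.g. undamaged + 3 locations

model = make_pipeline(
    PCA(n_components=10),                        # data reduction step
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(frf_features, damage_labels)
predicted = model.predict(frf_features[:5])      # damage location per FRF
```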

Relevance: 20.00%

Publisher:

Abstract:

Survivin is a member of the family of proteins known as 'inhibitors of apoptosis proteins'. Survivin has a role in cellular decisions concerning division and survival and is frequently expressed in neoplastic cells. The aim of the present study was to investigate immunohistochemically the expression of survivin in normal canine tissues and in canine lymphoma. A representative range of fetal and adult normal tissues, as well as biopsy samples from dogs with lymphoma, were assembled in tissue arrays. The lymphomas were classified according to the revised Kiel and the Revised European American Lymphoma - World Health Organization (REAL-WHO) schemes. Polyclonal and monoclonal antisera cross-reactive with canine survivin identified cytoplasmic expression of the molecule in a broad range of normal canine cells. The same reagents demonstrated cytoplasmic labelling of more than 5% of cells in all 83 lymphoma samples tested with the polyclonal antiserum and in 67 of 82 (82%) samples tested with the monoclonal antiserum. Survivin was expressed by a wide range of canine lymphoma subtypes, but the expression of this molecule in normal canine tissues must be considered if novel therapies targeting survivin are applied to the management of canine lymphoma.

Relevance: 20.00%

Publisher:

Abstract:

In most digital image watermarking schemes, it has become common practice to address security in terms of robustness, which is essentially a norm from cryptography. Conflating the two when developing and evaluating a watermarking scheme may severely affect performance and ultimately render the scheme unusable. This paper provides an explicit theoretical analysis of watermarking security and robustness, clarifying the exact status of the problem in the literature. With the necessary hypotheses and analyses from a technical perspective, we demonstrate how the problem fundamentally arises. Finally, some recommendations are made for the complete assessment of watermarking security and robustness.

Relevance: 20.00%

Publisher:

Abstract:

Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims, consumer impact and subsequent loss of market share; the cost of leaky packages in Australian food industries is estimated at close to AUD $35 million per year. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. This project develops non-destructive testing technology using digital imaging and sensing combined with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line. Flexible plastic packages are widely used and are the least expensive form of retaining the quality of the product; they can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the contents are not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. Many current non-destructive testing (NDT) methods for checking the seals of flexible packages are best suited to random sampling and laboratory purposes. The three most commonly used are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their long processing times and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed to arrive at a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity, with consistently good results. The electrical testing also provided solid results, enabling the researcher to move the project forward with a degree of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled an informed decision to be made. Each major mechanical and electrical component is detailed through the research and design process, which methodically works through the various major functions from both mechanical and electrical perspectives. Alternative ideas for the major components, although sometimes impractical in this application, show that the engineering and functionality options were explored exhaustively. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard industry-available components, manufactured and distributed locally. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines, and in areas of the non-food processing industry.

Relevance: 20.00%

Publisher:

Abstract:

Deep Raman spectroscopy has been utilized for the standoff detection of concealed chemical threat agents from a distance of 15 meters under real-life background illumination conditions. Using combined time- and space-resolved measurements, various explosive precursors hidden in opaque plastic containers were identified non-invasively. Our results confirm that combined time- and space-resolved Raman spectroscopy gives higher selectivity for the sub-layer over the surface layer, as well as enhanced rejection of fluorescence from the container surface, when compared with standoff spatially offset Raman spectroscopy. Raman spectra with minimal interference from the packaging material and a good signal-to-noise ratio were acquired within 5 seconds of measurement time. A new combined time- and space-resolved Raman spectrometer has been designed with nanosecond laser excitation and gated detection, making it lower in cost and complexity than picosecond-based laboratory systems.

Relevance: 20.00%

Publisher:

Abstract:

Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high-resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling is proposed, for rural and urban scenarios respectively, and an integrated system is developed for road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low-resolution images, both of which can be further employed to facilitate road information generation in high-resolution images. A histogram thresholding method is then used to classify road details in high-resolution images, with color space transformation used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Pavement markings are then obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, and the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high-resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of the aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed for feature classification and road detection in the aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is used as a mask to remove the effects of shadows and trees. In addition, the normalized digital surface model (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using datasets of Bundaberg which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information was carried out for both datasets. The experiments and evaluation results on the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, with false alarm rates for road surfaces and lane markings below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
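Two steps of the rural pipeline, pavement-marking enhancement and binarisation, can be sketched as below with OpenCV; the filter parameters are assumptions, not the values used in the thesis.

```python
import cv2
import numpy as np

def enhance_and_threshold(grey):
    """grey: uint8 single-channel aerial image patch."""
    # Gabor filter tuned to elongated, bright marking-like structures.
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5)
    filtered = cv2.filter2D(grey, cv2.CV_8U, kernel)
    # Otsu's method picks the binarisation threshold automatically.
    _, markings = cv2.threshold(filtered, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return markings

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)
mask = enhance_and_threshold(patch)
```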

Relevance: 20.00%

Publisher:

Abstract:

We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). PLDA has been shown to be robust against pose and expression in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using PLDA: the closest-pair approach and the holistic-sets approach. To better model face appearances in video, we also propose a heteroscedastic version of PLDA which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest-pair approach achieves the best performance.
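A minimal sketch of the closest-pair strategy described above: the similarity between two image sets is the best similarity over all cross-set pairs. The PLDA likelihood-ratio scorer is abstracted behind `pair_similarity`; the cosine similarity used here is only a stand-in.

```python
import numpy as np

def pair_similarity(x, y):
    """Stand-in scorer; a PLDA likelihood ratio would be used in practice."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def closest_pair_similarity(set_a, set_b):
    """set_a, set_b: lists of feature vectors extracted from video frames."""
    return max(pair_similarity(a, b) for a in set_a for b in set_b)

rng = np.random.default_rng(2)
gallery = [rng.standard_normal(64) for _ in range(5)]
probe = [rng.standard_normal(64) for _ in range(8)]
score = closest_pair_similarity(gallery, probe)
```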

Relevance: 20.00%

Publisher:

Abstract:

As organizations reach higher levels of business process management maturity, they often find themselves maintaining very large process model repositories, representing valuable knowledge about their operations. A common practice within these repositories is to create new process models, or extend existing ones, by copying and merging fragments from other models. We contend that if these duplicate fragments, a.k.a. exact clones, can be identified and factored out as shared subprocesses, the repository's maintainability can be greatly improved. With this purpose in mind, we propose an indexing structure to support the fast detection of clones in process model repositories. Moreover, we show how this index can be used to efficiently query a process model repository for fragments. This index, called RPSDAG, is based on a novel combination of a method for process model decomposition (namely the Refined Process Structure Tree) with established graph canonization and string matching techniques. We evaluated the RPSDAG on large process model repositories from industrial practice. The experiments show that a significant number of non-trivial clones can be efficiently found in such repositories, and that fragment queries can be handled efficiently.
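In spirit, the index maps each fragment to a canonical code so that exact clones collide. The sketch below is a greatly simplified stand-in (a sorted edge list rather than proper graph canonization, and not the actual RPSDAG):

```python
from collections import defaultdict

def canonical_code(fragment_edges):
    """fragment_edges: iterable of (source_label, target_label) pairs."""
    return "|".join(sorted(f"{s}->{t}" for s, t in fragment_edges))

def build_clone_index(fragments):
    """Group fragment ids by canonical code; codes shared by >1 id are clones."""
    index = defaultdict(list)
    for fragment_id, edges in fragments.items():
        index[canonical_code(edges)].append(fragment_id)
    return {code: ids for code, ids in index.items() if len(ids) > 1}

repo = {
    "invoice_v1": [("check", "approve"), ("approve", "pay")],
    "invoice_v2": [("approve", "pay"), ("check", "approve")],  # same fragment
    "hiring":     [("screen", "interview")],
}
clones = build_clone_index(repo)  # one exact-clone group: invoice_v1/v2
```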

Relevance: 20.00%

Publisher:

Abstract:

Evidence exists that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication may stem from the fact that the repository describes variants of the same processes and/or from copy/pasting activity throughout the lifetime of the repository. Previous work has put forward techniques for identifying duplicate fragments (clones) that can be refactored into shared subprocesses. However, these techniques are limited to finding exact clones. This paper analyzes the problem of approximate clone detection and puts forward two techniques for detecting clusters of approximate clones. Experiments show that the proposed techniques are able to accurately retrieve clusters of approximate clones that originate from copy/pasting followed by independent modifications to the copied fragments.
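One generic way to realise "clusters of approximate clones" (not necessarily either of the paper's two techniques) is to cluster fragments over a precomputed pairwise distance matrix, e.g. a graph-edit distance, whose computation is the hard part and is omitted here:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Distances between 4 fragments: 0 and 1 are near-duplicates, 2 and 3 are not.
distance = np.array([
    [0.0,  0.1,  0.9,  0.8],
    [0.1,  0.0,  0.85, 0.9],
    [0.9,  0.85, 0.0,  0.7],
    [0.8,  0.9,  0.7,  0.0],
])
labels = DBSCAN(eps=0.2, min_samples=2,
                metric="precomputed").fit_predict(distance)
# labels -> [0, 0, -1, -1]: one approximate-clone cluster, two outliers
```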

Relevance: 20.00%

Publisher:

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient relies on 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above-60-year age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exist from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for the 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for the segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be quantified; this is not reported in the literature. As MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One artefact observed is the step artefact, believed to arise from random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for the segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated by these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora segmented with the multilevel threshold method. A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T and 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner, and was corrected using an iterative closest point (ICP) based alignment method. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, a statistically significant difference in accuracy between the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm against the 0.18 mm of the CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of the femora and tibiae, potentially reducing the inaccuracies conferred by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is therefore a potential alternative to the current gold standard, CT imaging.
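The two slice-wise segmentation routes compared in the study can be sketched as follows with scikit-image; the parameters and the choice of the brightest multi-Otsu class as bone are illustrative assumptions (on MRI, cortical bone is typically dark, so the selected class would differ).

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters

def segment_slice(slice_2d):
    """slice_2d: 2D float array, one CT/MRI slice; returns two bone masks."""
    # Route 1: multilevel thresholding (3 classes, e.g. background/soft/bone).
    thresholds = filters.threshold_multiotsu(slice_2d, classes=3)
    by_threshold = slice_2d > thresholds[-1]   # brightest class taken as bone
    # Route 2: Canny edge detection, then fill the closed bone contour.
    edges = feature.canny(slice_2d, sigma=2.0)
    by_canny = ndimage.binary_fill_holes(edges)
    return by_threshold, by_canny

slice_2d = np.random.rand(128, 128)
mask_threshold, mask_canny = segment_slice(slice_2d)
```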