288 results for swd: Image segmentation
Abstract:
In the avian model of myopia, retinal image degradation quickly leads to ocular enlargement. We now give evidence that regionally specific changes in ocular size are correlated both with biomechanical indices of scleral remodeling (e.g. hydration capacity) and with biochemical changes in proteinase activities. The latter include a 72 kDa matrix metalloproteinase (putatively MMP-2), other gelatin-binding MMPs, an acid-pH MMP and a serine protease. Specifically, we have found that increases in scleral hydration capacity parallel increases in collagen-degrading activities. Gelatin zymography reveals that eyes with 7 days of retinal image degradation have elevated levels (1.4-fold) of gelatinolytic activities at 72 and 67 kDa M(r) in equatorial and posterior pole regions of the sclera, while after 14 days of treatment the increases are no longer apparent. Lower M(r) zymographic activities at 50, 46 and 37 kDa are collectively increased in eyes treated for both 7 and 14 days (1.4- and 2.4-fold respectively) in the equator and posterior pole areas of enlarging eyes. Western blot analyses of scleral extracts with an antibody to human MMP-2 reveal immunoreactive bands at 65, 30 and 25 kDa. Zymograms incubated under slightly acidic conditions reveal that, in enlarging eyes, MMP activities at 25 and 28 kDa are increased in the scleral equator and posterior pole (1.6- and 4.5-fold respectively). A TIMP-like protein is also identified in sclera and cornea by Western blot analysis. Finally, retinal image degradation also increases (~2.6-fold) the activity of a 23.5 kDa serine proteinase in limbus, equator and posterior pole sclera that is inhibited by aprotinin and soybean trypsin inhibitor.
Taken together, these results indicate that eye growth induced by retinal-image degradation involves increases in the activities of multiple scleral proteinases that could modify the biomechanical properties of scleral structural components and contribute to tissue remodeling and growth.
Abstract:
In this paper we describe the approaches adopted to generate the runs submitted to ImageCLEFPhoto 2009, with the aim of promoting document diversity in the rankings. Four of our runs are text-based approaches that employ textual statistics extracted from the captions of images: MMR [1] as a state-of-the-art method for result diversification, two approaches that combine relevance information and clustering techniques, and an instantiation of the Quantum Probability Ranking Principle. The fifth run exploits visual features of the provided images to re-rank the initial results by means of Factor Analysis. The results reveal that our methods based only on text captions consistently improve the performance of the respective baselines, while the approach that combines visual features with textual statistics shows smaller improvements.
Abstract:
State and local governments frequently look to flagship cultural projects to improve the city image and catalyze tourism but, in the process, often overlook their potential to foster local arts development. To better understand this role, the article examines if and how cultural institutions in Los Angeles and San Francisco attract and support arts-related activity. The analysis reveals that cultural flagships have mixed success in generating arts-based development and that their ability may be improved through attention to the local context, facility and institutional characteristics, and the approach of the sponsoring agencies. Such knowledge is useful for planners to enhance their revitalization efforts, particularly as the economic development potential of arts organizations and artists has become more apparent.
Abstract:
Dealing with digital medical images is raising many new security problems with legal and ethical complexities for local archiving and distant medical services. These include image retention and fraud, distrust and invasion of privacy. This project was a significant step forward in developing a complete framework for systematically designing, analyzing, and applying digital watermarking, with a particular focus on medical image security. A formal generic watermarking model, three new attack models, and an efficient watermarking technique for medical images were developed. These outcomes contribute to standardizing future research in formal modeling and complete security and computational analysis of watermarking schemes.
Abstract:
While formal definitions and security proofs are well established in some fields, such as cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal and ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with proper consideration of watermarking properties and adversary actions in different image applications, the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
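The model itself is stated abstractly in the paper. As a deliberately naive illustration of the embed/extract component functions and the role of the key, the sketch below hides watermark bits in the least-significant bits of key-selected pixel positions. The `embed` and `extract` helpers and the key-seeded position choice are illustrative assumptions, not the scheme formalized here.

```python
import random

def embed(pixels, bits, key):
    """Hide watermark bits in the LSBs of key-selected pixel positions."""
    positions = random.Random(key).sample(range(len(pixels)), len(bits))
    marked = list(pixels)
    for pos, bit in zip(positions, bits):
        marked[pos] = (marked[pos] & 0xFE) | bit  # overwrite only the LSB
    return marked

def extract(pixels, n_bits, key):
    """Recover the bits; only the key holder knows the positions."""
    positions = random.Random(key).sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]
```

With the same key, extraction inverts embedding; without it, an adversary must guess the positions, which is exactly the kind of winning scenario a formal attack model makes precise.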
Abstract:
In increasingly competitive labour markets, attracting and retaining talent has become a prime concern of organisations. Employers need to understand the range of factors that influence career decision making and the role of employer branding in attracting human capital that best fits and contributes to the strategic aims of an organisation. This chapter identifies the changing factors that attract people to certain employment and industries and discusses the importance of aligning employer branding with employee branding to create a strong, genuine and lasting employer brand. Whilst organisations have long used marketing and branding practices to engender loyalty in customers, they are increasingly expanding this activity to differentiate organisations and make them attractive from an employee perspective. This chapter discusses employer branding and industry image as two important components of attraction strategies and describes ways companies can maximise their brand awareness in the employment market to both current and future employees.
Abstract:
The solutions proposed in this thesis contribute to improving gait recognition performance in practical scenarios, further enabling the adoption of gait recognition in real-world security and forensic applications that require identifying humans at a distance. Pioneering work has been conducted on frontal gait recognition using depth images, allowing gait to be integrated with biometric walkthrough portals. The effects of challenging conditions for gait, including clothing, carrying goods, and viewpoint, have been explored. Enhanced approaches are proposed for the segmentation, feature extraction, feature optimisation and classification elements, and state-of-the-art recognition performance has been achieved. A frontal depth gait database has been developed and made available to the research community for further investigation. Solutions are explored in the 2D and 3D domains using multiple image sources, and both domain-specific and domain-independent gait features are proposed.
Abstract:
This paper outlines the approach taken by the Speech, Audio, Image and Video Technologies laboratory and the Applied Data Mining Research Group (SAIVT-ADMRG) in the 2014 MediaEval Social Event Detection (SED) task. We participated in the event-based clustering subtask (subtask 1) and focused on investigating the incorporation of image features as another source of data to aid clustering. In particular, we developed a descriptor based on super-pixel segmentation that allows a low-dimensional feature incorporating both colour and texture information to be extracted and used within the popular bag-of-visual-words (BoVW) approach.
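The descriptor is not specified in this abstract, but the general shape of a low-dimensional colour-plus-texture region feature fed into a BoVW histogram can be sketched as below. A regular grid stands in for super-pixel segmentation, and the mean/standard-deviation texture proxy, the helper names, and the codebook are all assumptions, not the paper's descriptor.

```python
import numpy as np

def region_features(image, grid=4):
    """One feature vector per region: per-channel mean colour plus
    per-channel standard deviation as a crude texture proxy.
    (Real super-pixels, e.g. SLIC, would replace the regular grid.)"""
    h, w, _ = image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = image[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            mean = patch.reshape(-1, 3).mean(axis=0)  # colour
            std = patch.reshape(-1, 3).std(axis=0)    # texture proxy
            feats.append(np.concatenate([mean, std]))
    return np.array(feats)

def bovw_histogram(features, codebook):
    """Assign each region feature to its nearest codeword and count."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook))
```

The resulting histogram is the fixed-length image representation that a clustering or retrieval stage consumes.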
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users for targeted surveys and various operational and strategic planning improvements. However, the existing market segmentation studies in the literature have generally been based on passenger surveys, which have various limitations. Smart card (SC) data from an automated fare collection system facilitate the understanding of the multiday travel patterns of transit passengers and can be used to segment them into identifiable types with similar behaviors and needs. This paper proposes a comprehensive methodology for passenger segmentation using SC data alone. After reconstructing travel itineraries from SC transactions, the paper adopts the density-based spatial clustering of applications with noise (DBSCAN) algorithm to mine the travel pattern of each SC user. An a priori market segmentation approach then segments transit passengers into four identifiable types. The proposed methodology helps transit operators understand their passengers and provide them with tailored information and services.
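A minimal sketch of the pattern-mining step, assuming each user's boarding records are reduced to (hour-of-day, stop-x, stop-y) tuples; the toy records and the `eps`/`min_samples` settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical boarding records for one smart-card user:
# (day, hour-of-day, stop_x, stop_y). Regular commute trips form
# dense clusters; one-off trips get the noise label -1.
trips = np.array([
    [0, 8.0, 1.0, 1.0], [1, 8.1, 1.0, 1.1], [2, 7.9, 1.1, 1.0],   # AM commute
    [0, 17.5, 5.0, 5.0], [1, 17.6, 5.1, 5.0], [2, 17.4, 5.0, 5.1], # PM commute
    [3, 12.0, 9.0, 2.0],                                           # irregular trip
])
# Cluster on (hour, stop_x, stop_y) to find recurring travel patterns.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(trips[:, 1:])
```

The number and density of a user's clusters (versus noise points) is the kind of signature an a priori segmentation can then map to passenger types such as regular commuters or occasional riders.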
Abstract:
Most existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high-dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real-world applications can have millions of pixels, so scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. In a pre-processing step, we fit a binding function that models the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
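A toy illustration of the precomputation idea, with a cheap Gaussian simulator standing in for the expensive hidden Potts model; the linear mean function, grid, and sample sizes are assumptions made purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stat(theta, n=2000):
    # Cheap Gaussian stand-in for the expensive Potts summary statistic.
    return rng.normal(loc=3.0 * theta, scale=1.0, size=n)

# Pre-processing step: fit the binding function theta -> (mean, sd)
# once, on a coarse grid of parameter values.
grid = np.linspace(0.1, 2.0, 20)
mu_grid = np.array([simulate_stat(t).mean() for t in grid])
sd_grid = np.array([simulate_stat(t).std() for t in grid])

def synthetic_loglik(theta, observed_stat):
    # Inference-time evaluation interpolates the precomputed binding
    # function instead of running any new simulations.
    mu = np.interp(theta, grid, mu_grid)
    sd = np.interp(theta, grid, sd_grid)
    return -0.5 * ((observed_stat - mu) / sd) ** 2 - np.log(sd)

observed = 3.0  # under the toy model this corresponds to theta near 1.0
thetas = np.linspace(0.1, 2.0, 191)
theta_hat = thetas[np.argmax([synthetic_loglik(t, observed) for t in thetas])]
```

All simulation cost is paid up front on the grid; every likelihood evaluation afterwards is an interpolation, which is what makes the reported 71-hours-to-7-minutes speed-up possible in spirit.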
Abstract:
Texture enhancement is an important component of image processing that finds extensive application in science and engineering. The quality of medical images, quantified using the imaging texture, plays a significant role in the routine diagnosis performed by medical practitioners. Most image texture enhancement is performed using classical integer-order differential mask operators. Recently, first-order fractional differential operators were used to enhance images. Experimentation with these methods led to the conclusion that fractional differential operators not only maintain the low-frequency contour features in the smooth areas of an image, but also nonlinearly enhance the edges and textures corresponding to high-frequency image components. However, whilst these methods perform well in particular cases, they are not routinely useful across all applications. To this end, we apply the second-order Riesz fractional differential operator to improve upon existing approaches to texture enhancement. Compared with the classical integer-order differential mask operators and other first-order fractional differential operators, our new algorithms provide higher signal-to-noise values and superior image quality.
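The Riesz operator itself is not given in this abstract. As a sketch of how a fractional differential mask is built and applied, the following uses the related one-dimensional Grünwald–Letnikov coefficients; this is a simplified stand-in for illustration, not the authors' second-order Riesz operator.

```python
import numpy as np

def gl_mask(alpha, size=5):
    """Grünwald–Letnikov fractional-difference coefficients:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k.
    For alpha = 1 this reduces to the classical [1, -1] difference."""
    w = [1.0]
    for k in range(1, size):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return np.array(w)

def fractional_enhance(row, alpha=0.5):
    """Apply the fractional-difference mask along one image row:
    out[i] = sum_k w_k * row[i - k], with edge replication."""
    w = gl_mask(alpha)
    padded = np.concatenate([np.full(len(w) - 1, row[0]), row])
    return np.array([padded[i:i + len(w)][::-1] @ w
                     for i in range(len(row))])
```

A non-integer `alpha` gives a response that is strong at edges yet decays slowly into textured regions, which is the qualitative behaviour the abstract attributes to fractional operators.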
Abstract:
We describe an investigation into how Massey University’s Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University’s pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study in which we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder’s native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
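A hedged sketch of such a classifier comparison, using synthetic blobs in place of the Classifynder feature set; the data, dimensionality, and hyperparameters are all illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs

# Stand-in for the pollen image features: three well-separated
# "species" in a 10-dimensional feature space.
X, y = make_blobs(n_samples=300, centers=3, n_features=10,
                  cluster_std=1.0, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=50, random_state=0),
}
# 5-fold cross-validated accuracy for each candidate model.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
```

Comparing accuracies this way also surfaces the accuracy/interpretability trade-off the abstract mentions: a decision tree is inspectable, while an SVM or forest typically is not.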
Abstract:
The detection of line-like features in images finds many applications in microanalysis. Actin fibers, microtubules, neurites, pili, DNA, and other biological structures all appear as tenuous curved lines in microscopy images. A reliable tracing method that preserves the integrity and details of these structures is particularly important for quantitative analyses. We have developed a new image transform, the "Coalescing Shortest Path Image Transform", with very encouraging properties. Our scheme efficiently combines information from an extensive collection of shortest paths in the image to delineate even very weak linear features. © Copyright Microscopy Society of America 2011.
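The transform itself is not described in this abstract, but its underlying ingredient can be sketched: a shortest path through a cost image in which bright pixels are cheap will follow a line-like feature between two endpoints. The grid costs, 4-neighbourhood, and endpoint choice below are assumptions for illustration.

```python
import heapq

def trace_line(image, start, end):
    """Dijkstra shortest path where bright pixels are cheap, so the
    minimum-cost path from `start` to `end` follows a bright curve."""
    rows, cols = len(image), len(image[0])
    # Cost of stepping onto a pixel is the inverse of its intensity.
    cost = [[1.0 / (1e-6 + image[r][c]) for c in range(cols)]
            for r in range(rows)]
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from end to start to recover the traced line.
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

The transform described in the paper goes further by coalescing many such paths; a single path as above already traces one faint curve.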
Abstract:
The application of robotics to protein crystallization trials has resulted in the production of millions of images. Manual inspection of these images to find crystals and other interesting outcomes is a major rate-limiting step. As a result, there has been intense activity in developing automated algorithms to analyse these images. For most systems described in the literature, the very first step is to delineate each droplet. Here, a novel approach that achieves a success rate of over 97% with subsecond processing times is presented. This will form the seed of a new high-throughput system to scrutinize massive crystallization campaigns automatically. © 2010 International Union of Crystallography. Printed in Singapore. All rights reserved.