948 results for SEGMENTED WORMLIKE MICELLES


Relevance:

10.00%

Publisher:

Abstract:

This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV) or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras to attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV's motion. Finally, we present a visual tracking system based on fuzzy controllers working on both a UAV and a camera pan-and-tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, including comparison of the estimated data with the inertial values measured onboard the helicopter platform and validation of the tracking schemes on real flights.
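As a rough illustration of the appearance-based visual compass mentioned above (not the authors' implementation), the sketch below estimates relative heading as the circular column shift that best aligns an unwrapped panoramic image with a reference image; the array layout and function names are assumptions.

```python
import numpy as np

def column_profile(panorama: np.ndarray) -> np.ndarray:
    """Collapse an unwrapped panoramic grayscale image to a 1-D intensity profile over azimuth."""
    return panorama.mean(axis=0)

def heading_change(reference: np.ndarray, current: np.ndarray) -> float:
    """Relative yaw (degrees) as the circular shift minimising the sum of squared differences
    between the current and reference column profiles."""
    ref = column_profile(reference)
    cur = column_profile(current)
    n = ref.size
    errors = [np.sum((np.roll(cur, s) - ref) ** 2) for s in range(n)]
    best_shift = int(np.argmin(errors))
    deg = best_shift * 360.0 / n          # columns -> degrees of yaw
    return deg - 360.0 if deg > 180.0 else deg   # wrap to (-180, 180]
```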

Relevance:

10.00%

Publisher:

Abstract:

In this study, the influence of pH on the interfacial energy distributed over a phospholipid-bilayer surface model, and the effect of hydrophobicity on the coefficient of friction (f), were investigated using microelectrophoresis. An important clinical implication of a deficiency in hydrophobicity is the loss of phospholipids that is readily observed in osteoarthritic joints. This paper establishes the influence of pH on interfacial energy and relates an increase in f to a possible decrease in the hydrophobicity of the articular surface.

Relevance:

10.00%

Publisher:

Abstract:

Silhouettes are common features used by many applications in computer vision. For many of these algorithms to perform optimally, accurately segmenting the objects of interest from the background to extract the silhouettes is essential. Motion segmentation is a popular technique for segmenting moving objects from the background; however, such algorithms can be prone to poor segmentation, particularly in noisy or low-contrast conditions. In this paper, the work of [3], which combines motion detection with graph cuts, is extended into two novel implementations that aim to allow greater uncertainty in the output of the motion segmentation, providing a less restricted input to the graph cut algorithm. The proposed algorithms are evaluated on a portion of the ETISEO dataset using hand-segmented ground truth data, and an improvement in performance over the motion segmentation alone and the baseline system of [3] is shown.
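A minimal sketch of the general idea of seeding a graph cut with an uncertain motion mask, using OpenCV background subtraction and GrabCut as stand-ins for the paper's motion detector and graph cut formulation; the thresholds and structuring-element size are illustrative assumptions.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def silhouette(frame: np.ndarray) -> np.ndarray:
    """Motion-seeded graph cut on a BGR uint8 frame: confident motion pixels become hard
    foreground, the eroded core stays fixed, and everything else is left uncertain."""
    motion = subtractor.apply(frame)                        # 0 = background, 127 = shadow, 255 = motion
    mask = np.full(motion.shape, cv2.GC_PR_BGD, np.uint8)   # uncertain background by default
    mask[motion == 255] = cv2.GC_PR_FGD                     # probable foreground from motion
    core = cv2.erode(motion, np.ones((9, 9), np.uint8))
    mask[core == 255] = cv2.GC_FGD                          # confident foreground core
    if not np.any((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)):
        return np.zeros(motion.shape, np.uint8)             # no motion: nothing to cut
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```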

Relevance:

10.00%

Publisher:

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data.

Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms.

The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method.

For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance.

Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
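A compact sketch of this kind of pipeline using scikit-learn in place of the WEKA tools named above (Cfs, J48, SMO); the moving-average/outlier pre-processing, summary features, under-sampling and evaluation shown here are simplified assumptions, not the thesis's exact methods, and the outcome is assumed to be coded 0/1 (CVD present = 1).

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, cohen_kappa_score

def summarise_series(series: pd.Series, window: int = 5) -> dict:
    """Clip outliers, smooth with a moving average, then return per-case summary features."""
    clipped = series.clip(series.quantile(0.01), series.quantile(0.99))
    smoothed = clipped.rolling(window, min_periods=1).mean()
    return {"mean": smoothed.mean(), "sd": smoothed.std(),
            "min": smoothed.min(), "max": smoothed.max()}

def evaluate(X: pd.DataFrame, y: pd.Series) -> dict:
    """Under-sample the majority class, then report misclassification rate, kappa and AUC."""
    counts = y.value_counts()
    minority = counts.idxmin()
    keep = y[y == minority].index.append(
        y[y != minority].sample(counts.min(), random_state=0).index)
    Xb, yb = X.loc[keep], y.loc[keep]
    model = DecisionTreeClassifier(max_depth=4, random_state=0)   # stand-in for J48
    pred = cross_val_predict(model, Xb, yb, cv=10)
    prob = cross_val_predict(model, Xb, yb, cv=10, method="predict_proba")[:, 1]
    return {"MR": float(np.mean(pred != yb)),
            "kappa": cohen_kappa_score(yb, pred),
            "AUC": roc_auc_score(yb, prob)}
```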

Relevance:

10.00%

Publisher:

Abstract:

Ultrathin films of a poly(styrene)-block-poly(2-vinylpyridine) diblock copolymer (PS-b-P2VP) and a poly(styrene)-block-poly(4-vinylpyridine) diblock copolymer (PS-b-P4VP) were used to form surface-induced nanopatterns (SINPAT) on mica. Surface-interaction-controlled microphase separation led to the formation of chemically heterogeneous surface nanopatterns on dry ultrathin films. Two distinct nanopatterned surfaces, namely wormlike and dotlike patterns, were used to investigate the influence of topography in the nanometer range on cell adhesion, proliferation, and migration. Atomic force microscopy was used to confirm that SINPAT was stable under cell culture conditions. Fibroblasts and mesenchymal progenitor cells were cultured on the nanopatterned surfaces. Phase contrast and confocal laser microscopy showed that fibroblasts and mesenchymal progenitor cells preferred the densely spaced wormlike patterns. Atomic force microscopy showed that the cells remodelled the extracellular matrix differently as they migrated over the two distinctly different nanopatterns.

Relevance:

10.00%

Publisher:

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004).

Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when the sentence structures and word inflections of different languages are contrasted (Slobin, 1973); and 3) particular language teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977).

Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating 'this' or 'that'. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as 'plate' or 'table', but do name things that attract the focal attention of the early visual system, i.e. small objects that move, such as 'dog' or 'ball'. This view leaves open the question of how blind children learn words for visible objects, and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').

Relevance:

10.00%

Publisher:

Abstract:

This paper reports on an empirical comparison of seven machine learning algorithms for texture classification, with application to vegetation management in power line corridors. Aiming at classifying tree species in power line corridors, an object-based method is employed. Individual tree crowns are segmented as the basic classification units, and three classic texture features are extracted as the input to the classification algorithms. Several widely used performance metrics are used to evaluate the classification algorithms. The experimental results demonstrate that classification performance depends on the performance metric, the characteristics of the datasets and the features used.
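A minimal sketch of such a comparison, assuming segmented tree-crown patches and species labels are already available; GLCM statistics stand in for the three classic texture features, and the classifier set and cross-validation setup are illustrative, not the paper's exact protocol.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Grey-level co-occurrence statistics for one uint8 grayscale tree-crown patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

def compare(patches, labels):
    """Cross-validated accuracy for several classifiers on the same texture features."""
    X = np.vstack([glcm_features(p) for p in patches])
    y = np.asarray(labels)
    classifiers = {"SVM": SVC(), "RandomForest": RandomForestClassifier(),
                   "kNN": KNeighborsClassifier()}
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in classifiers.items()}
```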

Relevance:

10.00%

Publisher:

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? how can the semantic unit be linked to high-level image knowledge? how can contextual information be stored and utilized for image annotation?

In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below.

1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework.

2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts.

3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts.

4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.

To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
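A highly simplified sketch of phases 3 and 4: an SVM proposes mid-level labels for each salient object, and a toy co-occurrence table (standing in for the ontology's contextual knowledge and probabilistic graph) scores candidate scene types. All concept names, structures and scores here are illustrative assumptions, not the thesis's ontologies.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical contextual knowledge: how strongly each object concept
# co-occurs with each scene type (illustrative values only).
SCENE_CONTEXT = {
    "beach":     {"sky": 0.9, "sand": 0.9, "water": 0.8, "building": 0.1},
    "cityscape": {"sky": 0.8, "building": 0.9, "road": 0.7, "sand": 0.05},
}

def annotate_objects(svm: SVC, object_features: np.ndarray) -> list[str]:
    """Phase 3 sketch: label each salient object with its most probable mid-level concept.
    Assumes the SVM was trained with probability=True."""
    probs = svm.predict_proba(object_features)
    return [svm.classes_[i] for i in probs.argmax(axis=1)]

def scene_type(object_labels: list[str]) -> str:
    """Phase 4 sketch: pick the scene whose contextual co-occurrences best explain the labels."""
    scores = {scene: sum(ctx.get(lbl, 0.01) for lbl in object_labels)
              for scene, ctx in SCENE_CONTEXT.items()}
    return max(scores, key=scores.get)
```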

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a method for calculating the in-bucket payload volume on a dragline for the purpose of estimating the material’s bulk density in real time. Knowledge of the bulk density can provide instant feedback to mine planning and scheduling to improve blasting, and in turn provide a more uniform bulk density across the excavation site. Furthermore, costs and emissions in dragline operation, maintenance and downstream material processing can be reduced. The main challenge is to determine an accurate position and orientation of the bucket under the constraint of real-time performance. The proposed solution uses a range-bearing and tilt sensor to locate and scan the bucket between the lift and dump stages of the dragline cycle. Various scanning strategies are investigated for their benefits in this real-time application. The bucket is segmented from the scene using cluster analysis, while the pose of the bucket is calculated using the iterative closest point (ICP) algorithm. Payload points are segmented from the bucket by a fixed-distance neighbour clustering method to preserve boundary points and exclude low-density clusters introduced by overhead chains and the spreader bar. A height grid is then used to represent the payload, from which the volume can be calculated by summing over the grid cells. We show volumes calculated on a scaled system with an accuracy of greater than 95 per cent.
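A small sketch of the final height-grid step described above: once payload points are expressed in the bucket frame, each grid cell keeps its highest point and the volume is the sum of cell height times cell area. The grid resolution, frame convention and function name are assumptions.

```python
import numpy as np

def payload_volume(points: np.ndarray, cell: float = 0.05) -> float:
    """Estimate payload volume (m^3) from Nx3 payload points in the bucket frame.

    Each grid cell keeps the highest point above the bucket floor (z = 0);
    the volume is the sum of cell height * cell area.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    heights = np.zeros((ix.max() + 1, iy.max() + 1))
    np.maximum.at(heights, (ix, iy), np.clip(z, 0.0, None))   # max height per cell
    return float(heights.sum() * cell * cell)
```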

Relevance:

10.00%

Publisher:

Abstract:

The self-assembling behavior and microscopic structure of zinc oxide nanoparticle Langmuir-Blodgett monolayer films were investigated for the case of zinc oxide nanoparticles coated with a hydrophobic layer of dodecanethiol. Evolution of nanoparticle film structure as a function of surface pressure (π) at the air-water interface was monitored in situ using Brewster’s angle microscopy, where it was determined that π=16 mN/m produced near-defect-free monolayer films. Transmission electron micrographs of drop-cast and Langmuir-Schaefer deposited films of the dodecanethiol-coated zinc oxide nanoparticles revealed that the nanoparticle preparation method yielded a microscopic structure that consisted of one-dimensional rodlike assemblies of nanoparticles with typical dimensions of 25 x 400 nm, encased in the organic dodecanethiol layer. These nanoparticle-containing rodlike micelles were aligned into ordered arrangements of parallel rods using the Langmuir-Blodgett technique.

Relevance:

10.00%

Publisher:

Abstract:

Recent research on bicycle helmets and concerns about how public bicycle hire schemes will function in the context of compulsory helmet wearing laws have drawn media attention. This monograph presents the results of research commissioned by the Queensland Department of Transport and Main Roads to review the national and international literature regarding the health outcomes of cycling and bicycle helmets and examine crash and hospital data. It also includes critical examinations of the methodology used by Voukelatos and Rissel (2010), and estimates the likely effects of possible segmented approaches to bicycle helmet wearing legislation. The research concludes that current bicycle helmet wearing rates are halving the number of head injuries experienced by Queensland cyclists. Helmet wearing legislation discouraged people from cycling when it was first introduced but there is little evidence that it continues to do so. Cycling has significant health benefits and should be encouraged in ways that reduce the risk of the most serious injuries. Infrastructure and speed management approaches to improving the safety of cycling should be undertaken as part of a Safe System approach, but protection of the individual by simple and cost-effective methods such as bicycle helmets should also be part of an overall package of measures.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a method for measuring the in-bucket payload volume on a dragline excavator for the purpose of estimating the material's bulk density in real time. Knowledge of the payload's bulk density can provide feedback to mine planning and scheduling to improve blasting and therefore provide a more uniform bulk density across the excavation site. This allows a single optimal bucket size to be used for maximum overburden removal per dig, and in turn reduces costs and emissions in dragline operation and maintenance. The proposed solution uses a range-bearing laser to locate and scan full buckets between the lift and dump stages of the dragline cycle. The bucket is segmented from the scene using cluster analysis, and the pose of the bucket is calculated using the Iterative Closest Point (ICP) algorithm. Payload points are identified using a known model and subsequently converted into a height grid for volume estimation. Results from both scaled and full-scale implementations show that this method can achieve an accuracy of above 95%.
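The cluster-analysis step can be sketched with an off-the-shelf density-based clustering such as DBSCAN, taking the largest cluster in the scan as the bucket; the parameters below are illustrative and this is not necessarily the authors' clustering method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_bucket(scan_points: np.ndarray, eps: float = 0.2, min_samples: int = 20) -> np.ndarray:
    """Return the points of the largest Euclidean cluster in an Nx3 laser scan,
    taken here as the bucket; other clusters and DBSCAN noise points are discarded."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(scan_points)
    valid = labels[labels >= 0]                 # ignore the noise label (-1)
    if valid.size == 0:
        return np.empty((0, 3))
    biggest = np.bincount(valid).argmax()
    return scan_points[labels == biggest]
```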

Relevance:

10.00%

Publisher:

Abstract:

In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task due to the complex nature of the images. The brain has a particularly complicated structure, and its precise segmentation is very important for detecting tumors, edema and necrotic tissue in order to prescribe appropriate therapy. Magnetic Resonance Imaging (MRI) is an important diagnostic imaging technique for the early detection of abnormal changes in tissues and organs. It possesses good contrast resolution for different tissues and is thus preferred over Computerized Tomography for brain studies; therefore, the majority of research in medical image segmentation concerns MR images. At the core of this research, a set of MR images has been segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. Subsequently, the resultant images from the different segmentation techniques were compared with each other and analyzed by professional radiologists to find the most accurate segmentation technique. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance Image.
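A minimal sketch of Otsu thresholding on a single grayscale MR slice using scikit-image; the small-object removal step is an added assumption rather than part of the reported comparison.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def otsu_tumor_mask(slice_image: np.ndarray, min_size: int = 64) -> np.ndarray:
    """Binary mask from Otsu's global threshold on one grayscale MR slice,
    with small spurious regions removed."""
    t = threshold_otsu(slice_image)
    mask = slice_image > t
    return remove_small_objects(mask, min_size=min_size)
```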

Relevance:

10.00%

Publisher:

Abstract:

The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral-texture feature descriptor is proposed by incorporating spectral vegetation indices into statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used, and a support vector machine is employed as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images, and different feature vectors are extracted to represent each tree crown. The experimental results show that the proposed spectral moment features outperform, or at least compare with, the state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic (ROC) space analysis further demonstrates the strength of the proposed feature descriptors.
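One way to read the proposed spectral-texture descriptor is as statistical moments computed over a spectral vegetation index image rather than over raw intensities. The sketch below uses NDVI and the first four moments per tree crown, followed by an SVM benchmark classifier; the exact indices, moments and band availability assumed here may differ from the article.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised difference vegetation index per pixel."""
    return (nir - red) / (nir + red + 1e-6)

def spectral_moment_features(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """First four statistical moments of the NDVI values inside one segmented tree crown."""
    v = ndvi(red.astype(float), nir.astype(float)).ravel()
    return np.array([v.mean(), v.std(), stats.skew(v), stats.kurtosis(v)])

# Usage sketch: one feature vector per segmented crown, then the SVM benchmark classifier.
# X = np.vstack([spectral_moment_features(r, n) for r, n in crown_bands])
# clf = SVC(kernel="rbf").fit(X, species_labels)
```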

Relevance:

10.00%

Publisher:

Abstract:

Stem cells have attracted tremendous interest in recent times due to their promise of innovative new treatments for a great range of currently debilitating diseases. This is due to their potential ability to regenerate and repair damaged tissue, and hence restore lost body function, in a manner beyond the body's usual healing process. Bone marrow-derived mesenchymal stem cells, or bone marrow stromal cells, are one type of adult stem cell that is of particular interest. Since they are derived from a living human adult donor, they do not have the ethical issues associated with the use of human embryonic stem cells. They can also be taken from a patient or other donors with relative ease and then grown readily in the laboratory for clinical application.

Despite the attractive properties of bone marrow stromal cells, there is presently no quick and easy way to determine the quality of a sample of such cells. Presently, a sample must be grown for weeks and subjected to various time-consuming assays, under the direction of an expert cell biologist, to determine whether it will be useful. Hence there is a great need for innovative new ways to assess the quality of cell cultures for research and potential clinical application. The research presented in this thesis investigates the use of computerised image processing and pattern recognition techniques to provide a quicker and simpler method for the quality assessment of bone marrow stromal cell cultures. In particular, the aim of this work is to find out whether it is possible, through the use of image processing and pattern recognition techniques, to predict the growth potential of a culture of human bone marrow stromal cells at early stages, before it is readily apparent to a human observer.

With the above aim in mind, a computerised system was developed to classify the quality of bone marrow stromal cell cultures based on phase contrast microscopy images. Our system was trained and tested on mixed images of both healthy and unhealthy bone marrow stromal cell samples taken from three different patients. This system, when presented with 44 previously unseen bone marrow stromal cell culture images, outperformed human experts in the ability to correctly classify healthy and unhealthy cultures. The system correctly classified the health status of an image 88% of the time, compared to an average of 72% of the time for human experts. Extensive training and testing of the system on a set of 139 normal-sized images and 567 smaller image tiles showed an average performance of 86% and 85% correct classifications, respectively.

The contributions of this thesis include demonstrating the applicability and potential of computerised image processing and pattern recognition techniques for the quality assessment of bone marrow stromal cell cultures. As part of this system, an image normalisation method has been suggested and a new segmentation algorithm has been developed for locating cell regions of irregularly shaped cells in phase contrast images. Importantly, we have validated the efficacy of both the normalisation and segmentation methods by demonstrating that both quantitatively improve the classification performance of subsequent pattern recognition algorithms in discriminating between cell cultures of differing health status. We have shown that the quality of a cell culture of bone marrow stromal cells may be assessed without the need either to segment individual cells or to use time-lapse imaging. Finally, we have proposed a set of features that, when extracted from the cell regions of segmented input images, can be used to train current state-of-the-art pattern recognition systems to predict the quality of bone marrow stromal cell cultures earlier and more consistently than human experts.
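A bare-bones sketch of such a pipeline for one phase contrast image: contrast normalisation, a crude cell-region segmentation (Otsu on local standard deviation, not the thesis's algorithm), simple region features, and an off-the-shelf classifier. All choices, names and thresholds here are illustrative assumptions.

```python
import numpy as np
from skimage import exposure, filters
from scipy.ndimage import generic_filter
from sklearn.ensemble import RandomForestClassifier

def cell_region_mask(image: np.ndarray) -> np.ndarray:
    """Rough cell-region segmentation: normalise contrast, then threshold local variability."""
    scaled = exposure.rescale_intensity(image.astype(float), out_range=(0.0, 1.0))
    norm = exposure.equalize_adapthist(scaled)            # contrast normalisation
    local_sd = generic_filter(norm, np.std, size=7)       # cell regions are locally textured
    return local_sd > filters.threshold_otsu(local_sd)

def culture_features(image: np.ndarray) -> np.ndarray:
    """Simple whole-image features describing the segmented cell regions."""
    mask = cell_region_mask(image)
    cells = image[mask]
    return np.array([mask.mean(), cells.mean(), cells.std()]) if cells.size else np.zeros(3)

# Usage sketch: train on labelled culture images (healthy = 1, unhealthy = 0).
# X = np.vstack([culture_features(img) for img in images])
# clf = RandomForestClassifier().fit(X, health_labels)
```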