934 results for Stereo matching


Relevance: 10.00%

Abstract:

In this paper we propose a method to generate a large-scale, accurate, dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which easily scales to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resultant 3D semantic model is improved by taking moving objects in the scene into consideration. We demonstrate our method on the publicly available KITTI dataset and evaluate the performance against manually generated ground truth.
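Under the usual rectified pinhole-stereo assumptions, each disparity in such a depth map converts to metric depth as Z = fB/d. A minimal sketch, with hypothetical calibration numbers rather than the paper's:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity under a rectified pinhole model:
    Z = f * B / d  (f: focal length in pixels, B: baseline in metres,
    d: disparity in pixels). Non-positive disparity means the match
    is at infinity or invalid."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

# Illustrative, KITTI-like numbers (hypothetical, not the paper's calibration):
# f = 720 px, B = 0.54 m, d = 64 px  ->  Z = 6.075 m
depth_m = disparity_to_depth(64.0, focal_px=720.0, baseline_m=0.54)
```

Applying this per pixel to a dense disparity image yields the dense depth maps that are fused into the global reconstruction.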


Matched case–control research designs can be useful because matching can increase power due to reduced variability between subjects. However, inappropriate statistical analysis of matched data could result in a change in the strength of association between the dependent and independent variables or a change in the significance of the findings. We sought to ascertain whether matched case–control studies published in the nursing literature utilized appropriate statistical analyses. Of 41 articles identified that met the inclusion criteria, 31 (76%) used an inappropriate statistical test for comparing data derived from case subjects and their matched controls. In response to this finding, we developed an algorithm to support decision-making regarding statistical tests for matched case–control studies.
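As an example of what such a decision algorithm must get right: for a binary outcome in matched pairs, the appropriate test is McNemar's, which uses only the discordant pairs, whereas an ordinary chi-square on the pooled table ignores the matching. A dependency-free sketch (illustrative, not the authors' algorithm):

```python
import math

def mcnemar(b, c, correction=True):
    """McNemar's test for paired binary data.
    b = pairs where only the case is exposed,
    c = pairs where only the control is exposed.
    Concordant pairs carry no information about the within-pair
    association and are ignored -- the key difference from an
    unpaired chi-square test."""
    num = (abs(b - c) - (1 if correction else 0)) ** 2
    chi2 = num / (b + c)
    # One degree of freedom; p-value via the chi-square survival
    # function, written with erfc for a dependency-free sketch:
    # P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical study: 25 discordant pairs one way, 10 the other
chi2, p = mcnemar(b=25, c=10)
```

Production analyses would normally use a statistics library (e.g. statsmodels) rather than hand-rolled formulas, but the logic is the same.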


In this paper we use the algorithm SeqSLAM to address the question, how little and what quality of visual information is needed to localize along a familiar route? We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road by using noisy, long-exposure blurred images, and in the other, two single pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
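The core of sequence-based matching can be sketched as summing frame differences along a trajectory through the query-reference difference matrix. The snippet below is a heavily simplified illustration (constant velocity, no local contrast enhancement, toy data), not the authors' SeqSLAM implementation:

```python
def sequence_score(D, q0, r0, n):
    """Sum of image differences along a constant-velocity diagonal
    through the difference matrix D, where D[q][r] is the difference
    between query image q and reference image r (lower = more similar)."""
    return sum(D[q0 + i][r0 + i] for i in range(n))

def best_reference(D, q0, n):
    """Reference start index with the lowest aligned sequence score.
    Single-image matching is the n = 1 special case."""
    return min(range(len(D[0]) - n + 1),
               key=lambda r0: sequence_score(D, q0, r0, n))

# Toy difference matrix: 3 query frames vs 5 reference frames.
# The true match starts at reference index 2 (small values on that diagonal).
D = [[5, 4, 1, 6, 7],
     [6, 5, 7, 1, 6],
     [7, 6, 5, 7, 1]]
match = best_reference(D, q0=0, n=3)   # -> 2
```

Longer sequences make the diagonal sum increasingly robust to per-frame noise, which is consistent with the finding that performance improves with matching sequence length even for severely degraded images.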


Organizations increasingly make use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigation. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online social media data in order to reconstruct digital social signals; matching the signals with consumers' life events; and configuring individualized goods and service offerings tailored to the needs of individual customers. We further contribute by tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.


This thesis improves the process of recommending people to people in social networks using new clustering algorithms and ranking methods. The proposed system and methods are evaluated on data collected from a real-life social network. The empirical analysis of this research confirms that the proposed system and methods achieved improvements in the accuracy and efficiency of matching and recommending people, and overcame some of the problems from which social matching systems usually suffer.


At NTCIR-10 we participated in the cross-lingual link discovery (CrossLink-2) task. In this paper we describe our systems for discovering cross-lingual links between the Chinese, Japanese, and Korean (CJK) Wikipedia and the English Wikipedia. The evaluation results show that our implementation of the cross-lingual linking method achieved promising results.


The count-min sketch is a useful data structure for recording and estimating the frequency of string occurrences, such as passwords, in sub-linear space with high accuracy. However, it cannot be used to draw conclusions on groups of strings that are similar, for example close in Hamming distance. This paper introduces a variant of the count-min sketch which allows for estimating counts within a specified Hamming distance of the queried string. This variant can be used to prevent users from choosing popular passwords, like the original sketch, but it also allows for a more efficient method of analysing password statistics.
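For context, a minimal count-min sketch and a brute-force Hamming-ball query look like this. The brute-force query, which enumerates every neighbour and sums the individual estimates, is the naive baseline such a variant improves on, not the paper's construction:

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: d rows of w counters, with a
    per-row salted hash. Estimates never undercount; width and
    depth trade space for accuracy."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        h = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, key, count=1):
        for r in range(self.depth):
            self.rows[r][self._index(key, r)] += count

    def estimate(self, key):
        return min(self.rows[r][self._index(key, r)]
                   for r in range(self.depth))

def estimate_within_hamming_1(sketch, key, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Naive Hamming-ball query: enumerate all strings within Hamming
    distance 1 of `key` and sum their estimates. Cost grows with
    |alphabet| * len(key), which is what a smarter variant avoids."""
    total = sketch.estimate(key)
    for i, ch in enumerate(key):
        for c in alphabet:
            if c != ch:
                total += sketch.estimate(key[:i] + c + key[i + 1:])
    return total

cms = CountMinSketch()
cms.add("password", 5)
cms.add("passwird", 2)   # within Hamming distance 1 of "password"
near_popularity = estimate_within_hamming_1(cms, "password")
```

Because count-min estimates are one-sided (never below the true count), the summed estimate is at least the true count of the Hamming ball, here 7.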


Aim: To describe the recruitment, ophthalmic examination methods and distribution of ocular biometry of participants in the Norfolk Island Eye Study, who were individuals descended from the English Bounty mutineers and their Polynesian wives. Methods: All 1,275 permanent residents of Norfolk Island aged over 15 years were invited to participate, including 602 individuals involved in a 2001 cardiovascular disease study. Participants completed a detailed questionnaire and underwent a comprehensive eye assessment including stereo disc and retinal photography, optical coherence tomography and conjunctival autofluorescence assessment. Additionally, blood or saliva was taken for DNA testing. Results: 781 participants aged over 15 years were seen (54% female), comprising 61% of the permanent Island population. 343 people (43.9%) could trace their family history to the Pitcairn Islanders (Norfolk Island Pitcairn Pedigree). Mean anterior chamber depth was 3.32 mm, mean axial length (AL) was 23.5 mm, and mean central corneal thickness was 546 microns. There were no statistically significant differences in these characteristics between persons with and without Pitcairn Island ancestry. Mean intra-ocular pressure was lower in people with Pitcairn Island ancestry than in those without (15.89 mmHg vs. 16.49 mmHg, P = .007). The mean keratometry value was also lower in people with Pitcairn Island ancestry (43.22 vs. 43.52, P = .007). The corneas were flatter in people of Pitcairn ancestry, but there was no corresponding difference in AL or refraction. Conclusion: Our study population is highly representative of the permanent population of Norfolk Island. Ocular biometry was similar to that of other white populations. Heritability estimates, linkage analysis and genome-wide studies will further elucidate the genetic determinants of chronic ocular diseases in this genetic isolate.


A method for prediction of the radiation pattern of N strongly coupled antennas with mismatched sources is presented. The method facilitates fast and accurate design of compact arrays. The prediction is based on the measured N-port S parameters of the coupled antennas and the N active element patterns measured in a 50 Ω environment. By introducing equivalent power sources, the radiation pattern with excitation by sources with arbitrary impedances and various decoupling and matching networks (DMN) can be accurately predicted without the need for additional measurements. Two experiments were carried out for verification: pattern prediction for parasitic antennas with different loads and for antennas with DMN. The difference between measured and predicted patterns was within 1 to 2 dB.
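The superposition underlying such a prediction can be sketched as a complex-weighted sum of the active element patterns, E(θ) = Σₙ aₙ·Eₙ(θ). The element patterns and excitations below are hypothetical stand-ins, and the step of deriving the incident waves aₙ from the measured S-parameters and source impedances is not shown:

```python
import cmath
import math

def total_pattern(active_patterns, incident_waves, theta):
    """Far field by superposition of active element patterns:
    E(theta) = sum_n a_n * E_n(theta).
    active_patterns: callables theta -> complex field value;
    incident_waves:  complex a_n per port (in practice computed from
    the measured S-parameters and the source impedances)."""
    return sum(a * En(theta)
               for a, En in zip(incident_waves, active_patterns))

# Two hypothetical element patterns of a closely spaced pair,
# modelled only by their phase-centre offsets (illustrative data):
e1 = lambda th: cmath.exp(1j * (math.pi / 4) * math.sin(th))
e2 = lambda th: cmath.exp(-1j * (math.pi / 4) * math.sin(th))

# Equal in-phase excitation -> field adds coherently at broadside (theta = 0)
field = total_pattern([e1, e2], [1.0, 1.0], 0.0)
```

Changing the incident waves (e.g. to model a different load or a DMN) then re-predicts the pattern without any new pattern measurement, which is the point of the method.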


Person re-identification is the task of recognising a previously observed individual at a later time and at different locations across a network of cameras. Traditionally, this task has been performed by first extracting appearance features of an individual and then matching these features to the previous observation. However, identifying an individual based solely on appearance can be ambiguous, particularly when people wear similar clothing (i.e. people dressed in uniforms in sporting and school settings). The task is made more difficult when the resolution of the input image is small, as is typically the case in multi-camera networks. To circumvent these issues, we need to use other contextual cues. In this paper, we use "group" information as our contextual feature to aid in the re-identification of a person, which is heavily motivated by the fact that people generally move together as a collective group. To encode group context, we learn a linear mapping function to assign each person to a "role" or position within the group structure. We then combine the appearance and group context cues using a weighted summation. We demonstrate how this improves the performance of person re-identification in a sports environment over appearance-based features.
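The weighted summation of the two cues is straightforward; the weight and similarity scores below are hypothetical, not values from the paper:

```python
def fused_score(appearance_sim, group_sim, w=0.7):
    """Weighted summation of appearance and group-context cues.
    Both similarities are assumed normalised to [0, 1]; the weight
    w is a hypothetical illustration, not the paper's setting."""
    return w * appearance_sim + (1 - w) * group_sim

# Two candidates with identical clothing (ambiguous appearance scores)
# are separated by how well their role within the group matches:
s_right_role = fused_score(0.80, 0.90)
s_wrong_role = fused_score(0.80, 0.30)
```

When appearance alone cannot discriminate (e.g. uniformed players), the group term breaks the tie.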


Higher Degree Research (HDR) student publications are increasingly valued by students, by professional communities and by research institutions. Peer-reviewed publications form the HDR student writer's publication track record and increase competitiveness in employment and research funding opportunities. These publications also make the results of HDR student research available to the community in accessible formats. HDR student publications are also valued by universities because they provide evidence of institutional research activity within a field and attract a return on research performance. However, although publications are important to multiple stakeholders, many Education HDR students do not publish the results of their research. Hence, an investigation of Education HDR graduates who submitted work for publication during their candidacy was undertaken. This multiple, explanatory case study investigated six recent Education HDR graduates who had submitted work to peer-reviewed outlets during their candidacy. The conceptual framework supported an analysis of the development of Education HDR student writing using Alexander's (2003, 2004) Model of Domain Learning which focuses on expertise, and Lave and Wenger's (1991) situated learning within a community of practice. Within this framework, the study investigated how these graduates were able to submit or publish their research despite their relative lack of writing expertise. Case data were gathered through interviews and from graduate publication records. Contextual data were collected through graduate interviews, from Faculty and university documents, and through interviews with two Education HDR supervisors. Directed content analysis was applied to all data to ascertain the support available in the research training environment. Thematic analysis of graduate and supervisor interviews was then undertaken to reveal further information on training opportunities accessed by the HDR graduates. 
Pattern matching of all interview transcripts provided information on how the HDR graduates developed writing expertise. Finally, explanation building was used to determine causal links between the training accessed by the graduates and their writing expertise. The results demonstrated that Education HDR graduates developed publications and some level of expertise simultaneously within communities of practice. Students were largely supported by supervisors, who played a critical role: they facilitated communities of practice and largely mediated HDR engagement in other training opportunities. However, supervisor support alone did not ensure that the HDR graduates developed writing expertise. Graduates who appeared to develop the most expertise and produce a number of publications reported experiencing both a sustained period of engagement within one community of practice and participation in multiple communities of practice. The implications for the MDL theory, as applied to academic writing, suggest that communities of practice can assist learners to progress from initial contact with a new domain of interest through to competence. The implications for research training include the suggestion that supervisors, as potentially crucial supporters of HDR student writing for publication, should themselves be active publishers. Also, Faculty or university sponsorship of communities of practice focussed on HDR student writing for publication could provide effective support for the development of HDR student writing expertise and potentially increase the number of their peer-reviewed publications.


The growth of suitable tissue to replace natural blood vessels requires a degradable scaffold material that is processable into porous structures with appropriate mechanical and cell growth properties. This study investigates the fabrication of degradable, crosslinkable prepolymers of l-lactide-co-trimethylene carbonate into porous scaffolds by electrospinning. After crosslinking by γ-radiation, dimensionally stable scaffolds were obtained with up to 56% trimethylene carbonate incorporation. The fibrous mats showed Young’s moduli closely matching human arteries (0.4–0.8 MPa). Repeated cyclic extension yielded negligible change in mechanical properties, demonstrating the potential for use under dynamic physiological conditions. The scaffolds remained elastic and resilient at 30% strain after 84 days of degradation in phosphate buffer, while the modulus and ultimate stress and strain progressively decreased. The electrospun mats are mechanically superior to solid films of the same materials. In vitro, human mesenchymal stem cells adhered to and readily proliferated on the three-dimensional fiber network, demonstrating that these polymers may find use in growing artificial blood vessels in vivo.


Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. The related techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs by a "bag of concepts" rather than a "bag of words". The concepts are gathered from a general world knowledge base named the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles.
The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs by a set of acknowledged concepts. Along with the global and local analyses, a further concept matching approach is carried out to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are used as representatives of local information. These features have been proven to be the best alternative to user queries for avoiding ambiguity, and consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated in a scientific evaluation on the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deploying Pattern Taxonomy Model, and an ontology-based model. The gathered results indicate that the top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements in most information filtering measurements. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models that will impact people's daily lives.
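Of the baseline models listed, TF-IDF is the simplest. A minimal sketch of that representation (the thesis's ontology-based models are not reproduced here):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF weighting over tokenised documents: term
    frequency normalised by document length, times log inverse
    document frequency. One of the baseline representations the
    thesis compares against."""
    n = len(docs)
    # Document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()}
            for doc in docs]

# Toy corpus: rarer terms get higher weights within a document
docs = [["ontology", "mining", "user"],
        ["user", "profile", "ontology"],
        ["pattern", "mining"]]
weights = tfidf(docs)
```

A bag-of-words baseline like this treats "ontology" and "taxonomy" as unrelated strings, which is exactly the vocabulary sensitivity the ontology-based approach is meant to overcome.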


Background & aims: The confounding effect of disease on the outcomes of malnutrition using diagnosis-related groups (DRG) has never been studied in a multidisciplinary setting. This study aims to determine the impact of malnutrition on hospitalisation outcomes, controlling for DRG. Methods: Subjective Global Assessment was used to assess the nutritional status of 818 patients within 48 hours of admission. Prospective data were collected on cost of hospitalisation, length of stay (LOS), readmission and mortality up to 3 years post-discharge using National Death Register data. Mixed model analysis and conditional logistic regression matching by DRG were carried out to evaluate the association between nutritional status and outcomes, with the results adjusted for gender, age and race. Results: Malnourished patients (29%) had longer hospital stays (6.9±7.3 days vs. 4.6±5.6 days, p<0.001) and were more likely to be readmitted within 15 days (adjusted relative risk = 1.9, 95%CI 1.1–3.2, p=0.025). Within a DRG, the mean difference between the actual cost of hospitalisation and the average cost was greater for malnourished than for well-nourished patients (p=0.014). Mortality was higher in malnourished patients at 1 year (34% vs. 4.1%), 2 years (42.6% vs. 6.7%) and 3 years (48.5% vs. 9.9%); p<0.001 for all. Overall, malnutrition was a significant predictor of mortality (adjusted hazard ratio = 4.4, 95%CI 3.3–6.0, p<0.001). Conclusions: Malnutrition was evident in up to one third of inpatients and led to poor hospitalisation outcomes, even after matching for DRG. Strategies to prevent and treat malnutrition in the hospital and post-discharge are needed.


The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML documents. This research presents four clustering methods: two utilizing the structure of XML documents, and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One calculates document similarity using a linear weighting combination of structure and content similarities, with the content similarity based on a semantic kernel. The other calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the one based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections.
To further the research, the structural clustering method based on tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
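The path model can be illustrated by representing each document as its set of root-to-leaf tag paths and comparing the sets. The Jaccard measure below is an illustrative stand-in, not the thesis's exact similarity measure:

```python
import xml.etree.ElementTree as ET

def leaf_paths(xml_string):
    """Path model (simplified): represent an XML document by its set
    of root-to-leaf tag paths, ignoring attributes and text content."""
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        children = list(node)
        if not children:
            yield path
        for child in children:
            yield from walk(child, path)
    return set(walk(ET.fromstring(xml_string), ""))

def structure_similarity(a, b):
    """Jaccard overlap of the two path sets -- an illustrative
    structural similarity for clustering, not the thesis's measure."""
    pa, pb = leaf_paths(a), leaf_paths(b)
    return len(pa & pb) / len(pa | pb)

# Two book records sharing one of three distinct paths -> 1/3
sim = structure_similarity(
    "<book><title/><author><name/></author></book>",
    "<book><title/><year/></book>")
```

A tree-model measure would instead compare the documents' element hierarchies directly, which is the representation the thesis found more scalable.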