533 results for indexing
Abstract:
Visual tracking is the problem of estimating some variables related to a target given a video sequence depicting the target. Visual tracking is key to the automation of many tasks, such as visual surveillance, autonomous robot or vehicle navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking of generic targets in real-world scenarios remains an open problem. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to changing working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered. In particular, a novel solution for the categorization of 3D data is presented. The category recognition algorithm is based on a novel 3D descriptor that is shown to achieve state-of-the-art performance in several surface matching applications.
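As a minimal illustration of the two components the thesis proposes to adapt, the sketch below implements a plain recursive Bayesian tracker (a Kalman filter) with a fixed constant-velocity transition model and a fixed measurement model standing in for the appearance model; the adaptive schemes described in the abstract would replace these fixed matrices. All variable names and noise parameters are illustrative and not taken from the thesis.

    import numpy as np

    # State: [x, y, vx, vy]; constant-velocity transition model
    # (fixed here, adapted online in the thesis).
    dt = 1.0
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],      # measurement model: we observe (x, y) only
                  [0, 1, 0, 0]], dtype=float)
    Q = 0.01 * np.eye(4)             # process (transition) noise covariance
    R = 1.0 * np.eye(2)              # measurement noise covariance

    def predict(x, P):
        """Propagate the state estimate through the transition model."""
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        """Correct the prediction with a measurement z = (u, v)."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

    # One predict/update cycle for a single video frame.
    x, P = np.zeros(4), np.eye(4)
    x, P = predict(x, P)
    x, P = update(x, P, np.array([5.0, 3.0]))
    print(x)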
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence, most visualizations tend to use a layout in which position and distance have no meaning, and the layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography and illustrate its use with practical examples from numerous open-source case studies.
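The LSI-plus-MDS pipeline described above can be sketched in a few lines with scikit-learn; the artifact vocabularies, the number of LSI dimensions, and the distance metric below are illustrative choices, not the parameters used in the paper.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.manifold import MDS
    from sklearn.metrics import pairwise_distances

    # Each "document" is the vocabulary (identifiers, comments) of one software artifact.
    artifacts = {
        "Parser.java":   "parse token grammar syntax tree node statement",
        "Lexer.java":    "token scan character stream lexeme keyword",
        "Renderer.java": "draw canvas pixel colour layout paint widget",
    }

    tfidf = TfidfVectorizer().fit_transform(artifacts.values())   # term space
    lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)       # LSI: latent "topic" space
                                                                  # (normally far more dimensions)
    dist = pairwise_distances(lsi, metric="cosine")               # vocabulary dissimilarity
    xy = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)                  # 2-D map positions

    for name, (px, py) in zip(artifacts, xy):
        print(f"{name:15s} -> ({px:+.2f}, {py:+.2f})")

Artifacts with similar vocabulary end up close together on the resulting map, which is what makes the layout consistent across different views of the same system.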
Abstract:
Case series are a commonly reported study design, but the label "case series" is used inconsistently and sometimes incorrectly. Mislabeling impairs the appropriate indexing and sorting of evidence. This article tries to clarify the concept of case series and proposes a way to distinguish them from cohort studies. In a cohort study, patients are sampled on the basis of exposure and are followed over time, and the occurrence of outcomes is assessed. A cohort study may include a comparison group, although this is not a necessary feature. A case series may be a study that samples patients with both a specific outcome and a specific exposure, or one that samples patients with a specific outcome and includes patients regardless of whether they have specific exposures. Whereas a cohort study, in principle, enables the calculation of an absolute risk or a rate for the outcome, such a calculation is not possible in a case series.
Abstract:
With the advent of cheaper and faster DNA sequencing technologies, assembly methods have greatly changed. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing reads between 35 and 400 base pairs long. Reconstructing an organism's genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short DNA sequences of fixed length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was used to determine the most likely reads to join, based on a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16 to 20x coverage. We were able to successfully reassemble entire genomes up to 100,000 nucleotides in length.
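The core indexing idea, locating putative joins through shared n-grams rather than an exhaustive all-pairs comparison, can be sketched as below. The function names, the n-gram length, and the candidate-filtering threshold are illustrative; the probabilistic join metric developed in the work is not reproduced here.

    from collections import defaultdict
    from itertools import combinations

    def build_ngram_index(reads, n=8):
        """Map each length-n substring (n-gram) to the ids of the reads containing it."""
        index = defaultdict(set)
        for rid, seq in enumerate(reads):
            for i in range(len(seq) - n + 1):
                index[seq[i:i + n]].add(rid)
        return index

    def candidate_joins(reads, n=8, min_shared=2):
        """Propose read pairs sharing at least `min_shared` n-grams,
        avoiding an exhaustive search over all possible read pairs."""
        index = build_ngram_index(reads, n)
        shared = defaultdict(int)
        for rids in index.values():
            for a, b in combinations(sorted(rids), 2):
                shared[(a, b)] += 1
        return [pair for pair, count in shared.items() if count >= min_shared]

    reads = ["ACGTACGTGGTCAA", "GGTCAATTCGAGCT", "TTCGAGCTAGGCTA"]
    print(candidate_joins(reads, n=6, min_shared=1))   # e.g. [(0, 1), (1, 2)]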
Abstract:
Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is therefore essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results largely relies on many uncertain factors such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed in both public health practice and research. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone then recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods in order to review the methods in the literature, identify their drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a level suitable for field application; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regularly shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from the cameras and IMUs were explored to infer the metric scale of the 3D model and to reduce noise from these sensors.
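To make the recognition stage concrete, the sketch below pairs a toy appearance feature (a joint colour histogram) with a nearest neighbour classifier, in the spirit of the prototype system described above; the thesis's actual detectors, descriptors, and indexing structures are not reproduced, and the random arrays simply stand in for real food images.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def color_histogram(image, bins=8):
        """Toy appearance feature: a joint RGB histogram, normalised to sum to 1."""
        hist, _ = np.histogramdd(image.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=[(0, 256)] * 3)
        hist = hist.ravel()
        return hist / hist.sum()

    # Toy "training images" (H x W x 3 uint8 arrays) with food labels.
    rng = np.random.default_rng(0)
    train_images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(6)]
    train_labels = ["rice", "rice", "apple", "apple", "noodles", "noodles"]

    X = np.array([color_histogram(im) for im in train_images])
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)

    query = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    print(clf.predict([color_histogram(query)]))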
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence, most visualizations tend to use a layout in which position and distance have no meaning, and the layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.
Abstract:
On 14 November 2013, the US District Court for the Southern District of New York issued a major ruling in favour of the Google Books project, concluding that Google's unauthorized scanning and indexing of millions of copyrighted books in the collections of participating libraries, and the subsequent making available online of snippets of these works through the "Google Books" search tool, qualify as fair use under section 107 USCA. After assuming that Google's actions constitute a prima facie case of copyright infringement, Judge Chin examined the four factors in section 107 USCA and concluded in favour of fair use on the grounds that the project provides "significant public benefits," that the unauthorized use of the copyrighted works (a search tool over the scanned full text of books) is "highly transformative," and that it does not supersede or supplant these works. The fair use defence also excluded Google's liability for making copies of the scanned books available to the libraries (and excluded secondary liability, since the libraries' own actions were also found to be protected by fair use): this use is aimed at enhancing lawful uses of the digitized books by the libraries for the advancement of the arts and sciences. A previous ruling by the same court, of 22 March 2011, had rejected a settlement agreement proposed by the parties on the grounds that it was "not fair, adequate, and reasonable". The Authors Guild has appealed the ruling.
Abstract:
Tegrity Campus 2.0 is the first student achievement system that impacts learning across the entire institution, improving retention and student satisfaction. Tegrity makes class time available all the time by automatically capturing, storing and indexing every class on campus for replay by every student. With Tegrity, students quickly recall key moments or replay entire classes online, with digital notes, on their iPods and cell phones. [See PDF for complete abstract]
Abstract:
When evaluated for promotion or tenure, faculty members are increasingly judged more on the quality than on the quantity of their scholarly publications. As a result, they want help from librarians in locating all citations to their published works for documentation in their curriculum vitae. Citation analysis using Science Citation Index and Social Science Citation Index provides a logical starting point in measuring quality, but the limitations of these sources leave a void in coverage of citations to an author's work. This article discusses alternative and additional methods of locating citations to published works.
Abstract:
Childhood traumatic events may lead to long-lasting psychological effects and contribute to the development of complex posttraumatic sequelae. These might be captured by the diagnostic concept of complex posttraumatic stress disorder (CPTSD) as an alternative to classic posttraumatic stress disorder (PTSD). CPTSD comprises a further set of symptoms in addition to those of PTSD, namely, changes in affect, self, and interpersonal relationships. Previous empirical research on CPTSD has focused on middle-aged adults but not on older adults. Moreover, predictor models of CPTSD are still rare. The current study investigated the association between traumatic events in childhood and complex posttraumatic stress symptoms in older adults. The mediation of this association by 2 social-interpersonal factors (social acknowledgment as a survivor and dysfunctional disclosure) was investigated. These 2 factors focus on the perception of acknowledgment by others and either the inability to disclose traumatic experiences or the ability to do so only with negative emotional reactions. A total of 116 older individuals (age range = 59–98 years) who had experienced childhood traumatic events completed standardized self-report questionnaires indexing childhood trauma, complex trauma sequelae, social acknowledgment, and dysfunctional disclosure of trauma. The results showed that traumatic events during childhood were associated with later posttraumatic stress symptoms but with classic rather than complex symptoms. Social acknowledgment and dysfunctional disclosure partially mediated this relationship. These findings suggest that childhood traumatic stress impacts individuals across the life span and may be associated with particular adverse psychopathological consequences.
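As a note on what "partially mediated" means here: in a standard product-of-coefficients framework, the indirect effect through the mediator is nonzero while a direct effect remains. The sketch below illustrates that decomposition on simulated scores; the variable names echo the constructs in the abstract, but the numbers are invented for illustration and bear no relation to the study's data or results.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 116  # same sample size as the study, but the scores below are simulated

    childhood_trauma = rng.normal(size=n)
    social_ack = -0.5 * childhood_trauma + rng.normal(size=n)      # hypothetical mediator
    symptoms = 0.4 * childhood_trauma - 0.3 * social_ack + rng.normal(size=n)

    def ols_coefs(y, predictors):
        """OLS slopes of y on the given predictors (intercept fitted, then dropped)."""
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]

    c_total = ols_coefs(symptoms, [childhood_trauma])[0]            # total effect
    a = ols_coefs(social_ack, [childhood_trauma])[0]                # trauma -> mediator
    b, c_direct = ols_coefs(symptoms, [social_ack, childhood_trauma])
    print(f"total={c_total:.2f} direct={c_direct:.2f} indirect={a * b:.2f}")
    # Partial mediation: the indirect effect (a*b) is nonzero and the direct effect persists.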
Abstract:
High-pressure powder X-ray diffraction is a fundamental technique for investigating structural responses to externally applied force. Synchrotron sources and two-dimensional detectors are required. In contrast to this conventional setup, high-resolution beamlines equipped with one-dimensional detectors can offer much better-resolved peaks, but cannot deliver accurate structure factors because they sample only a small portion of the Debye rings, which are usually inhomogeneous and spotty owing to the small amount of sample. In this study, a simple method to overcome this problem is presented and successfully applied to solving the structure of an L-serine polymorph from powder data. A comparison of the obtained high-resolution high-pressure data with conventional data shows that this technique, which provides up to ten times better angular resolution, can be advantageous for indexing, for lattice parameter refinement, and even for structure refinement and solution in special cases.