33 results for Scale invariant feature transform (SIFT)
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle to effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied, with emphasis on feature extraction based on the Hough transform and Gabor filtering. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented, in which distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform by employing parallel processing and local information are introduced.
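As an illustration of the Gabor-filtering side, the sketch below (hypothetical Python, not the thesis's implementation) builds a small bank of oriented Gabor kernels and collects the per-orientation response energies; sorting the energy vector discards orientation, which is one simple way to obtain a rotation-tolerant feature. All parameter values are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine
    grating oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, n_orientations=8, wavelength=8.0, sigma=4.0):
    """Filter the image with a bank of orientations and collect the
    response energies; sorting drops the orientation labels, making
    the feature vector tolerant to image rotation."""
    energies = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        response = fftconvolve(image, gabor_kernel(33, wavelength, theta, sigma),
                               mode='same')
        energies.append(np.sqrt((response ** 2).mean()))
    return np.sort(np.asarray(energies))
```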
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
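As an illustration of the invariant-first workflow, the following sketch (hypothetical Python, not Socos output) states a loop invariant for an insertion sort and checks it after every step. Runtime assertions are far weaker than the static proofs the tool discharges through PVS and Yices; the point is only the shape of the workflow: establish the invariant, add code, then confirm the invariant still holds.

```python
def sorted_prefix(a, i):
    """Invariant: the prefix a[0..i) is sorted. (A full treatment would
    also assert that a is a permutation of the original input.)"""
    return all(a[k] <= a[k + 1] for k in range(i - 1))

def insertion_sort(a):
    for i in range(1, len(a)):
        assert sorted_prefix(a, i)       # invariant holds on entry
        x, j = a[i], i
        while j > 0 and a[j - 1] > x:    # shift larger elements right
            a[j] = a[j - 1]
            j -= 1
        a[j] = x
        assert sorted_prefix(a, i + 1)   # invariant re-established
    return a

assert insertion_sort([3, 1, 2]) == [1, 2, 3]
```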
Abstract:
Developing software is a difficult and error-prone activity, and the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction – created by R.-J. Back – is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, interaction with the users of the software and correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. The thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion covers various aspects of software development that relate to Stepwise Feature Introduction. More specifically, we evaluate the paradigm against common practices of object-oriented programming and design and against agile development methodologies. We also outline a strategy for testing systems built with the paradigm of Stepwise Feature Introduction.
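A minimal sketch of the layered structure that Stepwise Feature Introduction yields: each layer introduces a single feature while preserving the behaviour of the layer below. The classes are invented for illustration, not taken from the thesis's case studies.

```python
class Counter:
    """Base layer: a counter that can only be incremented."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class ResettableCounter(Counter):
    """Second layer: adds reset without touching increment, so every
    behaviour of the layer below is preserved."""
    def reset(self):
        self.value = 0

class BoundedCounter(ResettableCounter):
    """Third layer: refines increment with an upper bound, a strengthened
    invariant (value <= bound) that later layers can rely on."""
    def __init__(self, bound):
        super().__init__()
        self.bound = bound
    def increment(self):
        if self.value < self.bound:
            super().increment()
```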
Abstract:
Feature extraction is the part of pattern recognition in which the sensor data are transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system while preserving the information that is essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can serve as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a central role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated, by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments on a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
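The basic 3x3 LBP operator at the heart of the analysis is compact enough to show directly: each pixel is described by thresholding its eight neighbours against the centre value and packing the comparison bits into a code, and the histogram of codes serves as the texture descriptor. Because the comparisons depend only on the ordering of grey levels, the codes are invariant to monotonic illumination changes. A minimal NumPy sketch (not the MIPA4k focal-plane implementation):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of every interior pixel
    against the centre value and pack the bits into an 8-bit code."""
    c = img[1:-1, 1:-1]                               # centre pixels (border skipped)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # fixed clockwise order
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]        # shifted neighbour view
        codes += (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised histogram of LBP codes: the texture descriptor."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```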
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting the genetic variant subsets that are most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended to run on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
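The point about model validation can be made concrete with a nested cross-validation sketch: feature selection and hyperparameter choice live inside the inner loop, so the outer folds give an honest estimate of generalization. The scikit-learn pipeline below runs on simulated placeholder data, not a real GWAS set.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 1000)).astype(float)  # genotype dosages 0/1/2
y = rng.integers(0, 2, size=200)                        # binary phenotype

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),          # filter-type variant selection
    ("clf", LogisticRegression(max_iter=1000)),
])
# Inner loop: choose how many variants to keep, seeing only training folds.
inner = GridSearchCV(pipe, {"select__k": [10, 50, 100]}, cv=3)
# Outer loop: estimate generalization of the entire selection+fit procedure.
scores = cross_val_score(inner, X, y, cv=5)
print(scores.mean())  # close to 0.5 here, since the phenotype is random
```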
Abstract:
The purpose of the study is: (1) to describe how nursing students experienced their clinical learning environment and the supervision given by staff nurses working in hospital settings; and (2) to develop and test an evaluation scale of the Clinical Learning Environment and Supervision (CLES). The study was carried out in several phases. The pilot study (n=163) explored the association between the characteristics of a ward and students' evaluation of it as a learning environment. The second version of the research instrument (developed from the results of the pilot study) was tested by an expert panel (n=9 nurse teachers) and a test-retest group formed by student nurses (n=38). After this evaluative phase, the CLES was adopted as the basic research instrument for the study and was tested with the Finnish main sample (n=416). In this phase, a concurrent validity instrument (Dunn & Burnett 1995) was used to confirm the validation process of the CLES. The international comparative study was made by comparing the Finnish main sample with a British sample (n=142). The international comparison was necessary for two reasons: in the instrument development process, a new instrument needs to be tested in another nursing culture, and the comparison reflects the impact of open employment markets in the European Union (EU) on the need to evaluate and integrate EU health care educational systems. The results showed that the individualised supervision system is the most commonly used supervision model, and that the supervisory relationship with a personal mentor is the single most meaningful element of supervision as evaluated by nursing students. The ward atmosphere and the management style of the ward manager are the most important environmental factors of the clinical ward. The study integrates two theoretical elements - learning environment and supervision - into a preliminary theoretical model. The international comparison showed that Finnish students were more satisfied and evaluated their clinical placements and supervision with higher scores than students in the United Kingdom (UK). The difference between the groups was statistically highly significant (p=0.000). In the UK, clinical placements were longer, but students met their nurse teachers less frequently than students in Finland. Arrangements for supervision were similar. This research process has produced an evaluation scale (CLES) that can be used in research and in quality assessments of the clinical learning environment and supervision in Finland and in the UK. The CLES consists of 27 items and is sub-divided into five sub-dimensions. Cronbach's alpha coefficients varied from a high of 0.94 to a marginal 0.73. The CLES is a compact evaluation scale whose user-friendliness makes it suitable for continuing evaluation.
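For reference, Cronbach's alpha, the internal-consistency statistic reported for the CLES sub-dimensions, is computed as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores) over a respondents-by-items matrix of k items. A minimal sketch with made-up ratings (not CLES data):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of Likert-type ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Four hypothetical respondents rating three items:
print(round(cronbach_alpha([[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]]), 2))
```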
Abstract:
The purpose of this thesis was to describe the motor development of preterm infants born weighing under 1,500 grams at the corrected ages of three, six and twelve months, and to identify possible common features of their motor development as assessed with the Alberta Infant Motor Scale (AIMS). The work was carried out in cooperation with the physiotherapy unit of the Division of Children's and Adolescents' Diseases (Lasten ja nuorten sairauksien toimiala), where AIMS assessments of preterm infants' motor development had been conducted in 2005-2006. The idea for the thesis arose from joint discussions with the physiotherapists. The aim of the thesis was to analyse the material collected by the division and to compile a summary of it for the division's use. The work was a descriptive quantitative study based on pre-existing data. The data consisted of the AIMS assessment forms of a total of 109 preterm infants: 54 were at the corrected age of three months, 42 at six months and 13 at twelve months. The results were analysed with the SPSS 13.0 for Windows statistical software and presented in tables and figures. In addition to the literature, we used recent research articles and an expert interview as data collection methods. Of the three-month-old preterm infants, 51 placed on the AIMS motor development curves; three children fell below the curves. The total scores of the six-month-old infants showed more variation, with 15 children falling below the AIMS curves. Of the twelve-month-old children, nine placed on the motor development curves and four fell below them. As a common feature, all of the three-month-old infants and 14 of the six-month-old infants lacked the skill of supporting themselves on their upper limbs in a sitting position (Sitting With Propped Arms). Based on the results, the motor development of the great majority (51/54) of the three-month-old preterm infants was age-appropriate. Among the six- and twelve-month-old infants, individual differences in motor development were larger. In our study population, motor development was below age level in 22 preterm infants. The Division of Children's and Adolescents' Diseases will receive the results of our work, which can be used in the follow-up of preterm infants' motor development and in the development of physiotherapy. Our work increases awareness of the AIMS and will also be of broader benefit to physiotherapists working with children.