973 results for Semi-automatic road extraction
From fall-risk assessment to fall detection: inertial sensors in the clinical routine and daily life
Abstract:
Falls are caused by a complex interaction among multiple risk factors, which may be modified by age, disease and environment. A variety of methods and tools for fall-risk assessment have been proposed, but none is universally accepted. Existing tools are generally not capable of providing a quantitative, predictive assessment of fall risk. Objective, cost-effective and clinically applicable methods are needed to enable quantitative assessment of fall risk on a subject-specific basis. Objectively tracking fall risk could provide timely feedback about the effectiveness of administered interventions, enabling intervention strategies to be modified or changed if found to be ineffective. Moreover, some of the fundamental factors leading to falls, and what actually happens during a fall, remain unclear. Objectively documented and measured falls are needed to improve knowledge of falls in order to develop more effective prevention strategies and prolong independent living. In the last decade, several research groups have developed sensor-based automatic or semi-automatic fall-risk assessment tools using wearable inertial sensors. This approach may also serve to detect falls. At the moment, i) several fall-risk assessment studies based on inertial sensors, even if promising, lack a biomechanical model-based approach, which could provide accurate and more detailed measurements of interest (e.g., joint moments and forces), and ii) the amount of published data on real-world falls of older people is minimal, since most authors have used simulations with healthy volunteers as a surrogate for real-world falls. With these limitations in mind, this thesis aims i) to propose a novel method for the kinematic and dynamic evaluation of functional motor tasks, often used in clinics for fall-risk evaluation, through a body sensor network and a biomechanical approach, and ii) to define guidelines for a fall detection algorithm based on the availability of a real-world fall database.
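As an illustration of the kind of inertial-sensor processing involved, the sketch below implements a generic threshold-based fall detector on tri-axial accelerometer data (an impact peak followed by a period of stillness). The thresholds, window lengths and function name are hypothetical placeholders, not values from the thesis, which instead argues for a biomechanical model-based approach.

```python
# Illustrative sketch only: a simple threshold-based fall detector on
# tri-axial accelerometer data. Thresholds and window lengths are
# hypothetical placeholders, not values from the thesis.
import numpy as np

def detect_falls(acc, fs, impact_g=2.5, still_g=0.15, still_window_s=1.0):
    """Return sample indices of candidate falls.

    acc : (N, 3) array of accelerations in units of g
    fs  : sampling frequency in Hz
    A candidate fall is an impact peak (magnitude > impact_g) followed by
    near-stillness (deviation from 1 g below still_g).
    """
    mag = np.linalg.norm(acc, axis=1)            # acceleration magnitude
    win = int(still_window_s * fs)               # length of the stillness window
    falls = []
    for i in np.where(mag > impact_g)[0]:        # impact candidates
        post = mag[i + win : i + 2 * win]        # window shortly after the impact
        if len(post) == win and np.all(np.abs(post - 1.0) < still_g):
            falls.append(i)
    return falls

# Example with synthetic data: quiet standing, a spike, then lying still.
fs = 100
signal = np.ones((6 * fs, 3)) * np.array([0.0, 0.0, 1.0])
signal[300] = [0.5, 0.5, 3.5]                    # simulated impact
print(detect_falls(signal, fs))                  # -> [300]
```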
Abstract:
In any terminological study, candidate term extraction is a very time-consuming task. Corpus analysis tools have automated some processes, allowing the detection of relevant data within the texts and facilitating term-candidate selection. Nevertheless, these tools are normally not specific to terminology research; therefore, the units that are automatically extracted need manual evaluation. Over the last few years, some software products have been developed specifically for automatic term extraction. They are based on corpus analysis, but use linguistic and statistical information to filter data more precisely. As a result, the time needed for manual evaluation is reduced. In this framework, we tried to understand whether and how these new tools can really be an advantage. To carry out our project, we simulated a terminology study: we chose a domain (the legal framework for medicinal products for human use) and compiled a corpus from which we extracted terms and phraseologisms using AntConc, a corpus analysis tool. Afterwards, we compared our list with the lists extracted automatically by three different tools (TermoStat Web, TaaS and Sketch Engine) in order to evaluate their performance. In the first chapter we describe some principles of terminology and phraseology in languages for special purposes and show the advantages offered by corpus linguistics. In the second chapter we illustrate some of the main concepts of the selected domain, as well as some of the main features of legal texts. In the third chapter we describe automatic term extraction and the main criteria for evaluating it; moreover, we introduce the term-extraction tools used for this project. In the fourth chapter we describe our research method and, in the fifth chapter, we present our results and draw some preliminary conclusions on the performance and usefulness of term-extraction tools.
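To give a concrete sense of the statistical filtering such tools apply, here is a minimal sketch of single-word term-candidate extraction that ranks words by their relative frequency in a domain corpus against a reference corpus (a simple "weirdness" ratio). The toy corpora, tokenizer and smoothing are assumptions for illustration; TermoStat Web, TaaS and Sketch Engine use considerably richer linguistic and statistical criteria.

```python
# Minimal sketch of statistical term-candidate extraction: rank words of a
# domain corpus by their frequency ratio against a reference corpus
# ("weirdness" ratio). Real tools add POS patterns and multiword handling.
from collections import Counter
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def candidate_terms(domain_text, reference_text, top_n=5):
    dom = Counter(tokenize(domain_text))
    ref = Counter(tokenize(reference_text))
    dom_total = sum(dom.values())
    ref_total = sum(ref.values())
    scores = {
        w: (dom[w] / dom_total) / ((ref[w] + 1) / ref_total)  # +1 smoothing
        for w in dom
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy corpora (placeholders, not the corpus compiled for the thesis).
domain = "the marketing authorisation of a medicinal product requires a dossier"
reference = "the weather was fine and the product of the meeting was a plan"
print(candidate_terms(domain, reference))
```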
Abstract:
In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both computer vision and natural language processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of machine learning and pattern recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of high-performance computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of computer vision and, in particular, object recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain more insight from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
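For readers unfamiliar with the CNN side of the comparison, the following minimal PyTorch sketch shows the typical convolution-pooling-classifier structure used for small-image object recognition; the layer sizes and input resolution are arbitrary illustrations, not the architecture evaluated in the thesis.

```python
# Minimal convolutional network sketch (PyTorch), illustrative of the CNN
# side of the comparison; the layer sizes are arbitrary, not the thesis setup.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Forward pass on a random batch of 32x32 RGB images.
model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)   # torch.Size([4, 10])
```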
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "treebank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi-)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but generally not in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Building the formal grammar was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors. Without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially true from the point of view of testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The problem of the consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system into any other language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of primary lexico-syntactic information). The formalism and methods used in this project can be used in other Slavic languages without substantial changes.
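As a loose, language-independent illustration of what robustness means here, the toy sketch below uses NLTK with a tiny English CFG and, when a sentence fails to parse, falls back to the longest parsable prefix. The grammar, fallback strategy and example sentences are invented for illustration only and have nothing to do with the project's metarule formalism for Czech.

```python
# Toy illustration (not the project's grammar or formalism): a small CFG
# parsed with NLTK's chart parser; when a sentence fails to parse, fall back
# to the longest prefix that still parses, a crude stand-in for robustness.
import nltk

grammar = nltk.CFG.fromstring("""
S   -> NP VP
NP  -> Det N
VP  -> V NP
Det -> 'the' | 'a'
N   -> 'dog' | 'cat'
V   -> 'sees' | 'chases'
""")
parser = nltk.ChartParser(grammar)

def robust_parse(tokens):
    for end in range(len(tokens), 0, -1):        # shrink from the right
        trees = list(parser.parse(tokens[:end]))
        if trees:
            return trees[0], tokens[end:]        # parse + unparsed remainder
    return None, tokens

tree, rest = robust_parse("the dog sees a cat".split())
print(tree)                                      # full parse succeeds
tree, rest = robust_parse("the dog sees a cat the cat".split())
print(rest)                                      # tokens that could not be attached
```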
Abstract:
A new system for computer-aided corrective surgery of the jaws has been developed and introduced clinically. It combines three-dimensional (3-D) surgical planning with conventional dental occlusion planning. The developed software allows the surgical correction to be simulated on virtual 3-D models of the facial skeleton generated from computed tomography (CT) scans. Surgery planning and simulation include dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and segment repositioning. By coupling the software with a tracking system and with the help of a special registration procedure, we are able to acquire dental occlusion plans from plaster model mounts. Upon completion of the surgical plan, the setup is used to manufacture positioning splints for intraoperative guidance. The system provides further intraoperative assistance with the help of a display showing jaw positions and 3-D positioning guides updated in real time during the surgical procedure. The proposed approach offers the advantages of 3-D visualization and tracking technology without sacrificing long-proven cast-based techniques for dental occlusion evaluation. The system has been applied to one patient. Throughout this procedure, we experienced improved assessment of the pathology, increased precision, and augmented control.
Issues of spectral quality in clinical 1H-magnetic resonance spectroscopy and a gallery of artifacts
Abstract:
Despite the fact that magnetic resonance spectroscopy (MRS) is applied as a clinical tool in non-specialized institutions, and that semi-automatic acquisition and processing tools can be used to produce quantitative information from MRS exams without expert input, issues of spectral quality and quality assessment are neglected in the MR spectroscopy literature. Worse still, there is no consensus among experts on concepts or detailed criteria for the quality assessment of MR spectra. Furthermore, artifacts are not at all conspicuous in MRS and can easily be taken for true, interpretable features. This article aims to increase interest in issues of spectral quality and quality assessment, to start a larger debate on generally accepted criteria that spectra must fulfil to be clinically and scientifically acceptable, and to provide a sample gallery of artifacts that can be used to raise awareness of potential pitfalls in MRS.
Abstract:
Given arbitrary pictures, we explore the possibility of using new techniques from computer vision and artificial intelligence to create customized visual games on the fly. These include popular games such as coloring books, link-the-dot and spot-the-difference. The feasibility of these systems is discussed, and we describe prototype implementations that work well in practice in an automatic or semi-automatic way.
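As a toy example of one such game type, the sketch below turns a photograph into a coloring-book-style outline using standard OpenCV edge detection. This is a generic illustration under assumed parameters and file names, not the authors' prototype pipeline.

```python
# Sketch of one of the described game types: turning a photo into a
# coloring-book-style line drawing with OpenCV edge detection.
import cv2

def coloring_book_page(image_path, out_path="outline.png"):
    img = cv2.imread(image_path)                    # BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress texture noise
    edges = cv2.Canny(blur, 50, 150)                # keep strong contours only
    outline = 255 - edges                           # black lines on white
    cv2.imwrite(out_path, outline)
    return outline

# Usage (the path is a placeholder):
# coloring_book_page("photo.jpg")
```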
Resumo:
Diet-related chronic diseases severely affect personal and global health. However, managing or treating these diseases currently requires long training and high personal involvement to succeed. Computer vision systems could assist with the assessment of diet by detecting and recognizing different foods and their portions in images. We propose novel methods for detecting a dish in an image and segmenting its contents with and without user interaction. All methods were evaluated on a database of over 1600 manually annotated images. The dish detection scored an average of 99% accuracy with a .2s/image run time, while the automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 91% respectively, with an average run time of .5s/image, outperforming competing solutions.
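To illustrate what user-assisted dish segmentation can look like in practice, here is a minimal sketch using OpenCV's GrabCut seeded with a user-drawn rectangle around the dish. The paper proposes its own novel methods, which are not reproduced here; treat this purely as a generic baseline with placeholder inputs.

```python
# Illustrative semi-automatic segmentation with OpenCV's GrabCut, seeded by a
# user-drawn rectangle around the dish; not the paper's actual method.
import cv2
import numpy as np

def segment_dish(image, rect, iterations=5):
    """image: BGR array; rect: (x, y, w, h) drawn by the user around the dish."""
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)       # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Pixels marked as sure or probable foreground form the dish mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    1, 0).astype(np.uint8)

# Usage (placeholder image and rectangle):
# img = cv2.imread("plate.jpg")
# dish_mask = segment_dish(img, rect=(50, 40, 400, 300))
```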
Abstract:
This article presents and technically describes a new field spectro-goniometer system for the ground-based characterization of surface reflectance anisotropy under natural illumination conditions, developed at the Alfred Wegener Institute (AWI). The spectro-goniometer consists of a Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS) and a hyperspectral sensor system. The presented measurement strategy shows that the AWI ManTIS field spectro-goniometer can deliver high-quality hemispherical conical reflectance factor (HCRF) measurements with a pointing accuracy of ±6 cm within the constant observation center. Sampling a ManTIS hemisphere (up to 30° viewing zenith, 360° viewing azimuth) takes approximately 18 min. The developed data processing chain, in combination with the software used for the semi-automatic control, provides a reliable method to reduce temporal effects during the measurements. The visualization and analysis approaches presented for the HCRF data of an Arctic low-growing vegetation showcase demonstrate the high quality of the spectro-goniometer measurements. The patented low-cost and lightweight ManTIS instrument platform can be customized for various research needs and is available for purchase.
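For context, an HCRF is commonly derived as the ratio of the target radiance to the radiance of a calibrated white reference panel. The sketch below shows that computation under assumed variable names and a simple scalar panel calibration; it is not necessarily how the AWI processing chain handles the correction.

```python
# Simplified sketch of how a hemispherical conical reflectance factor (HCRF)
# is typically derived from field measurements: the ratio of target radiance
# to white-reference-panel radiance, scaled by the panel's known reflectance.
import numpy as np

def hcrf(target_radiance, panel_radiance, panel_calibration=1.0):
    """All radiances are per-wavelength arrays; panel_calibration is the
    panel's known reflectance factor (ideally close to 1)."""
    target_radiance = np.asarray(target_radiance, dtype=float)
    panel_radiance = np.asarray(panel_radiance, dtype=float)
    return panel_calibration * target_radiance / panel_radiance

# Toy spectra (3 bands) of a dark vegetation target against a bright panel.
print(hcrf([0.02, 0.05, 0.20], [0.10, 0.11, 0.25], panel_calibration=0.99))
```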
Abstract:
This paper aims to describe the experience of implementing and developing the journal portal of the Facultad de Humanidades y Ciencias de Educación of the Universidad Nacional de La Plata, so that it can be drawn upon by anyone undertaking similar initiatives. To that end, it first reviews the Facultad's track record in publishing scholarly journals and the library work carried out to increase their visibility. Second, it presents the tasks carried out by the Facultad's Prosecretaría de Gestión Editorial y Difusión (PGEyD) to get the portal up and running. Particular attention is given to the customization of the software, the methodology used for the bulk loading of information into the system (users and back issues), and the procedures that allow all of the portal's contents to be included semi-automatically in the institutional repository and in the web catalogue. The paper then refers to the ongoing work on support and training for editors. It then presents the results achieved so far after one year of work: the creation of 10 journals, the migration of 4 complete titles, and the inclusion of 25% of the contributions published in the journals edited by the FaHCE. Finally, it outlines a series of challenges that the Prosecretaría has set itself in order to improve the portal and optimize intra- and inter-institutional workflows.
Abstract:
ZooScan, with the ZooProcess and Plankton Identifier (PkID) software, is an integrated analysis system for the acquisition and classification of digital zooplankton images from preserved zooplankton samples. Zooplankton samples are digitized by the ZooScan and processed by ZooProcess and PkID in order to detect, enumerate, measure and classify the digitized objects. Here we present a semi-automatic approach that entails automated classification of images followed by manual validation, which allows rapid and accurate classification of zooplankton and abiotic objects. We demonstrate this approach with a biweekly zooplankton time series from the Bay of Villefranche-sur-mer, France. The classification approach proposed here provides a practical compromise between a fully automatic method with varying degrees of bias and a manual but accurate classification of zooplankton. We also evaluate the appropriate number of images to include in digital learning sets and compare the accuracy of six classification algorithms. We evaluate the accuracy of the ZooScan for automated measurements of body size and present relationships between machine measures of size and the carbon and nitrogen content of selected zooplankton taxa. We demonstrate that the ZooScan system can produce useful measures of zooplankton abundance, biomass and size spectra for a variety of ecological studies.
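The semi-automatic idea of automated classification followed by manual validation can be sketched as follows: a classifier proposes labels and only low-confidence predictions are routed to an expert. The random-forest features, the 0.9 confidence threshold and the synthetic data are illustrative assumptions, not the algorithms or settings evaluated with ZooScan/PkID.

```python
# Sketch of the semi-automatic workflow: automatic classification plus
# manual validation of uncertain objects. Features, threshold and data are
# synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Placeholder data: 200 objects x 5 morphological features, 3 classes.
X_train = rng.normal(size=(200, 5))
y_train = rng.integers(0, 3, size=200)
X_new = rng.normal(size=(20, 5))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_new)
pred = clf.classes_[proba.argmax(axis=1)]        # automatic labels
needs_validation = proba.max(axis=1) < 0.9       # uncertain objects go to an expert

print(pred)
print(f"{needs_validation.sum()} of {len(X_new)} objects flagged for manual validation")
```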
Abstract:
The main objective of ventilation systems in the event of fire is to reduce the possible consequences by achieving the best possible conditions for the evacuation of users and the intervention of the emergency services. In recent years, the required quick response of the ventilation system, from normal to emergency mode, has been improved by the use of automatic and semi-automatic control systems, which reduce response times by supporting operator decision-making and by using pre-defined strategies. A further step is the use of closed-loop algorithms, which take into account not only the initial conditions but also their evolution (air velocity, traffic situation, etc.), optimizing the quality of the smoke control process.
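As a schematic illustration of such closed-loop control, the toy sketch below uses a PI controller to regulate longitudinal air velocity towards a setpoint. The plant model, gains and numbers are invented for illustration and are not taken from the article.

```python
# Toy closed-loop sketch: a PI controller adjusting jet-fan thrust to hold a
# target longitudinal air velocity in a tunnel. Plant model and gains are
# invented for illustration.
def simulate(setpoint=3.0, steps=60, dt=1.0, kp=0.8, ki=0.2):
    velocity, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - velocity              # measured deviation
        integral += error * dt
        thrust = kp * error + ki * integral      # PI control law
        # Crude first-order plant: thrust accelerates the air, drag slows it.
        velocity += dt * (0.5 * thrust - 0.3 * velocity)
    return velocity

print(f"air velocity after 60 s: {simulate():.2f} m/s")
```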
Abstract:
We introduce an innovative, semi-automatic method to transform low-resolution facial meshes into high-definition ones, based on tailoring a generic, neutral human head model, designed by an artist, to fit the facial features of a specific person. To determine these facial features, we select a set of "control points" (corners of the eyes, lips, etc.) in at least two photographs of the subject's face. The neutral head mesh is then automatically reshaped according to the relation between the control points in the original subject's mesh, through a set of transformation pyramids. The last step consists of merging both meshes and filling the gaps that appear in the process. This algorithm avoids the use of expensive and complicated technologies for obtaining depth maps, which would also need to be meshed later.
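The reshaping step could be prototyped in many ways. As a simplified stand-in (explicitly not the transformation pyramids described above), the sketch below warps the generic head's vertices with a radial basis function interpolation of the control-point displacements, using synthetic placeholder points.

```python
# Simplified stand-in for the reshaping step: warp the generic head vertices
# with an RBF interpolation of control-point displacements. Points are
# synthetic placeholders; this is not the paper's transformation-pyramid method.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control points on the generic head and their target positions on the subject.
generic_ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
subject_ctrl = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0],
                         [0.0, 0.9, 0.1], [0.0, 0.1, 1.1]])

# Interpolate the displacement field defined at the control points...
warp = RBFInterpolator(generic_ctrl, subject_ctrl - generic_ctrl,
                       kernel="thin_plate_spline")

# ...and apply it to every vertex of the generic mesh.
generic_vertices = np.random.rand(100, 3)
warped_vertices = generic_vertices + warp(generic_vertices)
print(warped_vertices.shape)   # (100, 3)
```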