944 results for Software-based techniques
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2012
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2015
Abstract:
We have studied human movement and looked for ways to create these movements in real time in digital environments, so that the work artists and animators must carry out is reduced. We have surveyed the different character animation techniques currently found in the entertainment industry, as well as the main lines of research, studying in detail the most widely used technique: motion capture. Motion capture records a person's movements using optical sensors, magnetic sensors, and video cameras. This information is stored in files that can later be played back by a character in real time in a digital application. Every recorded movement must be associated with a character; this is the rigging process. One of the points we have worked on is a semi-automatic system for associating the skeleton with the character's mesh, reducing the animator's work in this process. In real-time applications such as virtual reality, the environment the characters inhabit is increasingly simulated using Newton's laws, so that every change in a body's motion results from a force applied to it. Motion capture does not scale well in these environments, because it cannot create new realistic animations, derived from the recorded one, that depend on interaction with the environment. The final goal of our work has been to create animations from forces in real time, just as happens in reality. To this end we have introduced a muscular model and a balance system on the character, so that it can respond realistically to interactions with the environment simulated by Newton's laws.
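As a rough illustration of the force-driven simulation style this abstract refers to (not the thesis's actual muscle or balance model), a Newtonian point mass can be advanced with a semi-implicit Euler step; all names in this Python sketch are hypothetical:

# Minimal sketch of force-driven motion: a point mass obeying F = m*a,
# integrated with semi-implicit Euler (common in real-time simulation).
# Illustrative only; not the thesis's muscle/balance model.

GRAVITY = (0.0, -9.81)

def step(pos, vel, force, mass, dt):
    """Advance one step: acceleration from the applied force plus
    gravity, then velocity, then position."""
    ax = force[0] / mass + GRAVITY[0]
    ay = force[1] / mass + GRAVITY[1]
    vx = vel[0] + ax * dt          # update velocity first ...
    vy = vel[1] + ay * dt
    px = pos[0] + vx * dt          # ... then position (semi-implicit Euler)
    py = pos[1] + vy * dt
    return (px, py), (vx, vy)

pos, vel = (0.0, 1.0), (0.0, 0.0)
for _ in range(60):                # one second at 60 Hz
    pos, vel = step(pos, vel, (0.5, 0.0), mass=70.0, dt=1.0 / 60.0)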
Abstract:
Robust Huber-type regression and testing of linear hypotheses are adapted to the statistical analysis of parallel-line and slope-ratio assays. They are applied in evaluating the results of several experiments carried out to compare and validate alternatives to animal experimentation based on embryo and cell cultures. The computational procedures needed to apply the robust methods of analysis used the conversational statistical package ROBSYS, to which special commands for the analysis of parallel-line and slope-ratio assays have been added.
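ROBSYS itself is not widely available; as a hedged sketch of the same idea, Huber-type M-estimation can be reproduced with the statsmodels package. The doses and responses below are invented, and this is not the authors' assay analysis:

# Sketch: Huber-type robust regression with statsmodels, standing in
# for the ROBSYS routines described above (illustrative data only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
dose = np.repeat([1.0, 2.0, 4.0, 8.0], 5)          # hypothetical assay doses
response = 2.0 + 1.5 * np.log(dose) + rng.normal(0, 0.2, dose.size)
response[0] += 3.0                                  # inject one gross outlier

X = sm.add_constant(np.log(dose))                   # log-dose design matrix
fit = sm.RLM(response, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)   # intercept and slope, with the outlier downweighted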
Abstract:
This study is part of an ongoing collaborative effort between the medical and signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques to the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment, and effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and non-nasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
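A minimal sketch of this kind of GMM-based two-class detection, using scikit-learn; the feature vectors are random placeholders standing in for per-frame speech spectra such as MFCCs, not the paper's actual database:

# Sketch of GMM detection (healthy vs. severe apnoea), in the spirit of
# the approach above. Features are synthetic placeholders for MFCCs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
healthy_frames = rng.normal(0.0, 1.0, size=(500, 13))   # hypothetical MFCCs
apnoea_frames = rng.normal(0.5, 1.2, size=(500, 13))

gmm_healthy = GaussianMixture(n_components=8, covariance_type="diag",
                              random_state=0).fit(healthy_frames)
gmm_apnoea = GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(apnoea_frames)

def classify(frames):
    # Average frame log-likelihood under each class model; pick the larger.
    ll_h = gmm_healthy.score_samples(frames).mean()
    ll_a = gmm_apnoea.score_samples(frames).mean()
    return "apnoea" if ll_a > ll_h else "healthy"

print(classify(rng.normal(0.5, 1.2, size=(200, 13))))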
Abstract:
This project carries out research both on finding predictors via clustering techniques and on reviewing free Data Mining software. The research is based on a case study in which, in addition to the free KDD software used by the scientific community, a new free tool for pre-processing the data is presented. The predictors are intended for the e-learning domain, since the data from which they have to be inferred are student qualifications from different e-learning environments. Through our case study, not only are clustering algorithms tested, but additional goals are also pursued.
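As an illustrative sketch of deriving predictors by clustering student qualifications (not the project's actual tool or data; the grade matrix below is synthetic):

# Sketch: clustering student grades to obtain coarse predictor profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical rows = students, columns = continuous-assessment grades.
grades = np.vstack([rng.normal(4, 1, (30, 5)),     # struggling group
                    rng.normal(6, 1, (30, 5)),     # average group
                    rng.normal(9, 0.5, (30, 5))])  # strong group

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(grades)
# Each cluster's mean profile can then serve as a predictor of final outcome.
for k in range(3):
    print(k, grades[labels == k].mean(axis=0).round(1))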
Abstract:
In the eighties, John Aitchison (1986) developed a new methodological approach for the statistical analysis of compositional data. This new methodology was implemented in Basic routines grouped under the name CODA, and later as NEWCODA in Matlab (Aitchison, 1997). Since then, several other authors have published extensions to this methodology: Marín-Fernández and others (2000), Barceló-Vidal and others (2001), Pawlowsky-Glahn and Egozcue (2001, 2002) and Egozcue and others (2003). (...)
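The common thread of this methodology is to move compositions into log-ratio coordinates before applying standard statistics. A minimal sketch of the centred log-ratio (clr) transform, written independently of the CODA/NEWCODA packages cited above:

# Sketch: centred log-ratio (clr) transform, the kind of log-ratio
# coordinate underlying Aitchison's methodology (not CODA/NEWCODA code).
import numpy as np

def clr(composition):
    """Map a composition (positive parts, typically summing to 1) to
    centred log-ratio coordinates: log(x_i / geometric_mean(x))."""
    x = np.asarray(composition, dtype=float)
    log_x = np.log(x)
    return log_x - log_x.mean()

sample = [0.1, 0.3, 0.6]        # a 3-part composition
print(clr(sample))              # coordinates sum to zero by construction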
Abstract:
This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches, in terms of the acquisition of both concepts and engineering competencies. First, the Collage authoring tool guides and supports the course teacher in authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. The Gridcole system then supports the enactment of those designs by guiding the students through the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the evaluation methodology applied, and discusses the most significant findings derived from the case study.
Abstract:
Mass spectrometry-based proteomics is the study of the proteome - the set of all proteins expressed in a cell, tissue or organism - using mass spectrometry. Proteins are cut into smaller pieces - peptides - using proteolytic enzymes and separated using different separation techniques. The different fractions, each containing several hundred peptides, are then analyzed by mass spectrometry. The masses of the peptides entering the instrument are recorded, and each peptide is sequentially fragmented to obtain its amino acid sequence. Each peptide sequence, with its corresponding mass, is then searched against a protein database to identify the protein to which it belongs. This thesis presents new method developments in this field. In the first part, the thesis describes the development of identification methods. It shows the importance of protein enrichment methods for gaining access to medium-to-low-abundance proteins in a human milk sample. It uses repeated injections to increase protein coverage and confidence in identification, and demonstrates the impact of new database releases on protein identification lists. In addition, it successfully uses mass spectrometry as an alternative to antibody-based assays to validate the presence of 34 different recombinant constructs of Staphylococcus aureus pathogenic proteins expressed in a Lactococcus lactis strain. In the second part, the development of quantification methods is described. It presents new stable isotope labelling approaches based on N- and C-terminus labelling of proteins, and describes the first method for labelling carboxylic groups at the protein level using 13C stable isotopes. In addition, a new quantitative approach called ANIBAL is explained that labels all amino and carboxylic groups at the protein level.
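As a loose sketch of the identification idea only (in-silico digestion and peptide mass computation, for matching against recorded spectra), using standard monoisotopic residue masses and an invented sequence; this is not the thesis's pipeline:

# Sketch: cut a protein sequence with trypsin-like rules and compute
# peptide masses. The residue mass table is a subset, enough for the
# example; the sequence is made up.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "T": 101.04768, "L": 113.08406, "K": 128.09496,
                "E": 129.04259, "R": 156.10111}
WATER = 18.01056  # added once per peptide

def tryptic_peptides(sequence):
    """Cleave after K or R, but not when the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

for pep in tryptic_peptides("GASPKVTLEKRPAT"):
    print(pep, round(peptide_mass(pep), 4))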
Abstract:
This paper examines the statistical analysis of social reciprocity at the group, dyadic, and individual levels. Given that testing statistical hypotheses regarding social reciprocity can also be of interest, a statistical procedure based on Monte Carlo sampling has been developed and implemented in R in order to allow social researchers to describe groups and make statistical decisions.
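The authors' procedure is implemented in R; the Python sketch below only illustrates the general Monte Carlo resampling logic, with a toy reciprocity statistic on an invented directed interaction matrix:

# Sketch of a Monte Carlo test in the spirit of the abstract above.
import numpy as np

rng = np.random.default_rng(3)
obs = np.array([[0, 5, 1],       # toy directed interaction counts:
                [4, 0, 2],       # obs[i, j] = acts from i towards j
                [1, 3, 0]])

def reciprocity(m):
    # Fraction of interactions "matched" by the partner's interactions.
    return np.minimum(m, m.T).sum() / m.sum()

observed = reciprocity(obs)
total = int(obs.sum())
idx = [(i, j) for i in range(3) for j in range(3) if i != j]

null = []
for _ in range(5000):
    # Redistribute the same number of acts uniformly over off-diagonal cells.
    sim = np.zeros_like(obs)
    counts = rng.multinomial(total, np.full(len(idx), 1.0 / len(idx)))
    for (i, j), c in zip(idx, counts):
        sim[i, j] = c
    null.append(reciprocity(sim))

# One-sided Monte Carlo p-value with the usual +1 correction.
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(observed, p)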
Abstract:
In this research work we searched for open-source libraries that support graph drawing and visualisation and can run in a browser. These libraries were then evaluated to determine which is best suited to the task. The result was that d3.js is the library with the greatest functionality, flexibility, and customisability. We then developed an open-source software tool, written in JavaScript so that it can run in a browser, into which d3.js was incorporated.
Abstract:
The thesis investigated effective knowledge management in the research and development network of a global forest industry company. The goal was to construct a description of research and development content management using the knowledge management software in use at the case company. First, the concepts of knowledge and knowledge management were clarified with the help of the literature. Based on this review, a process model for managing knowledge effectively in a company was presented. Next, the requirements that knowledge management places on information technology, and the role of information technology in the process model, were analysed. The network's requirements for knowledge management were determined by interviewing key personnel of the company. Based on the interviews, the system had to support the work of virtual project teams effectively, enable knowledge sharing between mills, and support the management of the content entered into the system. First, the structure and access permissions of the system's user interface were tailored to the network's needs. The structure provides a workspace for project teams and areas for knowledge sharing between mills. For content management, a category scheme, a profiled portal, and predefined searches were developed in the system. The developed model makes project team work more efficient, enables existing knowledge to be exploited at the mill level, and facilitates the monitoring of research and development activities. As further measures, integrating the system with the mills' operational control systems and adopting the software as a mill-level project management tool are proposed. The aim of these proposals is to ensure both effective knowledge sharing between mills and effective knowledge management at the mill level.
Abstract:
This master’s thesis aims to study and present, on the basis of the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, where the individuals are candidate solutions; the next population of candidate solutions is formed by exploiting the good properties of the current population through mutation and crossover operations. The literature was searched for different kinds of evolutionary algorithm applications related to software engineering, and the applications found were classified and presented, along with the necessary basics of evolutionary algorithms. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling for parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer-Aided Software Engineering tools based on evolutionary algorithms.
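A minimal sketch of the evolutionary loop this abstract describes, with the classic OneMax problem (maximise the number of 1-bits) standing in for a real software-engineering objective; illustrative only:

# Minimal evolutionary algorithm: evaluate fitness, keep good individuals,
# and form the next population with crossover and mutation.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60

def fitness(genome):
    return sum(genome)                      # OneMax: count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=1.0 / GENOME_LEN):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]    # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)))  # best fitness found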