959 results for VHDL (Computer hardware description language)
Abstract:
This is a study of a state-of-the-art implementation of a new computer integrated testing (CIT) facility within a company that designs and manufactures transport refrigeration systems. The aim was to use current hardware, software and planning procedures in the design and implementation of three CIT systems. Typical CIT system components include data acquisition (DAQ) equipment, application and analysis software, communication devices, computer-based instrumentation and computer technology. It is shown that the introduction of computer technology into the area of testing can have a major effect on issues such as efficiency, flexibility, data accuracy, test quality and data integrity. The findings reaffirm that computer integration continues to benefit organisations, and that recent advances in computer technology, communication methods and software capabilities now make less expensive yet more sophisticated test solutions possible. This allows more organisations to benefit from the many advantages associated with CIT. Examples of computer-integrated test set-ups and their associated benefits are discussed.
Abstract:
This project aims at the performance analysis of low-cost RISC processors and the design of a simple RISC processor for general-purpose applications related to data acquisition and simple data processing. The result is the SR3C processor, a 32-bit RISC-architecture design. The processor has been described and simulated in the VHDL hardware description language and synthesised on an FPGA. Thanks to its compliance with the Wishbone bus standard, the processor is ready to be used in real SoCs. It can also serve as an educational platform thanks to the assembler and simulator developed for it.
Abstract:
The model developed at the Institut universitaire de médecine sociale et préventive in Lausanne uses a computer program to simulate admissions to and discharges from general-care hospitals. The simulation is based on routinely collected hospital data; in particular, it takes into account certain daily and seasonal variations, the number of admissions, and the hospital's case mix, that is, the distribution of cases by clinical group and patient age.
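To illustrate the kind of model described, the sketch below simulates daily hospital admissions and discharges in Python. The abstract does not specify the model's internals, so the Poisson arrivals, seasonal and weekend factors, and geometric length of stay used here are purely illustrative placeholders, not values from the Lausanne model.

```python
import numpy as np

# Toy admission/discharge simulation; every parameter below is a
# hypothetical placeholder, not a value from the Lausanne model.
rng = np.random.default_rng(0)
MEAN_ADMISSIONS = 20          # hypothetical mean daily admissions
MEAN_STAY_DAYS = 5            # hypothetical mean length of stay

departures = []               # scheduled discharge day of each inpatient
for day in range(28):
    season = 1.2 if (day % 365) < 90 else 1.0   # toy seasonal variation
    weekday = 0.7 if day % 7 >= 5 else 1.0      # toy weekend dip
    n_in = rng.poisson(MEAN_ADMISSIONS * season * weekday)
    # geometric stay: discharge n days after admission, n >= 1
    departures += list(day + rng.geometric(1 / MEAN_STAY_DAYS, n_in))
    departures = [d for d in departures if d > day]  # remove today's leavers
    print(f"day {day:2d}: admissions {n_in:2d}, occupancy {len(departures)}")
```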
Abstract:
Development of a system capable of processing natural-language queries entered by the user via the keyboard. The system can answer queries in Spanish related to an application domain represented by a relational database.
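The abstract does not describe how queries are mapped onto the database, so the following Python fragment is only a toy illustration of the task: one hypothetical Spanish question pattern rewritten as SQL. The pattern, table name and schema are all invented for the example.

```python
import re
from typing import Optional

# One hypothetical question pattern; a real system would use a proper
# grammar and a mapping to the actual database schema.
PATTERN = re.compile(r"¿cuántos (\w+) hay\?", re.IGNORECASE)

def to_sql(question: str) -> Optional[str]:
    """Rewrite a matching Spanish question as a SQL count query."""
    m = PATTERN.match(question.strip())
    if m:
        return f"SELECT COUNT(*) FROM {m.group(1)};"
    return None

print(to_sql("¿Cuántos empleados hay?"))  # SELECT COUNT(*) FROM empleados;
```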
Abstract:
Collaborative activities, in which students actively interact with each other, have been shown to provide significant learning benefits. In Computer-Supported Collaborative Learning (CSCL), these collaborative activities are assisted by technologies. However, the use of computers does not guarantee collaboration, as free collaboration does not necessarily lead to fruitful learning. Therefore, practitioners need to design CSCL scripts that structure the collaborative settings so that they promote learning. However, not all teachers have the technical and pedagogical background needed to design such scripts. With the aim of assisting teachers in designing effective CSCL scripts, we propose a model to support the selection of reusable good practices (formulated as patterns) so that they can be used as a starting point for their own designs. This model is based on a pattern ontology that computationally represents the knowledge captured in a pattern language for the design of CSCL scripts. A preliminary evaluation of the proposed approach is provided with two examples based on a set of meaningful interrelated patterns computationally represented with the pattern ontology, and a paper-prototyping experience carried out with two teachers. The results offer interesting insights towards the implementation of the pattern ontology in software tools.
Abstract:
Extensible Markup Language (XML) is a generic computing language that provides an outstanding case study in the commodification of service standards. The development of this language in the late 1990s marked a shift in computer science, as its extensibility made it possible to store and share any kind of data, and many office software suites rely on it. The chapter highlights how the largest multinational firms pay special attention to gaining a recognised international standard for such a major technological innovation. It argues that standardisation processes affect market structures and can lead to market capture. By examining how a strategic use of standardisation arenas can generate profits, it shows that Microsoft succeeded in making its own technical solution a recognised ISO standard in 2008, although the same arena had already adopted, two years earlier, the open-source standard set by IBM and Sun Microsystems. Yet XML standardisation also helped to establish a distinct model of information technology services at the expense of Microsoft's monopoly on proprietary software.
Abstract:
BACKGROUND: Molecular interaction information is a key resource in modern biomedical research. Publicly available data have previously been provided in a broad array of diverse formats, making them very difficult to access. The publication and wide implementation of the Human Proteome Organisation Proteomics Standards Initiative Molecular Interactions (HUPO PSI-MI) format in 2004 was a major step towards the establishment of a single, unified format in which molecular interactions should be presented, but it focused purely on protein-protein interactions. RESULTS: The HUPO-PSI has further developed the PSI-MI XML schema to enable the description of interactions between a wider range of molecular types, for example nucleic acids, chemical entities, and molecular complexes. Extensive details about each supported molecular interaction can now be captured, including the biological role of each molecule within that interaction, a detailed description of interacting domains, and the kinetic parameters of the interaction. The format is supported by data management and analysis tools and has been adopted by major interaction data providers. Additionally, a simpler, tab-delimited format, MITAB2.5, has been developed for the benefit of users who require only minimal information in an easily accessible configuration. CONCLUSION: The PSI-MI XML2.5 and MITAB2.5 formats have been jointly developed by interaction data producers and providers from both the academic and commercial sectors, and are already widely implemented and well supported by an active development community. PSI-MI XML2.5 enables the description of highly detailed molecular interaction data and facilitates data exchange between databases and users without loss of information. MITAB2.5 is a simpler format appropriate for fast Perl parsing or loading into Microsoft Excel.
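For a concrete sense of the tab-delimited format, here is a minimal Python reader for MITAB2.5 lines. The 15-column layout follows the published format; the column names and the sample line below are constructed for illustration.

```python
# Minimal MITAB2.5 reader: 15 tab-separated columns per interaction.
COLUMNS = [
    "id_a", "id_b", "alt_id_a", "alt_id_b", "alias_a", "alias_b",
    "detection_method", "first_author", "publication", "taxid_a",
    "taxid_b", "interaction_type", "source_db", "interaction_id",
    "confidence",
]

def parse_mitab_line(line: str) -> dict:
    fields = line.rstrip("\n").split("\t")
    record = dict(zip(COLUMNS, fields))
    record["publication"] = record["publication"].split("|")  # multi-valued
    return record

# Constructed sample line (p53 / MDM2 two-hybrid entry, illustration only)
sample = ('uniprotkb:P04637\tuniprotkb:Q00987\t-\t-\t-\t-\t'
          'psi-mi:"MI:0018"(two hybrid)\tAuthor A (2005)\tpubmed:12345678\t'
          'taxid:9606\ttaxid:9606\tpsi-mi:"MI:0915"(physical association)\t'
          'psi-mi:"MI:0469"(IntAct)\tintact:EBI-0000001\t-')
print(parse_mitab_line(sample)["id_a"])  # uniprotkb:P04637
```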
Abstract:
This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building on this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. It then applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarges this set. The extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the strengths and weaknesses of this approach.
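The abstract does not spell out the distributional partitioning step. A classic algorithm of the same family, and a reasonable stand-in here, is Sukhotin's consonant-vowel discrimination procedure; the sketch below is a minimal Python rendering of it (not necessarily what ARABICA implements), applied to a toy Latin-transcribed wordlist.

```python
from collections import defaultdict

def sukhotin(words):
    """Partition symbols into putative vowels and consonants using
    Sukhotin's algorithm: vowels are the symbols that most often
    alternate with (stand next to) the others."""
    adj = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip(w, w[1:]):
            if a != b:                  # ignore the diagonal
                adj[a][b] += 1
                adj[b][a] += 1
    sums = {s: sum(adj[s].values()) for s in adj}
    vowels = set()
    while True:
        rest = [s for s in sums if s not in vowels]
        if not rest:
            break
        v = max(rest, key=lambda s: sums[s])
        if sums[v] <= 0:                # no positive sum left: done
            break
        vowels.add(v)
        for s in rest:                  # discount adjacency to the new vowel
            if s != v:
                sums[s] -= 2 * adj[s].get(v, 0)
    return vowels, set(sums) - vowels

vs, cs = sukhotin(["kataba", "kutiba", "maktab", "kitab"])
print("vowels:", vs, "consonants:", cs)
```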
Abstract:
The Learning Affect Monitor (LAM) is a new computer-based assessment system that integrates basic dimensional evaluation and discrete description of affective states in daily life, based on an autonomously adapting system. Subjects evaluate their affective states in a three-dimensional space (a valence and activation circumplex plus global intensity) and then qualify them using up to 30 adjective descriptors chosen from a list. The system gradually adapts to the user, enabling the affect descriptors it presents to be increasingly relevant. An initial study with 51 subjects, using one week of time-sampling with 8 to 10 randomized signals per day, produced n = 2,813 records with good reliability measures (e.g., a response rate of 88.8% and a mean split-half reliability of .86), user acceptance, and usability. Multilevel analyses show circadian and hebdomadal (weekly) patterns, and significant individual and situational variance components of the basic dimension evaluations. Validity analyses indicate sound assignment of qualitative affect descriptors in the bidimensional semantic space according to the circumplex model of basic affect dimensions. The LAM assessment module can be implemented on different platforms (handheld, desktop, mobile phone) and provides very rapid and meaningful data collection, preserving complex and interindividually comparable information in the domain of emotion and well-being.
Abstract:
This document sums up a project aimed at building a new web interface to the Apertium machine translation platform, including pre-editing and post-editing environments. It contains a description of the work accomplished on this project, as well as an overview of possible future developments.
Abstract:
The work comprises the comparison and selection of control hardware for a parallel-kinematic robot, together with the formulation of condition-monitoring principles for that robot. The control hardware consists of an industrial computer and a fieldbus. Both the computer and the bus are covered in a theory part and a more detailed selection part: the theory part explains the operating principles of the devices in greater detail, while the selection part explains why a particular device was chosen for controlling the robot. Condition-monitoring theory and the condition-monitoring methods for the parallel robot form the second part of the work. The theory part contains a general account of failure and monitoring. The condition-monitoring methods for this special robot are presented in a fixed order: first, the possible fault situations are presented; second, the detection of these faults is illustrated.
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a numerical forward model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction for one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy in the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation by 4-17 mm for conductivity changes deep in the brain and for those due to epilepsy, and, overall, led to a substantial improvement in image quality. This suggests that incorporating anisotropy in the numerical models used for image reconstruction is likely to improve EIT image quality.
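In its simplest linearized form, the sensitivity-matrix approach mentioned above amounts to a regularized least-squares solve. The sketch below shows a one-step Tikhonov-regularized reconstruction in Python with random placeholder data; the regularization choice and the matrix sizes are assumptions, and the study's actual Jacobian comes from the anisotropic FEM.

```python
import numpy as np

def linear_reconstruction(J, dv, lam=1e-3):
    """One-step linear EIT reconstruction.

    J   : (m, n) sensitivity (Jacobian) matrix mapping conductivity
          changes on n mesh elements to m boundary measurements
    dv  : (m,) measured change in boundary voltages
    lam : Tikhonov regularisation parameter (hypothetical value)
    """
    n = J.shape[1]
    # solve (J^T J + lam I) dsigma = J^T dv
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

# Toy usage with random placeholders for the Jacobian and the data
rng = np.random.default_rng(0)
J = rng.standard_normal((208, 500))         # e.g. 208 measurements, 500 elements
dv = J @ (0.01 * rng.standard_normal(500))  # synthetic forward data
print(linear_reconstruction(J, dv).shape)   # (500,)
```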
Abstract:
The objective of this article is to systematically assess the quality of French-language web-based information on alcohol dependence. The authors analysed, using a standardised pro forma, the 20 most highly ranked pages identified by 3 common internet search engines using 2 keywords; in total, 45 sites were analysed. The authors conclude that the overall quality of the sites was relatively poor, especially for the description of possible treatments, although with wide variability. Content quality was not correlated with other aspects of quality such as interactivity, aesthetics or accountability.
Abstract:
This bachelor's thesis examines the introductory teaching of programming, a central topic in basic computer science education, and the problems associated with it. The work looks at basic methods of and approaches to programming instruction, as well as at solutions that can make the teaching more effective. These solutions include, among other things, choosing the programming language, finding a suitable development environment, and identifying teaching aids to support the course. In addition, choosing the activities involved in running the course, such as exercises and possible weekly assignments, is part of this work. The thesis approaches the topic by studying Python's suitability for introductory programming teaching, for example by comparing it with other common existing teaching languages such as C, C++ and Java. It examines the strengths and weaknesses of the language and investigates whether Python can naturally be used as the main teaching language. In addition, the work considers what should be taught on such a course, how the course would be run most effectively, and what kind of technical framework it would be sensible to choose for implementing it.
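As a taste of the kind of comparison the thesis draws, consider a typical first-week exercise in Python: the program stays close to the problem itself, whereas equivalent Java or C++ versions surround the same few lines of logic with class and I/O boilerplate. The snippet is illustrative only, not taken from the thesis.

```python
# First-week exercise: read numbers from one line and print their average.
values = [float(x) for x in input("Enter numbers: ").split()]
if values:
    print("average:", sum(values) / len(values))
else:
    print("no numbers given")
```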