966 results for High-level languages
Abstract:
The Chlamydiales order is composed of nine families of strictly intracellular bacteria. Among them, Chlamydia trachomatis, C. pneumoniae, and C. psittaci are established human pathogens, whereas Waddlia chondrophila and Parachlamydia acanthamoebae have emerged as new human pathogens. However, despite their medical importance, their biodiversity and ecology remain to be studied. Although arthropods, and ticks in particular, are well known to be vectors of numerous infectious agents such as viruses and bacteria, virtually nothing is known about the relationship between ticks and Chlamydiae. This study investigated the prevalence of Chlamydiae in ticks. Specifically, 62,889 Ixodes ricinus ticks, consolidated into 8,534 pools, were sampled at 172 collection sites throughout Switzerland and investigated by pan-Chlamydiales quantitative PCR (qPCR) for the presence of Chlamydiales DNA. Among the pools, 543 (6.4%) gave positive results, and the estimated prevalence in individual ticks was 0.89%. For 359 of the positive pools we obtained 16S rRNA sequences, allowing classification of the Chlamydiales DNA at the family level. A high level of biodiversity was observed, since six of the nine families belonging to the Chlamydiales order were detected. The most common were Parachlamydiaceae (33.1%) and Rhabdochlamydiaceae (29.2%); "unclassified Chlamydiales" (31.8%) were also often detected. Given the large amount of Chlamydiales DNA recovered from ticks, this report opens up new perspectives for further work on whole-genome sequencing to increase our knowledge of Chlamydiales biodiversity. This epidemiological study also demonstrates the presence of Chlamydia-related bacteria within Ixodes ricinus ticks and suggests a role for ticks in the transmission of, and as a reservoir for, these emerging pathogenic Chlamydia-related bacteria.
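The prevalence figures above can be reproduced under the standard pooled-testing assumption that a pool tests positive if at least one of its ticks is infected. The calculation below is only a back-of-the-envelope sketch of that estimate, not the authors' actual statistical method:

```python
# Figures from the abstract: 62,889 ticks consolidated into 8,534 pools,
# of which 543 tested positive by pan-Chlamydiales qPCR.
ticks = 62_889
pools = 8_534
positive_pools = 543

pool_positivity = positive_pools / pools   # fraction of positive pools (~6.4%)
mean_pool_size = ticks / pools             # average ticks per pool (~7.4)

# Assuming infections are independent and a pool is positive whenever
# at least one tick in it is infected:
#     P(pool positive) = 1 - (1 - p) ** k
# solving for the individual prevalence p gives:
p = 1 - (1 - pool_positivity) ** (1 / mean_pool_size)

print(f"pool positivity: {pool_positivity:.1%}")           # 6.4%
print(f"estimated individual prevalence: {p:.2%}")         # 0.89%
```

With the reported counts, this simple model recovers the 0.89% individual prevalence quoted in the abstract.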
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks such as CMMI and RUP to more lightweight agile methodologies. The need to manage this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. They are used to define processes in a way that allows easy management of processes, for example process dissemination, process tailoring and process enactment. Process modeling languages are usually used as tools for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles representing the dissertation research done on process modeling over an approximately five-year period. The research follows the classical engineering research discipline: the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed.
The research applies a variety of research techniques, ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, such as lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. It also shows how process modeling can be used to compare different software development situations more easily and to analyze their differences in a systematic way; models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of the other studies in the dissertation: although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work. However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work.
This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies in which the modeling techniques are used, for example, to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
We report here the construction of a vector derived from the pET3-His and pRSET plasmids for the expression and purification of recombinant proteins in Escherichia coli based on T7 phage RNA polymerase. The resulting pAE plasmid combines the advantages of both vectors: small size (pRSET), expression of a short N-terminal 6xHis tag (pET3-His) and a high plasmid copy number (pRSET). The small size of the vector (2.8 kb) and the high copy number per cell (200-250 copies) facilitate subcloning and sequencing procedures when compared to the pET system (pET3-His: 4.6 kb and 40-50 copies) and also result in high-level expression of recombinant proteins (20 mg purified protein/liter of culture). In addition, the pAE vector enables the expression of a fusion protein with a minimal amino-terminal hexa-histidine affinity tag (a tag of 9 amino acids using the XhoI restriction enzyme for the 5' cloning site), as in the case of the pET3-His plasmid and in contrast to proteins expressed from pRSET plasmids (a tag of 36 amino acids using the BamHI restriction enzyme for the 5' cloning site). Thus, although proteins expressed from pRSET plasmids also carry a hexa-histidine tag, the fusion peptide is much longer and may be a problem for some recombinant proteins.
Abstract:
This master's thesis surveys the advantages and drawbacks of using the dynamic functional programming language Scheme for video game development. The method used is first based on a more theoretical approach: a study of the programming needs expressed by this type of development, together with a detailed description of the Scheme features relevant to video game development, is given to put the subject in context. A practical approach is then taken through the development of two video games of increasing complexity: Space Invaders and Lode Runner. The development of these games led to the extension of Scheme with several domain-specific languages and libraries, notably an object-oriented programming system and a coroutine system. The experience gained through developing these games is finally compared with that of other video game developers in the industry who have used Scheme to create commercial titles. In summary, using this language made it possible to reach a high level of abstraction favouring the modularity of the games developed, without affecting their performance.
Abstract:
My research deals with the pronunciation of Spanish as a foreign language among Quebec students, their concrete difficulties, and the corrective guidelines that can be proposed for them. In a more general first part, we discuss the teaching of pronunciation and the place it occupies in foreign-language teaching. We believe that pronunciation is an aspect of language that has been set aside in order to emphasize communication: if a "bad" pronunciation does not hinder comprehension or communication, it is neither corrected nor worked on. We may therefore end up with students who have a high level of Spanish but whose pronunciation has certain gaps. We also define what we mean by "better" or "bad" pronunciation, and question the relevance of teaching phonetics. We also ask what place pronunciation holds depending on the teaching methodology used, and analyse the quantity and quality of the pronunciation exercises present (or absent) in textbooks, and whether they meet the requirements of official documents such as the Common European Framework of Reference or the Plan curricular of the Instituto Cervantes. In a second part, we examine the factors that condition the learning of a language and the improvement of pronunciation in a foreign language, since we believe that whatever the student's age, there is always room for improvement in pronunciation. We then consider the general tendencies of francophones when pronouncing Spanish, carry out a contrastive study of Spanish and French phonemes, and study in more detail the tendencies of Quebec students, since we believe they have certain assets compared with other francophones.
In a third part, we propose exercises aimed at improving our students' pronunciation and, to verify the effectiveness of these exercises, we record students who have benefited from them and others who have not. This comparative study seeks to show that these exercises genuinely help and that they, or other exercises of this kind, should be included in teaching. The questionnaire in question focuses mainly on the phoneme [r], which we believe to be one of, if not the, most difficult sounds to pronounce in Spanish (both the simple and the trilled vibrant). Naturally, part of this chapter is devoted to the analysis of the results.
Abstract:
This thesis studies models of high-dimensional sequences based on recurrent neural networks (RNNs) and their application to music and speech. Although in principle RNNs can represent the long-term dependencies and complex temporal dynamics of sequences of interest such as video, audio and natural language, they have not been used to their full potential since their introduction by Rumelhart et al. (1986a), owing to the difficulty of training them efficiently by gradient descent. Recently, the successful application of Hessian-free optimization and other advanced training techniques has triggered a resurgence of their use in several state-of-the-art systems. The work in this thesis is part of that development. The central idea is to exploit the flexibility of RNNs to learn a probabilistic description of sequences of symbols, that is, high-level information associated with the observed signals, which in turn can serve as a prior to improve the accuracy of information retrieval. For example, by modelling the evolution of note groups in polyphonic music, of chords in a harmonic progression, of phonemes in a spoken utterance, or of individual sources in an audio mixture, we can significantly improve methods for polyphonic transcription, chord recognition, speech recognition and audio source separation, respectively. The practical application of our models to these tasks is detailed in the last four articles presented in this thesis. In the first article, we replace the output layer of an RNN with conditional restricted Boltzmann machines to describe much richer multimodal output distributions. In the second article, we evaluate and propose advanced methods for training RNNs.
In the last four articles, we examine different ways of combining our symbolic models with deep networks and non-negative matrix factorization, notably through products of experts, input/output architectures, and generative frameworks that generalize hidden Markov models. We also propose and analyse efficient inference methods for these models, such as greedy chronological search, high-dimensional beam search, pruned beam search and gradient descent. Finally, we address the issues of label bias, teacher forcing, temporal smoothing, regularization and pre-training.
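The beam search family of inference methods mentioned in this abstract can be illustrated generically. The sketch below is a toy standard beam search, not the thesis' high-dimensional variant, and the conditional model it searches over is invented for illustration:

```python
import math

def toy_model(prefix):
    # Invented conditional distribution p(next symbol | prefix) over {0, 1}:
    # it slightly favours alternating sequences.
    if prefix and prefix[-1] == 0:
        return {0: 0.3, 1: 0.7}
    return {0: 0.6, 1: 0.4}

def beam_search(model, length, beam_width=2):
    """Keep only the `beam_width` most probable partial sequences per step."""
    beams = [((), 0.0)]                      # (sequence, log-probability)
    for _ in range(length):
        candidates = []
        for seq, logp in beams:
            for sym, p in model(seq).items():
                candidates.append((seq + (sym,), logp + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]      # prune to the best prefixes
    return beams[0][0]                       # most probable full sequence found

print(beam_search(toy_model, 4))  # (0, 1, 0, 1)
```

With this toy model the search correctly recovers the alternating sequence, which is the globally most probable one here; with beam_width=1 the procedure degenerates into the greedy chronological search also mentioned in the abstract.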
Abstract:
This thesis is entitled "Development Planning at the State Level in India: A Case Study with Reference to Kerala, 1957-84". Planning in India is a concurrent subject, with the Centre and the States having well-defined domains of jurisdiction with regard to planning functions and sources of resource mobilisation. The lack of academic interest in state-level planning has its genesis in the widely held belief that, in the extant scheme of Centre-State economic relations, the states have little scope for initiative in planning. Both at the theoretical and empirical levels, Kerala has attached very great importance to planning. It has been the locale of wide and deep discussions on the various dimensions of planning. In Kerala's development process, the leading sector consists of social services such as education and public health. One point that needs special emphasis in this regard is that the high demand for education in Kerala cannot be attributed to the Keralites' 'unique urge' for education; rather, it is related to the very high level of unemployment in the state (Kerala has the highest level of unemployment in the country). In resource allocation under the Five Year Plans, Kerala attached the highest weightage to power generation, hydro-electric projects being the major source of power in the state; nearly one-fourth of plan resources has been claimed by hydro-electric projects. In the agricultural sector, Kerala's level of productive use of electric power is one of the lowest. As is evident from the above, planning in Kerala has not enabled the state to solve its basic problems. More 'scientific' planning, in the sense of applying more sophisticated planning techniques, is obviously not the answer. The answer, on the contrary, consists of more fundamental changes, some of which can be brought about through an effective use of measures well within the power of the State Government.
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for a small niche only: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures: parallel programming nowadays is a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to help avoid them in the future. For programmers' education, an online resource called the Parawiki was set up to collect experiences and knowledge in the field of parallel programming. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision.
Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread-cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
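The task-pool component described above can be sketched generically. The following is an illustrative thread-based Python sketch of the pattern, not AthenaMP's actual C++/OpenMP API: workers repeatedly pull work items from a shared queue, which is what makes the pattern suit the irregular algorithms the abstract mentions.

```python
from queue import Queue
from threading import Thread

def run_task_pool(initial_tasks, num_workers=4):
    """Run callables from a shared queue on a pool of worker threads."""
    queue = Queue()
    results = []
    for t in initial_tasks:
        queue.put(t)

    def worker():
        while True:
            task = queue.get()
            if task is None:           # poison pill: shut this worker down
                queue.task_done()
                break
            results.append(task())     # collect the task's result
            queue.task_done()

    threads = [Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    queue.join()                       # wait until all real tasks are done
    for _ in threads:
        queue.put(None)                # then release the workers
    for th in threads:
        th.join()
    return results

squares = run_task_pool([lambda i=i: i * i for i in range(8)])
print(sorted(squares))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In a real irregular algorithm a task may itself enqueue further tasks; the queue-based structure above accommodates that, whereas a static loop-splitting construct would not.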
Abstract:
Brightness judgments are a key part of the primate brain's visual analysis of the environment. There is general consensus that the perceived brightness of an image region is based not only on its actual luminance, but also on the photometric structure of its neighborhood. However, it is unclear precisely how a region's context influences its perceived brightness. Recent research has suggested that brightness estimation may be based on a sophisticated analysis of scene layout in terms of transparency, illumination and shadows. This work has called into question the role of low-level mechanisms, such as lateral inhibition, as explanations for brightness phenomena. Here we describe experiments with displays for which low-level and high-level analyses make qualitatively different predictions, and with which we can quantitatively assess the trade-offs between low-level and high-level factors. We find that brightness percepts in these displays are governed by low-level stimulus properties, even when these percepts are inconsistent with higher-level interpretations of scene layout. These results point to the important role of low-level mechanisms in determining brightness percepts.
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
Abstract:
Inter-simple sequence repeat (ISSR) analysis and aggressiveness assays were used to investigate genetic variability within a global collection of Fusarium culmorum isolates. A set of four ISSR primers was tested, of which three amplified a total of 37 bands, 30 of which (81%) were polymorphic. The intraspecific diversity was high, ranging from four to 28 different ISSR genotypes for F. culmorum depending on the primer. The combined analysis of ISSR data revealed 59 different genotypes clustered into seven distinct clades amongst the 75 isolates of F. culmorum examined. All the isolates were assayed for aggressiveness on the winter wheat cv. 'Armada'. Significant quantitative variation in aggressiveness was found among the isolates. The ISSR and aggressiveness variation existed on both a macro- and a micro-geographical scale. The data suggested long-range dispersal of F. culmorum and indicated that this fungus may have been introduced into Canada from Europe. In addition to the high level of intraspecific diversity observed in F. culmorum, the index of multilocus association calculated using ISSR data indicated that reproduction in F. culmorum cannot be exclusively clonal and that recombination is likely to occur.
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
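The retrieval step implied by such keyword-based descriptors can be sketched as follows: once a labelling framework like CCL has assigned keywords to image regions, each image can be summarized as a keyword-frequency vector and ranked against a query by cosine similarity. The vocabulary, images and query below are invented for illustration and do not reflect CCL's actual representation:

```python
import math

# Hypothetical keyword vocabulary (the real system's vocabulary is not
# specified in the abstract).
VOCAB = ["tiger", "grass", "water", "sky", "building"]

def to_vector(keywords):
    """Turn a list of region keywords into a keyword-frequency vector."""
    return [keywords.count(w) for w in VOCAB]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

images = {
    "img1": to_vector(["tiger", "grass", "grass"]),
    "img2": to_vector(["building", "sky"]),
    "img3": to_vector(["water", "sky", "grass"]),
}

query = to_vector(["tiger", "grass"])
ranked = sorted(images, key=lambda k: cosine(images[k], query), reverse=True)
print(ranked[0])  # img1: the image whose keywords best match the query
```

The same vectors could feed a clustering algorithm instead of a ranking, which corresponds to the two uses (clustering and retrieval) described in the companion abstract above.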
Abstract:
Modern sugarcane (Saccharum spp.) is the leading sugar crop and a primary energy crop. It has the highest level of 'vertical' redundancy (2n = 12x = 120) of all polyploid plants studied to date. It was produced about a century ago through hybridization between two autopolyploid species, namely S. officinarum and S. spontaneum. In order to investigate genome dynamics in this highly polyploid context, we sequenced and compared seven hom(oe)ologous haplotypes (bacterial artificial chromosome clones). Our analysis revealed a high level of gene retention and colinearity, as well as high gene structure and sequence conservation, with an average sequence divergence of 4% for exons. Remarkably, all of the hom(oe)ologous genes were predicted to be functional (except for one gene fragment) and showed signs of evolving under purifying selection, with the exception of genes within segmental duplications. By contrast, transposable elements displayed a general absence of colinearity among hom(oe)ologous haplotypes and appear to have undergone dynamic expansion in Saccharum compared with sorghum, its close relative in the Andropogoneae tribe. These results reinforce the general trend emerging from recent studies indicating the diverse and nuanced effects of polyploidy on genome dynamics.
Abstract:
The main objective of this thesis work is to develop a communication link between Runrev Revolution (IDE) and JADE (Multi-Agent System) through socket programming using the TCP/IP layer. These two independent platforms are connected using the socket programming technique. Since socket programming is a newly emerging way of bridging these two platforms, the work done in this thesis is considered a prototype. A graphical simulation model was developed by Salixphere (a company in Hedemora) to simulate logistic problems using Runrev Revolution (IDE); the simulation software is called "BIOSIM". The logistic problems are complex, and conventional optimization techniques are unlikely to be very successful. "BIOSIM" can demonstrate the graphical representation of logistic problems depending upon the problem domain. As this simulation model is developed in the Revolution programming language (Transcript), which is a dynamically typed, English-like language, it is quite slow compared to other high-level programming languages. The aim of this thesis work is therefore to add intelligent behaviour to the graphical objects and to develop the communication link between Runrev Revolution (IDE) and JADE (Multi-Agent System) using TCP/IP layers. The tests show the intelligent behaviour of the graphical objects and successful communication between Runrev Revolution (IDE) and JADE (Multi-Agent System).
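The Revolution-JADE link described above rests on a standard TCP client/server pattern: one side listens, the other connects, and messages are exchanged as byte streams. The actual endpoints are Transcript and Java code; the self-contained Python sketch below only illustrates the underlying socket pattern in a single process, and the message content is invented:

```python
import socket
import threading

def serve_once(server_sock):
    """Accept one connection, read one message, acknowledge it."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)          # read the client's message
        conn.sendall(b"ACK:" + data)    # reply, as an agent platform might

# Listening side (playing the role of the JADE end of the link).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# Connecting side (playing the role of the Revolution/BIOSIM end).
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"move agent A to node 3")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # ACK:move agent A to node 3
```

In the prototype described by the thesis, each side would additionally parse the received bytes into commands for the simulation or the agent platform; the byte-stream exchange itself is all TCP provides.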
Abstract:
This thesis analyzes the current state of the language immersion program in Catalonia, 30 years after its implementation and after the immigration wave of the last decade. Language immersion is a method of teaching a second language using a language of instruction different from the students' mother tongue. The Catalan authorities use it as a method for preserving Catalan in society. The aim of this study is to examine the use of Catalan at school and outside of school by students who have followed the language immersion program. Language attitudes play an important role in the maintenance of a minority language such as Catalan; therefore, the informants' attitudes towards Catalan have also been measured. The method applied is quantitative: the informants answered a written questionnaire. The results show a high level of knowledge of Catalan and its frequent use in the classroom. In contrast, outside of school the Castilian language is more often used. The informants seem to have a positive attitude towards Catalan. The conclusion is that language immersion works satisfactorily in a school context but often fails outside of school.