938 results for Domain-specific programming languages


Relevance:

30.00%

Publisher:

Abstract:

Virtual Laboratories are an indispensable space for developing practical activities in a Virtual Environment. In the field of Computer and Software Engineering, different types of practical activities have to be performed in order to obtain basic competences that are impossible to achieve by other means. This paper specifies an ontology for a general virtual laboratory. The proposed ontology provides a mechanism to select the best resources needed in a Virtual Laboratory once a specific practical activity has been defined and the main competences that students have to achieve in the learning process have been fixed. Furthermore, the proposed ontology can be used to develop an automatic wizard tool that creates a Moodle classroom using the practical activity specification and the related competences.
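
As a rough illustration of how such an ontology could be queried in practice, the following Python sketch uses rdflib; the class and property names (vl:PracticalActivity, vl:requiresCompetence, vl:LabResource, vl:supportsCompetence) and the ontology file are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: select virtual-laboratory resources that cover the
# competences required by a given practical activity. All vocabulary is
# illustrative; the paper's actual ontology terms are not reproduced here.
from rdflib import Graph

g = Graph()
g.parse("virtual_lab.ttl", format="turtle")  # assumed ontology file

QUERY = """
PREFIX vl: <http://example.org/virtual-lab#>
SELECT DISTINCT ?activity ?resource WHERE {
    ?activity a vl:PracticalActivity ;
              vl:requiresCompetence ?competence .
    ?resource a vl:LabResource ;
              vl:supportsCompetence ?competence .
}
"""

for activity, resource in g.query(QUERY):
    print(f"{activity} -> {resource}")
```

A wizard tool of the kind described could then take the matched resources and competences and emit the corresponding Moodle classroom configuration.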

Relevance:

30.00%

Publisher:

Abstract:

Uromodulin is the most abundant protein in urine. It is produced exclusively by renal epithelial cells and plays key roles in kidney function and disease. Uromodulin mainly exerts its function as an extracellular matrix whose assembly depends on a conserved, specific proteolytic cleavage leading to conformational activation of a Zona Pellucida (ZP) polymerisation domain. Through a comprehensive approach, including extensive characterisation of uromodulin processing in cellular models and in specific knock-out mice, we demonstrate that the membrane-bound serine protease hepsin is the enzyme responsible for the physiological cleavage of uromodulin. Our findings define a key aspect of uromodulin biology and identify the first in vivo substrate of hepsin. The identification of hepsin as the first protease involved in the release of a ZP domain protein is likely relevant for other members of this protein family, which includes several extracellular proteins such as egg coat proteins and the inner ear tectorins.

Relevance:

30.00%

Publisher:

Abstract:

Prodigiosin and obatoclax, members of the prodiginine family, are small molecules with anti-cancer properties that are currently in preclinical and clinical trials. Their molecular target(s), however, remain an open question. Combining experimental and computational techniques, we find that prodigiosin binds to the BH3 domain of some BCL-2 family proteins, which play an important role in apoptotic programmed cell death. In particular, our results indicate a high affinity of prodigiosin for MCL-1, an anti-apoptotic member of the BCL-2 family. In melanoma cells, we demonstrate that prodigiosin activates the mitochondrial apoptotic pathway by disrupting MCL-1/BAK complexes. Computer simulations with the PELE software allow the description of the induced-fit process, providing a detailed atomic view of the molecular interactions. These results provide new data for understanding the mechanism of action of these molecules and assist in the development of more specific inhibitors of anti-apoptotic BCL-2 proteins.

Relevance:

30.00%

Publisher:

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then extends the program incrementally, in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers.

Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs; it is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently.

The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. It is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure; conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. With the aid of the tool, the programmer reduces a large verification problem into a set of smaller problems (lemmas) and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm.

Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
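
The thesis develops its verified sorting example with invariant diagrams and PVS; as a loose, runtime-checked analogue of the same invariant-first idea (not the Socos formalism), the following Python sketch states a loop invariant up front and asserts it on every iteration. In Socos the corresponding conditions would be discharged by the theorem prover rather than checked at run time.

```python
# Selection sort with its loop invariant made explicit and asserted each round.
def selection_sort(a):
    n = len(a)
    for i in range(n):
        # Invariant: a[0:i] is sorted, and every element of a[0:i]
        # is <= every element of a[i:n].
        assert all(a[k] <= a[k + 1] for k in range(i - 1)), "prefix sorted"
        assert all(a[p] <= a[q] for p in range(i) for q in range(i, n)), "prefix minimal"
        m = min(range(i, n), key=lambda k: a[k])  # index of smallest remaining element
        a[i], a[m] = a[m], a[i]
    return a

print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```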

Relevance:

30.00%

Publisher:

Abstract:

Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject to their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland, the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming that address these issues. Structured derivations is a logic-based approach to teaching mathematics in which formalisms and justifications are made explicit; the aim is to help students become better at communicating their reasoning using mathematical language and logical notation while becoming more confident with formalisms. The Python programming language was originally designed with education in mind and has a simple syntax compared to many other popular languages; the aim of using it in instruction is to address algorithms and their implementation in a way that lets the focus be on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction; the approach is based on elementary propositional and predicate logic and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
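
To illustrate the point about Python's syntax (this is not an example taken from the thesis), the sketch below shows the kind of short program a first course can use so that attention stays on the algorithmic idea rather than on language details:

```python
# Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b).
def gcd(a, b):
    # Invariant: gcd(a, b) equals the gcd of the original inputs.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```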

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, simple methods have been sought to lower the teacher's threshold for starting to apply constructive alignment in instruction. From the phases of the instructional process, aspects that the teacher can improve with little effort have been identified. Teachers were interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that, in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students' background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see whether their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students are different learners than engineering students in general and that there is a connection between a student's culture and learning style. In this thesis, a simple self-assessment scale based on Bloom's revised taxonomy has been developed. A statistical analysis of the test results indicates that, in general, the scale is quite reliable, but individual students still slightly overestimate or underestimate their knowledge levels. For students, being able to follow their own progress is motivating; for a teacher, self-assessment results give information about how the class is proceeding and what the level of the students' knowledge is.
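
As a purely hypothetical illustration of what a taxonomy-based self-assessment item might look like (the thesis's actual scale is not reproduced here), one "I can ..." statement per level of Bloom's revised taxonomy could be attached to a course topic:

```python
# Hypothetical self-assessment item: one statement per Bloom level for one topic.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def self_assessment(topic, statements):
    # statements: one claim per taxonomy level, ordered from lowest to highest.
    return {"topic": topic, "scale": dict(zip(BLOOM_LEVELS, statements))}

print(self_assessment("recursion", [
    "I can state what recursion is.",
    "I can explain how a recursive call unwinds.",
    "I can write a recursive function for a given problem.",
    "I can compare recursive and iterative solutions.",
    "I can judge when recursion is an appropriate choice.",
    "I can design a new recursive algorithm for an unfamiliar problem.",
]))
```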

Relevance:

30.00%

Publisher:

Abstract:

This article reports on the design and characteristics of substrate mimetics in protease-catalyzed reactions. Firstly, the basis of protease-catalyzed peptide synthesis and the general advantages of substrate mimetics over common acyl donor components are described. The binding behavior of these artificial substrates and the mechanism of catalysis are further discussed on the basis of hydrolysis, acyl transfer, protein-ligand docking, and molecular dynamics studies on the trypsin model. The general validity of the substrate mimetic concept is illustrated by the expansion of this strategy to trypsin-like, glutamic acid-specific, and hydrophobic amino acid-specific proteases. Finally, opportunities for the combination of the substrate mimetic strategy with the chemical solid-phase peptide synthesis and the use of substrate mimetics for non-peptide organic amide synthesis are presented.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the present study was to assess the specificity and sensitivity of a modified assay using short synthetic peptides of the V3 region of HIV-1 gp120, which is the main target for neutralizing antibodies. Results from an enzyme immunoassay (EIA) employing a panel of synthetic peptides of HIV-1 subtypes and using urea washes to detect high-avidity antibodies (AAV3) were compared with those obtained by the heteroduplex mobility assay and DNA sequencing. The EIA correctly typed 100% of subtype B samples (sensitivity = 1.0; specificity = 0.95), 100% of HIV-1 E samples (sensitivity = 1.0; specificity = 1.0), and 95% of subtype C specimens (sensitivity = 0.95; specificity = 0.94). In contrast, only 50% of subtype A (sensitivity = 0.5; specificity = 0.95), 60% of subtype D (sensitivity = 0.6; specificity = 1.0), and 28% of subtype F samples (sensitivity = 0.28; specificity = 0.95) were correctly identified. In a few samples, this approach was also able to discriminate antibodies from patients infected with the B variants circulating in Brazil and Thailand, which reacted specifically. The assays described in this study are relatively rapid and simple to perform compared to molecular approaches and can be used to screen large numbers of serum or plasma samples. Moreover, classification into subtypes (genotypes) may overestimate HIV-1 diversity, and a classification into serotypes, based on antigenic V3 diversity or another principal neutralization domain, may be more helpful for vaccine development and for the identification of variants.

Relevance:

30.00%

Publisher:

Abstract:

The syndecans, heparan sulfate proteoglycans, are abundant molecules associated with the cell surface and extracellular matrix; they consist of a protein core to which heparan sulfate chains are covalently attached. Each of the syndecan core proteins has a short cytoplasmic domain that binds cytosolic regulatory factors. The syndecans also contain highly conserved transmembrane domains and extracellular domains for which important activities are becoming known. These protein domains locate the syndecan at cell surface sites during development and tumor formation, where they interact with other receptors to regulate signaling and cytoskeletal organization. Studies of cell surface heparan sulfate proteoglycan function have centered on the role of the heparan sulfate chains, located on the outer side of the cell surface, in binding a wide array of ligands, including extracellular matrix proteins and soluble growth factors. More recently, the core proteins of the syndecan family of transmembrane proteoglycans have also been shown to be involved in cell signaling through interactions with integrins and tyrosine kinase receptors.

Relevance:

30.00%

Publisher:

Abstract:

Presentation by Jussi-Pekka Hakkarainen, held at the Emtacl15 conference on 20 April 2015 in Trondheim, Norway.

Relevance:

30.00%

Publisher:

Abstract:

The emerging technologies have recently challenged libraries to reconsider their role as a mere mediator between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners to interplay and work for shared goals and objectives. In this paper, I will draw a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages, to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, the genesis and consolidation period of the literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced in the language. This was the beginning of a renaissance and period of enlightenment (Rueter, 2013). The linguistically oriented population can also find writings to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating morphological analyzers and online dictionaries at the Universities of Helsinki and Tromsø, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools for supporting linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research in language technology.
The mission is to improve the usage and usability of the digitized content. During the project, we have advanced methods that will refine the raw data for further use, especially in linguistic research. How does the library meet these objectives, which appear to be beyond its traditional playground? The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages out of a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (OCR output) often contains too many mistakes to be used as such in research, so the mistakes in the OCRed texts must be corrected. For enhancing the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary to implement, since these rare and peripheral prints often include characters that have since fallen out of use and are sadly neglected by modern OCR software developers, but that belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing tool is essentially an editor for the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface, that is, the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. In traditional crowdsourcing, the targets have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the researcher's point of view there is a danger that the needs of linguists are not necessarily met. Another remarkable downside is the lack of a shared goal or social affinity; there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012). There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more evident when you leave the Anglophone world. Our potential crowd is geographically scattered across Russia and linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or in need of revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to identify carefully the potential niches in which the needed tasks can be completed. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill suits the complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation.
These communities can respond to the needs of research more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we try to utilize the knowledge and skills of citizen scientists to provide qualitative results. In nichesourcing, we hand out assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy. We have digitized books in medicine, and we could try to track the words related to human organs by assigning the citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay in which the language communities benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary that is made freely available to the public, so that society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as serving 'two masters': research and society.
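
As a minimal sketch of the kind of data such an OCR editor works with (this is not the project's actual code), the following Python snippet pulls the recognized word tokens out of an ALTO XML page so that they could be reviewed or corrected; the file name is hypothetical, and the snippet matches String elements regardless of ALTO namespace version.

```python
# ALTO stores each recognized word as a <String CONTENT="..."> element.
import xml.etree.ElementTree as ET

def extract_words(path):
    tree = ET.parse(path)
    words = []
    for elem in tree.iter():
        # Match String elements in any ALTO namespace version.
        if elem.tag == "String" or elem.tag.endswith("}String"):
            words.append(elem.attrib.get("CONTENT", ""))
    return words

print(extract_words("page_0001.xml"))  # assumed page exported from the collection
```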

Relevance:

30.00%

Publisher:

Abstract:

Internet of Things (IoT) technologies are developing rapidly, and consequently several standards for interconnection protocols and platforms exist. The existence of heterogeneous protocols and platforms has become a critical challenge for IoT system developers. To mitigate this challenge, a few alliances and organizations have taken the initiative to build frameworks that help to integrate application silos. Some of these frameworks focus only on a specific domain, such as home automation. However, the resource constraints in a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. Therefore, a general-purpose, lightweight interoperability framework that can be used for a range of devices is required. To tackle this heterogeneity, this work introduces an embedded, distributed and lightweight service bus, the Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with varying resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes, and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks. As a result of the modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting to assist the lightweight portion of LISA inside the resource-constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries and facilitate interoperability. This thesis presents a concept implementation of the architecture and creates a foundation for future extension towards a comprehensive interoperability framework for the IoT.
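
The abstract describes LISA as a uniform service-bus API over heterogeneous platforms and transports. The toy Python sketch below shows only the general service-bus idea (register a named service, invoke it without knowing where or how it runs); none of the names come from the thesis, whose target is constrained nodes rather than Python.

```python
# Illustrative-only service bus: names and methods are invented for this sketch.
class ServiceBus:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        # A node advertises a service by name, hiding transport and protocol details.
        self._services[name] = handler

    def invoke(self, name, payload):
        # Another node invokes the service through the bus, unaware of where it runs.
        return self._services[name](payload)

bus = ServiceBus()
bus.register("temperature/read", lambda _: {"celsius": 21.5})
print(bus.invoke("temperature/read", {}))  # {'celsius': 21.5}
```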

Relevance:

30.00%

Publisher:

Abstract:

We have investigated Russian children's reading acquisition during an intermediate period in their development: after literacy onset, but before they have acquired well-developed decoding skills. The results of our study suggest that Russian first graders rely primarily on phonemes and syllables as reading grain-size units. Phonemic awareness seems to reach the metalinguistic level more rapidly than syllabic awareness after the onset of reading instruction, a reversal that is typical of the initial stages of formal reading instruction, which creates an external demand for phonemic awareness. Another reason might be the inherent instability of syllabic boundaries in Russian. We have shown that body-coda is a more natural representation of subsyllabic structure in Russian than onset-rime. We also found that Russian children displayed variability in syllable onset and offset decisions, which can be attributed to the lack of congruence between syllabic and morphemic word division in Russian. We suggest that the fuzziness of syllable boundary decisions is a sign of the transitional nature of this stage in reading development and indicates progress towards an awareness of morphologically determined closed syllables. Our study also showed that orthographic complexity exerts an influence on reading in Russian from the very start of reading acquisition. In addition, we found that Russian first graders experience fluency difficulties in reading orthographically simple words and nonwords of two or more syllables. The transition from monosyllabic to bisyllabic lexical items constitutes a threshold at which syllabic structure seemed to make no difference. When we compared the outcomes of the Russian children with those produced by speakers of other languages, we discovered that in tasks that could be performed with the help of alphabetic recoding, the Russian children's accuracy was comparable to that of children learning to read in relatively shallow orthographies. In tasks where this approach works only partially, Russian children demonstrated accuracy results similar to those in deeper orthographies. This pattern of moderate results in accuracy and excellent performance in terms of reaction times indicates that children apply phonological recoding as their dominant strategy across various reading tasks and are only beginning to develop suitable multiple strategies for dealing with orthographically complex material. The development of these strategies is not completed during Grade 1, and the shift towards diversification of strategies apparently continues in Grade 2.

Relevance:

30.00%

Publisher:

Abstract:

The article-based doctoral dissertation deals with adult individuals in Western societies who were born into multilingual and multicultural families and have parents of different nationalities. The study's participants grew up outside their parents' countries of origin and relate to a multitude of bonds that link them across various cultures, languages and places. The study explores the social dimension of cultural belonging and examines diverse approaches that enable the participants to create notions of belonging and identification despite possessing at times contradictory transnational allegiances. The work offers new perspectives on transnational belonging and makes a timely contribution to discussions in the fields of cultural heritage studies, ethnology and transnational studies. The dissertation combines qualitative research methods with an insider perspective. The empirical material is based on semi-structured interviews with fifteen participants, among whom are the author's siblings. The study addresses the relevance of the author's personal situatedness and her multi-faceted roles, as well as ethical concerns related to the methodological approach of insider research. The social dimension of cultural identities affects both the participants' identification with their multiple attachments and their language use in everyday life. The key research findings present interrelated discussions of the participants' notion of being a mixture, the importance of family bonds and multilingualism, a specific mixed-family lifestyle, the notion of non-belonging, and the participants' sense of otherness as a means of creating communality with others. The study discusses the participants' various life strategies of flexible relativising, juggling multiple affiliations, "blending in" and a sense of ironic nation-ness for constructing a coherent sense of belonging. The author argues that multicultural belonging is inextricably connected to an association with multiple languages, cultures and places. Multicultural belonging is relational and depends on context, social relationships and locations. The study proposes that multicultural belonging creates a tolerant understanding of membership and enables experiences of cosmopolitanism and selected notions of allegiance.