925 results for Subroutines in Procedural Programming Languages
Abstract:
Background: The goal of our work was to develop a simple method to evaluate a compensation treatment after unplanned treatment interruptions with respect to its tumour and normal tissue effects. Methods: We developed a software tool in the Java programming language, based on existing recommendations for compensating treatment interruptions. In order to express and visualize the deviations from the originally planned tumour and normal tissue effects, we defined the compensability index. Results: The compensability index evaluates the suitability of a compensatory radiotherapy schedule in a single number, based on the number of days used for compensation and the preference for either preserving the originally planned tumour effect or not exceeding the originally planned normal tissue effect. An automated tool provides a method for quick evaluation of compensation treatments. Conclusions: The compensability index calculation may serve as a decision support system based on existing and established recommendations.
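The abstract does not give the index formula. Compensation recommendations in this area are conventionally based on the linear-quadratic (LQ) model and the biologically effective dose, BED = n·d·(1 + d/(α/β)); the sketch below is a minimal, hypothetical illustration of how such a tool might compare schedules. The function names, the α/β values, and the way the two deviations are combined into a single number are assumptions, not the authors' actual definition.

```python
# Hypothetical sketch in the spirit of the tool described above.
# Uses the standard linear-quadratic biologically effective dose:
# BED = n * d * (1 + d / (alpha/beta)). The combination of the two
# deviations into a single "compensability index" is an assumption.

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose of a schedule (LQ model)."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def deviation(planned, compensated):
    """Relative deviation of the compensation schedule from the plan."""
    return (compensated - planned) / planned

# Original plan: 30 x 2.0 Gy. Compensation schedule: 27 x 2.2 Gy.
# alpha/beta of ~10 Gy (tumour) and ~3 Gy (late-reacting normal tissue)
# are conventional textbook values, not values from the paper.
tumour_dev = deviation(bed(30, 2.0, 10.0), bed(27, 2.2, 10.0))
normal_dev = deviation(bed(30, 2.0, 3.0), bed(27, 2.2, 3.0))

# weight in [0, 1]: preference for preserving the tumour effect (1.0)
# versus not exceeding the normal tissue effect (0.0) -- illustrative only.
weight = 0.5
index = weight * abs(tumour_dev) + (1.0 - weight) * max(normal_dev, 0.0)
print("tumour deviation: %+.3f" % tumour_dev)
print("normal tissue deviation: %+.3f" % normal_dev)
print("hypothetical compensability index: %.3f" % index)
```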
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable number of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Building the formal grammar (task 2) was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors. Without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar as one huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially valuable for testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all word classes.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development. The consistency of new and modified rules of the formal grammar with the rules already in place is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of any Slavic language. Since Slavic languages share a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems for other languages. To transfer the system to another language, it is only necessary to revise the grammar and change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
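The consistency check described above amounts to regression testing of the grammar against the test-bed. Below is a minimal sketch of such a harness, assuming a hypothetical `parse` function and a test-bed of sentences paired with an expected accept/reject flag; none of these names or example sentences come from the project itself.

```python
# Hypothetical regression harness for incremental grammar development:
# after each change to the metarules, re-run the whole test-bed and
# report every sentence whose outcome differs from the recorded one.

def check_testbed(parse, testbed):
    """parse: sentence -> bool (accepted); testbed: list of (sentence, expected)."""
    failures = []
    for sentence, expected in testbed:
        got = parse(sentence)
        if got != expected:
            failures.append((sentence, expected, got))
    return failures

# Illustrative entries only; a real test-bed would pair grammatical and
# deliberately ungrammatical Czech sentences with expected outcomes.
testbed = [
    ("Pes štěkal.", True),    # grammatical: "The dog barked."
    ("Pes štěkali.", False),  # subject-verb agreement error
]

def toy_parse(sentence):
    # Stand-in for the real robust parser.
    return not sentence.endswith("štěkali.")

for sentence, expected, got in check_testbed(toy_parse, testbed):
    print("REGRESSION: %r expected %s, got %s" % (sentence, expected, got))
```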
Abstract:
The factors influencing the degree of separation or overlap in the neuronal networks responsible for the processing of first and second language are still under investigation. This longitudinal study investigates how increasing second language proficiency influences activation differences during lexico-semantic processing of the first and second language. Native English-speaking exchange students learning German were examined with functional magnetic resonance imaging while reading words in three different languages at two points in time: at the beginning of their stay (day 1) and 5 months later (day 2), when their second language proficiency had significantly increased. On day 1, second language words evoked more frontal activation than words from the mother tongue. These differences had diminished by day 2. We therefore conclude that with increasing second language proficiency, lexico-semantic processing of second language words needs less frontal control. Our results demonstrate that lexico-semantic processing of the first and second language converges onto similar networks as second language proficiency increases.
Abstract:
Nonallergic hypersensitivity and allergic reactions are part of the many different types of adverse drug reactions (ADRs). Databases exist for the collection of ADRs. Spontaneous reporting makes up the core data-generating system of pharmacovigilance, but there is a large underestimation of allergy/hypersensitivity drug reactions. A specific database is therefore required for drug allergy and hypersensitivity using standard operating procedures (SOPs), as the diagnosis of drug allergy/hypersensitivity is difficult and current pharmacovigilance algorithms are insufficient. Although difficult, the diagnosis of drug allergy/hypersensitivity has been standardized by the European Network for Drug Allergy (ENDA) under the aegis of the European Academy of Allergology and Clinical Immunology, and SOPs have been published. Based on ENDA and Global Allergy and Asthma European Network (GA²LEN, EU Framework Programme 6) SOPs, a Drug Allergy and Hypersensitivity Database (DAHD®) has been established under FileMaker® Pro 9. It is already available online in many different languages and can be accessed using a personal login. GA²LEN is a European network of 27 partners (16 countries) and 59 collaborating centres (26 countries), which can coordinate and implement the DAHD across Europe. The GA²LEN-ENDA-DAHD platform interacting with a pharmacovigilance network appears to be of great interest for the reporting of allergy/hypersensitivity ADRs in conjunction with other pharmacovigilance instruments.
Abstract:
Central Eastern Europe, the research area this paper is concerned with, is a region characterized by a high diversity of languages and cultures. It is, at the same time, an area where political, cultural and social conflicts have emerged over time, nowadays especially in border zones where people of different ethnic, cultural or linguistic backgrounds live. In this context, it is important for us researchers to obtain balanced interview data, and consequently we very often have to conduct interviews in several different languages and within changing cultural contexts. In order to avoid "communication problems" or even conflictual (interview) situations, which might damage the outcome of the research, we are challenged to find appropriate communication strategies for each of these situations. This is especially difficult when we are confronted with language- or culture-specific terminology or taboo expressions that carry political meaning(s). Once the interview data is collected and it comes to translating and analysing it, we face further challenges and new questions arise. First of all, we have to decide what a good translation strategy would be. Many words and phrases that exist in one language have no exact equivalent in another. We therefore have to find a way of translating these expressions and concepts so that their meanings do not get "lost in translation". In this paper I discuss and provide insights into these challenges by presenting and discussing numerous examples from the region in question. Specifically, I focus on deconstructing the meaning of geographical names and politically loaded expressions in order to show the sensitivities of language, the difficulties of research in multilingual settings and with multilingual data, as well as strategies or "ways out" of certain dilemmas.
Abstract:
Virtual machines emulating hardware devices are generally implemented in low-level languages and in a low-level style for performance reasons. This trend results in systems that are largely difficult to understand, difficult to extend, and unmaintainable. As new general techniques for virtual machines arise, it becomes harder to incorporate or test these techniques because of early design and optimization decisions. In this paper we show how such decisions can be postponed to later phases by separating virtual machine implementation issues from the high-level machine-specific model. We construct compact models of whole-system VMs in a high-level language, excluding all low-level implementation details. We use the pluggable translation toolchain PyPy to translate those models to executables. During the translation process, the toolchain reintroduces the VM implementation and optimization details for specific target platforms. As a case study we implement an executable model of a hardware gaming device. We show that our approach to VM building increases understandability, maintainability and extensibility while preserving performance.
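A minimal sketch of the style of model the paper advocates: machine-specific behaviour is expressed as plain, high-level Python objects and an interpreter loop, with no hand-coded low-level detail; under the approach described, the translation toolchain would later supply memory layout and optimization. The device, opcode set, and all names below are invented for illustration and are not the paper's actual model.

```python
# High-level whole-system model sketch: the machine is plain objects and
# an interpreter loop; representation and optimization choices are left
# to the translation toolchain rather than baked into the model.

class CPU(object):
    def __init__(self, memory):
        self.memory = memory      # flat byte list; a real model would wrap MMIO
        self.acc = 0              # single accumulator register
        self.pc = 0               # program counter

    def step(self):
        opcode = self.memory[self.pc]
        self.pc += 1
        if opcode == 0x01:        # LOAD immediate into accumulator
            self.acc = self.memory[self.pc]; self.pc += 1
        elif opcode == 0x02:      # ADD immediate, 8-bit wraparound
            self.acc = (self.acc + self.memory[self.pc]) & 0xFF; self.pc += 1
        elif opcode == 0xFF:      # HALT
            return False
        else:
            raise ValueError("unknown opcode 0x%02x" % opcode)
        return True

def run(program):
    cpu = CPU(list(program))
    while cpu.step():
        pass
    return cpu.acc

# LOAD 40; ADD 2; HALT  ->  prints 42
print(run([0x01, 40, 0x02, 2, 0xFF]))
```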
Abstract:
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
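To make the direction of the refactoring concrete, here is a hedged, invented contrast rather than code from PyGirl itself: the same opcode handling first in the monolithic low-level style such projects move away from, then in a high-level style where each opcode becomes a small named method resolved through a dispatch table. The opcode numbers follow the Game Boy convention (0x3C is INC A), but the surrounding structure is illustrative only.

```python
# Low-level style: one flat dispatch with inlined bit manipulation,
# typical of hand-optimized emulators and hard to extend or test.
def execute_low_level(state, opcode):
    if opcode == 0x3C:                          # INC A
        state["a"] = (state["a"] + 1) & 0xFF
        state["zero_flag"] = state["a"] == 0
    elif opcode == 0x3D:                        # DEC A
        state["a"] = (state["a"] - 1) & 0xFF
        state["zero_flag"] = state["a"] == 0
    # ... hundreds of further branches

# High-level style: opcode semantics live in small named methods and the
# dispatch table is data, so new opcodes and their tests stay local.
class Cpu(object):
    def __init__(self):
        self.a = 0
        self.zero_flag = False

    def inc_a(self):
        self._set_a(self.a + 1)

    def dec_a(self):
        self._set_a(self.a - 1)

    def _set_a(self, value):
        self.a = value & 0xFF
        self.zero_flag = self.a == 0

OPCODES = {0x3C: Cpu.inc_a, 0x3D: Cpu.dec_a}

cpu = Cpu()
OPCODES[0x3C](cpu)              # execute INC A
print(cpu.a, cpu.zero_flag)     # 1 False
```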
Abstract:
This article deals with expressions of sensory perception in Western Nilotic. The first part of the article presents ophresiological (smell-related) terminology in Luo and Burun and demonstrates that this lexical category also constitutes a distinct grammatical category. As ophresiological terms are a very rare phenomenon among the world's languages, they are presented here only in an introductory fashion, to encourage further research. The second part of the article deals with the colour descriptions used for domestic animals. When the modalities of the economy change, the names used for animal colours can also be applied to other cultural concepts. The third part of the article shows that nominal classifiers in Mabaan (Burun) express principles of touch as a cognitive structure. Accordingly, different grammaticalization processes are assumed and correlated with turning points in the cultural and mental history of the ancestors of the Western Nilotic speakers.
Abstract:
After 20 years of silence, two recent references from the Czech Republic (Bezpečnostní softwarová asociace, Case C-393/09) and from the English High Court (SAS Institute, Case C-406/10) touch upon several questions that are fundamental for the extent of copyright protection for software under the Computer Program Directive 91/250 (now 2009/24) and the Information Society Directive 2001/29. In Case C-393/09, the European Court of Justice held that “the object of the protection conferred by that directive is the expression in any form of a computer program which permits reproduction in different computer languages, such as the source code and the object code.” As “any form of expression of a computer program must be protected from the moment when its reproduction would engender the reproduction of the computer program itself, thus enabling the computer to perform its task,” a graphical user interface (GUI) is not protected under the Computer Program Directive, as it does “not enable the reproduction of that computer program, but merely constitutes one element of that program by means of which users make use of the features of that program.” While the definition of computer program and the exclusion of GUIs mirror earlier jurisprudence in the Member States and therefore do not come as a surprise, the main significance of Case C-393/09 lies in its interpretation of the Information Society Directive. In confirming that a GUI “can, as a work, be protected by copyright if it is its author’s own intellectual creation,” the ECJ continues the Europeanization of the definition of “work” which began in Infopaq (Case C-5/08). Moreover, the Court elaborated this concept further by excluding expressions from copyright protection which are dictated by their technical function. Even more importantly, the ECJ held that a television broadcasting of a GUI does not constitute a communication to the public, as the individuals cannot have access to the “essential element characterising the interface,” i.e., the interaction with the user. The exclusion of elements dictated by technical functions from copyright protection and the interpretation of the right of communication to the public with reference to the “essential element characterising” the work may be seen as welcome limitations of copyright protection in the interest of a free public domain which were not yet apparent in Infopaq. While Case C-393/09 has given a first definition of the computer program, the pending reference in Case C-406/10 is likely to clarify the scope of protection against nonliteral copying, namely in how far the protection extends beyond the text of the source code to the design of a computer program and where the limits of protection lie as regards the functionality of a program and mere “principles and ideas.” In light of the travaux préparatoires, it is submitted that the ECJ is also likely to grant protection for the design of a computer program, while excluding both the functionality and underlying principles and ideas from protection under the European copyright directives.
Abstract:
Using online social networks to connect and interact with people has become extremely popular all around the world. The largest Social Networking Site (SNS), Facebook, offers its services in over 70 languages and increasingly relies on international users to grow its membership. Aiming to understand the role of culture in SNS participation, this study adopts a ‘privacy calculus’ perspective to examine the differences in participation patterns between American and Moroccan Facebook users. Survey results show that Moroccan users disclose less on Facebook than US users, yet perceive more damage should their privacy on Facebook be violated. American users, on the other hand, have lower privacy concerns, trust fellow SNS members and the legal system more, and disclose more in their profiles. From a practical standpoint, the results indicate that SNS providers cannot rely on the same methods to encourage user participation and disclosure in different countries.
Abstract:
According to the 2000 United States Census, the Asian population in Houston, Texas, has increased by more than 67% in the last ten years. To supplement an already active consumer health information program, the staff of the Houston Academy of Medicine-Texas Medical Center Library worked with community partners to bring health information to predominantly Asian neighborhoods. Brochures on health topics of concern to the Asian community were translated and placed in eight informational kiosks in Asian centers such as temples and an Asian grocery store. A press conference and a ribbon-cutting ceremony were held to debut the kiosks and to introduce the Consumer Health Information for Asians (CHIA) program. Project goals for the future include digitizing the translated brochures, mounting them on the Houston HealthWays website, and developing touch-screen kiosks. The CHIA group is investigating adding health resources in other Asian languages, as well as Spanish. Funding for this project has come from outside sources rather than from the regular library budget.
Abstract:
OBJECTIVE The results of Interventional Management of Stroke (IMS) III, Magnetic Resonance and REcanalization of Stroke Clots Using Embolectomy (MR RESCUE), and SYNTHESIS EXPANSION trials are expected to affect the practice of endovascular treatment for acute ischemic stroke. The purpose of this report is to review the components of the designs and methods of these trials and to describe the influence of those components on the interpretation of trial results. METHODS A critical review of trial design and conduct of IMS III, MR RESCUE, and SYNTHESIS EXPANSION is performed with emphasis on patient selection, shortcomings in procedural aspects, and methodology of data ascertainment and analysis. The influence of each component is estimated based on published literature including multicenter clinical trials reporting on endovascular treatment for acute ischemic stroke and myocardial infarction. RESULTS We critically examined the time interval between symptom onset and treatment and rates of angiographic recanalization to differentiate between "endovascular treatment" and "parameter optimized endovascular treatment" as it relates to the IMS III, MR RESCUE, and SYNTHESIS EXPANSION trials. All three trials failed to effectively test "parameter optimized endovascular treatment" due to the delay between symptom onset and treatment and less than optimal rates of recanalization. In all three trials, the magnitude of benefit with endovascular treatment required to reject the null hypothesis was larger than could be expected based on previous studies. The IMS III and SYNTHESIS EXPANSION trials demonstrated that rates of symptomatic intracerebral hemorrhages subsequent to treatment are similar between IV thrombolytics and endovascular treatment in matched acute ischemic stroke patients. The trials also indirectly validated the superiority/equivalence of IV thrombolytics (compared with endovascular treatment) in patients with minor neurological deficits and those without large vessel occlusion on computed tomographic/magnetic resonance angiography. CONCLUSIONS The results do not support a large magnitude benefit of endovascular treatment in subjects randomized in all three trials. The possibility that benefits of a smaller magnitude exist in certain patient populations cannot be excluded. Large magnitude benefits can be expected with implementation of "parameter optimized endovascular treatment" in patients with ischemic stroke who are candidates for IV thrombolytics.
Abstract:
Polymorphism, along with inheritance, is one of the most important features of object-oriented languages, but it is also one of the biggest obstacles to source code comprehension. Depending on the run-time type of the receiver of a message, any one of a number of possible methods may be invoked. Several algorithms for creating accurate call graphs using static analysis already exist; however, they consume significant time and memory. We propose an approach that combines static and dynamic analysis to yield the best possible precision with a minimal trade-off between resource consumption and accuracy.
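A minimal sketch of the underlying idea, with invented names rather than the authors' actual algorithm: a conservative static approximation in the class-hierarchy-analysis style (every subclass override is a possible dispatch target) is intersected with the receiver types actually observed at run time, shrinking the call graph at the call sites the program exercised.

```python
# Hedged sketch: intersect a static over-approximation of dynamic
# dispatch with receiver types observed at run time. All names invented.

class Shape(object):
    pass

class Circle(Shape):
    def draw(self):
        return "circle"

class Square(Shape):
    def draw(self):
        return "square"

def static_targets(base, method):
    """Class-hierarchy-style analysis: every override in a direct
    subclass (kept shallow for brevity) is a possible dispatch target."""
    return {cls for cls in [base] + base.__subclasses__()
            if method in vars(cls)}

observed = set()

def traced_draw(shape):
    observed.add(type(shape))   # dynamic analysis: record the receiver type
    return shape.draw()

# Static analysis alone must keep both Circle.draw and Square.draw.
print(static_targets(Shape, "draw"))

# After an actual run, only receivers that really occurred remain.
traced_draw(Circle())
print(static_targets(Shape, "draw") & observed)
```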