928 results for Scandinavian languages
Abstract:
Native languages of the Americas whose predicate and clause structure reflect nominal hierarchies show an interesting range of structural diversity, not only with respect to the morphological makeup of their predicates and arguments but also with respect to the factors governing obviation status. The present article maps part of this diversity. The sample surveyed here includes languages with some sort of nonlocal (third person acting on third person) direction-marking system.
Abstract:
Early initiation of everolimus with calcineurin inhibitor therapy has been shown to reduce the progression of cardiac allograft vasculopathy (CAV) in de novo heart transplant recipients. The effect of de novo everolimus therapy with early, complete elimination of calcineurin inhibitor therapy has, however, not been investigated, and is relevant given the morbidity and the lack of efficacy of current protocols in preventing CAV. This 12-month multicenter Scandinavian trial randomized 115 de novo heart transplant recipients to everolimus with complete calcineurin inhibitor elimination 7-11 weeks after heart transplantation (HTx) or to standard cyclosporine immunosuppression. Ninety-five (83%) patients had matched intravascular ultrasound examinations at baseline and 12 months. Mean (± SD) recipient age was 49.9 ± 13.1 years. The everolimus group (n = 47) demonstrated significantly reduced CAV progression compared to the calcineurin inhibitor group (n = 48) (ΔMaximal Intimal Thickness 0.03 ± 0.06 vs. 0.08 ± 0.12 mm, ΔPercent Atheroma Volume 1.3 ± 2.3 vs. 4.2 ± 5.0%, ΔTotal Atheroma Volume 1.1 ± 19.2 vs. 13.8 ± 28.0 mm³ [all p-values ≤ 0.01]). Everolimus patients also had a significantly greater decline in levels of soluble tumor necrosis factor receptor-1 than the calcineurin inhibitor group (p = 0.02). These preliminary results suggest that an everolimus-based, calcineurin inhibitor-free protocol can potentially be considered in suitable de novo HTx recipients.
Abstract:
DATED-1 comprises a compilation of dates related to the build-up and retreat of the Eurasian (British-Irish, Scandinavian, Svalbard-Barents-Kara Seas) Ice Sheets, together with time-slice maps of the Eurasian Ice Sheet margins. Dates are sourced from the published literature. Ice margins are based on published geological and chronological data and include uncertainty bounds (maximum, minimum) as well as what we consider to be the most-credible (mc) margin based on the available evidence. DATED-1 has a census date of 1 January 2013. A full description and caveats for use are given in: Hughes, A.L.C., Gyllencreutz, R., Lohne, Ø.S., Mangerud, J., Svendsen, J.I. (2015) The last Eurasian Ice Sheets - a chronological database and time-slice reconstruction, DATED-1.
Abstract:
At the beginning of the 1990s, ontology development was closer to an art than to an engineering activity: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods, and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, and education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.
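As a rough, hedged illustration of what "building an ontology" with current tooling can look like (the library, namespace, and class names below are choices of this sketch, not elements mentioned in the abstract), the following Python snippet uses the rdflib library to declare a tiny OWL class hierarchy and serialize it as Turtle:

    # A minimal ontology sketch using rdflib; the namespace and terms are hypothetical.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/ontology#")  # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # Two classes and a subclass relation
    g.add((EX.Person, RDF.type, OWL.Class))
    g.add((EX.Researcher, RDF.type, OWL.Class))
    g.add((EX.Researcher, RDFS.subClassOf, EX.Person))

    # An object property relating researchers to the ontologies they develop
    g.add((EX.Ontology, RDF.type, OWL.Class))
    g.add((EX.develops, RDF.type, OWL.ObjectProperty))
    g.add((EX.develops, RDFS.domain, EX.Researcher))
    g.add((EX.develops, RDFS.range, EX.Ontology))

    print(g.serialize(format="turtle"))

The same fragment could equally be written directly in an ontology language such as OWL/Turtle; the point is only that methodological guidelines and tooling make such modelling decisions explicit and reusable.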
Abstract:
Studying independence of goals has proven very useful in the context of logic programming. In particular, it has provided a formal basis for powerful automatic parallelization tools, since independence ensures that two goals may be evaluated in parallel while preserving correctness and efficiency. We extend the concept of independence to constraint logic programs (CLP) and prove that it also ensures the correctness and efficiency of the parallel evaluation of independent goals. Independence for CLP languages is more complex than for logic programming, as search space preservation is necessary but no longer sufficient for ensuring correctness and efficiency. Two additional issues arise. The first is that the cost of constraint solving may depend upon the order in which constraints are encountered. The second is the need to handle dynamic scheduling. We clarify these issues by proposing various types of search independence and constraint solver independence, and show how they can be combined to allow different optimizations, from parallelism to intelligent backtracking. Sufficient conditions for independence which can be evaluated "a priori" at run-time are also proposed. Our study also yields new insights into independence in logic programming languages. In particular, we show that search space preservation is not only a sufficient but also a necessary condition for ensuring correctness and efficiency of parallel execution.
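As a purely conceptual illustration (in Python rather than a CLP language, and much simpler than the search and solver independence notions discussed above), the sketch below shows the basic intuition behind goal independence: two goals that constrain disjoint sets of variables can be evaluated in parallel without changing the answer set.

    # Hypothetical sketch: two independent "goals" over disjoint variables X and Y.
    from concurrent.futures import ThreadPoolExecutor

    def goal_x(domain):
        # constrains only X: X > 2
        return [x for x in domain if x > 2]

    def goal_y(domain):
        # constrains only Y: Y is even
        return [y for y in domain if y % 2 == 0]

    domain = range(10)

    # Sequential evaluation of the conjunction
    seq = [(x, y) for x in goal_x(domain) for y in goal_y(domain)]

    # Parallel evaluation of the two independent goals
    with ThreadPoolExecutor() as pool:
        fx, fy = pool.submit(goal_x, domain), pool.submit(goal_y, domain)
        par = [(x, y) for x in fx.result() for y in fy.result()]

    assert seq == par  # independence preserves the answers (and here, the work done)

When the goals share constrained variables this guarantee breaks down, which is exactly why the finer-grained notions of search independence and constraint solver independence are needed.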
Abstract:
We address the problem of developing mechanisms for easily implementing modular extensions to modular (logic) languages. By (language) extensions we refer to different groups of syntactic definitions and translation rules that extend a language. Our use of the concept of modularity in this context is twofold. We would like these extensions to be modular in the above sense, i.e., we should be able to develop different extensions mostly separately. At the same time, the sources and targets of the extensions are modular languages, i.e., such extensions may take as input separate pieces of code and also produce separate pieces of code. Dealing with this double requirement involves interesting challenges to ensure that modularity is not broken: first, combinations of extensions (as if they were a single extension) must be given a precise meaning; also, the separate translation of multiple sources (as if they were a single source) must be feasible. We present a detailed description of a code expansion-based framework that proposes novel solutions for these problems. We argue that the approach, while implemented for Ciao, can be adapted to other Prolog-based systems and languages.
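The following is a hypothetical sketch (in Python, not Ciao's actual machinery) of the code expansion idea: each extension is modelled as a translation rule over a simple term representation, and independently written extensions compose over the same piece of code.

    # Hypothetical sketch of composing independently written language extensions.
    def expand(term, rules):
        # Recursively expand subterms, then give every rule a chance to rewrite.
        if isinstance(term, tuple):
            term = tuple(expand(t, rules) for t in term)
        for rule in rules:
            term = rule(term)
        return term

    # Extension 1: rewrite ('inc', X) into core code ('add', X, 1)
    def inc_extension(term):
        if isinstance(term, tuple) and term[:1] == ('inc',):
            return ('add', term[1], 1)
        return term

    # Extension 2: rewrite ('square', X) into core code ('mul', X, X)
    def square_extension(term):
        if isinstance(term, tuple) and term[:1] == ('square',):
            return ('mul', term[1], term[1])
        return term

    program = ('clause', ('f', 'X'), ('square', ('inc', 'X')))
    print(expand(program, [inc_extension, square_extension]))
    # ('clause', ('f', 'X'), ('mul', ('add', 'X', 1), ('add', 'X', 1)))

Here the combination of the two extensions behaves as if it were a single extension, which is the property the framework must guarantee while also supporting the separate translation of multiple sources.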
Abstract:
Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
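A minimal sketch of the idea, with hypothetical instruction names, timings, and cost functions: a one-time calibration stage assigns each abstract machine instruction lower and upper time bounds, a compile-time analysis supplies instruction counts as functions of the input data size n, and combining the two yields platform-dependent bounds on execution time.

    # Hypothetical calibration data (platform-dependent, program-independent):
    # (lower, upper) seconds per abstract machine instruction.
    instr_time = {
        "call":    (1.0e-8, 2.0e-8),
        "unify":   (0.5e-8, 1.5e-8),
        "proceed": (0.3e-8, 0.8e-8),
    }

    # Hypothetical cost analysis output (platform-independent):
    # instruction counts as functions of the input data size n.
    instr_count = {
        "call":    lambda n: n + 1,
        "unify":   lambda n: 2 * n,
        "proceed": lambda n: n + 1,
    }

    def time_bounds(n):
        lower = sum(instr_time[i][0] * instr_count[i](n) for i in instr_count)
        upper = sum(instr_time[i][1] * instr_count[i](n) for i in instr_count)
        return lower, upper

    print(time_bounds(1000))  # (lower, upper) execution-time bounds for a size-1000 input

Porting the estimates to a new platform only requires re-running the calibration that produces instr_time; the counts in instr_count stay the same.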