962 results for free and open source software
Abstract:
The increasing amount of data available about software systems poses new challenges for re- and reverse engineering research, as the proposed approaches need to scale. In this context, concerns about meta-modeling and analysis techniques need to be augmented by technical concerns about how to reuse and build upon the efforts of previous research. Moose is an extensive infrastructure for reverse engineering that has evolved for over 10 years and promotes the reuse of engineering efforts in research. Moose accommodates various types of data modeled in the FAMIX family of meta-models. The goal of this half-day workshop is to strengthen the community of researchers and practitioners working in re- and reverse engineering, by providing a forum for building future research starting from Moose and FAMIX as a shared infrastructure.
Abstract:
E-learning platforms are used for the administrative support of teaching and learning processes; building on the Internet, they provide functions for distributing teaching and learning materials and for communication between teachers and learners. Numerous scientific contributions and market studies deal with the multi-criteria evaluation of these software products as an informational basis for strategic investment decisions. By contrast, instruments for cost-oriented controlling of e-learning platforms are treated only marginally, if at all. This article therefore takes up the concept of Total Cost of Ownership (TCO), which offers a methodological starting point for creating cost transparency for e-learning platforms. Building on the conceptual foundations, problem areas and application potentials for cost-oriented controlling of learning management systems (LMS) are identified. For the software-supported construction and analysis of TCO models, the open source tool TCO-Tool is introduced and its application is discussed using a synthetic case example. Finally, further development perspectives of the TCO concept in the context of e-learning are identified. The topic presented is not only of theoretical interest, but also addresses the growing need of practitioners in education for instruments that provide an informational basis for investment and disinvestment decisions in the e-learning environment.
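As a minimal illustration of the TCO idea discussed in this abstract: TCO aggregates one-time and recurring costs over a planning horizon. All cost categories and figures in this sketch are hypothetical assumptions for an e-learning platform, not values from the article or the TCO-Tool.

```python
# Minimal TCO sketch for an e-learning platform.
# All categories and amounts below are invented placeholders.

ANNUAL_COSTS = {                  # recurring costs per year (EUR)
    "hosting": 4_800,
    "administration": 12_000,
    "support_and_training": 6_500,
    "maintenance_updates": 3_200,
}
ONE_TIME_COSTS = {                # initial costs (EUR)
    "selection_and_rollout": 15_000,
    "content_migration": 8_000,
}

def total_cost_of_ownership(years: int) -> int:
    """TCO over a planning horizon: one-time costs plus recurring annual costs."""
    return sum(ONE_TIME_COSTS.values()) + years * sum(ANNUAL_COSTS.values())

print(total_cost_of_ownership(5))  # total cost over a five-year horizon
```

A real TCO model would also account for indirect costs (e.g. end-user self-support) and discounting; the point here is only the aggregation structure.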
Abstract:
PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.
Abstract:
Open Source Communities and content-oriented projects (Creative Commons etc.) have reached a new level of economic and cultural significance in some areas of the Internet ecosystem. These communities have developed their own set of legal rules covering licensing issues, intellectual property management, project governance rules etc. Typical Open Source licenses and project rules are written without any reference to national law. This paper considers the question whether these license contracts and other legal rules are to be qualified as a lex mercatoria (or lex informatica) of these communities.
Abstract:
Life cycle assessment (LCA) of product systems serves to estimate their environmental impact. A complete life-cycle view must also include intralogistics transport processes and transport equipment. LCA studies are usually prepared with the help of a computer program. The demo versions of three commercial software solutions (SimaPro, GaBi and Umberto NXT LCA) and the full version of an open source software (openLCA) were analysed from a software-ergonomics perspective. For this purpose, the provided tutorials were reproduced and custom product systems were modelled, among other steps. The comparative analysis covered the following points:
• origin, distribution, target group;
• suitability of the tutorials, learnability;
• graphical user interface, customisability of the software;
• implementation of the requirements of the LCA standards;
• work steps necessary to produce an LCA.
The article includes an introduction to the essential principles of LCA and the fundamentals of software ergonomics, which are subsumed into software-ergonomic properties for LCA software solutions. The results of the software comparison are then presented, followed by a summary of the findings.
Abstract:
During recent years, mindfulness-based approaches have been gaining relevance for treatment in clinical populations. Correspondingly, the empirical study of mindfulness has steadily grown; thus, the availability of valid measures of the construct is critically important. This paper gives an overview of the current status in the field of self-report assessment of mindfulness. All eight currently available and validated mindfulness scales (for adults) are evaluated, with a particular focus on their virtues and limitations and on differences among them. It will be argued that none of these scales may be a fully adequate measure of mindfulness, as each of them offers unique advantages but also disadvantages. In particular, none of them seems to provide a comprehensive assessment of all aspects of mindfulness in samples from the general population. Moreover, some scales may be particularly indicated in investigations focusing on specific populations such as clinical samples (Cognitive and Affective Mindfulness Scale, Southampton Mindfulness Questionnaire) or meditators (Freiburg Mindfulness Inventory). Three main open issues are discussed: (1) the coverage of aspects of mindfulness in questionnaires; (2) the nature of the relationships between these aspects; and (3) the validity of self-report measures of mindfulness. These issues should be considered in future developments in the self-report assessment of mindfulness.
Abstract:
This introductory chapter briefly introduces a few milestones in the voluminous previous literature on semantic roles, and charts the territory in which the papers of this volume aim to make a contribution. This territory is characterized by fairly disparate conceptualizations of semantic roles and their status in theories of grammar and the lexicon, as well as by diverse and probably complementary ways of deriving or identifying them based on linguistic data. Particular attention is given to the question of how selected roles appear to relate to each other, and we preliminarily address the issue of how roles, subroles, and role complexes are best thought of in general.
Abstract:
Software developers often ask questions about software systems and software ecosystems that entail exploration and navigation, such as "who uses this component?" and "where is this feature implemented?". Software visualisation can be a great aid to understanding and exploring the answers to such questions, but visualisations require expertise to implement effectively, and they do not always scale well to large systems. We propose to automatically generate software visualisations based on software models derived from open source software corpora and from an analysis of the properties of typical developer queries and commonly used visualisations. The key challenges we see are (1) understanding how to match queries to suitable visualisations, and (2) scaling visualisations effectively to very large software systems and corpora. In the paper we motivate the idea of automatic software visualisation, enumerate the challenges and our proposals to address them, and describe some very initial results in our attempts to develop scalable visualisations of open source software corpora.
Abstract:
Manual used for the implementation of CDE's Geoprocessing courses in the South and East. It is composed of six modules covering important aspects of GIS handling and implementation: 1) Introduction to GIS; 2) Management issues; 3) GIS data preparation; 4) GIS data presentation; 5) Vector data analysis; 6) Raster data analysis. At the moment the manual is designed for use with ArcGIS; work on a manual for use with open source software is currently ongoing. This manual was successfully used during several GIS training events in Kenya and Tajikistan.
Abstract:
BACKGROUND Endoscopic carpal tunnel release (ECTR) is a minimally invasive approach to the treatment of carpal tunnel syndrome. There is scepticism regarding the safety of this technique, based on the assumption that it is a rather "blind" procedure and on the high number of severe complications reported in the literature. PURPOSE To evaluate whether there is evidence of a higher risk after ECTR in comparison with conventional open carpal tunnel release (OCTR). METHODS We searched MEDLINE (January 1966 to November 2013), EMBASE (January 1980 to November 2013), the Cochrane Neuromuscular Disease Group Specialized Register (November 2013) and CENTRAL (2013, issue 11, in The Cochrane Library). We hand-searched the reference lists of included studies. We included all randomized or quasi-randomized controlled trials (e.g. studies using alternation, date of birth, or case record number) that compared any ECTR technique with any OCTR technique. Safety was assessed by the incidence of major, minor and total numbers of complications, recurrences, and re-operations. The total time needed before return to work or to daily activities was also assessed. We synthesized data using a random-effects meta-analysis in STATA. We conducted a sensitivity analysis for rare events using a binomial likelihood, and judged the conclusiveness of the meta-analysis by calculating its conditional power. CONCLUSIONS ECTR is associated with less time off work or away from daily activities. The assessment of major complications, reoperations and recurrence of symptoms does not favor either intervention. There is an uncertain advantage of ECTR with respect to total minor complications (more transient paresthesia but fewer skin-related complications). Future studies are unlikely to alter these findings because of the rarity of the outcomes. A learning-curve effect might be responsible for the reduced recurrences and reoperations with ECTR in more recent studies, although formal statistical analysis failed to provide evidence for such an association. LEVEL OF EVIDENCE I.
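The random-effects synthesis named in the methods above can be sketched in a few lines. This is a generic DerSimonian-Laird pooling illustration with invented effect sizes and standard errors; it is not the review's STATA analysis and uses none of its data.

```python
# Illustrative random-effects meta-analysis (DerSimonian-Laird estimator).
# Effect sizes and standard errors below are invented placeholders.
import math

def random_effects_pool(effects, ses):
    """Pool per-study effect sizes under a random-effects model."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2

# Hypothetical log odds ratios and standard errors for five trials
effects = [0.50, -0.40, 0.80, 0.05, -0.30]
ses = [0.20, 0.25, 0.30, 0.15, 0.22]
pooled, se, tau2 = random_effects_pool(effects, ses)
print(f"pooled = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")
```

When the between-study variance estimate tau^2 is zero, the model degenerates to a fixed-effect analysis; with heterogeneous effects like those above, tau^2 is positive and the pooled standard error widens accordingly.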
Abstract:
People report suggested misinformation about a previously witnessed event for manifold reasons, such as social pressure, lack of memory of the original detail, or a firm belief that they remember the misinformation from the witnessed event. In our experiments (N = 429), which followed Loftus's paradigm, we tried to disentangle the reasons for reporting a central and a peripheral piece of misinformation in a recognition task by examining (a) the impact a warning about possible misinformation has on the error rate, and (b) whether misinformation, once reported, was actually attributed to the witnessed event in a later source-monitoring (SM) task. Overall, a misinformation effect was found for both items. The warning strongly reduced the misinformation effect, but only for the central item. In contrast, reports of the peripheral misinformation were correctly attributed to the misinformation source or, at least, ascribed to guesswork much more often than the central ones. As a consequence, after the SM task, the initially higher error rate for the peripheral item was even lower than that for the central item. The results convincingly show that the reasons for reporting misinformation, and correspondingly the potential to avoid such reports in legal settings, depend on the centrality of the misinformation.