939 results for Open Information Extraction
Abstract:
The use of information technology (IT) in dentistry is far-ranging. In order to produce a working document for the dental educator, this paper focuses on those methods where IT can assist in the education and competence development of dental students and dentists (e.g. e-learning, distance learning, simulations and computer-based assessment). Web pages and other information-gathering devices have become an essential part of our daily life, as they provide extensive information on all aspects of our society. This is mirrored in dental education, where there are many different tools available, as listed in this report. IT offers added value to traditional teaching methods and examples are provided. In spite of the continuing debate on the learning effectiveness of e-learning applications, students request such approaches as an adjunct to the traditional delivery of learning materials. Faculty require support to enable them to use the technology effectively to the benefit of their students. This support should be provided by the institution, and it is suggested that, where possible, institutions should appoint an e-learning champion with good interpersonal skills to support and encourage faculty change. From a global perspective, all students and faculty should have access to e-learning tools. This report encourages open access to e-learning material, platforms and programs. Such learning materials must have well-defined learning objectives and undergo peer review to ensure content validity, accuracy, currency, the use of evidence-based data and the use of best practices. To ensure that the developers' intellectual rights are protected, the original content needs to be secure from unauthorized changes. Strategies and recommendations on how to improve the quality of e-learning are outlined. In the area of assessment, traditional examination schemes can be enriched by IT, whilst the Internet can provide many innovative approaches. Future trends in IT will revolve around improved uptake and access facilitated by the technology (hardware and software). The use of Web 2.0 shows considerable promise and may have implications on a global level. For example, the one-laptop-per-child project shows what Web 2.0 can do: minimal use of hardware to maximize use of the Internet infrastructure. In essence, simple technology can overcome many of the barriers to learning. IT will always remain exciting, as it is always changing, and its users, whether dental students, educators or patients, are like chameleons adapting to the ever-changing landscape.
Abstract:
A post-classification change detection technique based on a hybrid classification approach (unsupervised and supervised) was applied to Landsat Thematic Mapper (TM), Landsat Enhanced Thematic Mapper Plus (ETM+), and ASTER images acquired in 1987, 2000 and 2004, respectively, to map land use/cover changes in the Pic Macaya National Park in the southern region of Haiti. Each image was classified individually into six land use/cover classes: built-up, agriculture, herbaceous, open pine forest, mixed forest, and barren land, using the unsupervised ISODATA and supervised maximum likelihood classifiers with the aid of ground truth data collected in the field. Ground truth information collected in the field in December 2007, including equalized stratified random points that were visually interpreted, was used to assess the accuracy of the classification results. The overall accuracy of the land classification for each image was 82% (1987), 82% (2000), and 87% (2004). A post-classification change detection technique was used to produce change images for 1987 to 2000, 1987 to 2004, and 2000 to 2004. It was found that significant changes in land use/cover occurred over the 17-year period. The results showed increases in built-up (from 10% to 17%) and herbaceous (from 5% to 14%) areas between 1987 and 2004. The increase in herbaceous cover was mostly caused by the abandonment of exhausted agricultural land. At the same time, open pine forest and mixed forest areas lost 75% and 83% of their area, respectively, to other land use/cover types: open pine forest (from 20% to 14%) and mixed forest (from 18% to 12%) were transformed into agricultural area or barren land. This study illustrated the continuing deforestation, land degradation and soil erosion in the region, which in turn are leading to a decrease in vegetative cover. The study also showed the importance of Remote Sensing (RS) and Geographic Information System (GIS) technologies for estimating changes in land use/cover in a timely manner and evaluating their causes in order to design an ecologically based management plan for the park.
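As a rough illustration of the post-classification comparison step described above (a minimal sketch with made-up class maps, not the study's actual data or GIS workflow), a per-pixel transition matrix between two classified images can be computed as follows:

```python
# Minimal sketch of post-classification change detection between two
# already-classified land-cover maps, assuming each map is a NumPy array of
# integer class codes on the same grid. Class names are illustrative only.
import numpy as np

CLASSES = ["built-up", "agriculture", "herbaceous",
           "open pine forest", "mixed forest", "barren land"]

def change_matrix(before, after, n_classes=len(CLASSES)):
    """Cross-tabulate per-pixel class transitions between two dates."""
    assert before.shape == after.shape
    matrix = np.zeros((n_classes, n_classes), dtype=np.int64)
    for i in range(n_classes):
        for j in range(n_classes):
            matrix[i, j] = np.count_nonzero((before == i) & (after == j))
    return matrix

# Toy example with random class maps standing in for the 1987 and 2004 images.
rng = np.random.default_rng(0)
lc_1987 = rng.integers(0, len(CLASSES), size=(100, 100))
lc_2004 = rng.integers(0, len(CLASSES), size=(100, 100))
m = change_matrix(lc_1987, lc_2004)
print(m)                              # rows: 1987 classes, columns: 2004 classes
print(m.diagonal().sum() / m.sum())   # fraction of pixels left unchanged
```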
Abstract:
Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamic systems that govern various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines to capture flow patterns and how to pick good viewpoints to observe flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. We enable observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnection in the transformed space. Most viewpoint selection methods only consider external viewpoints outside of the flow field. This does not convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints to provide users with close-up views of the flow field for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to deal with unsteady flow fields. Besides flow field visualization, some other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we develop a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
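As a small illustration of the information-theoretic flavour of streamline scoring (a sketch of one possible ingredient, not the dual information-channel formulation of [81] or the view-dependent method of [56]), streamlines can be ranked by the Shannon entropy of their projected direction distribution:

```python
# Minimal sketch: score a streamline by the Shannon entropy of its segment
# direction distribution. Illustrative stand-in for the cited methods.
import numpy as np

def direction_entropy(points, n_bins=16):
    """Entropy of the 2D projected direction histogram of a polyline."""
    segs = np.diff(points, axis=0)                # consecutive segment vectors
    angles = np.arctan2(segs[:, 1], segs[:, 0])   # segment directions in [-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A straight line carries little directional information; a spiral carries more.
t = np.linspace(0, 4 * np.pi, 200)
straight = np.column_stack([t, t])
spiral = np.column_stack([t * np.cos(t), t * np.sin(t)])
print(direction_entropy(straight), direction_entropy(spiral))
```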
Abstract:
For decades, Distance Transforms have proven to be useful for many image processing applications, and more recently they have started to be used in computer graphics environments. The goal of this paper is to propose a new technique based on Distance Transforms for detecting mesh elements which are close to the objects' external contour (from a given point of view), and using this information to weight the approximation error that will be tolerated during the mesh simplification process. The obtained results are evaluated in two ways: visually and using an objective metric that measures the geometrical difference between two polygonal meshes.
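A minimal sketch of the core idea, assuming a binary silhouette image rendered from the chosen viewpoint (the mask, falloff parameter and weighting function below are illustrative assumptions, not the paper's actual formulation):

```python
# Use a Distance Transform of the object's silhouette to derive per-pixel
# weights: near the contour less simplification error should be tolerated.
import numpy as np
from scipy.ndimage import distance_transform_edt

def contour_weights(silhouette, falloff=10.0):
    """Higher weight (less tolerated error) near the silhouette boundary."""
    inside = distance_transform_edt(silhouette)    # distance to background
    outside = distance_transform_edt(~silhouette)  # distance to foreground
    dist_to_contour = np.where(silhouette, inside, outside)
    return np.exp(-dist_to_contour / falloff)      # ~1 near the contour, decaying away

# Toy silhouette: a filled disk.
yy, xx = np.mgrid[0:128, 0:128]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
w = contour_weights(mask)
print(w.max(), w.min())
```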
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactical purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) We suggest measures to shape the context of CSCL applications and support their initial and continuous use. (2) We show how log files can be used to analyze how, when and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use are available.
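A minimal sketch of this kind of log file analysis, assuming a simple hypothetical log format (tab-separated timestamp, user, action) rather than CommSy's actual logs:

```python
# Count how often, when, and by whom a system is used from timestamped log lines.
from collections import Counter
from datetime import datetime

def usage_counts(log_lines):
    by_user, by_week = Counter(), Counter()
    for line in log_lines:
        timestamp, user, action = line.rstrip("\n").split("\t")
        week = datetime.fromisoformat(timestamp).strftime("%G-W%V")
        by_user[user] += 1
        by_week[week] += 1
    return by_user, by_week

sample = [
    "2002-04-15T09:12:00\talice\tupload_material",
    "2002-04-15T10:03:00\tbob\tread_announcement",
    "2002-04-22T14:40:00\talice\tpost_discussion",
]
users, weeks = usage_counts(sample)
print(users)   # who uses the system, and how much
print(weeks)   # when usage peaks over the course
```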
Liferay, Lecture2Go, Hochschulapps, OERs-MOOCs, Open IDM, e-Identity, CampusSource White Paper Award
Abstract:
Talks and presentations from the CampusSource conference on 25 April 2013 at the FernUniversität in Hagen on the topics: Liferay, Lecture2Go, Hochschulapps, OERs-MOOCs, Open IDM, e-Identity, CampusSource White Paper Award
Abstract:
Massive Open Online Courses (MOOCs) are courses that take place online and, owing to the absence of admission restrictions and their free availability, reach very high numbers of participants. The first MOOC was offered in 2011 by Sebastian Thrun, professor of computer science at Stanford University, on the topic of artificial intelligence and had 160,000 participants. Subsequently, MOOCs were hailed as the revolutionary teaching and learning innovation, and more and more companies founded MOOC platforms. Since the end of 2012, the first institutions in Germany have also been offering their own platforms with MOOCs. Essentially two variants are distinguished, xMOOCs and cMOOCs: xMOOCs offer video-recorded lectures that are interspersed with tests and questions and for which assignments are handed out; they are complemented by forums. cMOOCs are oriented more towards the format of a seminar or workshop, in which participants can help develop and shape the content themselves. In order to assess the potential, but also the weaknesses, of MOOCs, however, a more differentiated examination is required than has taken place so far. This volume presents experience reports and examples from German universities or with German participation and reflects on the phenomenon of MOOCs from didactic, historical and education policy perspectives.
Abstract:
Earth observations (EO) represent a growing and valuable resource for many scientific, research and practical applications carried out by users around the world. Access to EO data for some applications or activities, like climate change research or emergency response activities, becomes indispensable for their success. However, EO data, or products derived from them, are often (or are claimed to be) subject to intellectual property law protection and are licensed under specific conditions regarding access and use. Restrictive conditions on data use can be prohibitive for further work with the data. The Global Earth Observation System of Systems (GEOSS) is an initiative led by the Group on Earth Observations (GEO) with the aim of providing coordinated, comprehensive, and sustained EO and information for making informed decisions in various areas beneficial to societies, their functioning and development. It seeks to share data with users world-wide with the fewest possible restrictions on their use by implementing the GEOSS Data Sharing Principles adopted by GEO. The Principles proclaim full and open exchange of data shared within GEOSS, while recognising relevant international instruments and national policies and legislation through which restrictions on the use of data may be imposed. The paper focuses on the issue of the legal interoperability of data that are shared with varying restrictions on use, with the aim of exploring the options for making data interoperable. The main question it addresses is whether the public domain or its equivalents represent the best mechanism to ensure legal interoperability of data. To this end, the paper analyses legal protection regimes and their norms applicable to EO data. Based on the findings, it highlights the existing public law statutory, regulatory, and policy approaches, as well as private law instruments, such as waivers, licenses and contracts, that may be used to place datasets in the public domain, or otherwise make them publicly available for use and re-use without restrictions. It uses GEOSS and its particular characteristics as a system to identify ways to reconcile the vast possibilities it provides through sharing of data from various sources and jurisdictions on the one hand, and the restrictions on the use of the shared resources on the other. On a more general level, the paper seeks to draw attention to the obstacles and potential regulatory solutions for sharing factual or research data for purposes that go beyond research and education.
Abstract:
Funded by the Library and the Vice President for Research and Economic Development, Digital Repository @ Iowa State University is a service for Iowa State's faculty, students and staff to manage, preserve and provide access to their scholarship. This presentation provides an overview of open access and introduces the repository to the Library Liaisons.
Abstract:
BACKGROUND Traditionally, arthrotomy has rarely been performed during surgery for slipped capital femoral epiphysis (SCFE). As a result, most pathophysiological information about the articular surfaces was derived clinically and radiographically. Novel insights regarding deformity-induced damage and epiphyseal perfusion became available with surgical hip dislocation. QUESTIONS/PURPOSES We (1) determined the influence of the chronicity of prodromal symptoms and the severity of SCFE deformity on the severity of cartilage damage. (2) In surgically confirmed disconnected epiphyses, we determined the influence of injury and time to surgery on epiphyseal perfusion; and (3) the frequency of new bone at the posterior neck potentially reducing perfusion during epimetaphyseal reduction. METHODS We reviewed 116 patients with 119 SCFEs and available records, treated between 1996 and 2011. Acetabular cartilage damage was graded as +/++/+++ in 109 of the 119 hips. Epiphyseal perfusion was determined with laser-Doppler flowmetry at capsulotomy and after reduction. Information about bone at the posterior neck was retrieved from operative reports. RESULTS Ninety-seven of 109 hips (89%) had documented cartilage damage; severity was not associated with a higher slip angle or chronicity; disconnected epiphyses had less damage. Temporary or definitive cessation of perfusion in disconnected epiphyses increased with time to surgery; posterior bone resection improved the perfusion. In one case of necrosis the retinaculum was ruptured; two occurred in the group with the longest interval to surgery. Posterior bone formation is frequent in disconnected epiphyses, even without prodromal periods. CONCLUSIONS Addressing the cause of cartilage damage (cam impingement) should become an integral part of SCFE surgery. Early surgery for disconnected epiphyses appears to reduce the risk of necrosis. Slip reduction without resection of posterior bone apposition may jeopardize epiphyseal perfusion. LEVEL OF EVIDENCE Level IV, retrospective case series. See Guidelines for Authors for a complete description of levels of evidence.
Abstract:
For many years a combined analysis of pionic hydrogen and deuterium atoms has been known as a good tool to extract information on the isovector and especially on the isoscalar s-wave πN scattering length. However, given the smallness of the isoscalar scattering length, the analysis becomes useful only if the pion–deuteron scattering length is controlled theoretically to a high accuracy comparable to the experimental precision. To achieve the required few-percent accuracy one needs theoretical control over all isospin-conserving three-body πNN → πNN operators up to one order before the contribution of the dominant unknown (N†N)²ππ contact term. This term appears at next-to-next-to-leading order in Weinberg counting. In addition, one needs to include isospin-violating effects in both two-body (πN) and three-body (πNN) operators. In this talk we discuss the results of the recent analysis where these isospin-conserving and -violating effects have been carefully taken into account. Based on this analysis, we present the up-to-date values of the s-wave πN scattering lengths.
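For context, the standard isospin decomposition assumed in such analyses (textbook convention, not a result quoted from the talk) reads

\[
a_{\pi^- p} = a^{+} + a^{-}, \qquad a_{\pi^- n} = a^{+} - a^{-},
\]

where \(a^{+}\) is the isoscalar and \(a^{-}\) the isovector s-wave scattering length. The pionic-hydrogen level shift constrains \(a_{\pi^- p}\), while the pion–deuteron scattering length is, at leading order, sensitive mainly to the small isoscalar combination, which is why the few-percent control over the three-body πNN operators mentioned above is needed.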
Abstract:
We present the results of an investigation into the nature of the information needs of software developers who work in projects that are part of larger ecosystems. In an open-question survey we asked framework and library developers about their information needs with respect to both their upstream and downstream projects. We investigated what kind of information is required, why it is necessary, and how the developers obtain this information. The results show that the downstream needs are grouped into three categories roughly corresponding to the different stages in their relation with an upstream: selection, adoption, and co-evolution. The less numerous upstream needs are grouped into two categories: project statistics and code usage. The current-practices part of the study shows that to satisfy many of these needs developers use non-specific tools and ad hoc methods. We believe that this is a largely unexplored area of research.
Abstract:
Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In addition, in this paper, we propose a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for the extraction of fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
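A minimal sketch of inductive, approximate semantics extraction in this spirit (graded term relatedness from document co-occurrence; the toy corpus and the normalization are illustrative assumptions, not the paper's algorithm):

```python
# Derive fuzzy "is-related-to" memberships between terms from their
# normalized co-occurrence in documents.
from collections import Counter
from itertools import combinations

def fuzzy_relatedness(documents):
    """Return membership[a][b] in [0, 1] estimating how related term b is to a."""
    term_freq, pair_freq = Counter(), Counter()
    for doc in documents:
        terms = set(doc.lower().split())
        term_freq.update(terms)
        pair_freq.update(frozenset(p) for p in combinations(sorted(terms), 2))
    membership = {}
    for pair, n_ab in pair_freq.items():
        a, b = tuple(pair)
        membership.setdefault(a, {})[b] = n_ab / term_freq[a]   # P(b | a) as fuzzy degree
        membership.setdefault(b, {})[a] = n_ab / term_freq[b]
    return membership

docs = [
    "fuzzy ontology semantics web",
    "fuzzy classification web content",
    "ontology web knowledge",
]
m = fuzzy_relatedness(docs)
print(m["ontology"])   # graded association of 'ontology' with co-occurring terms
```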
Abstract:
This paper presents fuzzy clustering algorithms to establish a grassroots ontology (a machine-generated weak ontology) based on folksonomies. Furthermore, it describes a search engine that retrieves vaguely associated terms and aggregates them into several meaningful cluster categories, based on the introduced weak grassroots ontology. A potential application of this ontology, weblog extraction, is illustrated using a simple example. Added value and possible future studies are discussed in the conclusion.
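A minimal sketch of the fuzzy clustering building block (a plain fuzzy c-means on toy tag feature vectors; the data, parameters and feature representation are assumptions, not taken from the paper):

```python
# Fuzzy c-means: tags (represented here by toy feature vectors) receive graded
# membership in several clusters instead of a single hard label.
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))         # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy tag vectors: two loose groups with one ambiguous tag in between.
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.1], [0.5, 0.5]])
centers, U = fuzzy_c_means(X)
print(np.round(U, 2))   # the middle tag belongs partially to both clusters
```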