Abstract:
This paper aims to sketch some bases for the problematization of digital tools as objects of knowledge for Social Sciences and Humanities (SSH). Our purpose is to raise some relevant questions about the Digital Humanities (DH) and how SSH and Computer Sciences (CS) can work together to face new challenges. We discuss some tension points and propose a model for SSH and CS collaboration for joint projects in cultural digitization.
Abstract:
The proposed event is part of the 2013 program of IFLA (International Federation of Library Associations) as well as of the IFLA CLM Committee on eBooks and e-lending. The proposed event is also part of the activities of a research project with international participation, "Copyright Policies of libraries and other cultural institutions" (2012–2014), financed by the National Science Fund of the Bulgarian Ministry of Education, Youth and Science (Contract No ДФНИ-К01/0002-21.11.2012).
Abstract:
The main focus of this paper is on mathematical theory and methods which have a direct bearing on problems involving multiscale phenomena. Modern technology is refining measurement and data collection to spatio-temporal scales on which observed geophysical phenomena are displayed as intrinsically highly variable and intermittent hierarchical structures, e.g. rainfall, turbulence, etc. The hierarchical structure is reflected in the occurrence of a natural separation of scales which collectively manifest at some basic unit scale. Thus proper data analysis and inference require a mathematical framework which couples the variability over multiple decades of scale in which basic theoretical benchmarks can be identified and calculated. This continues the main theme of the research in this area of applied probability over the past twenty years.
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014.
Abstract:
This work was supported in part by the EU project "2nd Generation Open Access Infrastructure for Research in Europe" (OpenAIRE+). The autumn training school "Development and Promotion of Open Access to Scientific Information and Research" is organized in the frame of the Fourth International Conference on Digital Presentation and Preservation of Cultural and Scientific Heritage (DiPP2014; September 18–21, 2014, Veliko Tarnovo, Bulgaria; http://dipp2014.math.bas.bg/), held under the UNESCO patronage. The main organiser is the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, with the support of the EU project FOSTER (http://www.fosteropenscience.eu/) and the P. R. Slaveykov Regional Public Library in Veliko Tarnovo, Bulgaria.
Abstract:
We present some recent trends in the field of digital cultural heritage management and applications, including digital cultural data curation, interoperability, linked open data publishing, crowdsourcing, visualization, platforms for digital cultural heritage, and related applications. We present some examples from research and development projects of MUSIC/TUC in those areas.
Abstract:
The paper discusses some current trends in the development and use of semantic portals for accessing heterogeneous museum collections on the Semantic Web. The presentation focuses on issues concerning metadata standards for museums, museum collection ontologies, and semantic search engines. A number of design considerations and recommendations are formulated.
Abstract:
In this paper, we first give an overview of the French heritage project PATRIMA, launched in 2011 as one of the Projets d'investissement pour l'avenir, a French funding program meant to last for the next ten years. The overall purpose of the PATRIMA project is to promote and fund research on various aspects of heritage presentation and preservation. Such research being interdisciplinary, research groups in history, physics, chemistry, biology and computer science are involved in the project. The PATRIMA consortium involves research groups from universities and from the main museums and cultural heritage institutions in Paris and its surroundings. More specifically, the main members of the consortium are the two universities of Cergy-Pontoise and Versailles Saint-Quentin and the following famous museums and cultural institutions: Musée du Louvre, Château de Versailles, Bibliothèque nationale de France, Musée du Quai Branly, and Musée Rodin. In the second part of the paper, we focus on two projects funded by PATRIMA, named EDOP and Parcours, which deal with data integration. The goal of the EDOP project is to provide users with a data space for the integration of heterogeneous information about heritage; Linked Open Data are considered for effective access to the corresponding data sources. The Parcours project, in turn, aims at building an ontology of the terminology of restoration and conservation techniques. Such an ontology is meant to provide a common terminology for researchers using different databases and different vocabularies.
Abstract:
This paper reflects on the experience of PanamaTipico.com, an independent website specialized in the research and preservation of the cultural heritage of the Republic of Panama, a developing country located in Central America. Basic information about the project is described. Also discussed are some of the challenges confronted by the project and the results achieved. The goal of this paper is to encourage a discussion on whether or not the experience of PanamaTipico.com is comparable to the experiences of similar projects in developing countries in Eastern Europe and elsewhere.
Abstract:
Learning to Research, Researching to Learn explores the integration of research into teaching and learning at all levels of higher education. The chapters draw on the long and ongoing debate about the teaching-research nexus in universities. Although the vast majority of academics believe that there is an important and valuable link between teaching and research, the precise nature of this relationship continues to be contested. The book includes chapters that showcase innovative ways of learning to research: how research is integrated into coursework teaching; how students learn the processes of research; and how universities are preparing students to engage with the world. The chapters also showcase innovative ways of researching to learn, exploring how students learn through doing research, how they conceptualise the knowledge of their fields of study through the processes of doing research, and how students experiment and reflect on the results produced. These are the key issues addressed by this anthology, as it brings together analyses of the ways in which university teachers are developing research skills in their students, creating enquiry-based approaches to teaching, and engaging in education research themselves. The studies here explore the links between teaching, learning and research in a range of contexts, from pre-enrolment through to academic staff development, in Australia, the UK, the US, Singapore and Denmark. Through a rich array of theoretical and methodological approaches, the collection seeks to further our understanding of how universities can play an effective role in educating graduates suited to the twenty-first century.
Abstract:
The paper presents recent developments in the domain of digital mathematics libraries towards the envisioned 21st Century Global Library for Mathematics. The Bulgarian Digital Mathematical Library BulDML and the Czech Digital Mathematical Library DML-CZ are founding partners of the EuDML Initiative and through it contribute to the sustainable development of the European Digital Mathematics Library EuDML and to the global advancements in this area.
Abstract:
ACM Computing Classification System (1998): F.2.1, G.1.5, I.1.2.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
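As a minimal sketch of how a bigness taxonomy driven by n and p might be operationalized, the snippet below routes a data set to a tool family based on its sample size and dimensionality. The thresholds, category names, and tool pairings are illustrative assumptions for this sketch, not the classification proposed in the paper.

```python
# Hypothetical sketch: classify a data set by the relative size of its
# sample size n and dimensionality p, then suggest a family of tools.
# Thresholds and category-to-tool pairings are illustrative only.

def bigness_category(n: int, p: int, large_n: int = 100_000, large_p: int = 1_000) -> str:
    """Return a coarse bigness category based on n and p (assumed thresholds)."""
    if n >= large_n and p >= large_p:
        return "large n, large p"
    if n >= large_n:
        return "large n, small p"
    if p >= large_p:
        return "large p, small n"
    return "small n, small p"

def suggested_tools(category: str) -> list[str]:
    """Pair each category with tools named in the abstract (illustrative pairing)."""
    return {
        "large p, small n": ["Regularization", "Penalization", "Selection", "Projection"],
        "large n, small p": ["Parallelization", "Aggregation", "Sequentialization"],
        "large n, large p": ["Compression", "Reduction", "Randomization", "Hybridization"],
        "small n, small p": ["Standardization", "Imputation"],
    }[category]

if __name__ == "__main__":
    n, p = 500, 20_000                      # e.g. a genomics-style wide data set
    category = bigness_category(n, p)       # -> "large p, small n"
    print(category, "->", suggested_tools(category))
```

The point of the sketch is only that the (n, p) pair, not either quantity alone, determines which class of techniques is appropriate, which is the sense in which the abstract contrasts large p, small n data with the large n, small p variety.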