960 results for geographical data
Abstract:
A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. This article proposes an extension of the CAR model in which the selection of the neighborhood depends on unknown parameter(s); the extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model is flexible enough to accurately estimate covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data on radioactive contamination of the soil in Switzerland after the Chernobyl accident.
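The deterministic neighborhood construction that the SNCAR model generalises can be sketched as follows. This is a minimal illustration only, not the authors' formulation: the function name, the distance-threshold rule, and the `(rho, tau)` parametrisation are assumptions; in an SNCAR model a threshold-like quantity of this kind would itself be treated as an unknown parameter to be estimated.

```python
import numpy as np

def car_precision(coords, rho, radius, tau=1.0):
    """Precision matrix Q = tau * (D - rho * W) of a proper CAR model
    whose neighborhood W is defined by a distance threshold `radius`.
    Illustrative sketch; parametrisation is an assumption, not the
    paper's exact SNCAR specification."""
    # Pairwise Euclidean distances between region centroids.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Deterministic adjacency: neighbors are regions within `radius`.
    W = ((d > 0) & (d <= radius)).astype(float)
    # Diagonal matrix of neighbor counts.
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)
```

For |rho| < 1 and every region having at least one neighbor, the resulting matrix is symmetric positive definite, so it is a valid Gaussian precision matrix.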
Abstract:
Environmental monitoring is becoming critical as human activity and climate change place greater pressure on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive for environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and consequently hinders effective utilization of the large datasets collected. This paper presents an overview of our current techniques for collecting, storing and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.
Abstract:
In this introductory chapter, "Current trends and issues in geographical education", to Schmeinck, D. and Lidstone, J. (Eds.) (2014) Standards and Research in Geographical Education: Current Trends and International Issues (Berlin: Mensch und Buch Verlag, pp. 5-16), the authors review and analyse eleven papers originally presented to the Congress of the International Geographical Union held in Cologne in 2012. Taking the collection of papers as a single corpus representing the "state of the art" of geography education, they applied lexical and bibliometric analyses in an innovative attempt to identify the nature of geographical education as represented by this anthology of peer-reviewed chapters presented at the start of the second decade of the twenty-first century.
Abstract:
What is the state of geographical education in the second decade of the 21st century? This volume presents a selection of peer-reviewed papers presented at the 2012 Cologne Congress of the International Geographical Union (IGU) sessions on Geographical Education as representative of current thinking in the area. It then presents (perhaps for the first time) a cross-case analysis of the common factors of all these papers as a current summary of the "state of the art" of geographical education today. The primary aim of the individual authors as well as the editors is not only to record the current state of the art of geographical education but also to promote ongoing discussion of the longer-term health and future prospects of international geographical education. We wish to encourage ongoing debate and discussion amongst local, national, regional and international education journals, conferences and discussion groups as part of the international mission of the Commission on Geographical Education. While the currency of these chapters, in terms of their foci, the breadth and recency of the theoretical literature on which they are based, and the new research findings they present, justifies considerable confidence in the current health of geographical education as an educational and research endeavour, each new publication should only be the start of new scholarly inquiry. Where should we, as a scholarly community, place our energies for the future? If readers are left with a new sense of direction, then the aims of the authors and editors will have been amply met.
Abstract:
This chapter outlines a perspective of educational assessment as enabling, whereby the learner is central and assessment is focused on supporting the knowledge, skills and dispositions necessary for lifelong learning. It argues that better education for young people is achievable when educational policy and practice give priority to learning improvement, thereby making assessment for accountability a related, though secondary, concern. The chapter describes how this work of internationally recognized scholars brings together diverse perspectives and theoretical frameworks and, in so doing, provides readers with a range of ways to consider their pathway through the book. A ‘map’ and summaries of chapters suggest a reading according to a thematic approach, geographical setting, author/s profile or content purposes depending on the reader’s own priorities. A section on assessment past, present, and futures calls for a rebalancing of improvement and accountability goals, and for countries to be careful to avoid privileging large-scale testing over other forms of data about learning and achievement.
Abstract:
This chapter addresses data modelling as a means of promoting statistical literacy in the early grades. Consideration is first given to the importance of increasing young children's exposure to statistical reasoning experiences and how data modelling can be a rich means of doing so. Selected components of data modelling are then reviewed, followed by a report on some findings from the third year of a three-year longitudinal study across grades one through three.
Abstract:
A variety of sustainable development research efforts and related activities are attempting to reconcile the issues of conserving our natural resources without limiting economic motivation, while also improving our social equity and quality of life. Land use/land cover change, occurring on a global scale, is an aggregate of local land use decisions and profoundly impacts our environment. It is therefore the local decision-making process that should be the eventual target of many of the ongoing data collection and research efforts which strive toward supporting a sustainable future. Satellite imagery is a primary source upon which to build a core data set for use by researchers in analyzing this global change. A process is necessary to link global change research, utilizing satellite imagery, to the local land use decision-making process. One example of this is the NASA-sponsored Regional Data Center (RDC) prototype. The RDC approach is an attempt to integrate science and technology at the community level. The anticipated result of this complex interaction between the research and decision-making communities will be realized in the form of long-term benefits to the public.
Abstract:
Historically, it appears that some of the WRCF have survived because i) they lack sufficient quantities of commercially valuable species; ii) they are located in remote or inaccessible areas; or iii) they have been protected as national parks and sanctuaries. Forests will be protected when the people deciding the fate of forests conclude that the conservation of forests is more beneficial, e.g. generates higher incomes or has cultural or social values, than their clearance. If this is not the case, forests will continue to be cleared and converted. In the future, the WRCF may be protected only by focused attention. Future policy options may include strategies for strong protection measures, the raising of public awareness about the value of forests, and concerted actions for reducing pressure on forest lands by providing alternatives to forest exploitation to meet the growing demand for forest products. Many areas with low population densities offer an opportunity for conservation if appropriate steps are taken now by national governments and the international community. This opportunity must be founded upon increased public and government awareness that forests are of vast importance to human welfare and to ecosystem services such as biodiversity, watershed protection, and carbon balance. Also paramount to this opportunity is the increased scientific understanding of forest dynamics and the technical capability to install global observation and assessment systems. High-resolution satellite data such as Landsat 7 and other technologically advanced satellite programs will provide unprecedented monitoring options for governing authorities. Technological innovation can contribute to the way forests are protected. The use of satellite imagery for regular monitoring and the Internet for information dissemination provides effective tools for raising worldwide awareness of the significance of forests and the intrinsic value of nature.
Abstract:
Natural distributions of most freshwater taxa are restricted geographically, a pattern that reflects dispersal limitation. Macrobrachium rosenbergii is unusual because it occurs naturally in rivers from near Pakistan in the west, across India and Bangladesh to the Malay Peninsula, and across the Sunda Shelf and Indonesian archipelago to western Java. Individuals cannot tolerate full marine conditions, so dispersal between river drainage basins must occur at limited geographical scales when ecological or climatic factors are favorable. We examined molecular diversity in wild populations of M. rosenbergii across its complete natural range to document patterns of diversity and to relate them to factors that have driven evolution of diversity in this species. We found 3 clades in the mitochondrial deoxyribonucleic acid (mtDNA) data set that corresponded geographically with eastern, central, and western sets of haplotypes that last shared a common ancestor 1 × 10⁶ y ago. The eastern clade was closest to the common ancestor of all 3 clades and to the common ancestor with its congener, Macrobrachium spinipes, distributed east of Huxley's Line. Macrobrachium rosenbergii could have evolved in the western Indonesian archipelago and spread westward during the early to mid-Pleistocene to India and Sri Lanka. Additional groups identified in the nuclear DNA data set in the central and western clades probably indicate secondary contact via dispersal between regions and modern introductions that have mixed nuclear and mtDNA genes. Pleistocene sea-level fluctuations can explain dispersal across the Indonesian archipelago and parts of mainland southeastern Asia via changing river drainage connections in shallow seas on wide continental shelves. At the western end of the modern distribution where continental shelves are smaller, intermittent freshwater plumes from large rivers probably permitted larval dispersal across inshore areas of lowered salinity.
Abstract:
The complex supply-chain relations of the construction industry, coupled with the substantial amount of information to be shared on a regular basis between the parties involved, make traditional paper-based data interchange methods inefficient, error-prone and expensive. Information technology (IT) applications that enable seamless data interchange, such as Electronic Data Interchange (EDI) systems, have generally failed to be implemented successfully in the construction industry. This paper analyses an alternative emerging technology, Extensible Markup Language (XML), and its applicability to streamlining business processes and improving data interchange methods within the construction industry; EDI technology is also analysed to identify the strategic advantages XML provides in overcoming the barriers to implementation. In addition, the successful implementation of an XML-based automated data interchange platform for a large organization, and its proposed benefits, are presented as a case study.
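As a toy illustration of the kind of XML-based interchange the paper discusses, the sketch below builds a small structured message and parses it back using Python's standard library. The element names (`materialOrder`, `item`, and so on) are invented for illustration and do not come from the paper or from any industry schema.

```python
import xml.etree.ElementTree as ET

# Sender side: build a hypothetical construction-supply order message.
order = ET.Element("materialOrder", id="PO-1")
item = ET.SubElement(order, "item", sku="CEM-42")
ET.SubElement(item, "quantity").text = "120"
ET.SubElement(item, "unit").text = "bags"

# Serialize to text for transmission between parties.
xml_text = ET.tostring(order, encoding="unicode")

# Receiver side: parse the same message back into a navigable structure.
parsed = ET.fromstring(xml_text)
qty = int(parsed.find("./item/quantity").text)
```

Because both sides share the element vocabulary, the structure survives the round trip intact, which is the property that makes XML attractive over ad-hoc paper or flat-file exchange.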
Abstract:
Enterprises, both public and private, have rapidly begun combining enterprise resource planning (ERP) with business analytics and "open data sets", which are often outside the control of the enterprise, to gain further efficiencies, build new service operations and increase business activity. In many cases, these business activities are based around relevant software systems hosted in a "cloud computing" environment. "Garbage in, garbage out", or "GIGO", is a term long used, dating from the 1960s, to describe problems of unqualified dependency on information systems. A more pertinent variation arose sometime later, namely "garbage in, gospel out", signifying that with large-scale information systems, such as ERP and usage of open datasets in a cloud environment, the ability to verify the authenticity of the data sets used may be almost impossible, resulting in dependence upon questionable results. Illicit data set "impersonation" becomes a reality. At the same time, the ability to audit such results may be an important requirement, particularly in the public sector. This paper discusses the need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment, and analyses some current technologies on offer that may be appropriate. However, severe limitations in addressing these requirements have been identified, and the paper proposes further research work in the area.
Abstract:
Enterprise resource planning (ERP) systems are rapidly being combined with "big data" analytics processes and publicly available "open data sets", which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as to identify new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a "cloud computing" environment. The over-50-year-old phrase expressing mistrust in computer systems, "garbage in, garbage out" or "GIGO", describes problems of unqualified and unquestioning dependency on information systems. A more relevant interpretation arose sometime later, namely "garbage in, gospel out", signifying that with large-scale information systems based around ERP and open datasets as well as "big data" analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable, unverifiable results. Illicit "impersonation" of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. This paper discusses the pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment, and examines some appropriate technologies currently on offer. However, severe limitations in addressing the problems identified are found, and the paper proposes further necessary research work for the area.
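One small building block of the integrity services the paper calls for can be sketched with a cryptographic digest: a consumer of an open data set recomputes a published fingerprint and rejects any modified copy. This is a minimal sketch, assuming a digest published out of band; it addresses integrity only, not the authenticity of whoever published the digest, which would additionally require signatures and naming services of the kind the paper discusses.

```python
import hashlib

def dataset_digest(data: bytes) -> str:
    """SHA-256 digest of a data set, used as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    # A tampered or "impersonated" data set fails this comparison,
    # provided the published digest itself can be trusted.
    return dataset_digest(data) == published_digest

# Hypothetical open data set (contents invented for illustration).
data = b"region,value\nA,1\nB,2\n"
digest = dataset_digest(data)
```

In practice the digest would be distributed through a channel independent of the data set itself, otherwise an impersonator could simply replace both.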
(Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
Abstract:
One of the concerns about the use of Bluetooth MAC Scanner (BMS) data, especially from urban arterials, is the bias in travel time estimates caused by multiple Bluetooth devices being transported by a single vehicle. For instance, if a bus is carrying 20 passengers with Bluetooth-equipped mobile phones, the discovery of these phones by a BMS will be treated as 20 different vehicles, and the average travel time along the corridor estimated from the BMS data will be biased towards the travel time of the bus. This paper integrates a Bus Vehicle Identification system with a BMS network to empirically evaluate such bias, if any. The paper also reports an interesting finding on the uniqueness of MAC IDs.
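The bus example can be made concrete with assumed numbers (the travel times and vehicle counts below are illustrative, not taken from the paper): averaging one record per detected MAC pulls the corridor estimate toward the slower bus, whereas averaging one record per vehicle does not.

```python
# One bus carrying 20 Bluetooth phones and 5 individual cars
# traverse the same corridor (all figures assumed for illustration).
bus_time, car_time = 12.0, 8.0  # travel times in minutes

# Naive BMS estimate: every detected MAC counts as a "vehicle",
# so the bus contributes 20 records instead of 1.
naive_mean = (20 * bus_time + 5 * car_time) / 25

# Unbiased reference: one record per actual vehicle.
per_vehicle_mean = (1 * bus_time + 5 * car_time) / 6
```

Here the naive mean lands near the bus travel time even though five of the six vehicles were faster cars, which is exactly the bias the paper sets out to quantify.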
Abstract:
Loop detectors are the oldest and most widely used traffic data source. On urban arterials, they are mainly installed for signal control. Recently, state-of-the-art Bluetooth MAC Scanners (BMS) have captured significant interest from stakeholders for area-wide traffic monitoring. Loop detectors provide flow, a fundamental traffic parameter, whereas BMS provide individual vehicle travel times between BMS stations. Hence, these two data sources complement each other and, if integrated, should increase the accuracy and reliability of traffic state estimation. This paper proposes a model that integrates loop and BMS data for seamless travel time and density estimation on urban signalised networks. The proposed model is validated using both real and simulated data, and the results indicate that its accuracy is over 90%.
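One simple way the two sources can complement each other is through the steady-state fundamental relation q = k·v: loops supply flow q, BMS travel times supply a space-mean speed v, and density k follows. The sketch below is a deliberately simplified fusion under that assumption, not the paper's proposed model, which handles signalised-network dynamics.

```python
def estimate_density(flow_veh_per_h, link_km, bms_travel_times_h):
    """Fuse loop-detector flow with BMS travel times via k = q / v.
    A steady-state sketch only; the paper's model is more elaborate."""
    # Space-mean speed from individual BMS travel-time records.
    mean_tt = sum(bms_travel_times_h) / len(bms_travel_times_h)
    speed_km_h = link_km / mean_tt
    # Fundamental relation: density = flow / speed (veh/km).
    return flow_veh_per_h / speed_km_h
```

With a loop flow of 600 veh/h on a 1 km link and BMS travel times averaging 0.02 h, the implied speed is 50 km/h and the density 12 veh/km.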
Abstract:
Using Media Access Control (MAC) addresses for data collection and tracking is a capable and cost-effective approach, as traditional methods such as surveys and video surveillance have numerous drawbacks and limitations. Positioning cell phones via the Global System for Mobile communication (GSM) was considered an attack on people's privacy. MAC addresses, by contrast, merely keep a unique log of a WiFi- or Bluetooth-enabled device connecting to another device, without comparable potential for privacy infringement. This paper presents the use of MAC address data collection for analysing the spatio-temporal dynamics of humans in terms of shared-space utilization. The paper first discusses the critical challenges and key benefits of MAC address data as a tracking technology for monitoring human movement. Proximity-based MAC address tracking is postulated as an effective methodology for analysing the complex spatio-temporal dynamics of human movements in shared zones such as lounge and office areas. A case study of a university staff lounge area is described in detail, and the results indicate a significant added value of the methodology for human movement tracking. By analysing MAC address data in the study area, clear statistics such as staff utilisation frequency, utilisation peak periods, and staff time spent are obtained. The analyses also reveal staff socialising profiles in terms of group and solo gathering. The paper concludes with a discussion of why MAC address tracking offers significant advantages for tracking human behaviour in terms of shared-space utilisation relative to other, more prominent technologies, and outlines some of its remaining deficiencies.
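Statistics of the kind reported in the case study (time spent, peak periods) can be derived from a proximity scan log roughly as follows. The log entries, MAC values, and the fixed scan interval below are all invented for illustration; the paper's actual pipeline is not specified at this level of detail.

```python
from collections import defaultdict

# Hypothetical scan log from a lounge-area scanner:
# (anonymised MAC, hour of day) per detection.
scans = [("aa:01", 9), ("aa:01", 9), ("aa:01", 12),
         ("bb:02", 12), ("bb:02", 12), ("cc:03", 12)]

SCAN_INTERVAL_MIN = 5  # assumed fixed polling interval of the scanner

minutes_present = defaultdict(int)  # dwell-time proxy per device
hour_counts = defaultdict(int)      # detections per hour of day

for mac, hour in scans:
    # Each detection implies presence for roughly one scan interval.
    minutes_present[mac] += SCAN_INTERVAL_MIN
    hour_counts[hour] += 1

# Utilisation peak period: the hour with the most detections.
peak_hour = max(hour_counts, key=hour_counts.get)
```

Counting detections as fixed-length presence intervals is a crude dwell-time proxy; a real analysis would merge consecutive detections into visits before aggregating.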