Abstract:
This chapter addresses data modelling as a means of promoting statistical literacy in the early grades. Consideration is first given to the importance of increasing young children’s exposure to statistical reasoning experiences and to how data modelling can be a rich means of doing so. Selected components of data modelling are then reviewed, followed by a report on some findings from the third year of a three-year longitudinal study across grades one through three.
Abstract:
A variety of sustainable development research efforts and related activities attempt to reconcile the conservation of our natural resources with economic motivation while also improving social equity and quality of life. Land use/land cover change, occurring on a global scale, is an aggregate of local land use decisions and profoundly impacts our environment. It is therefore the local decision making process that should be the eventual target of many of the ongoing data collection and research efforts which strive toward supporting a sustainable future. Satellite imagery is a primary source of data upon which to build a core data set for researchers analyzing this global change. A process is necessary to link global change research, utilizing satellite imagery, to the local land use decision making process. One example of this is the NASA-sponsored Regional Data Center (RDC) prototype. The RDC approach is an attempt to integrate science and technology at the community level. The anticipated result of this complex interaction between the research and decision making communities will be realized in the form of long-term benefits to the public.
Abstract:
Historically, it appears that some of the WRCF have survived because i) they lack sufficient quantities of commercially valuable species; ii) they are located in remote or inaccessible areas; or iii) they have been protected as national parks and sanctuaries. Forests will be protected when the people who decide the fate of forests conclude that the conservation of forests is more beneficial, e.g. generates higher incomes or has cultural or social value, than their clearance. If this is not the case, forests will continue to be cleared and converted. In the future, the WRCF may be protected only by focused attention. Future policy options may include strategies for strong protection measures, the raising of public awareness about the value of forests, and concerted actions for reducing pressure on forest lands by providing alternatives to forest exploitation to meet the growing demand for forest products. Many areas with low population densities offer an opportunity for conservation if appropriate steps are taken now by national governments and the international community. This opportunity must be founded upon increased public and government awareness that forests have vast importance for human welfare and for ecosystem services such as biodiversity, watershed protection, and carbon balance. Also paramount to this opportunity is increased scientific understanding of forest dynamics and the technical capability to install global observation and assessment systems. High-resolution satellite data such as those from Landsat 7 and other technologically advanced satellite programs will provide unprecedented monitoring options for governing authorities. Technological innovation can contribute to the way forests are protected. The use of satellite imagery for regular monitoring and of the Internet for information dissemination provides effective tools for raising worldwide awareness of the significance of forests and the intrinsic value of nature.
Abstract:
The complex supply chain relations of the construction industry, coupled with the substantial amount of information to be shared on a regular basis between the parties involved, make traditional paper-based data interchange methods inefficient, error prone and expensive. Information technology (IT) applications that enable seamless data interchange, such as Electronic Data Interchange (EDI) systems, have been successful elsewhere but have generally failed to achieve successful implementation in the construction industry. An alternative emerging technology, Extensible Markup Language (XML), and its applicability to streamlining business processes and improving data interchange methods within the construction industry are analysed, as is EDI technology, to identify the strategic advantages that XML provides in overcoming the barriers to implementation. In addition, the successful implementation of an XML-based automated data interchange platform for a large organization, and its proposed benefits, are presented as a case study.
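To make the XML data interchange idea concrete, here is a minimal sketch using Python's standard library. The purchase-order fragment and its element names are invented for illustration and are not taken from any real construction-industry schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML purchase-order fragment of the kind such platforms exchange;
# the element and attribute names here are illustrative only.
doc = """
<purchaseOrder id="PO-1042">
  <supplier>Acme Concrete</supplier>
  <item code="C30" qty="12" unit="m3"/>
  <item code="RB16" qty="400" unit="kg"/>
</purchaseOrder>
"""

root = ET.fromstring(doc)
order_id = root.get("id")                               # "PO-1042"
items = [(i.get("code"), int(i.get("qty"))) for i in root.findall("item")]
print(order_id, items)
```

Because the structure is self-describing, any party can validate and process the document without a pre-agreed binary format, which is the interoperability advantage the abstract attributes to XML over traditional EDI.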
Abstract:
Enterprises, both public and private, have rapidly begun exploiting the benefits of enterprise resource planning (ERP) combined with business analytics and “open data sets”, which are often outside the control of the enterprise, to gain further efficiencies, build new service operations and increase business activity. In many cases, these business activities are based around relevant software systems hosted in a “cloud computing” environment. “Garbage in, garbage out”, or “GIGO”, is a term long used to describe problems of unqualified dependency on information systems, dating from the 1960s. A more pertinent variation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems, such as ERP and the use of open datasets in a cloud environment, the ability to verify the authenticity of the data sets used may be almost impossible, resulting in dependence upon questionable results. Illicit data set “impersonation” becomes a reality. At the same time, the ability to audit such results may be an important requirement, particularly in the public sector. This paper discusses the need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment and analyses some current technologies on offer that may be appropriate. However, severe limitations in addressing these requirements have been identified, and the paper proposes further research work in the area.
Abstract:
Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as to identify new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a “cloud computing” environment. The over 50-year-old phrase expressing mistrust in computer systems, namely “garbage in, garbage out” or “GIGO”, describes problems of unqualified and unquestioning dependency on information systems. A more relevant GIGO interpretation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems based around ERP and open datasets as well as “big data” analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable results which are unverifiable. Illicit “impersonation” of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. The pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment is discussed in this paper. Some appropriate technologies currently being offered are also examined. However, severe limitations in addressing the problems identified are found, and the paper proposes further necessary research work for the area.
(Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
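One building block for the data-set integrity verification the abstract calls for is a published cryptographic digest, which lets a consumer detect “impersonation” of or tampering with a data set. A minimal sketch, with invented data (this is an illustration of the general technique, not the paper's proposal):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest a publisher could advertise alongside an open data set."""
    return hashlib.sha256(data).hexdigest()

# Publisher computes and advertises the digest of the authentic data set.
published = sha256_digest(b"open dataset contents")

# Consumer re-hashes what was downloaded and compares.
received = b"open dataset contents"
tampered = b"open dataset contents (modified)"

print(sha256_digest(received) == published)   # authentic copy matches
print(sha256_digest(tampered) == published)   # modified copy is detected
```

A digest alone does not establish who published the data set; that is the identity and naming-service gap the paper argues remains open.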
Abstract:
One of the concerns about the use of Bluetooth MAC Scanner (BMS) data, especially from urban arterials, is the bias in travel time estimates caused by multiple Bluetooth devices being transported by a single vehicle. For instance, if a bus is carrying 20 passengers with Bluetooth-equipped mobile phones, the discovery of these phones by a BMS will be counted as 20 different vehicles, and the average travel time along the corridor estimated from the BMS data will be biased towards the travel time of the bus. This paper integrates a Bus Vehicle Identification system with a BMS network to empirically evaluate such bias, if any. The paper also reports an interesting finding on the uniqueness of MAC IDs.
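The arithmetic behind this bias can be sketched with invented numbers (not taken from the paper): a single slow bus carrying many Bluetooth phones dominates a device-level average that a vehicle-level average would not show.

```python
# Toy scenario: one bus (20 detected phones) takes 12 min on a corridor,
# while 5 cars (one detected device each) take 6 min each.
bus_observations = [12.0] * 20   # each phone logged as a separate "vehicle"
car_observations = [6.0] * 5

all_obs = bus_observations + car_observations
naive_mean = sum(all_obs) / len(all_obs)          # device-level average
vehicle_mean = (12.0 + 5 * 6.0) / 6               # true per-vehicle average

print(naive_mean)    # 10.8 min, pulled towards the bus
print(vehicle_mean)  # 7.0 min
```

The device-level estimate is inflated by more than 50% here, which is the kind of distortion the bus-identification matching in the paper is designed to expose.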
Abstract:
Loop detectors are the oldest and most widely used traffic data source; on urban arterials, they are mainly installed for signal control. Recently, state-of-the-art Bluetooth MAC Scanners (BMS) have captured significant interest among stakeholders for area-wide traffic monitoring. Loop detectors provide flow, a fundamental traffic parameter, whereas a BMS provides individual vehicle travel times between BMS stations. Hence, these two data sources complement each other, and if integrated should increase the accuracy and reliability of traffic state estimation. This paper proposes a model that integrates loop and BMS data for seamless travel time and density estimation on urban signalised networks. The proposed model is validated using both real and simulated data, and the results indicate that its accuracy is over 90%.
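The complementarity of the two sources can be illustrated with the fundamental traffic relation q = k · v: loops supply flow q, BMS-matched travel times supply speed v, and density k follows. The numbers below are invented, and this one-line fusion is only a sketch of the idea, not the paper's estimation model:

```python
# Invented link data: loops give flow, BMS matching gives travel time.
link_length_km = 1.2
flow_veh_h = 900.0            # q, from the loop detector
travel_time_h = 4.0 / 60.0    # 4 minutes, from BMS station-to-station matching

speed_km_h = link_length_km / travel_time_h    # v = L / t  -> 18 km/h
density_veh_km = flow_veh_h / speed_km_h       # k = q / v  -> 50 veh/km
print(speed_km_h, density_veh_km)
```

Neither source alone yields density: loops lack speed over the link, and BMS lacks flow, which is why the integration improves traffic state estimation.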
Abstract:
Using Media-Access-Control (MAC) addresses for data collection and tracking is a capable and cost-effective approach, as traditional methods such as surveys and video surveillance have numerous drawbacks and limitations. Positioning cell phones via the Global System for Mobile communications has been considered an attack on people's privacy. A MAC address merely keeps a unique log of a WiFi- or Bluetooth-enabled device for connecting to another device, and so carries no comparable potential for privacy infringement. This paper presents the use of a MAC address data collection approach for analysing the spatio-temporal dynamics of humans in terms of shared space utilisation. The paper first discusses the critical challenges and key benefits of MAC address data as a tracking technology for monitoring human movement. Proximity-based MAC address tracking is postulated as an effective methodology for analysing the complex spatio-temporal dynamics of human movements in shared zones such as lounge and office areas. A case study of a university staff lounge area is described in detail, and the results indicate a significant added value of the methodology for human movement tracking. From the MAC address data in the study area, clear statistics such as staff utilisation frequency, utilisation peak periods, and staff time spent are obtained. The analyses also reveal staff socialising profiles in terms of group and solo gathering. The paper concludes with a discussion of why MAC address tracking offers significant advantages for tracking human behaviour in terms of shared space utilisation with respect to other, more prominent technologies, and outlines some of its remaining deficiencies.
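The utilisation statistics mentioned (visit frequency and time spent per person) reduce to simple aggregation over a detection log. A minimal sketch with an invented log, where each record is (MAC, entry minute, exit minute):

```python
from collections import defaultdict

# Invented proximity-detection log for a shared lounge area.
log = [
    ("aa:01", 540, 560),   # device aa:01, 09:00-09:20
    ("aa:01", 720, 750),   # same device again, 12:00-12:30
    ("bb:02", 545, 600),
    ("cc:03", 725, 735),
]

visits = defaultdict(int)        # utilisation frequency per device
dwell_min = defaultdict(int)     # total time spent per device, minutes
for mac, t_in, t_out in log:
    visits[mac] += 1
    dwell_min[mac] += t_out - t_in

print(visits["aa:01"], dwell_min["aa:01"])   # 2 visits, 50 minutes total
```

Peak periods and co-presence (group vs. solo gathering) follow the same pattern, binning detections by time slot and counting overlapping intervals.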
Abstract:
We consider the following problem: members of a dynamic group retrieve their encrypted data from an untrusted server based on keywords, without any loss of data confidentiality or members' privacy. In this paper, we investigate common secure indices for conjunctive keyword-based retrieval over encrypted data, and construct an efficient scheme from the dynamic accumulator of Wang et al., the Nyberg combinatorial accumulator and the public-key encryption system of Kiayias et al. The proposed scheme is trapdoorless and keyword-field free. Its security is proved under the random oracle model and the decisional composite residuosity and extended strong RSA assumptions.
Abstract:
In this paper we describe the design of DNA Jewellery, which is a wearable tangible data representation of personal DNA profile data. An iterative design process was followed to develop a 3D form-language that could be mapped to standard DNA profile data, with the aim of retaining readability of data while also producing an aesthetically pleasing and unique result in the area of personalized design. The work explores design issues with the production of data tangibles, contributes to a growing body of research exploring tangible representations of data and highlights the importance of approaches that move between technology, art and design.
Abstract:
The motion response of marine structures in waves can be studied using finite-dimensional linear-time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and also in initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search of approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy in force-to-motion models, which are usually the ultimate modelling objective. The illustration examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
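The constrained estimation algorithms summarised above are implemented in the authors' MATLAB toolbox; purely as an illustration of the underlying frequency-domain idea, the following Python sketch fits a first-order model H(s) = b0 / (a1·s + 1) to synthetic frequency-response data with a Levy-style linearised least-squares step. All values are invented, and real hydrodynamic models are of higher order with constrained structure:

```python
import numpy as np

# Synthetic frequency-response "data" from a known first-order system.
w = np.linspace(0.1, 10.0, 50)           # rad/s
H = 2.0 / (0.5j * w + 1)                 # true a1 = 0.5, b0 = 2.0

# Linearise H = b0/(a1*jw + 1)  as  a1*(jw)*H - b0 = -H  and solve
# the real/imaginary-stacked least-squares problem for [a1, b0].
A = np.column_stack([1j * w * H, -np.ones_like(w)])
rhs = -H
M = np.vstack([A.real, A.imag])
r = np.concatenate([rhs.real, rhs.imag])
a1, b0 = np.linalg.lstsq(M, r, rcond=None)[0]
print(a1, b0)    # recovers approximately 0.5 and 2.0
```

The toolbox's contribution lies beyond this core step: enforcing model-structure and parameter constraints (e.g. stability and passivity of the radiation terms) during the search.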
Abstract:
Process models specify behavioral aspects by describing ordering constraints between tasks which must be accomplished to achieve envisioned goals. Tasks usually exchange information by means of data objects, i.e., by writing information to and reading information from data objects. A data object can be characterized by its states and allowed state transitions. In this paper, we propose a notion which checks conformance of a process model with respect to the data objects that its tasks access. This new notion can be used to tell whether, in every execution of a process model, each time a task needs to access a data object in a particular state, the data object is guaranteed to be in the expected state or able to reach the expected state, and hence whether the process model can achieve its goals.
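At the heart of such a conformance check is reachability over the data object's allowed state transitions. A minimal sketch with an invented object life cycle (this simplification ignores the interplay with the process model's ordering constraints, which the paper's notion covers):

```python
# Invented life cycle of a data object, e.g. a purchase request.
transitions = {
    "created":  {"approved", "rejected"},
    "approved": {"archived"},
    "rejected": set(),
    "archived": set(),
}

def reachable(src: str, target: str) -> bool:
    """Can the data object move from state `src` to state `target`?"""
    seen, stack = set(), [src]
    while stack:
        state = stack.pop()
        if state == target:
            return True
        if state not in seen:
            seen.add(state)
            stack.extend(transitions.get(state, ()))
    return False

print(reachable("created", "archived"))   # True: an execution can succeed
print(reachable("rejected", "archived"))  # False: a conformance violation
```

A task expecting the object in state "archived" after it was "rejected" can never execute, which is exactly the kind of goal-blocking mismatch the proposed notion detects.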
Abstract:
Presentation by Tony Beatton, QUT Business School, at Managing your research data seminar, 2012
Abstract:
Presentation by Siobhann McCafferty, Institute for Future Environments, at Managing your research data seminar, 2012