348 results for Business Intelligence, ETL, Data Warehouse, Metadati, Reporting
Abstract:
This study examined elementary school teachers’ knowledge of their legislative and policy-based reporting duties with respect to child sexual abuse. Data were collected from 470 elementary school teachers from urban and rural government and nongovernment schools in 3 Australian states, which at the time of the study had 3 different legislative reporting duties for teachers. Teachers completed the 8-part Teacher Reporting Questionnaire (TRQ). Multinomial logistic regression analysis was used to determine factors associated with (a) teachers’ legislation knowledge and (b) teachers’ policy knowledge. Teachers with higher levels of knowledge had a combination of pre- and in-service training about child sexual abuse and more positive attitudes toward reporting, held administration positions in their school, and had reported child sexual abuse at least once during their teaching career. They were also more likely to work in the state with the strongest legislative reporting duty, which had been in place the longest.
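The abstract names multinomial logistic regression as the analysis. As a minimal sketch of how that model maps predictors to outcome-category probabilities, the snippet below uses invented, purely illustrative coefficients (not the study's estimates) for two hypothetical predictors: combined pre-/in-service training and attitude toward reporting.

```python
import math

def softmax(scores):
    """Convert raw linear scores into class probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical coefficients for three knowledge categories over two
# predictors: x1 = has both pre- and in-service training (0/1),
# x2 = standardised attitude-toward-reporting score.
# These numbers are illustrative only, not the study's estimates.
COEFS = {
    "low":    (0.0,  0.0, 0.0),   # reference category
    "medium": (-0.5, 0.8, 0.6),
    "high":   (-1.5, 1.6, 1.2),
}

def knowledge_probabilities(trained, attitude):
    labels = list(COEFS)
    scores = [b0 + b1 * trained + b2 * attitude
              for (b0, b1, b2) in COEFS.values()]
    return dict(zip(labels, softmax(scores)))

probs = knowledge_probabilities(trained=1, attitude=1.0)
```

With these made-up coefficients, a trained teacher with a positive attitude lands in the "high"-knowledge category with the largest probability, which is the qualitative pattern the study reports.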
Abstract:
Between 2001 and 2005, the US airline industry faced financial turmoil while the European airline industry entered a period of substantive deregulation. Consequently, this opened up opportunities for low-cost carriers to become more competitive in the market. To assess airline performance and identify the sources of efficiency in the immediate aftermath of these events, we employ a bootstrap data envelopment analysis truncated regression approach. The results suggest that at the time the mainstream airlines needed to significantly reorganize and rescale their operations to remain competitive. In the second-stage analysis, the results indicate that private ownership, status as a low-cost carrier, and improvements in weight load contributed to better organizational efficiency.
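The first stage of the approach above is data envelopment analysis (DEA). In general, each unit's efficiency score is the solution of a linear program; in the special case of one input and one output under constant returns to scale, the score reduces to the unit's output/input ratio divided by the best ratio in the sample. The sketch below uses that special case on invented airline data, plus a naive bootstrap of the mean score (the full Simar–Wilson procedure uses a smoothed bootstrap and a truncated regression, neither of which is reproduced here).

```python
import random

# Toy data: (input, output) per airline, e.g. operating cost vs.
# passenger-km. Values are illustrative, not taken from the study.
airlines = {
    "A": (2.0, 2.0),
    "B": (4.0, 2.0),
    "C": (3.0, 3.0),
}

def ccr_efficiency(data):
    """CRS efficiency scores. With one input and one output, the DEA
    linear program reduces to each unit's output/input ratio divided
    by the best ratio in the sample."""
    ratios = {k: y / x for k, (x, y) in data.items()}
    best = max(ratios.values())
    return {k: r / best for k, r in ratios.items()}

def bootstrap_mean_efficiency(scores, reps=1000, seed=0):
    """Naive bootstrap of the mean score, for illustration only."""
    rng = random.Random(seed)
    vals = list(scores.values())
    means = []
    for _ in range(reps):
        sample = [rng.choice(vals) for _ in vals]
        means.append(sum(sample) / len(sample))
    return sum(means) / reps

scores = ccr_efficiency(airlines)
```

Airline B produces the same output as A from twice the input, so it scores 0.5 against the efficient frontier defined by A and C.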
Abstract:
The application of artificial intelligence in finance is a relatively new area of research. This project employed artificial neural networks (ANNs) that use both fundamental and technical inputs to predict future prices of widely held Australian stocks, and used these predicted prices for stock portfolio selection over a long investment horizon. The research involved the creation and testing of a large number of possible network configurations and drew conclusions about ANN architectures and their overall suitability for the purpose of stock portfolio selection.
Abstract:
Occupational exposures of healthcare workers tend to occur because of inconsistent compliance with standard precautions. In addition, the incidence of occupational exposure is underreported among operating room personnel. The purpose of this project was to develop national estimates for compliance with standard precautions and for occupational exposure reporting practices among operating room nurses in Australia. Data were obtained using a 96-item self-report survey. The Standard Precautions and Occupational Exposure Reporting survey was distributed anonymously to 500 members of the Australian College of Operating Room Nurses. The Health Belief Model was the theoretical framework used to guide the analysis of data. Data were analysed to examine relationships between specific constructs of the Health Belief Model and to identify factors that might influence operating room nurses to undertake particular health behaviours to comply with standard precautions and occupational exposure reporting. Results of the study revealed compliance rates of 55.6% for double gloving, 59.1% for announcing sharps transfers, 71.9% for using a hands-free sharps pass technique, 81.9% for no needle recapping and 92.0% for adequate eye protection. Although 31.6% of respondents indicated sustaining an occupational exposure in the past 12 months, only 82.6% of them reported their exposures. The results of this study provide national estimates of compliance with standard precautions and occupational exposure reporting among operating room nurses in Australia. These estimates can now be used to support the development and implementation of measures to improve practices in order to reduce occupational exposures and, ultimately, disease transmission rates among this high-risk group.
Abstract:
Social media platforms are of interest to interactive entertainment companies for a number of reasons. They can operate as a platform for deploying games and as a tool for communicating with customers and potential customers, and they can provide analytics on how players use a game, giving immediate feedback on design decisions and changes. However, as ongoing research with the Australian developer Halfbrick demonstrates, the use of these platforms is not universally seen as a positive. The incorporation of Big Data into already innovative development practices has the potential to cause tension between designers, while the platform also challenges the traditional business model: it relies on micro-transactions rather than an up-front payment, and it demands a substantial shift in design philosophy to take advantage of the social aspects of platforms such as Facebook.
Abstract:
In this paper we focus specifically on explaining variation in core human values, and suggest that individual differences in values can be partially explained by personality traits and the perceived ability to manage emotions in the self and others (i.e. trait emotional intelligence). A sample of 209 university students was used to test hypotheses regarding several proposed direct and indirect relationships between personality traits, trait emotional intelligence and values. Consistent with the hypotheses, Harm Avoidance and Novelty Seeking were found to directly predict Hedonism, Conformity, and Stimulation. Harm Avoidance was also found to indirectly predict these values through the mediating effects of key subscales of trait emotional intelligence. Novelty Seeking was not found to be an indirect predictor of values. Results have implications for our understanding of the relationship between personality, trait emotional intelligence and values, and suggest a common basis in terms of approach and avoidance pathways.
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. WebPut also extends effective Information Extraction (IE) methods to formulate web search queries that are capable of retrieving missing values with high accuracy. It employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques.
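The confidence-based greedy scheduling described above can be sketched as follows. Each missing cell has candidate imputation queries with confidence scores; the scheduler repeatedly issues the query with the highest confidence among all remaining cells. All cell names, query identifiers, and scores below are hypothetical (the real system estimates confidences from web search results and may revise them as values are filled in).

```python
# Each missing cell maps to candidate imputation queries paired with
# confidence scores. Fixed here for illustration; WebPut estimates
# and updates these scores dynamically.
candidates = {
    ("row1", "city"):    [("q-city-a", 0.90), ("q-city-b", 0.60)],
    ("row2", "zip"):     [("q-zip-a", 0.75)],
    ("row3", "country"): [("q-country-a", 0.95), ("q-country-b", 0.40)],
}

def greedy_schedule(cands):
    """Order missing values so that the most confident imputation
    query is issued first (a simplified sketch of the scheduler)."""
    remaining = dict(cands)
    schedule = []
    while remaining:
        # Pick the cell whose best candidate query has the highest
        # confidence, then record that query and drop the cell.
        cell = max(remaining,
                   key=lambda c: max(conf for _, conf in remaining[c]))
        query, conf = max(remaining.pop(cell), key=lambda qc: qc[1])
        schedule.append((cell, query, conf))
    return schedule

plan = greedy_schedule(candidates)
```

The resulting plan issues the 0.95-confidence country query first, so high-confidence fills happen early and (in the full system) can raise the confidence of later queries.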
Abstract:
Talk of Big Data seems to be everywhere. Indeed, the apparently value-free concept of ‘data’ has seen a spectacular broadening of popular interest, shifting from the dry terminology of labcoat-wearing scientists to the buzzword du jour of marketers. In the business world, data is increasingly framed as an economic asset of critical importance, a commodity on a par with scarce natural resources (Backaitis, 2012; Rotella, 2012). It is social media that has most visibly brought the Big Data moment to media and communication studies, and beyond it, to the social sciences and humanities. Social media data is one of the most important areas of the rapidly growing data market (Manovich, 2012; Steele, 2011). Massive valuations are attached to companies that directly collect and profit from social media data, such as Facebook and Twitter, as well as to resellers and analytics companies like Gnip and DataSift. The expectation attached to the business models of these companies is that their privileged access to data and the resulting valuable insights into the minds of consumers and voters will make them irreplaceable in the future. Analysts and consultants argue that advanced statistical techniques will allow the detection of ongoing communicative events (natural disasters, political uprisings) and the reliable prediction of future ones (electoral choices, consumption)...
Abstract:
Public health research consistently demonstrates the salience of neighbourhood as a determinant of both health-related behaviours and outcomes across the human life course. This paper reports findings from a mixed-methods Brisbane-based study that explores how mothers with primary school children from both high and low socioeconomic position (SEP) suburbs use the local urban environment for the purpose of physical activity. First, we present findings from an innovative methodology that uses the geographic information systems (GIS) embedded in social media platforms on mobile phones to track locations, resource use, distances travelled, and modes of transport of the families in real time; second, we report on qualitative data that provides insight into the reasons for differential use of the environment by the two groups. Spatial/mapping and statistical data showed that while the mothers from both groups demonstrated similar daily routines, the mothers from the high SEP suburb engaged in higher levels of physical activity, travelled less frequently and less distance by car, and walked more for transport. The qualitative data revealed differences in the psychosocial processes and characteristics of the households and neighbourhoods of the respective groups, with mothers in the lower SEP suburb reporting more stress, higher conflict, and lower quality relationships with neighbours.
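Computing distances travelled from GPS fixes, as in the tracking methodology above, typically reduces to summing great-circle distances between consecutive fixes. A minimal sketch using the standard haversine formula, on an invented trace near Brisbane:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes
    (standard haversine formula, Earth radius 6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def trip_distance_km(fixes):
    """Total distance along a sequence of (lat, lon) fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Hypothetical trace near Brisbane (approx. -27.47, 153.03).
trace = [(-27.470, 153.020), (-27.470, 153.030), (-27.480, 153.030)]
total = trip_distance_km(trace)
```

The two legs of roughly 0.01 degrees each come to about 2.1 km in total, the kind of per-trip figure the study would aggregate per family.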
Abstract:
This work considers the problem of building high-fidelity 3D representations of the environment from sensor data acquired by mobile robots. Multi-sensor data fusion allows for more complete and accurate representations, and for more reliable perception, especially when different sensing modalities are used. In this paper, we propose a thorough experimental analysis of the performance of 3D surface reconstruction from laser and mm-wave radar data using Gaussian Process Implicit Surfaces (GPIS), in a realistic field robotics scenario. We first analyse the performance of GPIS using raw laser data alone and raw radar data alone, respectively, with different choices of covariance matrices and different resolutions of the input data. We then evaluate and compare the performance of two different GPIS fusion approaches. The first, state-of-the-art approach directly fuses raw data from laser and radar. The alternative approach proposed in this paper first computes an initial estimate of the surface from each single source of data, and then fuses these two estimates. We show that this method outperforms the state of the art, especially in situations where the sensors react differently to the targets they perceive.
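The proposed fusion approach first builds one surface estimate per sensor and then combines them. A standard way to combine two independent Gaussian estimates of the same quantity is precision-weighted (inverse-variance) fusion; the sketch below shows only that fusion step on invented numbers, not GPIS itself.

```python
def fuse(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian
    estimates of the same quantity (e.g. surface height at one
    grid point, estimated separately from laser and from radar)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Laser: accurate (low variance); radar: noisier but robust to
# conditions that blind the laser. Numbers are illustrative only.
laser = (1.00, 0.01)   # (mean height, variance)
radar = (1.20, 0.25)
mu, var = fuse(*laser, *radar)
```

The fused mean stays close to the low-variance laser estimate, and the fused variance is smaller than either input, which is why combining per-sensor estimates can beat either sensor alone.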
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility to improve laser-based perception applications by anticipating situations when laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial if conservative decisions are the most appropriate.
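The classification step described above, a kNN vote over visual image-quality features, can be sketched as follows. The feature pairs and labels are invented for illustration; the paper's actual quality features are not specified in this abstract.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k
    nearest training examples (Euclidean distance)."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training set: (contrast, sharpness) image-quality
# features labelled by whether the co-registered laser scan was
# affected by smoke. Values are invented for illustration.
train = [
    ((0.90, 0.80), "clear"), ((0.80, 0.90), "clear"), ((0.85, 0.70), "clear"),
    ((0.20, 0.30), "smoke"), ((0.30, 0.20), "smoke"), ((0.25, 0.35), "smoke"),
]

label = knn_predict(train, (0.22, 0.28), k=3)
```

A low-contrast, low-sharpness query lands among the "smoke" examples, so the robot could conservatively discount the corresponding laser scan, matching the paper's finding that the method pays off when conservative decisions are appropriate.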
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example of application is given with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This leads to decisions of advantageously switching between data sources (e.g. using infrared images instead of visual images).
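The abstract does not specify the novel metric, so as a generic stand-in, the sketch below uses intensity variance (contrast) as a crude quality proxy: airborne dust or smoke scatters light, compresses the intensity range, and lowers contrast, which can trigger a switch to another data source.

```python
def contrast(pixels):
    """Variance of pixel intensities: a crude image-quality proxy.
    (A stand-in for the paper's metric, which is not specified in
    the abstract.)"""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def degraded(img, reference, threshold=0.5):
    """Flag a frame whose contrast falls below a fraction of a
    clear reference frame's contrast, e.g. to switch from visual
    to infrared imagery."""
    return contrast(img) < threshold * contrast(reference)

# Invented 1-D "images": full intensity range vs. washed out by smoke.
clear_img = [10, 200, 30, 180, 50, 220]
smoky_img = [110, 140, 120, 135, 125, 130]
flag = degraded(smoky_img, clear_img)
```

Flagging the frame before running monocular SLAM mirrors the paper's use of the metric to anticipate degraded pose estimates rather than detect them after the fact.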
Abstract:
This document describes large, accurately calibrated and time-synchronised datasets, gathered in controlled environmental conditions, using an unmanned ground vehicle equipped with a wide variety of sensors. These sensors include: multiple laser scanners, a millimetre wave radar scanner, a colour camera and an infra-red camera. Full details of the sensors are given, as well as the calibration parameters needed to locate them with respect to each other and to the platform. This report also specifies the format and content of the data, and the conditions in which the data have been gathered. Data were collected in two different vehicle situations: static and dynamic. The static tests consisted of sensing a fixed 'reference' terrain, containing simple known objects, from a motionless vehicle. For the dynamic tests, data were acquired from a moving vehicle in various environments, mainly rural, including an open area, a semi-urban zone and a natural area with different types of vegetation. For both categories, data have been gathered in controlled environmental conditions, which included the presence of dust, smoke and rain. Most of the environments involved were static, except for a few specific datasets that involve the presence of a walking pedestrian. Finally, this document presents illustrations of the effects of adverse environmental conditions on sensor data, as a first step towards reliability and integrity in autonomous perceptual systems.
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. We provide a full description of the system used for data collection and of the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain.