Abstract:
This thesis explored the knowledge and reasoning of young children in solving novel statistical problems, and the influence of problem context and design on their solutions. It found that young children's statistical competencies are underestimated, and that problem design and context facilitated children's application of a wide range of knowledge and reasoning skills, none of which had been taught. A qualitative design-based research method, informed by the Models and Modeling perspective (Lesh & Doerr, 2003), underpinned the study. Data modelling activities incorporating picture story books were used to contextualise the problems. Children applied real-world understanding to problem solving, including attribute identification, categorisation and classification skills. Intuitive and metarepresentational knowledge, together with inductive and probabilistic reasoning, was used to make sense of data, and an emerging awareness of statistical variation and informal inference was evident.
Abstract:
BACKGROUND: Observational data suggested that supplementation with vitamin D could reduce the risk of infection, but trial data are inconsistent. OBJECTIVE: We aimed to examine the effect of oral vitamin D supplementation on antibiotic use. DESIGN: We conducted a post hoc analysis of data from the pilot D-Health trial, a randomized trial carried out in a general community setting between October 2010 and February 2012. A total of 644 Australian residents aged 60-84 y were randomly assigned to receive monthly doses of placebo (n = 214), 30,000 IU (n = 215), or 60,000 IU (n = 215) of oral cholecalciferol for ≤12 mo. Antibiotics prescribed during the intervention period were ascertained by linkage with pharmacy records through the national health insurance scheme (Medicare Australia). RESULTS: People randomly assigned to 60,000 IU cholecalciferol had a nonsignificant 28% lower risk of having antibiotics prescribed at least once than did people in the placebo group (RR: 0.72; 95% CI: 0.48, 1.07). In analyses stratified by age, subjects aged ≥70 y showed a significant reduction in antibiotic use in the high-dose vitamin D group compared with the placebo group (RR: 0.53; 95% CI: 0.32, 0.90), whereas there was no effect in participants <70 y old (RR: 1.07; 95% CI: 0.58, 1.97) (P-interaction = 0.1). CONCLUSION: Although this was a post hoc analysis and the overall result was statistically nonsignificant, the trial lends some support to the hypothesis that supplementation with 60,000 IU vitamin D/mo is associated with a lower risk of infection, particularly in older adults. The trial was registered at the Australian New Zealand Clinical Trials Registry (anzctr.org.au) as ACTRN12609001063202.
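For readers unpacking the risk statistics quoted above, a minimal sketch of how a relative risk such as 0.72 (95% CI: 0.48, 1.07) is computed from 2×2 trial counts via the standard log-RR method; the event counts used here are placeholders, since the abstract reports only the ratios:

```python
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Relative risk with a Wald-type 95% CI on the log scale."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR) for a 2x2 table (Katz method)
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Placeholder event counts, NOT the trial's actual data
# (only the group sizes n = 215 and n = 214 come from the abstract)
print(relative_risk(events_tx=60, n_tx=215, events_ctrl=83, n_ctrl=214))
```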
Abstract:
Situation awareness, one's understanding of ‘what is going on’, is a critical commodity for road users. Although the concept has received much attention in the driving context, situation awareness in vulnerable road users, such as cyclists, remains unexplored. This paper presents the findings from an exploratory on-road study of cyclist situation awareness, the aim of which was to explore how cyclists develop situation awareness, what their situation awareness comprises, and what the causes of degraded cyclist situation awareness may be. Twenty participants cycled a pre-defined urban on-road study route. A range of data were collected, including verbal protocols and forward- and rear-facing scene video, and a network analysis procedure was used to describe and assess cyclist situation awareness. The analysis produced a number of key findings, including the potential for cyclists' awareness of other road users to be degraded by the additional situation awareness and decision-making demands placed on them in certain road situations. Strategies for improving cyclists' situation awareness are discussed.
Abstract:
Pavlovian fear conditioning is a robust technique for examining behavioral and cellular components of fear learning and memory. In fear conditioning, the subject learns to associate a previously neutral stimulus with an inherently noxious co-stimulus. The learned association is reflected in the subject's behavior upon subsequent re-exposure to the previously neutral stimulus or the training environment. Using fear conditioning, investigators can obtain a large amount of data describing multiple aspects of learning and memory. In a single test, researchers can evaluate functional integrity in fear circuitry, which is both well characterized and highly conserved across species. Additionally, the availability of sensitive and reliable automated scoring software makes fear conditioning amenable to high-throughput experimentation in the rodent model; thus, this model of learning and memory is particularly useful for pharmacological and toxicological screening. Because fear circuitry is conserved across species, data from Pavlovian fear conditioning are highly translatable to human models. We describe the equipment and techniques needed to perform fear conditioning and to analyze the resulting data. We provide two examples of fear conditioning experiments, one in rats and one in mice, and the types of data that can be collected in a single experiment. © 2012 Springer Science+Business Media, LLC.
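As an illustration of the automated scoring the abstract refers to, a minimal sketch that scores "freezing" from a per-frame motion index, using the common convention that freezing is sub-threshold motion sustained for a minimum bout length; the threshold and bout duration here are assumptions, not the defaults of any particular scoring package:

```python
import numpy as np

def percent_freezing(motion, fps=30, threshold=10.0, min_bout_s=1.0):
    """Score freezing from a per-frame motion index: frames belong to a
    freezing bout when motion stays below `threshold` for at least
    `min_bout_s` seconds. Threshold and bout length are illustrative."""
    still = np.asarray(motion) < threshold
    min_frames = int(min_bout_s * fps)
    frozen = np.zeros(still.size, dtype=bool)
    start = None
    for i, s in enumerate(still):
        if s and start is None:
            start = i                          # candidate bout begins
        elif not s and start is not None:
            if i - start >= min_frames:
                frozen[start:i] = True         # bout long enough: count it
            start = None
    if start is not None and still.size - start >= min_frames:
        frozen[start:] = True                  # bout still running at record end
    return 100.0 * frozen.mean()
```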
Abstract:
A review of literature on the role of emergency nurses in Indonesia revealed a dearth of research. Anecdotal evidence suggests a lack of clarity in role definition, which has led to uncertainty and role ambiguity. Despite advances in the development of specialist nursing roles in Indonesia, that of the emergency nurse remains unclear. This study explored the role of nurses working in emergency care services in three general hospitals in West Java, Indonesia. The theoretical framework is grounded in Charmaz's constructivist grounded theory. Data collection methods were observation, in-depth interviews and interrogation of related documents. Phase one of data collection involved 74 h of observation and interviews with 35 nurses working in the three ED settings. For the purposes of theoretical sampling, a second phase of data collection was conducted, involving a second interview with eight participants from the three EDs. Interviews were also undertaken with three key informants from nursing management at the three hospitals, and with key informants from the Indonesian Nurses Association; the Directorate of Nursing, Ministry of Health; and the organization for ED nurses. Data analysis drew on Charmaz's constructivist approach and the concepts of simultaneous data collection and analysis, constant comparison, coding, and theoretical sampling. The analysis generated four theoretical concepts that characterized the role of the emergency nurse: An arbitrary scope of practice, Struggling for recognition, Learning on the job and Looking to better practice. These concepts provided analytical direction for an exploration of the clinical and political dimensions of the role of the emergency nurse in Indonesia.
Abstract:
Objective: This study investigated issues arising from pre-admission to post-discharge for people over the age of 65 in Toowoomba, Queensland, admitted to an acute facility. This paper concentrates on a significant concern that emerged from the large amount of data collected during the project: the role of the nurse in the continuum of health care involving elderly people. Method: The study involved a multi-site, multi-agency and multi-method (qualitative and quantitative) approach. Data were collected from regional service providers, the Department of Health and Aged Care (DHAC), the Australian Bureau of Statistics (ABS), Home and Community Care (HACC), the Aged Care Assessment Team (ACAT), elderly people who had been discharged from regional hospitals and their carers, residents of regional aged care facilities, area health professionals and elderly regional hospital inpatients. Results: The data indicated that nurses in this provincial area currently play a limited role in pre-admission planning, being mostly concerned with elective surgery, especially joint replacements. While nurses deliver the majority of care during hospitalisation, they do not appear to be cognizant of the needs of the elderly regarding post-acute discharge. Conclusion: The recent introduction of the model of nurse case management in the acute sector appears to be a positive development that will streamline and optimise the health care of the elderly across the continuum in the Toowoomba area. The paper recommends strategies, such as discharge liaison nurses based in Emergency Departments and the expansion of the nurse case management role, which would optimise care for the elderly person at the interface of care.
Abstract:
The travel industry has come to rely heavily on information and communication technologies to facilitate relations with consumers. Compiling consumer data profiles has become easier and it is generally thought that this has led to an increase in consumers' privacy concerns, which may have an adverse impact on their willingness to purchase online. Three specific aspects of privacy that have received attention from researchers are unauthorized secondary use of data, invasion of privacy, and errors. A study was undertaken to examine the effects of these factors on prior purchase of travel services via the Internet and future purchase probability. No evidence was found to indicate that such privacy concerns affect online purchase behavior within the travel industry. Managerial implications are discussed.
Abstract:
Countless studies have stressed the importance of social identity, particularly its role in various organizational outcomes, yet questions remain as to how identities initially develop, shift and change based on the configuration of multiple, pluralistic relationships grounded in an organizational setting. The interactive model of social identity formation, recently proposed to explain the internalization of shared norms and values – a process critical to identity formation – has not received empirical examination. We analyzed multiple sources of data from nine nuclear professionals over three years to understand the construction of social identity in new entrants to an organization. Informed by our data analyses, we found support for the interactive model, and found that age and level of experience influenced whether newcomers took an inductive or deductive route to the internalization of group norms and values. This study represents an important contribution to the study of social identity and the process by which identities are formed, particularly under conditions of duress or significant organizational disruption.
Abstract:
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories, and ultimately in clinical laboratory settings, require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low-multiplex analyses as well as high-multiplex approaches in which software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT). The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range of variation in each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of, and need for, an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories, and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
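A minimal sketch of how the quoted pass/fail thresholds could be applied to replicate injections of one monitored analyte; treating "RT drift" as the retention-time range across injections is an assumption, as the abstract does not define the term:

```python
import numpy as np

def ssp_check(peak_area, peak_width, rt_min):
    """Evaluate one analyte's replicate injections against the pass/fail
    criteria quoted above (inputs are per-injection measurement arrays;
    rt_min is retention time in minutes)."""
    cv = lambda x: np.std(x, ddof=1) / np.mean(x)
    metrics = {
        "peak_area_cv":  (cv(peak_area), 0.15),           # CV < 0.15
        "peak_width_cv": (cv(peak_width), 0.15),          # CV < 0.15
        "rt_sd_min":     (np.std(rt_min, ddof=1), 0.15),  # SD(RT) < 0.15 min
        "rt_drift_min":  (np.ptp(rt_min), 0.5),           # drift < 0.5 min (assumed: range)
    }
    return {name: (value, value < limit) for name, (value, limit) in metrics.items()}

# Illustrative replicate measurements, not data from the study
rt = np.array([21.90, 21.94, 21.88, 21.97, 22.01])
areas = np.array([1.00e6, 1.10e6, 0.95e6, 1.05e6, 1.02e6])
widths = np.array([12.1, 12.8, 11.9, 12.4, 12.6])
print(ssp_check(areas, widths, rt))
```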
Abstract:
Diagnostics of rolling element bearings have traditionally been developed for constant operating conditions, and sophisticated techniques, like Spectral Kurtosis or Envelope Analysis, have proven their effectiveness in experimental tests, mainly conducted on small-scale laboratory test-rigs. Algorithms have been developed for the digital signal processing of data collected at constant speed and bearing load, with a few exceptions allowing only small fluctuations of these quantities. Owing to the spread of condition-based maintenance in many industrial fields, in recent years a need has emerged for more flexible algorithms compatible with highly variable operating conditions, such as acceleration/deceleration transients. This paper analyzes the problems related to significant speed and load variability, discussing in detail their effect on bearing damage symptoms, and proposes solutions to adapt existing algorithms to cope with this new challenge. In particular, the paper will i) discuss the implications of variable speed for the applicability of diagnostic techniques, ii) address quantitatively the effects of load on the characteristic frequencies of damaged bearings and iii) present a new approach for bearing diagnostics in variable conditions, based on envelope analysis. The research is based on experimental data obtained using artificially damaged bearings installed on a full-scale test-rig, equipped with an actual train traction system and reproducing operation on a real track, including all the environmental noise arising from track irregularity and the electrical disturbances of such a harsh application.
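For context, a minimal sketch of the classical constant-speed envelope analysis that the paper takes as its starting point; the band-pass edges are placeholders that would normally be chosen around a structural resonance excited by the bearing impacts:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs, band=(2000.0, 8000.0)):
    """Classical envelope analysis: band-pass the vibration signal around a
    resonance, demodulate with the Hilbert envelope, and return the envelope
    spectrum in which bearing fault frequencies appear as peaks."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)              # isolate the resonance band
    env = np.abs(hilbert(xf))           # demodulated envelope
    env -= env.mean()                   # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1 / fs)
    return freqs, spec
```

The bearing's characteristic fault frequencies (BPFO, BPFI, etc.) would then be sought as peaks in the returned spectrum; under variable speed, as the paper discusses, these frequencies smear, and the analysis must be adapted, for example by resampling into the angular domain before demodulation.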
Abstract:
Due to the demand for better and deeper analysis in sports, organizations (both professional teams and broadcasters) are looking to use spatiotemporal data, in the form of player tracking information, to obtain an advantage over their competitors. However, due to the large volume of data, its unstructured nature, and the lack of associated team activity labels (e.g. strategic/tactical), effective and efficient strategies to deal with such data have yet to be deployed. A bottleneck restricting such solutions is the lack of a suitable representation (i.e. ordering of players) that is immune to the large number of possible permutations of player orderings, in addition to the high dimensionality of the temporal signal (e.g. a game of soccer lasts for 90 minutes). We leverage a recent method that utilizes a "role representation", as well as a feature reduction strategy that uses a spatiotemporal bilinear basis model, to form a compact spatiotemporal representation. Using this representation, we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data in soccer. Additionally, we show that we can accurately segment a match into distinct game phases and detect highlights (i.e. shots, corners, free-kicks, etc.) completely automatically using a decision-tree formulation.
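A common way to obtain the permutation-immune ordering the abstract refers to is to match players to role template positions frame by frame with the Hungarian algorithm; a minimal sketch under that assumption (the cited method's details may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(player_xy, role_xy):
    """Order one frame's N players by 'role' rather than identity: match each
    player to a role template position (e.g. a formation's mean positions) by
    minimizing total squared distance (Hungarian algorithm). Returns player_xy
    with row i holding the player assigned to role i, so the ordering is
    invariant to which player happens to occupy which position."""
    # cost[i, j] = squared distance between player i and role j
    cost = ((player_xy[:, None, :] - role_xy[None, :, :]) ** 2).sum(axis=2)
    player_idx, role_idx = linear_sum_assignment(cost)
    ordered = np.empty_like(player_xy)
    ordered[role_idx] = player_xy[player_idx]
    return ordered
```

With this per-frame ordering, downstream features become comparable across frames, and a formation can be summarized by the role templates themselves.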
Abstract:
Police-reported crash data are the primary source of crash information in most jurisdictions. However, the definition of serious injury within police-reported data is not consistent across jurisdictions and may not be accurate. With the Australian National Road Safety Strategy targeting the reduction of serious injuries, there is a greater need to assess the accuracy of the methods used to identify these injuries. A possible source of more accurate information on injury severity is hospital data. While other studies have compared police and hospital data to highlight under-reporting in police-reported data, little attention has been given to the accuracy of the methods used by police to identify serious injuries. The current study aimed to assess how accurately serious injuries are identified in police-reported crash data, by comparing the profiles of transport-related injuries in the Queensland Road Crash Database with an aligned sample of data from the Queensland Hospital Admitted Patients Data Collection. Results showed that, while a similar number of traffic injuries were recorded in both data sets, the profiles of these injuries differed by gender, age, location, and road user. The results suggest that the ‘hospitalisation’ severity category used by police may not reflect true hospitalisations in all cases. Further, they highlight the wide variety of severity levels within hospitalised cases that are not captured by the current police-reported definitions. While a data linkage study is required to confirm these results, they highlight that reliance on police-reported serious traffic injury data alone could result in inaccurate estimates of the impact and cost of crashes, and lead to a misallocation of valuable resources.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information: be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
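As a toy illustration of the content-analysis and user-profiling filters described for the first paper, a minimal sketch that ranks incoming tweets; every term list, weight, and account name here is hypothetical:

```python
def score_tweet(text, author, topic_terms, urgency_terms, trusted_authors):
    """Combine a content score (topic relevance plus urgency cues) with a
    user-profile boost for known authoritative accounts. All weights are
    arbitrary placeholders, not values from any of the panel papers."""
    words = set(text.lower().split())
    score = len(words & topic_terms)          # topical relevance
    score += 2 * len(words & urgency_terms)   # urgency cues weighted higher
    if author in trusted_authors:             # authoritative-source boost
        score += 5
    return score

# Hypothetical term lists and accounts for a flood event
topic = {"flood", "evacuation", "sandbags"}
urgent = {"trapped", "help", "injured"}
trusted = {"@emergency_agency"}

stream = [("Help, family trapped by flood water near the bridge", "@resident42"),
          ("Lovely sunny day out here", "@tourist99")]
triaged = sorted(stream, key=lambda t: -score_tweet(*t, topic, urgent, trusted))
```

In practice the top of such a ranking would be capped to the responders' manual-review capacity, with the surviving tweets also feeding the refinement of the keyword list.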
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of the uncertainties of generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly for demanding SHM applications like modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most significant inherent WSN uncertainties on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The results show that the uncertainties of generic WSNs can seriously affect Level 1 OMDI methods utilizing mode shapes. They also show that SHM-oriented WSNs can substantially lessen this impact and recover true structural information without resorting to costly computational solutions.
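A minimal sketch of the contamination step described above, applying generic-WSN pollutants (measurement noise, random sample loss, and a synchronization offset) to a clean, wired acceleration record; the pollutant models and levels are illustrative, not those used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminate(accel, fs, noise_std=0.01, loss_rate=0.02, max_offset_s=0.001):
    """Pollute a clean 1-D acceleration record (sampled at fs Hz) with
    additive noise, held-last-value sample loss, and a channel time offset."""
    x = accel + rng.normal(0.0, noise_std, accel.shape)   # measurement noise
    for i in np.flatnonzero(rng.random(x.shape) < loss_rate):
        x[i] = x[i - 1] if i > 0 else 0.0                 # lost sample: hold last value
    offset = int(rng.uniform(-max_offset_s, max_offset_s) * fs)
    return np.roll(x, offset)                             # synchronization error
```

Running modal identification on both the clean and contaminated versions of each channel then exposes how each pollutant level propagates into the mode shapes used by the Level 1 OMDI methods.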
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Moreover, several optimization techniques are proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques.
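A minimal sketch of the greedy scheduling idea described above, with hypothetical stand-ins for WebPut's query generation and confidence estimation (the abstract does not specify their interfaces):

```python
def greedy_impute(missing_cells, candidate_queries, estimate_confidence):
    """Greedily impute missing values in confidence order: at each step pick
    the (cell, query) pair with the highest estimated confidence, execute it,
    and let the newly filled value inform later confidence estimates.
    `candidate_queries(cell)` and `estimate_confidence(cell, query, filled)`
    are placeholders for WebPut's actual components, not its real API."""
    filled = {}
    remaining = set(missing_cells)
    while remaining:
        cell, query, _ = max(
            ((c, q, estimate_confidence(c, q, filled))
             for c in remaining for q in candidate_queries(c)),
            key=lambda triple: triple[2],
        )
        filled[cell] = query.execute()   # issue the web query, extract the value
        remaining.remove(cell)
    return filled
```

The re-estimation inside the loop is the point of the greedy order: values imputed early, with high confidence, can tighten the search queries for the harder cells that remain.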