955 results for Microarray data
Abstract:
This paper provides an outline of the work undertaken by nurses who participated in the relief effort as members of Australian medical teams during the Sumatra-Andaman earthquake and tsunami response. This profile is contrasted with the information provided by nurses who registered their interest in volunteering to help via the Australian Tsunami Hotline. The paper provides an overview of the skills and background of the nurses who provided information to the hotline and describes the range and extent of experience among this cohort of potential volunteers. These data are compared with nursing workforce data and rates of volunteering within Australia. The paper concludes that further research is necessary to examine the motivations of, and disincentives for, nurses volunteering for overseas (disaster) work, and to develop an improved understanding within the discipline of the skills and experience required of volunteer responders. Further, it is argued that the development of standards for the collection of disaster health volunteer data would assist future responses and provide better tools for developing an improved understanding of disaster volunteering.
Abstract:
Background: Random Breath Testing (RBT) has been a cornerstone of enforcement efforts to deter (as well as apprehend) drink drivers in Queensland (Australia) for decades. However, little published research has examined the relationship between the frequency of RBT activities and subsequent drink driving apprehension rates over time. Aim: This study examined the prevalence of drink driving apprehensions in Queensland over a 12-year period. It was hypothesised that an increase in breath testing rates would produce a corresponding decrease in drink driving apprehension rates over time, reflecting general deterrent effects. Method: The Queensland Police Service provided the RBT data that were analysed. Results: Between 1 January 2000 and 31 December 2011, 35,082,386 random breath tests (both mobile and stationary) were conducted in Queensland, resulting in 248,173 individuals being apprehended for drink driving offences. A total of 342,801 offences were recorded during this period, representing an intercept rate of .96. Of these offences, 276,711 (80.72%) were recorded against males and 66,024 (19.28%) against females. The most common drink driving offence fell between the 0.05 and 0.08 BAC limits. The largest proportion of offences was detected on weekends, with Saturday (27.60%) the most common drink driving night, followed by Sunday (21.41%). The prevalence of drink driving detections rose steadily across time, peaking in 2008 and 2009 before declining slightly. This decline was observed across all Queensland regions, and any increase in annual figures was due to the introduction of new offence types. Discussion: This paper further outlines the major findings of the study with regard to tailoring RBT operations to increase detection rates as well as improve the general deterrent effect of the initiative.
Abstract:
Repeatable and accurate seagrass mapping is required for understanding seagrass ecology and supporting management decisions. For shallow (<5 m) seagrass habitats, such maps can be created by integrating high-spatial-resolution imagery with field survey data. Field survey data for seagrass are often collected via snorkelling or diving; however, these methods are limited by environmental and safety considerations. Autonomous Underwater Vehicles (AUVs) are increasingly used to collect field data for habitat mapping, albeit mostly in deeper waters (>20 m). Here we demonstrate and evaluate the use and potential advantages of AUV field data collection for calibration and validation of seagrass habitat mapping of shallow waters (<5 m) from multispectral satellite imagery. The study was conducted in the seagrass habitats of the Eastern Banks (142 km²), Moreton Bay, Australia. In the field, georeferenced photos of the seagrass were collected along transects via snorkelling or an AUV. Photos from both collection methods were analysed manually for seagrass species composition and then used as calibration and validation data to map seagrass using an established semi-automated object-based mapping routine. We compared the relative advantages and disadvantages of the AUV and snorkeller field data sets and their influence on the mapping routine. AUV data collection was more consistent, repeatable and safer than snorkeller transects. Inclusion of deeper-water AUV data resulted in mapping a larger extent of seagrass (~7 km², 5% of the study area) in the deeper waters of the site. Although overall map accuracies did not differ considerably, inclusion of AUV data from deeper-water transects corrected errors in seagrass mapped at depths beyond 5 m where the bottom is still visible in the satellite imagery.
Our results demonstrate that further development of AUV technology is justified for the monitoring of seagrass habitats in ongoing management programs.
Abstract:
In this paper we present research adapting a state-of-the-art condition-invariant robotic place recognition algorithm to the role of automated inter- and intra-image alignment of sensor observations of environmental and skin change over time. The approach inverts the typical criteria placed upon navigation algorithms in robotics: we exploit, rather than attempt to fix, the limited camera viewpoint invariance of such algorithms, showing that approximate viewpoint repetition is realistic in a wide range of environments and medical applications. We demonstrate the algorithm automatically aligning challenging visual data from a range of real-world applications: ecological monitoring of environmental change; aerial observation of natural disasters including flooding, tsunamis and bushfires; and tracking wound recovery and sun damage over time. We also present a prototype active guidance system for enforcing viewpoint repetition. We hope to provide an interesting case study of how traditional research criteria in robotics can be inverted to provide useful outcomes in applied settings.
Abstract:
An updated analysis of the previous analysis available here: http://eprints.qut.edu.au/76230/
Abstract:
Due to their unobtrusive nature, vision-based approaches to tracking sports players have been preferred over wearable sensors, as they do not require the players to be instrumented for each match. However, due to heavy occlusion between players, variation in resolution and pose, and fluctuating illumination conditions, tracking players continuously remains an unsolved vision problem. For tasks like clustering and retrieval, noisy data (i.e. missing and false player detections) is problematic because it generates discontinuities in the input data stream. One method of circumventing this issue is to use an occupancy map, in which the field is discretised into a series of zones and a count of player detections in each zone is obtained. A series of frames can then be concatenated to represent a set-play or example of team behaviour. A drawback of this approach, however, is that its compressibility is low (i.e. the variability in the feature space is incredibly high). In this paper, we propose a bilinear spatiotemporal basis model using a role representation, operating in a low-dimensional space, to clean up the noisy detections. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our method on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared it to manually labelled data.
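The occupancy-map representation can be sketched simply: discretise the pitch into zones, count detections per zone, and concatenate per-frame maps into a descriptor for a window of frames. The sketch below is an illustration, not the paper's implementation; the pitch dimensions, grid resolution and detection coordinates are assumed.

```python
import numpy as np

def occupancy_map(detections, field_size=(91.4, 55.0), grid=(10, 6)):
    """Count player detections per zone for one frame.

    detections: iterable of (x, y) pitch positions in metres.
    field_size: assumed pitch dimensions (a field-hockey pitch is ~91.4 x 55 m).
    grid: number of zones along each axis.
    """
    counts = np.zeros(grid, dtype=int)
    for x, y in detections:
        i = min(int(x / field_size[0] * grid[0]), grid[0] - 1)
        j = min(int(y / field_size[1] * grid[1]), grid[1] - 1)
        counts[i, j] += 1
    return counts

# A set-play descriptor: concatenate per-frame maps over a window of frames.
frames = [[(10.0, 20.0), (45.0, 30.0)], [(12.0, 21.0), (46.0, 29.0)]]
descriptor = np.concatenate([occupancy_map(f).ravel() for f in frames])
```

Missing or false detections simply perturb the zone counts, which is why the representation tolerates noisy input better than explicit continuous tracks.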
Abstract:
Focus groups are a popular qualitative research method for information systems researchers. However, compared with the abundance of research articles and handbooks on planning and conducting focus groups, there is surprisingly little research on how to analyse focus group data. Moreover, the few articles that specifically address focus group analysis all sit in fields other than information systems and offer little specific guidance for information systems researchers. Further, even the studies that exist in other fields do not provide a systematic and integrated procedure for analysing both focus group ‘content’ and ‘interaction’ data. As the focus group is a valuable method for answering the research questions of many IS studies (in business, government and society contexts), we believe more attention should be paid to this method in IS research. This paper offers a systematic and integrated procedure for qualitative focus group data analysis in information systems research.
Abstract:
The development of microfinance in Vietnam since the 1990s has coincided with remarkable progress in poverty reduction. Numerous descriptive studies have illustrated that microfinance is an effective tool for eradicating poverty in Vietnam, but evidence from quantitative studies is mixed. This study contributes to the literature by providing new evidence on the impact of microfinance on poverty reduction in Vietnam, using repeated cross-sectional data from the Vietnam Living Standards Survey (VLSS) over the period 1992-2010. Our results show that micro-loans contribute significantly to household consumption.
Abstract:
The upstream oil and gas industry has been contending with massive data sets and monolithic files for many years, but “Big Data” is a relatively new concept that has the potential to significantly re-shape the industry. Despite the impressive value being realized by Big Data technologies in other parts of the marketplace, much of the data collected within the oil and gas sector tends to be discarded, ignored, or analyzed in a very cursory way. This viewpoint examines existing data management practices in the upstream oil and gas industry and compares them to the practices and philosophies that have emerged in organizations leading the way in Big Data. The comparison shows that companies widely considered leaders in Big Data analytics regard data as a valuable asset in its own right, whereas the oil and gas industry usually regards data as descriptive information about a physical asset rather than as something valuable in and of itself. The paper then discusses how the industry could extract more value from its data, and concludes with a series of policy-related questions to this end.
Abstract:
Long-term systematic population monitoring data sets are rare but are essential in identifying changes in species abundance. In contrast, community groups and natural history organizations have collected many species lists. These represent a large, untapped source of information on changes in abundance but are generally considered of little value. The major problem with using species lists to detect population changes is that the amount of effort used to obtain the list is often uncontrolled and usually unknown. It has been suggested that using the number of species on the list, the "list length," can be a measure of effort. This paper significantly extends the utility of Franklin's approach using Bayesian logistic regression. We demonstrate the value of List Length Analysis to model changes in species prevalence (i.e., the proportion of lists on which the species occurs) using bird lists collected by a local bird club over 40 years around Brisbane, southeast Queensland, Australia. We estimate the magnitude and certainty of change for 269 bird species and calculate the probabilities that there have been declines and increases of given magnitudes. List Length Analysis confirmed suspected species declines and increases. This method is an important complement to systematically designed intensive monitoring schemes and provides a means of utilizing data that may otherwise be deemed useless. The results of List Length Analysis can be used for targeting species of conservation concern for listing purposes or for more intensive monitoring. While Bayesian methods are not essential for List Length Analysis, they can offer more flexibility in interrogating the data and are able to provide a range of parameters that are easy to interpret and can facilitate conservation listing and prioritization. © 2010 by the Ecological Society of America.
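The core of List Length Analysis is a regression of a species' presence on each list against (a function of) list length, with a term for change over time. The paper fits a Bayesian logistic regression; the sketch below fits the maximum-likelihood analogue by Newton's iteration (IRLS) on synthetic lists, purely to illustrate the structure of the model. The data and coefficients are invented, not the Brisbane club's records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bird lists: the chance a species appears on a list rises with
# list length (observer effort) and falls over the years (a true decline).
n = 2000
year = rng.uniform(0, 40, n)            # years since records began
length = rng.integers(5, 80, n)         # number of species on each list
true_logit = -2.0 + 0.8 * np.log(length) - 0.03 * year
present = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Maximum-likelihood logistic regression via Newton's method (IRLS);
# the paper fits the Bayesian analogue of this model.
X = np.column_stack([np.ones(n), np.log(length), year])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                      # IRLS weights
    H = X.T @ (X * W[:, None])           # observed information matrix
    beta += np.linalg.solve(H, X.T @ (present - p))

# beta[2] < 0 indicates declining prevalence after correcting for effort.
```

The key design point is that list length enters as a covariate, so uncontrolled effort is modelled rather than assumed away.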
Abstract:
Quantifying the competing rates of intake and elimination of persistent organic pollutants (POPs) in the human body is necessary to understand the levels and trends of POPs at a population level. In this paper we reconstruct the historical intake and elimination of ten polychlorinated biphenyls (PCBs) and five organochlorine pesticides (OCPs) from Australian biomonitoring data by fitting a population-level pharmacokinetic (PK) model. Our analysis exploits two sets of cross-sectional biomonitoring data for PCBs and OCPs in pooled blood serum samples from the Australian population that were collected in 2003 and 2009. The modeled adult reference intakes in 1975 for PCB congeners ranged from 0.89 to 24.5 ng/kg bw/day, lower than the daily intakes of OCPs ranging from 73 to 970 ng/kg bw/day. Modeled intake rates are declining with half-times from 1.1 to 1.3 years for PCB congeners and 0.83 to 0.97 years for OCPs. The shortest modeled intrinsic human elimination half-life among the compounds studied here is 6.4 years for hexachlorobenzene, and the longest is 30 years for PCB-74. Our results indicate that it is feasible to reconstruct intakes and to estimate intrinsic human elimination half-lives using the population-level PK model and biomonitoring data only. Our modeled intrinsic human elimination half-lives are in good agreement with values from a similar study carried out for the population of the United Kingdom, and are generally longer than reported values from other industrialized countries in the Northern Hemisphere.
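The population-level reconstruction balances intake against first-order elimination. A minimal sketch, assuming an exponentially declining intake rate I(t) = I0·2^(-t/t_intake) and using the analytic solution of dB/dt = I(t) - k_el·B with B(0) = 0; the actual model in the paper is more detailed, and the parameter values below are merely illustrative of the reported ranges.

```python
import numpy as np

def body_burden(t, I0, t_intake, t_elim):
    """Body burden B(t) for intake I(t) = I0 * 2**(-t / t_intake) and
    first-order elimination with intrinsic half-life t_elim, B(0) = 0."""
    k_in = np.log(2) / t_intake   # rate of decline of intake
    k_el = np.log(2) / t_elim     # intrinsic elimination rate
    return I0 / (k_el - k_in) * (np.exp(-k_in * t) - np.exp(-k_el * t))

# Illustrative values: intake halving every ~1.2 years, intrinsic
# elimination half-life of 10 years (within the ranges reported above).
t = np.linspace(0.0, 40.0, 401)
B = body_burden(t, I0=10.0, t_intake=1.2, t_elim=10.0)
# The burden peaks a few years in, then declines at the slower of the
# two rates, which is what lets both be estimated from cross-sections.
```

Because the late-time slope reflects the slower rate while the peak position reflects both, two well-separated cross-sectional surveys (as in 2003 and 2009 here) can in principle constrain both half-lives.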
Abstract:
Despite being in use since 1976, the Delusions-Symptoms-States-Inventory/states of Anxiety and Depression (DSSI/sAD) has not yet been validated for use among people with diabetes. The aim of this study was to examine the validity of the personal disturbance scale (DSSI/sAD) among women with diabetes using Mater-University of Queensland Study of Pregnancy (MUSP) cohort data. The DSSI subscales were compared against DSM-IV disorders, the Mental Component Score of the Short Form 36 (SF-36 MCS), and the Center for Epidemiologic Studies Depression Scale (CES-D). Factor analyses, odds ratios, receiver operating characteristic (ROC) analyses and diagnostic efficiency tests were used to report findings. Exploratory factor analysis and fit indices confirmed the hypothesized two-factor model of the DSSI/sAD. We found significant variations in the DSSI/sAD domain scores that could be explained by the CES-D (DSSI-Anxiety: 55%; DSSI-Depression: 46%) and the SF-36 MCS (DSSI-Anxiety: 66%; DSSI-Depression: 56%). The DSSI subscales predicted DSM-IV-diagnosed depression and anxiety disorders. The ROC analyses show that, although the DSSI symptoms and DSM-IV disorders were measured concurrently, the estimates of concordance remained only moderate. The findings demonstrate that the DSSI/sAD items have similar relationships to one another in both the diabetes and non-diabetes data sets, suggesting that they have similar interpretations.
Abstract:
In recommender systems based on multidimensional data, additional metadata provides algorithms with more information for better understanding the interaction between users and items. However, most profiling approaches in neighbourhood-based recommendation for multidimensional data merely split or project the dimensional data and do not consider the latent interaction between the dimensions. In this paper, we propose a novel user/item profiling approach for Collaborative Filtering (CF) item recommendation on multidimensional data. We further present an incremental profiling method for updating the profiles. For item recommendation, we delve into different types of relations in the data to understand the interaction between users and items more fully, and propose three multidimensional CF recommendation approaches for top-N item recommendation based on the proposed user/item profiles. The proposed multidimensional CF approaches are capable of incorporating not only localized relations of user-user and/or item-item neighbourhoods but also latent interaction between all dimensions of the data. Experimental results show significant improvements in recommendation accuracy.
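As background, the "localized relations of item-item neighbourhoods" that the proposed approaches extend can be illustrated with a minimal single-dimensional, item-based CF sketch using cosine similarity. The ratings matrix is invented, and this is not the paper's multidimensional method.

```python
import numpy as np

# Invented user-item rating matrix (rows: users, columns: items).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0           # guard against unrated items
    Rn = R / norms
    return Rn.T @ Rn

def top_n(R, user, n=2):
    """Rank unseen items by similarity-weighted sums over the user's items."""
    scores = item_similarity(R) @ R[user]
    scores[R[user] > 0] = -np.inf     # never re-recommend seen items
    return np.argsort(scores)[::-1][:n]

recs = top_n(R, user=1)               # user 1 has rated items 0 and 3
```

A multidimensional extension would additionally fold metadata dimensions (time, tags, context) into the profiles rather than relying on the rating matrix alone, which is the gap the paper addresses.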
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what are simple and efficient algorithms for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
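For context, a standard worst-case-safe baseline for the shifting-experts problem is the Fixed-Share algorithm of Herbster and Warmuth: exponential weights plus a small uniform "share" step that keeps a floor of weight on every expert, so the algorithm can re-concentrate after a switch. The sketch below runs it on an invented two-expert sequence whose better expert switches halfway; it is an illustration, not one of the new algorithms this abstract asks for.

```python
import numpy as np

def fixed_share(losses, eta=0.5, alpha=0.05):
    """Fixed-Share over K experts.

    losses: (T, K) array of per-trial expert losses in [0, 1].
    eta: learning rate; alpha: share (switching) probability.
    Returns the algorithm's cumulative mixture loss.
    """
    T, K = losses.shape
    w = np.full(K, 1.0 / K)
    total = 0.0
    for t in range(T):
        total += w @ losses[t]               # loss of the weighted mixture
        w = w * np.exp(-eta * losses[t])     # exponential-weights update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / K      # share step enables tracking
    return total

# Two experts; the better one switches halfway through 200 trials.
rng = np.random.default_rng(1)
L = np.empty((200, 2))
L[:100, 0], L[:100, 1] = rng.random(100) * 0.2, rng.random(100) * 0.8
L[100:, 0], L[100:, 1] = rng.random(100) * 0.8, rng.random(100) * 0.2
alg_loss = fixed_share(L)
single_best = min(L[:, 0].sum(), L[:, 1].sum())
shifting_best = L[:100, 0].sum() + L[100:, 1].sum()
```

On this easy (per-segment iid) data, Fixed-Share beats the best single expert and approaches the best shifting comparator; the open questions above concern exploiting such iid structure more aggressively without losing worst-case robustness.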
Abstract:
The majority of sugar mill locomotives are equipped with GPS devices that record locomotive position data. Locomotive run information (e.g. start times, run destinations and activities) is stored electronically in software called TOTools. The latest software development allows TOTools to interpret historical GPS information by combining this data with run information recorded in TOTools and geographic information from a GIS application called MapInfo. As a result, TOTools can summarise run activity details such as run start and finish times and shunt activities with great accuracy. This paper presents 15 reports developed to summarise run activities and speed information. The reports will be of use pre-season to assist in developing the next year's schedule and in determining priorities for investment in track infrastructure. They will also be of benefit during the season for closely monitoring locomotive run performance against the existing schedule.
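The simplest of the run summaries described above can be sketched as grouping GPS fixes by run and taking the earliest and latest timestamps as the run's start and finish. The log schema and values below are hypothetical, not TOTools' actual format.

```python
from datetime import datetime

# Hypothetical GPS log rows (run_id, timestamp, lat, lon); the real
# TOTools schema is not described in this paper.
log = [
    ("R1", "2014-07-01 05:30:00", -21.14, 149.18),
    ("R1", "2014-07-01 09:45:00", -21.20, 149.05),
    ("R2", "2014-07-01 06:10:00", -21.14, 149.18),
    ("R2", "2014-07-01 11:02:00", -21.31, 148.99),
]

def run_summary(log):
    """Earliest and latest GPS fix per run -> (start, finish) times."""
    runs = {}
    for run_id, ts, _lat, _lon in log:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        start, finish = runs.get(run_id, (t, t))
        runs[run_id] = (min(start, t), max(finish, t))
    return runs

summary = run_summary(log)
```

Joining the positions against geographic zones (as TOTools does with MapInfo layers) would additionally attribute dwell time to sidings and shunt activities.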