111 results for multiple data


Relevance: 60.00%

Abstract:

Introduction and Aims. At present there is little research into the use of drug detection dogs. The present study sought to explore the use of detection dogs in Sydney, Australia, utilising multiple data sources.

Design and Methods. Data were taken from interviews with 100 regular ecstasy users and 20 key experts as part of the 2006 New South Wales arm of the Ecstasy and Related Drugs Reporting System, and secondary data sources.

Results.
The majority of regular ecstasy users reported taking some form of precaution if made aware that dogs would be at an event they were attending. A small proportion of the sample reported consuming their drugs when coming into contact with detection dogs. One group of key experts viewed the use of detection dogs as useful; one group disliked the use of detection dogs but cooperated with law enforcement when dogs were used; and one group considered that detection dogs contribute to greater harm. Secondary data sources further suggested that the use of detection dogs does not significantly assist police in identifying and apprehending drug suppliers.

Discussion and Conclusions.
The present study suggests that regular ecstasy users do not see detection dogs as an obstacle to their drug use. Future research is necessary to explore in greater depth the experiences that drug users have with detection dogs; the effect detection dogs may have on deterring drug consumption; whether encounters with detection dogs contribute to drug-related harm; and the cost–benefit analysis of this law enforcement exercise.

Relevance: 60.00%

Abstract:

Aims: This paper examines the epidemiology of ecstasy use and harm in Australia using multiple data sources.

Design: The data included (1) Australian Customs Service 3,4-methylenedioxymethamphetamine (MDMA) detections; (2) the National Drug Strategy Household and Australian Secondary Student Alcohol and Drug Surveys; (3) data from Australia's Ecstasy and Related Drugs Reporting System; (4) the number of recorded police incidents for ecstasy possession and distribution collated by the N.S.W. Bureau of Crime Statistics and Research; (5) the number of calls to the Alcohol and Drug Information Service and Family Drug Support relating to ecstasy; (6) the Alcohol and Other Drug Treatment Services National Minimum Dataset on the number of treatment episodes for ecstasy; and (7) N.S.W. Division of Analytical Laboratories toxicology data on the number of deaths in which MDMA was detected.

Findings: Recent ecstasy use among adults in the general population has increased, whereas among secondary students it has remained low and stable. The patterns of ecstasy consumption among regular ecstasy users have changed over time. Polydrug use and use for extended periods of time (>48 h) remain common among this group. Frequent ecstasy use is associated with a range of risk behaviours and other problems, which tend to be attributed to a number of drugs along with ecstasy. Few ecstasy users present for treatment for problems related to their ecstasy consumption.

Conclusions: Messages and interventions to reduce the risks associated with polydrug use and patterns of extended periods of use are clearly warranted. These messages should be delivered outside of traditional health care settings, as few of these users are engaged with such services.

Relevance: 60.00%

Abstract:

The hierarchical beta process has found interesting applications in recent years. In this paper we present a modified hierarchical beta process prior with applications to hierarchical modeling of multiple data sources. The novel use of the prior over a hierarchical factor model allows factors to be shared across different sources. We derive a slice sampler for this model, enabling tractable inference even when the likelihood and the prior over parameters are non-conjugate. This allows the application of the model in much wider contexts without restrictions. We present two different data generative models: a linear Gaussian-Gaussian model for real-valued data and a linear Poisson-gamma model for count data. Encouraging transfer learning results are shown for two real-world applications: text modeling and content-based image retrieval.
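
The factor-sharing idea can be illustrated with a minimal generative sketch: a finite truncation of the beta process supplies source-specific usage probabilities over one shared pool of factor loadings, and each source's observations follow the linear Gaussian-Gaussian likelihood. Everything below (truncation level, hyperparameters, source names) is an illustrative assumption, and the paper's slice-sampling inference is not reproduced.

```python
# Minimal generative sketch of factor sharing across sources under a
# truncated hierarchical beta process prior; all sizes and hyperparameters
# are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
K, D = 20, 50                          # truncated factor count, observed dimension
phi = rng.normal(size=(K, D))          # factor loadings shared by every source

base_pi = rng.beta(1.0, 1.0, size=K)   # top-level beta process weights
sources = {}
for name, n in [("text", 100), ("images", 80)]:
    # per-source usage probabilities drawn around the shared weights
    pi = rng.beta(5.0 * base_pi + 1e-3, 5.0 * (1.0 - base_pi) + 1e-3)
    Z = rng.random((n, K)) < pi        # which shared factors each item uses
    W = rng.normal(size=(n, K)) * Z    # factor scores, zeroed where unused
    X = W @ phi + 0.1 * rng.normal(size=(n, D))   # Gaussian-Gaussian likelihood
    sources[name] = X

print({name: X.shape for name, X in sources.items()})
```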

Relevance: 60.00%

Abstract:

Efficient management of chronic diseases is critical in modern health care. We consider diabetes mellitus, and our ongoing goal is to examine how machine learning can deliver information for clinical efficiency. The challenge is to aggregate highly heterogeneous sources including demographics, diagnoses, pathologies and treatments, and to extract similar groups so that care plans can be designed. To this end, we extend our recent model, the mixed-variate restricted Boltzmann machine (MV.RBM), as it seamlessly integrates multiple data types for each patient aggregated over time and outputs a homogeneous representation called a "latent profile" that can be used for patient clustering, visualisation, disease correlation analysis and prediction. We demonstrate that the method outperforms all baselines on these tasks: the primary characteristics of patients in the same groups can be identified, and good results are achieved for diagnosis-code prediction.
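
As a rough downstream illustration only: once an MV.RBM (or any comparable model) has mapped each patient's heterogeneous records to a real-valued latent profile, standard tools can perform the clustering and visualisation steps mentioned above. The latent_profiles array, cluster count and dimensions below are placeholder assumptions, not outputs of the actual model.

```python
# Downstream use of latent profiles for clustering and 2-D visualisation.
# The profiles here are random placeholders standing in for MV.RBM outputs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
latent_profiles = rng.normal(size=(300, 50))    # 300 patients x 50 hidden units

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
groups = kmeans.fit_predict(latent_profiles)    # patient groups for care planning

coords = PCA(n_components=2).fit_transform(latent_profiles)  # for plotting
print(np.bincount(groups), coords.shape)
```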

Relevance: 60.00%

Abstract:

The period of interest for this report is the beginning of 2011 to the end of 2012. The period commenced when the Regional Network Leader of the Barwon South Network of schools in the Barwon South Region of the Department of Education and Early Childhood contacted the School of Education at Deakin University, Waurn Ponds Campus, Geelong. The Regional Network Leader outlined a desire to engage with Deakin University to research a short-term-cycle model of school improvement to be implemented in the region. While the model was expected to be taken on by all schools in the region, the research was limited to the 23 schools in the Barwon South Network, with four schools to be investigated more closely in each of the two years (2011 & 2012) – eight focus schools in total.

Many positive outcomes flowed from the implementation of short-term-cycle school improvement plans and their associated practices, but there was wide variation in the nature and degree of success and in perceptions of the process. The research team asked the following questions of the data:

1. What aspects of the School Improvement Plan (SIP) approach were important for initiating and supporting worthwhile change?
2. What might we take from this, to provide guidance on how best to support change in teaching and learning processes in schools?

The School Improvement Plan (SIP) worked in a range of ways. At one level it was strongly focused on school leadership, and a need to improve principals’ capacity to initiate worthwhile teaching and learning processes in their schools. Underlying this intent, one might think, is an assumption that the leadership process involves top-down decision-making and a willingness to hold staff accountable for the quality of their practice.

The second strong focus was on the translation into practice and the consequent effect on student learning, involving an emphasis on data- and evidence-led practice. Hence, along with the leadership focus there was a demand for the process of school improvement to reach down into students and classrooms. Thus, the SIP process inevitably involved a chain of decision-making by which student learning quality drove the intervention, and teachers responsible for this had a common view. The model therefore should not be seen as an intervention only on the principal, but rather on the school decision-making system and focus. Even though it was the principal receiving the SIP planning template and reporting to the network, the reporting was required to include a description of the operation of school processes, of classroom processes, and of student learning. This of course placed significant constraints on principals, which may help explain the variation in responses and outcomes described above.

The findings from this study are based on multiple data sources: analysis of both open and closed survey questions, which all teachers in the 23 schools in the network were invited to complete; interviews with principals, teachers and leaders in the eight case study schools; some interviews with students in the case study schools; interviews with leaders who worked in the regional network office; and field notes from network meetings, including the celebration days. Celebration days occurred each school term, when groups of principals came together to share and celebrate the improvements and processes happening in their schools. Many of the themes emerging from the analysis of the different data sources were similar or overlapping, providing some confidence in the evidence base for the findings.

The study, conducted over two years of data collection and analysis, has demonstrated a range of positive outcomes at the case study schools relating to school communication and collaboration processes, and the professional learning of principals, leadership teams and classroom teachers. There was evidence in the survey responses and field notes from ‘celebration days’ that these outcomes were also represented in other schools in the network. The key points of change concerned the leadership processes of planning for improvement, and the rigorous attention to student data in framing teaching and learning processes. This latter point of change had the effect of basing SIP processes on a platform of evidence-based change. The research uncovered considerable anecdotal and observational evidence of improvements in student learning, in teacher accounts in interviews, and in presentations of student work. Interviews with students, although not as representative as the team would have liked, showed evidence of student awareness of learning goals, a key driver in the SIP improvement model. It was, however, not possible over this timescale to collect objective comparative evidence of enhanced learning outcomes.

A number of features of the short-term-cycle SIP were identified that supported positive change across the network. These were: 1) the support structures represented by the network leader and support personnel within schools, 2) the nature of the SIP model – focusing strongly on change leadership but within a collaborative structure that combined top-down and bottom-up elements, 3) the focus on data-led planning and implementation that helped drill down to explicit elements of classroom practice, and 4) the accountability regimes represented by network leader presence, and the celebration days in which principals became effectively accountable to their peers. We found that in the second year of the project, momentum was lost in the case study schools, as the network was dismantled. This raised issues also for the conduct of research in situations of systemic change.

Alongside the finding of evidence of positive outcomes in the case study schools overall was the finding that the SIP processes and outcomes varied considerably across schools. A number of contextual factors were identified that led to this variation, including school histories of reform, principal management style, and school size and structure that made the short-term-cycle model unmanageable. In some cases there was overt resistance to the SIP model, at least in some part, and this led to an element of performativity in which the language of the SIP was conscripted to other purposes. The study found that even in functioning schools the SIP was understood differently and the processes performed differently, raising the question of whether in the study we are dealing with one SIP or many. The final take-home message from the research is that schools are complex institutions, and models of school improvement need to involve both strong principled features and flexibility in local application if all schools’ interests in improving teaching and learning processes and outcomes are to be served.

Relevance: 60.00%

Abstract:

Educational campaigning has received little attention in the literature. This study investigates long-term and organised urban campaigns that are collectively lobbying the Victorian State Government in Australia for a new public high school to be constructed in their suburb. A public high school is also known as a state school, government school, or an ordinary comprehensive school. It receives the majority of its funding from the State and Federal Australian Government, and is generally regarded as ‘free’ education, in comparison to a private school. Whilst the campaigners frame their requests as for a ‘public school’, their primary appeal is for a local school in their community. This study questions how collective campaigning for a locale-specific public school is influenced by geography, class and identity. In order to explore these campaigns, I draw on formative studies of middle-class school choice from an Australian and United Kingdom perspective (Campbell, Proctor, & Sherington, 2009; Reay, Crozier, & James, 2011). To think about the role of geography and space in these processes of choice, I look to apply Harvey’s (1973) theory of absolute, relational and relative space. I use Bourdieu (1999b) as a sociological lens that is attentive to “site effects”, and it is through this lens that I think about class as a “collection of properties” (Bourdieu, 1984, p. 106), actualised via mechanisms of identity and representation (Hall, 1996; Rose, 1996a, 1996b). This study addresses three distinct gaps in the literature: first, I focus attention on a contemporary middle-class choice strategy, that is, collective campaigning for a public school. Research within this field is significantly under-developed, despite this choice strategy being on the rise. Second, previous research argues that certain middle-class choosers regard the local public school as “inferior” in some way (Reay et al., 2011, p. 111), merely acting as a “safety net” (Campbell et al., 2009, p. 5) and connected to the working-class chooser (Reay & Ball, 1997). The campaigners are characteristic of the middle-class school chooser, but they are purposefully and strategically seeking out the local public school. Therefore, this study looks to build on work by Reay et al. (2011) in thinking about “against-the-grain school choice”, specifically within the Australian context. Third, this study uses visual and graphic methods in order to examine the influence of geography in the education market (Taylor, 2001). I see the visualisation of space and schooling that I offer in this dissertation as a key theoretical contribution of this study. I draw on a number of data sets, both qualitative and quantitative, to explore the research questions. I interviewed campaigners and attended campaign meetings as a participant observer; I collected statistical data from fifteen different suburbs and schools, and conducted comparative analyses of each. These analyses are displayed using visual graphs. This study uses maps created by a professional graphic designer and photographs by a professional photographer; I draw on publications by the campaigners themselves, such as surveys, reports and social media, as well as interviews with campaigners published in local or state newspapers. The multiple data sets enable an immersive and rich graphic ethnography. This study contributes by building on understandings of how particular sociological cohorts of choosers are engaging with, and choosing, the urban public school in Australia.
It is relevant for policy making, in that it comes at a time of increasing privatisation and a move toward independent public schools. This study identifies cohorts of choosers that are employing individual and collective political strategies to obtain a specific school, and it identifies this cohort via explicit class-based characteristics and their school choice behaviours. I look to use fresh theoretical and methodological approaches that emphasise space and geography, theorising geo-identity and the pseudo-private school.

Relevance: 40.00%

Abstract:

In this article, we examine whether or not the inflation rate for 17 OECD countries can be modelled as a stationary process. We find that (1) conventional univariate unit root tests without any structural breaks generally reveal that the inflation rate contains a unit root; (2) the KPSS univariate test with multiple structural breaks reveals that for 10 out of 17 countries inflation is stationary; and (3) the KPSS panel unit root test reveals strong evidence for stationarity of the inflation rate for panels consisting of countries which were declared nonstationary by univariate tests.
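
For readers who want to try the basic univariate checks, the sketch below runs standard ADF and KPSS tests from statsmodels on a synthetic inflation series. It is only a simplified stand-in: the break-adjusted KPSS test and the panel unit root test used in the article are not part of statsmodels and are not shown, and the series itself is invented.

```python
# Univariate stationarity checks on a toy inflation series (illustrative only).
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(42)
shocks = rng.normal(0.0, 0.3, 200)
inflation = 2.5 + 0.05 * np.cumsum(shocks)        # synthetic quarterly inflation

adf_stat, adf_p, *_ = adfuller(inflation)                              # H0: unit root
kpss_stat, kpss_p, *_ = kpss(inflation, regression="c", nlags="auto")  # H0: stationary

print(f"ADF p-value:  {adf_p:.3f}")
print(f"KPSS p-value: {kpss_p:.3f}")
```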

Relevance: 40.00%

Abstract:

Recognising behaviours of multiple people, especially high-level behaviours, is an important task in surveillance systems. When the reliable assignment of people to the set of observations is unavailable, this task becomes complicated. To solve this task, we present an approach in which the hierarchical hidden Markov model (HHMM) is used to model the behaviour of each person and the joint probabilistic data association filter (JPDAF) is applied for data association. The main contributions of this paper lie in the integration of multiple HHMMs for recognising high-level behaviours of multiple people and the construction of Rao-Blackwellised particle filters (RBPF) for approximate inference. Preliminary experimental results in a real environment show the robustness of our integrated method in behaviour recognition and its advantage over the use of the Kalman filter in tracking people.
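
The full HHMM/JPDAF/RBPF machinery is too involved for a short sketch, so the fragment below shows only the forward pass of a flat two-state HMM for a single person's behaviour sequence, as a highly simplified stand-in for the per-person behaviour model; all matrices and the observation sequence are invented.

```python
# Forward pass of a flat HMM for one person's behaviour (simplified stand-in).
import numpy as np

A = np.array([[0.9, 0.1],             # behaviour transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],        # observation likelihoods per behaviour
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])             # initial behaviour distribution

obs = [0, 0, 2, 2, 1]                 # discretised observations for one person

alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]     # forward recursion
    alpha /= alpha.sum()              # normalise to avoid underflow

print("filtered posterior over behaviours:", alpha)
```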

Relevance: 40.00%

Abstract:

For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. The multicore processor technology provides new levels of performance and energy efficiency. The present paper aims to develop power and performance constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power constrained performance optimization and performance constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. It is clear that such power and performance optimization is important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queuing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other model assumes that a core runs at a constant speed. Our results in this paper provide new theoretical insights into power management and performance optimization in data centers.
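
A much-simplified version of the power-constrained performance optimisation can be written as a small numerical programme: each server is approximated as an M/M/1 queue whose service rate grows with its core speed, dynamic power is taken proportional to speed cubed, and the load split and speeds are chosen to minimise mean response time under a power budget. The constants, the single-queue approximation and the power model are assumptions for illustration, not the paper's queueing formulation.

```python
# Power-constrained load distribution sketch: minimise mean response time
# across three heterogeneous servers subject to a total power budget.
# All constants and the M/M/1 approximation are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

n_servers = 3
alpha, total_load, power_budget = 3.0, 4.0, 60.0   # power exponent, jobs/s, watts

def mean_response_time(x):
    lam, s = x[:n_servers], x[n_servers:]   # per-server load and core speed
    mu = 2.0 * s                            # service rate grows with speed
    if np.any(mu - lam <= 1e-6):
        return 1e9                          # unstable queue: reject this point
    return np.sum(lam / (mu - lam)) / total_load   # mean response time (Little's law)

constraints = [
    {"type": "eq",   "fun": lambda x: x[:n_servers].sum() - total_load},
    {"type": "ineq", "fun": lambda x: power_budget - np.sum(x[n_servers:] ** alpha)},
]
x0 = np.concatenate([np.full(n_servers, total_load / n_servers), [2.0, 2.5, 3.0]])
bounds = [(0.0, None)] * n_servers + [(0.5, 4.0)] * n_servers

result = minimize(mean_response_time, x0, bounds=bounds, constraints=constraints)
print("load split:", result.x[:n_servers].round(2))
print("speeds:    ", result.x[n_servers:].round(2))
```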

Relevance: 40.00%

Abstract:

Distributing multiple replicas in geographically-dispersed clouds is a popular approach to reduce latency to users. It is important to ensure that each replica has availability and data integrity; that is, it is the same as the original data without any corruption or tampering. Remote data possession checking is a valid method to verify a replica's availability and integrity. Since remotely checking the entire data is time-consuming due to both the large data volume and the limited bandwidth, efficient data-possession-verifying methods generally sample and check a small hash (or random blocks) of the data to greatly reduce the I/O cost. Most recent research on data possession checking considers only a single replica. However, multiple-replica data possession checking is much more challenging, since it is difficult to optimize the remote communication cost among multiple geographically-dispersed clouds. In this paper, we provide a novel efficient Distributed Multiple Replicas Data Possession Checking (DMRDPC) scheme to tackle these new challenges. Our goal is to improve efficiency by finding an optimal spanning tree to define the partial order of scheduling multiple replicas data possession checking. But since bandwidths have geographical diversity on the different replica links and the bandwidths between two replicas are asymmetric, we must resolve the problem of Finding an Optimal Spanning Tree in a Complete Bidirectional Directed Graph, which we call the FOSTCBDG problem. Particularly, we provide theories for resolving the FOSTCBDG problem by counting all the available paths that viruses can attack in a cloud network environment. Also, we help cloud users achieve efficient multiple replicas data possession checking through an approximate algorithm for tackling the FOSTCBDG problem, and its effectiveness is demonstrated by an experimental study. © 2011 Elsevier Inc.
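
The scheduling-order idea can be sketched with off-the-shelf graph tools: treating each cloud as a node, using inverse bandwidth as an asymmetric edge cost, and extracting a minimum spanning arborescence (Edmonds' algorithm in networkx) rooted at the verifier's cloud gives one partial order for the checking work. This is only an illustration of the FOSTCBDG-style problem; the bandwidth figures, node names and choice of cost are invented, and the paper's approximate algorithm is not reproduced.

```python
# Spanning-arborescence sketch over a complete bidirectional directed graph
# with asymmetric bandwidths; all numbers and names are illustrative.
import itertools
import networkx as nx

clouds = ["origin", "us-east", "eu-west", "ap-south"]
bandwidth = {  # Mbps, asymmetric in each direction
    ("origin", "us-east"): 900, ("us-east", "origin"): 400,
    ("origin", "eu-west"): 600, ("eu-west", "origin"): 300,
    ("origin", "ap-south"): 200, ("ap-south", "origin"): 150,
    ("us-east", "eu-west"): 500, ("eu-west", "us-east"): 450,
    ("us-east", "ap-south"): 250, ("ap-south", "us-east"): 220,
    ("eu-west", "ap-south"): 300, ("ap-south", "eu-west"): 280,
}

G = nx.DiGraph()
for u, v in itertools.permutations(clouds, 2):
    G.add_edge(u, v, weight=1.0 / bandwidth[(u, v)])   # cost = inverse bandwidth

G.remove_edges_from(list(G.in_edges("origin")))        # force "origin" as the root
tree = nx.minimum_spanning_arborescence(G)
print(sorted(tree.edges()))                            # partial order for checking
```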

Relevance: 40.00%

Abstract:

OBJECTIVES: This study investigated the extent to which psychosocial job stressors had lasting effects on a scaled measure of mental health. We applied econometric approaches to a longitudinal cohort to: (1) control for unmeasured individual effects; (2) assess the role of prior (lagged) exposures to job stressors on mental health; and (3) assess the persistence of mental health.

METHODS: We used a panel study with 13 annual waves and applied fixed-effects, first-difference and fixed-effects Arellano-Bond models. The Short Form 36 (SF-36) Mental Health Component Summary score was the outcome variable and the key exposures included: job control, job demands, job insecurity and fairness of pay.

RESULTS: Results from the Arellano-Bond models suggest that greater fairness of pay (β-coefficient 0.34, 95% CI 0.23 to 0.45), job control (β-coefficient 0.15, 95% CI 0.10 to 0.20) and job security (β-coefficient 0.37, 95% CI 0.32 to 0.42) were contemporaneously associated with better mental health. Similar results were found for the fixed-effects and first-difference models. The Arellano-Bond model also showed persistent effects of individual mental health, whereby individuals' previous reports of mental health were related to their reporting in subsequent waves. The estimated long-run impact of job demands on mental health increased after accounting for time-related dynamics, while there were more minimal impacts for the other job stressor variables.

CONCLUSIONS: Our results showed that the majority of the effects of psychosocial job stressors on a scaled measure of mental health are contemporaneous, except for job demands, where accounting for the lagged dynamics was important.
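
For orientation only, the sketch below fits the fixed-effects part of such an analysis with linearmodels' PanelOLS on a fabricated person-by-wave panel; the Arellano-Bond dynamic GMM estimator reported above requires a specialised implementation and is not shown, and every variable and coefficient in the toy data is a placeholder.

```python
# Person fixed-effects regression of a mental health score on job stressors.
# The panel below is random placeholder data, not the study's cohort.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
persons, waves = 200, 13
idx = pd.MultiIndex.from_product([range(persons), range(waves)],
                                 names=["person", "wave"])
df = pd.DataFrame({
    "job_control": rng.normal(size=len(idx)),
    "job_demands": rng.normal(size=len(idx)),
    "job_security": rng.normal(size=len(idx)),
    "fair_pay": rng.normal(size=len(idx)),
}, index=idx)
df["mcs"] = (50 + 0.15 * df.job_control + 0.37 * df.job_security
             + 0.34 * df.fair_pay + rng.normal(0, 5, len(idx)))   # toy SF-36 MCS

model = PanelOLS(df["mcs"],
                 df[["job_control", "job_demands", "job_security", "fair_pay"]],
                 entity_effects=True)                 # person fixed effects
print(model.fit(cov_type="clustered", cluster_entity=True))
```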

Relevance: 40.00%

Abstract:

To generate realistic predictions, species distribution models require the accurate coregistration of occurrence data with environmental variables. There is a common assumption that species occurrence data are accurately georeferenced; however, this is often not the case. This study investigates whether locational uncertainty and sample size affect the performance and interpretation of fine-scale species distribution models. The study evaluated the effects of locational uncertainty across multiple sample sizes by subsampling and spatially degrading occurrence data. Distribution models were constructed for kelp (Ecklonia radiata) across a large study site (680 km²) off the coast of southeastern Australia. Generalized additive models were used to predict distributions based on fine-resolution (2.5 m cell size) seafloor variables, generated from multibeam echosounder data sets, and occurrence data from underwater towed video. The effects of different levels of locational uncertainty in combination with sample size were evaluated by comparing model performance and predicted distributions. While locational uncertainty was observed to influence some measures of model performance, in general this influence was small and varied with the accuracy metric used. However, simulated locational uncertainty caused changes in variable importance and predicted distributions at fine scales, potentially influencing model interpretation. This was most evident with small sample sizes. Results suggested that seemingly high-performing, fine-scale models can be generated from data containing locational uncertainty, although interpreting their predictions can be misleading if the predictions are interpreted at scales similar to the spatial errors. This study demonstrates the need to consider predictions across geographic space rather than performance alone. The findings are important for conservation managers, as they highlight the inherent variation in predictions between equally performing distribution models and the subsequent restrictions on ecological interpretations.
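
A toy version of the modelling workflow is sketched below with pyGAM: a binomial GAM predicts presence from two synthetic seafloor covariates, and a second fit on jittered covariate values imitates the locational-degradation experiment. The covariates, noise levels and accuracy comparison are invented assumptions, not the study's data or exact methods.

```python
# Toy GAM-based species distribution sketch with simulated locational error.
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(7)
n = 1000
depth = rng.uniform(5, 60, n)                     # metres
rugosity = rng.uniform(0, 1, n)
p = 1.0 / (1.0 + np.exp(-(3.0 - 0.12 * depth + 2.0 * rugosity)))
presence = (rng.random(n) < p).astype(int)        # towed-video style records

X = np.column_stack([depth, rugosity])
gam_clean = LogisticGAM(s(0) + s(1)).fit(X, presence)

X_noisy = X + rng.normal(0.0, [3.0, 0.1], size=X.shape)   # simulated positional error
gam_noisy = LogisticGAM(s(0) + s(1)).fit(X_noisy, presence)

print("accuracy, clean covariates:", round(gam_clean.accuracy(X, presence), 3))
print("accuracy, noisy covariates:", round(gam_noisy.accuracy(X_noisy, presence), 3))
```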

Relevance: 30.00%

Abstract:

An impediment to sustainable dryland salinity management is the lack of information on contributing factors. GIS and satellite imagery now offer a cost-effective means of generating relevant land and water resource information for integrated regional management of salinity. In this paper the relationships between patterns in land use/cover distribution and base flow salt concentration in streams (indicated by EC) are investigated and modelled. The Glenelg-Hopkins area is a large regional watershed in southwest Victoria, Australia, covering approximately 2.6 million ha. It is currently estimated that 27,400 ha of land is affected by dryland salinity, and this is predicted to increase rapidly in the next decade if current conditions prevail. Salt concentration data from five gauging stations were analysed with multi-temporal land use maps obtained from satellite imagery. Multiple regression analyses demonstrated that the variables Native Vegetation and Dryland Grain Cropping were the most significant influences on in-stream salinity in the whole catchment (r² = 88.9%) and in the 500 m (r² = 88.3%) and 100 m riparian buffers (r² = 86.9%) during times of base flow. The implications for future land use planning, the effectiveness of riparian zones and revegetation programmes are discussed. This work also demonstrates the utility of applying multivariate statistical analyses, spatial statistics and remote sensing, with data integrated in a GIS framework, for the purpose of predicting and managing the regional salinity threat.
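
The regression step described above can be illustrated with a small, entirely fabricated example: base-flow EC for a handful of gauging stations regressed on the land-cover fractions of their catchments using statsmodels OLS. The numbers are placeholders and do not come from the Glenelg-Hopkins data.

```python
# Multiple regression of base-flow stream EC on land-cover fractions
# (placeholder values for five hypothetical gauging stations).
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "native_vegetation": [0.45, 0.20, 0.10, 0.35, 0.05],   # fraction of catchment
    "dryland_cropping":  [0.10, 0.40, 0.55, 0.20, 0.60],
    "base_flow_ec":      [900, 2100, 2800, 1300, 3200],    # µS/cm
})

X = sm.add_constant(df[["native_vegetation", "dryland_cropping"]])
model = sm.OLS(df["base_flow_ec"], X).fit()
print(round(model.rsquared, 3), model.params.round(1).to_dict())
```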