876 results for coalescing random walk
Abstract:
This paper describes the approach taken to the clustering task at INEX 2009 by a group at the Queensland University of Technology. The Random Indexing (RI) K-tree was used with a representation based on the semantic markup available in the INEX 2009 Wikipedia collection. The RI K-tree is a scalable approach to clustering large document collections, and it produced high-quality clusters when evaluated using two different methodologies.
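Random Indexing is only named in the abstract, so the following is a minimal sketch of the general technique under stated assumptions, not the INEX-specific semantic-markup representation: each term receives a sparse random index vector, and a document is represented by the frequency-weighted sum of its terms' vectors, which a K-tree can then cluster. The dimension, sparsity, function names and toy vocabulary are illustrative choices.

```python
# Minimal sketch of Random Indexing (RI); dimensions and names are
# illustrative assumptions, not the authors' exact configuration.
import numpy as np

def index_vector(dim=500, nonzeros=10, rng=None):
    """Sparse ternary random vector with a few +1/-1 entries."""
    rng = rng or np.random.default_rng()
    v = np.zeros(dim)
    pos = rng.choice(dim, size=nonzeros, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=nonzeros)
    return v

def document_vector(term_counts, term_index, dim=500):
    """Reduced-dimension document representation for clustering."""
    doc = np.zeros(dim)
    for term, count in term_counts.items():
        doc += count * term_index[term]
    return doc

# Example: two toy documents sharing the term "walk".
rng = np.random.default_rng(0)
vocab = {"random", "walk", "cluster"}
term_index = {t: index_vector(rng=rng) for t in vocab}
d1 = document_vector({"random": 2, "walk": 1}, term_index)
d2 = document_vector({"walk": 3, "cluster": 1}, term_index)
```

The sparse random projection keeps the document vectors low-dimensional while approximately preserving distances, which is what makes this kind of representation feasible for clustering large collections.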
Abstract:
In cloud computing, the resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some resources may be available for free from private clouds while others must be paid for from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
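As a rough illustration of the random-key idea only, and not the authors' algorithm, the sketch below decodes a vector of random keys into a resource assignment and a scheduling order: one gene per task selects a resource and a second gene gives its priority. The task names, durations and makespan-style objective are invented, and precedence constraints between the tasks of a composite service are omitted for brevity.

```python
# Hedged sketch of a random-key chromosome decoder; names and the
# simple makespan objective are illustrative assumptions.
import random

def decode(keys, tasks, resources):
    """keys: 2 * len(tasks) random values in [0, 1)."""
    n = len(tasks)
    # First n genes: map each key onto one of the candidate resources.
    assign = {t: resources[int(keys[i] * len(resources))]
              for i, t in enumerate(tasks)}
    # Last n genes: higher key value means higher scheduling priority.
    order = sorted(tasks, key=lambda t: -keys[n + tasks.index(t)])
    finish = {r: 0.0 for r in resources}           # resource free time
    schedule = []
    for t in order:
        r = assign[t]
        start = finish[r]
        finish[r] = start + t_duration[t]
        schedule.append((t, r, start))
    return schedule, max(finish.values())          # schedule + makespan

t_duration = {"t1": 3.0, "t2": 2.0, "t3": 4.0}
tasks, resources = list(t_duration), ["private_vm", "public_vm"]
keys = [random.random() for _ in range(2 * len(tasks))]
schedule, makespan = decode(keys, tasks, resources)
```

A genetic algorithm would then evolve the key vectors directly, with the decoder guaranteeing that every chromosome maps to a feasible assignment and schedule.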
Abstract:
What can we learn from Chinese parks and Chinese people? We took a walk through Lu Xun Park, one of Shanghai’s most popular parks. Here hundreds of people participate in dozens of physical and mental exercises that are considered to be essential for good health and fitness. We found that the most successful spaces for social interaction and inclusiveness were not just coded for ‘doing’ a diverse range of activities but also for ‘showing’ those activities. This dual role of park spaces could be given greater design consideration in encouraging occupants of small households in Australia to make greater use of public parks in the future.
Abstract:
Purpose – The construction industry in Australia is characterised by a long work-hours culture, with conditions that make it difficult for staff to balance their work and non-work lives. The objective of this paper is to measure the success of a workplace intervention designed to improve work-life balance (WLB) in an alliance project in the construction industry, and the role the project manager plays in this success. Design/methodology/approach – The paper focuses on an alliance case study. Interviews were conducted at two points in time, several months apart, after the interventions were implemented. Findings – Results showed that staff on the whole were more satisfied with their work experience after the interventions, and indicated the important role that managers' attitudes and behaviours played. Originality/value – Managerial support for work-life initiatives is a critical element in achieving WLB and satisfaction with working arrangements. The fact that the manager “talked the talk and walked the walk” was a major contributing success factor, which has not previously been demonstrated.
Abstract:
Quantitative studies of nascent entrepreneurs, such as GEM and PSED, must generate their samples by screening the adult population, usually by phone in developed economies. Phone survey research has recently been challenged by shifting patterns of ownership and response rates of landline versus mobile (cell) phones, particularly for younger respondents. This challenge is acutely intense for entrepreneurship, which is a strongly age-dependent phenomenon. Although shifting ownership rates have received some attention, shifting response rates have remained largely unexplored. For the Australian GEM 2010 adult population study we adopted a dual-frame approach that allows comparison between samples of mobile and landline phones. We find a substantial response bias towards younger, male and metropolitan respondents for mobile phones – far greater than explained by ownership rates. We also find that these response-rate differences significantly bias the estimates of the prevalence of early-stage entrepreneurship in both samples, even when each sample is weighted to match the Australian population.
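The weighting step mentioned in the final sentence can be illustrated with a small post-stratification sketch; the cell definitions, population shares and toy sample below are invented for illustration and are not GEM's actual weighting scheme.

```python
# Illustrative post-stratification weighting: each respondent is
# weighted by (population share of their cell) / (sample share of
# their cell) before the prevalence rate is computed.
from collections import Counter

def weighted_prevalence(respondents, population_share):
    """respondents: list of (cell, is_early_stage_entrepreneur)."""
    n = len(respondents)
    sample_share = {c: k / n
                    for c, k in Counter(c for c, _ in respondents).items()}
    weights = [population_share[c] / sample_share[c] for c, _ in respondents]
    flags = [1.0 if e else 0.0 for _, e in respondents]
    return sum(w * f for w, f in zip(weights, flags)) / sum(weights)

# Toy mobile-phone sample skewed towards young metropolitan males.
sample = ([("young_male_metro", True)] * 6
          + [("older_female_regional", False)] * 4)
pop_share = {"young_male_metro": 0.3, "older_female_regional": 0.7}
print(weighted_prevalence(sample, pop_share))
```

Weighting of this kind corrects for over- or under-represented cells, but, as the abstract notes, it cannot remove bias that arises from differential response behaviour within the cells themselves.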
Abstract:
Objective: The global implementation of oral random roadside drug testing is relatively limited, and correspondingly, the literature that focuses on the effectiveness of this intervention is scant. This study aims to provide a preliminary indication of the impact of roadside drug testing in Queensland. Methods: A sample of Queensland motorists (N = 922) completed a self-report questionnaire to investigate their drug driving behaviour, as well as to examine the perceived effect of legal sanctions (certainty, severity and swiftness) and knowledge of the countermeasure on their subsequent offending behaviour. Results: Analysis of the collected data revealed that approximately 20% of participants reported drug driving at least once in the last six months. Overall, there was considerable variability in respondents' perceptions regarding the certainty, severity and swiftness of legal sanctions associated with the testing regime, and a sizeable proportion remained unaware of testing practices. With regard to predicting those who intended to drug drive again in the future, perceptions of apprehension certainty, more specifically low certainty of apprehension, were significantly associated with self-reported intentions to offend. Additionally, self-reported recent drug driving activity and frequent drug consumption were also identified as significant predictors, which indicates that in the current context past behaviour is a prominent predictor of future behaviour. To a lesser extent, awareness of testing practices was a significant predictor of intending not to drug drive in the future. Conclusion: The results indicate that drug driving is relatively prevalent on Queensland roads, and that a number of factors may influence such behaviour. Additionally, while the roadside testing initiative is beginning to have a deterrent impact, its success will likely depend on targeted, intelligence-led implementation to increase apprehension levels as well as the general deterrent effect.
Abstract:
Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
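For concreteness, here is a minimal sketch of an exponentiated-gradient step on simplex-constrained dual variables of the kind described above; the quadratic toy objective and the learning rate are placeholders, not the paper's actual log-linear or max-margin duals.

```python
# Minimal sketch of an exponentiated-gradient (EG) step on the simplex:
# multiplicative update with the exponentiated negative gradient,
# followed by per-row normalization. The objective is a toy example.
import numpy as np

def eg_step(alpha, grad, eta):
    """One EG update.

    alpha : (n_examples, n_labels) dual variables, each row on the simplex.
    grad  : gradient of the dual objective at alpha, same shape.
    eta   : learning rate.
    """
    new = alpha * np.exp(-eta * grad)
    return new / new.sum(axis=1, keepdims=True)

# Toy usage: minimize f(a) = 0.5 * ||a - target||^2 row-wise on the simplex.
target = np.array([[0.7, 0.2, 0.1]])
alpha = np.full((1, 3), 1.0 / 3.0)            # uniform starting point
for _ in range(200):
    alpha = eg_step(alpha, alpha - target, eta=0.5)
print(alpha)   # approaches `target`, the constrained minimizer
```

Because the update is multiplicative and then normalized, the iterates remain strictly inside the simplex, which is what makes EG a natural fit for duals with simplex constraints.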
Abstract:
Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real-valued time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
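To make the windowing and phase-coupling discussion concrete, the sketch below computes a segment-averaged, windowed bispectrum estimate; the segment length, the Hanning window and the single phase-coupled triad in the test signal are illustrative choices, not the configuration used in the paper.

```python
# Sketch of a windowed bispectrum estimate, averaged over segments.
# A symmetric time-domain window (Hanning here) reduces spectral leakage.
import numpy as np

def bispectrum(x, nseg, nfft):
    """Average B(k, l) = <X_k X_l conj(X_{k+l})> over windowed segments."""
    w = np.hanning(nfft)                       # symmetric time-domain window
    segs = x[: nseg * nfft].reshape(nseg, nfft)
    X = np.fft.fft(segs * w, axis=1)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for k in range(nfft // 2):
        for l in range(nfft // 2):
            if k + l < nfft:
                B[k, l] = np.mean(X[:, k] * X[:, l] * np.conj(X[:, k + l]))
    return B

# Test signal: a phase-coupled triad, f3 = f1 + f2 and phase3 = phase1 + phase2.
rng = np.random.default_rng(1)
t = np.arange(64 * 256) / 256.0
p1, p2 = rng.uniform(0, 2 * np.pi, 2)
x = (np.cos(2 * np.pi * 10 * t + p1) + np.cos(2 * np.pi * 22 * t + p2)
     + np.cos(2 * np.pi * 32 * t + p1 + p2) + 0.1 * rng.standard_normal(t.size))
B = bispectrum(x, nseg=64, nfft=256)           # peak near (k, l) = (10, 22)
```

With modes at an integral number of wavelengths per segment, as here, the coupled triad produces a sharp bispectral peak; shifting the frequencies off the Fourier bins spreads that contribution across the plane, which is the leakage effect the analytical expressions quantify.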