154 results for Random-Walk Hypothesis


Relevance:

20.00%

Publisher:

Abstract:

In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds alongside fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, Quality-of-Service issues such as execution time and running costs must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
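The abstract gives no pseudocode, so the sketch below is only a generic illustration of the random-key encoding idea, not the authors' algorithm: each task's gene is a float in [0, 1) that decodes to a resource index, and a small biased random-key GA evolves the assignment. The task count, resource table, fitness weighting and GA parameters are invented placeholders.

```python
import random

# Minimal random-key GA sketch: each gene is a float in [0, 1) per task;
# decoding maps the gene to a resource index.  The fitness below is a
# placeholder that trades off cost against a crude execution-time proxy.

TASKS = 8                      # number of tasks in the composite service
RESOURCES = [                  # (cost per use, speed) -- hypothetical data
    (0.0, 1.0), (0.0, 0.8),    # "free" private-cloud resources
    (2.0, 2.0), (3.0, 2.5),    # fee-paying public-cloud resources
]

def decode(chrom):
    """Map each random key to a resource index."""
    return [int(g * len(RESOURCES)) for g in chrom]

def fitness(chrom, cost_weight=0.5):
    assignment = decode(chrom)
    cost = sum(RESOURCES[r][0] for r in assignment)
    time = sum(1.0 / RESOURCES[r][1] for r in assignment)
    return cost_weight * cost + (1 - cost_weight) * time   # lower is better

def evolve(pop_size=30, generations=100, elite=0.2, mutant=0.1):
    pop = [[random.random() for _ in range(TASKS)] for _ in range(pop_size)]
    n_elite, n_mutant = int(elite * pop_size), int(mutant * pop_size)
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:n_elite]                                  # keep elites
        nxt += [[random.random() for _ in range(TASKS)]      # fresh random mutants
                for _ in range(n_mutant)]
        while len(nxt) < pop_size:                           # biased crossover
            a, b = random.choice(pop[:n_elite]), random.choice(pop)
            nxt.append([ai if random.random() < 0.7 else bi
                        for ai, bi in zip(a, b)])
        pop = nxt
    best = min(pop, key=fitness)
    return decode(best), fitness(best)

print(evolve())
```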

Relevance:

20.00%

Publisher:

Abstract:

Objective: The global implementation of random roadside oral drug testing is relatively limited, and correspondingly, the literature on the effectiveness of this intervention is scant. This study aims to provide a preliminary indication of the impact of roadside drug testing in Queensland. Methods: A sample of Queensland motorists (N = 922) completed a self-report questionnaire investigating their drug driving behaviour, as well as examining the perceived effect of legal sanctions (certainty, severity and swiftness) and knowledge of the countermeasure on their subsequent offending behaviour. Results: Analysis of the collected data revealed that approximately 20% of participants reported drug driving at least once in the last six months. Overall, there was considerable variability in respondents' perceptions regarding the certainty, severity and swiftness of legal sanctions associated with the testing regime, and a considerable proportion remained unaware of testing practices. In regard to predicting those who intended to drug drive again in the future, perceptions of apprehension certainty, more specifically low certainty of apprehension, were significantly associated with self-reported intentions to offend. Additionally, self-reported recent drug driving activity and frequent drug consumption were also identified as significant predictors, indicating that in the current context past behaviour is a prominent predictor of future behaviour. To a lesser extent, awareness of testing practices was a significant predictor of intending not to drug drive in the future. Conclusion: The results indicate that drug driving is relatively prevalent on Queensland roads and that a number of factors may influence such behaviour. Additionally, while the roadside testing initiative is beginning to have a deterrent impact, its success will likely depend on targeted, intelligence-led implementation to increase apprehension levels as well as the general deterrent effect.

Relevance:

20.00%

Publisher:

Abstract:

Log-linear and maximum-margin models are two commonly used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
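As a rough illustration of the dual optimisation the abstract describes (minimising a convex function subject to simplex constraints), here is a minimal exponentiated-gradient sketch on a stand-in quadratic objective. It shows only the multiplicative update followed by renormalisation, not the batch/online variants or the structured-prediction factorisation studied in the paper; the objective, step size and dimensions are invented for the example.

```python
import numpy as np

# Minimal sketch of an exponentiated-gradient (EG) step on the probability
# simplex, the constraint set referred to above.  The quadratic objective
# below is a stand-in, not the log-linear or max-margin dual itself.

def eg_minimize(grad, dim, eta=0.5, iters=200):
    """Minimize a convex function over the simplex via EG updates."""
    u = np.full(dim, 1.0 / dim)            # start at the uniform distribution
    for _ in range(iters):
        u = u * np.exp(-eta * grad(u))     # multiplicative (EG) update
        u /= u.sum()                       # renormalise back onto the simplex
    return u

# Example: minimize f(u) = 0.5 * ||Q u - t||^2 over the simplex.
rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 3))
t = rng.normal(size=5)
grad = lambda u: Q.T @ (Q @ u - t)

u_star = eg_minimize(grad, dim=3)
print(u_star, u_star.sum())                # a point on the simplex
```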

Relevance:

20.00%

Publisher:

Abstract:

The CDKN2 gene, encoding the cyclin-dependent kinase inhibitor p16, is a tumour suppressor gene that maps to chromosome band 9p21-p22. The most common mechanism of inactivation of this gene in human cancers is homozygous deletion; however, in a smaller proportion of tumours and tumour cell lines, intragenic mutations occur. In this study we have compiled a database of over 120 published point mutations in the CDKN2 gene from a wide variety of tumour types. A further 50 deletions, insertions, and splice mutations in CDKN2 have also been compiled. Furthermore, we have standardised the numbering of all mutations according to the full-length 156 amino acid form of p16. From this study we are able to define several hotspots, some of which occur at conserved residues within the ankyrin domains of p16. While many of the hotspots are shared by a number of cancers, the relative importance of each position varies, possibly reflecting the role of different carcinogens in the development of certain tumours. As reported previously, the mutational spectrum of CDKN2 in melanomas differs from that of internal malignancies and supports the involvement of UV in melanoma tumorigenesis. Notably, 52% of all substitutions in melanoma-derived samples occurred at just six nucleotide positions. Nonsense mutations comprise a comparatively high proportion of the mutations present in the CDKN2 gene, and possible explanations for this are discussed.
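Purely to illustrate the bookkeeping involved, once every mutation is numbered against the full-length 156-residue form of p16, a simple tally such as the following is enough to flag positions mutated in more than one sample. The entries are made-up placeholders, not records from the compiled database.

```python
from collections import Counter

# Illustrative only: tally point mutations by codon (standardised numbering)
# and flag codons hit in more than one sample as candidate hotspots.
mutations = [
    ("melanoma", 58), ("melanoma", 58), ("pancreatic", 58),
    ("melanoma", 80), ("oesophageal", 80), ("melanoma", 114),
]

counts = Counter(codon for _tumour, codon in mutations)
hotspots = sorted(codon for codon, n in counts.items() if n >= 2)
print(hotspots)   # codons mutated in more than one sample
```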

Relevance:

20.00%

Publisher:

Abstract:

Despite the dangers associated with drink walking, limited research is currently available regarding the factors that influence individuals to engage in this risky behaviour. This study examined the influence of psychosocial factors on individuals' intentions to drink walk across four experimental scenarios (and a control condition). Specifically, a 2 × 2 repeated measures design was utilised in which all of the scenarios incorporated a risky pedestrian crossing situation (i.e., a pedestrian crossing against a red man signal) but differed according to the level of group identity (i.e., low/strangers and high/friends) and conformity (low and high). Individuals were assessed for their intentions to drink walk within each of these different scenarios. Undergraduate students (N = 151), aged 17–30 years, completed a questionnaire. Overall, most of the study's hypotheses were supported, with individuals reporting the highest intentions to drink walk when in the presence of friends (i.e., high group identity) and when their friends were also said to be crossing against the red man signal (i.e., high conformity). The findings may have significant implications for the design of countermeasures to reduce drink walking. For instance, the current findings suggest that potentially effective strategies may include promoting resilience to peer influence as well as highlighting the negative consequences associated with following the behaviour of other intoxicated pedestrians who cross against a red signal.

Relevance:

20.00%

Publisher:

Abstract:

Contamination of packaged foods by micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money through product recalls, compensation claims, consumer impact and subsequent loss of market share; the cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide there is an increasing demand for cost-effective, state-of-the-art inspection technologies capable of reliably detecting leaky seals and delivering products at six-sigma quality levels. This project develops non-destructive testing technology that combines digital imaging and sensing with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line.

Flexible plastic packages are widely used and are the least expensive means of retaining product quality; they can be sealed to maximise the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the contents are not contaminated by micro-organisms entering through air leaks. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent products with leaky seals from being sold to consumers. Many current non-destructive testing (NDT) methods for checking the seals of flexible packages are best suited to random sampling and laboratory purposes. The three most commonly used methods are vacuum/pressure decay, the bubble test and helium leak detection. Although these methods can detect very fine leaks, their long processing times make them unviable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review.

The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production units. Laboratory testing was completed successfully, and a methodical design procedure was needed to arrive at a sound concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with consistent results, and the electrical testing likewise provided solid results, allowing the project to move forward with confidence; laboratory design testing confirmed the theoretical assumptions before the detailed design phase began. Alternative concepts in both the mechanical and electrical disciplines are discussed so that an informed decision can be made, and each major mechanical and electrical component is detailed through the research and design process. The design procedure works methodically through the major functions from both a mechanical and an electrical perspective, canvassing alternative ideas for the major components which, although not always practical in this application, show that the full range of engineering and functional options was explored. Further concepts were then designed and developed for the entire HSDS unit based on established practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed industry components. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and in areas outside the food processing industry.
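The abstract does not specify the HSDS decision logic, but the pass/fail idea behind a differential-vacuum seal check can be sketched as follows: a package whose chamber pressure rises faster than a calibrated threshold is flagged as leaky. The sensor readings, sample rate and decay threshold below are invented placeholders, not HSDS specifications.

```python
# Toy pass/fail check for a vacuum-decay seal test: if the measured pressure
# climbs faster than the calibrated threshold, the seal is assumed to leak.

def is_leaky(pressure_readings_kpa, sample_dt_s, max_rise_kpa_per_s=0.05):
    """Flag a package if the vacuum decays (pressure rises) too quickly."""
    rise = pressure_readings_kpa[-1] - pressure_readings_kpa[0]
    duration = sample_dt_s * (len(pressure_readings_kpa) - 1)
    return (rise / duration) > max_rise_kpa_per_s

good = [20.000, 20.001, 20.002, 20.003]   # near-constant pressure: intact seal
bad = [20.0, 20.4, 20.9, 21.5]            # pressure climbing: leaky seal
print(is_leaky(good, sample_dt_s=0.1), is_leaky(bad, sample_dt_s=0.1))
```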

Relevance:

20.00%

Publisher:

Abstract:

Fusion techniques have received considerable attention for achieving performance improvement in biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based, digit-dependent speaker models. Analysis incorporating correlation modeling demonstrates that the use of adaptive samples improves overall fusion performance compared to randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be further reduced by 6% when adaptive samples are used. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.
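As a back-of-envelope companion to the error-rate expressions mentioned above, the sketch below computes fused error rates for an accept-on-any-attempt architecture under a naive independence assumption; the paper's analysis additionally models correlation between attempts and the adaptive case, which this toy calculation ignores. The operating-point numbers are hypothetical.

```python
# Independence-only sketch: in an "accept if any attempt accepts" fusion of
# k samples, false rejects shrink while false accepts grow.

def fused_rates(far, frr, k):
    """Fused error rates for k independent attempts, accept on first success."""
    fused_frr = frr ** k                # reject only if every attempt rejects
    fused_far = 1 - (1 - far) ** k      # accept if any single attempt accepts
    return fused_far, fused_frr

for k in (1, 2, 3):
    print(k, fused_rates(far=0.02, frr=0.05, k=k))
```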

Relevance:

20.00%

Publisher:

Abstract:

The Poisson distribution has often been used for count data such as accident counts. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, the Poisson and NB distributions are incapable of taking into account some unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the presence of excess zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivation for Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
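For readers unfamiliar with the zero-inflated form, the following is a minimal sketch of the ZIP log-likelihood only; the location-specific random effects and the Bayesian calibration the paper recommends (e.g., MCMC over the random-effect terms) are not shown, and the counts and parameter values are hypothetical.

```python
import math

# Minimal zero-inflated Poisson (ZIP) log-likelihood sketch.  pi is the
# probability of the structural-zero state and lam the Poisson mean; a
# location-specific random effect would enter as lam = exp(beta0 + u[site]),
# which is omitted here for brevity.

def zip_log_pmf(y, pi, lam):
    if y == 0:
        return math.log(pi + (1 - pi) * math.exp(-lam))
    return math.log(1 - pi) - lam + y * math.log(lam) - math.lgamma(y + 1)

def zip_log_likelihood(counts, pi, lam):
    return sum(zip_log_pmf(y, pi, lam) for y in counts)

# Hypothetical crash counts with excess zeros at one site:
counts = [0, 0, 0, 1, 0, 2, 0, 0, 3, 0]
print(zip_log_likelihood(counts, pi=0.4, lam=1.2))
```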