954 results for computer algorithm
Abstract:
This paper describes algorithms that can musically augment the real-time performance of electronic dance music by generating new musical material through morphing. Note sequence morphing involves the algorithmic generation of music that smoothly transitions between two existing musical segments. The potential of musical morphing in electronic dance music is outlined and previous research is summarised, including discussions of relevant music-theoretic and algorithmic concepts. A novel Markov morphing process that uses similarity measures to construct transition matrices is then outlined and explained. The paper reports on a ‘focus-concert’ study used to evaluate this morphing algorithm and to compare its output with performances from a professional DJ. Discussion of this trial includes reflections on some of the aesthetic characteristics of note sequence morphing. The research suggests that the proposed morphing technique could be used effectively in some electronic dance music contexts.
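As an illustration of the general idea, the sketch below builds a similarity-weighted Markov transition matrix that blends the transition statistics of a source and a target note sequence, then samples a morphed sequence from it. This is a minimal conceptual sketch only, not the morphing process evaluated in the paper; the pitch-based `similarity` measure, the blending parameter `alpha`, and the example MIDI-pitch loops are all hypothetical.

```python
import numpy as np

def similarity(a, b):
    """Toy pitch similarity: closer pitches score higher (hypothetical measure)."""
    return 1.0 / (1.0 + abs(a - b))

def morph_transition_matrix(source, target, alpha):
    """Blend the transition statistics of two note sequences.

    alpha = 0 reproduces the source's transitions, alpha = 1 the target's.
    Rows are normalised so each is a valid probability distribution.
    """
    states = sorted(set(source) | set(target))
    index = {s: i for i, s in enumerate(states)}
    n = len(states)

    def counts(seq):
        m = np.zeros((n, n))
        for a, b in zip(seq, seq[1:]):
            # weight each observed transition by how similar the two notes are
            m[index[a], index[b]] += similarity(a, b)
        return m

    blended = (1 - alpha) * counts(source) + alpha * counts(target)
    blended += 1e-9  # avoid empty rows
    return states, blended / blended.sum(axis=1, keepdims=True)

def sample_morph(source, target, alpha, length, seed=0):
    """Sample a note sequence from the blended transition matrix."""
    rng = np.random.default_rng(seed)
    states, P = morph_transition_matrix(source, target, alpha)
    current = source[0]
    out = [current]
    for _ in range(length - 1):
        current = rng.choice(states, p=P[states.index(current)])
        out.append(int(current))
    return out

# Morph half-way between two short MIDI-pitch loops.
print(sample_morph([60, 62, 64, 67], [60, 63, 65, 68], alpha=0.5, length=8))
```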
Abstract:
This paper reports a study investigating the effect of individual cognitive styles on learning through computer-based instruction. The study adopted a quasi-experimental design involving four groups which were presented with instructional material that either matched or mismatched their preferred cognitive styles. Cognitive styles were measured by cognitive style assessment software (Riding, 1991). The instructional material was designed to cater for the four cognitive styles identified by Riding. Students' learning outcomes were measured by the time taken to perform test tasks and the number of marks scored. The results indicate no significant difference between the matched and mismatched groups on either time taken or scores on test tasks. However, there was a significant difference between the four cognitive styles on test score: the Wholist/Verbaliser group performed better than all other groups, and there was no significant difference between the other groups. An analysis of the performance on test tasks by each cognitive style showed significant differences between the groups on recall, labelling and explanation tasks. Differences between the cognitive style groups did not reach significance for problem-solving tasks. The findings of the study indicate a potential for cognitive style to influence learning outcomes as measured by performance on test tasks.
Abstract:
Video surveillance technology based on closed-circuit television (CCTV) cameras is one of the fastest growing markets in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capability over long periods of time. To overcome this limitation, it is necessary to have “intelligent” processes which are able to highlight the salient data and filter out normal conditions that do not pose a threat to security. In order to create such intelligent systems, an understanding of human behaviour, specifically suspicious behaviour, is required. One of the challenges in achieving this is that human behaviour can only be understood correctly in the context in which it appears. Although context has been exploited in the general computer vision domain, it has not been widely used in the automatic suspicious behaviour detection domain, so it is essential that context be formulated, stored and used by the system in order to understand human behaviour. Finally, since surveillance systems can be modelled as large-scale data stream systems, it is difficult to have a complete knowledge base; such systems need not only to continuously update their knowledge but also to retrieve the extracted information that is relevant to a given context. To address these issues, a context-based approach for detecting suspicious behaviour is proposed, in which contextual information is exploited to improve detection. The proposed approach utilises a data stream clustering algorithm to discover the behaviour classes and their frequencies of occurrence from the incoming behaviour instances; contextual information is then used in addition to this information to detect suspicious behaviour. The proposed approach is able to detect observed, unobserved and contextual suspicious behaviour. Two case studies, using video feeds taken from the CAVIAR dataset and from the Z-block building at the Queensland University of Technology, are presented in order to test the proposed approach. These experiments show that, by using information about context, the proposed system is able to make more accurate detections, especially of behaviours which are suspicious only in some contexts while being normal in others. Moreover, this information gives critical feedback to system designers to refine the system. Finally, the proposed modified Clustream algorithm enables the system both to continuously update its knowledge and to effectively retrieve the information learned in a given context. The outcomes from this research are: (a) a context-based framework for automatically detecting suspicious behaviour which can be used by an intelligent video surveillance system in making decisions; (b) a modified Clustream data stream clustering algorithm which continuously updates the system's knowledge and is able to retrieve contextually related information effectively; and (c) an update-describe approach which extends the capability of existing human local motion features, known as interest-point-based features, to the data stream environment.
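To make the role of context concrete, the sketch below performs a minimal online clustering of behaviour feature vectors kept separately per context, flagging instances whose behaviour class is rare in the current context. It is an illustrative toy only and does not reproduce the modified Clustream algorithm or the interest-point-based features described above; the `radius` threshold, the feature vectors and the context labels are hypothetical.

```python
import numpy as np
from collections import defaultdict

class SimpleStreamClusterer:
    """Minimal online clustering per context (illustrative only; this is not
    the modified Clustream algorithm described in the thesis)."""

    def __init__(self, radius=1.0):
        self.radius = radius
        # context label -> list of [centroid, count]
        self.clusters = defaultdict(list)

    def update(self, context, features):
        """Assign a behaviour instance to the nearest cluster for its context,
        creating a new cluster if none is close enough. Returns how often the
        matched behaviour class has been seen in that context."""
        x = np.asarray(features, dtype=float)
        best, best_dist = None, None
        for c in self.clusters[context]:
            d = np.linalg.norm(x - c[0])
            if best is None or d < best_dist:
                best, best_dist = c, d
        if best is None or best_dist > self.radius:
            self.clusters[context].append([x, 1])
            return 1
        best[1] += 1
        best[0] = best[0] + (x - best[0]) / best[1]  # incremental centroid update
        return best[1]

clusterer = SimpleStreamClusterer(radius=0.8)
# A walking-speed/direction pattern seen often during office hours ...
for _ in range(50):
    clusterer.update("office_hours", [1.2, 0.1])
# ... is rare, and hence potentially suspicious, after hours.
count = clusterer.update("after_hours", [1.2, 0.1])
print("suspicious" if count < 3 else "normal")
```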
Abstract:
This paper reports the findings of a pilot study aimed at improving learning outcomes from Computer Assisted Instruction (CAI). The study involved second-year nursing students at the Queensland University of Technology. Students were assessed for their preferred cognitive style and presented with either matched or mismatched instructional material. The instructional material was developed in accordance with four cognitive styles (Riding & Cheema, 1991). The findings indicate that groups which received instructional material matched to their preferred cognitive style may have performed better than groups which received mismatched instructional material; the matched group performed notably better on the explanation and problem-solving tasks.
Abstract:
Visual recording devices such as video cameras, CCTV systems and webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard video cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. A background subtraction algorithm extracts motion pixels from an image sequence, these pixels are then grouped into regions that represent moving objects, and finally each region is identified as a particular object using classifiers. To evaluate the method, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems for work zone safety or productivity.
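The pipeline described above (motion segmentation, region grouping, classification) can be sketched with standard tools. The snippet below uses OpenCV's MOG2 background subtractor to extract motion pixels and group them into candidate regions; it is a generic, assumed implementation rather than the authors' system, the video path is a placeholder, and the final classification step is left as a comment.

```python
import cv2

# The MOG2 background subtractor models the static scene and flags motion pixels.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("site_camera.mp4")  # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove shadows and noise, then group motion pixels into candidate regions.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        # A trained classifier would label each region (worker, excavator, ...);
        # here we only draw the candidate region.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```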
Abstract:
Focusing on the conditions that an optimization problem may satisfy, so-called convergence conditions are proposed, and subsequently a stochastic optimization algorithm, named the DSZ algorithm, is presented to deal with both unconstrained and constrained optimization. The underlying principle is discussed via a theoretical model of the DSZ algorithm, from which a practical model is derived. The efficiency of the practical model is demonstrated by comparison with similar algorithms, such as Enhanced simulated annealing (ESA), Monte Carlo simulated annealing (MCS), Sniffer Global Optimization (SGO), Directed Tabu Search (DTS), and the Genetic Algorithm (GA), using a set of well-known unconstrained and constrained optimization test cases. Further attention is given to strategies for optimizing high-dimensional unconstrained problems with the DSZ algorithm.
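The abstract does not detail DSZ's internal mechanics, so as a generic point of comparison the sketch below implements one of the baseline approaches named above: a plain simulated-annealing-style stochastic search with a quadratic penalty for constraints of the form g(x) <= 0. It is not the DSZ algorithm; the test function, penalty weight and cooling schedule are illustrative assumptions.

```python
import math
import random

def penalised(f, constraints, x, weight=1e3):
    """Objective plus quadratic penalty for violated constraints g(x) <= 0."""
    penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + weight * penalty

def anneal(f, constraints, x0, step=0.5, temp=1.0, cooling=0.995, iters=5000):
    """Simulated-annealing-style search on the penalised objective."""
    x, fx = list(x0), penalised(f, constraints, x0)
    best, best_f = list(x), fx
    for _ in range(iters):
        cand = [xi + random.gauss(0, step) for xi in x]
        fc = penalised(f, constraints, cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fc < best_f:
                best, best_f = list(cand), fc
        temp *= cooling
    return best, best_f

# Constrained test case: minimise a shifted sphere subject to x0 + x1 >= 1.
sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
constraints = [lambda x: 1.0 - (x[0] + x[1])]  # expressed in g(x) <= 0 form
print(anneal(sphere, constraints, x0=[0.0, 0.0]))
```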
Abstract:
This study uses dosimetry film measurements and Monte Carlo simulations to investigate the accuracy of type-a (pencil-beam) dose calculations for predicting the radiation doses delivered during stereotactic radiotherapy treatments of the brain. It is shown that when evaluating doses in a water phantom, the type-a algorithm provides dose predictions which are accurate to within clinically relevant criteria, gamma(3%, 3 mm), but these predictions are nonetheless subtly different from the results of evaluating doses from the same fields using radiochromic film and Monte Carlo simulations. An analysis of a clinical meningioma treatment suggests that when predicting stereotactic radiotherapy doses to the brain, the inaccuracies of the type-a algorithm can be exacerbated by inadequate evaluation of the effects of nearby bone or air, resulting in dose differences of up to 10% for individual fields. The results of this study indicate the possible advantage of using Monte Carlo calculations, as well as measurements with high-spatial-resolution media, to verify type-a predictions of the dose delivered in cranial treatments.
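For reference, the gamma(3%, 3 mm) criterion mentioned above combines a dose-difference test with a distance-to-agreement test. The sketch below computes a simple one-dimensional gamma index by exhaustive search over the evaluated profile; the Gaussian "film" and "pencil-beam" profiles are made up purely for illustration and have no connection to the study's data.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """Exhaustive 1D gamma index.

    ref_dose, eval_dose: dose profiles sampled at `positions` (mm).
    dose_tol: fractional dose-difference criterion (3% of max reference dose).
    dist_tol: distance-to-agreement criterion in mm.
    Returns the gamma value at each reference point; gamma <= 1 passes.
    """
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    pos = np.asarray(positions, float)
    dose_norm = dose_tol * ref.max()

    gammas = np.empty_like(ref)
    for i, (r, p) in enumerate(zip(ref, pos)):
        dd = (ev - r) / dose_norm      # normalised dose differences
        dist = (pos - p) / dist_tol    # normalised spatial offsets
        gammas[i] = np.sqrt(dd ** 2 + dist ** 2).min()
    return gammas

# Toy profiles: a small shift and scaling between "film" and "pencil-beam".
x = np.linspace(-20, 20, 81)
film = np.exp(-(x / 10.0) ** 2)
pencil = 1.02 * np.exp(-((x - 1.0) / 10.0) ** 2)
print("pass rate: %.1f%%" % (100 * (gamma_1d(film, pencil, x) <= 1).mean()))
```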
Abstract:
The aim of this project was to implement a just-in-time hints help system into a real-time strategy (RTS) computer game that would deliver information to the user at the time it would be of most benefit. The goal of this help system is to improve the user’s learning in terms of rate of learning, retention and avoidance of stagnation. The first stage of the project was implementing a computer game incorporating four different types of skill that the user must acquire, namely motor, perceptual, declarative-knowledge and strategic skills. Subsequently, the just-in-time hints help system was incorporated into the game to assess the user’s knowledge and deliver hints accordingly. The final stage of the project was to test the effectiveness of the help system through two phases of testing, with the goal of demonstrating an increase in the user’s assessment of the helpfulness of the system from phase one to phase two. The results of this testing showed no significant difference in the user’s responses between the two phases. However, when the results were analysed with respect to several categories of hints that were identified, patterns in the data began to emerge. The conclusions of the project were that further testing with a larger sample size would be required to provide more reliable results, and that factors such as the user’s skill level and different types of goals should be taken into account.
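As a concrete illustration of how such a help system might decide when to deliver a hint, the sketch below tracks per-skill progress and issues a hint once a skill has stagnated for a fixed window. This is a hypothetical toy, not the project's implementation; the skill names, hint texts and stagnation threshold are all assumed.

```python
import time

class JustInTimeHints:
    """Toy hint scheduler: delivers a hint when a tracked skill has not
    improved within a stagnation window (all thresholds are hypothetical)."""

    def __init__(self, stagnation_seconds=60):
        self.stagnation = stagnation_seconds
        self.last_progress = {}  # skill -> (best score, timestamp of last improvement)
        self.hints = {
            "motor": "Try using hotkeys to issue orders faster.",
            "strategic": "Scout the map before committing your army.",
        }

    def record(self, skill, score, now=None):
        """Record a skill assessment; return a hint string if the skill stagnated."""
        now = time.time() if now is None else now
        prev = self.last_progress.get(skill)
        if prev is None or score > prev[0]:
            self.last_progress[skill] = (score, now)
            return None
        if now - prev[1] > self.stagnation:
            self.last_progress[skill] = (prev[0], now)  # reset the timer after hinting
            return self.hints.get(skill, "Keep practising this skill.")
        return None

hints = JustInTimeHints(stagnation_seconds=60)
print(hints.record("strategic", score=10, now=0))   # first observation: no hint
print(hints.record("strategic", score=10, now=90))  # stagnated: hint delivered
```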
Abstract:
In this paper we present pyktree, an implementation of the K-tree algorithm in the Python programming language. The K-tree algorithm provides highly balanced search trees for vector quantization and scales up to very large data sets. Pyktree is highly modular and well suited to rapid prototyping of novel distance measures and centroid representations. It is easy to install and provides a Python package for library use as well as command-line tools.
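To convey the underlying idea (without reproducing pyktree's actual API, which is not shown here), the sketch below builds a conceptual two-level tree-structured vector quantiser: a small k-means codebook at the root and a further codebook in each branch, so a query descends the tree instead of searching all centroids. It assumes NumPy and scikit-learn are available; the branching factor and data are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

class TwoLevelKTree:
    """Conceptual two-level tree-structured vector quantiser.

    Illustrates the K-tree idea (a hierarchy of small k-means codebooks) only;
    it does not reproduce pyktree's API or its online insertion algorithm."""

    def __init__(self, branching=4, seed=0):
        self.branching = branching
        self.seed = seed

    def fit(self, data):
        # Root codebook partitions the data into `branching` coarse clusters.
        self.root = KMeans(n_clusters=self.branching, random_state=self.seed,
                           n_init=10).fit(data)
        # Each branch gets its own finer codebook.
        self.leaves = []
        for label in range(self.branching):
            subset = data[self.root.labels_ == label]
            k = min(self.branching, len(subset))
            self.leaves.append(KMeans(n_clusters=k, random_state=self.seed,
                                      n_init=10).fit(subset))
        return self

    def quantise(self, x):
        """Descend the tree: nearest root centroid, then nearest leaf centroid."""
        branch = int(self.root.predict(x.reshape(1, -1))[0])
        leaf = self.leaves[branch]
        code = int(leaf.predict(x.reshape(1, -1))[0])
        return leaf.cluster_centers_[code]

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 8))
tree = TwoLevelKTree(branching=4).fit(data)
print(tree.quantise(data[0]))
```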
Abstract:
Fractures of long bones are sometimes treated using various types of fracture fixation devices, including internal plate fixators. These are specialised plates used to bridge the fracture gap(s) while anatomically aligning the bone fragments; the plate is secured in position by screws. The aim of such a device is to support and promote the natural healing of the bone. When using an internal fixation device, the clinician must decide upon many parameters, for example the type of plate and where to position it, and how many screws to use and where to position them. While a number of experimental and computational studies regarding the configuration of screws have been reported in the literature, there is still inadequate information available concerning the influence of screw configuration on fracture healing. Because screw configuration influences the amount of flexibility at the fracture site, it has a direct influence on the fracture healing process; it is therefore important that the chosen screw configuration does not inhibit healing. In addition, screw configuration plays an important role in the distribution of stresses in the plate due to the applied loads: a plate that experiences high stresses is prone to early failure, so the screw configuration used should not encourage the occurrence of high stresses. This project develops a computational program in the Fortran programming language to perform mathematical optimisation to determine the screw configuration of an internal fixation device, within constraints on interfragmentary movement, by minimising the corresponding stress in the plate. The optimal solution thus suggests the positioning and number of screws which satisfy the predefined constraints on interfragmentary movement. For a given screw configuration, the interfragmentary displacement and the stress occurring in the plate were calculated by the finite element method. The screw configurations were iteratively changed and, each time, the corresponding interfragmentary displacements were compared with the predefined constraints; additionally, the corresponding stress was compared with the previously calculated stress value to determine whether there was a reduction. This process was continued until an optimal solution was achieved. The optimisation program has been shown to successfully predict the optimal screw configuration in two cases. The first case was a simplified bone construct, for which the screw configuration solution was comparable with those recommended in the biomechanical literature. The second case was a femoral construct, for which the resulting screw configuration was shown to be similar to those used in clinical cases. The optimisation method and program developed in this study show potential for further investigation, with improvements to the optimisation criteria and to the efficiency of the program.
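The search loop described above can be sketched independently of any particular finite element package. In the toy sketch below, an assumed `evaluate_fe` stand-in returns an interfragmentary displacement and a peak plate stress for each screw placement, and an exhaustive search keeps the lowest-stress configuration whose displacement stays within predefined bounds. The hole numbering, displacement bounds and the fake stress/displacement model are purely illustrative; a real study would replace `evaluate_fe` with finite element analyses, as this project did (in Fortran).

```python
from itertools import combinations

def evaluate_fe(config, n_holes=10):
    """Hypothetical stand-in for a finite element evaluation of one screw
    configuration (occupied hole indices). Only the qualitative trends are
    modelled: a longer unoccupied span bridging the fracture gives a more
    flexible construct but lower peak plate stress."""
    proximal_max = max(h for h in config if h < n_holes // 2)
    distal_min = min(h for h in config if h >= n_holes // 2)
    working_length = distal_min - proximal_max
    displacement = 0.15 * working_length                 # mm, fake linear model
    stress = 400.0 / working_length + 20 * len(config)   # MPa, fake inverse model
    return displacement, stress

def optimise_screws(n_holes=10, screws=4, disp_min=0.2, disp_max=1.0):
    """Exhaustively search screw placements that keep interfragmentary
    displacement within [disp_min, disp_max] and minimise plate stress."""
    proximal = range(0, n_holes // 2)
    distal = range(n_holes // 2, n_holes)
    best, best_stress = None, float("inf")
    for prox in combinations(proximal, screws // 2):
        for dist in combinations(distal, screws // 2):
            config = prox + dist
            disp, stress = evaluate_fe(config, n_holes)
            if disp_min <= disp <= disp_max and stress < best_stress:
                best, best_stress = config, stress
    return best, best_stress

print(optimise_screws())
```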
Abstract:
Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly acute when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time delays. These distributions can be obtained directly either from a single in silico experiment or from a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDEs) that can be used in scenarios of high molecular concentration and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding master equations and presented elsewhere.
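To illustrate the DSSA idea referenced above, the sketch below runs a toy rejection-style delay stochastic simulation for a single species: production events only take effect after a randomly drawn delay, standing in for a spatial process such as translocation between compartments, while degradation acts immediately. The rate constants, the exponential delay distribution and the species itself are invented for illustration and are not taken from the paper.

```python
import heapq
import math
import random

def delay_ssa(t_end=50.0, k_make=0.5, k_deg=0.05, delay_mean=5.0, seed=1):
    """Toy delay SSA: species A is produced at rate k_make, but each new
    molecule only becomes available after an exponentially distributed
    transport delay; A is degraded at rate k_deg per molecule."""
    random.seed(seed)
    t, a = 0.0, 0
    pending = []   # min-heap of completion times for delayed productions
    history = []
    while t < t_end:
        rates = [k_make, k_deg * a]
        total = sum(rates)
        dt = math.inf if total == 0 else random.expovariate(total)
        # If a pending delayed production completes before the next tentative
        # reaction, apply it first and discard the tentative reaction.
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)
            a += 1
        else:
            t += dt
            if t >= t_end:
                break
            if random.random() < rates[0] / total:
                # Schedule the product to appear after a sampled delay.
                heapq.heappush(pending, t + random.expovariate(1.0 / delay_mean))
            else:
                a -= 1
        history.append((t, a))
    return history

print(delay_ssa()[-5:])
```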