959 results for Computer software maintenance
Abstract:
The SoundCipher software library provides an easy way to create music in the Processing development environment. With the SoundCipher library added to Processing, you can write software programs that make music to go along with your graphics, and you can add sounds to enhance your Processing animations or games. SoundCipher provides an easy interface for playing 'notes' on the JavaSound synthesizer, for playback of audio files, and for communicating via MIDI. It provides accurate scheduling and allows events to be organised in musical time, using beats and tempo. It uses a 'score' metaphor that allows the construction of simple or complex musical arrangements. SoundCipher is designed to facilitate the basics of algorithmic music and interactive sound design as well as to provide a platform for sophisticated computational music. It allows integration with the Minim library when more sophisticated audio and synthesis functionality is required, and with the oscP5 library for communicating via Open Sound Control.
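A minimal Processing sketch (Processing code is Java) illustrating the note-playing and 'score' interfaces described above. It assumes the SoundCipher library is installed; the calls follow the library's documented playNote/SCScore style, but the signatures should be verified against the installed version.

```java
import arb.soundcipher.*;

SoundCipher sc;

void setup() {
  sc = new SoundCipher(this);
  sc.playNote(60, 100, 1.0);      // middle C, loudness 100, for one beat

  // The 'score' metaphor: schedule events in musical time (beats and tempo).
  SCScore score = new SCScore();
  score.tempo(120);               // 120 beats per minute
  for (int i = 0; i < 8; i++) {
    score.addNote(i, 60 + i, 80, 1.0);  // start beat, pitch, loudness, beats
  }
  score.play();
}
```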
Resumo:
Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to address these problems, taking advantage of both the computer science and the information science approaches. We first discuss the requirements from a number of perspectives: users, content providers, content managers and technical systems. We then present an overview of our system architecture and describe the techniques that underlie the major components of the system, including automatic object category detection; user-driven tagging; metadata transform and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating some existing collections, analyse the difficulties encountered and propose ways to address them.
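The abstract describes these components only at the architectural level, so the Java sketch below is purely illustrative: every class and field name is hypothetical. It shows the general idea the paper argues for, namely keeping automatically extracted low-level features, curator description metadata and user tags together in one record, so the description side can be augmented from the content side.

```java
import java.util.*;

// Hypothetical record structure; not the paper's actual schema or
// expression language.
class CulturalObjectRecord {
  String id;
  Map<String, Double> contentFeatures = new HashMap<>(); // low-level, extracted
  Set<String> curatorKeywords = new HashSet<>();         // description-based
  Set<String> userTags = new HashSet<>();                // user-driven tagging

  // Augment description metadata with keywords inferred from content
  // features, using an (assumed) feature-to-keyword mapping and threshold.
  void augment(Map<String, String> featureToKeyword, double threshold) {
    for (Map.Entry<String, Double> f : contentFeatures.entrySet()) {
      String keyword = featureToKeyword.get(f.getKey());
      if (keyword != null && f.getValue() >= threshold) {
        curatorKeywords.add(keyword);
      }
    }
  }
}
```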
Abstract:
The over-representation of novice drivers in crashes is alarming. Research indicates that one in five drivers crashes within their first year of driving. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers. There is currently a need to develop a comprehensive driver evaluation system that benefits from advances in Driver Assistance Systems. Since driving depends on fuzzy inputs from the driver (e.g. approximate estimation of the distance to other vehicles, approximate assumptions about other vehicles' speeds), the evaluation system must be based on criteria and rules that handle the uncertain and fuzzy characteristics of driving. This paper presents a system that evaluates the data stream acquired from multiple in-vehicle sensors (the Driver Vehicle Environment, DVE) using fuzzy rules, and classifies driving manoeuvres (i.e. overtaking, lane changes and turns) as low risk or high risk. The fuzzy rules use parameters such as following distance, frequency of mirror checks, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvre to assess risk. The fuzzy rules for estimating risk were designed after analysing the selected driving manoeuvres as performed by driver trainers. This paper focuses mainly on the difference in gaze patterns between experienced and novice drivers during the selected manoeuvres. Using this system, trainers of novice drivers can empirically evaluate novice drivers' driving behaviour and give them feedback on it.
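The abstract does not give the rule base itself, so the Java sketch below is only a hedged illustration of how fuzzy rules of this kind combine parameters such as following distance and mirror-check frequency; the membership functions, thresholds and the min-based AND are standard Mamdani-style assumptions, not the paper's values.

```java
// Illustrative fuzzy risk classification for a single manoeuvre.
class ManoeuvreRiskFuzzy {

  // Shoulder membership: 1 below lo, 0 above hi, linear in between.
  static double low(double x, double lo, double hi) {
    if (x <= lo) return 1.0;
    if (x >= hi) return 0.0;
    return (hi - x) / (hi - lo);
  }

  // Degree to which an overtaking manoeuvre is "high risk".
  static double highRisk(double followingDistanceM, int mirrorChecks) {
    double closeFollowing = low(followingDistanceM, 10, 30); // close below ~10 m
    double fewMirrorChecks = low(mirrorChecks, 1, 3);        // few below ~1
    // Rule: IF following distance is close AND mirror checks are few
    // THEN risk is high. AND modelled with min, as in Mamdani systems.
    return Math.min(closeFollowing, fewMirrorChecks);
  }

  public static void main(String[] args) {
    double risk = highRisk(8.0, 1);  // tailgating, one mirror check
    System.out.println(risk > 0.5 ? "high risk" : "low risk");
  }
}
```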
Abstract:
Several approaches have been proposed to recognize handwritten Bengali characters using different curve-fitting algorithms and curvature analysis. In this paper, a new curve-fitting algorithm to identify the various strokes of a handwritten character is developed. The algorithm recognizes strokes of different patterns (lines, quadratic curves) precisely, which greatly reduces the burden of error elimination. Implementation of this Modified Syntactic Method demonstrates significant improvement in the recognition of Bengali handwritten characters.
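The abstract does not specify the algorithm's internals, so the Java sketch below shows only the generic idea: least-squares fitting of a quadratic y = a + b*x + c*x^2 to the stroke points, then calling the stroke a line when the curvature coefficient c is negligible. The threshold and structure are illustrative assumptions, not the paper's method.

```java
class StrokeFit {

  // Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations.
  static double[] fitQuadratic(double[] x, double[] y) {
    double[] s = new double[5];   // s[k] = sum of x^k, k = 0..4
    double[] t = new double[3];   // t[k] = sum of y * x^k, k = 0..2
    for (int i = 0; i < x.length; i++) {
      double p = 1.0;
      for (int k = 0; k < 5; k++) {
        s[k] += p;
        if (k < 3) t[k] += y[i] * p;
        p *= x[i];
      }
    }
    double[][] m = {
      { s[0], s[1], s[2], t[0] },
      { s[1], s[2], s[3], t[1] },
      { s[2], s[3], s[4], t[2] }
    };
    // Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    for (int col = 0; col < 3; col++) {
      int piv = col;
      for (int r = col + 1; r < 3; r++)
        if (Math.abs(m[r][col]) > Math.abs(m[piv][col])) piv = r;
      double[] tmp = m[col]; m[col] = m[piv]; m[piv] = tmp;
      for (int r = 0; r < 3; r++) {
        if (r == col) continue;
        double f = m[r][col] / m[col][col];
        for (int c = col; c < 4; c++) m[r][c] -= f * m[col][c];
      }
    }
    return new double[] {
      m[0][3] / m[0][0], m[1][3] / m[1][1], m[2][3] / m[2][2]
    };
  }

  // Classify a stroke: negligible |c| suggests a line (absolute threshold
  // here; in practice it should be normalised for the stroke's scale).
  static String classify(double[] x, double[] y) {
    double c = fitQuadratic(x, y)[2];
    return Math.abs(c) < 1e-3 ? "line" : "quadratic curve";
  }

  public static void main(String[] args) {
    double[] x = { 0, 1, 2, 3, 4 };
    double[] y = { 0, 1, 4, 9, 16 };    // y = x^2
    System.out.println(classify(x, y)); // prints "quadratic curve"
  }
}
```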
Abstract:
Discusses the contentious issues surrounding computer software patents and patenting in connection with the Peer-to-Patent Australia project, a joint initiative of Queensland University of Technology (QUT) and New York Law School (NYLS) that operates with the support and endorsement of IP Australia, the government body housing Australia's patent office. Explains that the project is based on the successful Peer-to-Patent pilots run recently in the USA and Japan, which are designed to improve the quality of issued patents and the patent examination process by facilitating community participation in that process. Describes how members of the public can put forward prior-art references that will be considered by IP Australia's patent examiners when determining whether participating applications are novel and inventive, and therefore deserving of a patent. Concludes that, while Peer-to-Patent Australia is not a complete solution to the problems besetting patent law, the model has considerable advantages over the traditional model of patent examination.
Abstract:
This book disseminates current information pertaining to the modulatory effects of foods and other food substances on behavior and neurological pathways and, importantly, vice versa. This ranges from the neuroendocrine control of eating to the effects of life-threatening disease on eating behavior. The importance of this contribution to the scientific literature lies in the fact that food and eating are an essential component of cultural heritage, yet the effects of perturbations in the food/cognitive axis can be profound. The complex interrelationship between neuropsychological processing, diet, and behavioral outcome is explored within the context of the most contemporary psychobiological research in the area. This comprehensive psychobiology- and pathology-themed text examines the broad spectrum of dietary, behavioral, and neuropsychological interactions, from normative function to occurrences of severe and enduring psychopathological processes.
Abstract:
Patent systems around the world are being pressed to recognise and protect challengingly new and exciting subject matter in order to keep pace with the rapid technological advancement of our age and the fact that we are moving into the era of the 'knowledge economy'. This rapid development, and the pressure to expand the bounds of what has traditionally been recognised as patentable subject matter, has created uncertainty regarding what it is that the patent system is actually supposed to protect. Among other things, the patent system has had to contend with uncertainty surrounding claims to horticultural and agricultural methods, artificial living micro-organisms, methods of treating the human body, computer software and business methods. The contentious issue of the moment is one at whose heart lies the important distinction between what is a mere abstract idea and what is properly an invention deserving of the monopoly protection afforded by a patent. That question is whether purely intangible inventions, being methods that do not involve a physical aspect or effect or cause a physical transformation of matter, constitute patentable subject matter. This paper goes some way to addressing these uncertainties by considering how the Australian approach to the question can be informed by developments arising in the United States of America, and by canvassing some of the possible lessons Australia might learn from the approaches taken there thus far.
Abstract:
Expert knowledge is valuable in many modelling endeavours, particularly where data are not extensive or sufficiently robust. In Bayesian statistics, expert opinion may be formulated as an informative prior, providing an honest reflection of the current state of knowledge before it is updated with new information. Technology is increasingly being exploited to help support the process of eliciting such information. This paper reviews the benefits that have been gained from utilizing technology in this way. These benefits can be structured within a six-step elicitation design framework proposed recently (Low Choy et al., 2009). We assume that the purpose of elicitation is to formulate a Bayesian statistical prior, either to provide a standalone expert-defined model or for updating with new data within a Bayesian analysis. We also assume that the model has been pre-specified before the software is selected. In this case, technology has the most to offer in targeting what experts know (E2), eliciting and encoding expert opinions (E4), enhancing accuracy (E5), and providing an effective and efficient protocol (E6). Benefits include:
- providing an environment with familiar nuances (to make the expert comfortable) where experts can explore their knowledge from various perspectives (E2);
- automating tedious or repetitive tasks, thereby minimizing calculation errors, as well as encouraging interaction between elicitors and experts (E5);
- cognitive gains from educating users, enabling instant feedback (E2, E4-E5), and providing alternative methods of communicating assessments and feedback information, since experts think and learn differently; and
- ensuring a repeatable and transparent protocol is used (E6).
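To make step E4 (encoding an opinion as a prior) concrete, here is a small hedged Java example using one standard textbook device: the expert states an expected proportion and an "equivalent prior sample size", which is encoded as a Beta prior and later updated conjugately with observed data. This is a generic encoding, not necessarily the protocol of the tools the paper reviews.

```java
class BetaPriorElicitation {

  // Expert believes the proportion is about `mean`, with confidence worth
  // roughly `priorN` observations: encode as Beta(a, b) with a + b = priorN,
  // a / (a + b) = mean.
  static double[] encode(double mean, double priorN) {
    return new double[] { mean * priorN, (1.0 - mean) * priorN };
  }

  public static void main(String[] args) {
    double[] prior = encode(0.2, 10);  // Beta(2, 8)
    // Conjugate update with new data: 7 successes in 30 trials.
    double aPost = prior[0] + 7, bPost = prior[1] + 30 - 7;
    System.out.printf("posterior Beta(%.0f, %.0f), mean %.3f%n",
        aPost, bPost, aPost / (aPost + bPost)); // Beta(9, 31), mean 0.225
  }
}
```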
Abstract:
As organizations reach higher levels of Business Process Management maturity, they tend to collect numerous business process models. Such models may be linked with each other or mutually overlap, supersede one another and evolve over time. Moreover, they may be represented at different abstraction levels depending on the target audience and modeling purpose, and may be available in multiple languages (e.g. due to company mergers). Thus, it is common that organizations struggle with keeping track of their process models. This demonstration introduces AProMoRe (Advanced Process Model Repository) which aims to facilitate the management of (large) process model collections.
Abstract:
Several studies have developed metrics for software quality attributes of object-oriented designs, such as reusability and functionality. However, metrics that measure the quality attribute of information security have received little attention. Moreover, existing security metrics measure the system either at a high level (i.e. the whole system) or at a low level (i.e. the program code). These approaches make it hard and expensive to discover and fix vulnerabilities caused by software design errors. In this work, we focus on the design of an object-oriented application and define a number of information security metrics derivable from a program's design artifacts. These metrics allow software designers to discover and fix security vulnerabilities at an early stage, and help compare the potential security of alternative designs. In particular, we present security metrics based on the composition, coupling, extensibility, inheritance and design size of a given object-oriented, multi-class program, from the point of view of potential information flow.
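The abstract names the metric families but not their formulas, so the Java sketch below is illustrative only: it computes one plausible design-level measure (accessors exposing security-classified attributes, relative to all classified attributes in the design), in the spirit of potential-information-flow metrics but not matching the paper's exact definitions.

```java
import java.util.*;

// Hypothetical design-artifact summary of one class in an OO design.
class DesignClass {
  final String name;
  final int classifiedAttributes;       // attributes marked security-critical
  final int publicClassifiedAccessors;  // public methods exposing them

  DesignClass(String name, int classified, int exposedAccessors) {
    this.name = name;
    this.classifiedAttributes = classified;
    this.publicClassifiedAccessors = exposedAccessors;
  }
}

class SecurityMetrics {
  // Illustrative "classified attribute exposure": 0 means nothing classified
  // is reachable through the public interface; higher means more exposure.
  static double exposure(List<DesignClass> design) {
    int classified = 0, exposed = 0;
    for (DesignClass c : design) {
      classified += c.classifiedAttributes;
      exposed += c.publicClassifiedAccessors;
    }
    return classified == 0 ? 0.0 : (double) exposed / classified;
  }

  public static void main(String[] args) {
    List<DesignClass> design = Arrays.asList(
        new DesignClass("Account", 3, 2),
        new DesignClass("AuditLog", 1, 0));
    System.out.println("exposure = " + exposure(design)); // 2 of 4 -> 0.5
  }
}
```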
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on security, evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system's security at an early stage.
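A before-and-after comparison of this kind can be expressed generically. The Java sketch below is a hypothetical harness, not the paper's procedure: it takes any design-level security metric (assumed here: lower means less potential information flow) and reports whether a refactoring step improved it.

```java
import java.util.function.ToDoubleFunction;

class RefactoringSecurityCheck {
  // Evaluate a design-level security metric on the design before and after
  // a refactoring rule is applied, and compare the two values.
  static <D> String assess(ToDoubleFunction<D> metric, D before, D after) {
    double b = metric.applyAsDouble(before);
    double a = metric.applyAsDouble(after);
    if (a < b) return "refactoring reduced potential information flow";
    if (a > b) return "refactoring increased potential information flow";
    return "no measurable change in the metric";
  }
}
```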
Abstract:
The technologies employed for the preparation of conventional tissue engineering scaffolds restrict the materials choice and the extent to which the architecture can be designed. Here we show the versatility of stereolithography with respect to materials and freedom of design. Porous scaffolds are designed with computer software and built with either a poly(d,l-lactide)-based resin or a poly(d,l-lactide-co-ε-caprolactone)-based resin. Characterisation of the scaffolds by micro-computed tomography shows excellent reproduction of the designs. The mechanical properties are evaluated in compression, and show good agreement with finite element predictions. The mechanical properties of scaffolds can be controlled by the combination of material and scaffold pore architecture. The presented technology and materials enable an accurate preparation of tissue engineering scaffolds with a large freedom of design, and properties ranging from rigid and strong to highly flexible and elastic.