974 results for machine intelligence
Abstract:
In practice, parallel-machine job-shop scheduling (PMJSS) is very useful for developing standard modelling approaches and generic solution techniques for many real-world scheduling problems. In this paper, based on an analysis of the structural properties of an extended disjunctive graph model, a hybrid shifting bottleneck procedure (HSBP) combined with a Tabu Search metaheuristic is developed to deal with the PMJSS problem. The original SBP algorithm for job-shop scheduling (JSS) has been significantly improved to solve the PMJSS problem, with four novelties: i) a topological-sequence algorithm is proposed to decompose the PMJSS problem into a set of single-machine scheduling (SMS) and/or parallel-machine scheduling (PMS) subproblems; ii) a modified Carlier algorithm, based on newly proposed and proved lemmas, is developed to solve the SMS subproblem; iii) the Jackson rule is extended to solve the PMS subproblem; iv) a Tabu Search metaheuristic is embedded within the SBP framework to optimise the JSS and PMJSS cases. The computational experiments show that the proposed HSBP is very efficient in solving the JSS and PMJSS problems.
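As a rough illustration of item iii), here is a minimal Python sketch of a Jackson-style list-scheduling rule on identical parallel machines, where each job has a head (release time), a processing time and a tail (delivery time), and whichever machine falls idle first takes the released job with the largest tail. This is a generic textbook construction, not necessarily the authors' exact extension.

```python
import heapq

def jackson_parallel(jobs, m):
    """Jackson-style list scheduling on m identical machines.

    jobs: list of (release r_j, processing p_j, tail q_j) tuples.
    Returns max_j (C_j + q_j), the makespan including delivery times.
    """
    pending = sorted(jobs)                  # ascending release time
    machines = [0.0] * m                    # min-heap of machine-free times
    heapq.heapify(machines)
    ready = []                              # max-heap on the tail q_j
    i, objective = 0, 0.0
    while i < len(pending) or ready:
        t = heapq.heappop(machines)         # earliest-free machine
        if not ready and pending[i][0] > t:
            t = pending[i][0]               # machine idles until next release
        while i < len(pending) and pending[i][0] <= t:
            r, p, q = pending[i]
            heapq.heappush(ready, (-q, p))
            i += 1
        neg_q, p = heapq.heappop(ready)     # released job with largest tail
        finish = t + p
        objective = max(objective, finish - neg_q)   # C_j + q_j
        heapq.heappush(machines, finish)
    return objective

jobs = [(0, 3, 7), (1, 2, 4), (2, 4, 9), (3, 1, 2)]
print(jackson_parallel(jobs, m=2))          # -> 16 on this toy instance
```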
Abstract:
This project investigates machine listening and improvisation in interactive music systems, with the goal of improvising musically appropriate accompaniment to an audio stream in real time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation to allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually the agent’s functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains. In the reception domain I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum-kit, with sufficiently low latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate in the context of complex polyphonic audio. In the analysis domain I present novel beat-tracking and metre-induction algorithms that operate in real time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms. In the generation domain I present a novel improvisatory architecture drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting. All of these innovations have been combined into a computational agent – the Jambot – which is capable of producing improvised percussive musical accompaniment to an audio stream in real time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussions of the Jambot and its component algorithms, along with some comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
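For context, a standard baseline for the percussive onset detection described above is half-wave-rectified spectral flux with adaptive peak picking. The sketch below (plain NumPy; window sizes and thresholds are arbitrary choices) illustrates that baseline only and is not the Jambot's actual detection suite.

```python
import numpy as np

def spectral_flux_onsets(x, sr, frame=1024, hop=512, k=1.6):
    """Detect onsets as peaks in half-wave-rectified spectral flux."""
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    mag = np.abs(np.array([np.fft.rfft(win * x[i * hop:i * hop + frame])
                           for i in range(n)]))
    # positive frame-to-frame magnitude change, summed over bins
    flux = np.maximum(np.diff(mag, axis=0), 0.0).sum(axis=1)
    onsets = []
    for i in range(1, len(flux) - 1):
        local = flux[max(0, i - 8):i + 8]
        # keep local maxima that clearly exceed the local median
        if flux[i] == local.max() and flux[i] > k * np.median(local) + 1e-9:
            onsets.append((i + 1) * hop / sr)   # onset time in seconds
    return onsets
```

Kick/snare/hi-hat discrimination could then, for instance, compare low-, mid- and high-band energy at each detected frame, though the thesis's actual classifier is not reproduced here.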
Abstract:
The Toolbox, combined with MATLAB® and a modern workstation computer, is a useful and convenient environment for investigating machine vision algorithms. For modest image sizes the processing rate can be sufficiently "real-time" to allow for closed-loop control. Focus-of-attention methods such as dynamic windowing (not provided) can be used to increase the processing rate. With input from a firewire or web camera (support provided) and output to a robot (not provided) it would be possible to implement a visual servo system entirely in MATLAB. The Toolbox provides many functions that are useful in machine vision and vision-based control, and is useful for photometry, photogrammetry and colorimetry. It includes over 100 functions spanning operations such as image file reading and writing, acquisition, display, filtering, blob, point and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration and color space conversion.
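For readers working outside MATLAB, the core read → filter → threshold → blob-statistics pipeline that the Toolbox supports can be approximated in a few lines of Python with OpenCV. This is only a rough analogue, not the Toolbox's API, and the input file name is hypothetical.

```python
import cv2

# Read a grayscale image, smooth it, binarise with Otsu's threshold,
# then report the area and centroid of each connected blob.
img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
for i in range(1, n):                                  # label 0 is background
    x, y, w, h, area = stats[i]
    print(f"blob {i}: area={area}, centroid={tuple(centroids[i])}")
```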
Abstract:
The Time magazine ‘Person of the Year’ award is a venerable institution. Established by Time’s founder Henry Luce in 1927 as ‘Man of the Year’, it is an annual award given to ‘a person, couple, group, idea, place, or machine that for better or for worse ... has done the most to influence the events of the year’ (Time 2002, p. 1). In 2010, the award was given to Mark Zuckerberg, the founder and CEO of the social networking site Facebook. There was, however, a strong campaign for the ‘People’s Choice’ award to be given to Julian Assange, the founder and editor-in-chief of Wikileaks, the online whistleblowing site. Earlier in the year Wikileaks had released more than 250,000 US government diplomatic cables through the internet, and the subsequent controversies around the actions of Wikileaks and Assange came to be known worldwide as ‘Cablegate’. The focus of this chapter is not on the implications of ‘Cablegate’ for international diplomacy, which continue to have great significance, but rather on what the emergence of Wikileaks has meant for journalism, and whether it provides insights into the future of journalism. Both Facebook and Wikileaks, as well as social media platforms such as Twitter and YouTube, and independent media practices such as blogging, citizen journalism and crowdsourcing, are manifestations of the rise of social media, or what has also been termed web 2.0. The term ‘web 2.0’ was coined by Tim O’Reilly, and captures the rise of online social media platforms and services that better realise the collaborative potential of digitally networked media. They do this by moving from the relatively static, top-down notions of interactivity that informed early internet development, towards more open and evolutionary models that better harness collective intelligence by enabling users to become the creators and collaborators in the development of online media content (Musser and O’Reilly 2007; Bruns 2008).
Abstract:
Transnational Organised Crime (TOC) has become a focal point for a range of private and public stakeholders. While not a new phenomenon, the rapid expansion of TOC activities and interests, its increasingly complex structures, and its ability to maximise opportunity by employing new technologies at a rate impossible for law enforcement to match all complicate law enforcement’s ability to develop strategies to detect, disrupt, prevent and investigate it. In an age where the role of police has morphed from simplistic response and enforcement activities to one of managing human security risk, it is argued that intelligence can be used to reduce the impact of strategic surprise from evolving criminal threats and environmental change. This review focuses specifically on research that has implications for strategic intelligence and strategy setting in a TOC context. The review findings suggest that the current law enforcement intelligence literature focuses narrowly on the management concept of intelligence-led policing in a tactical, operational setting. As such, the review identifies central issues surrounding strategic intelligence and highlights key questions that future research agendas must address to improve strategic intelligence outcomes, particularly in the fight against TOC.
Abstract:
In an age where the role of police has morphed from simplistic response and enforcement activities to one of managing human security risk, it is argued that intelligence can be used to reduce the impact of strategic surprise from evolving criminal threats and environmental change. This review focuses specifically on research that has implications for strategic intelligence in law enforcement. The review findings highlight the absence of detailed research on law enforcement strategic intelligence. Findings suggest that the current law enforcement intelligence literature focuses narrowly on the management concept of intelligence-led policing in a tactical, operational setting. As a result, there is little theory on how to improve strategic intelligence outcomes, despite the fact that intelligence-led policing is envisaged as a management tool to guide strategic decision making. The review identifies central issues surrounding strategic intelligence and highlights key questions that future research agendas must address to improve strategic intelligence outcomes.
Abstract:
The relationship between intellectual functioning and criminal offending has received considerable attention in the literature. While there remains debate regarding the existence (and strength) of this relationship, there is a wider consensus that individuals with below-average functioning (in particular cognitive impairments) are disproportionately represented within the prison population. This paper focuses on research that has implications for the effective management of lower-functioning individuals within correctional environments, as well as the successful rehabilitation and release of such individuals back into the community. This includes a review of the literature regarding the link between lower intelligence and offending, and the identification of possible factors that either facilitate (or confound) this relationship. The main themes to emerge from this review are that individuals with lower intellectual functioning continue to be disproportionately represented in custodial settings and that there is a need to increase the provision of specialised programs to cater for their needs. Further research is also needed in a range of areas, including: (1) the reasons for this over-representation in custodial settings, (2) the existence and effectiveness of rehabilitation and release programs that cater for lower-IQ offenders, (3) the effectiveness of custodial alternatives for this group (e.g. intensive corrections orders) and (4) what post-custodial release services are needed to reduce the risk of recidivism.
Abstract:
The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called “omics” disciplines of the biological sciences. Such variability is uncovered by multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n ≪ p constraint, and as such require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This problem might be solved using a statistical model-based approach, where the importance of each individual protein is explicit and the proteins are combined into a readily interpretable classification rule, without relying on a black-box approach. Here we apply the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
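A minimal sketch of this kind of comparison, using scikit-learn: PCA- and PLS-based dimension reduction feeding a simple classifier, benchmarked against a linear SVM. The synthetic n ≪ p data, the component counts and the LDA classifier are illustrative assumptions, not the paper's actual setup.

```python
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic n << p data standing in for a proteomic matrix.
X, y = make_classification(n_samples=60, n_features=500,
                           n_informative=20, random_state=0)

class PLSScores(BaseEstimator, TransformerMixin):
    """Supervised dimension reduction: use the PLS X-scores as features."""
    def __init__(self, n_components=5):
        self.n_components = n_components
    def fit(self, X, y):
        self.pls_ = PLSRegression(n_components=self.n_components).fit(X, y)
        return self
    def transform(self, X):
        return self.pls_.transform(X)

models = {
    "PCA+LDA": make_pipeline(StandardScaler(), PCA(n_components=5),
                             LinearDiscriminantAnalysis()),
    "PLS+LDA": make_pipeline(StandardScaler(), PLSScores(n_components=5),
                             LinearDiscriminantAnalysis()),
    "SVM":     make_pipeline(StandardScaler(), SVC(kernel="linear")),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```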
Abstract:
Material for this paper comes from a report commissioned by the Department of Family Services, Aboriginal and Islander Affairs. The report is the result of a multi-strategy research project designed to assess the impact of gaming machines on the fundraising capacity of charitable and community organisations in Queensland. The study was conducted during the 1993 calendar year. The first Queensland gaming machine was commissioned at 11.30 am on 11 February 1992, in Brisbane at the Kedron Wavell Services Club. Eighteen more clubs followed that week. Six months later there were gaming machines in 335 clubs and 250 hotels and taverns, representing a state-wide total of 7,974 machines in operation. The 10,000th gaming machine was commissioned on 18 March 1993, and the 1,000th operational gaming machine site was opened on 18 February 1994.
Abstract:
Improving energy efficiency has become increasingly important in data centers in recent years as a way to curb their rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling systems. Many efforts have been made to make data centers more energy efficient. One of them is to minimise the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristic solutions such as the First Fit and Best Fit algorithms have often been used and generally give good results; however, their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which can start from any feasible placement and improve on it. This is the first published attempt to use SA to solve the VM placement problem so as to optimise the power consumption. Experimental results show that this SA algorithm can generate better results, saving up to 25 percent more energy than First Fit Decreasing in an acceptable time frame.
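A minimal sketch of the approach described above: simulated annealing over VM-to-host assignments, starting from a feasible placement (e.g. one produced by First Fit Decreasing) and perturbing it by moving a random VM to a random host. The linear server power model and all parameter values are illustrative assumptions, not the paper's.

```python
import math
import random

def total_power(assign, demand, n_hosts, idle=120.0, peak=250.0):
    """Linear power model: an active host draws idle power plus a
    load-proportional share; an empty host draws nothing."""
    loads = [0.0] * n_hosts
    for vm, host in enumerate(assign):
        loads[host] += demand[vm]
    if any(l > 1.0 for l in loads):            # capacity violated
        return float("inf")
    return sum(idle + (peak - idle) * l for l in loads if l > 0)

def anneal(assign, demand, n_hosts, T=50.0, cooling=0.999, steps=20000):
    """Improve a feasible placement by randomised local moves, accepting
    worse placements with the usual Boltzmann probability."""
    cur, cur_e = assign[:], total_power(assign, demand, n_hosts)
    best, best_e = cur[:], cur_e
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(len(cand))] = random.randrange(n_hosts)
        e = total_power(cand, demand, n_hosts)
        if e <= cur_e or random.random() < math.exp((cur_e - e) / T):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand[:], e
        T = max(T * cooling, 1e-6)             # geometric cooling, floored
    return best, best_e
```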
Abstract:
Server consolidation using virtualization technology has become an important way to improve the energy efficiency of data centers, and virtual machine placement is the key step in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center, and ignore the energy consumed by the data center's communication network. That network energy consumption is not trivial, and should therefore be taken into account in virtual machine placement in order to make the data center more energy-efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the servers and the communication network of the data center. Experimental results show that the genetic algorithm performs well when tackling test problems of different kinds, and scales up well as the problem size increases.
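A compact sketch of a genetic algorithm for this joint objective: a chromosome is a VM-to-host assignment vector, and fitness sums a linear server power model with a network term that weights each VM pair's traffic by the hop distance between their hosts. Both cost models, and the traffic and distance matrices, are illustrative assumptions rather than the paper's formulation.

```python
import random

def energy(assign, demand, traffic, dist, idle=120.0, peak=250.0):
    """Server power plus traffic-weighted network distance."""
    loads = [0.0] * len(dist)                  # one entry per host
    for vm, h in enumerate(assign):
        loads[h] += demand[vm]
    if any(l > 1.0 for l in loads):
        return float("inf")                    # infeasible placement
    server = sum(idle + (peak - idle) * l for l in loads if l > 0)
    network = sum(traffic[i][j] * dist[assign[i]][assign[j]]
                  for i in range(len(assign))
                  for j in range(i + 1, len(assign)))
    return server + network

def ga_placement(n_vms, n_hosts, demand, traffic, dist,
                 pop_size=50, gens=200, p_mut=0.2):
    fit = lambda a: energy(a, demand, traffic, dist)
    pop = [[random.randrange(n_hosts) for _ in range(n_vms)]
           for _ in range(pop_size)]
    def tournament():
        a, b = random.sample(pop, 2)
        return a if fit(a) <= fit(b) else b
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_vms)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if random.random() < p_mut:        # point mutation
                child[random.randrange(n_vms)] = random.randrange(n_hosts)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fit)
```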
Abstract:
The security of power transfer across a given transmission link is typically assessed at steady state. This paper develops tools to assess machine angle stability as affected by a combination of faults and uncertainty in wind power, using probability analysis. The paper elaborates on the development of the theoretical assessment tool and demonstrates its efficacy on a single-machine infinite-bus system.
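One simple way to combine angle dynamics with wind power uncertainty is Monte Carlo simulation of the classical single-machine infinite-bus (SMIB) swing equation through a fault, estimating the probability that the rotor angle stays bounded. The sketch below does exactly that; the machine parameters, fault severity and the assumed wind power distribution are all hypothetical, and this is not the paper's assessment tool.

```python
import math
import random

def smib_stable(pm, t_fault=0.1, t_end=3.0, dt=0.002, M=0.1, D=0.05,
                pmax_pre=2.0, pmax_fault=0.3, pmax_post=1.8):
    """Integrate M*dw/dt = Pm - Pmax*sin(delta) - D*w through a fault
    and report whether the rotor angle stays bounded."""
    delta = math.asin(pm / pmax_pre)        # pre-fault equilibrium angle
    omega, t = 0.0, 0.0
    while t < t_end:
        pmax = pmax_fault if t < t_fault else pmax_post
        omega += dt * (pm - pmax * math.sin(delta) - D * omega) / M
        delta += dt * omega
        if abs(delta) > math.pi:            # crude loss-of-synchronism test
            return False
        t += dt
    return True

# Monte Carlo over an assumed wind-driven mechanical power distribution.
random.seed(0)
trials = [smib_stable(pm=min(max(random.gauss(0.8, 0.15), 0.1), 1.5))
          for _ in range(500)]
print("estimated stability probability:", sum(trials) / len(trials))
```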
Abstract:
This proposal combines ethnographic techniques and discourse studies to investigate a collective of people engaged with audiovisual production who collaborate in Curta Favela’s workshops in Rio de Janeiro’s favelas. ‘Favela’ is often translated simply as ‘slum’ or ‘shantytown’, but these terms connote negative characteristics such as shortage, poverty and deprivation, and end up stigmatizing these low-income suburbs. Curta Favela (Favela Shorts) is an independent project whose participants use photography and participatory audiovisual production as tools for social change and consciousness-raising. As cameras are not affordable for favela dwellers, Curta Favela’s volunteers teach residents how to use their mobile phones and compact cameras to take pictures and make movies, and afterwards how to edit the footage using free video-editing software and publish it on the Internet. To record audio, they use mp3 players or mobile phones. The main aim of this study is not only to shed light on how this project operates, but also to highlight how collective intelligence can be used as a way of fighting against the lack of basic resources.