978 results for computer art


Relevance:

20.00%

Publisher:

Abstract:

- Encompasses the whole BPM lifecycle, including process identification, modelling, analysis, redesign, automation and monitoring
- Class-tested textbook complemented with additional teaching material on the accompanying website
- Covers the relevant conceptual background, industrial standards and actionable skills

Business Process Management (BPM) is the art and science of how work should be performed in an organization in order to ensure consistent outputs and to take advantage of improvement opportunities, e.g. reducing costs, execution times or error rates. Importantly, BPM is not about improving the way individual activities are performed, but rather about managing entire chains of events, activities and decisions that ultimately produce added value for an organization and its customers. This textbook encompasses the entire BPM lifecycle, from process identification to process monitoring, covering along the way process modelling, analysis, redesign and automation. Concepts, methods and tools from business management, computer science and industrial engineering are blended into one comprehensive and interdisciplinary approach. The presentation is illustrated using the BPMN industry standard, defined by the Object Management Group and widely endorsed by practitioners and vendors worldwide. In addition to explaining the relevant conceptual background, the book provides dozens of examples, more than 100 hands-on exercises – many with solutions – and numerous suggestions for further reading. The textbook is the result of many years of combined teaching experience of the authors, at the undergraduate and graduate levels as well as in professional training. Students and professionals from both business management and computer science will benefit from the step-by-step style of the textbook and its focus on fundamental concepts and proven methods. Lecturers will appreciate the class-tested format and the additional teaching material available on the accompanying website, fundamentals-of-bpm.org.

Relevance:

20.00%

Publisher:

Abstract:

In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, the Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation-based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
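The patch-encode-pool-concatenate pipeline described above can be sketched in a few lines. The Python below is a minimal illustration under stated assumptions (a random dictionary, OMP encoding via scikit-learn, and an arbitrary 4x4 region grid); it is not the authors' implementation and omits dictionary learning from real face patches.

```python
# Minimal sketch of a local sparse-representation face descriptor.
import numpy as np
from sklearn.decomposition import SparseCoder

def extract_patches(region, patch=8, step=4):
    """Collect overlapping patch vectors from a 2-D grayscale region."""
    h, w = region.shape
    out = []
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            p = region[y:y+patch, x:x+patch].ravel().astype(float)
            p -= p.mean()                      # remove local DC component
            out.append(p)
    return np.array(out)

def region_descriptor(region, coder):
    codes = coder.transform(extract_patches(region))
    return np.abs(codes).mean(axis=0)          # average pooling discards spatial layout

def face_descriptor(face, coder, grid=4):
    """Split the face into grid x grid regions and concatenate pooled codes."""
    h, w = face.shape
    rh, rw = h // grid, w // grid
    parts = [region_descriptor(face[i*rh:(i+1)*rh, j*rw:(j+1)*rw], coder)
             for i in range(grid) for j in range(grid)]
    return np.concatenate(parts)

rng = np.random.default_rng(0)
D = rng.normal(size=(128, 64))                 # toy dictionary: 128 atoms for 8x8 patches
D /= np.linalg.norm(D, axis=1, keepdims=True)
coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                    transform_n_nonzero_coefs=5)

face_a, face_b = rng.random((64, 64)), rng.random((64, 64))
da, db = face_descriptor(face_a, coder), face_descriptor(face_b, coder)
similarity = da @ db / (np.linalg.norm(da) * np.linalg.norm(db))
print(f"cosine similarity: {similarity:.3f}")  # threshold on this for verification
```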

Relevance:

20.00%

Publisher:

Abstract:

Artists: Donna Hewitt, Julian Knowles, Wade Marynowsky, Tim Bruniges, Avril Huddy. Macrophonics presents new Australian work emerging from the leading edge of performance interface research. The program addresses the emerging dialogue between traditional and emerging digital media, as well as the dialogue across a broad range of musical traditions. Due to recent technological developments, we have reached a point artistically where the relationships between media and genres are being completely re-evaluated. This program presents a cross-section of responses to this condition. Each of the works in the program foregrounds an approach to performance that integrates sensors and novel performance control devices, and/or examines how machines can be made musical in performance. Containing works for voice, electronics, video, movement and sensor-based gestural controllers, it critically surveys the interface between humans and machines in performance. From sensor-based microphones and guitars, through performance a/v, to post-rock dronescapes and experimental electronica, Macrophonics provides a broad and engaging survey of new performance approaches in mediatised environments.

Relevance:

20.00%

Publisher:

Abstract:

This paper reports on the implementation of a non-invasive, electroencephalography-based brain-computer interface (BCI) to control functions of a car in a driving simulator. The system comprises a Cleveland Medical Devices BioRadio 150 physiological signal recorder, a MATLAB-based BCI and an OKTAL SCANeR advanced driving experience simulator. The system uses steady-state visual-evoked potentials (SSVEPs) as the BCI paradigm, elicited by frequency-modulated high-power LEDs and recorded with the electrode placement of Oz-Fz with Fz as ground. A three-class online brain-computer interface was developed and interfaced with an advanced driving simulator to control functions of the car, including acceleration and steering. The findings are mainly exploratory, but they indicate both the feasibility and the challenges of brain-controlled on-road cars, and they provide a safe, simulated BCI driving environment as a foundation for research into overcoming those challenges.
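A common way to realise a three-class SSVEP classifier like the one described is to compare spectral power at the candidate stimulus frequencies. The Python sketch below illustrates that idea only; the sampling rate, flicker frequencies and command mapping are illustrative assumptions, not values taken from the paper.

```python
# Sketch of three-class SSVEP detection by spectral power comparison.
import numpy as np

FS = 256                                   # assumed sampling rate (Hz)
STIM_FREQS = {13.0: "accelerate",          # illustrative flicker-to-command mapping
              17.0: "steer left",
              21.0: "steer right"}

def band_power(signal, freq, fs, bw=0.5):
    """Power near a stimulus frequency (fundamental + 2nd harmonic)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = 0.0
    for f in (freq, 2 * freq):
        mask = np.abs(freqs - f) <= bw
        power += spectrum[mask].sum()
    return power

def classify_epoch(eeg_epoch):
    """Return the command whose stimulus frequency carries the most power."""
    scores = {cmd: band_power(eeg_epoch, f, FS) for f, cmd in STIM_FREQS.items()}
    return max(scores, key=scores.get)

# Synthetic 2-second Oz epoch dominated by the 17 Hz flicker plus noise.
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(1)
epoch = np.sin(2 * np.pi * 17.0 * t) + 0.8 * rng.normal(size=t.size)
print(classify_epoch(epoch))               # expected: "steer left"
```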

Relevance:

20.00%

Publisher:

Abstract:

Automatic pain monitoring has the potential to greatly improve patient diagnosis and outcomes by providing a continuous, objective measure. One of the most promising approaches is to detect facial expressions automatically. However, current approaches have failed due to their inability to: 1) integrate rigid and non-rigid head motion into a single feature representation, and 2) incorporate salient temporal patterns into the classification stage. In this paper, we tackle the first problem by developing a “histogram of facial action units” representation using Active Appearance Model (AAM) face features, and then utilize a Hidden Conditional Random Field (HCRF) to overcome the second issue. We show that both of these methods improve performance on the task of sequence-level pain detection compared to current state-of-the-art methods on the UNBC-McMaster Shoulder Pain Archive.
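As an illustration of the representation side only, the sketch below builds a sequence-level histogram feature in Python. It is a loose stand-in, not the paper's pipeline: per-frame AAM parameters are faked with random vectors and quantised into k-means codewords rather than detected action units, and the HCRF classification stage is omitted entirely.

```python
# Sketch of a sequence-level "histogram of codewords" style feature.
import numpy as np
from sklearn.cluster import KMeans

def sequence_histogram(frame_features, kmeans):
    """Quantise per-frame features into codewords and histogram them."""
    words = kmeans.predict(frame_features)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()                   # normalise for sequence length

rng = np.random.default_rng(0)
training_frames = rng.normal(size=(500, 20))   # stand-in for AAM shape/appearance params
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(training_frames)

pain_seq = rng.normal(loc=0.5, size=(80, 20))  # toy video sequences of different lengths
neutral_seq = rng.normal(loc=0.0, size=(60, 20))
h_pain = sequence_histogram(pain_seq, kmeans)
h_neutral = sequence_histogram(neutral_seq, kmeans)
print(np.round(h_pain, 2))                     # fixed-length feature for a classifier
```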

Relevance:

20.00%

Publisher:

Abstract:

Exploration of how Australia and Asia are intertwined in everyday culture, and in the imagined worlds of Australians of all backgrounds. Investigates Asian cultural production of art, literature, media and performance that embodies Asian social and cultural experiences. Includes endnotes, bibliography and index. Ang and Chalmers work in the School of Cultural Studies at the University of Western Sydney; Law and Thomas are Australian Research Council Postdoctoral Fellows at the Australian National University and the Research Centre in Inter-communal Studies respectively.

Relevance:

20.00%

Publisher:

Abstract:

To recognize faces in video, face appearances have been widely modeled as piecewise-local linear models that linearly approximate the smooth yet non-linear, low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions that are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately: instead of assuming a single global within-class covariance, it learns a different within-class covariance specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets show the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
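To make the matching stage concrete, here is a minimal Python sketch of point-to-model scoring. It is not the paper's method: PLDA training is omitted entirely, and the "local models" are simply Gaussians fitted to contiguous chunks of a toy feature trajectory; only the point-to-model fusion idea survives.

```python
# Sketch: matching a probe video against per-video local Gaussian models.
import numpy as np
from scipy.stats import multivariate_normal

def fit_local_models(video_frames, n_models=3):
    """Crude stand-in for learned local models: split the trajectory into
    contiguous chunks and fit a Gaussian to each (PLDA training omitted)."""
    models = []
    for chunk in np.array_split(video_frames, n_models):
        mean = chunk.mean(axis=0)
        cov = np.cov(chunk, rowvar=False) + 1e-3 * np.eye(chunk.shape[1])
        models.append(multivariate_normal(mean=mean, cov=cov))
    return models

def score(probe_frames, models):
    """Point-to-model fusion: each probe frame votes with its best local model."""
    per_frame = [max(m.logpdf(f) for m in models) for f in probe_frames]
    return np.mean(per_frame)

rng = np.random.default_rng(0)
gallery_video = rng.normal(size=(90, 5))       # toy low-dimensional face features
models = fit_local_models(gallery_video)
probe = rng.normal(size=(30, 5))
print(f"probe score: {score(probe, models):.2f}")  # rank gallery videos by this
```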

Relevance:

20.00%

Publisher:

Abstract:

The Australian Business Assessment of Computer User Security (ABACUS) survey is a nationwide assessment of the prevalence and nature of computer security incidents experienced by Australian businesses. This report presents the findings of the survey, which businesses in Australia may use to assess the effectiveness of their information technology security measures.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The measurement of broadband ultrasonic attenuation (BUA) in cancellous bone for the assessment of osteoporosis follows a parabolic-type dependence on bone volume fraction, with minima corresponding to entirely bone and entirely marrow. Langton has recently proposed that the primary BUA mechanism may be significant phase interference due to variations in propagation transit time through the test sample, as detected over the phase-sensitive surface of the receive ultrasound transducer. This fundamentally simple concept assumes that the propagation of ultrasound through a complex solid:liquid composite sample such as cancellous bone may be described by an array of parallel ‘sonic rays’. The transit time of each ray is defined by the proportion of bone and marrow propagated through, being a minimum (tmin) solely through bone and a maximum (tmax) solely through marrow. A Transit Time Spectrum (TTS), ranging from tmin to tmax, may be defined, describing the proportion of sonic rays having a particular transit time and hence the lateral inhomogeneity of transit time over the surface of the receive ultrasound transducer. Phase interference may result from the interaction of sonic rays of differing transit times. The aim of this study was to test the hypothesis that phase interference depends on the lateral inhomogeneity of transit time, by comparing experimental measurements and computer-simulation predictions of ultrasound propagation through a range of relatively simple solid:liquid models exhibiting a range of lateral inhomogeneities.

Methods: A range of test models was manufactured using acrylic and water as surrogates for bone and marrow respectively. The models varied in thickness in one dimension normal to the direction of propagation, hence exhibiting a range of transit-time lateral inhomogeneities, from minimal (a single transit time) to maximal (a wedge; the limiting case in which each sonic ray has a unique transit time). For the experimental component of the study, two unfocused 1 MHz, ¾-inch diameter broadband transducers were utilized in transmission mode, and ultrasound signals were recorded for each of the models. The computer simulation was performed in MATLAB, where the transit time and relative amplitude of each sonic ray were calculated. The transit time for each sonic ray was defined as the sum of the transit times through the acrylic and water components. The relative amplitude accounted for the reception area of each sonic ray along with absorption in the acrylic. To replicate phase-sensitive detection, all sonic rays were summed and the output signal plotted in comparison with the experimentally derived output signal.

Results: Qualitative and quantitative comparison of the experimental and computer-simulation results shows an extremely high degree of agreement, 94.2% to 99.0%, between the two approaches, supporting the concept that propagation of an ultrasound wave, for the models considered, may be approximated by a parallel sonic-ray model in which the transit time of each ray is defined by the proportion of ‘bone’ and ‘marrow’.

Conclusions: This combined experimental and computer-simulation study has demonstrated that lateral inhomogeneity of transit time creates significant potential for phase interference when a phase-sensitive ultrasound receive transducer is used, as in most commercial ultrasound bone-analysis devices.
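The parallel sonic-ray model is straightforward to prototype. Below is a minimal Python sketch (the original simulation was in MATLAB; the sampling rate, sound speeds, path length and pulse shape here are illustrative assumptions): each ray is delayed by its own acrylic/water transit time, and phase-sensitive detection is modelled by summing the rays, so a wedge profile shows clear amplitude loss from phase cancellation compared with a flat plate.

```python
# Sketch: phase-sensitive summation of parallel "sonic rays".
import numpy as np

FS = 100e6                                  # 100 MHz sampling, illustrative
C_ACRYLIC, C_WATER = 2750.0, 1480.0         # nominal sound speeds (m/s)
TOTAL = 0.02                                # 20 mm propagation path

def pulse(t, f0=1e6, sigma=1e-6):
    """Gaussian-windowed 1 MHz tone burst."""
    return np.exp(-0.5 * (t / sigma) ** 2) * np.sin(2 * np.pi * f0 * t)

def received(acrylic_thicknesses, n=4096):
    """Sum equally weighted rays, each delayed by its own transit time."""
    t = np.arange(n) / FS
    out = np.zeros(n)
    for d in acrylic_thicknesses:
        delay = d / C_ACRYLIC + (TOTAL - d) / C_WATER
        out += pulse(t - delay)
    return out / len(acrylic_thicknesses)

flat = received(np.full(100, 0.01))             # single transit time: no interference
wedge = received(np.linspace(0.0, TOTAL, 100))  # maximal lateral inhomogeneity
print(f"peak flat:  {np.abs(flat).max():.3f}")
print(f"peak wedge: {np.abs(wedge).max():.3f}")  # attenuated by phase cancellation
```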

Relevance:

20.00%

Publisher:

Abstract:

“Tranquility Falls” depicts a computer-generated waterfall set to sentimental stock music. As the water gushes, text borrowed from a popular talk show host’s self-help advice fades in and out graphically down the screen. As the animated phrases increase in tempo, the sounds of the waterfall begin to overwhelm the tender music. By creating overtly fabricated sensations of inspiration and awe, the work questions how and where we experience contemplation, wonderment and guidance in a contemporary context. “Tranquility Falls” contributes to studies in the field of contemporary art. It is particularly concerned with representations of spirituality and nature, which have been important themes in art practice for some time. For example, artists such as Olafur Eliasson and James Turrell have created artificial insertions in nature in order to question contemporary experiences of the natural environment, while others such as Nam June Paik have more directly addressed the changing relationship between spirituality and popular culture. Using a practice-led research methodology, “Tranquility Falls” extends these creative inquiries. By presenting an overtly synthetic but strangely evocative pun on a ‘fountain of knowledge’, it questions whether we are informed less by traditional engagements with organised religion and natural wonder, and instead increasingly reliant on the mechanisms of popular culture for moments of insight and reflection. “Tranquility Falls” was exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.

Relevance:

20.00%

Publisher:

Abstract:

With the explosive growth of resources available through the Internet, information mismatching and overload have become a severe concern to users. Web users are commonly overwhelmed by huge volumes of information and are faced with the challenge of finding the most relevant and reliable information in a timely manner. Personalised information gathering and recommender systems represent state-of-the-art tools for efficient selection of the most relevant and reliable information resources, and the interest in such systems has increased dramatically over the last few years. However, web personalization has not yet been well exploited; difficulties arise in selecting resources through recommender systems from both a technological and a social perspective. Aiming to promote high-quality research to overcome these challenges, this paper provides a comprehensive survey of recent work and achievements in the areas of personalised web information gathering and recommender systems. The survey covers concept-based techniques exploited in personalised information gathering and recommender systems.

Relevance:

20.00%

Publisher:

Abstract:

The question of the relationship between culture and power continues to exercise researchers. In this paper I argue that it is useful to consider the differences between ‘art’ and ‘entertainment’ as systems of culture, each involving a distinct set of power relationships between producers and audiences. Art wants to change audiences; entertainment wants to be changed by audiences. From these different starting points a series of differences unfolds in the power possessed by producers and audiences. Artists pride themselves on not involving the audience in the process of making art. By contrast, entertainment wants audiences to contribute to the making of texts. As to who controls the range of cultural forms available, it seems that entertainment consumers – unlike art consumers – are ill-disciplined: historical evidence demonstrates that if legal corporate providers do not offer the kinds of entertainment they want, they will turn to illegal sources. The different ways in which ‘art’ and ‘entertainment’ function as cultural systems suggest that we must rethink our positions on ‘media power’.

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates advanced channel compensation techniques for improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). We show that, by extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA-based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, compared to the SN-LDA-based CSS i-vector system. Further, score-level fusion techniques are analysed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach (SN-WLDA) for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
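For readers unfamiliar with the scoring back-end, the following Python sketch shows plain cosine similarity scoring of projected i-vectors. It is only schematic: ordinary LDA stands in for the paper's WLDA/SN-WLDA projections, and the "i-vectors" are synthetic toy data.

```python
# Sketch: cosine similarity scoring (CSS) of channel-compensated i-vectors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy development set: 40 speakers, 10 sessions each, 100-dim "i-vectors".
speakers = np.repeat(np.arange(40), 10)
dev = rng.normal(size=(400, 100)) + speakers[:, None] * 0.05

# Plain LDA as a stand-in for the paper's WLDA / SN-WLDA projections.
lda = LinearDiscriminantAnalysis(n_components=39).fit(dev, speakers)

def css(enrol_ivec, test_ivec):
    """Cosine similarity between projected, length-normalised i-vectors."""
    a = lda.transform(enrol_ivec[None, :])[0]
    b = lda.transform(test_ivec[None, :])[0]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

enrol, test = dev[0], dev[1]                 # two sessions of the same speaker
print(f"CSS score: {css(enrol, test):.3f}")  # accept/reject by thresholding
```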

Relevance:

20.00%

Publisher:

Abstract:

Previously, expected satiety (ES) has been measured using software and two-dimensional pictures presented on a computer screen. In this context, ES is an excellent predictor of self-selected portions, when quantified using similar images and similar software. In the present study we sought to establish the veracity of ES as a predictor of behaviours associated with real foods. Participants (N = 30) used computer software to assess their ES and ideal portion of three familiar foods. A real bowl of one food (pasta and sauce) was then presented and participants self-selected an ideal portion size. They then consumed the portion ad libitum. Additional measures of appetite, expected and actual liking, novelty, and reward, were also taken. Importantly, our screen-based measures of expected satiety and ideal portion size were both significantly related to intake (p < .05). By contrast, measures of liking were relatively poor predictors (p > .05). In addition, consistent with previous studies, the majority (90%) of participants engaged in plate cleaning. Of these, 29.6% consumed more when prompted by the experimenter. Together, these findings further validate the use of screen-based measures to explore determinants of portion-size selection and energy intake in humans.

Relevance:

20.00%

Publisher:

Abstract:

This practice-led doctorate involved the development of a collection – a bricolage – of interwoven fragments of literary texts and visual imagery exploring questions of speculative fiction, urban space and embodiment. As a supplement to the creative work, I also developed an exegesis, using a combination of theoretical and contextual analysis and critical reflections on my creative process and outputs. An emphasis on issues of creative practice, and a sustained investigation into an aesthetics of fragmentation and assemblage, is organised around the concept and methodology of bricolage, the everyday art of ‘making do’. The exegesis also addresses my interest in the city and urban forms of subjectivity and embodiment through the use of a range of theorists, including Michel de Certeau and Elizabeth Grosz.