744 results for Music, Computation, Interactive, Visual Art
Abstract:
Background Acetabular fractures are still among the most challenging fractures to treat because of their complex anatomy, the difficult surgical access to the fracture site and the relatively low incidence of these lesions. Proper evaluation and surgical planning are necessary to achieve anatomic reduction of the articular surface and stable fixation of the pelvic ring. The goal of this study was to test the feasibility of preoperative surgical planning in acetabular fractures using a new prototype planning tool based on an interactive virtual reality-style environment. Methods 7 patients (5 male and 2 female; median age 53 y (25 to 92 y)) with an acetabular fracture were prospectively included. Exclusion criteria were simple wall fractures and cases with anticipated surgical dislocation of the femoral head for joint debridement and accurate fracture reduction. According to the Letournel classification, 4 cases had two-column fractures, 2 cases had anterior column fractures and 1 case had a T-shaped fracture including a posterior wall fracture. The workflow included the following steps: (1) formation of a patient-specific bone model from preoperative computed tomography scans, (2) interactive virtual fracture reduction with visuo-haptic feedback, (3) virtual fracture fixation using common osteosynthesis implants and (4) measurement of implant position relative to landmarks. The surgeon manually contoured osteosynthesis plates preoperatively according to the virtually defined deformation. Screenshots including all measurements were available in the OR. The tool was validated by comparing the preoperative planning and the postoperative results through 3D superimposition. Results Preoperative planning was feasible in all cases. In 6 of 7 cases, superimposition of the preoperative planning and the postoperative follow-up CT showed a good to excellent correlation. In one case, part of the procedure had to be changed because fracture reduction proved impossible from an ilioinguinal approach.
In 3 cases with osteopenic bone, patient-specific pre-bent fixation plates were helpful in guiding fracture reduction. Additionally, anatomical-landmark-based measurements were helpful for intraoperative navigation. Conclusion The presented prototype planning tool for pelvic surgery was successfully integrated into a clinical workflow to improve patient-specific preoperative planning, giving visual and haptic information about the injury and allowing patient-specific adaptation of osteosynthesis implants to the virtually reduced pelvis.
Abstract:
Short, unfamiliar melodies were presented to young and older adults and to Alzheimer's disease (AD) patients in an implicit and an explicit memory task. The explicit task was yes–no recognition, and the implicit task was pleasantness ratings, in which memory was shown by higher ratings for old versus new melodies (the mere exposure effect). Young adults showed retention of the melodies in both tasks. Older adults showed little explicit memory but did show the mere exposure effect. The AD patients showed neither. The authors considered and rejected several artifactual reasons for this null effect in the context of the many studies that have shown implicit memory among AD patients. As the previous studies have almost always used the visual modality for presentation, they speculate that auditory presentation, especially of nonverbal material, may be compromised in AD because of neural degeneration in auditory areas in the temporal lobes.
Abstract:
Mr. Michl posed the question of how the institutional framework that the former communist regime set up around art production contributed to the success of Czech applied arts. In his theoretical review of the question he discussed the reasons for the lack of success of socialist industrial design as opposed to what he terms pre-industrial arts (such as art glass), and also for the current lack of interest in the art institutions of the past regime. His findings in the second, historical section of his work were based largely on interviews with artists and other insiders, as an initial attempt to use questionnaires was unsuccessful. His original assumption that the institutional framework was imposed on artists against their will in fact proved mistaken, as it turned out to have been proposed by the artists themselves. The basic blueprint for communist art institutions was the Memorandum document published on behalf of Czechoslovak visual artists in March 1947, i.e. before the communist coup of February 1948. Thus, while the communist state provided a beneficial institutional framework for artists' work, it was the artists themselves who designed this framework. Mr. Michl concludes that the text of the Memorandum appealed to the general left-wing and anti-market sentiments of the immediate post-war period, and that through this appeal, and by later working through the administrative channels of the new state, the artists succeeded in gaining all of their demands over the next 15 years. The one exception was artistic freedom, though they did come to enjoy it, if only by default and for a short time, during the ideological thaw of the 1960s. Mr. Michl also examined the art-related legislative framework in detail and looked at the main features of key art institutions in the field, such as the Czech Fund for Visual Arts and the 1960s art export enterprise Art Centrum, which opened the doors to foreign markets for artists.
Abstract:
Recovering the architecture is the first step towards reengineering a software system. Many reverse engineering tools use top-down exploration as a way of providing a visual and interactive process for architecture recovery. During the exploration process, the user navigates through various views on the system by choosing from several exploration operations. Although some sequences of these operations lead to views which, from the architectural point of view, are more relevant than others, current tools do not provide a way of predicting which exploration paths are worth taking and which are not. In this article we propose a set of package patterns which are used for augmenting the exploration process with information about the worthiness of the various exploration paths. The patterns are defined based on the internal package structure and on the relationships between the package and the other packages in the system. To validate our approach, we verify the relevance of the proposed patterns for real-world systems by analyzing their frequency of occurrence in six open-source software projects.
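The abstract describes patterns defined over a package's internal structure and its relationships to other packages. A minimal sketch of that idea follows; the pattern names, metrics, and thresholds below are illustrative assumptions, not the ones proposed in the article.

```python
# Hypothetical sketch: classify a package by simple structural metrics.
# num_classes = internal size; fan_in / fan_out = how many other packages
# depend on it / it depends on. All names and thresholds are assumptions.

def classify_package(num_classes, fan_in, fan_out):
    if fan_in > fan_out and fan_in >= 5:
        return "provider"    # many packages depend on it: worth exploring
    if fan_out > fan_in and fan_out >= 5:
        return "consumer"    # depends on many packages
    if num_classes <= 2 and fan_in + fan_out <= 1:
        return "isolated"    # likely a dead end for exploration
    return "balanced"
```

A tool could compute such labels for every package up front and surface them in the exploration view, hinting at which paths are likely to be architecturally relevant.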
Abstract:
From Bush’s September 20, 2001 “War on Terror” speech to Congress to President-Elect Barack Obama’s acceptance speech on November 4, 2008, the U.S. Army produced visual recruitment material that addressed the concerns of falling enlistment numbers—due to the prolonged and difficult war in Iraq—with quickly-evolving and compelling rhetorical appeals: from the introduction of an “Army of One” (2001) to “Army Strong” (2006); from messages focused on education and individual identity to high-energy adventure and simulated combat scenarios, distributed through everything from printed posters and music videos to first-person tactical-shooter video games. These highly polished, professional visual appeals introduced to the American public during a time of an unpopular war fought by volunteers provide rich subject matter for research and analysis. This dissertation takes a multidisciplinary approach to the visual media utilized as part of the Army’s recruitment efforts during the War on Terror, focusing on American myths—as defined by Barthes—and how these myths are both revealed and reinforced through design across media platforms. Placing each selection in its historical context, this dissertation analyzes how printed materials changed as the War on Terror continued. It examines the television ad that introduced “Army Strong” to the American public, considering how the combination of moving image, text, and music structure the message and the way we receive it. This dissertation also analyzes the video game America’s Army, focusing on how the interaction of the human player and the computer-generated player combine to enhance the persuasive qualities of the recruitment message. Each chapter discusses how the design of the particular medium facilitates engagement/interactivity of the viewer. 
The conclusion considers what recruitment material produced during this time period suggests about the persuasive strategies of different media and how they create distinct relationships with their spectators. It also addresses how theoretical frameworks and critical concepts used by a variety of disciplines can be combined to analyze recruitment media through a Selber-inspired three-literacy framework (functional, critical, rhetorical), and how this framework can contribute to the multimodal classroom by allowing instructors and students to carry out a comparative analysis of multiple forms of visual media with similar content.
Abstract:
In recent years, the well-known ray tracing algorithm has gained new popularity with the introduction of interactive ray tracing methods. The high modularity and the ability to produce highly realistic images make ray tracing an attractive alternative to raster graphics hardware. Interactive ray tracing has also proved its potential in the field of Mixed Reality rendering and provides novel methods for seamless integration of real and virtual content. Actor insertion methods, a subdomain of Mixed Reality and closely related to virtual television studio techniques, can use ray tracing to achieve high output quality in conjunction with appropriate visual cues like shadows and reflections at interactive frame rates. In this paper, we show how interactive ray tracing techniques can provide new ways of implementing virtual studio applications.
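Since ray tracing is the core primitive the abstract builds on, a minimal sketch of its innermost operation may help readers unfamiliar with it. The function below is a generic ray-sphere intersection test, not the paper's implementation; all names are illustrative.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Ray: origin + t * direction (direction assumed normalized).
    # Solves the quadratic |origin + t*d - center|^2 = radius^2 and
    # returns the nearest positive hit distance t, or None on a miss.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None
```

An interactive ray tracer evaluates tests like this per pixel; shadows and reflections, the visual cues the abstract mentions, come from recursively casting secondary rays from the hit points.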
Abstract:
Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores the idea to transfer this mechanism to the context of video stabilization for a handheld video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach is different from existing automatic solutions that stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search either virtual or real 3D target points, which back-project close to the center of the image for as long a period of time as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly in the center of the image, which, in case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques in order to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a hand-held camera in natural scenes.
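The core of the fixation idea, projecting a 3D target point per frame and shifting the image so that projection lands at the center, can be sketched in a few lines. This is a simplified pinhole model with an axis-aligned camera, not the paper's full structure-from-motion pipeline; all names are illustrative.

```python
# Toy sketch: pinhole projection of a 3D target point, followed by the
# per-frame image translation that keeps its projection at the center.
# Assumption: the camera looks down +Z from cam_pos with no rotation.

def project(point, cam_pos, focal):
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    return (focal * x / z, focal * y / z)

def stabilizing_shift(point, cam_pos, focal, center=(0.0, 0.0)):
    # translation that moves the target's projection onto the center
    u, v = project(point, cam_pos, focal)
    return (center[0] - u, center[1] - v)
```

In the full method the camera additionally rotates, so each frame needs a projective warp rather than a pure translation, but the principle of re-centering the back-projected target point is the same.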
Abstract:
This paper reports on a Virtual Reality theater experiment named Il était Xn fois, conducted by artists and computer scientists working in cognitive science. It offered an opportunity for the exchange of knowledge and ideas between these groups, highlighting the benefits of this kind of collaboration. Section 1 explains the link between enaction in cognitive science and virtual reality, and specifically the need to develop an autonomous entity which enhances presence in an artificial world. Section 2 argues that enactive artificial intelligence is able to produce such autonomy. This was demonstrated by the theatrical experiment, "Il était Xn fois" (in English: Once upon Xn time), explained in section 3. Its first public performance was in 2009, by the company Dérézo. The last section offers the view that enaction can form a common ground between the artistic and computer science areas.
Abstract:
Modern e-learning systems represent a special type of web information systems. By definition, information systems are specialized computerized systems used to perform data operations by multiple users simultaneously. Each active user consumes an amount of hardware resources. A shortage of hardware resources can be caused by a growing number of simultaneous users. Such a situation can result in an overall malfunctioning or slowed-down system. In order to avoid this problem, the underlying hardware system is usually upgraded continuously. These upgrades, typically accompanied by various software updates, usually result in a temporarily increased amount of available resources. This work deals with the problem in a different way by proposing an implementation of a web e-learning system with a modified software architecture that reduces the resource usage of the server part to the bare minimum. In order to implement a full-scale e-learning system that could be used as a substitute for a conventional web e-learning system, a Rich Internet Application framework was used as a basis. The technology allowed the implementation of advanced interactivity features and provided an easy transfer of a substantial part of the application logic from server to clients. In combination with a special server application, the server part of the new system is able to run with reasonable performance on hardware with very limited computing resources.
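The architectural move the abstract describes, shifting application logic from server to client, can be illustrated with a toy example. The split below (a server function that only hands out raw records, and a client function that does the filtering and sorting a classic server would do) is a hypothetical sketch, not the system's actual design; all names are invented.

```python
# Toy sketch of the thin-server idea: the server performs no per-user
# computation, while presentation logic runs on the client (e.g. in an
# RIA). Data and function names are illustrative assumptions.

def server_fetch_courses():
    # stands in for a static-file or plain database read
    return [
        {"title": "Algebra", "level": 2},
        {"title": "Calculus", "level": 3},
        {"title": "Arithmetic", "level": 1},
    ]

def client_view(courses, max_level):
    # filtering and sorting executed on the client, not the server
    visible = [c for c in courses if c["level"] <= max_level]
    return sorted(visible, key=lambda c: c["level"])
```

Because the server only serves unchanged data, its cost per additional user stays near constant, which is what lets the system run on hardware with very limited resources.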
Abstract:
New tools for editing digital images, music and films have opened up new possibilities for wider circles of society to engage in ‘artistic’ activities of varying quality. User-generated content has produced a plethora of new forms of artistic expression. One type of user-generated content is the mashup. Mashups are compositions that combine existing works (often) protected by copyright and transform them into new original creations. The European legislative framework has not yet reacted to the copyright problems provoked by mashups. Neither under the US fair use doctrine, nor under the strict corset of limitations and exceptions in Art. 5(2)-(3) of the Copyright Directive (2001/29/EC), have mashups found room to develop in a safe legal environment. The contribution analyzes the current European legal framework and identifies its insufficiencies with regard to enabling a legal mashup culture. In the comparison with the US fair use approach, in particular the parody defense, a recent CJEU judgment serves as a comparative example. Finally, an attempt is made to suggest solutions for the European legislator, based on the policy proposals of the EU Commission's "Digital Agenda" and more recent policy documents (e.g. "On Content in the Digital Market", "Licenses for Europe"). In this context, a distinction is made between non-commercial mashup artists and the emerging commercial mashup scene.
Abstract:
Music is an intriguing stimulus widely used in movies to increase the emotional experience. However, no brain imaging study has to date examined this enhancement effect using emotional pictures (the modality mostly used in emotion research) and musical excerpts. Therefore, we designed this functional magnetic resonance imaging study to explore how musical stimuli enhance the feeling of affective pictures. In a classical block design carefully controlling for habituation and order effects, we presented fearful and sad pictures (mostly taken from the IAPS) either alone or combined with congruent emotional musical excerpts (classical pieces). Subjective ratings clearly indicated that the emotional experience was markedly increased in the combined relative to the picture condition. Furthermore, using a second-level analysis and regions of interest approach, we observed a clear functional and structural dissociation between the combined and the picture condition. Besides increased activation in brain areas known to be involved in auditory as well as in neutral and emotional visual-auditory integration processes, the combined condition showed increased activation in many structures known to be involved in emotion processing (including for example amygdala, hippocampus, parahippocampus, insula, striatum, medial ventral frontal cortex, cerebellum, fusiform gyrus). In contrast, the picture condition only showed an activation increase in the cognitive part of the prefrontal cortex, mainly in the right dorsolateral prefrontal cortex. Based on these findings, we suggest that emotional pictures evoke a more cognitive mode of emotion perception, whereas congruent presentations of emotional visual and musical stimuli rather automatically evoke strong emotional feelings and experiences.
Abstract:
Most previous neurophysiological studies evoked emotions by presenting visual stimuli. Models of the emotion circuits in the brain have for the most part ignored emotions arising from musical stimuli. To our knowledge, this is the first emotion brain study to examine the influence of combined visual and musical stimuli on brain processing. Highly arousing pictures from the International Affective Picture System and classical musical excerpts were chosen to evoke the three basic emotions of happiness, sadness and fear. The emotional stimulus modalities were presented for 70 s either alone or combined (congruent) in a counterbalanced and random order. Electroencephalogram (EEG) alpha power density, which is inversely related to neural electrical activity, was recorded from 30 scalp electrodes in 24 right-handed healthy female subjects. In addition, heart rate (HR), skin conductance responses (SCR), respiration, temperature and psychometric ratings were collected. Results showed that the experienced quality of the presented emotions was most accurate in the combined conditions, intermediate in the picture conditions and lowest in the sound conditions. Furthermore, both the psychometric ratings and the physiological involvement measurements (SCR, HR, respiration) were significantly increased in the combined and sound conditions compared to the picture conditions. Finally, repeated measures ANOVA revealed the largest alpha power density for the sound conditions, intermediate values for the picture conditions, and the lowest for the combined conditions, indicating the strongest activation in the combined conditions in a distributed emotion and arousal network comprising frontal, temporal, parietal and occipital neural structures. Summing up, these findings demonstrate that music can markedly enhance the emotional experience evoked by affective pictures.
Abstract:
Medical doctors often do not trust the results of fully automatic segmentations because they have no way to make corrections if necessary. On the other hand, manual corrections can introduce a user bias. In this work, we propose to integrate the possibility of quick manual corrections into a fully automatic segmentation method for brain tumor images. This allows for necessary corrections while maintaining a high objectiveness. The underlying idea is similar to the well-known GrabCut algorithm, but here we combine decision forest classification with conditional random field regularization for interactive segmentation of 3D medical images. The approach has been evaluated by two different users on the BraTS2012 dataset. Accuracy and robustness improved compared to a fully automatic method, and our interactive approach was ranked among the top performing methods. Time for computation including manual interaction was less than 10 minutes per patient, which makes it attractive for clinical use.
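The interaction pattern in this abstract (an automatic labeling that is regularized and then locally overridden by user corrections) can be sketched on a toy 1D example. Note the assumptions: the real method uses a decision forest and a conditional random field on 3D volumes; below, a majority-vote smoothing pass stands in for the CRF, and a dictionary of user "seeds" stands in for manual corrections.

```python
# Toy 1D sketch of regularization plus manual correction. labels is a
# list of per-voxel class labels (e.g. from a classifier); seeds maps
# index -> label fixed by the user and always wins over the smoothing.

def smooth_labels(labels, seeds=None, passes=1):
    seeds = seeds or {}
    out = list(labels)
    for _ in range(passes):
        prev = out[:]
        for i in range(1, len(prev) - 1):
            # majority vote over the local neighborhood (CRF stand-in)
            neigh = [prev[i - 1], prev[i], prev[i + 1]]
            out[i] = max(set(neigh), key=neigh.count)
        for i, lab in seeds.items():
            out[i] = lab          # user corrections override the result
    return out
```

Without seeds, an isolated label is smoothed away as noise; with a seed at that position, the user's correction survives regularization, which mirrors how the proposed tool keeps automatic objectiveness while honoring quick manual edits.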