537 results for computer forensics tools
Abstract:
Firstly, we would like to thank Ms. Alison Brough and her colleagues for their positive commentary on our published work [1] and their appraisal of the utility of our “off-set plane” protocol for anthropometric analysis. The standardized protocols described in our manuscript have wide applications, ranging from forensic anthropology and paleodemographic research to clinical settings such as paediatric practice and orthopaedic surgical design. We affirm that the use of geometrically based reference tools commonly found in computer-aided design (CAD) programs such as Geomagic Design X® is imperative for more automated and precise measurement protocols for quantitative skeletal analysis. Therefore, we stand by our recommendation of the use of software such as Amira and Geomagic Design X® in the contexts described in our manuscript...
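The response does not spell out the geometry behind the protocol. As a minimal sketch, assuming the reference plane is fitted through three landmark points and then translated along its normal, the following fragment is illustrative only; the function names, coordinates, and 10 mm offset are hypothetical, not taken from the published protocol.

```python
import numpy as np

def plane_from_landmarks(p1, p2, p3):
    """Unit normal and anchor point of the plane through three 3D landmarks."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    return normal / np.linalg.norm(normal), p1

def offset_plane(normal, point, distance):
    """Translate the plane along its normal by a fixed offset distance."""
    return normal, point + distance * normal

def distance_to_plane(q, normal, point):
    """Signed perpendicular distance from a measurement point to the plane."""
    return float(np.dot(np.asarray(q, dtype=float) - point, normal))

# Hypothetical landmark coordinates in mm
n, p = plane_from_landmarks([0, 0, 0], [50, 0, 0], [0, 40, 0])
n_off, p_off = offset_plane(n, p, 10.0)  # reference plane offset by 10 mm
print(distance_to_plane([10, 10, 25], n_off, p_off))  # -> 15.0
```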
Abstract:
This PhD research has provided novel solutions to three major challenges that have prevented the widespread deployment of speaker recognition technology: (1) combating enrolment/verification mismatch, (2) reducing the large amount of development and training data required, and (3) reducing the duration of speech required to verify a speaker. A range of applications of speaker recognition technology, from forensics in criminal investigations to secure access in banking, will benefit from the research outcomes.
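The abstract does not describe the verification back-end. A minimal sketch of one common scoring step, assuming fixed-length speaker embeddings (e.g. i-vectors or neural embeddings) have already been extracted from the enrolment and verification utterances; the embedding values and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def cosine_score(enrol_emb, verif_emb):
    """Cosine similarity between enrolment and verification embeddings."""
    a, b = np.asarray(enrol_emb, dtype=float), np.asarray(verif_emb, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrol_emb, verif_emb, threshold=0.5):
    """Accept the claimed identity if the score clears the threshold."""
    return cosine_score(enrol_emb, verif_emb) >= threshold

# Hypothetical 4-dimensional embeddings for illustration
enrol = [0.9, 0.1, 0.3, 0.2]
trial = [0.8, 0.2, 0.4, 0.1]
print(cosine_score(enrol, trial), verify(enrol, trial))  # ~0.98, True
```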
Abstract:
As a Lecturer of Animation History and 3D Computer Animator, I received a copy of Moving Innovation: A History of Computer Animation by Tom Sito with an element of anticipation, in the hope that this text would clarify the complex evolution of Computer Graphics (CG). Tom Sito did not disappoint, as this text weaves together the multiple development streams and convergent technologies and techniques throughout history that would ultimately result in modern CG. Universities now have students who have never known a world without computer animation, and many students are younger than the first 3D CG animated feature film, Toy Story (1995); this text is ideal for teaching computer animation history and, as I would argue, it also provides a model for engaging young students in the study of animation history in general. This is because Sito places the development of computer animation within the context of its pre-digital ancestry, and throughout the text he continues to link the discussion to the broader history of animation, its pioneers, technologies and techniques...
Abstract:
Media architecture’s combination of the digital and the physical can trigger, enhance, and amplify urban experiences. In this paper, we examine how to bring about and foster more open and participatory approaches to engaging communities through media architecture by identifying novel ways to put some of the creative process into the hands of laypeople. We review technical, spatial, and social aspects of DIY phenomena with a view to better understanding maker cultures, communities, and practices. We synthesise our findings and ask if and how media architects as a community of practice can encourage the ‘open-sourcing’ of information and tools, allowing laypeople not only to participate but also to become active instigators of change in their own right. We argue that enabling true DIY practices in media architecture may increase citizen control. Seeking design strategies that foster DIY approaches, we propose five areas for further work and investigation. The paper raises many questions, indicating ample room for further research into DIY Media Architecture.
Abstract:
Process modelling is an integral part of any process industry. Several sugar factory models have been developed over the years to simulate the unit operations. An enhanced and comprehensive milling process simulation model has been developed to analyse the performance of the milling train and to assess the impact of changes and advanced control options for improved operational efficiency. The developed model is incorporated in a proprietary software package, ‘SysCAD’. As an example, the milling process model has been used to predict the significant loss of extraction that results from returning the cush from the juice screen before #3 mill instead of before #2 mill, as is more commonly done. Further work is being undertaken to model extraction processes in a milling train more accurately, to examine extraction issues dynamically, and to integrate the model into a whole-factory model.
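The abstract does not give the model equations, and SysCAD’s internals are proprietary. A minimal sketch of the kind of per-mill sucrose balance such a simulation chains along a milling train; the function names and per-mill extraction fractions are illustrative assumptions, not SysCAD’s API.

```python
def mill_unit(sucrose_in, extraction_fraction):
    """Split the sucrose entering one mill between expressed juice and bagasse.

    extraction_fraction is the share of incoming sucrose recovered as juice at
    this mill (illustrative; a real model would depend on imbibition rate,
    roller pressure, fibre rate, etc.).
    """
    juice = extraction_fraction * sucrose_in
    return juice, sucrose_in - juice

def train_extraction(sucrose_feed, fractions):
    """Overall extraction of a milling train modelled as sequential mill units."""
    remaining, total_juice = sucrose_feed, 0.0
    for f in fractions:
        juice, remaining = mill_unit(remaining, f)
        total_juice += juice
    return total_juice / sucrose_feed

# A five-mill train with assumed per-mill extraction fractions
print(train_extraction(100.0, [0.70, 0.45, 0.40, 0.35, 0.30]))  # ~0.955
```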
Abstract:
Energy usage in general, and electricity usage in particular, are major concerns internationally due to the increased cost of providing energy supplies and the environmental impacts of electricity generation using carbon-based fuels. If a "systems" approach is taken to understanding energy issues, then both supply and demand need to be considered holistically. This paper examines two research projects in the energy area with IT tools as key deliverables, one examining supply issues and the other studying demand-side issues. The supply-side project used hard engineering methods to build the models and software, while the demand-side project used a social science approach. While the projects are distinct, there was an overlap in personnel. Comparing the knowledge extraction, model building, implementation, and interface issues of these two deliverables identifies both interesting contrasts and commonalities.
Abstract:
This paper describes a design framework intended to conceptually map the influence that game design has on the creative activity people engage in during gameplay. The framework builds on behavioral and verbal analysis of people playing puzzle games. The analysis was designed to better understand the extent to which gameplay activities within different games facilitate creative problem solving. We have used an expert review process to evaluate these games in terms of their game design elements and have taken a cognitive action approach to this process to investigate how particular elements produce the potential for creative activity. This paper proposes guidelines that build upon our understanding of the relationship between the creative processes that players undertake during a game and the components of the game that allow these processes to occur. These guidelines may be used in the game design process to better facilitate creative gameplay activity.
Abstract:
Tangled (2010) demonstrated that Walt Disney Animation has successfully extended the traditional Disney animation aesthetic to the 3D medium. The very next film produced by the studio, however, Wreck-it Ralph (2012), required the animators (trained in the traditional Disney style) to develop a limited style of animation inspired by the 8-bit motion of 1980s video games. This paper examines the 8-bit-style motion in Wreck-it Ralph to understand if and how the principles of animation were adapted for the film.
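The paper’s analysis is described only at a high level. One way to picture the 8-bit-inspired limitation is motion that snaps between a few held poses instead of interpolating every frame, as in this illustrative sketch; the 24 fps sampling and four-pose hold count are assumptions, not values from the film.

```python
def smooth_position(t, start, end, duration):
    """Conventional animation: position interpolates continuously over time."""
    return start + (end - start) * min(t / duration, 1.0)

def stepped_position(t, start, end, duration, holds=4):
    """8-bit-style motion: the same move quantized into a few held poses."""
    step = min(int(t / duration * holds), holds - 1)
    return start + (end - start) * step / (holds - 1)

# A one-second move sampled at 24 fps: smooth vs. pose-snapped trajectories
for frame in range(0, 25, 4):
    t = frame / 24.0
    print(frame, round(smooth_position(t, 0, 100, 1.0), 1),
          round(stepped_position(t, 0, 100, 1.0), 1))
```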
Abstract:
Background: A major challenge for assessing students’ conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate students’ incomplete understanding of a topic or concept, based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students’ reasoning for their multiple-choice selections. Open-ended or structured response questions are one method for capturing higher-level thinking, but are often costly in terms of the time and attention required to properly assess student responses. Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g. students’ written explanations and reasoning for multiple-choice selections. Design/Method: We incorporated an open response component into an online signals and systems multiple-choice test to capture written explanations of students’ selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing the results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated strengths and weaknesses of both the human and automated approaches. Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas. For some questions, these associations were not anticipated or included in the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding. The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in the identification of concepts and their relationships in large textual data sets. We also identified several challenges in using automated analysis, as well as in manual and computer-assisted analysis. Conclusions: This study highlighted the usefulness of incorporating and analysing students’ reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective...
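Neither Leximancer nor NVivo exposes its algorithms in the abstract. A minimal sketch of the general kind of concept co-occurrence analysis that automated lexical tools perform, assuming a predefined concept vocabulary; the seed terms and sample responses are illustrative, not from the study.

```python
from collections import Counter
from itertools import combinations
import re

# Illustrative concept vocabulary for a signals-and-systems question
CONCEPTS = {"frequency", "amplitude", "phase", "filter", "convolution"}

def extract_concepts(response):
    """Concepts mentioned in one student's written explanation."""
    tokens = set(re.findall(r"[a-z]+", response.lower()))
    return tokens & CONCEPTS

def cooccurrence(responses):
    """Count how often concept pairs appear together across responses."""
    pairs = Counter()
    for text in responses:
        for pair in combinations(sorted(extract_concepts(text)), 2):
            pairs[pair] += 1
    return pairs

responses = [
    "The filter removes high frequency components, so amplitude drops.",
    "I chose B because convolution in time is multiplication in frequency.",
]
print(cooccurrence(responses).most_common())
```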
Abstract:
Project work can involve multiple people from varying disciplines coming together to solve problems as a group. Large-scale interactive displays present new opportunities to support such interactions with interactive and semantically enabled cooperative work tools such as intelligent mind maps. In this paper, we present a novel digital, touch-enabled mind-mapping tool as a first step towards achieving such a vision. This first prototype allows an evaluation of the benefits of a digital environment for a task that would otherwise be performed on paper or flat interactive surfaces. Observations and surveys of 12 participants in 3 groups allowed the formulation of several recommendations for further research into: new methods for capturing text input on touch screens; the inclusion of complex structures; multi-user environments and how users make the shift from single-user applications; and how best to navigate large screen real estate in a touch-enabled, co-present multi-user setting.
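The prototype’s internals are not described. A minimal sketch of the node structure a digital mind map might keep, which is where the advantage over paper lies: labels, positions, and links stay machine-readable for later search or export. The class and field names are illustrative assumptions, not the tool’s actual design.

```python
from dataclasses import dataclass, field

@dataclass
class MindMapNode:
    """One idea on the map; children form the branching structure."""
    label: str
    x: float = 0.0          # position on the large touch display
    y: float = 0.0
    children: list["MindMapNode"] = field(default_factory=list)

    def add_child(self, label, x=0.0, y=0.0):
        child = MindMapNode(label, x, y)
        self.children.append(child)
        return child

    def walk(self, depth=0):
        """Yield (depth, label) pairs, e.g. for export or search."""
        yield depth, self.label
        for child in self.children:
            yield from child.walk(depth + 1)

root = MindMapNode("Project goals")
ui = root.add_child("Touch input", x=120, y=80)
ui.add_child("Text capture on touch screens")
for depth, label in root.walk():
    print("  " * depth + label)
```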
Abstract:
This paper explores how traditional media organizations (such as magazines, music, film, books, and newspapers) develop routines for coping with an increasingly productive audience. While previous studies have reported on how such organizations have been affected by digital technologies, this study makes a contribution to this literature by being one of the first to show how organizational routines for engaging with an increasingly productive audience actually emerge and diffuse between industries. The paper explores to what extent routines employed by two traditional media organizations have been brought in from other organizational settings, specifically from so-called ‘software platform operators’. Data on routines for engaging with productive audiences have been collected from two information-rich cases in the music and the magazine industries, and from eight high-profile software platform operators. The paper concludes that the routines employed by the two traditional media organizations and by the software platform operators are based on the same set of principles: Provide the audience with (a) tools that allow them to easily generate cultural content; (b) building blocks which facilitate their creative activities; and (c) recognition and rewards based on both rationality and emotion.
Abstract:
This paper presents an overview of the strengths and limitations of existing and emerging geophysical tools for landform studies. The objectives are to discuss recent technical developments and to provide a review of relevant recent literature, with a focus on propagating field methods with terrestrial applications. For the various methods in this category, including ground-penetrating radar (GPR), electrical resistivity (ER), seismics, and electromagnetic (EM) induction, the technical backgrounds are introduced, followed by a section on novel developments relevant to landform characterization. For several decades, GPR has been popular for characterization of the shallow subsurface and in particular sedimentary systems. Novel developments in GPR include the use of multi-offset systems to improve signal-to-noise ratios and data collection efficiency, amongst others, and the increased use of 3D data. Multi-electrode ER systems have become popular in recent years as they allow for relatively fast and detailed mapping. Novel developments include time-lapse monitoring of dynamic processes as well as the use of capacitively-coupled systems for fast, non-invasive surveys. EM induction methods are especially popular for fast mapping of spatial variation, but can also be used to obtain information on the vertical variation in subsurface electrical conductivity. In recent years, several examples of the use of plane-wave EM for characterization of landforms have been published. Seismic methods for landform characterization include seismic reflection and refraction techniques and the use of surface waves. A recent development is the use of passive sensing approaches. The use of multiple geophysical methods, which can benefit from the sensitivity to different subsurface parameters, is becoming more common. Strategies for coupled and joint inversion of complementary datasets will, once more widely available, benefit the geophysical study of landforms. Three case studies are presented on the use of electrical and GPR methods for characterization of landforms in the range of meters to 100s of meters in dimension. In a study of polygonal patterned ground in the Saginaw Lowlands, Michigan, USA, electrical resistivity tomography was used to characterize differences in subsurface texture and water content associated with polygon-swale topography. A sand-filled thermokarst feature was also identified using electrical resistivity data. The second example concerns the use of constant spread traversing (CST) for characterization of large-scale glaciotectonic deformation in the Ludington Ridge, Michigan. Multiple CST surveys parallel to an ~60 m high cliff, where broad (~100 m) synclines and narrow clay-rich anticlines are visible, illustrated that at least one of the narrow structures extended inland. A third case study discusses the internal structures of an eolian dune on a coastal spit in New Zealand. Both 35 and 200 MHz GPR data, which clearly identified a paleosol and internal sedimentary structures of the dune, were used to improve understanding of the development of the dune, which may shed light on paleo-wind directions.
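Two of the relations underlying the surveys described here are standard and can be stated exactly: apparent resistivity for a Wenner electrode array, ρ_a = 2πa·V/I, and GPR reflector depth from two-way travel time, d = v·t/2 with v ≈ c/√ε_r. A minimal sketch of both; the field readings in the example are illustrative, not from the case studies.

```python
import math

C = 3.0e8  # speed of light in vacuum, m/s

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array: rho_a = 2*pi*a*V/I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

def gpr_reflector_depth(twt_ns, rel_permittivity):
    """Reflector depth (m) from two-way travel time: d = v*t/2, v = c/sqrt(eps_r)."""
    velocity = C / math.sqrt(rel_permittivity)
    return velocity * (twt_ns * 1e-9) / 2.0

# Illustrative readings: 10 m electrode spacing, 0.25 V at 0.1 A injected current;
# a 60 ns GPR reflection in dry sand (eps_r ~ 4)
print(wenner_apparent_resistivity(10.0, 0.25, 0.1))  # ~157 ohm-m
print(gpr_reflector_depth(60.0, 4.0))                # 4.5 m
```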