978 results for computer forensics tools


Relevance:

30.00%

Publisher:

Abstract:

In his discussion, Near Term Computer Management Strategy For Hospitality Managers and Computer System Vendors, William O'Brien, Associate Professor, School of Hospitality Management at Florida International University, initially states: “The computer revolution has only just begun. Rapid improvement in hardware will continue into the foreseeable future; over the last five years it has set the stage for more significant improvements in software technology still to come. John Naisbitt's information electronics economy¹ based on the creation and distribution of information has already arrived and as computer devices improve, hospitality managers will increasingly do at least a portion of their work with software tools.” At the time of this writing, O'Brien would have you know that, contrary to what some people might think, the computer revolution is not over; it is just beginning, still an embryo. Computer technology will only continue to develop and expand, O'Brien says, citing his sources. “A complacent few of us who feel ‘we have survived the computer revolution’ will miss opportunities as a new wave of technology moves through the hospitality industry,” says O'Brien. “Both managers who buy technology and vendors who sell it can profit from strategy based on understanding the wave of technological innovation,” is his informed opinion. Property managers who embrace rather than eschew innovation, in this case computer technology, will benefit greatly from this new science in hospitality management, O'Brien says. “The manager who is not alert to or misunderstands the nature of this wave of innovation will be the constant victim of technology,” he advises. On the vendor side of the equation, O'Brien observes, “Computer-wise hospitality managers want systems which are easier and more profitable to operate. Some view their own industry as being somewhat behind the times… They plan to pay significantly less for better computer devices. Their high expectations are fed by vendor marketing efforts…” He warns against gambling on a risky computer system by falling victim to unsubstantiated claims and pie-in-the-sky promises. He recommends affiliating with turn-key vendors who provide hardware, software, and training, or soliciting the help of large mainstream vendors such as IBM, NCR, or Apple. Many experts agree that the computer revolution has simply morphed into the software revolution, O'Brien notes; “…recognizing that a computer is nothing but a box in which programs run.” Yes, some of the empirical data in this article is dated by now, but the core philosophy of advancing technology, and of properties continually tapping current knowledge, is sound.

Relevance:

30.00%

Publisher:

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured with a wavefront aberrometer, serving as a reference for the precompensation model. The dynamic precompensation was then generated from the resized aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
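
As a rough illustration of the kind of pipeline described above (not the dissertation's implementation), the following numpy sketch builds an eye PSF from a single Zernike defocus term and pre-distorts an image with Wiener-style inverse filtering so that the eye's blur approximately cancels it; all parameter values are placeholders.

```python
import numpy as np

def pupil_psf(defocus_coeff, n=128, pupil_radius=0.9):
    """PSF of an eye with a single Zernike defocus term (illustrative)."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    rho = np.hypot(x, y) / pupil_radius
    inside = rho <= 1.0
    # Zernike Z(2,0) defocus: sqrt(3) * (2*rho^2 - 1), coefficient in waves
    wavefront = defocus_coeff * np.sqrt(3) * (2 * rho**2 - 1)
    pupil = inside * np.exp(2j * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()

def precompensate(image, psf, nsr=0.01):
    """Wiener-style inverse filtering: pre-distort the image so that
    subsequent blurring by `psf` approximately restores it."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    wiener = np.conj(otf) / (np.abs(otf)**2 + nsr)
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))
    return np.clip(out, 0.0, 1.0)

# Example: precompensate a random test target for a mildly defocused eye.
psf = pupil_psf(defocus_coeff=0.5, n=128)
target = np.random.rand(128, 128)
display_image = precompensate(target, psf)
```

A dynamic variant would recompute the PSF whenever the monitored pupil diameter changes; here a single static pupil is assumed for brevity.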

Relevance:

30.00%

Publisher:

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
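
For reference, here is a sketch of the two special cases that the mirrored normal-Bingham distribution generalizes, written in standard notation (the thesis's own parameterization may differ): the multivariate normal on Euclidean space and the antipodally symmetric Bingham distribution on the unit hypersphere, the latter commonly used for unit-quaternion orientations.

```latex
\[
  p(\mathbf{t}) = (2\pi)^{-d/2}\,\lvert\Sigma\rvert^{-1/2}
    \exp\!\Big(-\tfrac{1}{2}(\mathbf{t}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{t}-\boldsymbol{\mu})\Big),
  \qquad
  p(\mathbf{x}) = \frac{1}{F(Z)}\,
    \exp\!\big(\mathbf{x}^{\top} M Z M^{\top} \mathbf{x}\big), \quad \lVert\mathbf{x}\rVert = 1,
\]
% where M is orthogonal, Z = diag(z_1, ..., z_q) holds the concentrations,
% and F(Z) is the (hypergeometric) normalizing constant of the Bingham density.
```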

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
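
Below is a minimal sketch of the contrast between the two strategies, using invented parameters (this is not the thesis's detector pipeline): a ground-plane homography is applied either to the image before computing HOG-like features, or channel-wise to a feature map that has already been computed, which avoids re-extracting features after every pose update.

```python
import cv2
import numpy as np

def hoglike_features(gray, cell=8, bins=9):
    """Very small HOG-like descriptor: per-cell histograms of gradient
    orientation weighted by gradient magnitude (illustrative only)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    h, w = gray.shape
    hc, wc = h // cell, w // cell
    bin_idx = np.minimum(((ang % 180.0) / (180.0 / bins)).astype(int), bins - 1)
    feat = np.zeros((hc, wc, bins), np.float32)
    for i in range(hc):
        for j in range(wc):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            feat[i, j] = np.bincount(b.ravel(), m.ravel(), minlength=bins)
    return feat

def warp_feature_map(feat, H_image, cell=8):
    """Apply an image-plane homography to a cell-grid feature map by
    rescaling it to cell coordinates and warping each channel."""
    S = np.diag([1.0 / cell, 1.0 / cell, 1.0])     # image pixels -> cell coords
    H_cells = S @ H_image @ np.linalg.inv(S)
    hc, wc, bins = feat.shape
    return np.stack(
        [cv2.warpPerspective(np.ascontiguousarray(feat[:, :, k]), H_cells, (wc, hc))
         for k in range(bins)], axis=-1)

# Usage sketch: H_ground is a hypothetical ground-plane rectifying homography.
gray = np.random.rand(240, 320).astype(np.float32)
H_ground = np.array([[1.0, -0.3, 40.0],
                     [0.0,  0.8,  0.0],
                     [0.0, -0.001, 1.0]])
feat_space_ipm = warp_feature_map(hoglike_features(gray), H_ground)
image_space_ipm = hoglike_features(cv2.warpPerspective(gray, H_ground, (320, 240)))
```

Warping the much smaller cell grid is cheaper than warping and re-describing the full-resolution image, which is consistent with the speed advantage the abstract reports.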

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Relevance:

30.00%

Publisher:

Abstract:

The Property and Equipment Department has a central supply of automotive parts, tools, and maintenance supplies. This central supply serves the repair shop and also supplies parts to the various field garages and all departments of the Commission. The old procedure involved manually keeping track of all of the parts, some 22,000 items. All records, billings, and re-order points were kept manually. Many times the re-order point was discovered only by reaching into a bin and finding nothing there. To improve this situation, an inventory control system was established for use on the computer. A complete record of the supplies stored in the central warehouse was prepared, and this information was used to produce a catalog. Each time an item is issued or received, it is processed through the inventory program. When the re-order point is reached, a notice to reorder is given. The procedure for taking inventory has been improved, and a voucher invoice is now prepared by the computer for all issues to departments. These are some of the many benefits that have been derived from this system.
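
As a small illustration of the logic described (invented item numbers and quantities, not the Commission's actual records), the following Python sketch posts issues and receipts against a catalog and raises a re-order notice when the balance reaches the re-order point.

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    item_no: str
    description: str
    on_hand: int
    reorder_point: int
    reorder_qty: int

def post_transaction(catalog, item_no, qty, kind):
    """Process one issue ('ISSUE') or receipt ('RECEIPT') and return a
    re-order notice when the balance falls to the re-order point."""
    item = catalog[item_no]
    item.on_hand += qty if kind == "RECEIPT" else -qty
    if item.on_hand <= item.reorder_point:
        return (f"REORDER {item.item_no} {item.description}: "
                f"on hand {item.on_hand}, order {item.reorder_qty}")
    return None

# Hypothetical central-supply catalog and a day's transactions.
catalog = {
    "10-0042": CatalogItem("10-0042", "oil filter", on_hand=35, reorder_point=20, reorder_qty=100),
    "22-1180": CatalogItem("22-1180", "grader blade", on_hand=6, reorder_point=4, reorder_qty=12),
}
for item_no, qty, kind in [("10-0042", 18, "ISSUE"), ("22-1180", 3, "ISSUE")]:
    notice = post_transaction(catalog, item_no, qty, kind)
    if notice:
        print(notice)
```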

Relevance:

30.00%

Publisher:

Abstract:

Computer game technology provides us with the tools to create web-based educational materials for autonomous and collaborative learning. At Worcester, we have researched the use of this technology in various educational contexts. This paper reports one such study: the use of the commercial game engine “Unreal Tournament 2004” (UT2004) to produce materials suitable for the education of architects. We map the concepts and principles of architectural design onto the affordances (development tools) provided by UT2004, leading to a systematic procedure for the realization of buildings and urban environments using this game engine. A theory for the production of web-based learning materials that supports both autonomous and collaborative learning is developed. A heuristic evaluation of our materials, as used with second-year students, is presented. Associated web pages provide on-line materials for delegates.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. These methods are investigated, supported by a review of the literature on the use of such techniques in other fields. The hypothesis of this body of research is that by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three different case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one case study utilises a tool created by a third-party research group. A pilot study is conducted on these cases by the researcher, with the strengths and weaknesses of each being carried into the next case study. The culmination of these case studies is a prototype tool developed as a timeline visualisation of user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, to use, and the others were given a common open-source tool. The assessed metrics included how long the participants took to complete all tasks, how accurate their answers to the tasks were, and how easy the participants found the tasks to complete. They were also asked for their feedback at multiple points throughout the task. Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant differences in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research show that there is evidence to suggest the potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that the investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios left the investigators at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool instead of the textual tool.
It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
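
The Insight prototype itself is not reproduced here, but the following matplotlib sketch shows the underlying idea of a timeline visualisation of user behaviour: timestamped artefacts from a hypothetical forensic export are plotted along a time axis, one lane per artefact source.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical export of timestamped artefacts (e.g. from a forensic parser),
# with columns: timestamp, source (logon, browser, usb, file, ...), description.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-05-01 09:02", "2023-05-01 09:15", "2023-05-01 09:16",
        "2023-05-01 11:40", "2023-05-01 11:42"]),
    "source": ["logon", "browser", "browser", "usb", "file"],
    "description": ["user login", "search query", "download", "USB inserted", "file copied"],
})

# One horizontal lane per artefact source, events plotted along time.
sources = sorted(events["source"].unique())
lane = {s: i for i, s in enumerate(sources)}
fig, ax = plt.subplots(figsize=(8, 2 + 0.4 * len(sources)))
ax.scatter(events["timestamp"], events["source"].map(lane), marker="|", s=400)
for _, row in events.iterrows():
    ax.annotate(row["description"], (row["timestamp"], lane[row["source"]] + 0.15),
                rotation=20, fontsize=8)
ax.set_yticks(range(len(sources)))
ax.set_yticklabels(sources)
ax.set_xlabel("time")
ax.set_title("User behaviour timeline (illustrative)")
fig.tight_layout()
plt.show()
```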

Relevance:

30.00%

Publisher:

Abstract:

Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that will be discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is First-Order Logic Constraint Specification Language (FOLCSL) that enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
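
As a toy illustration of the first part of the framework (not the FOLCSL implementation), the sketch below plays the role of a synthesized verification program: it reads an event trace and reports violations of one invented invariant, namely that every request is eventually completed exactly once.

```python
def check_request_completion(trace):
    """Check the invariant: every ('request', id) event is eventually
    followed by a matching ('complete', id) event, and never completed twice."""
    pending, violations = set(), []
    for step, (kind, req_id) in enumerate(trace):
        if kind == "request":
            pending.add(req_id)
        elif kind == "complete":
            if req_id not in pending:
                violations.append((step, f"completion of unknown request {req_id}"))
            else:
                pending.discard(req_id)
    for req_id in sorted(pending):
        violations.append((len(trace), f"request {req_id} never completed"))
    return violations

# Hypothetical event trace emitted by a simulator run.
trace = [("request", 1), ("request", 2), ("complete", 1), ("complete", 3)]
for step, message in check_request_completion(trace):
    print(f"invariant violated at step {step}: {message}")
```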

Relevance:

30.00%

Publisher:

Abstract:

Objectives: to evaluate the cognitive learning of nursing students in neonatal clinical evaluation through a blended course using computer and laboratory simulation; to compare the cognitive learning of students in a control and an experimental group testing the laboratory simulation; and to assess, according to the students, the extracurricular blended course offered on the clinical assessment of preterm infants. Method: a quasi-experimental study with 14 Portuguese students, comprising a pretest, a midterm test and a post-test. The technologies offered in the course were the serious game e-Baby, instructional software on semiology and semiotechnique, and laboratory simulation. Data collection tools developed for this study were used for the course evaluation and the characterization of the students. Nonparametric statistics were used: Mann-Whitney and Wilcoxon. Results: the use of the validated digital technologies and laboratory simulation produced a statistically significant difference (p = 0.001) in the learning of the participants. The students evaluated the course as very satisfactory. The laboratory simulation alone did not produce a significant difference in learning. Conclusions: the cognitive learning of participants increased significantly. The use of technology can be partly credited for the course's success, showing it to be an important teaching tool for innovation and for motivating learning in healthcare.
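
For readers unfamiliar with the tests named above, the following sketch shows how such nonparametric comparisons are typically run in Python with SciPy; the scores are placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Placeholder cognitive-test scores (0-100), not the study's data.
pretest_exp  = [52, 48, 60, 55, 45, 58, 50]
posttest_exp = [78, 70, 85, 80, 66, 82, 74]
posttest_ctl = [68, 60, 72, 65, 58, 70, 63]

# Within-group learning gain: paired samples, nonparametric.
w_stat, w_p = wilcoxon(pretest_exp, posttest_exp)
print(f"Wilcoxon signed-rank (pre vs post, experimental): p = {w_p:.4f}")

# Between-group comparison of post-test scores: independent samples, nonparametric.
u_stat, u_p = mannwhitneyu(posttest_exp, posttest_ctl, alternative="two-sided")
print(f"Mann-Whitney U (experimental vs control post-test): p = {u_p:.4f}")
```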

Relevance:

30.00%

Publisher:

Abstract:

The purpose of the air traffic management system is to ensure the safe and efficient flow of air traffic. Therefore, while augmenting efficiency, throughput and capacity in airport operations, attention has rightly been placed on doing so in a safe manner. In the control tower, many advances in operational safety have come in the form of visualization tools for tower controllers. However, there is a paradox in developing such systems to increase controllers' situational awareness: by creating additional computer displays, the controller's vision is pulled away from the outside view and the time spent looking down at the monitors is increased. This reduces their situational awareness by forcing them to mentally and physically switch between the head-down equipment and the outside view. This research is based on the idea that augmented reality may be able to address this issue. The augmented reality concept has become increasingly popular over the past decade and is being used proficiently in many fields, such as entertainment, cultural heritage, aviation, and military and defense applications. This know-how could be transferred to air traffic control with relatively low effort and substantial benefits for controllers’ situation awareness. Research on this topic is consistent with the SESAR objectives of increasing air traffic controllers’ situation awareness and enabling up to 10% additional flights at congested airports while still increasing safety and efficiency. During the Ph.D., a research framework for prototyping augmented reality tools was set up. This framework consists of methodological tools for designing the augmented reality overlays, as well as of hardware and software equipment to test them. Several overlays have been designed and implemented in a simulated tower environment, a virtual reconstruction of the Bologna airport control tower. The positive impact of such tools was preliminarily assessed by means of the proposed methodology.
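
One elementary operation behind such overlays, shown here only as a generic sketch with placeholder camera parameters (not the prototype described), is projecting a known world position, such as an aircraft reported by surveillance, into the controller's view so that a label can be registered with the out-of-window scene.

```python
import numpy as np

def project_to_view(p_world, K, R, t):
    """Project a 3D point into pixel coordinates of the (head-worn or
    window-registered) view, using a simple pinhole camera model."""
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:                      # behind the viewer: nothing to draw
        return None
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

# Placeholder intrinsics/extrinsics for a tower viewpoint.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                              # view axes aligned with world axes
t = np.array([0.0, 0.0, 0.0])
aircraft_pos = np.array([120.0, -40.0, 900.0])   # metres, in view coordinates
pixel = project_to_view(aircraft_pos, K, R, t)
if pixel is not None:
    print(f"draw call-sign label at pixel ({pixel[0]:.0f}, {pixel[1]:.0f})")
```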

Relevance:

30.00%

Publisher:

Abstract:

Changing or creating an organisation means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural and legal risks: the former relate to errors that may occur during processes, while the latter relate to the compliance of processes with regulations. Managing these risks implies proposing changes to the processes that lead to the desired result: an optimised process. In order to manage a company and optimise it in the best possible way, the organisational aspect, risk management and legal compliance should not only each be taken into account, but all be analysed simultaneously, with the aim of finding the right balance that satisfies them all. This is the aim of this thesis: to provide methods and tools to balance these three characteristics; ICT support is used to enable this type of optimisation. This work is not a thesis in computer science or law, but rather an interdisciplinary thesis. Most of the work done so far is vertical, confined to a specific domain. The particularity and aim of this thesis is not to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects that are normally analysed separately yet affect and influence each other. In order to carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved, and the combination and collaboration of different experts in the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, the case study considered is a new type of healthcare service that allows patients with acute disease to be hospitalised at home. This provides the possibility of performing experiments using a real hospital database.

Relevance:

30.00%

Publisher:

Abstract:

One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems including, ambitiously, the one observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, which is found in various degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone towards the creation of intelligent artificial agents. Modern Deep Learning approaches allowed researchers and industries to achieve great advancements towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI allowed for the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, which was discovered in the 90s, naturally occurs in Deep Learning architectures where classic learning paradigms are applied when learning incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. This work will focus on a comprehensive view of continual learning by considering algorithmic, benchmarking, and applicative aspects of this field. This dissertation will also touch on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects concerning public competitions in this field.
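
As a generic illustration of the problem setting (not a method proposed in the dissertation), the PyTorch sketch below trains a model sequentially on a stream of experiences and mitigates catastrophic forgetting with a simple reservoir-sampled replay buffer; the model, sizes, and synthetic data are placeholders.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-sampled memory of past (x, y) examples."""
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, xs, ys):
        for x, y in zip(xs, ys):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_on_stream(model, experiences, lr=1e-2, replay_k=32):
    """Train sequentially on a stream of experiences (each a list of batches),
    mixing replayed examples into every new batch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    buffer = ReplayBuffer()
    for experience in experiences:
        for x, y in experience:
            bx, by = x, y
            if buffer.data:                       # mix in stored examples
                rx, ry = buffer.sample(replay_k)
                bx, by = torch.cat([x, rx]), torch.cat([y, ry])
            loss = F.cross_entropy(model(bx), by)
            opt.zero_grad()
            loss.backward()
            opt.step()
            buffer.add(x, y)                      # remember the new examples
    return model

# Tiny usage example: two synthetic "experiences" of 20 batches each.
model = nn.Linear(10, 5)
make_batch = lambda: (torch.randn(16, 10), torch.randint(0, 5, (16,)))
experiences = [[make_batch() for _ in range(20)] for _ in range(2)]
train_on_stream(model, experiences)
```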

Relevance:

30.00%

Publisher:

Abstract:

The research project aims to improve the Design for Additive Manufacturing of metal components. Firstly, the scenario of Additive Manufacturing is depicted, describing its role in Industry 4.0, with a particular focus on Metal Additive Manufacturing technologies and applications in the Automotive sector. Secondly, the state of the art in Design for Additive Manufacturing is described, contextualizing the methodologies and classifying guidelines, rules, and approaches. The key phases of product design and process design needed to achieve lightweight functional designs and reliable processes are examined in depth, together with the Computer-Aided Technologies that support the implementation of these approaches. On this basis, a general Design for Additive Manufacturing workflow based on product and process optimization has been systematically defined. From the analysis of the state of the art, a holistic approach has been considered fundamental, and thus the use of integrated product-process design platforms has been identified as a key element for its development. Accordingly, a computer-based methodology exploiting integrated tools and numerical simulations to drive the product and process optimization has been proposed. CAD platform-based approaches have been validated, and the potential offered by integrated tools has been evaluated. Concerning product optimization, systematic approaches to integrate topology optimization into the design have been proposed and validated through the product optimization of an automotive case study. Concerning process optimization, the use of process simulation techniques to prevent manufacturing flaws related to the high thermal gradients of metal processes is developed, with case studies validating the results against experimental data and an application to the process optimization of an automotive case study. Finally, an example of product and process design through the proposed simulation-driven integrated approach is provided to prove the method's suitability for effective redesigns of high-performance metal products based on Additive Manufacturing. The results are then outlined, and further developments are discussed.
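
As a small, self-contained illustration of one building block of density-based topology optimization (not the platform-integrated workflow of the thesis), the sketch below applies the classic optimality-criteria update to element densities, with placeholder sensitivities standing in for a finite-element compliance solve.

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2, damping=0.5):
    """Optimality-criteria density update (as in classic SIMP codes):
    bisection on the Lagrange multiplier of the volume constraint."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        x_new = x * np.maximum(1e-10, -dc / (dv * lmid)) ** damping
        x_new = np.clip(x_new, np.maximum(0.0, x - move), np.minimum(1.0, x + move))
        x_new = np.clip(x_new, 1e-3, 1.0)
        if x_new.mean() > volfrac:      # too much material: raise the multiplier
            l1 = lmid
        else:
            l2 = lmid
    return x_new

# Placeholder sensitivities in place of a finite-element compliance solve.
n = 400
x = np.full(n, 0.4)                     # initial uniform densities
dc = -np.random.rand(n)                 # compliance sensitivities (always <= 0)
dv = np.ones(n)                         # volume sensitivities
x = oc_update(x, dc, dv, volfrac=0.4)
print(f"mean density after update: {x.mean():.3f}")
```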

Relevance:

30.00%

Publisher:

Abstract:

The world currently faces a paradox in terms of accessibility for people with disabilities. While digital technologies hold immense potential to improve their quality of life, the majority of web content still exhibits critical accessibility issues. This PhD thesis addresses this challenge through two interconnected research branches. The first introduces a groundbreaking approach to improving web accessibility by rethinking how it is approached, making it more accessible itself. It involves the development of: 1. AX, a declarative framework of web components that enforces the generation of accessible markup by means of static analysis. 2. An innovative accessibility testing and evaluation methodology, which communicates test results by exploiting concepts that developers are already familiar with (visual rendering and mouse operability) to convey the accessibility of a page; this methodology is implemented through the SAHARIAN browser extension. 3. A11A, a categorized and structured collection of curated accessibility resources aimed at helping their intended audiences discover and use them. The second branch focuses on unleashing the full potential of digital technologies to improve accessibility in the physical world. The thesis proposes the SCAMP methodology to make scientific artifacts accessible to blind and visually impaired individuals as well as the general public. It enhances the natural characteristics of objects, making them more accessible through interactive, multimodal, and multisensory experiences. Additionally, a prototype of A11yVT, a system supporting accessible virtual tours, is presented. It provides blind and visually impaired individuals with the features necessary to explore unfamiliar indoor environments, while maintaining universal design principles that make it suitable for use by the general public. The thesis extensively discusses the theoretical foundations, design, development, and unique characteristics of these innovative tools. Usability tests with the intended target audiences demonstrate the effectiveness of the proposed artifacts, suggesting their potential to significantly improve the current state of accessibility.
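
In the spirit of the markup-level checks described above, though far simpler than AX's static analysis or the SAHARIAN methodology, the following standard-library sketch scans HTML for img elements that lack alternative text.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flag <img> elements that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.problems.append(f"<img src={attrs.get('src', '?')!r}> has no alt attribute")

# Tiny usage example on an inline snippet.
checker = MissingAltChecker()
checker.feed('<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>')
for problem in checker.problems:
    print("accessibility issue:", problem)
```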