727 results for informatica forense, computer forensics, best practice, alterazioni, dati, sistema operativo, Windows XP
Abstract:
Security problems in software are on the rise, and the analysis tools adopted on GNU/Linux systems do not make it possible to highlight the vulnerability windows to which a package has been exposed. The goal of this thesis is to develop a computer forensics tool able to reconstruct, by cross-referencing information obtained from the package manager with official security advisories, the security problems that may have caused a compromise of the system under examination.
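A minimal sketch of the cross-referencing idea follows, assuming a Debian-style host: it reads package upgrade events from /var/log/dpkg.log and compares them against advisory records. The advisory list and its fields are hypothetical; a real tool would pull them from official DSA/USN feeds.

# Minimal sketch: estimate the "vulnerability window" of installed packages by
# cross-referencing dpkg upgrade history with (hypothetical) advisory records.
# Assumes the multiarch dpkg.log format: "<date> <time> upgrade <pkg>:<arch> <old> <new>".
import re
from datetime import datetime

# Hypothetical advisory records: package name, version that fixes the issue,
# and the advisory publication date.
ADVISORIES = [
    {"package": "openssl", "fixed_version": "1.0.1e-2+deb7u7", "published": datetime(2014, 4, 7)},
]

UPGRADE_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) upgrade (\S+):\S+ \S+ (\S+)"
)

def upgrade_history(log_path="/var/log/dpkg.log"):
    """Yield (timestamp, package, new_version) for every upgrade in dpkg's log."""
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = UPGRADE_RE.match(line)
            if match:
                when, package, new_version = match.groups()
                yield datetime.strptime(when, "%Y-%m-%d %H:%M:%S"), package, new_version

def vulnerability_windows(log_path="/var/log/dpkg.log"):
    """For each advisory, report how long the host stayed on a pre-advisory version."""
    upgrades = list(upgrade_history(log_path))
    for advisory in ADVISORIES:
        fixes = [t for t, pkg, ver in upgrades
                 if pkg == advisory["package"] and ver == advisory["fixed_version"]]
        if fixes:
            window = min(fixes) - advisory["published"]
            print(f'{advisory["package"]}: patched after {window.days} day(s)')
        else:
            print(f'{advisory["package"]}: fixed version never installed (window still open)')

if __name__ == "__main__":
    vulnerability_windows()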
Abstract:
This is a collection of 10 video tutorials that can be used as educational material in introductory phonetics courses at university level. Videos 1-3 deal with aspects related to recording: the types of microphones used, the kinds of spaces in which audio signal capture is usually carried out, and the recorders commonly employed. Video 4 explores techniques for capturing and observing airflow and pressure data in aerodynamic phonetics. Videos 5-10 present the main uses of the program Praat (Boersma and Weenink, 2014) in current acoustic phonetics research, from the kind of information about consonant manners of articulation that can be identified in oscillograms, to the creation of synthesized sound signals by means of procedures the program provides for that purpose, which can in turn be used in auditory perception experiments.
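The Praat analyses the videos demonstrate in the GUI can also be scripted; the sketch below uses the parselmouth Python bindings for Praat to pull a pitch track and a spectrogram from a recording. The bindings and the file name are assumptions here, not part of the tutorial material.

# Minimal sketch (assumed setup): scripting two common Praat analyses through the
# parselmouth bindings instead of the Praat GUI shown in the tutorial videos.
import parselmouth

def basic_acoustics(wav_path):
    sound = parselmouth.Sound(wav_path)          # load the recording
    pitch = sound.to_pitch()                     # F0 track, default analysis settings
    spectrogram = sound.to_spectrogram()         # broadband spectrogram object
    f0 = pitch.selected_array["frequency"]       # Hz values, 0 where unvoiced
    voiced = f0[f0 > 0]
    print(f"duration: {sound.xmax - sound.xmin:.2f} s")
    print(f"mean F0 over voiced frames: {voiced.mean():.1f} Hz")
    return pitch, spectrogram

if __name__ == "__main__":
    basic_acoustics("vowel_sample.wav")          # hypothetical file name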
Abstract:
Ubuntu Touch is a new operating system for desktops and phones, born from the need to unite heterogeneous systems under a single platform. The Touch infrastructure, which guarantees convergence across different devices, is based on the innovative Mir display server and on the Unity graphical interface. The application model is considerably improved: applications are confined through AppArmor and exchange content with one another via the Content-Hub service. The supported development tools are web technologies (HTML5 and JavaScript) and C++ on the Qt framework (with the option of using QML). System (core) updates are either partial, through "delta" archives that introduce only the necessary changes, or full, overwriting the whole device image. Development with the Ubuntu SDK is fast and agile; the handling of emulators is notable, although some features are still missing. Scopes, content aggregators independent of any single application, are a true innovation in Ubuntu. To experiment with this technology, a scope is developed for searching books in the Gian Paolo Dore Library of the Faculty of Engineering in Bologna.
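As an illustration of what such a scope does at query time, the sketch below sends the user's query to a catalogue endpoint and maps each hit onto the fields a result card displays. The endpoint URL and JSON layout are hypothetical, and real Ubuntu Touch scopes are normally written in C++ or Go against the unity-scopes API rather than in Python.

# Minimal sketch of the query step a library-search scope performs: send the user's
# query to a catalogue endpoint and map each hit to the fields a result card shows.
# The endpoint URL and the JSON field names below are hypothetical examples.
import requests

CATALOGUE_URL = "https://opac.example.org/search"   # hypothetical OPAC endpoint

def search_books(query, limit=20):
    response = requests.get(CATALOGUE_URL, params={"q": query, "rows": limit}, timeout=10)
    response.raise_for_status()
    results = []
    for record in response.json().get("records", []):    # hypothetical response shape
        results.append({
            "title": record.get("title", ""),
            "subtitle": record.get("author", ""),
            "art": record.get("cover_url", ""),           # thumbnail shown on the card
        })
    return results

if __name__ == "__main__":
    for card in search_books("operating systems"):
        print(card["title"], "-", card["subtitle"])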
Abstract:
To begin with, an analysis was made of the different options that could be implemented to achieve the objective of this work: to have machines whose operating system is independent of the hardware installed in them. To that end, it was decided to install a base operating system on all the computers in the laboratory, providing only the minimum requirements, namely a graphical interface and a network connection. Resource consumption must be kept as low as possible with this minimal operating system so that the machines perform as smoothly as possible for the users. The system chosen was Linux, in its Ubuntu distribution [ubu, http], with the minimum set of modules that allows the required software to run. Once the host operating system is installed, the Xfce desktop [ubu2, http] is installed on top of it; it is the lightest desktop for Ubuntu, yet provides good performance. Next, virtualization software was installed on each computer; Oracle VirtualBox [vir2, http] was chosen for the good performance it offers. On top of this software, as many virtual machines (with a Windows operating system) are created as there are different subjects taught in the laboratory in question. In this way, when the program starts, students can choose which machine to boot and, more importantly, any change can be made to the hardware (except the hard disk, since that would erase everything stored on it). Besides avoiding having to reinstall the operating system, abstraction from both software and hardware is achieved. It was also decided to use a NAS server to keep a backup of the virtual machines created in VirtualBox; one of the reasons for using such a server was to take advantage of an infrastructure that already existed. A NAS server makes it possible to recover any file (virtual machine) whenever needed, for instance because a virtual machine has become corrupted on one or more computers. This type of server has the great advantage of being multicast, that is, of allowing simultaneous requests.
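A minimal sketch of the per-course launcher idea follows, assuming the virtual machines have already been registered with VirtualBox on the host; the course/VM names are hypothetical examples.

# Minimal sketch: list the VirtualBox machines registered on the host and start the
# one matching the chosen subject, with its GUI visible.
import subprocess

def registered_vms():
    """Return the names of all VMs registered with VirtualBox on this host."""
    out = subprocess.run(["VBoxManage", "list", "vms"],
                         capture_output=True, text=True, check=True).stdout
    # Each line looks like: "Networks" {uuid}
    return [line.split('"')[1] for line in out.splitlines() if '"' in line]

def start_course_vm(course_name):
    """Boot the Windows VM prepared for the selected subject."""
    if course_name not in registered_vms():
        raise ValueError(f"No VM registered for course: {course_name}")
    subprocess.run(["VBoxManage", "startvm", course_name, "--type", "gui"], check=True)

if __name__ == "__main__":
    start_course_vm("Networks")   # hypothetical per-course VM name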
Abstract:
The objective of this thesis is to experiment with the new Windows 10 IoT Core operating system on the Raspberry Pi 2, verifying its compatibility with a number of commercially available sensors. The study is then applied in a Home Intelligence context in order to create an agent for managing LED lights, with a view to its integration into the Home Manager prototype system.
Abstract:
This study was undertaken to examine the influence that a set of Professional Development (PD) initiatives had on faculty use of Moodle, a well-known Course Management System. The context of the study was a private language university just outside Tokyo, Japan. Specifically, it aimed to identify the way in which the PD initiatives adhered to professional development best practice criteria; how faculty members perceived the PD initiatives; what impact the PD initiatives had on faculty use of Moodle; and what other variables may have influenced faculty in their use of Moodle. The study utilised a mixed methods approach. Participants were 42 teachers who worked at the university in the academic year 2008/9. Data were collected through an online survey, semi-structured face-to-face interviews, post-workshop surveys, and a collection of textual artefacts; the online survey consisted of 115 items, factored into 10 constructs. The quantitative data were analysed in SPSS using descriptive statistics, Spearman's rank-order correlation tests and a Kruskal-Wallis means test. The qualitative data were used to develop and expand findings and ideas. The results indicated that the PD initiatives adhered closely to technology-related professional development best practice criteria. Further, results from the online survey, post-workshop surveys, and follow-up face-to-face interviews indicated that while the PD initiatives were positively perceived by faculty, they did not have the anticipated impact on faculty use of Moodle. Further results indicated that other variables, such as perceptions of Moodle and institutional issues, had a considerable influence on Moodle use. The findings further strengthened the idea that the five variables Everett Rogers lists in his Diffusion of Innovations model (perceived attributes of an innovation; type of innovation decision; communication channels; nature of the social system; and extent of change agents' promotion efforts) most influence the adoption of an innovation. However, the results also indicated that some of the variables in Rogers' DOI seem to have more influence than others, particularly the perceived attributes of an innovation. In addition, the findings could serve to inform universities that have Course Management Systems (CMSs), such as Moodle, about how to utilise them most efficiently and effectively, and about how to help faculty members acquire the skills necessary to incorporate CMSs into curricula and teaching practice. A limitation of this study was the use of a non-randomised sample, which restricts the generalisability of the findings to this particular Japanese context.
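The two non-parametric tests named above are straightforward to reproduce; the sketch below runs them with scipy on made-up Likert-style vectors, since the study's data are not included here.

# Minimal sketch of the two non-parametric tests named in the abstract, run on
# made-up Likert-style survey vectors (the study's actual data are not reproduced here).
from scipy import stats

perceived_usefulness = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]   # hypothetical construct scores
moodle_use           = [3, 5, 2, 4, 1, 4, 4, 2, 3, 5]

# Spearman's rank-order correlation between two survey constructs.
rho, p_rho = stats.spearmanr(perceived_usefulness, moodle_use)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")

# Kruskal-Wallis H test comparing Moodle-use scores across three faculty groups.
group_a, group_b, group_c = [3, 4, 5, 4], [2, 3, 2, 3], [4, 5, 5, 4]
h, p_h = stats.kruskal(group_a, group_b, group_c)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_h:.3f}")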
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies, etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality: many ways of classifying information are independent of each other and can be applied in combination. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article, two implementations, carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
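As a rough illustration of the layered naming idea, the sketch below composes a layer name from the information categories the abstract lists (responsible party, building element, graphics versus annotation). The field widths and code values are hypothetical rather than the normative ISO 13567 tables.

# Minimal sketch of composing a layer name from the information categories the
# abstract describes. Field widths and codes here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class LayerName:
    agent: str        # party responsible for the information, e.g. "A-" for architect
    element: str      # building element classification code
    presentation: str # direct graphics of a building part vs. annotation for output drawings

    def compose(self) -> str:
        # Fixed-width concatenation so that layer names stay machine-sortable.
        return f"{self.agent:<2}{self.element:<6}{self.presentation:<2}".upper()

# Hypothetical example: architect-owned wall geometry drawn as direct graphics.
print(LayerName(agent="A-", element="21----", presentation="E-").compose())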
Abstract:
Human service organizations are increasingly using knowledge as a mechanism for implementing change. Knowledge emerging from many sources that may include academic publications, grey literature, and service user and practitioner wisdom contributes toward informing best practice. The question is: how do we harness this knowledge to make practice more effective? This paper synthesizes the lessons learned from eight international organizations that have made a commitment to knowledge mobilization as an important priority in their mission and operation. The paper provides a conceptual model, tools and resources to help human services organizations create strategies for building, enhancing or sustaining their knowledge mobilization efforts. The paper describes a flexible blueprint for human service organizations to leverage knowledge mobilization efforts at all levels of service delivery.
Abstract:
Planning a project with proper consideration of all necessary factors, and managing it to ensure successful implementation, face many challenges. The initial stage of planning a project for bidding is costly and time-consuming, and usually yields poor accuracy in cost and effort predictions. On the other hand, detailed information on previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolios have been brought into this field with the aim of improving information sharing and management among different projects; however, the amount of information that can be shared is still limited to generic information. In this paper we report a recently developed software system, COBRA, which automatically generates a project plan with effort estimates of time and cost based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method using product-based planning from the PRINCE2 methodology. (Automated Project Information Sharing and Management System - COBRA) Keywords: project management, product-based planning, best practice, PRINCE2
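The abstract does not detail COBRA's estimation model; the sketch below illustrates one plausible reading of effort estimation from completed projects, averaging the recorded effort of matching products in a product-based plan. The product names and the historical records are made up for illustration.

# Minimal sketch of effort estimation from completed projects, in the spirit of
# reusing historical data for a product-based plan. Not COBRA's actual model.
from statistics import mean

# Hypothetical history: product type -> effort (person-days) recorded in past projects.
HISTORY = {
    "requirements specification": [12, 15, 10],
    "system design": [20, 25, 18],
    "user documentation": [8, 6, 9],
}

def estimate_plan(products):
    """Return (per-product estimates, total effort) for a product-based plan."""
    estimates = {}
    for product in products:
        past_efforts = HISTORY.get(product)
        # Average comparable past efforts; flag products with no history.
        estimates[product] = mean(past_efforts) if past_efforts else None
    known = [e for e in estimates.values() if e is not None]
    return estimates, sum(known)

if __name__ == "__main__":
    plan = ["requirements specification", "system design", "user documentation"]
    per_product, total = estimate_plan(plan)
    for name, effort in per_product.items():
        print(f"{name}: {effort} person-days")
    print(f"total estimated effort: {total} person-days")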
Abstract:
A study that examines in depth and compares the different methodologies and techniques that can be used for the analysis of mobile phones, and smartphones in particular, in the context of mobile device forensics investigations.
Abstract:
Digital evidence requires the adoption of precautions, as in any other scientific examination. An overview is provided of the methodological and practical aspects of digital forensics in light of the recent standard ISO/IEC 27037:2012, concerning the handling of digital exhibits during the identification, collection, acquisition and preservation of digital data. These methodologies scrupulously follow the integrity and authenticity requirements imposed by the legislation on digital forensics, in particular Law 48/2008 ratifying the Budapest Convention on Cybercrime. With regard to the offence of child pornography, a review of EU and national legislation is offered, with emphasis on the aspects relevant to forensic analysis. Since file sharing on peer-to-peer networks is the channel on which the exchange of illicit material is most concentrated, an overview is given of the most widespread protocols and systems, with emphasis on the eDonkey network and the eMule software, which are widely used among Italian users. The problems encountered by law enforcement in investigating and suppressing the phenomenon are briefly addressed, before presenting the main contribution on the forensic analysis of computer systems seized from persons under investigation (or indicted) for child-pornography offences: the design and implementation of eMuleForensic, which makes it possible to analyse, extremely precisely and quickly, the events that occur when the eMule file-sharing software is used; the software is available both online at http://www.emuleforensic.com and as a tool within the DEFT forensic distribution. Finally, a proposal is made for an operational protocol for the forensic analysis of computer systems involved in child-pornography investigations.
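The integrity and authenticity requirements that ISO/IEC 27037 places on acquisition and preservation are typically met by hashing the acquired material and verifying the digest later in the chain of custody; a minimal sketch follows (the image path is a hypothetical example).

# Minimal sketch of the integrity step during acquisition and preservation:
# hash the acquired image and verify the same digest later in the chain of custody.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so arbitrarily large images can be hashed."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_digest):
    """Return True if the preserved copy still matches the digest recorded at acquisition."""
    return sha256_of(path) == expected_digest

if __name__ == "__main__":
    acquired = sha256_of("evidence/disk01.dd")        # recorded at acquisition time
    print("acquisition digest:", acquired)
    print("still intact:", verify("evidence/disk01.dd", acquired))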
Abstract:
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these are the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks in the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, the industry-standard tools that have been developed are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour expended, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed in order to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture through harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improve speed and reduce the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention.
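A minimal sketch of the multi-connection acquisition paradigm the experiments point to is given below: it pulls byte ranges of a remote evidence source over several parallel HTTP connections instead of a single TCP stream. The host, path and sizes are hypothetical, and this illustrates the paradigm rather than the LEIA implementation itself.

# Minimal sketch: fetch different ranges of a remote evidence image over several
# parallel connections and reassemble them locally. Endpoint and sizes are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import requests

SOURCE_URL = "https://device.example.net/evidence/userdata.img"   # hypothetical source
CHUNK = 8 * 1024 * 1024                                           # 8 MiB per request

def fetch_range(start, end):
    """Download bytes [start, end] of the remote image over its own connection."""
    response = requests.get(SOURCE_URL, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
    response.raise_for_status()
    return start, response.content

def acquire(total_size, workers=8, out_path="userdata.img"):
    ranges = [(offset, min(offset + CHUNK, total_size) - 1)
              for offset in range(0, total_size, CHUNK)]
    with open(out_path, "wb") as out, ThreadPoolExecutor(max_workers=workers) as pool:
        for start, data in pool.map(lambda r: fetch_range(*r), ranges):
            out.seek(start)        # write each chunk at its original offset
            out.write(data)

if __name__ == "__main__":
    acquire(total_size=256 * 1024 * 1024)   # hypothetical 256 MiB image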
Abstract:
The purpose of this study was to compare the effects of three student response conditions during computer-assisted instruction on the acquisition and maintenance of social-studies facts. Two of the conditions required active student responding (ASR), whereas the third required an on-task (OT) response. Participants were five fifth-grade students with learning disabilities enrolled in a private school. An alternating treatments design with a best-treatments phase was used to compare the effects of the response procedures on three major dependent measures: same-day tests, next-day tests, and maintenance tests. Each week for six weeks, participants received daily one-to-one instruction on sets of 21 unknown social-studies facts using a hypermedia computer program, with a new set of facts practised each week. Each set of 21 facts was divided randomly into three conditions: Clicking-ASR, Repeating-ASR, and Listening-OT. Hypermedia lessons began each week with a concept-introduction lesson, followed by practice and testing; practice and testing occurred four days per week per set. During Clicking-ASR, practice involved selecting a social-studies response by clicking on an item with the mouse on the hypermedia card. Repeating-ASR instruction required students to orally repeat the social-studies facts when prompted by the computer. During Listening-OT, students listened to the social-studies facts being read by the computer. During weeks seven and eight, instruction occurred with seven unknown facts using only the best treatment. Test results show that, for all five students, the Repeating-ASR practice procedure resulted in more social-studies facts stated correctly on same-day tests, next-day tests, and one- and two-week maintenance tests. Clicking-ASR was the next most effective procedure. During the seventh and eighth weeks of instruction, when only the best practice condition was implemented, Repeating-ASR produced higher scores than all conditions (including Repeating-ASR) during the first six weeks of the study. The results lend further support to the growing body of literature that demonstrates the positive relation between ASR and student achievement. Much of the ASR literature has focused on the effects of increased ASR during teacher-led or peer-mediated instruction. This study adds a dimension to that research in that it demonstrated the importance of ASR during computer-assisted instruction and further suggests that the type of ASR used during computer-assisted instruction may influence learning. Future research is needed to investigate the effectiveness of other types of ASR during computer-assisted instruction and to identify other fundamental characteristics of effective computer-assisted instruction.
Abstract:
Recent developments in interactive technologies have seen major changes in the manner in which artists, performers, and creative individuals interact with digital music technology; this is due to the increasing variety of interactive technologies that are readily available today. Digital Musical Instruments (DMIs) present musicians with performance challenges that are unique to this form of computer music. One of the most significant deviations from conventional acoustic musical instruments is the level of physical feedback conveyed by the instrument to the user. Currently, new interfaces for musical expression are not designed to be as physically communicative as acoustic instruments. Specifically, DMIs are often devoid of haptic feedback and therefore lack the ability to impart important performance information to the user. Moreover, there is currently no standardised way to measure the effect of this lack of physical feedback. Best practice would suggest that there should be a set of methods to evaluate the functionality, usability, and user experience of DMIs effectively, repeatably, and quantifiably. Earlier theoretical and technological applications of haptics have tried to address device performance issues associated with the lack of feedback in DMI designs, and it has been argued that the level of haptic feedback presented to a user can significantly affect the user's overall emotive feeling towards a musical device. The outcomes of the investigations contained within this thesis are intended to inform new haptic interface design.