845 results for Computer operating systems


Relevance:

90.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014

Relevance:

90.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015

Relevance:

90.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016

Relevance:

90.00%

Publisher:

Abstract:

Fast-spreading unknown viruses have caused major damage to computer systems upon their initial release, and current detection methods lack the capability to detect unknown viruses quickly enough to avoid mass spreading and damage. This dissertation presents a behavior-based approach to detecting known and unknown viruses through their attempts to replicate. Replication is the qualifying, fundamental characteristic of a virus and is consistently present in all viruses, making this approach applicable to viruses belonging to many classes and executing under several conditions. A form of replication called self-reference replication (SR-replication) is formalized as one main type of replication, in which a virus replicates by modifying or creating other files on a system to include the virus itself. This replication type was used to detect viruses attempting replication by referencing themselves, a necessary step in successful replication. The approach does not require a priori knowledge about known viruses. Detection is accomplished at runtime by monitoring currently executing processes that attempt to replicate. Two implementation prototypes of the detection approach, called SRRAT, were created and tested on Microsoft Windows operating systems, focusing on the tracking of user-mode Win32 API system calls and kernel-mode system services. The results showed SR-replication capable of distinguishing between file-infecting viruses and benign processes with few or no false positives and false negatives.
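
The SRRAT prototypes hook user-mode Win32 API calls and kernel-mode system services; those hooks are platform-specific and are not reproduced here. Purely as an illustration of the self-reference test itself (my own simplification, not the dissertation's implementation), the following Python sketch checks whether a file written by a process embeds the writer's own executable image:

```python
import sys
from pathlib import Path

def is_self_referencing_write(image_path: str, written_path: str) -> bool:
    """Flag the SR-replication cue: does the file the process wrote
    contain the bytes of the process's own executable image?"""
    image = Path(image_path).read_bytes()
    written = Path(written_path).read_bytes()
    # A runtime monitor like SRRAT intercepts the write calls themselves;
    # this offline check only inspects the finished file.
    return image in written

if __name__ == "__main__":
    # usage: python sr_check.py <writer_executable> <written_file>
    print(is_self_referencing_write(sys.argv[1], sys.argv[2]))
```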

Relevance:

90.00%

Publisher:

Abstract:

Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection, but it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present an integrity-based detection approach for kernel-level malware that does not rely on specific features of the malware.

We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. Our system focuses on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests.

We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity in Linux kernel 2.4.32 and the Windows Research Kernel (WRK), with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, the Invariant Monitor detects ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample.

We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests, and build KQguard on top of them to protect the KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks).

In our system, the Invariant Monitor and KQguard cooperate to protect the data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of them during execution.
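
As a toy illustration of the two integrity properties (not the thesis's kernel-level implementation), the following Python sketch pairs an invariant monitor that hashes protected values against a baseline with a KQ guard that rejects callbacks not seen during training; all names and values are invented for the example:

```python
import hashlib

class InvariantMonitor:
    """Toy analogue of the Invariant Monitor: record a baseline hash
    for each protected value and report any later deviation."""
    def __init__(self):
        self.baseline = {}

    def protect(self, name: str, value: bytes) -> None:
        self.baseline[name] = hashlib.sha256(value).hexdigest()

    def check(self, name: str, value: bytes) -> bool:
        return self.baseline[name] == hashlib.sha256(value).hexdigest()

class KQGuard:
    """Toy analogue of KQguard: accept only request targets seen
    during a training phase; reject everything unknown."""
    def __init__(self, legitimate_callbacks):
        self.legitimate = set(legitimate_callbacks)

    def validate(self, callback: str) -> bool:
        return callback in self.legitimate

# A "rootkit" silently rewrites a protected value and enqueues an
# unknown callback; both tampering attempts are detected.
mon = InvariantMonitor()
mon.protect("syscall_table", b"\x01\x02\x03")
guard = KQGuard(["timer_tick", "dpc_flush"])
print(mon.check("syscall_table", b"\xff\x02\x03"))  # False -> tampered
print(guard.validate("evil_hook"))                  # False -> rejected
```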

Relevance:

90.00%

Publisher:

Abstract:

Unequal improvements in processor and I/O speeds have made many applications, such as databases and operating systems, increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address this problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure-mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors were proposed to address the write problem, and interleaved declustering to address the load-balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes, and also study traditional mirroring to provide a common basis for comparison.
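
A toy simulation can make the load-balancing difference concrete. The sketch below is an assumption-laden model of my own, not the thesis's simulator: it redirects the failed disk's reads either to its single mirror or across all survivors:

```python
import random

def failure_load(n_reads: int, n_disks: int, scheme: str) -> dict:
    """Distribute reads after disk 0 fails.
    'mirror': disk 0's mirror (disk 1) absorbs all of its reads;
    'interleaved': disk 0's data is declustered over every survivor."""
    load = {d: 0 for d in range(1, n_disks)}
    for _ in range(n_reads):
        d = random.randrange(n_disks)     # disk the requested block maps to
        if d == 0:                        # block lived on the failed disk
            d = 1 if scheme == "mirror" else random.randrange(1, n_disks)
        load[d] += 1
    return load

print(failure_load(10_000, 4, "mirror"))       # disk 1's load roughly doubles
print(failure_load(10_000, 4, "interleaved"))  # extra load spread evenly
```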

Relevance:

90.00%

Publisher:

Abstract:

LAPMv2 is a research software solution developed specifically to allow marine scientists to produce geo-referenced visual maps of the seafloor, known as mosaics, from a set of underwater images and navigation data. LAPMv2 has a graphical user interface that guides the user through the steps of the mosaicking workflow. LAPMv2 runs on 64-bit Windows, MacOS X and Linux operating systems. There are two versions for each operating system: (1) the WEB installers (lightweight, but they require an internet connection during installation) and (2) the MCR installers (large files, but they can be installed on computers without an internet connection). The user manual explains how to install and start the program on the different operating systems. Go to http://www.lapm.eu.com for further information about the latest versions of LAPMv2.

Relevance:

90.00%

Publisher:

Abstract:

It is estimated that 1 in 5 people will, at some point in their lives, experience a long-term illness or disability that will impact their day-to-day lives. Access to digital information and technologies can be life-changing and is a necessity for full participation in education, work and society. Specialist assistive technologies, such as screen readers, have been available for many years and are now built into operating systems and devices. In addition, web accessibility standards have been compiled and published since the advent of the World Wide Web over two decades ago. However, internet use by people with disabilities continues to lag significantly behind that of people with no disability, and use of assistive technologies remains lower than it should be, with tools often abandoned. In this seminar we will talk about our work to identify digital accessibility challenges: the barriers experienced by those with disabilities, and how computer scientists can play a part in removing obstacles to access and ease of use. We will discuss some of our projects, focussing on:
• development of assistive technologies for niche groups of users,
• improving accessibility standards to cover a wider range of disabilities,
• creating accessibility training resources for developers and stakeholders,
• embedding accessibility practice within development projects.

Relevance:

90.00%

Publisher:

Abstract:

The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors, depending on the guest system workloads and execution time.
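
The following Python sketch is a toy model of this merging, not any hypervisor's actual code: identical pages from different virtual machines map to one physical copy, and a write simply remaps the writer to a copy of its own:

```python
import hashlib

class PageTable:
    """Toy model of transparent page sharing: identical pages from
    different VMs map to one physical copy; a write remaps the writer."""
    def __init__(self):
        self.physical = {}   # content hash -> page bytes
        self.mapping = {}    # (vm, virtual page) -> content hash

    def map_page(self, vm, vpage, data: bytes):
        h = hashlib.sha256(data).hexdigest()
        self.physical.setdefault(h, data)   # merge if content already present
        self.mapping[(vm, vpage)] = h

    def write(self, vm, vpage, data: bytes):
        # copy-on-write: only the writer's mapping changes; other VMs
        # sharing the old page keep their copy untouched
        self.map_page(vm, vpage, data)

    def physical_pages(self) -> int:
        return len(self.physical)

pt = PageTable()
pt.map_page("vm1", 0, b"zero" * 1024)
pt.map_page("vm2", 0, b"zero" * 1024)   # merged with vm1's identical page
print(pt.physical_pages())              # 1, not 2
```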

Relevance:

90.00%

Publisher:

Abstract:

Nowadays there is almost no crime committed without a trace of digital evidence, and since the advanced functionality of today's mobile devices can be exploited to assist in crime, the need for mobile forensics is imperative. Many of the mobile applications available today, including internet browsers, request the user's permission to access their current location when in use. This geolocation data is subsequently stored and managed by that application's underlying database files. If recovered from a device during a forensic investigation, such GPS evidence and track points can hold major evidentiary value for a case. The aim of this paper is to examine and compare the extent to which geolocation data is available from the iOS and Android operating systems. We focus particularly on geolocation data recovered from internet browsing applications, comparing the native Safari and Browser apps with Google Chrome, downloaded onto both platforms. All browsers were used over a period of several days at various locations to generate comparable test data for analysis. Results show considerable differences not only in the storage locations and formats, but also in the amount of geolocation data stored by the different browsers and on the different operating systems.
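
As an illustration of how such recovered data might be pulled from an application database, here is a Python sketch; the database path, table, and column names are hypothetical placeholders, since real schemas differ per browser, version, and operating system:

```python
import sqlite3

# "geolocation_cache" and the column names below are invented for
# illustration; inspect the recovered database's real schema first.
def extract_track_points(db_path: str):
    """Pull timestamped latitude/longitude rows from a recovered
    browser database (schema assumed for illustration only)."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT timestamp, latitude, longitude "
        "FROM geolocation_cache ORDER BY timestamp"
    ).fetchall()
    con.close()
    return rows

for ts, lat, lon in extract_track_points("recovered/browser_geo.db"):
    print(ts, lat, lon)
```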

Relevance:

90.00%

Publisher:

Abstract:

This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and took its inspiration from other cryptographic libraries and standards. The main inspiration for GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and of three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard, but does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running the Windows 10 operating system. Test results show GEmSysC to be simpler than other libraries in some respects, and that both implementations incur little overhead in computation time compared with the underlying cryptographic libraries themselves. The measured overhead for each cryptographic algorithm is between roughly 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
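
GEmSysC itself is specified as a C API for embedded targets; purely to illustrate the "generic core plus attachable modules" structure, the following Python sketch (an analogy, not the GEmSysC specification) dispatches through a core registry to a SHA-256 module backed by the standard library, which stands in for a wolfSSL or OpenSSL backend:

```python
import hashlib

class CryptoCore:
    """Generic core: a registry that dispatches to attachable
    algorithm modules through one uniform interface."""
    def __init__(self):
        self._modules = {}

    def attach(self, name: str, module) -> None:
        self._modules[name] = module

    def digest(self, name: str, data: bytes) -> bytes:
        return self._modules[name].digest(data)

class Sha256Module:
    """Attachable module wrapping an underlying library (here hashlib,
    standing in for the real C backends)."""
    @staticmethod
    def digest(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

core = CryptoCore()
core.attach("sha256", Sha256Module())
print(core.digest("sha256", b"hello").hex())
# Swapping the backend only means attaching a different module;
# callers of core.digest() are unchanged.
```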

Relevance:

90.00%

Publisher:

Abstract:

Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. This increasing power consumption translates directly into high chip temperatures, which not only raise packaging/cooling costs but also degrade the performance, reliability and life span of computing systems. Moreover, high chip temperature also greatly increases leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from a system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. We first explored the fundamental principles of employing dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We then proposed a novel real-time scheduling method, “M-Oscillations”, to reduce the peak temperature when scheduling a hard real-time periodic task set, and developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended the research from single-core to multi-core platforms: we investigated the energy estimation problem and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. The dissertation concludes with a discussion of future extensions of this research.
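
As a rough illustration of why oscillating between speeds can lower the peak temperature, consider the Python sketch below; it uses an assumed first-order thermal model, dT/dt = a·s³ − b·T, with arbitrary parameters, not the dissertation's model or its M-Oscillations schedule construction:

```python
# Assumed first-order thermal model: dT/dt = a*s**3 - b*T, where s is the
# DVS speed setting. Parameters are arbitrary, for illustration only.
def peak_temp(schedule, a=5.0, b=0.05, dt=0.1, T0=25.0):
    T, peak = T0, T0
    for s in schedule:                   # one speed value per time step
        T += (a * s**3 - b * T) * dt     # forward-Euler temperature step
        peak = max(peak, T)
    return peak

steps = 2000
constant = [1.0] * steps                           # always run at full speed
oscillate = [1.0 if (i // 50) % 2 == 0 else 0.6    # alternate high/low speed
             for i in range(steps)]
print(peak_temp(constant))   # approaches the full-speed steady state (~100)
print(peak_temp(oscillate))  # peaks noticeably lower with these parameters
```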

Relevance:

80.00%

Publisher:

Abstract:

Location-based services have given new impetus to the creativity of mobile application developers. The popularization of devices with built-in positioning capabilities gave rise to applications that manage and present information based on the user's position. Since then, the mobile market has seen the emergence of new categories of applications that take advantage of this capability. Among them, remote device monitoring stands out, having grown in importance in both the consumer and business sectors. This dissertation begins by presenting the state of the art of the different positioning systems, categorized by their effectiveness in indoor or outdoor environments, as well as different protocols for near-real-time communication. It also analyses the current state of the mobile market. The market today comprises several mobile platforms with unique characteristics that make them compete with one another for market share, so a brief study of today's most relevant mobile operating systems is presented. A deeper look is taken at the architecture of Apple's mobile platform, iOS, which served as the basis for developing an optimized solution for locating and monitoring mobile devices. Monitoring implies an intensive use of energy and bandwidth that today's mobile devices cannot sustain. Given the high power consumption of GPS and the limited battery life of these devices, a study is presented of solutions for managing GPS usage in an optimized way. The high cost of the data plans offered by mobile operators is also considered, so solutions that minimize bandwidth usage are explored. From this work comes the EyeGotcha application, which, besides locating other mobile device users in an optimized way, also makes it possible to monitor their actions based on a set of predefined rules. These actions are reported to the monitoring entities automatically, in the form of alerts. With commercialization of the application in mind, a business model is presented that can generate revenue to cover the maintenance costs of the services on which the mobile application depends.
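
As a generic illustration of the idea of managing GPS usage (not EyeGotcha's actual algorithm, whose details are not given here), a sampler might stretch the interval between position fixes when the device is stationary or the battery is low:

```python
# Illustrative duty-cycling policy only; thresholds are invented.
def next_fix_interval(speed_mps: float, battery_pct: float) -> float:
    """Seconds to wait before requesting the next GPS fix."""
    base = 5.0 if speed_mps > 1.0 else 60.0   # moving vs. stationary
    if battery_pct < 20.0:
        base *= 4                             # stretch intervals when low
    return base

print(next_fix_interval(speed_mps=8.0, battery_pct=80.0))  # 5.0
print(next_fix_interval(speed_mps=0.2, battery_pct=15.0))  # 240.0
```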

Relevance:

80.00%

Publisher:

Abstract:

In this work I set out to build a real-time data acquisition system using the parallel port. To achieve this goal, a literature review of real-time operating systems was carried out, highlighting and illustrating the most important milestones in their evolution. This review made it clear why these systems have proliferated relative to the costs they involve, depending on the application, as well as the scientific and technological difficulties researchers have faced and successfully overcome.

For Linux to behave as a real-time system, it must be configured and patched, for example with RTAI or ADEOS. Since there are several kinds of solutions for bringing real-time characteristics to Linux, a study was carried out, with examples, of the kernel architectures most commonly used for this purpose.

Real-time operating systems offer certain services, features, and restrictions that distinguish them from general-purpose operating systems. In line with the goal of this work, and supported by examples, we describe, among other things, how the scheduler works and the concepts of latency and response time. We show that there are only two types of real-time system: 'hard', with rigid timing constraints, and 'soft', which covers firm and soft timing constraints. Tasks were classified according to the kinds of events that trigger them, highlighting their main characteristics.

The real-time system chosen to build the parallel-port data acquisition system was RTAI/Linux. To better understand its behaviour, we studied RTAI's services and functions, paying particular attention to inter-task and inter-process communication services (shared memory and FIFOs), scheduling services (types of schedulers and tasks), and interrupt handling (the interrupt service routine, ISR). The study of these services drove the choices made regarding the communication method between tasks and services, as well as the type of task to use (sporadic or periodic).

Since the physical communication medium between the external environment and the hardware used in this work is the parallel port, we also needed to understand how this interface works, namely the parallel port's configuration registers. It was thus possible to configure it at the hardware (BIOS) and software (kernel module) levels to suit the goals of this work, optimizing the use of the parallel port, in particular by increasing the number of bits available for reading data.

The hard real-time task was developed with the considerations above in mind. A sporadic task was developed, since the intent was to read data from the parallel port only when necessary (on interrupt), that is, when data were available to read. We also developed an application to display the data collected through the parallel port. Communication between the task and the application is provided through shared memory: provided data consistency is guaranteed, communication between Linux processes and the real-time (RTAI) tasks running at kernel level becomes very simple.

To evaluate the performance of the developed system, a soft real-time task was created and its response times were compared with those of the hard real-time task. The timing responses obtained with a logic analyzer, together with plots produced from these data, demonstrate the benefits of the real-time parallel-port data acquisition system using a hard real-time task.
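
RTAI's shared-memory and FIFO services are kernel-side C APIs and are not reproduced here; purely as a user-space analogy of the acquisition-task-to-viewer handoff, the following Python sketch passes samples to a reader through one shared, lock-protected variable:

```python
from multiprocessing import Process, Value
import time

def acquire(latest):
    # Stand-in for the interrupt-driven parallel-port reads of the
    # original system: write each new sample into the shared buffer.
    for sample in (12, 34, 56):
        latest.value = sample
        time.sleep(0.01)

if __name__ == "__main__":
    latest = Value("i", 0)   # one shared integer, analogous to shared memory
    p = Process(target=acquire, args=(latest,))
    p.start()
    p.join()
    print(latest.value)      # the viewer side reads the last sample: 56
```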