376 results for Linux


Relevance:

20.00%

Publisher:

Abstract:

This thesis aims to implement, in a Linux environment, a synchronization application called DTNbox that allows files to be exchanged between two nodes of a network classifiable as a Delay-/Disruption-Tolerant Network (DTN), i.e. a network in which delays, disruptions and partitioning make it impossible to use the usual TCP/IP network architecture. These problems clearly make folder synchronization far more complex than on the ordinary Internet, which explains the peculiarities of DTNbox compared with other networked applications: synchronization through a central node, as in Dropbox and similar services, is not possible, so communication must be peer-to-peer. The work of this thesis therefore developed along three main directions: implementing, in the C programming language, the functionality foreseen by the new project for Linux; integrating and modifying the parts found lacking as partial tests revealed the need; and verifying its correct operation. Priority was given to writing the fundamental parts of the program, namely the control modules, the structure and management of the database, and the exchange of messages between two nodes of a DTN, in order to reach a first working version of the program, so that future theses can concentrate on developing a graphical interface and adding new commands and accessory features. The resulting program was then tested on virtual machines using the Virtualbricks tool.
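
As a purely illustrative sketch of the message exchange described above, the C fragment below packs a hypothetical folder-synchronization command into a flat buffer ready to be handed to a DTN bundle agent; the struct fields, command codes and packing format are assumptions for the example and do not reproduce DTNbox's actual protocol.

```c
/* Hypothetical sketch of a DTNbox-style update message: the real DTNbox
 * message format is not reproduced here, only the general idea of
 * serializing a folder-synchronization command for transfer as a DTN bundle. */
#include <stdint.h>
#include <stdio.h>

#define MAX_PATH 256

struct sync_msg {
    char     folder[MAX_PATH];   /* synchronized folder name               */
    char     file[MAX_PATH];     /* file to add, update or delete          */
    uint32_t command;            /* e.g. 0 = update, 1 = delete (assumed)  */
    uint64_t timestamp;          /* last-modification time of the file     */
};

/* Serialize the message into a flat buffer that can then be handed to the
 * bundle-protocol API of the local DTN daemon. */
static size_t sync_msg_pack(const struct sync_msg *m, char *buf, size_t len)
{
    int n = snprintf(buf, len, "%s;%s;%u;%llu",
                     m->folder, m->file,
                     (unsigned)m->command,
                     (unsigned long long)m->timestamp);
    return (n < 0 || (size_t)n >= len) ? 0 : (size_t)n;
}

int main(void)
{
    struct sync_msg m = { "photos", "trip.jpg", 0, 1700000000ULL };
    char buf[640];
    size_t n = sync_msg_pack(&m, buf, sizeof buf);
    printf("packed %zu bytes: %s\n", n, buf);
    return 0;
}
```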

Relevance:

20.00%

Publisher:

Abstract:

This paper describes an experiment in designing, implementing and testing a Transport layer cluster scheduling and dispatching architecture. The motivation for the experiment was the hypothesis that a Transport layer clustering solution may offer advantages over the existing industry-standard Network layer and Data Link layer approaches. The critical success factors initially established to guide and evaluate the experiment were reduced dispatcher workload, reduced dispatcher internal state memory requirements, distributed denial of service resilience, and cluster software design simplicity. The functional design stage of the experiment produced a Transport layer strategy for scheduling and load balancing based on the specification of two new TCP options. Implementation required the introduction of the newly specified TCP options into the Linux (2.4) kernel. The implementation produced an extended Linux Socket API to facilitate user-process access to the additional TCP capability. The testing stage of the experiment confirmed the operational efficiency of the solution.
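
The paper's extended socket API is not reproduced here, but the hedged sketch below shows the general shape such user-process access would take on Linux: a connection requests a non-standard TCP capability through setsockopt. TCP_CLUSTER_DISPATCH is a made-up option name and number standing in for whatever the patched 2.4 kernel actually exposed.

```c
/* Illustrative sketch only: the thesis-specific TCP options and their socket
 * API names are not public, so TCP_CLUSTER_DISPATCH below is a hypothetical
 * constant standing in for the extension added to the Linux 2.4 socket API. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

#define TCP_CLUSTER_DISPATCH 200   /* hypothetical option number, not in mainline Linux */

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int enable = 1;
    /* On a patched kernel this would turn on the cluster-dispatching TCP
     * capability for this connection; on a stock kernel it simply fails. */
    if (setsockopt(s, IPPROTO_TCP, TCP_CLUSTER_DISPATCH,
                   &enable, sizeof enable) < 0)
        perror("setsockopt (expected to fail on an unpatched kernel)");

    close(s);
    return 0;
}
```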

Relevance:

20.00%

Publisher:

Abstract:

The paper describes two new transport layer (TCP) options and an expanded transport layer queuing strategy that facilitate three functions fundamental to a dispatching-based clustered service. One transport layer option has been developed to facilitate the use of client wait time data within the service request processing of the cluster. A second transport layer option has been developed to facilitate the redirection of service requests by the cluster dispatcher to the cluster processing member. An expanded transport layer service request queuing strategy facilitates the trust-based filtering of incoming service requests so that a graceful degradation of service delivery may be achieved during periods of overload, most dramatically evidenced by distributed denial of service attacks against the clustered service. We describe how these new options and queues have been implemented and successfully tested within the transport layer of the Linux kernel.
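
To make the two options concrete, the sketch below lays out plausible on-the-wire structures for a "client wait time" option and a "dispatcher redirect" option using the standard TCP option layout (kind, length, value) from RFC 793; the field choices are assumptions, and the kind numbers 253 and 254 are simply the experimental option kinds reserved by RFC 4727, not the values used in the paper.

```c
/* Generic TCP option layout (kind, length, value) as defined by RFC 793;
 * the concrete kinds and payloads chosen in the paper are not public, so
 * the structures below are placeholders for illustration only. */
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
struct tcp_opt_client_wait {      /* hypothetical "client wait time" option     */
    uint8_t  kind;                /* option kind (experimental placeholder)     */
    uint8_t  len;                 /* total option length in bytes               */
    uint32_t wait_ms;             /* time the client has already waited         */
};

struct tcp_opt_redirect {         /* hypothetical "dispatcher redirect" option  */
    uint8_t  kind;
    uint8_t  len;
    uint32_t member_addr;         /* IPv4 address of the chosen cluster member  */
    uint16_t member_port;
};
#pragma pack(pop)

int main(void)
{
    struct tcp_opt_client_wait w = { 253, sizeof w, 120 };          /* 253/254: RFC 4727 experimental kinds */
    struct tcp_opt_redirect    r = { 254, sizeof r, 0x0A000001, 8080 };
    printf("wait option: %u bytes, redirect option: %u bytes\n",
           (unsigned)w.len, (unsigned)r.len);
    return 0;
}
```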

Relevance:

20.00%

Publisher:

Abstract:

To exploit the popularity of TCP, still the dominant protocol of choice for transporting data reliably across the heterogeneous Internet, this thesis explores end-to-end performance issues and behaviours of TCP senders when transferring data to wireless end-users. The focus throughout is on end-users located within 802.11 WLANs at the edges of the Internet, a largely untapped area of work. To serve researchers wanting to study the performance of TCP accurately over heterogeneous conditions, this thesis proposes a flexible wired-to-wireless experimental testbed that better reflects conditions in the real world. To examine the interplay between TCP in the wired domain and the IEEE 802.11 WLAN protocols, this thesis proposes a more accurate methodology for gauging the transmission and error characteristics of real-world 802.11 WLANs, and aims to correlate the findings with the behaviour of fixed TCP senders. To exploit the popularity of Linux as the operating system of many of the Internet's data servers, this thesis studies and evaluates various sender-side TCP congestion control implementations within the recent Linux v2.6. A selection of the implementations is put under systematic testing using real-world wired-to-wireless conditions in order to screen and present viable candidates for further development and use in the modern-day heterogeneous Internet. Overall, this thesis comprises a set of systematic evaluations of TCP senders over 802.11 WLANs, incorporating measurements in the form of simulations, emulations, and a real-world-like experimental testbed. The goal of the work is to investigate all aspects concerned comprehensively in order to establish rules that help decide under which circumstances the deployment of TCP is optimal, i.e. a set of paradigms for advancing the state of the art in data transport across the Internet.
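
A minimal sketch of how such sender-side implementations are selected for testing on Linux (kernel 2.6.13 and later): the TCP_CONGESTION socket option chooses the congestion control module used by a connection. The "westwood" module named here is only an example and must be built into or loaded in the running kernel.

```c
/* Selecting a sender-side congestion control module per connection via the
 * TCP_CONGESTION socket option, the mechanism through which different Linux
 * 2.6 implementations (e.g. "cubic", "westwood") can be compared. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    const char *cc = "westwood";   /* example module; must be available in the kernel */
    if (setsockopt(s, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
        perror("setsockopt TCP_CONGESTION");

    char in_use[16] = "";
    socklen_t len = sizeof in_use;
    if (getsockopt(s, IPPROTO_TCP, TCP_CONGESTION, in_use, &len) == 0)
        printf("congestion control in use: %s\n", in_use);

    close(s);
    return 0;
}
```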

Relevance:

20.00%

Publisher:

Abstract:

Raster-graphic ampelometric software was developed not only for the estimation of leaf area, but also for the characterization of grapevine (Vitis vinifera L.) leaves. The software was written in the C++ programming language, using C++ Builder 2007, for Windows 95-XP and Linux operating systems. It handles desktop-scanned images. On the image analysed with GRA.LE.D., the user has to mark 11 points. These points are then connected and the distances between them calculated. The GRA.LE.D. software supports standard ampelometric measurements such as leaf area, angles between the veins and lengths of the veins. These measurements are recorded by the software and exported into plain ASCII text files for single or multiple samples. Twenty-two biometric data points of each leaf are identified by GRA.LE.D. It makes it possible to statistically analyse experimental data, allows comparison of cultivars and enables graphic reconstruction of leaves using the Microsoft Excel Chart Wizard. GRA.LE.D. was thoroughly calibrated and compared to other widely used instruments and methods such as photo-gravimetry, LiCor L0100, WinDIAS2.0 and ImageTool. By comparison, GRA.LE.D. produced the most accurate measurements of leaf area, but the LiCor L0100 and WinDIAS2.0 were faster, while the photo-gravimetric method proved to be the most time-consuming. The WinDIAS2.0 instrument was the least reliable. GRA.LE.D. is uncomplicated, user-friendly, accurate, consistent, reliable and has wide practical application.
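
As a rough illustration of the geometry behind such measurements, the C sketch below computes a vein length as the distance between two marked points and the angle between two vein vectors; the point coordinates are invented and the code is not taken from GRA.LE.D.

```c
/* Toy geometry of the kind performed on user-marked leaf points:
 * distances between points and the angle between two vein vectors. */
#include <math.h>
#include <stdio.h>

struct point { double x, y; };

/* Euclidean distance between two marked points, e.g. a vein length. */
static double dist(struct point a, struct point b)
{
    return hypot(b.x - a.x, b.y - a.y);
}

/* Angle in degrees at vertex v between the vectors v->a and v->b. */
static double angle_deg(struct point v, struct point a, struct point b)
{
    double ax = a.x - v.x, ay = a.y - v.y;
    double bx = b.x - v.x, by = b.y - v.y;
    double c = (ax * bx + ay * by) / (hypot(ax, ay) * hypot(bx, by));
    return acos(c) * 180.0 / acos(-1.0);   /* acos(-1) == pi */
}

int main(void)
{
    struct point petiole = { 0.0, 0.0 }, tip = { 0.0, 10.0 }, lateral = { 6.0, 7.0 };
    printf("main vein length: %.2f\n", dist(petiole, tip));
    printf("angle between main and lateral vein: %.1f deg\n",
           angle_deg(petiole, tip, lateral));
    return 0;
}
```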

Relevance:

20.00%

Publisher:

Abstract:

This dissertation established a software-hardware integrated design for a multisite data repository in pediatric epilepsy. A total of 16 institutions formed a consortium for this web-based application. This innovative, fully operational web application allows users to upload and retrieve information through a unique human-computer graphical interface that is remotely accessible to all users of the consortium. A solution based on a Linux platform with MySQL and Personal Home Page scripts (PHP) was selected. Research was conducted to evaluate mechanisms to electronically transfer diverse datasets from different hospitals and to collect the clinical data in concert with the related functional magnetic resonance imaging (fMRI). What is unique in the approach is that all pertinent clinical information about patients is synthesized, with input from clinical experts, into 4 different forms: Clinical, fMRI scoring, Image information, and Neuropsychological data entry forms. A first contribution of this dissertation was the proposal of an integrated processing platform that is site- and scanner-independent, in order to uniformly process the varied fMRI datasets and to generate comparative brain activation patterns. The data collection from the consortium complied with IRB requirements and provides all the safeguards for security and confidentiality. An fMRI-based software library was used to perform data processing and statistical analysis to obtain the brain activation maps. The Lateralization Index (LI) of healthy control (HC) subjects was evaluated in contrast to that of localization-related epilepsy (LRE) subjects. Over 110 activation maps were generated, and their respective LIs were computed, yielding the following groups: (a) strong right lateralization: (HC = 0%, LRE = 18%); (b) right lateralization: (HC = 2%, LRE = 10%); (c) bilateral: (HC = 20%, LRE = 15%); (d) left lateralization: (HC = 42%, LRE = 26%); (e) strong left lateralization: (HC = 36%, LRE = 31%). Moreover, nonlinear multidimensional decision functions were used to seek an optimal separation between typical and atypical brain activations on the basis of demographics as well as the extent and intensity of these brain activations. The intent was not to seek the highest output measures given the inherent overlap of the data, but rather to assess which of the many dimensions were critical in the overall assessment of typical and atypical language activations, with the freedom to select any number of dimensions and impose any degree of complexity in the nonlinearity of the decision space.
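
The dissertation's exact LI computation is not given in the abstract; the sketch below uses the common definition LI = (L - R) / (L + R) over left- and right-hemisphere activation counts, with purely illustrative classification thresholds, to show how activation maps can be binned into the five lateralization groups listed above.

```c
/* Lateralization Index sketch: LI = (L - R) / (L + R), where L and R count
 * activated voxels in the left and right hemispheres. The thresholds below
 * (+/-0.2 and +/-0.6) are illustrative assumptions, not the dissertation's. */
#include <stdio.h>

static double lateralization_index(double left, double right)
{
    return (left - right) / (left + right);
}

static const char *classify(double li)
{
    if (li >  0.6) return "strong left lateralization";
    if (li >  0.2) return "left lateralization";
    if (li > -0.2) return "bilateral";
    if (li > -0.6) return "right lateralization";
    return "strong right lateralization";
}

int main(void)
{
    double left = 820, right = 340;      /* made-up voxel counts */
    double li = lateralization_index(left, right);
    printf("LI = %.2f -> %s\n", li, classify(li));
    return 0;
}
```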

Relevance:

20.00%

Publisher:

Abstract:

Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot catch up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware.

We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests.

We adopt static analysis for data invariant detection and overcome several technical challenges: field-sensitivity, array-sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples and one synthetic Windows malware sample.

We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks).

In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of them during execution.
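
As a toy, user-space illustration of the data-invariant idea (not the thesis's Invariant Monitor), the sketch below checks that a hypothetical security-critical flag only ever holds one of its legitimate values and reports a violation when it is tampered with.

```c
/* Toy data-invariant check: a monitor verifies that a security-critical field
 * only ever holds one of its legitimate values. Real systems check kernel
 * memory; here a plain user-space variable stands in for the kernel object. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical invariant: the "audit enabled" flag must be 0 or 1. */
static bool invariant_holds(int audit_enabled)
{
    return audit_enabled == 0 || audit_enabled == 1;
}

int main(void)
{
    int audit_enabled = 1;
    printf("check 1: %s\n", invariant_holds(audit_enabled) ? "ok" : "VIOLATION");

    audit_enabled = 7;   /* simulated rootkit tampering with the flag */
    printf("check 2: %s\n", invariant_holds(audit_enabled) ? "ok" : "VIOLATION");
    return 0;
}
```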

Relevance:

20.00%

Publisher:

Abstract:

The research proposes a reflection on tutorial videos from YouTube, seen as a form of gift in modern society. Our reflection starts from a perspective of mutual exchange that departs from patterns of trade driven by current economic purposes. We present these video producers as craftsmen of cyberculture because of the skill and competence with which they transmit their knowledge. The research consists of the observation of video tutorials on YouTube about the Linux operating system and its distributions, analyzing the interactions between video producers, users and the website. The analysis is based on the classic work of Mauss (2003) and its reinterpretations by Caille (1998, 2001, 2002, 2006) and Godbout (1992, 1998), assisted by Aime and Cossetta (2010) and Sennett (2009) to help understand the idea of the craftsman. The Internet as an open, expanding territory enables us to understand that relationships in this medium also constitute the reciprocal links pointed out by Mauss in the early twentieth century. The circulation of intangible goods, in this case knowledge, beyond establishing social links, fosters a collaborative dimension in producing the common in cyberspace.

Relevance:

20.00%

Publisher:

Abstract:

The release strategies of the main Linux distributions and the methods for automated software compilation are analysed. A new methodology is then proposed both for the release of installable media and for packaging. By exploiting DevOps technologies, a high degree of scalability is introduced, also in Cloud environments, thanks in part to the reproducibility of every component of the proposed infrastructure. We then show how this approach increases automation in the production cycles used to build the Sabayon Linux distribution and to define an automated infrastructure currently in production.

Relevance:

20.00%

Publisher:

Abstract:

The Internet of Things and single-board computing are rapidly expanding fields today, and ARM architectures are currently the dominant players in this area. Operating systems and software are evolving to cope with this change and with the new use cases these technologies introduce. This thesis deals with porting the Sabayon Linux distribution to these architectures and with creating an infrastructure for releasing the images and compiling the software packages.

Relevance:

20.00%

Publisher:

Abstract:

This thesis addresses a topic that has become increasingly relevant, especially in recent years: the firmware and hardware integrity of a system. Today millions of people rely completely on their systems, entrusting them with large amounts of personal and other data, and many rely on modern antivirus products which, however, are unable to detect and handle attacks that involve firmware alteration. Several attacks of this kind are presented to make clear how important this aspect of security is, and a number of projects considered interesting are discussed. Based on this research, the design and implementation of a piece of software capable of detecting hardware and firmware alterations in a system is then presented.
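
One basic building block of such detection, sketched here under assumptions (placeholder file name, OpenSSL's SHA256() used for hashing), is comparing a firmware dump against a known-good fingerprint; real tools would read the image directly from flash and manage reference hashes securely, which this example does not do.

```c
/* Minimal firmware-integrity sketch: hash a firmware dump and print its
 * SHA-256 digest for comparison against a stored known-good value.
 * "firmware.bin" is a placeholder path. Build with -lcrypto. */
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("firmware.bin", "rb");      /* placeholder dump file */
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    unsigned char *buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, f) != (size_t)size) {
        free(buf);
        fclose(f);
        return 1;
    }
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(buf, (size_t)size, digest);
    free(buf);

    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");    /* compare this against the stored known-good hash */
    return 0;
}
```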

Relevance:

20.00%

Publisher:

Abstract:

The program PanTool was developed as a Swiss-Army-Knife-like toolbox for data conversion and recalculation, written to harmonize individual data collections into the standard import format used by PANGAEA. The input files PanTool needs are tables saved as plain ASCII. The user can create these files with a spreadsheet program like MS Excel or with the system text editor. PanTool is distributed as freeware for the operating systems Microsoft Windows, Apple OS X and Linux.
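
A minimal sketch of reading that kind of input, assuming a tab-separated plain-ASCII table and a made-up file name; PanTool's own parsing and conversion logic is not reproduced here.

```c
/* Read a plain-ASCII tabular file (tab-separated values, as exported from a
 * spreadsheet) and print its fields; "station_data.txt" is a placeholder. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("station_data.txt", "r");   /* placeholder input file */
    if (!f) { perror("fopen"); return 1; }

    char line[1024];
    int row = 0;
    while (fgets(line, sizeof line, f)) {
        line[strcspn(line, "\r\n")] = '\0';     /* strip the line ending */
        int col = 0;
        /* split the tab-separated fields of each record */
        for (char *tok = strtok(line, "\t"); tok; tok = strtok(NULL, "\t"))
            printf("row %d col %d: %s\n", row, col++, tok);
        row++;
    }
    fclose(f);
    return 0;
}
```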

Relevance:

20.00%

Publisher:

Abstract:

LAPMv2 is a research software solution specifically developed to allow marine scientists to produce geo-referenced visual maps of the seafloor, known as mosaics, from a set of underwater images and navigation data. LAPMv2 has a graphical user interface that guides the user through the different steps of the mosaicking workflow. LAPMv2 runs on 64-bit Windows, MacOS X and Linux operating systems. There are two versions for each operating system: (1) the WEB installers (lightweight, but requiring an internet connection during installation) and (2) the MCR installers (large files, but installable on computers without an internet connection). The user manual explains how to install and start the program on the different operating systems. Go to http://www.lapm.eu.com for further information about the latest versions of LAPMv2.

Relevance:

20.00%

Publisher:

Abstract:

In the world of simulation there are several types of real systems, among them discrete-event systems. To simulate such systems one can use, among others, tools based on the DEVS formalism (Discrete Event System Specification), such as the one used in this project: xDEVS. Simulation is highly important in fields such as education and science, and it is sometimes necessary to feed in data from the physical environment or to send information out of the simulator. It is therefore necessary to have tools that can run simulations using sensors, actuators, external circuits, etc., in other words, tools that can perform co-simulation between software and hardware. This facilitates the development of systems through modelling and simulation, allowing the hardware to be extracted gradually and the results to be analysed at each stage. This is an incremental project that extends the functionality of the xDEVS platform to allow hardware/software co-simulation on a Raspberry Pi. Logic circuits are used as external hardware and are linked to the simulator through device files managed by Linux kernel modules. As a case study, a complete hardware/software co-simulation of a seven-storey elevator is developed to show the use and operation of xDEVS, extracting the integrated circuits one at a time.
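
The sketch below illustrates, under assumptions, the hardware side of such a co-simulation from the simulator's point of view: bytes are exchanged with the external circuit through a character device file exposed by a Linux kernel module. The device path /dev/xdevs_io and the one-byte command/state encoding are hypothetical, not part of xDEVS.

```c
/* Exchange one command byte and one state byte with an external circuit
 * through a character device file created by a Linux kernel module. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/xdevs_io", O_RDWR);   /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char request = 0x01;             /* e.g. "call the elevator to floor 1" */
    if (write(fd, &request, 1) != 1)
        perror("write");

    unsigned char state = 0;
    if (read(fd, &state, 1) == 1)             /* e.g. current floor reported by the circuit */
        printf("hardware reports state 0x%02x\n", state);

    close(fd);
    return 0;
}
```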

Relevance:

20.00%

Publisher:

Abstract:

Asymmetric multicore processors with a common instruction set (AMPs, Asymmetric Multicore Processors) have recently been proposed as a low-power alternative to conventional symmetric multicore processors. AMPs combine, on the same chip, fast high-performance cores with slower, simpler, low-power cores. One of the most prominent examples of an asymmetric multicore processor is ARM's big.LITTLE processor, found in several mobile phones and tablets available today. Previous work has shown that, to exploit the potential benefits of asymmetric multicore processors, the operating system must take into account the relative benefit (speedup) that each application obtains when running on a fast core rather than a slow one. Currently, the default schedulers of general-purpose operating systems do not take into account the diversity of speedups across the applications that may be present in a multiprogrammed workload. As a consequence, the application-to-core assignments made by these schedulers do not extract the maximum performance per watt from the platform. Extensions to the Linux kernel have recently been made to offer better scheduling support on asymmetric multicores. However, these scheduler extensions, used mainly on mobile devices running the Android operating system, also fail to take into account the speedup diversity of the applications in the workload, so they do not constitute a robust approach from the point of view of energy efficiency. This project carries out an exhaustive evaluation of different scheduling algorithms for asymmetric multicores on a platform equipped with an ARM big.LITTLE processor. The main goal of the study is to quantify the degree of energy efficiency and the overall performance delivered by implementations of these algorithms in the Linux kernel on real asymmetric multicore hardware.
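
As a hedged, user-space illustration of speedup-aware placement (the project itself evaluates in-kernel schedulers), the sketch below pins the calling process to the big or LITTLE cluster depending on an estimated speedup; the core numbering (0-3 LITTLE, 4-7 big) and the 1.5x threshold are assumptions about one particular big.LITTLE board.

```c
/* Pin the calling process to the "big" or "LITTLE" core cluster depending on
 * its estimated big-vs-little speedup, using the standard Linux affinity API. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_cluster(int first_cpu, int last_cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = first_cpu; cpu <= last_cpu; cpu++)
        CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof set, &set);   /* 0 = calling process */
}

int main(void)
{
    double speedup = 2.1;   /* big-core speedup estimated for this application */

    /* High-speedup applications get the big cores, the rest stay on LITTLE. */
    int rc = (speedup > 1.5) ? pin_to_cluster(4, 7) : pin_to_cluster(0, 3);
    if (rc != 0)
        perror("sched_setaffinity");

    printf("running on CPU %d\n", sched_getcpu());
    return 0;
}
```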