986 results for file system UI
Abstract:
This work focuses on knowledge management and organizational culture, and on their barriers and facilitators at Parque Escolar, E.P.E. The study was based on the quadripolar method. Several activities were carried out over the course of this work: initially, internal documentation was collected, namely legal instruments, regulations, procedure manuals, internal training manuals and other documents, which served as the basis for understanding the institution and its structural and operational evolution. To identify the barriers and facilitators in information retrieval across the three main means used for that purpose (the physical archive, the file system and the software applications), questionnaires were applied to the information producers/users of Parque Escolar, E.P.E. Based on this study it was possible to identify which information retrieval resource poses the greatest difficulties in use, whether there are documents that exist exclusively on paper or exclusively in digital format, and whether they can be retrieved easily. It was also possible to ascertain whether the employees of Parque Escolar, E.P.E. consider the documents held in the physical archive more trustworthy than the digital documents stored in the file system or in the software applications. Regarding the software applications, it was further possible to ascertain whether employees consider their updates useful, whether they show some resistance to change, and whether they feel they received the support needed to understand and apply the changes. With this study we hope to have contributed to giving greater visibility to the topic of knowledge management and to how organizational culture can influence it, by creating barriers or facilitators.
Abstract:
Information technologies (ITs) hold the potential to transform governmental organizations, while sports resources and services play an important role in contributing to the development of sustainable communities. Spatial data is a crucial source of support for sports planning and management. Low-cost mobile geospatial tools enable productive and accurate data collection, yet their use in combination with a handy, customized graphical user interface (GUI) (forms, mapping, media support) is still at an early stage. Recognizing the benefits (efficiency, effectiveness, proximity to citizens) that the Mozambican Ministry of Youth and Sports (MJD) can achieve with the information resulting from the deployment of a low-cost data collection platform, this project presents the development of a mobile mapping application (app), m-SportGIS, built on Open Source (OS) technologies and a customized evolutionary software methodology. The app development combined mobile web technologies and Application Programming Interfaces (APIs) (e.g. Sencha Touch (ST), Apache Cordova, OpenLayers) to deploy a native-to-the-device (Android operating system) product, taking advantage of the device's capabilities (e.g. File system, Geolocation, Camera). In addition to an integrated Web Map Service (WMS), a local, customized Tile Map Service (TMS) was created to serve cached data, given the IT infrastructure limitations in several Mozambican regions. m-SportGIS is currently being used by Mozambican Government staff to inventory all kinds of sports facilities, and the resulting data feeds a WebGIS platform for managing Mozambican sports resources.
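Since the abstract mentions serving cached tiles from a local TMS where connectivity is limited, a minimal sketch of the underlying tile arithmetic may help: converting a coordinate and zoom level into XYZ tile indices and a local tile path under the standard Web Mercator tiling scheme. The cache directory layout and path are hypothetical, not taken from m-SportGIS.

```python
import math
import os

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int]:
    """Convert WGS84 lon/lat to XYZ tile indices at a given zoom (Web Mercator)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def cached_tile_path(cache_root: str, zoom: int, x: int, y: int) -> str:
    """Build the path of a locally cached tile; TMS flips the row origin to the south."""
    y_tms = (2 ** zoom - 1) - y          # XYZ -> TMS row conversion
    return os.path.join(cache_root, str(zoom), str(x), f"{y_tms}.png")

# Example: the tile covering Maputo (about 32.57 E, 25.97 S) at zoom 12.
x, y = lonlat_to_tile(32.57, -25.97, 12)
print(cached_tile_path("/sdcard/m-sportgis/tiles", 12, x, y))
```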
Abstract:
In terms of execution time and data usage, parallel/distributed applications can exhibit variable executions, even when the same input data set is used. Certain environment-related performance aspects can dynamically affect application behavior, such as memory capacity, network latency, the number of nodes and the heterogeneity of the nodes, among others. It is important to consider that the application may run on different hardware configurations, and the application developer cannot guarantee that performance tuning for one particular system will remain valid for other configurations. Dynamic analysis of applications has proved to be the best approach to performance analysis, for two main reasons. First, it offers a very convenient solution from the developers' point of view while they design and evaluate their parallel applications. Second, it adapts better to the application during execution. This approach does not require developer intervention or even access to the application's source code. The application is analyzed at run time, and the search for possible bottlenecks and optimizations is considered and carried out. To optimize the execution of the bioinformatics application mpiBLAST, we analyzed its behavior in order to identify the parameters involved in its performance, such as memory usage, network usage, I/O patterns, the file system used, the processor architecture, the size of the biological database, the size of the query sequence, the distribution of the sequences within them, the number of database fragments and/or the granularity of the jobs assigned to each process. Our goal is to determine which of these parameters have the greatest impact on application performance and how to adjust them dynamically to improve it. Analyzing the performance of mpiBLAST, we found a set of data that indicates a certain level of serialization within the execution. Recognizing the impact of the characterization of the sequences within the different databases, and a relationship between worker capacity and the granularity of the current workload, these could be tuned dynamically. Further improvements also include optimizations related to the parallel file system and the possibility of execution on multiple multicore nodes. The work grain size is influenced by factors such as the database type, the database size, and the ratio between workload size and worker capacity.
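The central tuning knob described above is the relationship between worker capacity and workload granularity. A minimal sketch of one possible run-time heuristic follows: assign each worker a work grain (database fragments per job) proportional to its measured throughput. The function names, the proportional rule and the bounds are illustrative assumptions, not the tuning policy used in the thesis.

```python
def grains_for_workers(base_grain: int, rates: dict[str, float],
                       min_grain: int = 1, max_grain: int = 64) -> dict[str, int]:
    """Assign each worker a work grain (database fragments per job) proportional to its
    measured throughput, so fast workers get bigger jobs and slow ones finer-grained jobs."""
    mean_rate = sum(rates.values()) / len(rates)
    grains = {}
    for worker, rate in rates.items():
        scaled = int(round(base_grain * rate / mean_rate)) if mean_rate > 0 else base_grain
        grains[worker] = max(min_grain, min(max_grain, scaled))
    return grains

# Example: heterogeneous workers measured at different sequence-processing rates (seq/s).
print(grains_for_workers(base_grain=8, rates={"w1": 150.0, "w2": 75.0, "w3": 60.0}))
```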
Abstract:
The growing popularity of XML as a document format, together with its ever more diverse use, has increased the need for XML data management systems. One way to manage XML documents is still a file-based system. However, XML data management systems built on various databases have grown in popularity in recent years thanks to their more versatile features and better performance. In addition, managing XML documents in a file-based system becomes nearly impossible with large amounts of data. A relatively new arrival in XML document management is the native XML database, designed specifically with XML in mind. This Master's thesis presents different XML data management systems. In particular, it attempts to give an in-depth look at the background and technical details of solutions based on relational databases and on native XML databases. The XMach-1 benchmark is run on four XML data management solutions: the Binary Approach, the Edge Approach, eXist and Xindice. In addition, the technical suitability of the tested solutions is assessed through both analytical and practical examination. Based on the benchmark results and the assessments of technical suitability, the goal is to choose an XML data management solution for a Java web application that uses XML as its data storage format.
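The Edge Approach mentioned above is a generic way of shredding an XML document into a single relational edge table. As a rough illustration, the sketch below produces edge tuples of the form (source, ordinal, tag, flag, target); the exact column layout is an assumption loosely following the commonly cited edge-table mapping, not necessarily the implementation benchmarked in the thesis.

```python
import xml.etree.ElementTree as ET

def shred_to_edges(xml_text: str):
    """Shred an XML document into Edge-Approach-style tuples:
    (source_id, ordinal, tag, flag, target), where flag is 'ref' for an element
    child and 'val' for a text value stored in the target column."""
    root = ET.fromstring(xml_text)
    edges, node_ids = [], iter(range(1, 1_000_000))
    def walk(elem, source):
        for ordinal, child in enumerate(elem, start=1):
            target = next(node_ids)
            edges.append((source, ordinal, child.tag, "ref", target))
            text = (child.text or "").strip()
            if text:
                edges.append((target, 1, child.tag, "val", text))
            walk(child, target)
    walk(root, 0)
    return edges

doc = "<catalog><book><title>XML Data Management</title></book></catalog>"
for edge in shred_to_edges(doc):
    print(edge)
```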
Abstract:
The aim was to determine the possible effects that a novel treatment of the English pronunciation component in teaching would have on improving overall linguistic competence in English and on improving performance in this area. Applied research of an experimental nature. Cluster random sampling was used, selecting a natural grouping (all fourth-year ESO students from two already-formed class groups in one school, serving as experimental and control groups). The experimental group consisted of 32 students and the control group of 30. Quasi-experimental design with pre- and post-test. To check the similarity of the groups, a questionnaire on variables was designed, which all the students under study had to complete and which provided additional information about the groups. Instruments: a self-developed questionnaire on the variables involved in the study; the teachers' class diaries; an initial and final assessment test (The Key English Test, KET, from the University of Cambridge); a self-developed pronunciation test measuring candidates' competence in five main blocks of the pronunciation component: phonemes, lexical stress, rhythmic stress, tonicity and tonemicity; tools for collecting oral samples: Total Recorder I and Dart Pro 98; and the Speech File System (FS) intonation-curve program from University College London. The pronunciation treatment was applied to the experimental group, leaving the control group with the content of the course textbook: The Burlington Course for 4 ESO, Student's Book (Burlington Books). A greater advance, though not statistically significant, was found for the experimental group over the control group with respect to overall performance in English, to the oral production and comprehension skills, and to the written production and communication skills.
Abstract:
The aim of this study was to evaluate the efficacy of three rotary instrument systems (K3, ProTaper and Twisted File) in removing calcium hydroxide residues from root canal walls. Thirty-four human mandibular incisors were instrumented with the ProTaper System up to the F2 instrument, irrigated with 2.5% NaOCl followed by 17% EDTA, and filled with a calcium hydroxide intracanal dressing. After 7 days, the calcium hydroxide dressing was removed using the following rotary instruments: G1 - NiTi size 25, 0.06 taper, of the K3 System; G2 - NiTi F2, of the ProTaper System; or G3 - NiTi size 25, 0.06 taper, of the Twisted File System. The teeth were longitudinally grooved on the buccal and lingual root surfaces, split along their long axis, and their apical and cervical canal thirds were evaluated by SEM (×1000). The images were scored and the data were statistically analyzed using the Kruskal-Wallis test. None of the instruments removed the calcium hydroxide dressing completely, either in the apical or cervical thirds, and no significant differences were observed among the rotary instruments tested (p > 0.05).
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Given the exponential growth in the spread of viruses across the World Wide Web (Internet) and their increasing complexity, it becomes necessary to adopt more sophisticated systems for the extraction of malware fingerprints (a malware fingerprint is the unique information extracted from malicious software that leads to the identification of the virus, the equivalent of a human fingerprint). The architecture and protocol proposed here aim to obtain more efficient fingerprints, using techniques that make a single fingerprint sufficient to cover an entire group of viruses. This efficiency comes from a hybrid fingerprint-extraction approach that takes into account both the code and the behavior of the sample, i.e. the virus. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of these viruses; this difficulty stems from techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are: file system activity, Windows Registry activity, RAM dumps and API calls. As for the code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted. This technique considers each instruction and its neighborhood, and is characterized as being accurate. In short, this information is intended to predict and outline a profile of the virus's actions and then create a fingerprint based on the degree of kinship between viruses (a threshold), with the goal of increasing the ability to detect viruses that are not part of the same family.
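The abstract describes splitting a binary into blocks, hashing each block, and comparing samples by a degree of kinship against a threshold. A minimal sketch of that comparison follows, using fixed-size blocks and a Jaccard-style similarity; the block size, the hash choice and the threshold value are illustrative assumptions, not the values used in the proposal, which derives blocks from instructions and their neighborhood.

```python
import hashlib

def block_hashes(binary: bytes, block_size: int = 64) -> set[str]:
    """Split a binary into fixed-size blocks and hash each one (simplified blocking)."""
    return {
        hashlib.sha256(binary[i:i + block_size]).hexdigest()
        for i in range(0, len(binary), block_size)
    }

def kinship(sample_a: bytes, sample_b: bytes) -> float:
    """Degree of kinship between two samples: Jaccard similarity of their block-hash sets."""
    a, b = block_hashes(sample_a), block_hashes(sample_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

THRESHOLD = 0.6  # assumed cut-off: above it, two samples are treated as the same family

if __name__ == "__main__":
    variant_1 = bytes(range(256)) * 8
    variant_2 = variant_1[:1800] + b"\x90" * 248   # same code with a patched tail
    score = kinship(variant_1, variant_2)
    print(score, score >= THRESHOLD)
```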
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Hundreds of Terabytes of CMS (Compact Muon Solenoid) data are being accumulated for storage day by day at the University of Nebraska-Lincoln, which is one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This is an important task that is currently being done manually and it requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when there is a requirement for storage space. CMS data is stored using HDFS (Hadoop Distributed File System). HDFS logs give information regarding file access operations. Hadoop MapReduce was used to feed information in these logs to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression, which is used in this thesis to develop a classifier. Time elapsed in data set classification by this method depends on the size of the input HDFS log file, since the algorithmic complexities of the Hadoop MapReduce algorithms used here are O(n). The SVM methodology produces a list of data sets for deletion along with their respective sizes. This methodology was also compared with a heuristic called Retention Cost, which was calculated using the size of the data set and the time since its last access to help decide how useful a data set is. The accuracies of both were compared by calculating the percentage of data sets predicted for deletion which were accessed at a later point in time. Our methodology using SVMs proved to be more accurate than using the Retention Cost heuristic. This methodology could be used to solve similar problems involving other large data sets.
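The Retention Cost heuristic that the thesis compares against combines a data set's size with the time since its last access. A minimal sketch of such a heuristic, ranking deletion candidates from HDFS-log-derived access times, is given below; the exact formula (size multiplied by idle time), the field names and the example paths are assumptions for illustration.

```python
from datetime import datetime, timedelta

def retention_cost(size_bytes: int, last_access: datetime, now: datetime) -> float:
    """Retention Cost heuristic: larger and longer-idle data sets cost more to keep.
    The product form (size x time since last access) is an assumed formulation."""
    idle_seconds = (now - last_access).total_seconds()
    return size_bytes * idle_seconds

def deletion_candidates(datasets: dict, now: datetime) -> list:
    """Rank data sets, given as name -> (size_bytes, last_access), by descending retention cost."""
    return sorted(datasets,
                  key=lambda name: retention_cost(*datasets[name], now),
                  reverse=True)

# Example with two hypothetical CMS data sets.
now = datetime(2024, 1, 1)
sets = {"/store/mc/setA": (2_000_000_000_000, now - timedelta(days=200)),
        "/store/data/setB": (500_000_000_000, now - timedelta(days=10))}
print(deletion_candidates(sets, now))   # setA is the stronger deletion candidate
```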
Abstract:
Current scientific applications have been producing large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have been considering techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. In order to accomplish such a goal, this approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Knowing these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
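The abstract describes organizing observed data accesses as time series, modeling them and using the predictions to drive optimizations such as prefetching. As a rough illustration, the sketch below fits a simple autoregressive model to a series of per-interval read volumes and forecasts the next value; the AR model, its order and the sample series are illustrative assumptions, not the classification and modeling pipeline evaluated with OptorSim.

```python
import numpy as np

def fit_ar(series: np.ndarray, order: int = 2) -> np.ndarray:
    """Fit an autoregressive model of the given order by ordinary least squares."""
    X = np.array([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series: np.ndarray, coeffs: np.ndarray) -> float:
    """Predict the next observation from the last `order` values of the series."""
    order = len(coeffs)
    return float(series[-order:] @ coeffs)

# Example: per-interval read volume (MB) observed for an application run.
reads = np.array([10.0, 12.0, 15.0, 18.0, 22.0, 27.0, 33.0, 40.0])
coeffs = fit_ar(reads, order=2)
print(predict_next(reads, coeffs))   # forecast used to decide how much data to prefetch
```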
Abstract:
In the development of computer systems, numerous technologies have become established, and they must be used in a combined and, if possible, synergistic way. On the one hand, relational database management systems allow efficient and effective management of persistent, shared, transactional data. On the other, object-oriented tools and methods (programming languages, but also analysis and design methodologies) allow the application logic of applications to be developed effectively. In this context it is useful to explain what is meant by information system and by computer system. Information system: the set of people, technological resources and business procedures whose task is to produce and preserve the information needed to operate in and manage the enterprise. Computer system: the set of computing tools used for the automatic processing of information, in order to support the functions of the information system. In other words, the computer system collects, processes, stores and exchanges information using information and communication technologies (ICT): computers, peripherals, communication media, programs. The computer system is therefore a component of the information system. The information obtained from data processing must be saved somewhere, so that it lasts over time after processing; this is where computing comes to the rescue. Data is raw informational material, not (yet) processed by its recipient, and can be discovered, searched for, collected and produced. It is the raw material we have available, or produce, to build our communication processes. The data as a whole is a company's treasure and represents its evolutionary history. At the beginning of this introduction it was mentioned that several technologies have become established in the development of computer systems and that, in particular, the use of relational database management systems brings effective and efficient management of persistent data. In computing, data persistence means the property of data to survive the execution of the program that created it. If this were not the case, data would be saved only in RAM and would be lost when the computer is switched off. In programming, persistence means the ability to make data structures survive the execution of a single program; this requires saving them on a non-volatile storage device, for example in a file system or in a database. In this thesis a system was developed that can manage a hierarchical or relational database, allowing the import of data described by a DTD grammar. Chapter 1 looks in more detail at what is meant by information system, at the client-server model and at data security. Chapter 2 discusses the Java programming language, databases and XML files. Chapter 3 describes the UML analysis and modeling language with explicit reference to the developed project. Chapter 4 describes the project that was implemented and the technologies and tools used.
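To make the idea of importing DTD-described data into a persistent relational store concrete, here is a minimal sketch that parses an XML document and persists its records in a SQLite table so they survive program execution. The element names, the table schema and the use of SQLite are illustrative assumptions, not the system described in the thesis.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical document whose structure a DTD such as
# <!ELEMENT people (person*)> <!ELEMENT person (name, email)> would describe.
XML_DOC = """<people>
  <person><name>Ada</name><email>ada@example.org</email></person>
  <person><name>Alan</name><email>alan@example.org</email></person>
</people>"""

def import_people(xml_text: str, db_path: str = "people.db") -> int:
    """Persist <person> records into a relational table so they outlive the program run."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS person (name TEXT, email TEXT)")
    rows = [(p.findtext("name"), p.findtext("email"))
            for p in ET.fromstring(xml_text).findall("person")]
    conn.executemany("INSERT INTO person VALUES (?, ?)", rows)
    conn.commit()
    conn.close()
    return len(rows)

print(import_people(XML_DOC))   # -> 2 rows now stored on disk
```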
Abstract:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them. One of the most important applications of data deduplication is backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way. Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings. A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either the scalability or the throughput of such systems. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects. Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20 and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems. This thesis presents important novel work in different areas of data deduplication research.
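Fingerprinting-based deduplication, as summarized above, splits data into chunks, fingerprints each chunk, and stores only chunks whose fingerprint has not been seen, while a per-file recipe records which fingerprints rebuild the file. A minimal sketch with fixed-size chunks and SHA-256 fingerprints follows; the chunk size and the in-memory index are simplifying assumptions (real systems typically use content-defined chunking and on-disk indexes), not the thesis's design.

```python
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size; real systems often use content-defined chunking

class DedupStore:
    """Toy chunk store: keeps one copy per unique fingerprint and a per-file recipe
    (the ordered list of fingerprints needed to reconstruct the file)."""
    def __init__(self) -> None:
        self.chunks: dict[str, bytes] = {}   # fingerprint -> chunk data
        self.recipes: dict[str, list] = {}   # file name  -> list of fingerprints

    def write(self, name: str, data: bytes) -> None:
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # store the chunk only if it is new
            recipe.append(fp)
        self.recipes[name] = recipe

    def read(self, name: str) -> bytes:
        return b"".join(self.chunks[fp] for fp in self.recipes[name])

store = DedupStore()
backup = b"A" * 8192 + b"B" * 4096
store.write("backup-monday", backup)
store.write("backup-tuesday", backup)         # the second backup adds no new chunks
print(len(store.chunks), store.read("backup-tuesday") == backup)   # -> 2 True
```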