999 results for Linux Terminal Server Project
Abstract:
This document presents a project for the creation of a customized DVD focused on education. It will be based on the Debian operating system and on the LTSP (Linux Terminal Server Project).
Abstract:
This project focuses on studying and testing the benefits of the NX Remote Desktop technology for administrative use in the Finnish Meteorological Institute's existing Linux Terminal Server Project (LTSP) environment. The work was motivated by the growing criticality of the system as the number of users increases and the LTSP installation expands. Although many support tasks can be performed over a Secure Shell connection, testing graphical programs or desktop behaviour that way is impossible. First, the basic technologies behind NX Remote Desktop were studied; then two candidate programs, FreeNX and the NoMachine NX server, were tested. Functionality and bandwidth requirements were first evaluated in a closed local area network and the results analysed. The better candidate was then installed on a virtual server simulating the actual LTSP server at the Finnish Meteorological Institute, and a connection from the Internet was tested to check for problems with firewalls and security policies. The results are reported in this study. Comparing the two NX Remote Desktop candidates showed that the NoMachine NX Server provides better customer support and documentation. The security requirements of the Finnish Meteorological Institute also had to be considered, and since updates and new development tools are announced for the next version of the program, that version was chosen. The tests also show that, although NoMachine promises a usable connection over an average bandwidth of 20 kbit/s, at least double that is needed in practice. The project gives an overview of available remote desktop products and their benefits. NX Remote Desktop technology is studied and installation instructions are included. Testing was done in both the closed and the actual environment, and the problems encountered and the resulting suggestions are analysed. The installation on the actual LTSP server has not yet been made, but a virtual server has been set up in the same place in the network topology. This ensures that, if the administrators are satisfied with the system, installing and setting it up will proceed as described in this report.
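As an aside, the kind of supporting task the abstract says can be handled over a Secure Shell connection might look like the following minimal Python sketch using the paramiko library; the hostname, user name and command are hypothetical placeholders, not details from the study.

```python
# Minimal sketch of a remote administrative check over SSH using paramiko;
# the host, user and command below are assumed placeholders.
import paramiko

def remote_uptime(host: str, user: str) -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect(hostname=host, username=user)  # key-based auth assumed
    try:
        _stdin, stdout, _stderr = client.exec_command("uptime")
        return stdout.read().decode().strip()
    finally:
        client.close()

if __name__ == "__main__":
    print(remote_uptime("ltsp-test.example.org", "admin"))
```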
Abstract:
This article aims to demonstrate, through performance tests, that it is possible to reuse outdated machines as effective and productive terminals using LTSP (the Linux Terminal Server Project). Additionally, we show that companies and institutions concerned with the environment may find in this alternative a way to save on computing resources and to recycle hardware.
Abstract:
The purpose of this document is to provide a set of security configuration guidelines for the SUSE Linux Enterprise Server 11 operating system. This operating system provides an interesting set of security mechanisms that make it possible to reach an acceptable level of security. By applying the guidelines presented in this document, an adequate configuration of SUSE Linux Enterprise Server 11 with respect to logical security can be ensured.
Abstract:
In this project, a repository environment for lab assignments has been developed to store the available teaching material and to provide a tool for managing it. The system modules were developed after considering the different roles that exist among the teaching staff, whether teaching assistants or professors. Since teaching staff are subject to a high degree of rotation, the system aims to be a meeting point for all of their contributions, thus helping with the tasks of evaluating and explaining the lab assignments. The study of a new range of assignments has made it possible to explore a new working environment, opening up areas not addressed so far through the deployment of a simulated operating system running on top of the Linux operating system.
Abstract:
OBJECTIVE: Quality assurance (QA) in clinical trials is essential to ensure that treatment is safely and effectively delivered. As QA requirements have increased in complexity in parallel with the evolution of radiation therapy (RT) delivery, a need to facilitate digital data exchange has emerged. Our objective is to present the platform developed for the integration and standardization of QA in RT (QART) activities across all EORTC trials involving RT. METHODS: The following essential requirements were identified: secure and easy access without on-site software installation; integration within the existing EORTC clinical remote data capture system; and the ability both to customize the platform to specific studies and to adapt to future needs. After retrospective testing within several clinical trials, the platform was introduced in phases to participating sites and QART study reviewers. RESULTS: The resulting QA platform, integrating RT analysis software installed at EORTC Headquarters, permits timely, secure, and fully digital central DICOM-RT-based data review. Participating sites submit data through a standard secure upload webpage. Supplemental information is submitted in parallel through web-based forms. An internal quality check by the QART office verifies data consistency, formatting, and anonymization. QART reviewers have remote access through a terminal server. Reviewers evaluate submissions for protocol compliance through an online evaluation matrix. Comments are collected by the coordinating centre and institutions are informed of the results. CONCLUSIONS: This web-based central review platform facilitates rapid, extensive, and prospective QART review. This reduces the risk that trial outcomes are compromised through inadequate radiotherapy and facilitates correlation of results with clinical outcomes.
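The internal quality check described above (consistency, formatting and anonymization of submitted DICOM-RT data) could, in outline, resemble the following hypothetical Python sketch using the pydicom library; the tag list and rules are assumptions, since the EORTC platform's actual implementation is not described here.

```python
# Hypothetical sketch of a consistency/anonymization check on an uploaded
# DICOM-RT data set; the checked tags are assumed, not the platform's rules.
from pathlib import Path
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientBirthDate", "PatientAddress"]  # assumed subset

def check_submission(dicom_dir: str) -> list[str]:
    """Return a list of human-readable problems found in a DICOM-RT upload."""
    problems = []
    for path in Path(dicom_dir).glob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        # Formatting: every object must declare its modality (e.g. RTPLAN, RTDOSE).
        if "Modality" not in ds:
            problems.append(f"{path.name}: missing Modality")
        # Anonymization: identifying tags must be empty or absent.
        for tag in IDENTIFYING_TAGS:
            if str(ds.get(tag, "")).strip():
                problems.append(f"{path.name}: tag {tag} is not anonymized")
    return problems

if __name__ == "__main__":
    for issue in check_submission("./upload"):
        print(issue)
```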
Abstract:
Configuration tools based on high-level languages such as LabVIEW make it possible to develop very complex data acquisition systems based on reconfigurable FPGA hardware in a short period of time. Standardizing the hardware/software design cycle and using tools such as EPICS ease integration with ITER's Linux-based data acquisition and control platform, the CODAC Core System (CCS). This project proposes a methodology that simplifies the complete integration cycle of new platforms such as CompactRIO (cRIO), in which the behaviour of the acquisition hardware can be modified by the user to fit specific requirements. The main objective of this MSc final project is to integrate a cRIO NI-9159 chassis and different analog and digital I/O modules with EPICS and with the CODAC Core System, a set of software tools that simplifies the integration of the instrumentation and control systems of the ITER experiment. To meet this objective the following tasks are carried out: • Development of an FPGA-based data acquisition system on the CompactRIO hardware platform. This task covers the configuration of the system and the LabVIEW for FPGA implementation of the hardware needed to communicate with the I/O modules NI9205, NI9264, NI9401, NI9477, NI9426, NI9425 and NI9476. • Implementation of a software driver using the asynDriver methodology to integrate the cRIO system with EPICS. This task requires defining all the records that EPICS needs and creating the appropriate interfaces that allow communication with the hardware. • Description of the cRIO system and the EPICS driver in SDD, the ITER plant description tool, which automates the creation of the EPICS applications known as IOCs.
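Once the cRIO channels are exposed as EPICS records by the IOC, a client can read and write them over Channel Access; the following minimal Python sketch uses the pyepics library, with process-variable names that are hypothetical placeholders rather than names from the project.

```python
# Hedged client-side sketch: reading and writing cRIO I/O channels exposed
# as EPICS records; the PV names below are assumed placeholders.
from epics import caget, caput

AI_PV = "CRIO:NI9205:AI0"   # assumed analog-input record
DO_PV = "CRIO:NI9477:DO0"   # assumed digital-output record

def read_analog() -> float:
    value = caget(AI_PV, timeout=2.0)
    if value is None:
        raise RuntimeError(f"timeout reading {AI_PV}")
    return value

def set_digital(state: bool) -> None:
    caput(DO_PV, int(state), wait=True)

if __name__ == "__main__":
    print("AI0 =", read_analog())
    set_digital(True)
```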
Abstract:
This bachelor's final project covers the debugging work done on the PCCMuTe v2.2 prototype, an embedded system equipped with the instrumentation needed to measure the power/energy consumption in each of its voltage domains and to digitize and send the results to the processor on the board. The device provides real-time information about the power consumption of the board's hardware, especially the processor, making it possible to relate the power consumed to the software being executed. The project aims to measure the energy consumption of video decoding. The software used to control the hardware is based on Linux. Two main activities are distinguished: hardware debugging and software debugging. The results show progress in hardware debugging up to a fully functioning prototype. Progress on the software side enables the SPI communications needed to transmit the consumption results to the processor. In the final phase of the project, an application previously developed by members of the GDEM group was used to obtain the first consumption data, but due to lack of time these results could not be verified. For the same reason it was not possible to design and code a new application that improves the way these data are obtained.
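The SPI link used to transmit the consumption results to the processor could be exercised from Linux user space along the lines of the following hedged Python sketch using the spidev module; the bus and device numbers, clock speed and 16-bit frame format are assumptions, not the PCCMuTe v2.2 register layout.

```python
# Hedged sketch of reading a digitized consumption sample over SPI from
# Linux user space; bus/device, clock and frame format are assumed.
import spidev

def read_sample(bus: int = 0, device: int = 0) -> int:
    spi = spidev.SpiDev()
    spi.open(bus, device)          # /dev/spidev0.0 assumed
    spi.max_speed_hz = 1_000_000   # assumed SPI clock
    spi.mode = 0                   # assumed SPI mode
    try:
        raw = spi.xfer2([0x00, 0x00])   # clock out two dummy bytes
        return (raw[0] << 8) | raw[1]   # assemble a 16-bit sample
    finally:
        spi.close()

if __name__ == "__main__":
    print("raw sample:", read_sample())
```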
Abstract:
Data analysis, presentation and distribution are of utmost importance to a genome project. A public-domain software package, ACeDB, has been chosen as the common basis for parasite genome databases, and a first release of TcruziDB, the Trypanosoma cruzi genome database, is available by ftp from ftp://iris.dbbm.fiocruz.br/pub/genomedb/TcruziDB, as are versions of the software for different operating systems (ftp://iris.dbbm.fiocruz.br/pub/unixsoft/). Moreover, data originating from the project are available from the WWW server at http://www.dbbm.fiocruz.br. It contains biological and parasitological data on CL Brener, its karyotype, all available T. cruzi sequences from GenBank, data on the EST-sequencing project and on available libraries, a T. cruzi codon table, and a listing of activities and participating groups in the genome project, as well as meeting reports. T. cruzi discussion lists (tcruzi-l@iris.dbbm.fiocruz.br and tcgenics@iris.dbbm.fiocruz.br) are being maintained for communication and to promote collaboration in the genome project.
Abstract:
During the project we became familiar with the Linksys WRT54GL wireless router and its network management methods. The operating system is OpenWRT, a Linux-based distribution for embedded devices. OpenWRT takes two approaches to network administration: a web-based user interface and a command-line interface. Both methods work, but neither solves all the problems a competent network administrator faces when managing a secured network. The goal of the project was to design an NCurses-based user interface for network administration that can be run from the command line. The user interface can be used, for example, from a terminal over SSH, which is fast and lightweight. The idea is to combine the user-friendliness of the web interface with the advanced options that command-line network management can offer. The Linux-based, open-source OpenWRT offers good development tools, and a development community is available if the software needs further development in the future. So far, no such command-line user interface for network administration has been available.
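A minimal NCurses-style menu of the kind the project describes can be sketched with Python's standard curses module, as below; the menu entries and key handling are illustrative assumptions, not the project's actual interface.

```python
# Illustrative sketch of an NCurses-style administration menu; the entries
# and actions are assumed placeholders, not the project's real UI.
import curses

MENU = ["Show interfaces", "Edit firewall rules", "Wireless settings", "Quit"]

def run(stdscr):
    curses.curs_set(0)            # hide the cursor
    selected = 0
    while True:
        stdscr.clear()
        stdscr.addstr(0, 0, "Router administration (arrow keys, Enter)")
        for i, item in enumerate(MENU):
            attr = curses.A_REVERSE if i == selected else curses.A_NORMAL
            stdscr.addstr(2 + i, 2, item, attr)
        key = stdscr.getch()
        if key == curses.KEY_UP:
            selected = (selected - 1) % len(MENU)
        elif key == curses.KEY_DOWN:
            selected = (selected + 1) % len(MENU)
        elif key in (curses.KEY_ENTER, 10, 13):
            if MENU[selected] == "Quit":
                break
            stdscr.addstr(8, 2, f"'{MENU[selected]}' selected (stub)")
            stdscr.getch()

if __name__ == "__main__":
    curses.wrapper(run)
```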
Abstract:
This project is part of a line of work whose final goal is to optimize the energy consumed by a handheld multimedia device through feedback control techniques, by dynamically modifying the processor's operating frequency and its supply voltage. The frequency and voltage are adjusted on the basis of feedback information about the power consumed by the device, which is a problem because it is usually not possible to monitor power consumption directly on devices of this kind. For this reason the power consumption is estimated instead, using a prediction model: from the number of times certain events occur in the device's processor, the model produces an estimate of the power the device consumes. The work carried out in this project focuses on implementing a power estimation model in the Linux kernel. The estimation is implemented in the operating system, first, to obtain direct access to the processor's counters; second, to make it easier to apply the frequency and voltage changes once the power estimate is available, since those changes are also made from the operating system; and third, because the estimation must be independent of user applications. In addition, the estimation process runs periodically, which would be difficult to achieve outside the operating system. Periodic estimation is essential because the intended frequency and voltage modification is dynamic, so the power consumption of the device must be known at all times; the control algorithms must also be designed around a periodic actuation pattern. The power estimation model is specific to the consumption profile generated by a single application, in this case a video decoder. Nevertheless, it must work as accurately as possible for each of the processor's operating frequencies and for as many video sequences as possible, because the successive power estimates are meant to drive the dynamic frequency modification, so the model must keep producing estimates regardless of the frequency at which the device is running. To assess the accuracy of the estimation model, the power consumed by the device is measured at the different operating frequencies while the video decoder runs. These measurements are compared with the power estimates obtained during the same executions, yielding the prediction error committed by the model and guiding the necessary adjustments to it.
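The event-based prediction model is not spelled out in the abstract; a common choice is a linear model fitted offline against measured power, which the following hedged Python sketch illustrates with hypothetical event names and coefficients.

```python
# Hedged sketch of an event-counter-based power model of the form
# P = c0 + sum_i(w_i * rate_i); event names and weights are assumed,
# not the project's fitted model.
EVENT_WEIGHTS = {            # watts per million events/second (assumed)
    "cycles": 0.001,
    "cache_misses": 0.01,
    "memory_accesses": 0.005,
}
IDLE_POWER_W = 0.45          # assumed static/idle term c0

def estimate_power(event_counts: dict[str, int], period_s: float) -> float:
    """Estimate average power (W) over one sampling period from event counts."""
    power = IDLE_POWER_W
    for event, weight in EVENT_WEIGHTS.items():
        rate_mps = event_counts.get(event, 0) / period_s / 1e6  # Mevents/s
        power += weight * rate_mps
    return power

if __name__ == "__main__":
    # Example: counters read once per 100 ms sampling period.
    sample = {"cycles": 60_000_000, "cache_misses": 1_200_000, "memory_accesses": 9_000_000}
    print(f"estimated power: {estimate_power(sample, 0.1):.2f} W")
```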
Abstract:
The Ribosomal Database Project (RDP-II), previously described by Maidak et al. [Nucleic Acids Res. (2000), 28, 173–174], continued during the past year to add new rRNA sequences to the aligned data and to improve the analysis commands. Release 8.0 (June 1, 2000) consisted of 16 277 aligned prokaryotic small subunit (SSU) rRNA sequences while the number of eukaryotic and mitochondrial SSU rRNA sequences in aligned form remained at 2055 and 1503, respectively. The number of prokaryotic SSU rRNA sequences more than doubled from the previous release 14 months earlier, and ~75% are longer than 899 bp. An RDP-II mirror site in Japan is now available (http://wdcm.nig.ac.jp/RDP/html/index.html). RDP-II provides aligned and annotated rRNA sequences, derived phylogenetic trees and taxonomic hierarchies, and analysis services through its WWW server (http://rdp.cme.msu.edu/). Analysis services include rRNA probe checking, approximate phylogenetic placement of user sequences, screening user sequences for possible chimeric rRNA sequences, automated alignment, production of similarity matrices and services to plan and analyze terminal restriction fragment polymorphism experiments. The RDP-II email address for questions and comments has been changed from curator@cme.msu.edu to rdpstaff@msu.edu.
Abstract:
This paper examines The Lake Project and Terminal Mirage, the two components of David Maisel's Black Maps series that concern water. Like the section of the Salt Lake chosen by Robert Smithson for his seminal Spiral Jetty, the alkaline waters Maisel photographs are subject to infestations of bacteria that give them a visceral hue. Smithson provides a reference for this work; the artists are notable for their shared site, disorienting scale, and attraction to entropy.
Abstract:
Technical Assistance Grant Project, no. 01-6109132-1.