974 results for Open source software -- TFG


Relevance: 100.00%

Abstract:

Cloud computing is probably the most debated topic in today's Information and Communication Technology (ICT) world. The spread of this new way of conceiving the delivery of IT services is the evolution of a series of technologies that are revolutionizing how organizations build their IT infrastructures. The advantages deriving from the use of cloud computing infrastructures include greater control over services, over the cost structure, and over the assets employed. Costs are proportional to the actual use of the services (pay-per-use), thus avoiding waste and making the sourcing system more efficient. Several companies have already begun to try out cloud services, and many others are considering starting down a similar path. The first organization to provide a cloud computing platform was Amazon, with its Elastic Compute Cloud (EC2). In July 2010 OpenStack was born, an open-source project created by merging code produced by the US government agency NASA [10] and the US hosting company Rackspace. The resulting software performs the same functions as Amazon's but, unlike it, was released under the Apache license, hence with no restrictions on use or implementation. Today the OpenStack project counts numerous partner companies, including Dell, HP, IBM, Cisco, and Microsoft.

The goal of this work is to understand and analyze how the OpenStack software operates. The main aim is to become familiar with the various components it consists of and to grasp how they interact with one another, in order to build cloud infrastructures of the Infrastructure as a Service (IaaS) type. The material is organized into the following chapters. The first chapter introduces the definition of cloud computing, discusses its main characteristics, and then describes the different service and deployment models, outlining the resulting advantages and disadvantages. The second chapter deals with one of the technologies used to build cloud computing infrastructures, virtualization, covering its various forms and types. The third chapter analyzes and describes in detail how the OpenStack project works: for each component of the software, its architecture is illustrated, complete with diagrams, together with the corresponding mechanism. The fourth chapter covers the installation and configuration of the software, and presents some tests carried out on the machine on which it was installed. Finally, the fifth chapter draws conclusions, with considerations on the objectives achieved and on the characteristics of the software under examination.
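
To make the IaaS idea concrete, here is a minimal sketch (not part of the thesis) of how an OpenStack cloud is driven programmatically with the openstacksdk Python client; the endpoint, credentials, and resource names are illustrative placeholders.

```python
# Launching a VM on an OpenStack IaaS cloud via openstacksdk.
import openstack

# Connection parameters are placeholders for a real deployment.
conn = openstack.connect(
    auth_url="http://controller:5000/v3",
    project_name="demo",
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

image = conn.compute.find_image("cirros")        # assumed image name
flavor = conn.compute.find_flavor("m1.tiny")     # assumed flavor name
network = conn.network.find_network("private")   # assumed network name

server = conn.compute.create_server(
    name="test-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # expected: ACTIVE
```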

Relevance: 100.00%

Abstract:

The goal of this research is to analyze the impact of the so-called "open" culture in light of the current state of the World Wide Web. In particular, it considers the genesis of the movement, starting from its roots in hacker culture, and its evolution into the free software philosophy, with the ultimate aim of identifying the current role of the open source model in the existing scenario. An introduction to the concept of Open Access completes the research, also considering the recent reaffirmation of knowledge as a common good within the Information Society.

Relevance: 100.00%

Abstract:

The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution with other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
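
The dissertation's ABH itself is pointer-based; purely to illustrate the addressability idea — each insertion returns a handle that later allows O(log n) cancellation, as needed for high-resolution timers — here is a compact index-backed variant in Python. This is a sketch under assumptions, not the dissertation's data structure.

```python
class _Node:
    """Handle returned to callers; tracks its own position in the heap."""
    __slots__ = ("key", "value", "idx")
    def __init__(self, key, value):
        self.key, self.value, self.idx = key, value, None

class AddressableHeap:
    """Min-heap whose entries can be removed via their handle in O(log n)."""
    def __init__(self):
        self._a = []

    def push(self, key, value):
        node = _Node(key, value)
        node.idx = len(self._a)
        self._a.append(node)
        self._sift_up(node.idx)
        return node                      # the "address" of the entry

    def pop_min(self):
        a = self._a
        self._swap(0, len(a) - 1)
        node = a.pop()
        if a:
            self._sift_down(0)
        return node.key, node.value

    def remove(self, node):
        """Cancel an entry (e.g. a pending timer) given only its handle."""
        i, a = node.idx, self._a
        self._swap(i, len(a) - 1)
        a.pop()
        if i < len(a):                   # restore heap order around slot i
            self._sift_up(i)
            self._sift_down(i)

    def _swap(self, i, j):
        a = self._a
        a[i], a[j] = a[j], a[i]
        a[i].idx, a[j].idx = i, j

    def _sift_up(self, i):
        a = self._a
        while i and a[i].key < a[(i - 1) // 2].key:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        a = self._a
        while True:
            c = 2 * i + 1                # left child; prefer smaller child
            if c + 1 < len(a) and a[c + 1].key < a[c].key:
                c += 1
            if c >= len(a) or a[i].key <= a[c].key:
                return
            self._swap(i, c)
            i = c

# Timers keyed by expiration time; t2 is cancelled before it can fire.
h = AddressableHeap()
t1, t2 = h.push(10, "watchdog"), h.push(5, "timeout")
h.remove(t2)
print(h.pop_min())                       # (10, 'watchdog')
```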

Relevance: 100.00%

Abstract:

A study of the Open Source tools used for the collaborative development of software, of their possible interactions, and of how those interactions facilitate collaborative development.

Relevance: 100.00%

Abstract:

This thesis provides an overview of the main data-analysis methodologies using the open source software R and the RStudio integrated development environment (IDE). A descriptive and quantitative analysis is carried out on GICS data, the industry taxonomy that classifies the main companies for the purposes of investment asset management and research. The main sectors of the US market were studied, considering revenue, lobbying expenditure, and three indices that measure the degree of connection between industries. Quantitative analyses were performed on these data, and some models were attempted within the framework of simple and multiple linear regression. The study aims to detect possible interactions between the variables, or patterns of strategic corporate behavior, during the period 2007-2012, years in which lobbying regulations in the United States were renewed and improved. More specifically, three sectors are examined — IT, Health Care, and Industrials — for which the trends of average income and average spending on lobbying activities are studied. The results show the usefulness of R packages for data analysis: some trends are identified which, if confirmed by further, necessary analyses, could be of interest for understanding not only the mechanisms implicit in lobbying but also anomalous behavior linked to this activity.
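
For flavor, here is a simple-linear-regression sketch of the kind of model the thesis fits in R (e.g. lm(revenue ~ lobbying)), written in Python on synthetic data; every number below is made up for illustration.

```python
# Ordinary least squares: revenue = b0 + b1 * lobbying + noise.
import numpy as np

rng = np.random.default_rng(0)
lobbying = rng.uniform(1, 10, size=50)                   # hypothetical spend
revenue = 3.0 + 0.8 * lobbying + rng.normal(0, 1, 50)    # hypothetical revenue

# Design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones_like(lobbying), lobbying])
(b0, b1), *_ = np.linalg.lstsq(X, revenue, rcond=None)

residuals = revenue - (b0 + b1 * lobbying)
r2 = 1 - residuals.var() / revenue.var()                 # coefficient of determination
print(f"intercept={b0:.2f}  slope={b1:.2f}  R^2={r2:.2f}")
```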

Relevance: 100.00%

Abstract:

Networks must be able to handle the traffic patterns generated by new applications; for this reason, Software Defined Networking (SDN), a new way of conceiving networks, is attracting a level of interest unprecedented in the history of the Internet. SDN is a paradigm that separates the control plane from the data plane, allowing the network to be controlled from a single centralized device, the controller. In this thesis we examine two specific case studies, to show how SDN can provide the best support for solving the problems of traditional architectures, together with a useful tool for designing SDNs. First, Procera is analyzed: used in home and campus networks, it demonstrates that the complexity of an entire network can be reduced. Then AgNos is considered, an architecture based on actions carried out by agents; it proves an excellent working tool both because the agents are implemented in the network controllers and because AgNos has the peculiarity of providing the user (or the system) with a stable level of concreteness. Two common problems on the Internet are also analyzed: (1) the mitigation of DDoS attacks, where SDN domains cooperate to filter packets at the source in order to avoid resource exhaustion, and (2) the implementation of a prevention mechanism that tackles DoS attacks in their initial phase, making the attack easier to handle. The last topic covered is Mininet, an excellent working tool that makes it possible to emulate network topologies consisting of hosts, switches, and controllers created in software. It is an excellent tool for implementing SDN networks and is very useful for development, teaching, and research, thanks to its open source nature.
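
For reference, this is the kind of minimal Mininet script the abstract alludes to: a software-emulated topology of hosts and switches handed to an external SDN controller. The controller address, port, and topology size are arbitrary choices for the sketch.

```python
# Emulate a two-switch, four-host network attached to a remote SDN
# controller (e.g. one implementing a DDoS filtering policy).
# Mininet needs root privileges: run with sudo.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import Topo
from mininet.cli import CLI

class TwoSwitchTopo(Topo):
    def build(self):
        s1, s2 = self.addSwitch("s1"), self.addSwitch("s2")
        # h1, h2 hang off s1; h3, h4 hang off s2.
        for i, sw in enumerate((s1, s1, s2, s2), start=1):
            self.addLink(self.addHost(f"h{i}"), sw)
        self.addLink(s1, s2)

net = Mininet(
    topo=TwoSwitchTopo(),
    controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6633),
)
net.start()
net.pingAll()   # basic reachability test across the emulated network
CLI(net)        # drop into the interactive Mininet shell
net.stop()
```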

Relevance: 100.00%

Abstract:

Software evolution research has focused mostly on analyzing the evolution of single software systems. However, it is rarely the case that a project exists standalone, independent of others. Rather, projects exist in parallel within larger contexts in companies, research groups, or even open-source communities. We call these contexts software ecosystems, and in this paper we present The Small Project Observatory, a prototype tool which aims to support the analysis of project ecosystems through interactive visualization and exploration. We present a case study of exploring an ecosystem using our tool, describe the architecture of the tool, and distill the lessons learned during the tool-building experience.

Relevance: 100.00%

Abstract:

Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence, most visualizations tend to use a layout in which position and distance have no meaning, and the layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography, and illustrate its use with practical examples from numerous open-source case studies.
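
The pipeline described (vocabulary → LSI → MDS → 2-D map positions) can be sketched with scikit-learn; the paper's own implementation is not shown here, and the toy "artifact vocabularies" below are invented.

```python
# tf-idf vectors -> LSI (truncated SVD) -> MDS down to 2-D coordinates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import MDS

artifacts = {
    "Parser.java":   "token stream parse grammar syntax tree",
    "Lexer.java":    "token scan character stream keyword",
    "Renderer.java": "paint canvas pixel draw color buffer",
    "Canvas.java":   "canvas draw paint widget resize",
}

tfidf = TfidfVectorizer().fit_transform(artifacts.values())
lsi = TruncatedSVD(n_components=3).fit_transform(tfidf)   # latent topic space
coords = MDS(n_components=2, random_state=0).fit_transform(lsi)

# Artifacts with similar vocabulary land close together on the map.
for name, (x, y) in zip(artifacts, coords):
    print(f"{name:>14}: ({x:+.2f}, {y:+.2f})")
```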

Relevance: 100.00%

Abstract:

Software evolution, and particularly its growth, has mainly been studied at the file level (also sometimes referred to as the module level). In this paper we propose to move from the physical level towards one that includes semantic information, by using functions or methods to measure the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; that when they do, their number of modifications is correlated with their size; that each function is modified by very few authors; and finally, that the departure of a developer from a software project slows the evolution of the functions that she authored.
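
To illustrate the function-level granularity (the study itself analyzes C projects over full version histories), here is a toy sketch that diffs two revisions of a Python module by function rather than by line or file; the file paths are placeholders.

```python
# Classify functions as added, removed, or modified between two revisions.
import ast

def function_index(source):
    """Map each function name to a dump of its AST (signature + body)."""
    tree = ast.parse(source)
    return {n.name: ast.dump(n) for n in ast.walk(tree)
            if isinstance(n, ast.FunctionDef)}

old = function_index(open("rev1/module.py").read())  # placeholder paths
new = function_index(open("rev2/module.py").read())

added    = new.keys() - old.keys()
removed  = old.keys() - new.keys()
modified = {f for f in old.keys() & new.keys() if old[f] != new[f]}

print(f"added={len(added)} removed={len(removed)} modified={len(modified)}")
```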

Relevance: 100.00%

Abstract:

When developing new IT products, the reusability of existing components is a key aspect that can considerably improve the success rate. This has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and must also have federation support, for integration with other sources of artifacts. The solution we propose achieves these goals by working with OSGi components and by using OSGi itself.
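
The transitive resolution the repository automates boils down to a graph walk. Below is a compact, self-contained sketch; the component graph is made up (in OSGi the edges would come from bundle manifests, e.g. Import-Package/Require-Bundle declarations).

```python
# Transitive dependency resolution by depth-first traversal.
# Assumes an acyclic dependency graph, as a well-formed repository would have.
DEPS = {
    "app":         ["http-client", "json"],
    "http-client": ["io-core"],
    "json":        ["io-core"],
    "io-core":     [],
}

def resolve(component, seen=None):
    """Return an install order with every dependency before its dependents."""
    seen = set() if seen is None else seen
    order = []
    for dep in DEPS[component]:
        if dep not in seen:
            seen.add(dep)
            order += resolve(dep, seen)
    return order + [component]

print(resolve("app"))  # ['io-core', 'http-client', 'json', 'app']
```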

Relevance: 100.00%

Abstract:

The worldwide "hyper-connection" of every object around us is the challenge that the Internet of Things paradigm promises to address. If the Internet has colonized the daily life of more than 2,000 million people around the globe, the Internet of Things faces the challenge of connecting more than 100,000 million "things" by 2020. The technologies underlying the Internet of Things are the cornerstone that promises to solve interrelated global problems such as exponential population growth, energy management in cities, and environmental sustainability in the medium and long term. On the one hand, this project has the goal of acquiring knowledge about the prototyping technologies available on the market for the Internet of Things. On the other hand, it focuses on the development of a system for managing the devices of a Wireless Sensor and Actuator Network, so as to offer services accessible from the Internet. To accomplish these objectives, the project begins with a detailed analysis of various open source hardware platforms that encourage the creative development of applications and automatically extract information from the environment around them for transmission to external systems. In addition, web platforms aligned with the Internet of Things philosophy, which enable the mass storage of the data that different hardware platforms transfer over the Internet, are studied. The project culminates in the proposal and specification of a service-oriented software architecture for embedded systems that allows communication between the devices on the network and the transmission of data to external systems, and that abstracts the complexity of the hardware away from application developers.
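
As a sketch of the service-oriented idea — nothing here is from the project itself; Flask, the endpoint path, and the sensor reading are all assumptions — each capability of a node can be exposed as a small web service reachable from the Internet:

```python
# Expose a sensor reading as an HTTP/JSON service.
from flask import Flask, jsonify

app = Flask(__name__)

def read_temperature_sensor():
    """Placeholder for the hardware-specific driver the architecture hides."""
    return 21.5

@app.get("/sensors/temperature")
def temperature():
    # Clients see a uniform HTTP/JSON contract, never the radio or the MCU.
    return jsonify(value=read_temperature_sensor(), unit="celsius")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```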

Relevance: 100.00%

Abstract:

One of the objectives of the European Higher Education Area is the promotion of collaborative and informal learning through the implementation of educational practices, and 3D virtual environments are an ideal space for such activities. On the other hand, the funding problems of Spanish universities have led to a search for new ways to optimize the available resources. The Technical University of Madrid requires the use of laboratories which, due to their dangerousness, duration, or the control of the processes involved, are difficult to carry out in real life. For this reason, we have developed several 3D laboratories in a virtual environment, built on the open source platform OpenSim. In this paper we describe the use of the OpenSim platform for these new teaching experiences and the new design of the software architecture. This architecture requires adapting the platform to the needs of the users and of the different laboratories of our university. We explain the structure of the implemented architecture and the process of creating and configuring it. The proposed architecture is decentralized: each laboratory is hosted in a different educational center. The architecture adds several services, among others the automated creation and management of users, and communication between external services and platforms written in different programming languages. In this way, we improve the user experience and extend the functionality of the laboratories.
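
One way the automated user management mentioned above could be driven is OpenSim's RemoteAdmin XML-RPC interface. This is a heavily hedged sketch: the endpoint, shared secret, and exact parameter names are assumptions to be checked against the deployed OpenSim version, not the paper's code.

```python
# Create a student avatar through OpenSim's RemoteAdmin XML-RPC endpoint.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://opensim.example.org:9000/")
result = proxy.admin_create_user({
    "password": "remote-admin-secret",    # RemoteAdmin shared secret (assumed)
    "user_firstname": "Lab",
    "user_lastname": "Student01",
    "user_password": "initial-pass",
    "user_email": "student01@example.org",
})
print(result)  # expected to include a success flag and the new avatar's UUID
```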

Relevance: 100.00%

Abstract:

Nowadays, numerous systems (financial, industrial production, basic services infrastructure, etc.) depend on software. According to the definition of software engineering given by I. Sommerville, "Software engineering is an engineering discipline that is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use." "Software engineering is not just concerned with the technical processes of software development. It also includes activities such as software project management and the development of tools, methods, and theories to support software production." Software development process models establish a set of guidelines for carrying out a software development project successfully. Since these process models emerged, new ways of managing a project and producing quality software have been investigated. First the so-called heavyweight or traditional methodologies appeared; with time and technological progress, new ones emerged: the so-called agile methodologies. Agile methodologies promote, among other practices, continuous integration. This practice was coined by Martin Fowler and aims to make teamwork easier and to automate integration tasks. Continuous integration is based on building projects automatically and frequently, promoting the detection of errors at an early stage so that fixing them can be given priority. Nevertheless, one of the keys to success in any software project is to use a working environment that facilitates, systematizes, and helps to apply a development process efficiently. This Final Degree Project (FDP) aims to analyze different tools in order to configure a working environment that makes it possible to develop projects applying agile methodologies and continuous integration in an easy and efficient way. Once the tools had been analyzed, a working environment was proposed and configured for deployment and use. A noteworthy characteristic of this FDP is that the tools analyzed share a common and highly valuable quality: they are open source. The proposed working environment has a client-server architecture, since most software projects are developed by a team: the server gives the various clients/developers access to the set of tools that make up the environment. The server side supports continuous integration through tools for version control, user story management, software metrics analysis, and build automation. The client configuration only requires an integrated development environment (IDE) that supports the Java programming language and a connection to the server.
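
A toy sketch of what the server side automates on each integration — fetch the latest sources, build, run the tests, report — follows; the repository URL is hypothetical, and `mvn -B verify` stands in for whatever build tool the Java project actually uses.

```python
# Minimal continuous-integration step: clone, build, test, report.
import subprocess, sys, tempfile

REPO = "https://git.example.org/team/project.git"   # hypothetical repository

def run(cmd, cwd=None):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, cwd=cwd).returncode

def integrate():
    with tempfile.TemporaryDirectory() as workdir:
        if run(["git", "clone", "--depth", "1", REPO, workdir]) != 0:
            return False
        # For a Java project this could be a Maven build;
        # `mvn -B verify` compiles the code and runs the test suite.
        return run(["mvn", "-B", "verify"], cwd=workdir) == 0

if __name__ == "__main__":
    ok = integrate()
    print("BUILD", "SUCCESS" if ok else "FAILURE")
    sys.exit(0 if ok else 1)
```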

Relevance: 100.00%

Abstract:

This TFG is framed in the context of synthetic biology (more specifically, in protocol automation) and represents a step forward in this field. It consists of a management platform for automated laboratories. The resulting technology will help the operator coordinate the machines available in a laboratory when executing an experiment based on a synthetic biology protocol. Nowadays, biological experiments have a very low success rate in conventional laboratories, due to the number of external factors that intervene during the protocol. Moreover, these experiments are expensive and require an operator monitoring every phase of the protocol. Laboratory automation can increase the success rate, as well as reduce costs and risks for workers in the laboratory environment. The approach proposed here divides the various entities of a laboratory into functional units, which are the elements to be coordinated by the tool resulting from this TFG. To give the tool flexibility, a service-oriented architecture (SOA) is used. Each functional unit deploys a web service that publishes its functionality to the rest of the laboratory. SOA is essential for communication between machines, since it abstracts away the type of machine involved and how its functionality is implemented. The main difficulty of this TFG lies in dealing with the integration and coordination of the various functional units so as to manage the lifecycle of an experiment properly. To this end, an analysis of the available free software tools was carried out. Apache Camel was finally chosen as the framework on which to build the specific tool proposed in this TFG. Apache Camel plays a fundamental role in this project: it establishes the connection layers to the various services and routes the appropriate messages to each service based on the content of the input file. For the prototype, a set of web services was developed to allow tests and proof-of-concept demonstrations of the tool itself. In addition, a preliminary version of the web application that the laboratory operator will use to manage requests was developed, deciding which protocol is executed next and following the experiment's task flow.
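
The platform's routing is written as Apache Camel routes (in Java); purely to illustrate the content-based routing those routes perform, here is a conceptual Python equivalent in which each step of an incoming protocol file is dispatched to the web service of the right functional unit. The service URLs and the protocol format are invented for the sketch.

```python
# Content-based routing of protocol steps to functional-unit web services.
import json
import urllib.request

UNIT_SERVICES = {                # functional unit -> its (illustrative) service
    "pipetting":  "http://lab.example.org/pipetting/run",
    "incubation": "http://lab.example.org/incubator/run",
    "imaging":    "http://lab.example.org/microscope/run",
}

def route(protocol_json):
    """Dispatch each protocol step to its unit, in order (Camel-style)."""
    for step in json.loads(protocol_json)["steps"]:
        url = UNIT_SERVICES[step["unit"]]      # choice driven by content
        req = urllib.request.Request(
            url, data=json.dumps(step).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(step["unit"], "->", resp.status)

route('{"steps": [{"unit": "pipetting", "volume_ul": 50},'
      ' {"unit": "incubation", "minutes": 30}]}')
```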

Relevance: 100.00%

Abstract:

imaxin|software is a company founded in 1997 by four computer engineering graduates whose goal has been to develop educational multimedia video games and multilingual natural language processing. Seventeen years later, we have developed reference multilingual resources, tools, and applications for different languages: Portuguese (Galicia, Portugal, Brazil, etc.), Spanish (Spain, Argentina, Mexico, etc.), English, Catalan, and French. In this article we describe the main milestones in the incorporation of these NLP technologies into the industrial and institutional sectors.