992 results for Reconfigurable hardware


Relevance:

10.00%

Publisher:

Abstract:

With this final master thesis we contribute to the Asterisk open source project. Asterisk is an open source project whose main objective is to develop an IP telephony platform implemented entirely in software (and therefore not hardware dependent) under an open license such as the GPL. The project was started in 1999 by the software engineer Mark Spencer at Digium. Its main motivation was that the telecommunications sector lacked open solutions: most of the available solutions were based on proprietary standards, which are closed and incompatible with one another. Behind the Asterisk project there is a company, Digium, which has led the project since it originated in its laboratories. The company dedicates some of its employees full time to contributing to Asterisk, and it also provides the infrastructure the open source project requires. However, due to the open source nature of Asterisk, Digium's business is not based on licensing products, but on offering services around Asterisk and on designing and selling hardware components to be used with it. The Asterisk project has grown considerably since its birth, offering in its latest versions advanced call-management functionality and compatibility with hardware that was previously exclusive to proprietary solutions. As a result, Asterisk is becoming a serious alternative to these proprietary solutions, since it has reached a level of maturity that makes it very stable. In addition, being open source, it can be fully customized to a given requirement, which may be impossible with proprietary solutions. Given the scale the project is reaching, more and more companies that develop value-added software for telephony platforms are seriously evaluating making their software fully compatible with Asterisk platforms.
All these factors make Asterisk a consolidated project in constant evolution, working to offer the functionality provided by proprietary solutions. This final master thesis is divided into two complementary blocks. In the first block we analyze Asterisk as an open source project and as a telephony platform (PBX). As a result of this analysis we produce a document, written in English because that is the Asterisk project's official language, which future contributors can use as a starting point when joining Asterisk. In the second block we make a development contribution to the Asterisk project. There are several ways to contribute, such as fixing bugs, developing new functionality, or starting an Asterisk satellite project. The type of contribution will depend on the needs of the project at that moment.

Relevance:

10.00%

Publisher:

Abstract:

The rapid growth of multicore systems, and the diverse approaches they have taken, mean that complex processes that previously could only be executed on supercomputers can now run on low-cost solutions, also known as "commodity hardware". Such solutions can be built with the processors in highest demand on the mass consumer market (Intel and AMD). When scaling these solutions to scientific computing requirements, it becomes essential to have methods for measuring the performance they offer and the way they behave under different workloads. Given the large number of workload types on the market, and even within scientific computing, it is necessary to establish "typical" measurements that can support the evaluation and procurement of solutions with a high degree of confidence in how they will perform. This work proposes a practical approach to such evaluation and presents the results of tests executed on AMD and Intel multicore architectures.
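The kind of workload-scaling measurement described above can be sketched as timing one CPU-bound kernel with process pools of increasing size; the kernel and sizes below are illustrative placeholders, not the benchmark suite used in the study:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def workload(n):
    """CPU-bound kernel standing in for a scientific computation."""
    s = 0
    for i in range(1, n):
        s += i * i % 7
    return s

def measure(num_workers, tasks=4, n=100_000):
    """Wall-clock time to finish `tasks` identical jobs with a pool
    of `num_workers` worker processes."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        list(pool.map(workload, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    base = measure(1)
    for workers in (1, 2, 4):
        t = measure(workers)
        print(f"{workers} worker(s): {t:.2f}s, speedup {base / t:.2f}x")
```

Repeating this over several workload types yields the per-architecture scaling profiles the abstract refers to.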

Relevance:

10.00%

Publisher:

Abstract:

The goal of this final-year project (PFC) is to develop a rain system for video games and virtual reality applications that is accurate both in its visual realism and in its behavior. The project will let game developers add rain zones of varying intensity to their applications using the most modern graphics hardware, so that the rain is not processed by the CPU and therefore cannot slow down the game being created. Two systems have been developed: a rain editor and a real-time viewer.

Relevance:

10.00%

Publisher:

Abstract:

Creating and then managing an institutional repository makes no sense unless it holds a significant number of documents and the collection grows steadily, at a reasonable cost. Based on a study of institutional repositories and of heritage collections of libraries, archives and museums in Spain, the authors share their reflections under the premise of building sustainable repositories: promoting self-sufficiency in growing their holdings, securing permanent funding from the host institution and, above all, ensuring that the deposited documents are used by the community the institution serves. After a brief review of how the open access philosophy has been adapted in existing repositories, a roadmap is laid out for designing and implementing a new repository, covering the strategic and legal framing of the project, the most popular hardware and software options, the planning of workflows, and the adoption of descriptive and interoperability metadata. Strategies for repository dissemination and evaluation are presented. Finally, basic recommendations on digital preservation are offered, pending a global solution.

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Posttraumatic painful osteoarthritis of the ankle joint after fracture-dislocation often has to be treated with arthrodesis. In the presence of major soft tissue lesions and important bone loss, the technique used to achieve arthrodesis has to be chosen carefully in order to prevent hardware failure, infection of bulky implants, or non-union. Methods: We present the case of a 53-year-old biker suffering a fracture-dislocation of the ankle associated with a major degloving injury of the heel. After initial immobilization of the lesion by external fixation in Spain, the patient was transferred to our hospital for further treatment. The degloving injury of the heel, with MRSA infection, was initially treated by repeated débridement, changes to the configuration of the external fixator, and antibiotic therapy, with a favourable outcome. Because of the bony lesions, reconstruction of the ankle joint was judged not to be an option and arthrodesis was planned. Due to the poor soft-tissue situation, standard open fixation with plate and/or screws was not desirable, so intramedullary nailing was chosen. However, the use of a standard retrograde arthrodesis nail comes with two problems: 1) risk of infection of the heel part of the calcaneus/nail in an unstable soft tissue situation with a protruding nail; and 2) talo-calcaneal arthrodesis of an initially healthy subtalar joint. Given the unstable plantar/heel flap, it was decided to perform ankle arthrodesis by means of an antegrade nail with static fixation in the talus and in the proximal tibia. Results: This operation was performed with minimal opening at the ankle site in order to remove the remaining cartilage and improve direct bone-to-bone contact. Arthrodesis was achieved by means of an antegrade T2 Stryker tibial nail. One year after the antegrade nailing the patient walks without pain for up to 4 hours, with a heel of good quality, and arthrodesis is achieved.
Conclusion: Tibiotalar arthrodesis in the presence of major soft tissue lesions and bone loss can be successfully achieved with antegrade nailing.

Relevance:

10.00%

Publisher:

Abstract:

A graphics processing unit (GPU) is a hardware device normally used to manipulate computer memory for the display of images. GPU computing is the practice of using a GPU device for scientific or general-purpose computations that are not necessarily related to the display of images. Many problems in econometrics have a structure that allows for successful use of GPU computing. We explore two examples. The first is simple: repeated evaluation of a likelihood function at different parameter values. The second is a more complicated estimator that involves simulation and nonparametric fitting. We find speedups from 1.5 up to 55.4 times, compared to computations done on a single CPU core. These speedups can be obtained with very little expense, energy consumption, and time dedicated to system maintenance, compared to equivalent-performance solutions using CPUs. Code for the examples is provided.
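The first example (repeated evaluation of a likelihood at many parameter values) is embarrassingly parallel, which is why it maps well onto a GPU. A minimal CPU-side sketch of that structure, using NumPy broadcasting in place of GPU kernels (the Gaussian model, grid, and names are illustrative, not the paper's code):

```python
import numpy as np

def gaussian_loglik(data, mus, sigmas):
    """Evaluate the Gaussian log-likelihood of `data` at a whole grid of
    (mu, sigma) candidates in one vectorized pass; each grid point is
    independent, which is what makes the workload GPU-friendly."""
    x = np.asarray(data)[:, None]          # shape (n, 1)
    mu = np.asarray(mus)[None, :]          # shape (1, k)
    sigma = np.asarray(sigmas)[None, :]    # shape (1, k)
    ll = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
    return ll.sum(axis=0)                  # one log-likelihood per grid point

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=1000)
mus = np.linspace(0.0, 2.0, 201)           # candidate means
sigmas = np.full_like(mus, 2.0)            # sigma held at its true value
best = mus[np.argmax(gaussian_loglik(data, mus, sigmas))]
```

On a GPU the same computation would dispatch one thread (or block) per grid point; the data layout and the absence of cross-point dependencies carry over unchanged.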

Relevance:

10.00%

Publisher:

Abstract:

The Mechatronics Research Centre (MRC) owns a small-scale robot manipulator called a Mini-Mover 5. This robot arm is a microprocessor-controlled, six-jointed mechanical arm designed to provide an unusual combination of dexterity and low cost. The Mini-Mover 5 is operated by a number of stepper motors and is controlled through a PC parallel port via a discrete logic board. The manipulator also has an impoverished array of sensors. This project requires that a new control board and suitable software be designed to allow the manipulator to be controlled from a PC. The control board will also provide a mechanism for values measured by sensors to be returned to the PC. The project considers: stepper motor control requirements, sensor technologies, power requirements, USB protocols, USB hardware and software development, and control requirements (e.g. sample rates). This report also reviews the history and background of robots and explains how stepper motors and the parallel port work.
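Driving a stepper motor from such a board amounts to walking a coil-energizing sequence in order. The snippet below is a hypothetical illustration with a stubbed port write (the pin-to-coil mapping and delay are assumptions, not the project's actual driver):

```python
import time

# Full-step drive sequence for a four-coil unipolar stepper:
# one bit per coil, energized one at a time (mapping is illustrative).
FULL_STEP_SEQUENCE = [0b0001, 0b0010, 0b0100, 0b1000]

def write_port(value):
    """Stub for the output write; on real hardware this would poke the
    parallel-port data register through a driver."""
    print(f"port <- {value:04b}")

def step(n_steps, delay_s=0.005):
    """Advance the motor n_steps (negative for reverse) by walking the
    coil pattern; the delay bounds the step rate the motor can follow."""
    direction = 1 if n_steps >= 0 else -1
    phase = 0
    for _ in range(abs(n_steps)):
        phase = (phase + direction) % len(FULL_STEP_SEQUENCE)
        write_port(FULL_STEP_SEQUENCE[phase])
        time.sleep(delay_s)
    return phase

step(8)   # two full electrical cycles forward
```

Half-stepping (energizing adjacent coil pairs between full steps) doubles the sequence length for finer resolution; the walking logic stays the same.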

Relevance:

10.00%

Publisher:

Abstract:

A study has been carried out to understand what PLC (power-line communication) is, what types exist, the regulations surrounding it, and what hardware it uses, in order to then create and analyze two case studies: a home network, and a newly constructed building whose future buyers will be offered Internet at any power socket in the building.

Relevance:

10.00%

Publisher:

Abstract:

This work analyzes the most popular tunnels supported by the hardware commonly deployed on the Guifi.net network.

Relevance:

10.00%

Publisher:

Abstract:

Abstract This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the application. For example, in a P2P file sharing application, while the user is downloading some file, the application is in parallel serving that file to other users. Such peers could have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user could decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically embedded in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can serve as the basis of several P2P applications. Our data replication solution increases availability and reduces the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer.
Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment; each protocol is evaluated through a set of simulations. The adaptiveness of our solutions comes from taking the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that yields an approximated view of the system or of part of it, including the topology and the reliability of components expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays chosen to maximize the broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources; these resources are modeled as quotas of messages reflecting the receiving and sending capacities at each node. To allow deployment in a large-scale system, we take the available memory at processes into account by limiting the view each one must maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, based on a propagation overlay that tends toward the global tree overlay and adapts to constraints of the underlying system. At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, the solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
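The core routing idea, selecting tree paths that maximize a reliability expressed as a product of link reliabilities, can be sketched as a shortest-path computation on -log(p) edge weights; the graph and probabilities below are illustrative, not taken from the thesis:

```python
import heapq
import math

def max_reliability_tree(links, root):
    """Build a broadcast tree rooted at `root` that maximizes the
    probability of each node receiving a message, where a path's
    reliability is the product of its link reliabilities.
    Maximizing such a product is equivalent to running Dijkstra
    on edge weights -log(p)."""
    graph = {}
    for u, v, p in links:
        graph.setdefault(u, []).append((v, p))
        graph.setdefault(v, []).append((u, p))
    best = {root: 1.0}          # best known delivery probability per node
    parent = {root: None}       # tree overlay: who forwards to whom
    heap = [(0.0, root)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > -math.log(best[u]) + 1e-12:
            continue            # stale heap entry (lazy deletion)
        for v, p in graph[u]:
            rel = best[u] * p
            if rel > best.get(v, 0.0):
                best[v] = rel
                parent[v] = u
                heapq.heappush(heap, (-math.log(rel), v))
    return parent, best

links = [("A", "B", 0.9), ("A", "C", 0.5), ("B", "C", 0.8), ("C", "D", 0.9)]
parent, reliability = max_reliability_tree(links, "A")
```

Here `parent` encodes the tree: C receives through B (reliability 0.9 * 0.8 = 0.72) rather than over the weaker direct A-C link (0.5), and D inherits that path. The thesis's solutions additionally constrain this choice by per-node message quotas, which this sketch omits.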

Relevance:

10.00%

Publisher:

Abstract:

Critical real-time embedded (CRTE) systems require safe and tight worst-case execution time (WCET) estimations to provide the required safety levels while keeping costs low. However, CRTE systems need increasing performance to satisfy the demands of existing and new features. Such performance can only be achieved with more aggressive hardware architectures, which are much harder to analyze from a WCET perspective; the main features considered are cache memories and multi-core processors. Thus, although such features provide higher performance, current WCET analysis methods are unable to provide tight WCET estimations for them. In fact, WCET estimations become worse than for simpler and less powerful hardware. The main reason is that hardware behavior is deterministic but unknown, so the worst-case behavior must be assumed most of the time, leading to large WCET estimations. The purpose of this project is to develop new hardware designs, together with WCET analysis tools, able to provide tight and safe WCET estimations. To do so, those pieces of hardware whose behavior is not easily analyzable, due to the lack of accurate information during WCET analysis, are enhanced to produce probabilistically analyzable behavior. Thus, even if the worst-case behavior cannot be removed, its probability can be bounded, and hence a safe and tight WCET can be provided for a particular safety level, in line with the safety levels of the remaining components of the system. During the first year of the project we developed most of the evaluation infrastructure as well as the hardware techniques to analyze cache memories. During the second year those techniques were evaluated, and new purely software techniques were developed.
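The probability-bounding idea can be illustrated with a toy model: if a randomized cache makes each access miss independently with a known probability, the chance of exceeding any time budget is a binomial tail, and a probabilistic WCET is the smallest budget whose exceedance probability falls below a target. This is a simplified sketch of the principle, not the project's analysis tool:

```python
from math import comb

def exceedance_prob(n_accesses, p_miss, hit_cycles, miss_cycles, budget):
    """Probability that total execution time exceeds `budget` cycles,
    assuming each of the n_accesses memory accesses misses independently
    with probability p_miss (the property a randomized cache is designed
    to guarantee). With k misses, time = (n-k)*hit + k*miss cycles."""
    prob = 0.0
    for k in range(n_accesses + 1):
        time_k = (n_accesses - k) * hit_cycles + k * miss_cycles
        if time_k > budget:
            prob += comb(n_accesses, k) * p_miss**k * (1 - p_miss) ** (n_accesses - k)
    return prob

def probabilistic_wcet(n_accesses, p_miss, hit_cycles, miss_cycles, target=1e-9):
    """Smallest time budget whose exceedance probability is below `target`:
    a WCET estimate valid at that safety level."""
    for k in range(n_accesses + 1):
        budget = (n_accesses - k) * hit_cycles + k * miss_cycles
        if exceedance_prob(n_accesses, p_miss, hit_cycles, miss_cycles, budget) < target:
            return budget
    return n_accesses * miss_cycles  # absolute worst case, exceedance 0
```

The resulting budget is typically far below the absolute worst case (all accesses missing), which is the tightness gain the abstract describes: the worst case is not removed, only bounded to a negligible probability.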

Relevance:

10.00%

Publisher:

Abstract:

The project presented corresponds to the Computer Engineering degree and covers several aspects of developing a web application to manage quotes for the company Ncora. It presents a requirements study and analysis, the purchase of servers, the installation and configuration of those servers in the hardware environment of Ncora's cloud, the analysis and design of the database, and the custom development of the software application. The main goal of the project is to save time when creating quotes and to enable quick searches of existing quotes and of their components.

Relevance:

10.00%

Publisher:

Abstract:

A traditional photonic-force microscope (PFM) produces huge data sets that require tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us in interactively guiding the bead inside living cells and collecting information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
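For the offline check mentioned above, the classical route from recorded bead positions to the full 3x3 stiffness matrix (including the off-diagonal cross-terms) is the equipartition relation K = kB*T * C^-1 for a harmonic trap, where C is the position covariance. The sketch below illustrates that relation on a synthetic trajectory; it is not the paper's analog processor, and the numbers are illustrative:

```python
import numpy as np

kB_T = 4.11e-21  # thermal energy at ~298 K, in joules

def stiffness_matrix(positions):
    """Estimate the full 3x3 trap stiffness matrix (N/m), including
    off-diagonal cross-terms, from recorded bead positions (m), using
    the equipartition relation K = kB*T * C^-1 for a harmonic trap."""
    C = np.cov(np.asarray(positions).T)   # 3x3 position covariance, m^2
    return kB_T * np.linalg.inv(C)

# Synthetic bead trajectory in a slightly anisotropic harmonic trap:
rng = np.random.default_rng(1)
true_K = np.diag([1e-6, 2e-6, 0.5e-6])   # N/m
C_true = kB_T * np.linalg.inv(true_K)    # equilibrium position covariance
positions = rng.multivariate_normal(np.zeros(3), C_true, size=20000)
K_est = stiffness_matrix(positions)
```

An anisotropic intracellular environment would show up as unequal diagonal entries and nonzero cross-terms in `K_est`, which is exactly the information the real-time analog system extracts continuously.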

Relevance:

10.00%

Publisher:

Abstract:

CSCL applications are complex distributed systems that pose special requirements towards achieving success in educational settings. Flexible and efficient design of collaborative activities by educators is a key precondition for providing tailorable CSCL systems, capable of adapting to the needs of each particular learning environment. Furthermore, some parts of those CSCL systems should be reused as often as possible in order to reduce development costs. In addition, it may be necessary to employ special hardware devices or computational resources that reside in other organizations, or that even exceed the possibilities of one specific organization. Therefore, the proposal of this paper is twofold: collecting collaborative learning designs (scripting) provided by educators, based on well-known best practices (collaborative learning flow patterns) in a standard way (IMS-LD), in order to guide the tailoring of CSCL systems by selecting and integrating reusable CSCL software units; and implementing those units in the form of grid services offered by third-party providers. More specifically, this paper outlines a grid-based CSCL system having these features and illustrates its potential scope and applicability by means of a sample collaborative learning scenario.

Relevance:

10.00%

Publisher:

Abstract:

A new multimodal biometric database, designed and acquired within the framework of the European BioSecure Network of Excellence, is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Signature and fingerprint data have also been acquired with both the desktop PC and the mobile portable hardware; additionally, hand and iris data were acquired in the second scenario using the desktop PC. Acquisition was conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.