Abstract:
Octopus vulgaris on-growing in floating cages is a promising activity implemented in Spain at industrial level, with productions of 16-32 tons/year since 1998. Nevertheless, some aspects of the culture system need to be evaluated to guarantee its profitability. In the present work, absolute growth rate (AGR, g/day) and mortality (%) at two initial rearing densities, 10 and 17 kg/m3, were compared under two feeding regimes over 15 weeks. One diet was composed of bogue, supplied as a "discarded" species from local fish farms. The other diet was based on a 40-60% mixture of discarded bogue and the crab Portunus pelagicus. Half of the reared octopuses were PIT-tagged and two sampling points were established along the experimental period. Regardless of dietary treatment, up to the 11th week growth was 19 and 13 g/day for the low and high rearing densities, respectively. On the other hand, up to the 11th week mortality was higher in the group fed the control diet (30%), reaching 74-84% by the end of the experiment regardless of rearing density and dietary treatment, which could suggest some nutritional imbalance of the tested diets.
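For reference, the absolute growth rate quoted above is conventionally computed from the wet weight gained over an interval; a minimal formulation (notation assumed here, not given in the abstract) is

AGR = \frac{W_f - W_i}{\Delta t} \quad [\mathrm{g/day}],

where W_i and W_f are the initial and final wet weights in grams and \Delta t is the length of the interval in days.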
Abstract:
[ES] The aim of this work is to upgrade an existing database management environment to version 11.2 of the Oracle database software and to a latest-generation hardware platform. Several databases scattered across different servers are migrated, with zero downtime, to a consolidated environment of two nodes arranged in "active-active" high availability using Oracle RAC, backed by a fully independent contingency environment synchronized in real time with Oracle GoldenGate. The current environment is studied and, based on a growth estimate, a minimum hardware and software configuration is proposed to implement the requirements of the database management environment in the short and medium term with guarantees of success. Once the hardware has been acquired, the operating system is installed, updated and configured, together with redundant server access to the storage array. The Oracle cluster software and the database software are then installed, and an instance is created to host the required schemas of the databases to be consolidated. Next, the schemas are migrated to the consolidated environment and their real-time replication to the contingency machine is established, using Oracle GoldenGate in both cases. Finally, a backup scheme is created and tested that includes logical and physical copies of the database itself and of the cluster configuration files, from which the whole environment can be restored.
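As a purely illustrative aside (not part of the original work), a common sanity check once this kind of consolidation and GoldenGate replication is in place is to compare per-table row counts between the consolidated database and the contingency copy. A minimal sketch using the python-oracledb driver, with hypothetical connection data and schema name, might look like this:

# Minimal sketch: compare row counts between primary and contingency DBs.
# Connection parameters and the schema name are hypothetical placeholders.
import oracledb

PRIMARY_DSN = "rac-scan.example.com:1521/consolidated"
STANDBY_DSN = "contingency.example.com:1521/consolidated"
SCHEMA = "APP_SCHEMA"

def row_counts(dsn, user, password, schema):
    counts = {}
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        cur = conn.cursor()
        cur.execute("SELECT table_name FROM all_tables WHERE owner = :o", o=schema)
        for (table,) in cur.fetchall():
            c2 = conn.cursor()
            c2.execute(f'SELECT COUNT(*) FROM {schema}."{table}"')
            counts[table] = c2.fetchone()[0]
    return counts

primary = row_counts(PRIMARY_DSN, "monitor", "secret", SCHEMA)
standby = row_counts(STANDBY_DSN, "monitor", "secret", SCHEMA)
for table in sorted(primary):
    if primary[table] != standby.get(table):
        print(f"MISMATCH {table}: {primary[table]} vs {standby.get(table)}")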
Abstract:
[ES] The application Gráfico interactivo de las relaciones derivativas entre palabras del español focuses on processing the information gathered in the thesis Sistema computacional de la gestión morfológica del español. Drawing on the information contained in that thesis, this project takes any word entered by the user, looks up its canonical form and returns, in graphical form, the family of words related to it as a graph. The goal of this web application is to make reading and querying that information pleasant. To achieve this, a very simple user interface has been developed in which the user only has to enter a word and request its relations with a click. Once the search has been performed, the application lets the user interact with the graph, moving it around the screen at will; it also shows the children of a leaf node when the pointer hovers over it, and double-clicking on a leaf node, whether it is a child or a grandchild of the root, returns a new graph with the selected node as its root, showing its children and grandchildren if it has any.
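As an illustration only (the data and names below are hypothetical, not taken from the thesis), the kind of two-level word-family graph described above can be sketched with a plain adjacency list and a bounded breadth-first traversal:

# Sketch: build a depth-2 "word family" view rooted at a canonical form.
# The derivational data here is a tiny hypothetical sample.
from collections import deque

DERIVATIONS = {              # canonical form -> directly derived words
    "flor": ["florero", "floral", "florecer"],
    "florecer": ["florecimiento", "floreciente"],
    "floral": [],
}

def family_graph(root, max_depth=2):
    """Return {word: children} limited to max_depth levels below the root."""
    graph, queue = {}, deque([(root, 0)])
    while queue:
        word, depth = queue.popleft()
        children = DERIVATIONS.get(word, []) if depth < max_depth else []
        graph[word] = children
        for child in children:
            queue.append((child, depth + 1))
    return graph

print(family_graph("flor"))   # root, its children and grandchildren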
Abstract:
[EN] One of the main issues of the current education system is the lack of student motivation. This aspect, together with the permanent change that Information and Communications Technologies involve, represents a major challenge for the teacher: to continuously update contents and to keep the students' interest alive. A tremendously useful tool in classrooms is the integration of projects with participative and collaborative dynamics, where the teacher acts mainly as a guide to the students' activity rather than as a mere transmitter of knowledge and evaluation. As a specific example of project-based learning, the EDUROVs project consists of building an inexpensive underwater robot using low-cost materials, while allowing the integration and programming of many accessories and sensors on a minimal budget using open-source hardware and software.
Abstract:
ALICE, an experiment at CERN's LHC, is specialized in analyzing lead-ion collisions. ALICE will study the properties of the quark-gluon plasma, a state of matter in which quarks and gluons, under conditions of very high temperature and density, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of 6 cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point, and it also aligns the tracks of the particles detected by the outer detectors. The two middle ITS layers contain all 260 SDD detectors. A multichannel readout board, called CARLOSrx, simultaneously receives the data coming from 12 SDD detectors; in total, 24 CARLOSrx boards are needed to read the data coming from all the SDD modules (detector plus front-end electronics). CARLOSrx packs the data coming from the front-end electronics through optical link connections, stores them in a large data FIFO and then sends them to the DAQ system. Each CARLOSrx is composed of two boards: one, called CARLOSrx data, reads the data coming from the SDD detectors and configures the FEE; the other, called CARLOSrx clock, sends the clock signal to all the FEE. This thesis contains a description of the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which handle the whole SDD readout chain. A description of the software tools needed to test and configure the front-end electronics is presented at the end of the thesis.
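As a purely schematic illustration of the data flow described above (only the channel count is taken from the abstract; the fragment structure below is invented and does not reflect the real CARLOSrx data format), the pack-into-a-FIFO-and-forward stage can be sketched as follows:

# Schematic sketch of a multichannel readout stage: 12 input links are
# packed into one FIFO and drained towards a DAQ consumer.
from collections import deque

NUM_LINKS = 12

class ReadoutBoard:
    def __init__(self, fifo_depth=1024):
        self.fifo = deque(maxlen=fifo_depth)   # large data FIFO

    def pack(self, link_id, payload):
        """Tag a fragment with its source link and push it into the FIFO."""
        self.fifo.append({"link": link_id, "payload": payload})

    def drain(self, send_to_daq):
        """Forward everything currently buffered to the DAQ system."""
        while self.fifo:
            send_to_daq(self.fifo.popleft())

board = ReadoutBoard()
for link in range(NUM_LINKS):
    board.pack(link, payload=bytes([link]) * 4)   # fake fragments
board.drain(lambda fragment: print(fragment["link"], fragment["payload"]))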
Abstract:
This work describes the development of a simulation tool which allows simulating the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulation has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology in which the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, using the mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, makes it possible to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, in the latter case allowing the evaluation of the performance and operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of its higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured for both off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
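To make the closed loop described above concrete (the toy model, signal names and numbers below are invented for illustration and are not taken from the thesis), one HIL-style iteration can be sketched as: read the actuator command the ECU would issue, advance a simple plant model by one real-time step, and return the simulated sensor reading:

# Minimal sketch of a hardware-in-the-loop iteration with a toy engine model.
# Parameters and signal names are illustrative placeholders.
DT = 0.001            # real-time step [s]
INERTIA = 0.2         # crankshaft inertia [kg*m^2]
LOAD_TORQUE = 15.0    # constant resistive torque [N*m]

def engine_step(speed, throttle):
    """Advance a zero-dimensional engine model by one time step.
    speed [rad/s], throttle in [0, 1]; returns the new speed."""
    engine_torque = 120.0 * throttle          # crude torque map
    accel = (engine_torque - LOAD_TORQUE) / INERTIA
    return max(0.0, speed + accel * DT)

def hil_iteration(read_ecu_actuators, write_sensor_signals, state):
    """One loop turn: ECU outputs in, simulated sensor values out."""
    throttle = read_ecu_actuators()           # would come from the ECU board
    state["speed"] = engine_step(state["speed"], throttle)
    write_sensor_signals(rpm=state["speed"] * 60 / (2 * 3.14159))
    return state

# Stand-ins for the real-time I/O boards:
state = {"speed": 100.0}
for _ in range(3):
    state = hil_iteration(lambda: 0.4, lambda **s: print(s), state)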
Abstract:
The evolution of embedded electronics applications forces electronic system designers to match ever-increasing requirements. This evolution pushes up the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches used to match these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility which affects non-recurring engineering costs, time to market, and market volumes as well. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both solutions benefit from post-fabrication programmability, which ultimately results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on exploring the design and application spectrum of reconfigurable computing exploited as application-specific acceleration for multi-processor systems on chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework, which is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems by exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges. One of these was timekeeping. In this regard, a further contribution is a novel data structure, called addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
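As an informal aside (this is the generic textbook formulation of the policy, not the framework described in the dissertation), the core decision of G-EDF on m processors is simply to run, at every scheduling point, the m ready jobs with the earliest absolute deadlines:

# Generic G-EDF selection sketch: pick the m ready jobs with the earliest
# absolute deadlines. The job data is a hypothetical example.
import heapq

def gedf_select(ready_jobs, num_cpus):
    """ready_jobs: list of (absolute_deadline, job_id) tuples.
    Returns the jobs that should occupy the num_cpus processors now."""
    return heapq.nsmallest(num_cpus, ready_jobs)

ready = [(105.0, "tau1"), (92.5, "tau2"), (130.0, "tau3"), (99.0, "tau4")]
print(gedf_select(ready, num_cpus=2))   # -> tau2 and tau4 run, the others wait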
Abstract:
Nuclear magnetic resonance (NMR) is a versatile technique that relies on spin-carrying nuclei. Since its discovery, NMR has become an indispensable tool in countless applications in physics, chemistry, biology and medicine. The main problem of NMR is its low sensitivity, owing to the very small energy splitting at room temperature. For proton spins, which possess the largest gyromagnetic ratio, the degree of polarization is only ~7*10^(-5) even in the largest available magnetic fields (24 T). Given this low inherent polarization, a theoretical sensitivity enhancement of more than 10^4 is therefore possible. In this work, various technical aspects and different polarizing agents for Dynamic Nuclear Polarization (DNP) were investigated. The technical development of the mobile setup comprises the use of a new Halbach magnet, the construction of new probe heads and the automation of the experiments by means of a LabVIEW-based program. Furthermore, two new polarizing agents with special characteristics were tested for Overhauser and low-temperature DNP. In addition, the feasibility of NMR experiments on heteronuclei (19F and 13C) in the mobile setup at 0.35 T was demonstrated. These results illustrate the potential of the DNP polarization technique when heteronuclei with a small gyromagnetic ratio have to be polarized. The resulting gain in sensitivity should enable many new applications, especially in medicine.
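For context (a standard textbook relation, not stated in the abstract), the thermal-equilibrium polarization of spin-1/2 nuclei that underlies the ~7*10^(-5) figure follows from the Boltzmann distribution:

P = \tanh\!\left(\frac{\gamma \hbar B_0}{2 k_B T}\right) \approx \frac{\gamma \hbar B_0}{2 k_B T},

which for protons (\gamma \approx 2.675\times 10^{8}\ \mathrm{rad\,s^{-1}\,T^{-1}}) at room temperature and fields of a few tens of tesla gives polarizations on the order of 10^{-5} to 10^{-4}, consistent with the value quoted above.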
Abstract:
The LHC particle accelerator at CERN in Geneva enables highly relevant studies in the field of subnuclear physics. The detector plays an extremely important role in this field, which is why state-of-the-art technologies are used in its construction. It is likewise essential to have a data acquisition system that is as modern and, above all, as efficient as possible. Such a system is in fact needed to manage all the electrical signals resulting from the conversion of the physical event, a step required to make the quantities of interest measurable and quantifiable. In particular, this thesis follows the testing of the ROD boards of the ATLAS IBL experiment, which aims to verify their correct functionality before they are shipped to the CERN laboratories. These new boards handle the signals arriving from the ATLAS Pixel Detector and then forward them to the computers for subsequent processing. A similar system was already implemented and working, but the degradation of the chips caused a loss of performance, which made it necessary to insert an additional layer. The new layer of pixel detectors, called the Insertable Barrel Layer (IBL), thus brings a technological and performance upgrade to the ATLAS Pixel Detector, restoring the effectiveness of the system.