374 results for Production engineering Data processing


Relevance: 100.00%

Abstract:

A study of the standards defined by the Open Geospatial Consortium, focusing on the Web Processing Service (WPS) standard. The project also had a practical component: the design and development of a client able to consume Web services created according to WPS, integrated into the gvSIG platform.
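As a rough illustration of what such a client must do first, the sketch below builds a WPS 1.0.0 GetCapabilities request using KVP (key-value pair) encoding, which is how a client discovers the processes a server offers. The endpoint URL is a placeholder, not one from the project.

```python
from urllib.parse import urlencode

def wps_get_capabilities_url(base_url):
    """Build a WPS 1.0.0 GetCapabilities request URL (KVP encoding).

    These three parameters are required by the OGC WPS specification;
    the server answers with an XML document listing its processes.
    """
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "GetCapabilities",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint, for illustration only.
url = wps_get_capabilities_url("http://example.org/wps")
```

A real client such as the one described would then parse the returned capabilities document and issue DescribeProcess and Execute requests against the processes it finds.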

Relevance: 100.00%

Abstract:

The goal of this project is to use the new aspect-oriented programming (AOP) paradigm for reengineering tasks. The aim is that, with the help of this technology, information can be extracted from an application's execution, so that the use-case diagram can be derived from that information.
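The core idea, recording what an application does without modifying its code, can be sketched in a few lines. This is not the project's AspectJ-style implementation, just a minimal Python analogue: a tracing decorator plays the role of the aspect, and the collected call sequence is the raw material from which a use-case diagram could later be reconstructed.

```python
import functools

TRACE = []  # execution log gathered by the tracing "aspect"

def trace(func):
    """Minimal tracing aspect: record every call to the decorated
    function without touching the function's own body, mimicking
    AOP's separation of concerns."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append(func.__name__)
        return func(*args, **kwargs)
    return wrapper

# Hypothetical application functions, for illustration only.
@trace
def login(user):
    return f"welcome {user}"

@trace
def checkout(cart):
    return len(cart)

login("ana")
checkout(["book"])
# TRACE now holds the observed call sequence: ['login', 'checkout']
```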

Relevance: 100.00%

Abstract:

A thorough study of graphical user interfaces for the industrial sector. It analyses the most common user profile(s) in this sector (their characteristics and their needs), presents and describes several design guidelines and graphical elements that meet a set of predefined requirements, assembles a worked example through a series of screens (whose operation is explained and justified) and, finally, proposes a method for validating the design, a method that may lead to changes to the initial design.

Relevance: 100.00%

Abstract:

The goal of this final-year project (TFC) is to create a suite that covers the entire production line of a podcast: capture of a live audio signal, transcoding, classification, storage and, finally, distribution over the Internet.
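The production line described can be pictured as a chain of stages, each consuming the previous stage's output. The sketch below uses hypothetical stage names and trivial dictionary payloads; the real suite would wrap actual capture, transcoding and publishing tools.

```python
# Each stage is a function taking the episode so far and enriching it.
# Stage names and fields are illustrative, not the project's API.
def capture():
    return {"signal": "raw-audio"}

def transcode(item):
    return {**item, "format": "mp3"}

def classify(item):
    return {**item, "category": "news"}

def store(item):
    return {**item, "stored": True}

def publish(item):
    return {**item, "published": True}

episode = capture()
for stage in (transcode, classify, store, publish):
    episode = stage(episode)
# episode now carries the result of the whole production line
```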

Relevance: 100.00%

Abstract:

This Final Year Project (TFC) focuses on the management of a project to implement a Repository of Digital Learning Objects at a university, and falls within the Project Management area of the Technical Engineering degree in Management Informatics.

Relevance: 100.00%

Abstract:

The system described here represents the first example of a recommender system for digital ecosystems in which agents negotiate services on behalf of small companies. The small companies compete not only on price or quality, but through wider service-by-service compositions built by subcontracting with other companies. The final result of these offerings depends on negotiations at the scale of millions of small companies. This scale requires new platforms for supporting digital business ecosystems, as well as related services such as OpenID, trust management, monitors and recommenders. This is done in the Open Negotiation Environment (ONE), an open-source platform that allows agents, on behalf of small companies, to negotiate and use the ecosystem services, and that enables the development of new agent technologies. The methods and tools of cyber-engineering are necessary to build open negotiation environments that are stable, a basic condition for predictable and reliable business environments. Aiming to build stable digital business ecosystems by means of improved collective intelligence, we introduce a model of negotiation-style dynamics from the point of view of computational ecology. This model inspires an ecosystem monitor as well as a novel negotiation-style recommender. The ecosystem monitor provides hints to the negotiation-style recommender to achieve greater stability of an open negotiation environment in a digital business ecosystem. Greater stability gives the small companies higher predictability, and therefore better business results. The negotiation-style recommender is implemented with a simulated annealing algorithm at a constant temperature, and its impact is shown by applying it to a real case of an open negotiation environment populated by Italian companies.
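Simulated annealing at a constant temperature reduces to the Metropolis acceptance rule applied with a fixed T, so the search never "freezes" and keeps exploring alternative negotiation styles. The sketch below shows only that acceptance step, under assumed cost semantics (lower is better); it is not the paper's recommender.

```python
import math
import random

def accept(current_cost, candidate_cost, temperature, rng):
    """Metropolis acceptance at a fixed temperature T: improvements
    (delta <= 0) are always accepted; worse candidates are accepted
    with probability exp(-delta / T), so exploration never stops."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

rng = random.Random(42)
improved = accept(10.0, 8.0, temperature=1.0, rng=rng)  # always True
```

Because T never decreases, the balance between exploration and exploitation stays constant over time, which suits a recommender that must keep reacting to an ecosystem that never settles.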

Relevance: 100.00%

Abstract:

Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements and we illustrate a data flux proposal.
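A pipelined, concurrent data system of the kind described can be mocked up with threads connected by queues: one stage per module, each consuming upstream output. This is only a toy analogue of the architecture, with made-up stage functions standing in for the acquisition and star-processing modules.

```python
import queue
import threading

def stage(fn, inq, outq):
    """One pipelined stage: consume items from inq, transform them
    with fn, and hand the result downstream. A None item signals end
    of data; it is forwarded so downstream stages shut down too."""
    def run():
        while True:
            item = inq.get()
            if item is None:
                outq.put(None)
                break
            outq.put(fn(item))
    t = threading.Thread(target=run)
    t.start()
    return t

acq_q, proc_q, out_q = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    stage(lambda x: {"star": x}, acq_q, proc_q),             # acquisition
    stage(lambda d: {**d, "reduced": True}, proc_q, out_q),  # processing
]
for i in range(3):        # three simulated star readouts
    acq_q.put(i)
acq_q.put(None)           # end of data
for t in threads:
    t.join()
results = [out_q.get() for _ in range(3)]
```

Because each stage runs in its own thread, acquisition of readout N+1 can overlap with processing of readout N, which is the point of pipelining in a real-time system.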

Relevance: 100.00%

Abstract:

This work proposes a parallel architecture for a motion estimation algorithm. It is well known that image processing requires a huge amount of computation, mainly at the low-level stages where the algorithms deal with great numbers of pixels. One of the solutions for estimating motion involves detecting the correspondences between two images. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach for reducing the computation time. This work introduces a parallel, real-time implementation of these low-level tasks, carried out from the moment the current image is acquired by the camera until the pairs of point matchings are detected.
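A standard way to detect such correspondences is block matching: take a patch from one frame and search the other frame for the position with the lowest sum of absolute differences (SAD). The sketch below is a sequential reference version; since every candidate position is evaluated independently, the double loop is exactly the regular structure that parallelizes well.

```python
import numpy as np

def best_match(block, search_img):
    """Exhaustive correspondence search: slide `block` over
    `search_img` and return the top-left position with minimum sum
    of absolute differences (SAD). Candidate positions are
    independent, so this loop nest is trivially parallelizable."""
    bh, bw = block.shape
    H, W = search_img.shape
    best, best_pos = None, None
    for y in range(H - bh + 1):
        for x in range(W - bw + 1):
            sad = np.abs(search_img[y:y + bh, x:x + bw] - block).sum()
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

img = np.zeros((8, 8)); img[3:5, 4:6] = 1.0   # feature in frame 2
blk = np.ones((2, 2))                          # feature from frame 1
pos = best_match(blk, img)                     # where the feature moved to
```

The displacement between the block's original position and `pos` is the estimated motion vector for that point.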

Relevance: 100.00%

Abstract:

Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is the family of kernel-based methods. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task.

Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of dimensionality reduction. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any of the datasets. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify the samples with higher or lower values of the variables analyzed.

Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
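The two-step scheme (one kernel per dataset, then a combination) followed by kernel PCA can be sketched with NumPy alone. The choices below, linear kernels and a plain unweighted sum as the combination rule, are illustrative assumptions; the paper's variable-representation plots are not reproduced here.

```python
import numpy as np

def linear_kernel(X):
    """Linear kernel matrix K[i, j] = <x_i, x_j> for one dataset."""
    return X @ X.T

def kernel_pca(K, n_components=2):
    """Kernel PCA on a precomputed kernel: double-center K, then
    eigendecompose and scale the leading eigenvectors to get the
    sample coordinates in the principal subspace."""
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 5))   # first data source (20 samples)
X2 = rng.normal(size=(20, 3))   # second data source, same samples
# Step 1: one kernel per dataset; step 2: combine them (here, a sum).
K = linear_kernel(X1) + linear_kernel(X2)
coords = kernel_pca(K, n_components=2)   # shared low-dimensional view
```

Weighted kernel combinations, or nonlinear kernels per data type, slot into the same two steps without changing the PCA code.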

Relevance: 100.00%

Abstract:

DnaSP is a software package for a comprehensive analysis of DNA polymorphism data. Version 5 implements a number of new features and analytical methods allowing extensive DNA polymorphism analyses on large datasets. Among other features, the newly implemented methods allow for: (i) analyses on multiple data files; (ii) haplotype phasing; (iii) analyses on insertion/deletion polymorphism data; (iv) visualizing sliding window results integrated with available genome annotations in the UCSC browser.
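To make the sliding-window idea concrete, the sketch below counts segregating (polymorphic) sites per window along a toy alignment; this is a generic illustration of the kind of scan DnaSP visualizes, not DnaSP's own algorithm or statistics.

```python
def sliding_window_segregating_sites(seqs, window, step):
    """Count segregating sites in sliding windows along an alignment.
    `seqs` is a list of equal-length sequences; a site is segregating
    when more than one distinct base occurs in the column."""
    length = len(seqs[0])
    results = []
    for start in range(0, length - window + 1, step):
        s = sum(
            len({seq[i] for seq in seqs}) > 1
            for i in range(start, start + window)
        )
        results.append((start, s))
    return results

# Toy alignment of three sequences with two polymorphic sites.
aln = ["ACGTACGT",
       "ACGAACGT",
       "ACGTACGA"]
windows = sliding_window_segregating_sites(aln, window=4, step=4)
# → one segregating site in each of the two windows
```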

Relevance: 100.00%

Abstract:

This project describes the consolidation of the ATLAS experiment's daily monitoring needs from the cloud point of view. The main idea is to develop a set of collectors that gather information on data distribution and processing and on the WLCG tests (Service Availability Monitoring), storing it in dedicated databases so that the results can be shown on a single HLM (High Level Monitoring) page. Once that is achieved, the application must make it possible to investigate further through interaction with the front-end, which will be fed by the statistics stored in the database.
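The collector-plus-summary pattern can be reduced to a few lines. The sketch below uses an in-memory dictionary as a stand-in for the project's databases and invented site names; it only illustrates the flow from per-test results to a single high-level availability figure per site.

```python
from collections import defaultdict

db = defaultdict(list)   # stand-in for the backing database

def collect(site, test_ok):
    """Collector: record one SAM-style test result for a site."""
    db[site].append(test_ok)

def high_level_view():
    """Summarize per-site availability (% of passed tests), the kind
    of figure a single HLM page would display."""
    return {site: 100.0 * sum(r) / len(r) for site, r in db.items()}

# Hypothetical sites and test outcomes, for illustration only.
collect("CERN", True)
collect("CERN", True)
collect("PIC", False)
view = high_level_view()   # {'CERN': 100.0, 'PIC': 0.0}
```

Drilling down from the HLM page then amounts to querying the stored per-test rows for a site instead of the aggregate.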

Relevance: 100.00%

Abstract:

This project covers the design and development of a prototype Methodology for Assessing Environmental Learning, which we call "MEVA-Ambiental". To make this possible we relied on ontological and constructivist foundations to represent and analyse knowledge, in order to quantify the Knowledge Increment (IC). For us, the IC becomes a socio-educational indicator that serves to determine, as a percentage, the effectiveness of environmental education workshops. Proceeding this way, the resulting scores can be taken as a starting point for longitudinal studies to understand how new knowledge is "anchored" to the learners' cognitive structure. Beyond the theoretical formulation of the method, we also provide the technical solution that shows how functional and applicable the empirical part of the methodology is. This solution, which we have called "MEVA-Tool", is a virtual tool that automates data collection and processing, with a dynamic structure based on web questionnaires filled in by the students, a database that accumulates the information and allows selective filtering, and an Excel workbook that handles the reporting, the graphical representation of the results, and the analysis and conclusions.
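One plausible way to express a percentage-style Knowledge Increment is as the pre-to-post gain relative to the headroom left by the pre-test score. The formula below is an illustrative assumption, not the definition used by MEVA-Ambiental, whose IC is derived from its own ontological analysis of the questionnaires.

```python
def knowledge_increment(pre, post, max_score):
    """Hypothetical IC reading: the pre-to-post gain as a percentage
    of the headroom (max_score - pre) available before the workshop.
    This formula is an assumption for illustration only."""
    if max_score == pre:
        return 0.0
    return 100.0 * (post - pre) / (max_score - pre)

# A learner moving from 4/10 to 7/10 realizes half the headroom.
ic = knowledge_increment(pre=4, post=7, max_score=10)   # 50.0
```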