47 results for Could computing
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system’s architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include consolidating the patient data pertaining to aneurysms within a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data thanks to the system’s ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers can seamlessly and securely access and work on data distributed across multiple sites, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to the lack of some critical resources, such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in areas such as climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations) and artificial intelligence, among others. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of the resources needed to solve them. Supercomputer centers have to face an increasing demand for processing performance, with the direct consequence of an increasing number of processors and systems, resulting in a more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Some of these problems cannot be easily solved, so grid computing, understood as a technology enabling the addition and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We also show some preliminary results of this experience and discuss to what extent these objectives were achieved.
Abstract:
Report on the scientific stay at the German Aerospace Center (DLR), Germany, during June and July 2006. The main objective of the two-month stay was to apply the techniques for GPS navigation of LEO (Low Earth Orbiter) satellites that DLR currently uses in real-time navigation. These techniques comprise the use of a dynamical model which takes into account the precise Earth gravity field, together with models for the effects which perturb the LEO’s motion: drag forces due to the Earth’s atmosphere, solar radiation pressure due to the solar radiation impacting the spacecraft, luni-solar gravity due to the attraction of the Sun and Moon, and tidal forces due to ocean and solid tides. Highly parameterized software was produced in the first part of the work and was used to assess which accuracy could be reached when exploring different models and complexities. The objective was to study accuracy versus complexity, taking into account that LEOs at different heights behave differently. In this frame, several LEOs were selected over a wide range of altitudes, and several approaches of different complexity were chosen. Complexity is a very important issue, because processors on board spacecraft have very limited computing and memory resources, so it is mandatory to keep the algorithms simple enough for the satellite to process them by itself.
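A minimal sketch of the accuracy-versus-complexity trade-off described above, written in Python: a simple Cowell-type propagator with the point-mass Earth term plus an optional J2 oblateness perturbation. The constants are standard Earth values; the orbit, step size and fixed-step RK4 integrator are illustrative assumptions, not the DLR navigation software.

    import numpy as np

    MU = 3.986004418e14      # Earth's gravitational parameter [m^3/s^2]
    RE = 6378137.0           # Earth's equatorial radius [m]
    J2 = 1.08262668e-3       # Earth's oblateness coefficient

    def acceleration(state, use_j2=True):
        """Two-body acceleration, optionally adding the J2 perturbation."""
        r = state[:3]
        rn = np.linalg.norm(r)
        a = -MU * r / rn**3                      # point-mass Earth
        if use_j2:
            x, y, z = r
            k = 1.5 * J2 * MU * RE**2 / rn**5
            a += -k * np.array([x * (1 - 5 * z**2 / rn**2),
                                y * (1 - 5 * z**2 / rn**2),
                                z * (3 - 5 * z**2 / rn**2)])
        return a

    def rk4_step(state, dt, use_j2=True):
        """One fixed-step RK4 integration of the state [x, y, z, vx, vy, vz]."""
        def f(s):
            return np.concatenate([s[3:], acceleration(s, use_j2)])
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Illustrative inclined circular orbit at 500 km, propagated with and without J2
    r0 = RE + 500e3
    v0 = np.sqrt(MU / r0)                        # circular orbital speed
    state = np.array([r0, 0.0, 0.0, 0.0, v0 * np.cos(0.9), v0 * np.sin(0.9)])
    simple, full = state.copy(), state.copy()
    for _ in range(5400):                        # ~90 min at 1 s steps
        simple = rk4_step(simple, 1.0, use_j2=False)
        full = rk4_step(full, 1.0, use_j2=True)
    print("position difference after ~90 min: %.1f km"
          % (np.linalg.norm(simple[:3] - full[:3]) / 1e3))

Switching individual force-model terms on and off in this way makes the cost of each added perturbation explicit, which is the kind of trade-off that limited on-board processing resources force.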
Abstract:
This project examines two distributed computing systems that differ from each other: Condor and BOINC. It explores the possibilities of making both systems work together, taking the most effective part of each system so that they complement one another.
Abstract:
Cooperation between libraries is a universal language spoken in different dialects. In 1996 the libraries of the state-funded universities and the National Library of Catalonia (Spain) formed the Consortium of Academic Libraries of Catalonia (CBUC) to act as a channel for cooperation. The organization and activities of CBUC are an example of how this universal language has been adapted to the specific characteristics of the libraries of Catalonia. Catalonia is an autonomous region of Spain with 7 million inhabitants, its own language, history and traditions, and a strong sense of identity that facilitates cooperation. Thanks to this (and also to the hard work of the member libraries), CBUC has since created a union catalogue, an interlibrary lending program, the Digital Library of Catalonia, a cooperative store, several cooperative repositories and other cooperation programs. One of these cooperative repositories is RACO (Catalan Journals in Open Access, www.raco.cat), where the full-text articles of Catalan scientific, cultural and scholarly journals can be consulted in open access. The main purpose of RACO is to increase the visibility and consultation of the journals included and to disseminate the scientific and academic production published in Catalonia. This purpose translates into three aims: to encourage the electronic edition of Catalan journals, to be the interface that allows searching across all the journals, and to provide the instruments for their preservation. There are currently 244 journals in RACO, comprising more than 85,000 articles (80% in OA) from 50 publishing institutions. Since it came into operation it has received more than 4 million queries. These 244 journals offer the full text of all published issues, although some journals may have a delay between the introduction of the table of contents and that of the full text for recent issues. Since 2005 a retrospective digitization plan has allowed more than 350,000 pages of back issues to be digitized. The RACO repository runs on the open-source program OJS (Open Journal Systems, http://pkp.sfu.ca/ojs/) and uses Dublin Core metadata and the interoperability protocol created by the Open Archives Initiative (OAI), which increases the visibility of the published articles by exposing them alongside other international repositories.
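Because the repository exposes Dublin Core records over OAI-PMH, harvesting its metadata needs nothing more than plain HTTP. The Python sketch below issues a ListRecords request and prints identifiers and titles; the base URL is an assumption guessed from the usual OJS layout, so check the repository's documentation for the actual endpoint.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed OAI-PMH endpoint for an OJS-based repository such as RACO; verify before use.
    BASE_URL = "https://www.raco.cat/index.php/index/oai"

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def list_records(base_url, metadata_prefix="oai_dc"):
        """Harvest one page of records via the OAI-PMH ListRecords verb."""
        url = f"{base_url}?verb=ListRecords&metadataPrefix={metadata_prefix}"
        with urllib.request.urlopen(url, timeout=30) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI + "record"):
            header = record.find(OAI + "header")
            identifier = header.findtext(OAI + "identifier")
            title = record.findtext(f".//{DC}title")
            yield identifier, title

    if __name__ == "__main__":
        for identifier, title in list_records(BASE_URL):
            print(identifier, "-", title)

A full harvester would also follow the resumptionToken returned in each response to page through the complete record set.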
Marine biotoxins in the Catalan littoral: could biosensors be integrated into monitoring programmes?
Abstract:
This article describes the electrochemical enzyme sensors and immunosensors that have been developed by our groups for the detection of the marine biotoxin okadaic acid (OA), and discusses the possibility of integrating them into monitoring programmes. The enzyme sensors for OA reported herein are based on the inhibition of immobilised protein phosphatase 2A (PP2A) by this toxin and the electrochemical measurement of the enzyme activity through the use of appropriate enzyme substrates, which are electrochemically active after dephosphorylation by the enzyme. The electrochemical immunosensors described in this article are based on a competitive indirect Enzyme-Linked ImmunoSorbent Assay (ciELISA), using alkaline phosphatase (ALP) or horseradish peroxidase (HRP) as labels, and an enzymatic recycling system with diaphorase (DI). The biosensors presented herein have been applied to the analysis of dinoflagellates, mussels and oysters. Preliminary validations with colorimetric assays and LC-MS/MS have demonstrated the possibility of using the developed biotools for the preliminary screening of marine biotoxins in field or cultured samples, offering complementary information to chromatography. In conclusion, although optimisation of some experimental parameters is still required, the integration of biosensors into monitoring programmes is viable and may provide advantages over other analytical techniques in terms of analysis time, simplicity, selectivity, sensitivity, disposability of electrodes and cost effectiveness.
Abstract:
Multiple sequence alignment applications are prototypes of applications that require high computing power and memory. They stand out for the scientific relevance of the results they provide to research in biomedicine, genetics and pharmacology. Multiple alignment applications have the limitation that they are not able to process thousands of sequences, so it is necessary to create a model to overcome this problem. Given the volume of data handled in the biological sciences and the complexity of sequence alignment algorithms, the only way to solve the problem is through the use of parallel computing environments and high-performance computing. The aim of our research is the creation of a parallel model that allows multiple alignment algorithms to increase the number of sequences they can process, while trying to maintain the quality of the results in order to guarantee scientific accuracy. The proposed model is based on clustering the input sequences using biological criteria that preserve the quality of the results. In addition, the model focuses on reducing computing time and memory consumption. To present and validate the model we use T-Coffee as the development and research platform. The proposed model could be applied to any other multiple sequence alignment algorithm.
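As a rough illustration of the first stage of such a model (splitting the input into coherent groups before alignment), the Python sketch below clusters sequences greedily by k-mer similarity. The similarity measure, threshold and sample sequences are illustrative assumptions, not the biological criteria of the actual model; each resulting cluster would then be aligned (e.g. with T-Coffee) and the sub-alignments merged.

    def kmer_set(seq, k=3):
        """Set of overlapping k-mers of a sequence (a crude composition fingerprint)."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def similarity(a, b, k=3):
        """Jaccard similarity between the k-mer sets of two sequences."""
        sa, sb = kmer_set(a, k), kmer_set(b, k)
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    def cluster(sequences, threshold=0.5):
        """Greedy clustering: each sequence joins the first cluster whose
        representative it resembles, otherwise it starts a new cluster."""
        clusters = []
        for name, seq in sequences.items():
            for members in clusters:
                rep = next(iter(members.values()))
                if similarity(seq, rep) >= threshold:
                    members[name] = seq
                    break
            else:
                clusters.append({name: seq})
        return clusters

    seqs = {
        "s1": "MKTAYIAKQRQISFVKSHFSRQ",
        "s2": "MKTAYIAKQRQISFVKAHFSRQ",
        "s3": "GAVLIMCFYWHKRQNEDST",
    }
    for i, members in enumerate(cluster(seqs)):
        print(f"cluster {i}: {sorted(members)}")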
Abstract:
Sir James Lighthill proposed in 1992 that acoustic streaming occurs in the inner ear as part of the cochlear amplifier mechanism. Here we hypothesize that some of the most ancient organisms use acoustic streaming not only for self-propulsion but also to enhance their nutrient uptake. We focus on a motile strain of Synechococcus, a cyanobacterium whose mechanism for self-propulsion is not known. Molecular motors could work like piezoelectric transducers acting on the crystalline structure surrounding the outer cell membrane. Our calculations show that a traveling surface acoustic wave (SAW) could account for the observed velocities. These SAWs would also produce a non-negligible Stokes layer surrounding the cell, with motion within this region being essentially chaotic. An acoustic streaming mechanism would therefore be biologically advantageous, enhancing localized diffusion processes and, consequently, chemical reactions. We believe that acoustic streaming produced by nanometer-scale membrane vibrations could be widespread in cell biology; other possible instances are yeast cells and erythrocytes. Flows generated by acoustic streaming may also be produced by silica-coated diatoms along their raphe. We note that microelectromechanical systems (MEMS) acoustic streaming devices were first introduced in the 1990s. Nature may have preceded this invention by 2.7 Gyr.
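A back-of-the-envelope check of why the Stokes layer mentioned above matters at cellular scale, using the standard oscillatory boundary-layer thickness; the kinematic viscosity is that of water, while the MHz drive frequency is an assumed order of magnitude for nanometre-scale membrane vibrations, not a value taken from the study.

    \[
      \delta = \sqrt{\frac{2\nu}{\omega}}
      \approx \sqrt{\frac{2 \times 10^{-6}\,\mathrm{m^2\,s^{-1}}}{2\pi \times 10^{6}\,\mathrm{s^{-1}}}}
      \approx 0.6\,\mu\mathrm{m},
    \]

a thickness comparable to the micrometre size of a Synechococcus cell, which is consistent with the claim that the oscillatory boundary layer surrounding the cell is non-negligible.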
Abstract:
This project describes the consolidation of the day-to-day monitoring needs of the ATLAS experiment from the cloud point of view. The main idea is to develop a set of collectors that gather information about data distribution and processing and about the WLCG tests (Service Availability Monitoring), storing it in dedicated databases so that the results can be shown on a single HLM (High Level Monitoring) page. Once this is achieved, the application must allow further investigation through interaction with the front-end, which will be fed by the statistics stored in the databases.
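A minimal sketch of one such collector in Python: it pulls a summary from a monitoring source and appends it to a local SQLite table that a front-end page could query. The fetch function is a stub with made-up fields and sample site rows, since the real WLCG/SAM and data-distribution feeds and their schemas are not described here.

    import sqlite3
    import time

    def fetch_sam_summary():
        """Stub for a collector's data source; a real collector would query the
        SAM / data-distribution services and normalise their responses."""
        return [
            {"site": "IFAE", "test": "SRM-put", "status": "OK"},
            {"site": "PIC",  "test": "SRM-put", "status": "WARNING"},
        ]

    def store(db_path, rows):
        """Append the collected results, timestamped, to the monitoring database."""
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS sam_results
                        (collected_at REAL, site TEXT, test TEXT, status TEXT)""")
        now = time.time()
        conn.executemany(
            "INSERT INTO sam_results VALUES (?, ?, ?, ?)",
            [(now, r["site"], r["test"], r["status"]) for r in rows])
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        store("hlm.db", fetch_sam_summary())   # one collection cycle
        # The HLM front-end would read aggregated views of this table.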
Abstract:
Hypergraph width measures are a class of hypergraph invariants important in studying the complexity of constraint satisfaction problems (CSPs). We present a general exact exponential algorithm for a large variety of these measures. A connection between these measures and tree decompositions is established. This enables us to almost seamlessly adapt the combinatorial and algorithmic results known for tree decompositions of graphs to the case of hypergraphs and obtain fast exact algorithms. As a consequence, we provide algorithms which, given a hypergraph H on n vertices and m hyperedges, compute the generalized hypertree-width of H in time O*(2^n) and compute the fractional hypertree-width of H in time O(1.734601^n · m).
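The O*(2^n) bound comes from dynamic programming over vertex subsets, the same device used for exact treewidth. The Python sketch below shows that underlying technique on ordinary graphs, via the classical elimination-ordering recurrence for treewidth; it is an illustration of the subset DP, not the paper's hypertree-width algorithm.

    from functools import lru_cache

    def treewidth_exact(n, edges):
        """Classical O*(2^n) dynamic program over vertex subsets (as bitmasks):
        treewidth = min over elimination orderings of the maximum number of
        later vertices reachable from the eliminated vertex through earlier ones."""
        adj = [set() for _ in range(n)]
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def q(prefix, v):
            """Vertices outside prefix ∪ {v} reachable from v through prefix only."""
            seen, stack, reach = {v}, [v], set()
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w in seen:
                        continue
                    seen.add(w)
                    if (prefix >> w) & 1:      # internal vertex: keep walking
                        stack.append(w)
                    else:
                        reach.add(w)           # endpoint outside the prefix
            return len(reach)

        @lru_cache(maxsize=None)
        def opt(prefix):
            if prefix == 0:
                return -1                      # empty prefix contributes nothing
            best = n
            for v in range(n):
                if (prefix >> v) & 1:
                    rest = prefix & ~(1 << v)
                    best = min(best, max(opt(rest), q(rest, v)))
            return best

        return opt((1 << n) - 1)

    # A 4-cycle has treewidth 2; a path on 4 vertices has treewidth 1.
    print(treewidth_exact(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # -> 2
    print(treewidth_exact(4, [(0, 1), (1, 2), (2, 3)]))           # -> 1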
Abstract:
DisperSATE is based on an application that could be the core of an information system to support teamwork at the BASF polymer dispersions plant in Tarragona, which is a batch-type chemical plant.
Abstract:
The initial objective of this project was to incorporate OpenGL into the existing tool for generating television graphics. Since the application already designed was based on a small set of object types, to which different parameters could be applied and which could be grouped in various ways, a new object type had to be found with some feature attractive enough to justify adding it to the application. The result was a polyhedral object onto which different textures can be mapped, and a text with a three-dimensional shadow applied according to the position of the drawing.
Abstract:
The sales director has long been asking for a corporate website where customers can consult the technical specifications of the products and place their orders. In addition, this website could serve as a showcase for new customers and enhance the company's image of quality.
Abstract:
The objective of this project is to use the new aspect-oriented programming (AOP) paradigm to carry out reengineering tasks. The aim is that, with the help of this technology, information can be extracted from the execution of an application so that the use case diagram can be obtained from that information.
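As a rough analogy of the idea (intercepting an application's execution to record which operations are triggered, without touching the business logic), the Python sketch below uses a decorator as a stand-in for a logging aspect; a real AOP setup would use an aspect weaver, and the traced functions and call chain here are made up for illustration.

    import functools

    execution_trace = []          # call records gathered at run time

    def traced(func):
        """A decorator standing in for a logging aspect: it records every call
        without modifying the business logic itself."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            execution_trace.append(func.__name__)
            return func(*args, **kwargs)
        return wrapper

    # Illustrative application code whose execution we want to reverse-engineer.
    @traced
    def login(user):
        return f"{user} logged in"

    @traced
    def place_order(user, item):
        login(user)
        return f"{user} ordered {item}"

    place_order("alice", "book")

    # The recorded trace is the raw material for deriving use-case interactions.
    print(" -> ".join(execution_trace))   # place_order -> login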
Abstract:
The project presented here is a plan for migrating physical servers to a virtualized environment wherever possible. In addition, a technological renewal of the entire server fleet has been proposed in order to save money on maintenance and energy consumption. The virtualization solution will preferably be based on free software.