854 results for Video on demand
Abstract:
This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need for software installation and configuration on each computer. Several mechanisms were created: a mapping of the resources used by the application, to improve software distribution and startup; a virtualization middleware that provides all the resources needed for the software to execute; an asynchronous P2P transport used to optimize distribution over the network; and off-line support, so that the user can run the application even when the server is unavailable or the machine is off the network. © Springer-Verlag Berlin Heidelberg 2010.
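As a rough illustration of the off-line support idea (a minimal sketch only; the server URL, cache path, and class names below are hypothetical and not taken from the paper), a launcher can try the deployment server first and fall back to a locally cached copy of the virtualized package:

```java
// Hedged sketch of off-line fallback: fetch the package when on-line,
// otherwise reuse the local cache. Names are illustrative placeholders.
import java.io.*;
import java.net.*;
import java.nio.file.*;

public class OfflineAwareLauncher {
    private static final String SERVER = "http://deploy.example.org/packages/editor.pkg"; // hypothetical
    private static final Path CACHE = Paths.get(System.getProperty("user.home"),
            ".appcache", "editor.pkg");

    public static void main(String[] args) throws IOException {
        Path pkg;
        try {
            pkg = downloadToCache(SERVER, CACHE);    // on-line: refresh the cached package
            System.out.println("Fetched latest package from server.");
        } catch (IOException serverUnavailable) {
            if (!Files.exists(CACHE)) throw serverUnavailable;  // nothing to fall back to
            pkg = CACHE;                             // off-line: reuse the cached package
            System.out.println("Server unreachable, running from local cache.");
        }
        launch(pkg);
    }

    private static Path downloadToCache(String url, Path target) throws IOException {
        Files.createDirectories(target.getParent());
        URLConnection conn = new URL(url).openConnection();
        conn.setConnectTimeout(2000);
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        return target;
    }

    private static void launch(Path pkg) {
        // In the described system, the virtualization middleware would mount the
        // package and supply the resources the application needs at this point.
        System.out.println("Launching application from " + pkg);
    }
}
```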
Abstract:
We propose an efficient scheduling scheme that optimizes advance-reserved lightpath services in reconfigurable WDM networks. A re-optimization approach is devised to reallocate network resources for dynamic service demands while keeping the already-determined schedule unchanged.
Abstract:
Background: Handling Totally Implantable Access Ports (TIAP) is a nursing procedure that requires skill and knowledge to avoid adverse events. No studies addressing this procedure with undergraduate students were identified prior to this study. Communication technologies, such as videos, have been increasingly adopted in the teaching of nursing and have contributed to the acquisition of competencies for clinical performance. Objective: To evaluate the effect of a video on the puncture and heparinization of TIAP in the development of cognitive and technical competencies of undergraduate nursing students. Method: Quasi-experimental study with a pretest-posttest design. Results: 24 individuals participated in the study. Anxiety scores were kept at levels 1 and 2 in the pretest and posttest. In relation to cognitive knowledge concerning the procedure, the proportion of correct answers in the pretest was 0.14 (SD=0.12) and 0.90 in the posttest (SD=0.05). After watching the video, the average score obtained by the participants in the mock session was 27.20. Conclusion: The use of an educational video with a simulation of puncture and heparinization of TIAP proved to be a strategy that increased both cognitive and technical knowledge. This strategy is viable in the teaching-learning process and is useful as a support tool for professors and for the development of undergraduate nursing students. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Pneumatic nebulization is the most common method of introducing liquid samples in plasma spectrometry. Despite the known limitations of these systems, such as high sample losses, these nebulizers are widely used because of their robustness. Flow-rate-dependent aerosol characteristics and pump-induced signal fluctuations have so far limited further development, and these problems become more severe the further the necessary miniaturization of such systems progresses. The novel approach of this work is based on the use of modified inkjet printer cartridges for dispensing pL droplets. A custom-developed microcontroller enables the operation of matrix-coded HP45 cartridges with full access to all essential operating parameters. A novel aerosol transport chamber allowed the droplet generation system to be coupled efficiently to an ICP-MS. Compared with conventional and miniaturized nebulizers, the resulting drop-on-demand (DOD) system shows a markedly increased sensitivity (8-18x, depending on the element) at slightly elevated but essentially comparable signal noise. In addition, the large number of degrees of freedom gives the system outstanding flexibility: the flow rate can be varied over a wide range (5 nL - 12.5 µL min⁻¹) without affecting the primary aerosol characteristics, which are set by the user through the choice of the electrical parameters. Compared with the pneumatic reference system, the developed sample introduction system is less susceptible to matrix effects when real samples with high contents of dissolved solids are analyzed. Five metals at trace concentrations (Li, Sr, Mo, Sb, and Cs) were thus correctly quantified in only 12 µL of urine reference material using external calibration without matrix matching, whereas the pneumatic reference system required the more laborious standard addition method and more than 250 µL of sample volume for an accurate determination of the analytes. Furthermore, a novel calibration strategy based on the dosing frequency of a dual DOD system is presented. In this approach, only one standard solution and one blank solution are needed, instead of a series of standards of different concentrations, to generate a linear calibration function. In addition, extensive noise spectra were recorded with a custom-developed time-resolved ICP-MS. From these, the cause of the increased signal noise of the DOD was identified: it is mainly caused by the temporally non-equidistant arrival of the droplets at the detector. This measurement technique also allows the detection of individually introduced droplets, enabling a comparison of the volume distribution of the droplets detected by ICP-MS with that of the generated droplets characterized optically. This tool is extremely helpful for diagnostic investigations; besides elucidating aerosol transport processes, these studies allowed the transport efficiency of the DOD to be determined, which reaches up to 94 vol.%.
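One plausible reading of the dual-DOD frequency-based calibration (the abstract does not give the exact formula, so this is an assumption-level sketch): if one cartridge dispenses a standard of concentration c_std at droplet frequency f_std and the second dispenses blank at f_blank, and both produce droplets of equal volume V_d, the effective concentration delivered to the plasma is

```latex
% Assumes equal droplet volume V_d from both cartridges (illustrative only).
\[
  c_\mathrm{eff}
  = c_\mathrm{std}\,\frac{f_\mathrm{std}\,V_d}{(f_\mathrm{std}+f_\mathrm{blank})\,V_d}
  = c_\mathrm{std}\,\frac{f_\mathrm{std}}{f_\mathrm{std}+f_\mathrm{blank}}
\]
```

so sweeping the frequency ratio from 0 to 1 would trace a linear calibration from the blank level up to c_std using only the two solutions.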
Abstract:
A microfluidic hydrogen generator is presented in this work. Its fabrication, characterization, and integration with a micro proton exchange membrane (PEM) fuel cell are described. Hydrogen gas is generated by the hydrolysis of aqueous ammonia borane. Gas generation, as well as the circulation of ammonia borane from a rechargeable fuel reservoir, is performed without any power consumption. To achieve this, directional growth and selective venting of hydrogen gas are maintained in the microchannels, which results in the circulation of fresh reactant from the fuel reservoir. In addition to this self-circulation mechanism, the hydrogen generator has been demonstrated to self-regulate gas generation to meet the demands of a connected micro fuel cell. All of this is done without parasitic power consumption from the fuel cell. Results show its feasibility for high-impedance system applications. Lastly, recommendations for improvements and suggestions for future work are given.
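For context (not stated in the abstract), the commonly cited stoichiometry of ammonia borane hydrolysis is

```latex
% Commonly cited overall hydrolysis reaction of ammonia borane.
\[
  \mathrm{NH_3BH_3} + 2\,\mathrm{H_2O}
  \;\longrightarrow\;
  \mathrm{NH_4^{+}} + \mathrm{BO_2^{-}} + 3\,\mathrm{H_2}\uparrow
\]
```

i.e. roughly three moles of hydrogen gas per mole of ammonia borane, which is what makes the compound attractive as an on-demand hydrogen source for micro fuel cells.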
Abstract:
The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission. The MCN project was launched in November 2012 for a period of 36 months. In total, 19 top-tier partners from industry and academia have committed to jointly establishing the vision of Mobile Cloud Networking and to developing a fully cloud-based mobile communication and application platform.
Abstract:
Background: The optimal defence hypothesis (ODH) predicts that tissues that contribute most to a plant's fitness and have the highest probability of being attacked will be the parts best defended against biotic threats, including herbivores. In general, young sink tissues and reproductive structures show stronger induced defence responses after attack from pathogens and herbivores and contain higher basal levels of specialized defensive metabolites than other plant parts. However, the underlying physiological mechanisms responsible for these developmentally regulated defence patterns remain unknown. Scope: This review summarizes current knowledge about optimal defence patterns in above- and below-ground plant tissues, including information on basal and induced defence metabolite accumulation, defensive structures and their regulation by jasmonic acid (JA). Physiological regulations underlying developmental differences of tissues with contrasting defence patterns are highlighted, with a special focus on the role of classical plant growth hormones, including auxins, cytokinins, gibberellins and brassinosteroids, and their interactions with the JA pathway. By synthesizing recent findings about the dual roles of these growth hormones in plant development and defence responses, this review aims to provide a framework for new discoveries on the molecular basis of patterns predicted by the ODH. Conclusions: Almost four decades after its formulation, we are just beginning to understand the underlying molecular mechanisms responsible for the patterns of defence allocation predicted by the ODH. A requirement for future advances will be to understand how developmental and defence processes are integrated.
Abstract:
A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computers are expected to solve complex real-world problems (requiring high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide according to simple bio-operations, but only the fit words (much as in natural selection) are kept for the next configuration. As a computational tool, a NEP defines a parallel and distributed architecture for symbolic processing, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency, and universality have been studied and proved extensively. We can therefore consider the theoretical NEP model to have reached maturity. The main motivation of this End of Degree Project is to propose a practical approach that bridges the gap from the theoretical NEP model to a real implementation that can run on high-performance computing platforms, in order to solve the complex problems today's society demands. So far, the tools developed to simulate the NEP model, while correct and yielding satisfactory results, are usually tied to their execution environment, either through the use of specific hardware or through problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months and is currently in its second iteration, having left the prototype phase behind. Nepfix is designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix consists of two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP model family.
Additionally, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix, by means of a scripting language. As part of the development of this component, a standard representation of the NEP model based on the JSON format has also been defined, and a way of representing and encoding words, required for communication between servers, is proposed. A further important characteristic of this component is that it can be regarded as an isolated application, so the distribution and execution strategies are completely independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results but also for the research process that must be undertaken with this new perspective on executing natural computing systems. The main characteristic of applications running in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two distinct implementation perspectives (developed in two different iterations) of the distribution and execution model, which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as indeterminacy in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow an init-compute-synchronize cycle until the problem has been solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is expensive and requires additional infrastructure; specifically, a RabbitMQ message queue server is required. Nevertheless, for sufficiently large problems the benefits of this perspective outweigh the drawbacks, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered a success: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many avenues remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains.
Other problems beyond the scope of this project remain open to future development, such as the standardization of word representation and optimizations in the execution of the synchronous model. Finally, some preliminary results of this End of Degree Project were recently presented as a scientific paper at the "International Work-Conference on Artificial Neural Networks (IWANN) 2015" and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than an End of Degree Project, is only the beginning of a line of work that may have greater impact on the scientific community. Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells are accepted as surviving (correct) ones which are represented by a word in a given set of words, called the genotype space of the species. This feature is analogous to the natural process of evolution. Formally, NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP. Specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project in short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performing computational platforms, in order to solve some of the high-complexity problems society requires today. Up until now, tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants either locally, like a traditional application, or in a distributed cloud environment. Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period has ended. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as mechanisms to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud. The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and are encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix as a computational framework successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future work. In this EOG, implementation guidelines are proposed for some of them, such as error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization or NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions that were produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
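To illustrate the synchronous init-compute-synchronize cycle described above, here is a minimal, self-contained Java sketch. The Processor class, the toy substitution rules, and the in-process thread pool standing in for RabbitMQ-based synchronization are hypothetical simplifications, not Nepfix's actual API.

```java
// Hedged sketch of a synchronous NEP-style cycle: every node evolves its words,
// then all nodes synchronize and exchange the words that pass their output filters.
import java.util.*;
import java.util.concurrent.*;

public class SyncNepSketch {

    /** One NEP node (illustrative): applies a single substitution rule, then filters words. */
    static final class Processor {
        final Set<String> words = ConcurrentHashMap.newKeySet();
        final char from, to;

        Processor(char from, char to, Collection<String> initial) {
            this.from = from;
            this.to = to;
            words.addAll(initial);
        }

        /** Evolution step: replace the first occurrence of 'from' in every word. */
        Set<String> evolve() {
            Set<String> next = new HashSet<>();
            for (String w : words) {
                int i = w.indexOf(from);
                next.add(i < 0 ? w : w.substring(0, i) + to + w.substring(i + 1));
            }
            return next;
        }

        /** Output filter (toy criterion): a word may leave only when fully rewritten. */
        boolean passesOutputFilter(String w) { return w.indexOf(from) < 0; }
    }

    public static void main(String[] args) throws Exception {
        // Two-node toy network: node 0 rewrites a->b, node 1 rewrites b->c.
        List<Processor> net = Arrays.asList(
                new Processor('a', 'b', Arrays.asList("aba", "aa")),
                new Processor('b', 'c', Collections.<String>emptyList()));

        ExecutorService pool = Executors.newFixedThreadPool(net.size());
        for (int step = 0; step < 4; step++) {
            // Compute phase: every node evolves its words in parallel.
            List<Future<Set<String>>> evolved = new ArrayList<>();
            for (Processor p : net) {
                Callable<Set<String>> task = p::evolve;
                evolved.add(pool.submit(task));
            }

            // Synchronization phase: wait for all nodes, then exchange filtered words.
            List<Set<String>> results = new ArrayList<>();
            for (Future<Set<String>> f : evolved) results.add(f.get());
            for (Processor p : net) p.words.clear();

            for (int i = 0; i < net.size(); i++) {
                Processor p = net.get(i);
                for (String w : results.get(i)) {
                    if (p.passesOutputFilter(w)) {
                        net.get((i + 1) % net.size()).words.add(w);  // send to the peer node
                    } else {
                        p.words.add(w);  // keep words that cannot leave yet
                    }
                }
            }
            System.out.println("step " + step + ": "
                    + net.get(0).words + " | " + net.get(1).words);
        }
        pool.shutdown();
    }
}
```

In a distributed deployment, the barrier implied by collecting all the futures would be replaced by a message-queue round (e.g. via RabbitMQ), which is exactly the extra infrastructure cost the synchronous perspective pays for its fidelity to the theoretical model.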