847 results for Systems-based agents


Relevance:

90.00%

Publisher:

Abstract:

Both obesity and asthma are highly prevalent, complex diseases modified by multiple factors. Genetic, developmental, lung mechanical, immunological and behavioural factors have all been suggested as playing a causal role in the link between the two entities; however, their complex mechanistic interactions are still poorly understood, and evidence of causality in children remains scant. Equally lacking is evidence of effective treatment strategies, despite the fact that imbalances at vulnerable phases in childhood can impact long-term health. This review is targeted both at clinicians frequently faced with the dilemma of how to investigate and treat the obese asthmatic child and at researchers interested in the topic. Highlighting the breadth of the spectrum of factors involved, this review collates evidence regarding the investigation and treatment of asthma in obese children, particularly in comparison with current approaches in 'difficult-to-treat' childhood asthma. Finally, the authors propose hypotheses for future research from a systems-based perspective.

Relevance:

90.00%

Publisher:

Abstract:

E-learning systems output a huge quantity of data on the learning process. However, it takes considerable specialist human effort to process these data manually and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra's algorithm learning using GRAPHs, a visual simulation-based graph algorithm learning environment.
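As a rough illustration of how linguistic rules can turn objective response data into a graded label, the Python sketch below maps a ratio of correct answers onto fuzzy proficiency terms. The membership functions, labels and input value are invented for this example; the GLMP described in the paper builds a full hierarchy of computational perceptions rather than a single rule.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(correct_ratio):
    """Map the ratio of correct responses to a linguistic proficiency label."""
    degrees = {
        "low":    tri(correct_ratio, -0.01, 0.0, 0.5),
        "medium": tri(correct_ratio, 0.25, 0.5, 0.75),
        "high":   tri(correct_ratio, 0.5, 1.0, 1.01),
    }
    label = max(degrees, key=degrees.get)
    return label, degrees

label, degrees = assess(0.8)  # e.g. 80% of Dijkstra exercises answered correctly
print(f"proficiency appears {label} (degree {degrees[label]:.2f})")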

Relevance:

90.00%

Publisher:

Abstract:

The area of human-machine interfaces is growing fast due to its high importance in all technological systems. The basic idea behind designing human-machine interfaces is to enrich communication with the technology in a natural and easy way. Gesture interfaces are a good example of transparent interfaces. Such interfaces must properly identify the action the user wants to perform, so proper gesture recognition is of the highest importance. However, most systems based on gesture recognition use complex methods requiring high-resource devices. In this work, we propose to model gestures by capturing their temporal properties, which significantly reduces storage requirements, and to use clustering techniques, namely self-organizing maps and an unsupervised genetic algorithm, for their classification. We further propose to train a number of algorithms with different parameters and combine their decisions using majority voting in order to decrease the false positive rate. The main advantage of the approach is its simplicity, which enables implementation on devices with limited resources and therefore low cost. The testing results demonstrate its high potential.
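The majority-voting step lends itself to a compact sketch. The classifier outputs and the rejection threshold below are illustrative assumptions, not the paper's configuration; the point is that demanding a clear majority lets the system reject ambiguous gestures instead of emitting false positives.

from collections import Counter

def majority_vote(predictions, min_agreement=0.5):
    """Return the majority label, or None (reject) if agreement is too low."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    if votes / len(predictions) <= min_agreement:
        return None  # no clear majority: reject rather than risk a false positive
    return label

# e.g. five SOM/GA classifiers trained with different parameters voting on one gesture
print(majority_vote(["swipe", "swipe", "circle", "swipe", "none"]))  # -> "swipe"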

Relevance:

90.00%

Publisher:

Abstract:

Recently a new recipe for developing and deploying real-time systems has become increasingly adopted in the JET tokamak. Powered by the advent of x86 multi-core technology and the reliability of JET's well-established Real-Time Data Network (RTDN) for handling all real-time I/O, a vanilla Linux kernel has been shown to provide real-time performance to user-space applications that must meet stringent timing constraints. In particular, a careful rearrangement of Interrupt ReQuest (IRQ) affinities, together with the kernel's CPU isolation mechanism, makes it possible to obtain either soft or hard real-time behavior depending on the synchronization mechanism adopted. Finally, the Multithreaded Application Real-Time executor (MARTe) framework is used for building applications particularly optimised for exploiting multi-core architectures. In the past year, four new systems based on this philosophy have been installed and are now part of JET's routine operation. The focus of the present work is on the configuration and interconnection of the ingredients that enable these new systems' real-time capability, and on the impact that JET's distributed real-time architecture has on system engineering requirements such as algorithm testing and plant commissioning. Details are given about the common real-time configuration and development path of these systems, followed by a brief description of each system together with results regarding their real-time performance. A cycle-time jitter analysis of a user-space MARTe-based application synchronising over a network is also presented, with the goal of comparing its deterministic performance when running on a vanilla kernel and on a Messaging Realtime Grid (MRG) Linux kernel.
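A minimal sketch of the IRQ-affinity rearrangement described above, assuming a 4-core Linux machine booted with isolcpus=3 and root privileges (this is not JET's actual tooling): every movable IRQ is steered away from the CPU reserved for the real-time task through the standard /proc/irq interface.

import os

RT_CPU = 3                             # CPU isolated for the real-time application
mask = hex(~(1 << RT_CPU) & 0xF)[2:]   # allow IRQs on every CPU except RT_CPU ("7")

for irq in os.listdir("/proc/irq"):
    if not irq.isdigit():
        continue                       # skip entries like default_smp_affinity
    try:
        with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
            f.write(mask)              # steer this IRQ off the isolated CPU
    except OSError:
        pass                           # some IRQs (e.g. per-CPU timers) cannot move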

Relevance:

90.00%

Publisher:

Abstract:

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between them. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open-source hypervisor designed as a generic virtualization layer for heterogeneous multicores. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance and automotive domains; the impact of MultiPARTES on these domains is also discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY) mixed-criticality integration is considered in multicore processors. Key challenges are the combination of software virtualization and hardware segregation, and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight, etc.) along with development and certification methodology.
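The strict temporal separation idea can be illustrated with a static cyclic schedule in which each partition owns an exclusive slot of a major frame, so a misbehaving low-criticality partition cannot steal time from a safety-critical one. The partition names and slot lengths below are invented; XtratuM's actual scheduling configuration is more elaborate.

MAJOR_FRAME_MS = 20
SCHEDULE = [                  # (partition, slot length in ms); slots fill the frame
    ("flight_control", 10),   # high criticality
    ("logging", 6),           # low criticality
    ("diagnostics", 4),       # low criticality
]

assert sum(ms for _, ms in SCHEDULE) == MAJOR_FRAME_MS

def partition_at(t_ms):
    """Return which partition owns the CPU at time t_ms."""
    t = t_ms % MAJOR_FRAME_MS
    for name, length in SCHEDULE:
        if t < length:
            return name
        t -= length

print(partition_at(32))  # -> "logging" (32 ms is 12 ms into the second frame)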

Relevance:

90.00%

Publisher:

Abstract:

Minimally invasive surgery is a highly demanding surgical approach in terms of the technical requirements placed on the surgeon, who must be trained in order to perform safe surgical interventions. Traditional surgical education in minimally invasive surgery is commonly based on subjective criteria to quantify and evaluate surgical abilities, which could be potentially unsafe for the patient. Authors, surgeons and associations are increasingly demanding the development of more objective assessment tools that can accredit surgeons as technically competent. This paper describes the state of the art in objective assessment methods of surgical skills. It gives an overview of assessment systems based on structured checklists and rating scales, surgical simulators, and instrument motion analysis. As future work, an objective and automatic assessment method of surgical skills should be standardized as a means towards proficiency-based curricula for training in laparoscopic surgery and its certification.
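As an example of the instrument-motion-analysis family of metrics, the sketch below computes the total path length of the instrument tip, a common proxy for economy of movement (shorter paths generally indicating higher proficiency). The trajectory samples are invented; this is one candidate metric, not the paper's full assessment method.

import math

def path_length(trajectory):
    """Sum of Euclidean distances between consecutive 3-D tip positions."""
    return sum(math.dist(p, q) for p, q in zip(trajectory, trajectory[1:]))

# (x, y, z) tip positions in mm, e.g. sampled from an optical tracker
novice = [(0, 0, 0), (5, 2, 1), (3, 7, 2), (8, 6, 0)]
print(f"path length: {path_length(novice):.1f} mm")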

Relevance:

90.00%

Publisher:

Abstract:

Optical hyperthermia systems based on the laser irradiation of gold nanorods seem to be a promising tool in the development of therapies against cancer. After a proof of concept in which the authors demonstrated the efficiency of this kind of system, a modeling process based on an equivalent thermal-electric circuit was carried out to determine the thermal parameters of the system, together with an energy balance obtained from the time-dependent heating and cooling temperature curves of the irradiated samples, in order to obtain the photothermal transduction efficiency. Knowing this parameter makes it possible to increase the effectiveness of the treatments, thanks to the possibility of predicting the response of the device depending on the working configuration. As an example, the thermal behavior of two different kinds of nanoparticles is compared. The results show that, under identical conditions, the use of PEGylated gold nanorods allows for more efficient heating than bare nanorods, and therefore results in more effective therapy.
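A sketch of how such an energy balance can yield the photothermal transduction efficiency: the cooling branch gives the system's thermal time constant, from which a lumped heat-transfer coefficient and then the efficiency follow. All constants below (sample, laser power, absorbance) are placeholder values, not the paper's measurements.

import numpy as np

def time_constant(t, temp, t_ambient):
    """Fit ln(theta) = -t/tau on the cooling branch to get tau (s)."""
    theta = (temp - t_ambient) / (temp[0] - t_ambient)
    slope = np.polyfit(t, np.log(theta), 1)[0]
    return -1.0 / slope

# placeholder cooling data: 10 K above ambient decaying with tau = 80 s
t = np.linspace(0, 300, 50)                  # s
temp = 25.0 + 10.0 * np.exp(-t / 80.0)       # degC
tau = time_constant(t, temp, 25.0)

m_c = 1.0 * 4.18            # water-like sample: mass (g) * specific heat (J/g/K)
h_s = m_c / tau             # lumped heat-transfer coefficient * area (W/K)
dT_max, P, A = 10.0, 1.0, 0.8                # K, laser power (W), absorbance
eta = h_s * dT_max / (P * (1 - 10 ** -A))    # transduction efficiency
print(f"tau = {tau:.0f} s, eta = {eta:.2f}")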

Relevance:

90.00%

Publisher:

Abstract:

The main question addressed in this thesis is the improvement of automatic speaker recognition systems through a new front-end module that we have called Gender-Dependent Extended Biometric Parameterisation (GDEBP). This front-end does not constitute a complete break with the classical parameterisation techniques used in speaker recognition, but a new way to obtain these parameters while introducing some complementary ones. Specifically, we propose a gender-dependent parameterisation since, as is well known, male and female voices have different characteristics, and the use of different parameters to model these distinguishing characteristics should therefore provide a better characterisation of speakers. Additionally, we propose a new set of biometric parameters extracted from the components that result from the deconstruction of the voice into its glottal source estimate (closely related to the phonation process and the organs involved, and therefore to the physical characteristics of the speaker) and vocal tract estimate (closely related to acoustic articulation and therefore to the spoken message). These biometric parameters complement the classical MFCC extracted from the power spectral density of speech as a whole. To check the validity of this proposal we establish different practical scenarios, using different databases, to verify that GDEBP generates a more accurate description of speakers than classical approaches based on gender-independent MFCC. Specifically, we propose text-constrained and text-independent identification scenarios using the HESPERIA and ALBAYZIN databases. This work is also completed with the participation in two international speaker recognition evaluations, NIST SRE (2010 and 2012) and MOBIO 2013, with diverse results. In the first case, due to the nature of the NIST databases, we obtained results close to the state of the art while confirming our hypothesis, whereas in the MOBIO evaluation our system obtained the best single-system performance for female speakers. Although the study of classification systems is beyond the scope of this thesis, we found it necessary to analyse the performance of different classification systems in order to verify their effect on the proposed parameterisation. In particular, we have addressed speaker recognition systems based on the GMM-UBM paradigm, supervectors and i-vectors. The presented results confirm that selecting a set of parameters that allows for a more accurate description of the speakers is as important as the selection of the classification method used by the biometric system. In this sense, the proposed parameterisation constitutes a step forward in improving speaker recognition systems, since really competitive recognition rates are achieved even with relatively simple classification systems.
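For readers unfamiliar with the GMM-UBM paradigm mentioned above, here is an illustrative verification sketch (not the thesis code): a universal background model is trained on pooled features, a speaker model on enrolment data, and the trial score is a log-likelihood ratio. Random arrays stand in for MFCC (or GDEBP) frames, and the speaker model is trained directly rather than MAP-adapted from the UBM as a production system would do.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(size=(2000, 13))         # pooled MFCC-like frames
enrolment = rng.normal(loc=0.3, size=(300, 13))  # target speaker frames
test = rng.normal(loc=0.3, size=(200, 13))       # trial utterance frames

ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)
spk = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(enrolment)

llr = spk.score(test) - ubm.score(test)  # mean per-frame log-likelihood ratio
print(f"LLR = {llr:.3f} (higher favours the target speaker)")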

Relevance:

90.00%

Publisher:

Abstract:

The importance of recommender systems has grown exponentially with the advent of social networks. In this PhD thesis I will provide a wide vision of the state of the art of recommender systems. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems incorporate some social information into the recommendation process. In the future, they will use implicit, local and personal information from the Internet of Things. As we will see here, recommender systems based on collaborative filtering can be modified to make recommendations to groups of users. Previous works have made this modification in different stages of the collaborative filtering algorithm: establishing the neighborhood, the prediction phase, and the determination of recommended items. In this PhD thesis I will provide a new method that carries out the unification process (many users to one group) in the first stage of the collaborative filtering algorithm: the similarity metric computation. I will provide a full formalization of the proposed method, explain how to obtain the k nearest neighbors of the group of users, and show how to get recommendations using those neighbors. I will also include a running example of a recommender system with 8 users and 10 items detailing all the steps of the method. The main highlights of the proposed method are: (a) it is faster (more efficient) than the alternatives provided by other authors, and (b) it is at least as precise and accurate as the other solutions studied. To check this hypothesis I will conduct several experiments measuring the accuracy, the precision and the performance of the method, and compare the results with those generated by other group recommendation methods. The experiments will be carried out using the MovieLens and Netflix datasets.
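A hedged sketch of the unification idea: merge the group's ratings into a single pseudo-user profile before the similarity step, then find that profile's k nearest neighbors. Averaging co-rated items is one plausible unification; the thesis provides its own metric-level formalization, and the rating matrix below is invented.

import numpy as np

ratings = np.array([          # users x items, 0 = unrated
    [5, 3, 0, 1, 4],
    [4, 0, 0, 1, 5],
    [1, 1, 5, 4, 0],
    [0, 1, 5, 4, 2],
])

def group_profile(user_ids):
    """Average the group's known ratings item by item (0 where nobody rated)."""
    sub = ratings[user_ids].astype(float)
    mask = sub > 0
    return np.where(mask.any(0), sub.sum(0) / np.maximum(mask.sum(0), 1), 0.0)

def k_neighbors(profile, k=2):
    """Rank all users by cosine similarity to the group profile."""
    sims = ratings @ profile / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(profile) + 1e-12)
    return np.argsort(sims)[::-1][:k]

print(k_neighbors(group_profile([0, 1])))  # nearest neighbors of group {user0, user1}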

Relevance:

90.00%

Publisher:

Abstract:

The aim of this paper is to review the literature on voting systems based on Condorcet and Borda. We compare and classify them, and discuss some strengths and weaknesses of voting systems. Finally, in a case study, we use the Borda voting system for collective decision making in the Salonga National Park in the Democratic Republic of Congo.
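A Borda count is simple enough to state in a few lines: with n alternatives, each ballot awards n - 1 points to its first choice down to 0 for its last, and the highest total wins. The ballots about park-management options below are invented for illustration.

from collections import defaultdict

def borda(ballots):
    """ballots: list of rankings, best first. Returns total points per option."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - position - 1
    return dict(scores)

ballots = [
    ["ecotourism", "patrols", "buffer zones"],
    ["patrols", "buffer zones", "ecotourism"],
    ["patrols", "ecotourism", "buffer zones"],
]
scores = borda(ballots)
print(max(scores, key=scores.get))  # -> "patrols" (5 points vs 3 and 1)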

Relevance:

90.00%

Publisher:

Abstract:

Configuration tools based on high-level programming languages like LabVIEW allow highly complex data acquisition systems based on reconfigurable FPGA hardware to be developed in a short period of time. The standardization of the hardware/software design cycle and the use of tools like EPICS ease the integration with the data acquisition and control platform of ITER, the Linux-based CODAC Core System (CCS). In this project a methodology is proposed to simplify the full integration cycle of new platforms like CompactRIO (cRIO), in which the data acquisition functionality can be reconfigured by the user to fit specific requirements. The main objective of this MSc final project is to integrate a cRIO NI-9159 system and its different analog and digital input/output modules with EPICS in a CCS. The CCS consists of a set of software tools that simplify the integration of instrumentation and control systems in the ITER experiment. To achieve this goal the following tasks are carried out: • Development of a DAQ system based on FPGA using the cRIO hardware platform. This task comprises the configuration of the system and the implementation, in LabVIEW for FPGA, of the hardware needed to communicate with the I/O modules NI9205, NI9264, NI9401, NI9477, NI9426, NI9425 and NI9476. • Implementation of a software driver using the asynDriver methodology to integrate the cRIO system with EPICS. This task requires defining all the necessary EPICS records and creating the appropriate interfaces that allow communication with the hardware. • Description of the cRIO system and the EPICS driver in the ITER plant description tool, SDD, which automates the creation of the EPICS applications known as IOCs.
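As a client-side illustration of interacting with an IOC like the one produced here, the sketch below uses the pyepics Channel Access bindings; the PV names are invented placeholders, since the real record names would come from the SDD-generated IOC database.

from epics import caget, caput

caput("CRIO:AO0", 2.5)        # drive one analog output (e.g. on the NI9264) to 2.5 V
voltage = caget("CRIO:AI0")   # read back an analog input (e.g. on the NI9205)
print(f"AI0 = {voltage:.3f} V")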

Relevance:

90.00%

Publisher:

Abstract:

One of the most demanding needs in cloud computing and big data is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed in the last decade, which increase both the availability and the scalability of databases. Many replication protocols have been proposed during that time; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational systems: Middle-R, C-JDBC and MySQL Cluster, and three systems based on in-memory data grids: JBoss Data Grid, Oracle Coherence and Terracotta Ehcache. The thesis explores these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities. It also provides an experimental analysis of these systems using state-of-the-art benchmarks: TPC-C and TPC-W (for the relational systems) and the Yahoo! Cloud Serving Benchmark (for the in-memory data grids). The thesis also discusses three graph databases, Neo4j, Titan and Sparksee, in terms of their architecture and transactional capabilities, and highlights the weaker transactional consistency guarantees provided by these systems. It then discusses an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
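The snapshot isolation mentioned at the end can be illustrated with a toy multiversion store: reads see only versions committed at or before the transaction's snapshot, and commits abort on write-write conflicts (first-committer-wins). The actual Neo4j integration in the thesis is far more involved.

class MVStore:
    def __init__(self):
        self.versions = {}      # key -> list of (commit_ts, value)
        self.ts = 0

    def begin(self):
        return {"start": self.ts, "writes": {}}

    def read(self, txn, key):
        # visible version = latest one committed at or before the snapshot
        for commit_ts, value in reversed(self.versions.get(key, [])):
            if commit_ts <= txn["start"]:
                return value
        return None

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # first-committer-wins: abort on a write committed after our snapshot
        for key in txn["writes"]:
            versions = self.versions.get(key, [])
            if versions and versions[-1][0] > txn["start"]:
                raise RuntimeError(f"write-write conflict on {key!r}")
        self.ts += 1
        for key, value in txn["writes"].items():
            self.versions.setdefault(key, []).append((self.ts, value))

store = MVStore()
t1 = store.begin(); store.write(t1, "x", 1); store.commit(t1)
t2 = store.begin()                 # snapshot taken before t3 commits
t3 = store.begin(); store.write(t3, "x", 2); store.commit(t3)
print(store.read(t2, "x"))         # -> 1, t2 still sees its own snapshot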

Relevance:

90.00%

Publisher:

Abstract:

One of the challenges facing chemistry is the design of molecules able to modulate protein-protein and protein-ligand interactions, since these are involved in many physiological and pathological processes. The interactions occurring between proteins and their natural counterparts can take place through reciprocal recognition of rather large surface areas, through recognition of single contact points and single residues, or through inclusion of the substrates in specific, more or less deep binding sites. In many cases, the design of synthetic molecules able to interfere with the processes involving proteins can benefit from the possibility of exploiting the multivalent effect. Multivalency, widespread in Nature, consists in the simultaneous formation between two entities (cell-cell, cell-protein, protein-protein) of multiple equivalent ligand-recognition site complexes. In this way the whole interaction becomes particularly strong and specific. Calixarenes furnish a very interesting scaffold for the preparation of multivalent ligands, and in recent years calixarene-based ligands have demonstrated a remarkable capability to recognize and inhibit or restore the activity of different proteins, with high efficiency and selectivity in several recognition phenomena. The relevance and versatility of these ligands is due to the different exposure geometries of the binding units that can be explored by exploiting the conformational properties of these macrocycles, the wide variety of functionalities that can be linked to their structure at different distances from the aromatic units, and their intrinsic multivalent nature. With the aim of creating new multivalent systems for protein targeting, the work reported in this thesis concerns the synthesis and properties of glycocalix[n]arenes and guanidino calix[4]arenes for different purposes. Firstly, a new bolaamphiphile glycocalix[4]arene in 1,3-alternate geometry, bearing cellobiose, was synthesized for the preparation of targeted drug delivery systems based on liposomes. The stable mixed liposomes obtained by mixing the macrocycle with DOPC were shown to be able to exploit the sugar units emerging from the lipid bilayer to agglutinate Concanavalin A, a lectin specific for glucose. Moreover, again thanks to the presence of the glycocalixarene in the layer, preliminary experiments showed that the same liposomes are taken up by cancer cells overexpressing glucose receptors on their exterior surface more efficiently than simple DOPC liposomes lacking glucose units in their structure. Then a small library of glycocalix[n]arenes of different valency and geometry was prepared for the creation of potentially active immunostimulants against Streptococcus pneumoniae, particularly the 19F serotype, one of the most virulent. These glycocalixarenes, bearing β-N-acetylmannosamine as the antigenic unit, were compared with the natural polysaccharide in terms of binding to the specific anti-19F human polyclonal antibody, to verify their inhibition potency. Among all of them, the glycocalixarene based on the conformationally mobile calix[4]arene proved to be the most efficient ligand, probably owing to its greater ability to explore the antibody surface and arrange the antigenic units appropriately for the interaction process. These results point out the importance of how the different multivalent presentation in space of the glycosyl units can influence recognition phenomena. Finally, NMR studies, in particular 1H-15N HSQC experiments, were performed on selected glycocalix[6]arenes and guanidino calix[4]arenes blocked in the cone geometry, in order to better understand protein-ligand interactions. The glycosylated compounds were studied with the Ralstonia solanacearum lectin, to probe the nature of the carbohydrate-lectin interactions in solution. The series of cationic calix[4]arenes was employed with three different acidic proteins: GB1, Fld and alpha-synuclein. In particular, GB1 and Fld were observed to interact with all five cationic calix[4]arenes, but with different behaviours and affinities.

Relevance:

90.00%

Publisher:

Abstract:

A detector based on doped silica and optical fibers was developed to monitor the profile of particle accelerator beams of intensities ranging from 1 pA to tens of µA. Scintillation light produced in a fiber moving across the beam is measured, giving information on its position, shape and intensity. The detector was tested with a continuous proton beam at the 18 MeV Bern medical cyclotron used for radioisotope production and multi-disciplinary research. For currents from 1 pA to 20 µA, Ce3+ and Sb3+ doped silica fibers were used as sensors, with readout systems based on photodiodes, photomultipliers and solid-state photomultipliers. Profiles down to the pA range were measured with this method for the first time. For currents ranging from 1 pA to 3 µA, the integral of the profile was found to be linear with respect to the beam current, which can be measured by this detector with an accuracy of ∼1%; the profile was determined with a spatial resolution of 0.25 mm. For currents ranging from 5 µA to 20 µA, thermal effects alter light yield and transmission, causing distortions of the profile and limiting the monitoring capability. For currents higher than ∼1 µA, non-doped optical fibers, for both producing and transporting scintillation light, were also successfully employed.
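The reported linearity between profile integral and beam current suggests a straightforward current estimate: integrate the measured light profile and apply a calibration constant. The Gaussian profile and the calibration factor below are invented for illustration.

import numpy as np

x = np.arange(-10, 10, 0.25)                  # fiber positions (mm), 0.25 mm steps
profile = 3.0 * np.exp(-x**2 / (2 * 2.0**2))  # light yield vs position (a.u.)

integral = np.trapz(profile, x)               # a.u. * mm
CAL = 0.08                                    # nA per (a.u. * mm), from calibration
print(f"estimated beam current: {CAL * integral:.2f} nA")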

Relevance:

90.00%

Publisher:

Abstract:

Rolls-Royce Fuel Cell Systems is developing megawatt-scale power systems based on solid oxide fuel cell technology. The hybrid design promises to meet challenging energy efficiency, cost and performance targets in a grid-friendly fashion. Analysis and testing to date indicate that those targets can be met, enabling a wealth of fuel cell applications that satisfy customer requirements as well as those of both existing and modern grids. Working with a global development team, a series of laboratory tests and evaluations has been completed, and future field testing, evaluation and demonstration are planned.