64 results for Multi Domain Information Model


Relevance: 40.00%

Abstract:

Globalization has intensified competition, as evidenced by the growing number of international classification systems (rankings) and the attention paid to them. Doctoral education has an international character in itself and should give graduate students opportunities to participate in these international studies. Quality and competitiveness are two of the most important issues for universities. To encourage graduates to continue their education beyond the graduate level, the published information on doctoral programs needs to be improved: visibility should be increased, and high-quality, easily accessible and comparable information covering all the relevant aspects of these programs should be provided. The authors analysed the website contents of doctoral programs and observed a general lack of quality and very poor information about their contents, so that none of them could serve as a model for creating new websites. Recommendations on web format and contents were made by a discussion group. They recommended an attractive design; a page with easy access to contents, easy to find on the net, and with the information in more than one language. It should include complete information on the program and the academic staff. It should also include the programs' results, easily accessible and including quantitative data such as the number of students who complete their studies, scholarships, publications, research projects, the average duration of the studies, etc. This will facilitate the choice of program.

Relevance: 40.00%

Abstract:

We establish an axiomatic model of multi-measures, capturing several classes of measures studied in the fuzzy sets literature, where they have so far been applied to only one or two arguments.

Relevance: 40.00%

Abstract:

The authors, from UPM, have all been involved in different academic or real cases on the subject, at different times and career stages. Building on the precedent of the probabilistic safety models for concrete developed by E. Torroja and A. Páez in Madrid, Spain, around 1957 (a line of work now continued in the ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in European steel construction within CECM, produced a mathematical model for reductions under the superposition of independent loads and, using it, a load-coefficient pattern for codes presented in Rome in February 1969, which was practically adopted for European construction; in JCSS Lisbon, February 1974, he suggested unifying it for concrete, steel and aluminium. That model represents each load with a Gumbel Type I distribution over a 50-year reference period, reduces it to 1 year so that it can be added to other independent loads, and sets the sum back, within Gumbel theory, to a 50-year return period; parallel models exist. A complete reliability system was produced, including non-linear effects such as buckling, phenomena considered to some extent in the current Construction Eurocodes derived from the Model Codes. The system was also considered by the author in CEB in the presence of hydraulic effects from rivers, floods and the sea, with reference to actual practice. When drafting a Road Drainage Norm for MOPU Spain, the authors developed an optimization model giving a way to determine the return period, 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in SE Spain modelled with a Gumbel Type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II distribution; the model can be modernized with more varied extreme-value laws. In the MOPU drainage norm the drafting commission also acted as an expert panel to set a table of return periods for the elements of road drainage, in effect a complex multi-criteria decision system. These precedent ideas were used, e.g., in widely applied codes and presented at symposia or meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, as a selection of the criteria and utility functions involved in Bayesian, multi-criteria or mixed decision systems. Modest consideration is also given to climate change, to production and commercial systems, and to other social and financial aspects.
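
As an illustration of the load model described above, the following sketch shows how a Gumbel Type I annual-maximum model can be moved between reference periods and how a return-period quantile is computed; the parameter values are assumptions chosen for illustration, not figures from the paper.

```python
import math

def gumbel_quantile(mu, beta, T):
    """Value with return period T (years) for a Gumbel Type I annual-maximum model:
    exceeded on average once every T years."""
    p_non_exceed = 1.0 - 1.0 / T
    return mu - beta * math.log(-math.log(p_non_exceed))

def change_reference_period(mu, beta, years):
    """Location parameter of the maximum over `years` independent years.
    The maximum of n i.i.d. Gumbel(mu, beta) variables is Gumbel(mu + beta*ln(n), beta)."""
    return mu + beta * math.log(years), beta

# Illustrative (assumed) annual-maximum load parameters, arbitrary units.
mu_1, beta = 100.0, 12.0

# Shift the 1-year model to a 50-year reference period, then reduce it back to 1 year.
mu_50, _ = change_reference_period(mu_1, beta, 50)
mu_back = mu_50 - beta * math.log(50)          # recovers mu_1

print(f"50-year design value (T=50): {gumbel_quantile(mu_1, beta, 50):.1f}")
print(f"1-year model shifted to a 50-year reference period: mu = {mu_50:.1f}")
print(f"Reduced back to 1 year: mu = {mu_back:.1f}")
```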

Relevance: 40.00%

Abstract:

In recent decades, there has been an increasing interest in systems comprising several autonomous mobile robots and, as a result, there has been a substantial amount of development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally depending on the load, that is, the number of pending tasks to be performed. In addition, the results of each approach are evaluated by introducing noise into the number of pending loads, in order to simulate the robots' error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are:

Threshold models: the experiments conducted to test the response threshold model, with the objective of analysing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise is introduced into the number of pending loads and dynamic tasks are generated over time (a simplified sketch of this rule is given after this list).

Learning automata methods: the experiments conducted to test the learning automata-based probabilistic algorithms. The approach was tested by evaluating the system performance index with additive noise and with dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.

Ant colony optimization: the goal of the experiments presented is to test the ant colony optimization-based deterministic algorithms for the distribution of heterogeneous multi-tasks in multi-robot systems. In the experiments performed, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
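
The response threshold rule underlying the first approach can be illustrated with a minimal sketch; the task names, initial stimuli, thresholds and learning rates below are illustrative assumptions, not values taken from the thesis.

```python
import random

def engagement_probability(stimulus, threshold, n=2):
    """Classical response-threshold rule: the probability that a robot starts a task
    grows with the task stimulus and falls with the robot's own threshold."""
    return stimulus**n / (stimulus**n + threshold**n)

def select_task(stimuli, thresholds):
    """Each robot decides locally and independently which pending task (if any) to take."""
    for task, s in stimuli.items():
        if random.random() < engagement_probability(s, thresholds[task]):
            return task
    return None  # stay idle this step

def update_thresholds(thresholds, performed, learn=0.1, forget=0.05):
    """Adaptive thresholds: specialise in the task actually performed, forget the others."""
    for task in thresholds:
        if task == performed:
            thresholds[task] = max(0.01, thresholds[task] - learn)
        else:
            thresholds[task] = min(1.0, thresholds[task] + forget)

# Illustrative run with two heterogeneous task types and assumed initial values.
stimuli = {"transport": 0.6, "cleaning": 0.3}      # grows with the pending load
thresholds = {"transport": 0.5, "cleaning": 0.5}
for step in range(5):
    chosen = select_task(stimuli, thresholds)
    update_thresholds(thresholds, chosen)
    print(step, chosen, {k: round(v, 2) for k, v in thresholds.items()})
```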

Relevance: 40.00%

Abstract:

The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated using models with different degrees of detail. In particular, we have studied this matter with models ranging from a two-dimensional (2D) finite element model to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions and are taken into account in the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem arising from the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
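
A hedged sketch of how distributed elevation irregularities can be synthesized from a PSD with the spectral representation (sum of cosines) method is given below; the PSD shape and its constants are assumptions for illustration, not the ones used in the paper.

```python
import numpy as np

def irregularity_profile(x, psd, k_min=0.01, k_max=5.0, n_waves=200, seed=0):
    """Synthesize a random vertical track irregularity profile from a one-sided PSD
    S(k) using the spectral representation (sum of cosines with random phases)."""
    rng = np.random.default_rng(seed)
    k = np.linspace(k_min, k_max, n_waves)          # spatial frequencies [rad/m]
    dk = k[1] - k[0]
    amp = np.sqrt(2.0 * psd(k) * dk)                # amplitude of each harmonic
    phase = rng.uniform(0.0, 2.0 * np.pi, n_waves)  # independent random phases
    return (amp[None, :] * np.cos(np.outer(x, k) + phase[None, :])).sum(axis=1)

def sample_psd(k, A=1e-6, kc=0.8):
    """Illustrative (assumed) smooth PSD shape, decaying with spatial frequency."""
    return A / (k**2 + kc**2)

x = np.linspace(0.0, 500.0, 5001)                   # track coordinate [m]
z = irregularity_profile(x, sample_psd)
print(f"std of generated irregularity: {z.std() * 1000:.2f} mm")
```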

Relevance: 40.00%

Abstract:

Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models do not allow representing all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, that is, the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).
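
As a rough illustration of the kind of statements such an ontological model has to capture, the following sketch encodes a domain feature of interest, an observed property and one observation using rdflib; the W3C SOSA/SSN vocabulary is used here only as a stand-in for the paper's own core ontology, and the coastal-flood URIs are hypothetical.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SOSA = Namespace("http://www.w3.org/ns/sosa/")          # stand-in core vocabulary
FLOOD = Namespace("http://example.org/coastal-flood#")  # hypothetical domain module

g = Graph()
g.bind("sosa", SOSA)
g.bind("flood", FLOOD)

# Domain module: a feature of interest and the property observed on it.
g.add((FLOOD.MalagaCoastline, RDF.type, SOSA.FeatureOfInterest))
g.add((FLOOD.SeaLevel, RDF.type, SOSA.ObservableProperty))

# Infrastructure module: a web-accessible sensor and one of its observations.
g.add((FLOOD.TideGauge01, RDF.type, SOSA.Sensor))
g.add((FLOOD.TideGauge01, SOSA.observes, FLOOD.SeaLevel))
g.add((FLOOD.Obs42, RDF.type, SOSA.Observation))
g.add((FLOOD.Obs42, SOSA.madeBySensor, FLOOD.TideGauge01))
g.add((FLOOD.Obs42, SOSA.observedProperty, FLOOD.SeaLevel))
g.add((FLOOD.Obs42, SOSA.hasFeatureOfInterest, FLOOD.MalagaCoastline))
g.add((FLOOD.Obs42, SOSA.hasSimpleResult, Literal(1.87)))

print(g.serialize(format="turtle"))
```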

Relevance: 40.00%

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
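
A minimal sketch of one EDA iteration with ℓ1-regularized (graphical lasso) Gaussian model estimation is shown below; the objective function, regularization strength and population-size constants are illustrative assumptions, not the exact settings of the thesis.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def sphere(x):
    """Illustrative objective to be minimised (not one of the thesis benchmarks)."""
    return np.sum(x**2, axis=1)

def regularized_eda_step(pop, objective, selection_ratio=0.3, reg=0.1, seed=0):
    """One EDA iteration: truncation selection, l1-regularised (graphical lasso)
    estimation of a sparse Gaussian model, and sampling of the next population."""
    rng = np.random.default_rng(seed)
    n_sel = max(2, int(selection_ratio * len(pop)))
    selected = pop[np.argsort(objective(pop))[:n_sel]]   # best individuals
    model = GraphicalLasso(alpha=reg).fit(selected)      # sparse precision/covariance
    return rng.multivariate_normal(selected.mean(axis=0),
                                   model.covariance_, size=len(pop))

# Population size growing only logarithmically with the number of variables,
# mirroring the high-dimensional setting of the thesis (constants are assumptions).
n_vars = 50
pop_size = int(30 * np.log(n_vars))
pop = np.random.default_rng(1).normal(size=(pop_size, n_vars))
for it in range(10):
    pop = regularized_eda_step(pop, sphere, seed=it)
print("best value after 10 iterations:", sphere(pop).min())
```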

Relevance: 40.00%

Abstract:

First, this paper describes a future layered Air Traffic Management (ATM) system centred on the execution phase of flights. The layered ATM model is based on the work currently performed by SESAR [1] and takes into account the availability of accurate and updated flight information "seen by all" across the European airspace. This shared information on each flight will be referred to as the Reference Business Trajectory (RBT). In the layered ATM system, exchanges of information will involve several actors (human or automatic), which will have varying time horizons, areas of responsibility and tasks. Second, the paper identifies the need to define the negotiation processes required to agree on revisions to the RBT in the layered ATM system. Third, the final objective of the paper is to bring to the attention of researchers and engineers the commonalities between multi-player games and Collaborative Decision Making (CDM) processes in a layered ATM system.

Relevance: 40.00%

Abstract:

Based on a previously reported logic cell structure (see SPIE, vol. 2038, p. 67-77, 1993), the two types of cells present at the inner and ganglion cell layers of the vertebrate retina, their intracellular response, and their connections with each other have been simulated. These cells are amacrine and ganglion cells. The main scheme of the authors' configuration is shown in a figure. These two types of cells, as well as some of their possible interconnections, have been implemented with the authors' previously reported optical-processing element. As has been shown, the authors' logic structure is able to process two optical input binary signals, the outputs being two logical functions. Moreover, if a delayed feedback from one of the two possible outputs to one or both of the inputs is introduced, a very different behaviour is obtained. Depending on the value of the time delay, an oscillatory output can be obtained from a constant optical input signal. The period and the length of the pulses depend on the delay values, both external and internal, as well as on other control signals. Moreover, chaotic behaviour can also be obtained under certain conditions.
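
The effect of delayed feedback can be reproduced with a minimal discrete-time sketch; the XOR combination used here is only one illustrative choice of the cell's two programmable output functions, not the authors' exact configuration.

```python
def simulate(delay, steps=24, input_bit=1):
    """Minimal discrete-time sketch: one output of the logic cell (here an XOR of its
    input with its own delayed output) is fed back to an input after `delay` steps."""
    history = [0] * delay           # output values still travelling along the feedback loop
    outputs = []
    for _ in range(steps):
        fed_back = history[-delay]
        out = input_bit ^ fed_back  # constant optical input combined with delayed feedback
        outputs.append(out)
        history.append(out)
    return outputs

# A constant input produces a periodic (oscillatory) output whose period is set by the delay.
for d in (1, 2, 3):
    print(f"delay={d}:", simulate(d))
```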

Relevance: 40.00%

Abstract:

In the past few years IT outsourcing has gained considerable importance in the market and, for example, the IT services outsourcing market keeps growing every year. Now more than ever, organizations increasingly acquire the capabilities they need by obtaining products and services from suppliers, developing fewer and fewer of these capabilities in-house. IT supplier selection is a complex and opaque decision problem. Managers facing a decision about IT supplier selection have difficulty framing what needs to be thought about. Moreover, according to a study from the SEI (Software Engineering Institute) [40], 20 to 25 percent of large information technology (IT) acquisition projects fail within two years and 50 percent fail within five years. Mismanagement, poor requirements definition, lack of comprehensive evaluations that could be used to identify the best candidates for outsourcing, inadequate supplier selection and contracting processes, insufficient technology selection procedures, and uncontrolled requirements changes are factors that contribute to project failure. The majority of these failures could be avoided if the acquirer learned to understand the decision problems, perform better decision analysis, and exercise good judgment. The main objective of this work is the development of a decision model for IT supplier selection that aims to decrease the number of failures observed in client-supplier relationships, most of which are caused by a poor selection of the supplier. Besides the problems described above, the motivation for this work is the absence of any decision model based on a multi-model (a mixture of acquisition models and decision methods) for the problem of IT supplier selection. In the case study, nine different Spanish companies were analyzed using the IT supplier selection decision model developed in this work. Two software products were used in this case study: Expert Choice and D-Sight.
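
As a rough illustration of multi-criteria supplier scoring, the sketch below uses a simple additive weighting scheme standing in for the richer methods supported by tools such as Expert Choice and D-Sight; the criteria, weights and scores are invented for illustration and are not taken from the case study.

```python
# Hypothetical criteria weights (summing to 1) and supplier scores on a 1-5 scale.
weights = {"cost": 0.30, "technical_capability": 0.35, "maturity": 0.20, "support": 0.15}

suppliers = {
    "Supplier A": {"cost": 4, "technical_capability": 3, "maturity": 5, "support": 4},
    "Supplier B": {"cost": 3, "technical_capability": 5, "maturity": 4, "support": 3},
    "Supplier C": {"cost": 5, "technical_capability": 2, "maturity": 3, "support": 4},
}

def weighted_score(scores, weights):
    """Simple additive weighting: sum of criterion scores multiplied by criterion weights."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(suppliers, key=lambda s: weighted_score(suppliers[s], weights), reverse=True)
for s in ranking:
    print(f"{s}: {weighted_score(suppliers[s], weights):.2f}")
```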

Relevance: 40.00%

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires the use of adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data and, secondly, classify the new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized into uni-dimensional or multi-dimensional depending on the number of class variables, one or more, respectively; or it can also be categorized into stationary or streaming depending on the characteristics of the data and the underlying rate of change. Throughout this thesis, we deal with the classification problem under three different settings, namely, supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we mainly use Bayesian network classifiers as models.

The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second one, named MB-MBC, is a filter constraint-based approach based on Markov blankets. Both methods are applied to two important real-world problems, namely, the prediction of the human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against commonly used methods for solving the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, the results are promising in terms of classification accuracy as well as regarding the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables.

The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely changing and evolving over time, which makes the current classification model outdated and in need of updating. CPL-DS uses the Kullback-Leibler divergence and a bootstrapping method to quantify and detect three possible kinds of drift: feature, conditional or dual. Then, if any occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, as it can be applied to several classification models. Using two different models, namely, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams and also achieves good classification performance.

Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. Then, if a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
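
The Page-Hinkley monitoring step mentioned for LA-MB-MBC and GA-MB-MBC can be sketched as follows; the thresholds and the simulated score stream are illustrative assumptions, not the settings used in the thesis.

```python
import random

class PageHinkley:
    """Minimal sketch of the Page-Hinkley change-detection test: it monitors a stream
    of scores (e.g., per-batch negative average log-likelihood) and raises an alarm when
    their cumulative deviation from the running mean exceeds a threshold `lam`."""
    def __init__(self, delta=0.005, lam=5.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n          # running mean of the score
        self.cum += x - self.mean - self.delta         # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.lam    # True => drift detected

# Illustrative stream: the model's log-likelihood drops after sample 100 (simulated drift).
random.seed(0)
ph = PageHinkley()
for t in range(200):
    loglik = random.gauss(-2.0 if t < 100 else -6.0, 0.5)
    if ph.update(-loglik):   # monitor the increase of the negative log-likelihood
        print("drift detected at t =", t)
        break
```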

Relevance: 40.00%

Abstract:

Background. Europe is living in an unsustainable situation. The economic crisis has been reducing governments' economic resources since 2008 and threatening social and health systems, while the proportion of older people in the European population continues to increase, so that it is foreseen that in 2050 there will be only two workers per retiree [54]. To this should be added the rise, strongly related to age, of chronic diseases, whose burden has been estimated at up to 7% of a country's gross domestic product [51]. There is a need for a paradigm shift, a new way of caring for people's health, shifting the focus from curing conditions that have arisen to a sustainable and effective approach with the emphasis on prevention. Some advocate the adoption of personalised health care (pHealth), a model where medical practices are tailored to the patient's unique life, from the detection of risk factors to the customization of treatments based on each individual's response [81]. Personalised health is often associated with the use of Information and Communications Technology (ICT), which, with its exponential development, offers interesting opportunities for improving healthcare. The shift towards pHealth is slowly taking place, both in research and in industry, but the change is not yet significant. Many barriers still exist related to economy, politics and culture, while others are purely technological, like the lack of interoperable information systems [199]. Though interoperability aspects are evolving, there is still a need for a reference design, especially one tackling implementation and large-scale deployment of pHealth systems. This thesis contributes to organizing the subject of ICT systems for personalised health into a reference model that allows for the creation of software development platforms to ease common development issues in the domain.

Research questions. RQ1: Is it possible to define a model, based on software engineering techniques, for representing the personalised health domain in an abstract and representative way? RQ2: Is it possible to build a development platform based on this model? RQ3: Does the development platform help developers create complex integrated pHealth systems?

Methods. As the method for describing the model, the ISO/IEC/IEEE 42010 framework [25] was adopted for its generality and high level of abstraction. The model is specified in different parts: a conceptual model, which makes use of concept maps for representing stakeholders, artefacts and shared information, and scenarios and use cases for representing the functionalities of pHealth systems. The model was derived from literature analysis, including 7 industrial and scientific reports, 9 electronic standards, 10 conference proceedings papers, 37 journal papers, 25 websites and 5 books. Based on the reference model, requirements were drawn up for building the development platform, enriched with a set of requirements gathered in a survey of 11 experienced engineers. For developing the platform, the continuous integration methodology [74] was adopted, which made it possible to run automatic tests on a server and to deploy packaged releases on a web site. As a validation methodology, a theory-building framework for software engineering was adopted from [181]. The framework, chosen as a guide to finding evidence for justifying the research questions, imposed the creation of theories based on models and propositions to be validated within a defined scope. The validation of the model was conducted as an online survey in three rounds, encompassing a growing number of participants. The survey was sent to 134 experts in the field and distributed on public channels such as relevant mailing lists and social networks. Its objective was to assess the model's readability, its level of coverage of the domain and its potential usefulness in the design of actual, derived systems. The questionnaires included quantitative Likert-scale questions and free-text inputs for comments. The development platform was validated in two stages. As a small-scale experiment, the platform was used in a 12-hour training session in which 4 developers had to perform an exercise consisting of developing a set of typical pHealth use cases. At the end of the session, a focus group was held to identify benefits and drawbacks of the platform. The second validation was held as a test-case study in a large-scale research project called HeartCycle, the aim of which was to develop a closed-loop disease management system for heart failure and coronary heart disease patients [160]. During this project three applications were developed by a team of programmers and designers. One of these applications was tested in a clinical trial with actual patients. At the end of the project, the team was interviewed in a focus group to assess the role the platform had within the project.

Results. Regarding the model that describes the pHealth domain, its conceptual part includes a description of the main roles and concerns of pHealth stakeholders, a model of the ICT artefacts that are commonly adopted and a model representing the typical data that need to be formalized among pHealth systems. The functional model includes a set of 18 scenarios, divided into the assisted person's view, the caregiver's view, the developer's view, the technology and services providers' view and the authority's view, and a set of 52 use cases grouped into 6 categories: assisted person's activities, system reactions, caregiver's activities, user engagement, developer's activities and deployer's activities. Concerning the validation of the model, a total of 65 people participated in the online survey, providing their level of agreement on all the assessed dimensions and a total of 248 comments on how to improve and complete the model. Participants' backgrounds spanned from engineering and software development (70%) to medical specialities (15%), with declared interest in the fields of eHealth (24%), mHealth (16%), Ambient Assisted Living (21%), Personalized Medicine (5%), Personal Health Systems (15%), Medical Informatics (10%) and Biomedical Engineering (8%), and with an average of 7.25±4.99 years of experience in these fields. From the analysis of the answers, the experts contacted considered the model easily readable (average of 1.89±0.79, where 1 is the most favourable score and 5 the worst), sufficiently abstract (1.99±0.88) and formal (2.13±0.77) for its purpose, with sufficient coverage of the domain (2.26±0.95), useful for describing the domain (2.02±0.70) and for generating more specific systems (2.00±0.75); they reported a partial interest in using the model in their job (2.48±0.91). Thanks to their comments, the model was improved and enriched with concepts that were missing at the beginning; nonetheless, it was not possible to prove an improvement across the iterations, due to the different composition of participants in the three rounds.

From the model, a development platform for the pHealth domain was generated, called the pHealth Patient Platform (pHPP). The platform includes a set of libraries, programming and deployment tools, a tutorial and a sample application. The four main modules of the architecture are: the Data Collection Engine, which abstracts sources of information such as sensors or external services, maps data to databases and ontologies, and allows event-based interaction and filtering; the GUI Engine, which abstracts the user interface into a message-like interaction model; the Workflow Engine, which allows the application's user interaction flows to be programmed with graphical workflows; and the Rule Engine, which gives developers a simple means of programming the application's logic in the form of "if-then" rules. After the 5-year experience of HeartCycle, partially programmed with pHPP, 5 developers were brought together in a focus group to discuss the advantages and drawbacks of the platform. The view that emerged from the training course and the focus group was that the platform is well suited to the needs of the engineers working in the field: it allowed the separation of concerns among the different specialities and simplified some common development tasks such as data management and asynchronous interaction. Nevertheless, some deficiencies were pointed out in terms of a lack of maturity of some technological choices and the absence of some domain-specific tools, e.g. for data processing or for health-related communication protocols. Within HeartCycle, the platform was used to develop part of the Guided Exercise system, a composition of ICT tools for the physical rehabilitation of patients who have suffered a myocardial infarction. The system developed using the platform was tested in a randomized controlled clinical trial, in which 55 patients used the system for 21 weeks. The technical results of this trial showed that the system was stable and reliable. Some minor bugs were detected, but these were promptly corrected using the platform. This shows that the platform, as well as facilitating the development task, can be successfully used to produce reliable software.

Conclusions. The research work carried out in developing this thesis provides responses to the three research questions that motivated the work. RQ1: A model was developed representing the domain of personalised health systems, and the assessment of experts in the field was that it represents the domain accurately, with an appropriate balance between abstraction and detail. RQ2: A development platform based on the model was successfully developed. RQ3: The platform has been shown to assist developers in creating complex pHealth software. This was demonstrated within the scope of one large-scale project, but the generic approach adopted provides indications that it would offer benefits more widely. The results of these evaluations provide indications that both the model and the platform are good candidates for becoming a reference for future pHealth developments.
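
The "if-then" rule style offered by the Rule Engine can be illustrated with a minimal, hypothetical sketch; the rule structure, event names and thresholds below are invented for illustration and do not reproduce the actual pHPP API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """A single 'if-then' rule: a condition over collected data and an action to run."""
    name: str
    condition: Callable[[Dict[str, float]], bool]
    action: Callable[[Dict[str, float]], None]

class RuleEngine:
    """Minimal event-driven rule engine: each data event updates a shared context and
    every rule whose condition holds is fired."""
    def __init__(self) -> None:
        self.rules: List[Rule] = []
        self.context: Dict[str, float] = {}

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)

    def on_data(self, name: str, value: float) -> None:
        self.context[name] = value
        for rule in self.rules:
            if rule.condition(self.context):
                rule.action(self.context)

engine = RuleEngine()
engine.add_rule(Rule(
    name="high-heart-rate-alert",
    condition=lambda ctx: ctx.get("heart_rate", 0) > 120 and ctx.get("activity", 1) == 0,
    action=lambda ctx: print(f"ALERT: resting heart rate {ctx['heart_rate']:.0f} bpm"),
))

# Simulated events coming from the data collection layer.
engine.on_data("activity", 0)      # patient at rest
engine.on_data("heart_rate", 135)  # fires the rule
```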

Relevance: 40.00%

Abstract:

Virtual reality (VR) techniques for understanding data and drawing conclusions from it in an easy way are being used by the scientific community. However, these techniques are not used frequently for analyzing large amounts of data in the life sciences, particularly in genomics, due to the high complexity of the data (the curse of dimensionality). Nevertheless, new approaches that bring out the really important characteristics of the data raise the possibility of constructing VR spaces to visually understand its intrinsic nature. The benefits of representing high-dimensional data in three-dimensional spaces by means of dimensionality reduction and transformation techniques, complemented with a strong component of interaction methods, are well known. Thus, a novel framework designed to help visualize and interact with data about diseases is presented. In this paper, the framework is applied to the Van't Veer breast cancer dataset, while oncologists from La Paz Hospital (Madrid) interact with the obtained results. That is to say, a first attempt to generate a visually tangible model of breast cancer in order to support the experience of oncologists is presented.
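
A hedged sketch of the dimensionality-reduction step is shown below, using PCA as one possible transformation to place samples in a three-dimensional (VR) space; the random matrix is only a placeholder for a real gene-expression dataset, and PCA is not necessarily the transformation used by the framework.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for a gene-expression matrix (samples x genes); a real dataset such as the
# Van't Veer data would be loaded here instead of this random placeholder.
rng = np.random.default_rng(0)
expression = rng.normal(size=(100, 5000))

# Reduce to three coordinates so each sample can be placed as a point in a VR space.
coords_3d = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(expression))
print(coords_3d.shape)        # (100, 3): one 3D position per sample
```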

Relevance: 40.00%

Abstract:

Acquired brain injury (ABI) [1, 2] refers to any brain damage occurring after birth. It usually causes damage to portions of the brain. ABI may result in a significant impairment of an individual's physical, cognitive and/or psychosocial functioning. The main causes are traumatic brain injury (TBI), cerebrovascular accident (CVA) and brain tumors. The main consequence of ABI is a dramatic change in the individual's daily life. This change involves a disruption of the family, a loss of future income capacity and an increase in lifetime costs. One of the main challenges in neurorehabilitation is to obtain a dysfunctional profile of each patient in order to personalize the treatment. This paper proposes a system to generate a patient's dysfunctional profile by integrating theoretical, structural and neuropsychological information on a 3D brain imaging-based model. The main goal of this dysfunctional profile is to help therapists design the most suitable treatment for each patient. At the same time, the results obtained are a source of clinical evidence to improve the accuracy and quality of our rehabilitation system. Figure 1 shows the diagram of the system. The system is composed of four main modules: image-based extraction of parameters, theoretical modeling, classification, and a co-registration and visualization module.

Relevance: 40.00%

Abstract:

With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded when some of the objects they store are frequently accessed, while other cache instances are less frequently used. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing stored data. Considering the possible conflict of balancing multiple resources at the same time, we give CPU and memory resources weighted priorities based on the runtime load distributions. A scarcer resource is given a higher weight than a less scarce resource when load balancing. The system imbalance degree is evaluated based on monitoring information and on the utility load of a node, a unit for resource consumption. In addition, since continuous rebalancing of the system may affect the QoS of applications using the cache, our data selection policy ensures that each data migration minimizes the system imbalance degree and hence that the total reconfiguration cost can be minimized. An extensive simulation is conducted to compare our policy with other policies. Our policy shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
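
A simplified sketch of the scarcity-weighted imbalance computation is given below; the weighting scheme and the utilisation figures are illustrative and do not reproduce the paper's exact formulation.

```python
import statistics

def scarcity_weights(cpu_loads, mem_loads):
    """Give the scarcer (more heavily used on average) resource a higher weight;
    weights are normalised so that w_cpu + w_mem = 1."""
    cpu_use, mem_use = statistics.mean(cpu_loads), statistics.mean(mem_loads)
    total = cpu_use + mem_use
    return cpu_use / total, mem_use / total

def imbalance_degree(cpu_loads, mem_loads):
    """Weighted multi-resource imbalance: the per-resource spread (standard deviation
    of instance utilisations) combined with weights reflecting resource scarcity."""
    w_cpu, w_mem = scarcity_weights(cpu_loads, mem_loads)
    return w_cpu * statistics.pstdev(cpu_loads) + w_mem * statistics.pstdev(mem_loads)

# Illustrative utilisations (fraction of capacity) of three cache instances.
cpu = [0.90, 0.40, 0.20]
mem = [0.50, 0.45, 0.55]
print(f"weights (cpu, mem): {scarcity_weights(cpu, mem)}")
print(f"imbalance degree:   {imbalance_degree(cpu, mem):.3f}")
```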