836 results for Decision support, computerized


Relevance:

80.00%

Publisher:

Abstract:

Knowledge resource reuse has become a popular approach within the ontology engineering field, mainly because it can speed up the ontology development process, saving time and money and promoting the application of good practices. The NeOn Methodology provides guidelines for reuse. These guidelines include the selection of the most appropriate knowledge resources for reuse in ontology development. This is a complex decision-making problem where different conflicting objectives, like the reuse cost, understandability, integration workload and reliability, have to be taken into account simultaneously. GMAA is a PC-based decision support system based on an additive multi-attribute utility model that is intended to allay the operational difficulties involved in the Decision Analysis methodology. The paper illustrates how it can be applied to select multimedia ontologies for reuse to develop a new ontology in the multimedia domain. It also demonstrates that the sensitivity analyses provided by GMAA are useful tools for making a final recommendation.
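
The abstract names an additive multi-attribute utility model; as a minimal, hedged sketch of that idea (not of GMAA itself), the snippet below ranks ontology-reuse candidates by a weighted sum of component utilities. The criteria names, weights, component utility functions and candidate values are hypothetical and chosen only for illustration.

```python
# Minimal sketch of an additive multi-attribute utility ranking,
# in the spirit of the approach described above.
# Criteria, weights, utility functions and candidate values are hypothetical.

def additive_utility(raw, weights, utilities):
    """U(a) = sum_i w_i * u_i(x_i(a)) for one alternative."""
    return sum(weights[c] * utilities[c](raw[c]) for c in weights)

# Component utility functions u_i mapping raw attribute values to [0, 1].
utilities = {
    "reuse_cost":        lambda cost: 1.0 - min(cost / 100.0, 1.0),  # cheaper is better
    "understandability": lambda u: u,                                # already in [0, 1]
    "integration_work":  lambda days: 1.0 - min(days / 30.0, 1.0),   # less work is better
    "reliability":       lambda r: r,                                # already in [0, 1]
}

# Attribute weights elicited from the decision maker (sum to 1).
weights = {"reuse_cost": 0.3, "understandability": 0.2,
           "integration_work": 0.2, "reliability": 0.3}

# Raw attribute values for each candidate ontology (hypothetical).
candidates = {
    "OntologyA": {"reuse_cost": 40, "understandability": 0.8,
                  "integration_work": 10, "reliability": 0.7},
    "OntologyB": {"reuse_cost": 70, "understandability": 0.9,
                  "integration_work": 20, "reliability": 0.9},
}

ranking = sorted(candidates,
                 key=lambda name: additive_utility(candidates[name], weights, utilities),
                 reverse=True)
for name in ranking:
    print(name, round(additive_utility(candidates[name], weights, utilities), 3))
```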

Relevance:

80.00%

Publisher:

Abstract:

Invited talk on air traffic management given at the UPM summer course "Research in Decision Support Systems for future Air Traffic Management".

Relevance:

80.00%

Publisher:

Abstract:

Due to the sensitive international situation caused by still-recent terrorist attacks, there is a common need to protect the safety of large spaces such as government buildings, airports and power stations. To address this problem, developments in several research fields, such as video and cognitive audio, decision support systems, human interfaces, computer architecture, communications networks and communications security, should be integrated with the goal of achieving advanced security systems capable of meeting all of the specified requirements and bridging the gap that currently exists in the market. This paper describes the implementation of a decision system for crisis management in infrastructural building security, specifically the management of building intrusions. The positions of unidentified persons are reported with the help of a Wireless Sensor Network (WSN). The goal is an intelligent system capable of making the best decision in real time in order to quickly neutralise one or more intruders who threaten strategic installations. It is assumed that the intruders’ behaviour is inferred from sequences of sensor activations and their fusion. This article presents a general approach to selecting the optimum operation from the available neutralisation strategies based on a Minimax algorithm. The distances among different scenario elements are used to measure the risk of the scene, and a path-planning technique is integrated in order to attain good performance. Different actions to be executed over the elements of the scene, such as moving a guard, blocking a door or turning on an alarm, are used to neutralise the crisis; this set of actions executed to stop the crisis is known as the neutralisation strategy. Finally, the system has been tested in simulations of real situations, and the results have been evaluated according to the final state of the intruders. In 86.5% of the cases the system achieved the capture of the intruders, and in 59.25% of the cases they were intercepted before they reached their objective.
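
The abstract does not give the algorithm's details; purely as a toy illustration of choosing a neutralisation action with a one-ply minimax rule, the sketch below minimises the worst-case risk over hypothetical intruder responses. The actions, intruder moves and risk table are invented for illustration; the real system derives risk from sensor fusion and path-planning distances.

```python
# Toy one-ply minimax selection of a neutralisation action.
# Actions, intruder responses and the risk table are hypothetical.

GUARD_ACTIONS = ["move_guard", "block_door", "turn_on_alarm"]
INTRUDER_MOVES = ["advance", "hide", "retreat"]

def risk(guard_action, intruder_move):
    """Hypothetical risk of the resulting scene (lower is better for the defender)."""
    table = {
        ("move_guard", "advance"): 0.4, ("move_guard", "hide"): 0.3, ("move_guard", "retreat"): 0.1,
        ("block_door", "advance"): 0.2, ("block_door", "hide"): 0.3, ("block_door", "retreat"): 0.2,
        ("turn_on_alarm", "advance"): 0.6, ("turn_on_alarm", "hide"): 0.4, ("turn_on_alarm", "retreat"): 0.1,
    }
    return table[(guard_action, intruder_move)]

def best_neutralisation_action():
    # Minimax: the defender minimises the worst-case (intruder-maximised) risk.
    return min(GUARD_ACTIONS,
               key=lambda a: max(risk(a, m) for m in INTRUDER_MOVES))

print(best_neutralisation_action())  # -> "block_door" with the table above
```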

Relevance:

80.00%

Publisher:

Abstract:

This Final Degree Project has as its fundamental objective the improvement and deployment of a decision support system that evaluates language development in children from 0 to 6 years of age. The system is mainly formed by an application designed and built using a modular, reusable software component architecture. The application will be used by pediatricians to carry out evaluations of children's language development, and also by neuropediatricians, speech therapists and members of Early Childhood Intervention teams to consult the evaluations and validate the decisions proposed by the system. The system is accessible via the web and stores all the information it handles in a database. Likewise, the system relies on a previously developed conceptual model, or ontology, to infer the appropriate decisions for the language evaluations. The system also includes user management functions.

Relevance:

80.00%

Publisher:

Abstract:

During the last 40 years, decision support for species selection in ecological restoration in Spain has been based mainly on species distribution models (also called ecological niche models), which estimate the probability of occurrence of a species as a function of environmental predictors (climate, soil, etc.). This Thesis proposes some methodological improvements intended to increase the predictive performance of such models, given the data currently available in Spain and focusing on the use of the models for species selection in ecological restoration. Species distribution data are not always available at a spatial resolution suited to the scale of restoration projects, whereas coarse-grained data exist for almost every plant species in Spain. A recalibration method is proposed that updates a coarse-grained logistic regression model with a new fine-grained sample. The method yields acceptable predictive performance with relatively small updating samples (25 occurrences of the species), in contrast with the much larger samples (more than 100 occurrences) required by a conventional modelling strategy that discards the previous coarse-grained model. The choice of statistical method can have a decisive effect on predictive performance, which is why method comparisons have received much attention over the last decade. Previous studies regarded logistic regression as inferior to more modern techniques such as maximum entropy. The results of this Thesis show that the observed difference arises because maximum entropy models include regularization techniques while the versions of logistic regression used in those comparisons do not. Once regularization is added to logistic regression through penalization, the differences in predictive performance disappear. Penalized logistic regression is therefore another valid alternative for fitting species distribution models, on a par with the best-performing modern methods such as maximum entropy. Species distribution models often omit soil-related variables because direct measurements of soil physical or chemical properties are rarely available. Incorporating coarse-grained data from national or continental soil maps could be an alternative. The results of this Thesis suggest that fine-grained species distribution models slightly but significantly improve their predictive performance when soil variables from coarse-grained soil maps are included. Validation is one of the key stages in the development of any empirical model, including species distribution models. Models are usually validated by evaluating their predictive performance species by species, i.e., by comparing the observed presence or absence of the species with the model predictions over a set of sites. This kind of evaluation does not answer a key question in vegetation restoration: which n species are the most suitable for the site to be restored? A model evaluation method adapted to this question is proposed, which estimates the ability of a set of models to discriminate between the species present and absent at a specific site. The method has been successfully applied to the validation of 188 distribution models of woody species aimed at species selection for vegetation restoration in Spain. The proposed methodological improvements increase the predictive performance of species distribution models applied to species selection in vegetation restoration and also extend the number of species for which a decision-supporting model can be fitted.
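
The abstract names penalized logistic regression as a species distribution modelling technique; as a hedged sketch of that technique (not of the thesis models or data), the snippet below fits an L1-penalized logistic regression on hypothetical presence/absence data with climate and soil predictors using scikit-learn. All predictor names and data are synthetic and illustrative only.

```python
# Minimal sketch: penalized (regularized) logistic regression as a
# species distribution model. Data and predictor names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical environmental predictors: temperature, precipitation, soil pH.
X = np.column_stack([
    rng.normal(12, 4, n),      # mean annual temperature (degC)
    rng.normal(600, 200, n),   # annual precipitation (mm)
    rng.normal(6.5, 0.8, n),   # topsoil pH from a coarse-grained soil map
])
# Hypothetical presence/absence response driven by the predictors.
logit = -8 + 0.3 * X[:, 0] + 0.005 * X[:, 1] + 0.4 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty shrinks uninformative coefficients; C controls its strength.
sdm = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
sdm.fit(X_tr, y_tr)

print("coefficients:", sdm.coef_.ravel())
print("AUC:", round(roc_auc_score(y_te, sdm.predict_proba(X_te)[:, 1]), 3))
```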

Relevance:

80.00%

Publisher:

Abstract:

Diabetes mellitus is the set of alterations caused by a defect in the amount of insulin secreted or by a deficient use of that insulin. It is a direct cause of short-, medium- and long-term complications that diminish the quality of life and the life expectancy of people with diabetes. Diabetes mellitus is currently one of the most important health problems. Its prevalence has tripled in the past 20 years, and by 2025 almost 300 million people are expected to have diabetes. This increase in prevalence, together with the morbidity and mortality associated with its micro- and macrovascular complications, makes diabetes a burden on health systems, their financial resources and their professionals, turning the disease into an individual and public health problem of enormous proportions. There is currently no cure for this disease, so the therapeutic goal of diabetes treatment focuses on normalizing blood glucose, trying to minimize hyper- and hypoglycemic events and to avoid, or at least delay, the appearance and progression of vascular complications, which are the main cause of morbidity and mortality among people with diabetes. Adequate diabetes control requires an individualized treatment that considers many factors for each patient (age, physical activity, eating habits, presence of complications related or unrelated to diabetes, cultural factors, etc.). In the short term, however, the two most influential variables the patient can manage to act on his/her glycemic level are the administered insulin and the diet. Both present a delay between the time of application and the onset of their action, associated with their absorption. For this reason, the ability to predict the evolution of the glycemic profile in the near future helps the patient to make appropriate decisions to maintain good control of the disease and to avoid risky situations. This is the goal of prediction in diabetes: to anticipate the evolution of the glycemic profile in the near future in order to help the patient adapt his/her lifestyle and corrective actions, so that blood glucose levels approach those of a healthy person and the symptoms and complications of poor control are avoided. The recent emergence of continuous glucose monitoring systems has provided new alternatives. The availability of an exhaustive record of the variations of the glycemic profile, with a sampling period of between one and five minutes, has encouraged new models that try to predict blood glucose using only previous glucose measurements, or at least significantly reducing the input information required by the algorithms. Requiring less intervention from the patient opens new possibilities for the application of glucose predictors, making their use feasible in real time as decision support systems, as detectors of risky situations, or integrated into automatic control algorithms. This doctoral thesis proposes different glucose prediction algorithms for patients with diabetes, based on the information recorded by a continuous glucose monitoring system and incorporating information on the administered insulin and the carbohydrate intake. The proposed algorithms have been evaluated in simulation and using data from patients recorded in different clinical studies. To this end, a comprehensive methodology has been developed that characterizes the performance of the prediction models from every point of view: accuracy, delay, noise and ability to detect risky situations. The necessary simulation tools have been developed, and the patient databases have been analyzed and prepared. One of the proposed algorithms has also been tested to verify the validity of real-time prediction in a clinical scenario. The tools needed to carry out the defined experimental protocol, in which the patient consults the prediction on demand and has control over his/her metabolic variables, have also been developed. This experiment has made it possible to assess the impact of using glucose prediction on glycemic control.
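
The thesis abstract does not specify the model family; purely as a hedged illustration of the prediction task, the sketch below fits a simple autoregressive model with exogenous insulin and carbohydrate inputs on synthetic CGM data and predicts glucose 30 minutes ahead (a 6-step horizon at 5-minute sampling). None of the data or coefficients correspond to the thesis algorithms.

```python
# Toy sketch: autoregressive glucose prediction with exogenous inputs
# (insulin and carbohydrates), fitted by least squares. All data are synthetic;
# this only illustrates the prediction task, not the thesis algorithms.
import numpy as np

rng = np.random.default_rng(1)
T = 600                                                       # 600 samples at 5 min
glucose = 120 + np.cumsum(rng.normal(0, 2, T))                # synthetic CGM trace (mg/dL)
insulin = rng.binomial(1, 0.02, T) * rng.uniform(1, 6, T)     # bolus events (U)
carbs   = rng.binomial(1, 0.03, T) * rng.uniform(10, 60, T)   # meal events (g)

p, horizon = 6, 6          # 6 past samples (30 min) in, 30 min ahead out

# Build the regression matrix from past glucose, insulin and carbs.
rows, targets = [], []
for t in range(p, T - horizon):
    rows.append(np.concatenate([glucose[t - p:t], insulin[t - p:t], carbs[t - p:t], [1.0]]))
    targets.append(glucose[t + horizon])
A, b = np.array(rows), np.array(targets)

coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Use: predict glucose 30 minutes after the last available sample.
x_now = np.concatenate([glucose[-p:], insulin[-p:], carbs[-p:], [1.0]])
print("predicted glucose in 30 min: %.1f mg/dL" % (x_now @ coef))
```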

Relevance:

80.00%

Publisher:

Abstract:

Neuro-evolutive development from birth until the age of six years is a decisive factor in a child's quality of life. Early detection of development disorders in early childhood can facilitate the necessary diagnosis and/or treatment. Primary-care pediatricians play a key role in their detection, as they can undertake the preventive and therapeutic actions required to promote a child's optimal development. However, the lack of time and of specific knowledge in primary care prevents the continuous application of early anomaly-detection procedures. This research paper focuses on the deployment and evaluation of a smart system that enhances the screening of language disorders in primary care. Pediatricians get support to proceed with the early referral of language disorders. The proposed model provides them with a decision-support tool for referral actions that trigger essential diagnostic and/or therapeutic actions for a comprehensive individual development. The research started from a sample of 60 cases of children with language disorders. Validation was carried out in two complementary steps: first, with a team of seven experts from the fields of neonatology, pediatrics, neurology and language therapy, and, second, through the evaluation of 21 additional previously diagnosed cases. The results obtained show that the therapists positively accepted the system's proposal in 18 cases (86%) and suggested a system redesign, for a single referral to a speech therapist, in the three remaining cases.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the model named Accepting Networks of Evolutionary Processors as an NP-problem solver inspired by biological DNA operations. Each processor has a set of rules (splicing rules in this model), a multiset of objects and a set of filters. Rules can be applied in parallel, since there exists a large number of copies of the objects in the multiset. Processors can form a graph in order to solve a given problem. This paper shows the network configuration that solves the SAT problem using linear resources and time. A rule-representation architecture for distributed environments, such as decision support systems, can easily be implemented using these networks of processors, as shown in the paper.
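
The paper's splicing rules and filters are not reproduced here; as a loose, hedged illustration of the generate-and-filter principle behind such networks, the sketch below grows partial truth assignments (one "processor" extends them, another filters out those that falsify a clause) for a small hypothetical SAT instance. It does not implement the actual splicing-rule semantics of the model.

```python
# Loose illustration of the generate-and-filter idea behind networks of
# evolutionary processors applied to SAT. The instance is hypothetical and
# the sketch does not implement the model's splicing rules or filters.

# CNF clauses over variables 1..3; positive literal = var, negative = -var.
clauses = [(1, -2), (-1, 3), (2, 3)]
n_vars = 3

def violates(assignment, clause):
    """True if every literal of the clause is assigned and evaluates to false."""
    vals = [assignment.get(abs(l)) for l in clause]
    if any(v is None for v in vals):
        return False
    return not any((v if l > 0 else not v) for l, v in zip(clause, vals))

# Generation step: extend every surviving assignment with both values of the
# next variable; filtering step: keep only assignments passing all clauses.
population = [dict()]
for var in range(1, n_vars + 1):
    population = [{**a, var: val} for a in population for val in (False, True)]
    population = [a for a in population if not any(violates(a, c) for c in clauses)]

print("satisfying assignments:", population)
```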

Relevance:

80.00%

Publisher:

Abstract:

The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied to decision support in an e-Manufacturing environment. Its application improves the interoperability needed in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and a PM Web-services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the model.
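
The paper's actual data model is not reproduced here; purely as an illustrative assumption, the sketch below shows how a homogeneous performance-measurement record for an equipment-maintenance indicator might be represented and serialized to JSON for exchange between web services. All field names and values are hypothetical.

```python
# Hypothetical sketch of a homogeneous PM information record exchanged
# between e-Manufacturing web services (field names are illustrative only,
# not the paper's data model).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PMRecord:
    indicator: str          # e.g. a maintenance KPI such as MTBF
    value: float
    unit: str
    equipment_id: str
    timestamp: str          # ISO 8601, so any consumer can parse it

record = PMRecord(
    indicator="MTBF",
    value=312.5,
    unit="hours",
    equipment_id="PRESS-07",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Serialize to JSON for transport through a web service.
payload = json.dumps(asdict(record))
print(payload)

# A consumer decodes the same homogeneous structure on the other side.
received = PMRecord(**json.loads(payload))
print(received.indicator, received.value, received.unit)
```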

Relevance:

80.00%

Publisher:

Abstract:

The complexity of climate change and its evolution during the last few years have driven new developments and approaches to reduce CO2 emissions. Looking for a methodology to evaluate the sustainability of a roadway, a tool has been developed. Life Cycle Assessment (LCA) is being accepted by the road industry as a way to measure and evaluate the environmental impacts of an infrastructure, such as energy consumption and carbon footprint. This paper describes the methodology to calculate the CO2 emissions associated with the energy embodied in a roadway along its life cycle, including construction, operation and demolition. It will assist in finding solutions to improve the energy footprint and reduce the amount of CO2 emissions. Details are provided of both the methodology and the data acquisition. This paper is an application of the methodology to Spanish highways, using a local database. Two case studies and a practical example are presented to show the model as a decision-support tool for sustainable construction in the road industry.
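
The paper's emission factors and inventory are not given in the abstract; the sketch below only illustrates the kind of life-cycle aggregation described, summing hypothetical quantities multiplied by hypothetical emission factors over the construction, operation and demolition phases. None of the numbers come from the paper or its local database.

```python
# Illustrative life-cycle CO2 aggregation for a roadway.
# Quantities and emission factors are hypothetical placeholders.

# kg CO2 per unit of each item (hypothetical emission factors).
emission_factors = {
    "asphalt_t": 60.0,       # per tonne of asphalt mix
    "diesel_l": 2.7,         # per litre of diesel (machinery, haulage)
    "electricity_kwh": 0.3,  # per kWh (lighting during operation)
}

# Quantities consumed in each life-cycle phase (hypothetical inventory).
phases = {
    "construction": {"asphalt_t": 12000, "diesel_l": 90000},
    "operation":    {"electricity_kwh": 1500000, "diesel_l": 20000},
    "demolition":   {"diesel_l": 40000},
}

def phase_emissions(inventory):
    return sum(qty * emission_factors[item] for item, qty in inventory.items())

total = 0.0
for phase, inventory in phases.items():
    e = phase_emissions(inventory)
    total += e
    print(f"{phase}: {e / 1000:.1f} t CO2")
print(f"total life cycle: {total / 1000:.1f} t CO2")
```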

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the knowledge model of a distributed decision support system that has been designed for the management of a national network in Ukraine. It shows how advanced Artificial Intelligence techniques (multiagent systems and knowledge modelling) have been applied to solve this real-world decision support problem: on the one hand, its distributed nature, implied by the different loci of decision-making at the network nodes, suggested a multiagent solution; on the other, the complexity of problem-solving for local network administration made it useful to apply knowledge modelling techniques in order to structure the different knowledge types and reasoning processes involved. The paper starts with a description of our particular management problem. Subsequently, our agent model is described, pointing out the local problem-solving and coordination knowledge models. Finally, the dynamics of the approach is illustrated with an example.
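
The paper's knowledge model is not detailed in the abstract; purely as an illustrative sketch, the code below separates each node agent's local problem-solving knowledge from its coordination knowledge. The node names, symptoms and rules are hypothetical, not the system described in the paper.

```python
# Illustrative-only sketch of separating local problem-solving knowledge
# from coordination knowledge in a multiagent network-management setting.
# Node names, symptoms and rules are hypothetical.

class NodeAgent:
    def __init__(self, name):
        self.name = name

    # Local problem-solving knowledge: map observed symptoms to a decision.
    def diagnose(self, symptoms):
        if "link_down" in symptoms:
            return "reroute_traffic"
        if "high_latency" in symptoms:
            return "throttle_low_priority"
        return "escalate"

    # Coordination knowledge: decide whether to involve neighbouring agents.
    def coordinate(self, decision, neighbours):
        if decision == "escalate":
            return [(n.name, "request_assistance") for n in neighbours]
        return []

node_a = NodeAgent("node-A")
node_b = NodeAgent("node-B")

decision = node_a.diagnose({"high_latency"})
messages = node_a.coordinate(decision, [node_b])
print(decision, messages)
```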

Relevance:

80.00%

Publisher:

Abstract:

Crop simulation models allow analyzing various tillage-rotation combinations and exploring management scenarios. This study was conducted to test the DSSAT (Decision Support System for Agrotechnology Transfer) modelling system in rainfed semiarid central Spain. The focus is on the combined effect of the tillage system and winter cereal-based rotations (cereal/legume/fallow) on crop yield and soil quality. The observed data come from a 16-year field experiment. The CERES and CROPGRO models, included in DSSAT v4.5, were used to simulate crop growth and yield, and DSSAT-CENTURY was used in the soil organic carbon (SOC) and soil nitrogen (SN) simulations. Genetic coefficients were calibrated using part of the observed data. Field observations showed that barley grain yield was lower for continuous cereal (BB) than for vetch (VB) and fallow (FB) rotations under both tillage systems. The CERES-Barley model also reflected this trend. The model predicted higher yield under conventional tillage (CT) than under no tillage (NT), probably due to the higher nitrogen availability in CT shown in the simulations. SOC and SN were higher in NT than in CT in the top layer only, and decreased with depth in both simulated and observed values. These results suggest that CT-VB and CT-FB were the best combinations for the dryland conditions studied. However, CT presented lower SN and SOC content than NT. This study shows how models can be a useful tool for assessing and predicting crop growth and yield under different management systems and specific edapho-climatic conditions. Additional key words: CENTURY model; CERES-Barley; crop simulation models; DSSAT; sequential simulation; soil organic carbon.
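
DSSAT itself is not scripted here; as a hedged illustration of how simulated and observed barley yields for the tillage-rotation treatments might be compared, the sketch below computes a per-treatment RMSE from hypothetical yield data. The treatment labels follow the abstract, but all yield values are invented.

```python
# Hypothetical comparison of simulated vs. observed barley yields per
# tillage-rotation treatment (kg/ha). Values are invented for illustration;
# they are not the experiment's or the DSSAT runs' actual outputs.
import math

observed = {   # treatment -> observed yields over several seasons
    "CT-BB": [1800, 2100, 1500], "CT-VB": [2500, 2700, 2300], "CT-FB": [2600, 2400, 2800],
    "NT-BB": [1600, 1900, 1400], "NT-VB": [2300, 2500, 2200], "NT-FB": [2400, 2300, 2600],
}
simulated = {
    "CT-BB": [1900, 2000, 1600], "CT-VB": [2450, 2800, 2250], "CT-FB": [2700, 2350, 2750],
    "NT-BB": [1700, 1850, 1500], "NT-VB": [2250, 2600, 2150], "NT-FB": [2500, 2250, 2550],
}

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

for treatment in observed:
    print(treatment, round(rmse(observed[treatment], simulated[treatment]), 1), "kg/ha")
```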

Relevance:

80.00%

Publisher:

Abstract:

This project is based on JRodos, a real-time online decision support system for nuclear and radiological emergency management. After a brief description of the system, the calculation models it uses and the modular organization of the program are presented. In particular, this document focuses on a recently developed module named ICRP, characterized by taking into account all the exposure pathways to radioactive contamination, including ingestion, which had not been considered in previous modules. This new model uses results obtained from the local-scale model chain LSMC as input data, so a detailed description of the operation and execution of both the ICRP module and the preceding LSMC chain is given. Finally, an ICRP exercise is run using the real meteorological and source-term data that were used in the CURIEX 2013 drill carried out in November 2013 at the Almaraz Nuclear Power Plant. The execution of this exercise is presented step by step, and the results obtained are then analyzed and explained, accompanied by the visual output provided by the program.
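
JRodos and its ICRP module are not scripted here; the sketch below only illustrates, with hypothetical dose contributions, the idea of aggregating over all the exposure pathways the module considers, ingestion included. Pathway names and values are assumptions for illustration, not JRodos or ICRP output.

```python
# Illustrative aggregation of dose contributions over exposure pathways,
# in the spirit of an all-pathways assessment (ingestion included).
# Pathway names and values (mSv) are hypothetical.

dose_by_pathway = {
    "cloud_shine": 0.12,
    "ground_shine": 0.45,
    "inhalation": 0.30,
    "ingestion": 0.60,   # the pathway added by the new module described above
}

total_dose = sum(dose_by_pathway.values())
for pathway, dose in sorted(dose_by_pathway.items(), key=lambda kv: -kv[1]):
    print(f"{pathway}: {dose:.2f} mSv ({100 * dose / total_dose:.0f}%)")
print(f"total effective dose: {total_dose:.2f} mSv")
```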

Relevance:

80.00%

Publisher:

Abstract:

This Final Degree Project is the first step in building an evolutionary knowledge platform for two systems that facilitate the early detection of language disorders in children from 0 to 6 years of age. Specifically, the main objective of this project is the design, development and deployment of a system for collecting improvement proposals on the knowledge base of the decision support systems Gades and Pegaso. The system consists mainly of an application designed and built using a modular, reusable software component architecture. The application will be used by the users of the Pegaso and Gades platforms to submit change proposals on the knowledge base of those systems. The system is accessible via the web and stores all the information it handles in a database. It also presents a study of applications oriented to computer-supported cooperative work (CSCW) and collaborative decision making, as a preliminary step toward a future functionality of the system.