915 results for "Knowledge base maintenance"


Relevance: 80.00%

Abstract:

In recent years the management of eco-innovations has been growing in importance, more in practice than in academia. Although the literature already offers some evidence focused on the management of eco-innovations, there is no comprehensive review of the knowledge base on the diffusion of eco-innovations. This paper provides a current overview of the existing body of literature, identifying the most active scholars and the most relevant publications in the field, and examining the major disciplines and research streams in depth. The results show that the theory of diffusion of innovations, which provided the philosophical underpinnings of how innovations diffuse, is not the main knowledge base for explaining the diffusion of eco-innovations. The lead-market hypothesis, sustainable transitions, and ecological modernization appear instead as the initial base of the cognitive platform that can contribute to understanding the diffusion of eco-innovations.

Relevance: 80.00%

Abstract:

This paper describes ExperNet, an intelligent multi-agent system developed under an EU-funded project to assist in the management of a large-scale data network. ExperNet assists network operators at the various nodes of a WAN in detecting and diagnosing hardware failures and network traffic problems, and suggests the most feasible solution through a web-based interface. ExperNet is composed of intelligent agents capable both of local problem solving and of social interaction with one another to coordinate problem diagnosis and repair. The current network state is captured and maintained by conventional network management and monitoring software components, which have been smoothly integrated into the system through sophisticated information-exchange interfaces. To implement the agents, a distributed Prolog system enhanced with networking facilities was developed. The agents' knowledge base is built on an extensible and reactive knowledge-base system capable of handling multiple types of knowledge representation. ExperNet has been developed, installed, and tested successfully in an experimental network zone in Ukraine.
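
As a rough illustration of the kind of diagnostic rule such an agent's knowledge base might hold, the sketch below renders one hardware-failure rule and one congestion rule in Python. ExperNet itself was implemented in a distributed Prolog system; the symptom names and thresholds here are invented for the example.

```python
# A hypothetical rule base for a node agent diagnosing local faults.
SYMPTOMS = {"iface_up": True, "link_utilization": 0.97, "packet_loss": 0.12}

RULES = [
    # (condition over observed symptoms, diagnosis, suggested repair)
    (lambda s: not s["iface_up"],
     "hardware failure", "replace interface card"),
    (lambda s: s["packet_loss"] > 0.05 and s["link_utilization"] > 0.9,
     "congestion", "reroute traffic via a neighbouring node"),
]

def diagnose(symptoms):
    """Fire the first matching rule; otherwise defer to peer agents."""
    for condition, diagnosis, repair in RULES:
        if condition(symptoms):
            return diagnosis, repair
    return "unknown", "escalate to a neighbouring agent"

print(diagnose(SYMPTOMS))  # ('congestion', 'reroute traffic via a neighbouring node')
```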

Relevance: 80.00%

Abstract:

More than 20 years after the appearance of the Internet, scientists, companies, users, and other sectors of society have sought to apply this technology to what has come to be called the "Internet of Things", which is nothing more than the remote control of any element useful or necessary for daily life and industry. The massive development of such applications, however, did not take off until important advances had been made in two fields: on the one hand, Wireless Sensor Networks (WSN), networks composed of small devices able to transmit the information they gather, passing it from their own wireless network to other wide-coverage networks; on the other, the ever-greater miniaturization of devices with enough autonomy to process data and interconnect with one another. Like conventional computer networks, WSNs can be compromised where security is concerned: their massive deployment will put millions of terabytes of data, often sensitive or subject to strict data-protection laws, into circulation in the information society, so what begins as a very attractive advantage for users can turn into a nightmare, given the constant threat to the minimum security services that development companies must guarantee to the users of their applications. To provide a minimum security level, these companies must carry out a meticulous study both of the particular application to be offered over a WSN and of the specific characteristics of the network, since its practically minute devices may be limited in battery size, processing capacity, memory, and so on. This project develops a unique application, as no software with similar characteristics currently exists, that represents an important advance in two main areas: it helps users who wish to deploy an application on a WSN to determine automatically which specific security mechanisms and services must be implemented in that network for that particular application, and it provides extra support to security experts researching the subject by serving as a test platform that centralizes the security knowledge available at a given moment in a single knowledge base, also providing a useful method for testing possible virtual scenarios.
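
A minimal sketch of the recommendation idea, assuming a simple rule set that maps application requirements and node constraints to security services; the rules, service names, and thresholds below are illustrative assumptions, not the project's actual knowledge base.

```python
# Hypothetical mapping from (application requirements, node constraints)
# to the security services to deploy on the WSN.
def recommend_security(app_requirements, node_constraints):
    services = set()
    if "confidential_data" in app_requirements:
        # Heavily constrained nodes favour symmetric ciphers over
        # public-key schemes (illustrative RAM threshold).
        services.add("AES-128 link encryption"
                     if node_constraints["ram_kb"] < 64
                     else "public-key key exchange + AES")
    if "integrity" in app_requirements:
        services.add("per-packet MAC")
    if "availability" in app_requirements:
        services.add("jamming/DoS monitoring")
    return sorted(services)

print(recommend_security({"confidential_data", "integrity"},
                         {"ram_kb": 10, "battery_mah": 800}))
```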

Relevance: 80.00%

Abstract:

A person's quality of life may depend on early attention to neurodevelopmental disorders in childhood. Identifying language disorders before the age of six can speed up the required diagnosis and/or treatment processes. This paper details the enhancement of a Clinical Decision Support System (CDSS) aimed at assisting pediatricians and language therapists in the early identification and referral of language disorders. The system helps fine-tune the Knowledge Base of Language Delays (KBLD), which had already been developed and validated in clinical routine with 146 children. Medical experts supported the construction of the Gades CDSS by drawing on scientific consensus from the literature and on fifteen years of registered use cases of children with language disorders. The current research focuses on an innovative cooperative model that allows the KBLD of Gades to evolve through supervised evaluation of the CDSS's learning with experts' feedback. The deployment of the resulting system is being assessed by a multidisciplinary team of seven experts from the fields of speech therapy, neonatology, pediatrics, and neurology.

Relevance: 80.00%

Abstract:

Background: Early and effective identification of developmental disorders during childhood remains a critical task for the international community. Language delays are the second most prevalent of the common developmental disorders in children and are frequently the first symptom of a possible disorder. Objective: This paper evaluates a Web-based Clinical Decision Support System (CDSS) whose aim is to enhance the screening of language disorders at a nursery school. The common lack of early diagnosis of language disorders led us to deploy an easy-to-use CDSS in order to evaluate its accuracy in the early detection of language pathologies. This CDSS can be used by pediatricians to support the screening of language disorders in primary care. Methods: This paper details the evaluation results of the "Gades" CDSS at a nursery school with 146 children, 12 educators, and 1 language therapist. The methodology embraces two consecutive phases. The first stage involves the observation of each child's language abilities, carried out by the educators, to facilitate the evaluation of language acquisition level performed by a language therapist. Next, the same language therapist evaluates the reliability of the observed results. Results: The Gades CDSS was integrated to provide the language therapist with the required clinical information. The validation process showed a global 83.6% (122/146) success rate in language evaluation and a 7% (7/94) rate of non-accepted system decisions for children from 0 to 3 years old. The system helped language therapists identify new children with potential disorders who required further evaluation. This process will revalidate the CDSS output and allow the enhancement of early detection of language disorders in children. The system does need minor refinement, since the therapists disagreed with some questions from the CDSS knowledge base (KB) and suggested adding a few questions about speech production and pragmatic abilities. The refinement of the KB will address these issues and include the requested improvements, with the support of the experts who took part in the original KB development. Conclusions: This research demonstrated the benefit of a Web-based CDSS for monitoring children's neurodevelopment via the early detection of language delays at a nursery school. Next steps focus on the design of a model that includes pseudo auto-learning capacity, supervised by experts.

Relevance: 80.00%

Abstract:

The objective of this master's thesis is the construction, by evolutionary techniques, of fuzzy-rule knowledge bases for an autonomous system capable of skillfully playing a 2D fighting video game. The use of fuzzy logic handles the imprecision implicit in the system's input variables and makes the overall behavior of the controller easier for humans to understand. To obtain the knowledge base that allows the system to take appropriate decisions during combat, a new operator for evolutionary algorithms has been designed. Grammar-guided genetic programming (GGGP) has been observed to exhibit a bias due to the crossover commonly used to obtain new individuals in the evolutionary process. To solve this problem, the sedimentation method is proposed, which avoids the tendency of GGGP to generate knowledge bases with few rules, independently of the grammar. The method is inspired by the sedimentation that occurs on the seabed and yields a substrate of optimal rules that form the final solution once the algorithm converges.
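
To make the approach concrete, here is a minimal sketch of a fuzzy rule base for such a controller: triangular memberships over the distance to the opponent and a handful of rules selecting an action. The rules are hand-written placeholders; in the thesis, bases of this kind are the individuals evolved by GGGP with the sedimentation operator.

```python
# Toy fuzzy controller for a 2D fighter (illustrative rules only).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def act(distance, opponent_attacking):
    near = tri(distance, -1, 0, 40)
    mid = tri(distance, 20, 60, 100)
    far = tri(distance, 80, 150, 151)
    # Rule strengths via max-min inference (second input is crisp here).
    rules = {
        "block": min(near, 1.0 if opponent_attacking else 0.0),
        "attack": min(near, 0.0 if opponent_attacking else 1.0),
        "approach": max(mid, far),
    }
    return max(rules, key=rules.get)

print(act(distance=15, opponent_attacking=True))    # block
print(act(distance=120, opponent_attacking=False))  # approach
```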

Relevance: 80.00%

Abstract:

This thesis explores the feasibility of the automatic decomposition of gamma-radiation spectra by means of linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms were designed with a view to their possible implementation on low-complexity, special-purpose processors.

The first chapter reviews the techniques for detecting and measuring gamma radiation that underlie the spectra treated in this work. It re-examines the concepts associated with the nature of electromagnetic radiation, together with the physical processes and the electronic treatment involved in its detection, highlighting the intrinsically statistical nature of the spectrum build-up process, which classifies the number of detections as a function of the supposedly continuous energy associated with each one. A brief description is given of the main phenomena of radiation-matter interaction that condition detection and spectrum formation. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main detector types are therefore examined, with special emphasis on semiconductor detectors, as these are the most widely used today. Finally, the chapter describes the fundamental electronic subsystems for conditioning and pre-treating the detector signal, traditionally referred to as Nuclear Electronics. As far as spectroscopy is concerned, the subsystem of greatest interest for this work is the multichannel analyzer, which carries out the qualitative treatment of the signal and builds a histogram of radiation intensity over the energy range to which the detector is sensitive. This N-dimensional vector is what is generally known as the radiation spectrum, and the different radionuclides present in a non-pure radiation source each leave their fingerprint in it.

The second chapter provides an exhaustive review of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for determining their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to deal with low-resolution spectra, the absence of a human operator (no supervision), and the possibility of being supported by low-complexity algorithms implementable on dedicated VLSI processors.

The analysis problem is formally stated in the third chapter along these lines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this type of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for constructing the operator, with arithmetic characteristics especially suited to implementation on highly integrated processors. Their adaptive nature gives the associative memory great flexibility for incorporating new information progressively.

The fourth chapter deals with an additional, highly complex problem: the spectral deformations introduced by instrumental drifts in the detector and in the pre-conditioning electronics, which invalidate the linear regression model used to describe the problem spectrum. A model is derived that includes these deformations as additional contributions to the composite spectrum, entailing a simple extension of the associative memory that enables it to tolerate drifts in the problem mixture and to perform a robust analysis of contributions. The extension method is based on the small-perturbation assumption. Laboratory practice shows, however, that instrumental drifts can sometimes cause severe spectral distortions that this model cannot handle. The fifth chapter therefore restates the problem of measurements affected by strong drifts from the standpoint of nonlinear optimization theory. This reformulation leads to a recursive algorithm, inspired by the Gauss-Newton method, that introduces the concept of the feedback linear memory, an operator with a markedly improved capability for decomposing mixtures affected by strong drift without the excessive computational burden of classical nonlinear optimization algorithms.

The work closes with a discussion of the results obtained at the three main levels of study addressed in chapters three, four, and five, together with the main conclusions derived from the study and an outline of possible lines of future work.
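
The core decomposition step reduces to solving the linear model y = Ax, where the columns of A are the reference spectra of the individual radionuclides and x their activities. A minimal numerical sketch with synthetic spectra follows; the peak positions and activities are invented for illustration.

```python
# Pseudo-inverse decomposition of a composite gamma spectrum.
import numpy as np

rng = np.random.default_rng(0)

# Columns of `library` are reference spectra of individual radionuclides
# (256 energy channels x 3 nuclides), synthesized here as Gaussian
# photopeaks for illustration.
channels = np.arange(256)

def peak(center, width=6.0):
    s = np.exp(-0.5 * ((channels - center) / width) ** 2)
    return s / s.sum()

library = np.column_stack([peak(60), peak(120), peak(190)])

# A composite spectrum: a known mixture of activities plus counting noise.
true_activities = np.array([5000.0, 2000.0, 800.0])
measured = rng.poisson(library @ true_activities).astype(float)

# Pseudo-inverse solution of y = A x (multiple linear regression); this is
# the operator a linear associative memory would realize.
activities = np.linalg.pinv(library) @ measured
print(activities)  # close to true_activities for well-separated peaks
```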

Relevance: 80.00%

Abstract:

Diabetes mellitus is a disease characterized by insufficient or absent insulin production by the pancreas, or by reduced sensitivity of the body to this hormone, which helps glucose reach the tissues and the nervous system to supply energy. Diabetes is more prevalent in developed countries due to multiple factors, among them obesity, sedentary lifestyles, and endocrine dysfunctions related to the pancreas. Type 1 diabetes is a chronic, incurable disease in which the insulin-producing beta cells of the pancreas are destroyed, making exogenous insulin administration necessary to control blood glucose levels. The patient must follow a subcutaneous insulin therapy adapted to their metabolic needs and lifestyle, one that tries to imitate the insulin profile of a healthy pancreas. Current technology makes it possible to pursue the development of the so-called "artificial endocrine pancreas" (AEP), which would bring precision, efficacy, and safety to insulin therapy and give patients, who are currently tied to constant decision-making, greater independence from their disease. The AEP consists of a continuous glucose sensor, an insulin infusion pump, and a control algorithm that computes the insulin to infuse using the patient's glucose levels as its main input.

This work presents a modification of the closed-loop control method proposed in a previous project. The starting controller consists of a Boolean basal controller and a postprandial fuzzy controller based on rules inherited from the basal one. The postprandial controller administers 50% of the manual bolus (calculated from the amount of carbohydrates the patient is about to consume) at the moment of the meal announcement and distributes the rest over later instants, the goal being optimal glucose regulation in the postprandial period. To reduce postprandial hyperglycemia, an insulin "transport" is performed: basal insulin from the postprandial period is brought forward and delivered together with a variable percentage of the manual bolus, this percentage being related to the patient's metabolic state before the meal. The knowledge base is also modified to adapt the controller's behavior to the postprandial period.

This project focuses on improving the previous postprandial fuzzy controller in two respects: the inference of the postprandial controller, and the addition of automatic decision-making on the percentage of the manual bolus and on the transport. A fuzzy controller with a new inference, which does not inherit the characteristics of the basal controller, has been proposed and adapted to the postprandial period. A fuzzy inference has been added that modifies both the amount of insulin to administer at the meal announcement and the amount of basal insulin to transport from the postprandial period to the manual bolus. The algorithm was validated through simulation experiments on a population of ten synthetic patients from the UVA/Padova simulator, evaluating the results with statistical parameters and comparing them with those obtained with the previous control method. The evaluation shows that the new postprandial controller, together with the automatic decision-making, achieves better glycemic control in the postprandial period, reducing the levels of hyperglycemia.
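
A minimal sketch of the meal-time dose split and basal "transport" described above; the up-front fractions and the glucose thresholds are illustrative assumptions, not the controller's actual fuzzy rules.

```python
# Hypothetical meal-time dose computation for the described scheme.
def meal_time_dose(carbs_g, carb_ratio_g_per_u, basal_rate_u_per_h,
                   preprandial_glucose_mg_dl, transport_hours=2.0):
    """Return (dose at meal announcement, bolus deferred to later instants)."""
    manual_bolus = carbs_g / carb_ratio_g_per_u

    # Automatic decision on the bolus fraction delivered up front, based
    # on the pre-meal metabolic state (illustrative thresholds).
    if preprandial_glucose_mg_dl > 180:
        upfront_fraction = 0.7
    elif preprandial_glucose_mg_dl < 90:
        upfront_fraction = 0.3
    else:
        upfront_fraction = 0.5  # the previous controller's fixed 50%

    # "Transport": basal insulin of the postprandial window brought
    # forward and delivered with the meal dose.
    transported_basal = basal_rate_u_per_h * transport_hours

    upfront = upfront_fraction * manual_bolus + transported_basal
    deferred = (1.0 - upfront_fraction) * manual_bolus
    return upfront, deferred

print(meal_time_dose(60, 10, 0.8, 200))
```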

Relevance: 80.00%

Abstract:

This doctoral thesis studies a new mineral composite material, composed mainly of a gypsum matrix (from an industrial binder based on multiphase calcium sulfate) and particles of mesoporous hydrophobic silica aerogel, compatibilized by a polymeric surfactant on account of the particles' highly hydrophobic character. The research focuses on the factors that influence the mechanical properties and thermal conductivity of the resulting composite. The study aims to contribute to the development of new mortars with very high thermal insulation that can be used in the energy renovation of existing dwellings, which account for a large share of the energy consumed by the housing stock in Spain and internationally. Among the materials used, gypsum, besides being very abundant (especially in Spain), requires less energy to manufacture as a binder than cement or lime, due to lower manufacturing temperatures, and therefore has a smaller carbon footprint. Mesoporous hydrophobic silica aerogel, for its part, is, according to the available literature, the material with the highest thermal insulation capacity currently on the market. Mineral mortars that insulate better than traditional insulation materials are particularly relevant to the energy retrofitting of historic and heritage buildings, where the insulation must be applied on the interior side of the facade; such solutions reduce the habitable space of the areas involved, especially in climates where the required insulation thickness is considerable, so high-performance materials able to provide the same insulation level (or better) at a much smaller thickness are ideal.

The research proceeds in three stages: bibliographic, experimental, and simulation. The first stage reviews the existing literature on insulating materials, covering solutions based both on insulating mortars and on thermal insulation panels. The second, experimental stage studies how the microstructure and macrostructure of the new mineral material influence the basic physical and mechanical properties and the thermal conductivity of the composite. The third stage, an energy-consumption simulation, theoretically quantifies the energy savings this material could provide in a particular energy retrofitting case.

The experimental work centered on the main factors governing the mechanical properties and thermal conductivity of the mineral composites developed in this thesis. The study materials were characterized and various test samples were produced, so that both the hydration of the gypsum within the composites and the resulting microstructure and macrostructure could be studied, these being fundamental to understanding the composite's mechanical and insulating properties. The factors influencing the studied properties could thus be identified and quantified, providing a base of knowledge and understanding for this type of mineral composite with hydrophobic silica aerogel; at the completion of this thesis, no published studies existed with the approach proposed here, either with gypsum (based on multiphase calcium sulfate) or with other binders. In particular, the influence of incorporating hydrophobic silica aerogel particles at high volume fractions into a mineral composite based on different calcium sulfate phases was determined. Mixing required a surfactant to compatibilize the particles with the water-based binder. Such additives influence not only the aerogel but the overall properties of the composite, depending on their concentration, so two addition levels were studied: the minimum amount needed to compatibilize the mixes (0.1% of the mixing water) and, as an upper limit, the concentration commonly used industrially to stabilize air bubbles in foamed concrete (5%). The surfactant proved able to modify the aerogel surface, changing the particles' behavior toward water and allowing the mixing water to partially invade their porous structure. This behavior greatly increases the water/gypsum ratio, affecting the crystal habit and degrading the mechanical properties of the gypsum matrix, an effect even more pronounced at the higher surfactant concentration (5%).

As for the final properties achieved, it was possible to obtain an ultra-lightweight mineral composite (200 kg/m3) with around 60% aerogel by volume and a very high insulating capacity (0.028 W/m·K), a thermal conductivity notably lower than that of the insulating mortars on the market and even lower than that of traditional insulation based on mineral wool or EPS; its low mechanical properties, however, remain a limitation conditioning possible future applications. Among the main factors, the mechanical properties were found to depend exponentially on the volume of gypsum in the composite. Second-order factors, such as the degree of hydration or a better distribution of the binder among the aerogel particles due to the increased specific surface of the mineral powder, can nevertheless raise the mechanical properties by a factor of two to three, depending on the aerogel volume involved. It was also found that the aerogel, together with the surfactant, entrains a large amount of air (0.70 m3 per m3 of aerogel); adding the evaporated water (not consumed by the binder during hydration), the total air volume generally reaches about 40%, regardless of the amount of aerogel in the mix. The air entrained in the matrix thus displaces the volume fractions of aerogel and gypsum, reducing both the mechanical properties and the insulating capacity of the composite. The thermal conductivity, in turn, showed a direct dependence on the contributions of the three main phases of the composite: gypsum, aerogel, and entrained air. On this basis, a mathematical model, adapted from an existing one, was developed that calculates quite accurately the thermal conductivity of such composites from the proportions of these three components, for the range of volumes and materials used in this thesis.

Finally, the energy simulation, applied to a typical Spanish dwelling from the years 1900 to 1959 (built with solid brick walls) in the climatic zones studied (A, D, and E), showed the potential energy savings this material can provide, depending on its thickness, as interior insulation of the facade walls. An optimal thickness of 1 cm was determined for zone A, and 3.5 cm and 3.9 cm for zones D and E respectively. The new material can thus reduce the thickness of the insulating layer by between 35% and 80% compared with rock-wool panels and with the mineral mortars of highest insulating capacity on the Spanish market, respectively.
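
The abstract does not reproduce the adapted conductivity model itself. As a stand-in, the sketch below uses a volume-fraction-weighted geometric mean over the three phases, a common mixing rule, with illustrative phase conductivities; both the rule and the values are assumptions, not the thesis's model.

```python
# Three-phase mixing estimate for the composite's thermal conductivity.
def thermal_conductivity(v_gypsum, v_aerogel, v_air,
                         k_gypsum=0.30, k_aerogel=0.015, k_air=0.026):
    """Volume-fraction-weighted geometric mean, in W/(m K)."""
    assert abs(v_gypsum + v_aerogel + v_air - 1.0) < 1e-9
    return (k_gypsum ** v_gypsum) * (k_aerogel ** v_aerogel) * (k_air ** v_air)

# Illustrative fractions in the spirit of the composition reported above
# (high aerogel content plus substantial entrained air).
print(thermal_conductivity(0.10, 0.55, 0.35))  # ~0.025 W/(m K)
```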

Relevance: 80.00%

Abstract:

This study seeks to understand the work done by teachers of different curricular components in the final years of elementary education (Ensino Fundamental) in the public school network of the city of São Paulo, in order to identify and understand their perceptions of teacher knowledge in the context of the teaching and learning process. It discusses the knowledge that teachers acquire and/or rework in pedagogical practice and that they see as opening possibilities for change in the teaching and learning process toward quality teaching. The adopted framework draws on studies of teacher knowledge and practice, of the teaching and learning process, and of the education of teachers working in the final years of elementary education, with Tardif, Garrido, Gatti, and Luckesi as its main authors. To this end, official documents were analyzed and a questionnaire was administered to twelve teachers of the final years of elementary education, with the aim of learning about aspects of their professional lives and the connections they draw between teacher knowledge, professional practice, and the teaching and learning process. The results show the knowledge the teachers have developed and the pedagogical practices they have built over their careers in the face of the difficulties evident in some groups of students. Finally, the data reveal the need to ensure discussion of the curriculum of the final years of elementary education, so that the progress educators aspire to, with everyone involved in the teaching and learning process, is given new meaning in the practice of knowledge.

Relevance: 80.00%

Abstract:

Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1,000 to 2,000 words and larger. Computer manufacturers are already building speech recognition subsystems into their new product lines. Before this technology can be broadly useful, however, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses the potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses the information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.

Relevance: 80.00%

Abstract:

Under present conditions of global competitiveness, rapid technological advance, and resource scarcity, innovation has become one of the most important strategic approaches an organization can exploit. In this context, a firm's innovation capacity, its capacity to engage in the introduction of new processes, products, or ideas, is recognized as one of the main sources of sustainable growth, effectiveness, and even survival for organizations. However, only a few companies have understood in practice what it takes to innovate successfully, and most see innovation as a major challenge. The reality is no different for Brazilian companies, and in particular for small and medium-sized enterprises (SMEs). Studies indicate that SMEs as a group generally show an even greater deficit in innovation capacity. In response to the challenge of innovating, a broad literature has emerged on various aspects of innovation, yet research on innovation still offers few conclusive results or comprehensive models, given the complexity of a multifaceted phenomenon driven by numerous factors. Moreover, there is a gap between what the general innovation literature has established and the literature on innovation in SMEs. Given the relevance of innovation capacity and the slow advance of its understanding in the context of small and medium-sized companies, whose difficulties in innovating can still be observed, this study set out to identify the determinants of SMEs' innovation capacity in order to build a model of high innovation capacity for this group of companies. The objective was addressed with a quantitative method involving binary logistic regression to analyze, from the SMEs' perspective, the 15 determinants of innovation capacity identified in the literature review. To apply the logistic regression technique, the categorical dependent variable was transformed into a binary one, with group 0 labeled unremarkable innovation capacity and group 1 defined as high innovation capacity. The total sample was then split into two subsamples, one for analysis containing 60% of the firms and the other for validation (holdout) with the remaining 40%. The overall fit of the model was assessed with McFadden's pseudo-R2, the Hosmer-Lemeshow chi-square test, and the hit rate (classification matrix). Once the overall fit of the model was confirmed, the coefficients of the variables included in the final model were analyzed for significance level, direction, and magnitude. Finally, the final logistic model was validated through the hit rate on the validation sample. The logistic regression analysis showed that 4 variables correlate positively and significantly with SMEs' innovation capacity and therefore distinguish firms with high innovation capacity from those with unremarkable innovation capacity. Based on this finding, the final model of high innovation capacity for SMEs was built from 4 determinants: external knowledge base (external), project management capacity (internal), internal knowledge base (internal), and strategy (internal).
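
The modeling procedure lends itself to a compact sketch: a binary logistic regression fitted on a 60% analysis subsample, checked with McFadden's pseudo-R2, and validated by the hit rate on the 40% holdout. The data below are synthetic, and the four predictors merely stand in for the study's determinants.

```python
# Binary logistic regression with a 60/40 analysis/holdout split.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=(n, 4))               # stand-ins for the determinants
beta = np.array([0.9, 0.7, 0.6, 0.5])     # synthetic true effects
p = 1 / (1 + np.exp(-(X @ beta - 0.2)))
y = rng.binomial(1, p)                    # 1 = high innovation capacity

# 60% analysis / 40% holdout split.
idx = rng.permutation(n)
train, hold = idx[: int(0.6 * n)], idx[int(0.6 * n):]

model = sm.Logit(y[train], sm.add_constant(X[train])).fit(disp=False)
print("McFadden pseudo-R2:", round(model.prsquared, 3))

# Hit rate (classification-matrix accuracy) on the holdout sample.
pred = model.predict(sm.add_constant(X[hold])) >= 0.5
print("Holdout hit rate:", round((pred == y[hold]).mean(), 3))
```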

Relevance: 80.00%

Abstract:

A study of women leaders in the Colorado Mountain Club (CMC) demonstrated that this group perceived pace as an impediment to leadership growth. This exploratory quantitative inquiry assessed the views of 20 of the active women hike leaders in the Denver group. The author designed a survey of factors that women hike leaders rated according to their CMC experiences. Although women make up the larger share of the Denver group's membership, women comprise only 30% of its leaders. The results from this first-ever survey of CMC's women leaders provide a knowledge base for CMC and other interested parties. The study clearly demonstrated the need for more research into the topic of women in leadership positions.

Relevance: 80.00%

Abstract:

In recent years, an important volume of research in Natural Language Processing has concentrated on the development of automatic systems to deal with affect in text. The approaches considered have dealt mostly with explicit expressions of emotion at the word level. Expressions of emotion, however, are often implicit, inferable from situations that have an affective meaning. Dealing with this phenomenon requires automatic systems to have "knowledge" of the situation, of the concepts it describes, and of their interaction, so as to be able to "judge" it in the same manner a person would. This necessity motivated us to develop the EmotiNet knowledge base, a resource for the detection of emotion from text based on commonsense knowledge about concepts, their interaction, and their affective consequence. In this article, we briefly present the process undergone to build EmotiNet and subsequently propose methods to extend the knowledge it contains. We then analyse the performance of implicit affect detection using this resource and compare the results obtained with EmotiNet to those of alternative affect detection methods. Following the evaluations, we conclude that the structure and content of EmotiNet are appropriate for addressing the automatic treatment of implicitly expressed affect, that the knowledge it contains can easily be extended, and that, overall, methods employing EmotiNet obtain better results than traditional emotion detection approaches.
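
The underlying idea, mapping a situation rather than an emotion word to an affective consequence, can be sketched with a toy triple store. The triples below are illustrative stand-ins; they are not EmotiNet's actual content or structure.

```python
# Toy commonsense store mapping situations to affective consequences.
KB = {
    ("person", "lose", "job"): "sadness",
    ("person", "pass", "exam"): "joy",
    ("person", "break", "promise"): "anger",
}

def implicit_affect(subject, action, obj):
    """Return the affective consequence of a situation, if known."""
    return KB.get((subject, action, obj), "unknown")

# "He failed to keep his word" contains no emotion word, yet the
# situation it describes has a known affective consequence.
print(implicit_affect("person", "break", "promise"))  # anger
```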

Relevance: 80.00%

Abstract:

This paper aims to identify the Mediterranean states' potential for adopting a regional strategy on climate change adaptation. The author proposes a Mediterranean Strategy on Adaptation to Climate Change as the first step toward a political and legal regional approach to climate change issues that would supplement the multilateral process under the United Nations Framework Convention on Climate Change and the Kyoto Protocol. According to the author, such a strategy would enhance cooperation between the EU and the other Mediterranean states in various ways. The EU's experience in regulating climate change and its ever-growing knowledge base on climate impacts could guide the other Mediterranean states and help bridge their knowledge gap on the topic. In turn, the support and cooperation of the EU's Mediterranean partners would give the EU an opportunity to better address the challenges climate change threatens to bring to its southernmost regions. The strategy could eventually even pave the way for the very first regional treaty on climate change, negotiated under the auspices of the Regional Seas Programme and the Union for the Mediterranean.