798 results for Computer-Based Training System
Abstract:
One possible approach to municipal solid waste management is energy recovery, that is, incineration with recovery of the energy released. It is, however, very important to control the incineration process properly in order to avoid, as far as possible, the release of pollutants into the atmosphere that could cause industrial pollution problems. Ensuring that both the incineration process and the flue gas treatment operate under optimal conditions presupposes a good knowledge of the dependencies between the process variables. Suitable methods are needed for measuring the most important variables, and the measured values must be processed with appropriate models to transform them into control quantities. A classical control model looks unpromising in this case because of the complexity of the processes, the lack of a quantitative description and the need to perform the calculations in real time. This can only be achieved with the help of modern data processing and computational methods, such as simulation techniques, mathematical models, knowledge-based systems and intelligent interfaces. In [Ono, 1989] a fuzzy logic control system applied to municipal waste incineration is described. At the FZK research centre in Karlsruhe, applications combining fuzzy logic with neural networks [Jaeschke, Keller, 1994] are being developed for the control of the TAMARA waste incineration pilot plant. This thesis proposes a knowledge acquisition method for the control of complex systems inspired by human behaviour. When we face an unknown situation, at first we do not know how to act, except by extrapolating from previous experiences that may be useful. By applying trial-and-error procedures, hypothesis reinforcement, and so on, we acquire and refine knowledge and build a mental model. An analogous method, implementable in a computer system, can be designed using Artificial Intelligence techniques. In a complex process we often have a set of process data that, a priori, is not structured well enough to be useful. Knowledge acquisition then proceeds through a series of stages: - A first selection is made of the variables of interest. - State of the system: classification techniques (unsupervised learning) are applied first to group the data and obtain a representation of the state of the plant. A classification can be established, but normally almost all the data fall into a single class corresponding to normal operation. Once this is done, and to refine the knowledge, classical statistical methods are used to look for correlations between variables (principal component analysis) so as to simplify and reduce the list of variables. - Signal analysis: to analyse and classify the signals (for example the furnace temperature), methods better able to describe the nonlinear behaviour of the system, such as neural networks, can be used. A further step consists of establishing causal relations between the variables, for which analytical models are helpful. - As the final result of the process, the knowledge-based system is designed.
The main objective is to apply the method to the specific case of controlling a municipal solid waste treatment plant with energy recovery. First, Chapter 2, Municipal solid waste, deals with the overall problem of waste management, giving an overview of the existing alternatives and of the current national and international situation. The problems of waste incineration are analysed in more detail, paying particular attention to those characteristics of the waste that matter most for the combustion process. Chapter 3, Process description, gives a general description of the incineration process and of the different elements of an incineration plant: from the reception and storage of the waste, through the different types of furnaces and the requirements of good combustion practice codes, to the combustion air system and the flue gas system. The different flue gas cleaning systems are also presented, and finally the ash and slag removal system. Chapter 4, The Girona municipal solid waste treatment plant, describes the main systems of the Girona incineration plant: the waste feed, the type of furnace, the energy recovery system, and the flue gas cleaning system. It also describes the control system, plant operation, operating data, the instrumentation, and the variables of interest for combustion control. Chapter 5, Techniques used, provides an overview of knowledge-based systems and expert systems. The different techniques used are explained: neural networks, classification systems, qualitative models, and expert systems, illustrated with application examples. With respect to knowledge-based systems, the conditions for their applicability and the forms of knowledge representation are analysed first. The different forms of reasoning are then described: neural networks, expert systems and fuzzy logic, together with a comparison between them. An application of neural networks to the analysis of temperature time series is presented. The analysis of operating data using statistical techniques and classification techniques is also addressed. Another section is devoted to the different types of models, including a discussion of qualitative models. The computer-aided design environment for supervision systems, CASSD, used in this thesis is described, together with the analysis tools for obtaining qualitative information about process behaviour: Abstractors and ALCMEN. An example of applying these techniques to find the relations between temperature and operator actions is included. Finally, the main characteristics of expert systems in general, and of the expert system CEES 2.0, which is also part of the CASSD environment used, are analysed. Chapter 6, Results, presents the results obtained by applying the different techniques: neural networks, classification, the development of the combustion process model, and rule generation.
Within the data analysis section, a neural network is used to classify a temperature signal, and the use of the LINNEO+ method to classify the plant's operating states is described. In the modelling section, a combustion model is developed that serves as a basis for analysing the behaviour of the furnace under steady-state and dynamic conditions. A parameter, the flame surface, related to the extent of the fire on the grate, is defined. A linearised model is used to analyse the dynamic response of the incineration process. Qualitative relations between the variables are then defined and used to build a qualitative model, and a new qualitative model is subsequently developed based on the analytical dynamic model. Finally, the knowledge base of the expert system is built through rule generation. Chapter 7, Control system of an incineration plant, analyses the objectives of an incineration plant control system, its design and its implementation. The basic objectives of the combustion control system, its configuration and its implementation in Matlab/Simulink using the tools developed in the previous chapter are described. Finally, to show how the different methods developed in this thesis can be applied, an expert system is built to keep the furnace temperature constant by acting on the waste feed. The Conclusions chapter presents the conclusions and results of this thesis.
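To make the first stages of the knowledge-acquisition method concrete (unsupervised classification of operating states followed by principal component analysis to prune the variable list), a minimal sketch is given below. The data matrix, number of clusters and the use of scikit-learn are illustrative assumptions; the thesis itself relies on LINNEO+ and the CASSD environment rather than these libraries.

# A minimal sketch, assuming logged plant data in a NumPy array: unsupervised
# classification of operating states (stage 1) followed by principal component
# analysis to reduce the variable list (stage 2). scikit-learn's KMeans and PCA
# are stand-ins for the classification tools actually used in the thesis.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical matrix of logged process variables: one row per sample, columns
# such as furnace temperature, O2, CO, steam flow, grate speed, waste feed, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # placeholder for real operating data

X_std = StandardScaler().fit_transform(X)

# Stage 1: group samples into candidate operating states (most data are
# expected to fall into one class corresponding to normal operation).
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

# Stage 2: use correlations between variables to simplify the variable list.
pca = PCA(n_components=0.95)              # keep 95% of the variance
scores = pca.fit_transform(X_std)
print("samples per state:", np.bincount(states))
print("variables reduced from", X.shape[1], "to", scores.shape[1], "components")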
Abstract:
Circuit testing is a phase of the production process that becomes increasingly important when a new product is developed. Test and diagnosis techniques for digital circuits have been developed and automated successfully, whereas this is not yet the case for analogue circuits. Among the methods proposed for diagnosing analogue circuits, the most widely used are fault dictionaries. This thesis describes several of them, analysing their advantages and drawbacks. In recent years, Artificial Intelligence techniques have become one of the most important research fields for fault diagnosis, and this thesis develops two such techniques in order to address some of the shortcomings of fault dictionaries. The first proposal is based on building a fuzzy system as a tool for fault identification. The results obtained are quite good, since the fault is located in a high percentage of cases; on the other hand, the success rate is not good enough when the parameter deviation must also be determined. Since fault dictionaries can be seen as a simplified approximation to Case-Based Reasoning (CBR), the second proposal extends fault dictionaries towards a CBR system. The purpose is not to give a general solution to the problem but to contribute a new methodology, which improves the diagnosis of fault dictionaries by adding and adapting new cases so that they become a Case-Based Reasoning system. The structure of the case base is described, as well as the retrieval, reuse, revision and retention tasks, with emphasis on the learning process. Throughout the text several circuits are used to illustrate the test methods described, but in particular the biquadratic filter is used to validate the proposed methodologies, since it is one of the benchmarks proposed in the analogue circuit context. The faults considered are parametric, permanent, independent and single, although the methodology can easily be extrapolated to the diagnosis of multiple and catastrophic faults. The method focuses on testing the passive components, although it could also be extended to faults in the active ones.
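The step from a fault dictionary to a Case-Based Reasoning loop can be pictured with a small retrieval-and-retention sketch. The measurement signatures, component labels and distance metric below are hypothetical placeholders, not the biquadratic-filter test data used in the thesis.

# Minimal sketch of a fault dictionary extended with CBR-style retrieval and
# retention. Each case stores a measurement signature (e.g. gains at a few test
# frequencies) and a diagnosis; names, values and thresholds are hypothetical.
import numpy as np

class CaseBase:
    def __init__(self):
        self.signatures = []    # measurement vectors
        self.diagnoses = []     # e.g. ("R2", "+20%"): faulty component, deviation

    def retrieve(self, signature):
        """Return the diagnosis of the nearest stored case and its distance."""
        dists = [np.linalg.norm(signature - s) for s in self.signatures]
        i = int(np.argmin(dists))
        return self.diagnoses[i], dists[i]

    def retain(self, signature, diagnosis):
        """Learn a new, revised case, growing the dictionary into a case base."""
        self.signatures.append(np.asarray(signature, dtype=float))
        self.diagnoses.append(diagnosis)

# Seed the base with fault-dictionary entries, then diagnose a new measurement.
cb = CaseBase()
cb.retain([1.00, 0.50, 0.10], ("nominal", "0%"))
cb.retain([1.20, 0.55, 0.08], ("R2", "+20%"))
cb.retain([0.80, 0.45, 0.12], ("C1", "-20%"))

diagnosis, dist = cb.retrieve(np.array([1.18, 0.54, 0.09]))
print(diagnosis, round(dist, 3))          # expected to match the R2 deviation case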
Abstract:
The Portuguese education and training system has been changing significantly, owing to globalisation and the demands of the knowledge society. Indeed, the challenges of a more dynamic and competitive knowledge-based economy require the definition of new education policies. The objectives of increasing equity and educational opportunity for all students and of combating school dropout and failure led to the implementation, in Portugal, of measures that involve young people in training programmes, such as the education and training courses (CEF). It is within this new and complex educational context that we raise the relevant question: how can the pedagogical teams of the education and training courses respond adequately to the demands of working in this kind of diversified training pathway? The constant constraints that teachers face led us to conclude that they have to work on their own capacity for change in order to respond to all these growing demands; in this sense, change becomes an extraordinary opportunity for professional development. This capacity building, or reculturing (Fullan, 2007), is the result of various adaptations and decisions taken by teachers collaboratively as professional learning communities (CAP). In fact, in these communities teachers are morally and intellectually committed to the improvement, innovation and sustainability of education; they are therefore not only a means of improving student outcomes and increasing student learning, but also the most effective process for engaging teachers in continuous professional development deeply linked to practice. Consequently, in order to transform the pedagogical teams into professional learning communities, we present a training project to be implemented through a study circle in the school context, which aims to ensure the development and updating of the knowledge and skills of CEF teachers and to improve the quality and effectiveness of learning and teaching practice. Expectations regarding the results of this training are quite high and are grounded in the receptiveness and availability shown by all CEF teachers to take part in this training project.
Abstract:
This paper describes a case study of an electronic data management system developed in-house by the Facilities Management Directorate (FMD) of an educational institution in the UK. The FMD Maintenance and Business Services department is responsible for the maintenance of the built estate owned by the university. The department needs a clear definition of the type of work undertaken and of the administration that enables any maintenance work to be carried out. These include the management of resources, budget, cash flow and the workflow of reactive, preventative and planned maintenance of the campus. In order to support the business process more efficiently, the FMD decided to move from a paper-based information system to an electronic system, WREN. Some of the main advantages of WREN are that it is tailor-made to fit the purpose of the users, it is cost effective when modifications to the system are needed, and the database can also be used as a knowledge management tool. There is a trade-off: as WREN is tailored to the specific requirements of the FMD, it may not be easy to implement within a different institution without extensive modifications. However, WREN is successful not only in allowing the FMD to carry out the tasks of maintaining and looking after the built estate of the university, but also in achieving its aim to minimise costs and maximise efficiency.
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the System Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is well justified. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed; these are necessary because, due to dispersion issues, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
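As context for the statement that the optical properties of an arbitrary medium can be expressed as a linear combination of simple oscillator responses, a generic frequency-domain form is sketched below; the combination of Lorentzian, Debye and Drude terms and their symbols are a standard textbook parametrisation, not one taken from the paper.

% Generic complex dielectric function written as a linear combination of
% Lorentzian, Debye and Drude contributions (standard form; the parametrisation
% used in the paper may differ):
\[
  \varepsilon(\omega) \;=\; \varepsilon_\infty
  \;+\; \sum_{k} \frac{\Delta\varepsilon_k\,\omega_{0,k}^{2}}
        {\omega_{0,k}^{2} - \omega^{2} - i\gamma_k\omega}
  \;+\; \sum_{m} \frac{\Delta\varepsilon_m}{1 - i\omega\tau_m}
  \;-\; \frac{\omega_p^{2}}{\omega^{2} + i\omega/\tau_c}
\]

Here the first term is the high-frequency permittivity, the first sum collects Lorentzian oscillators with resonance frequencies and dampings, the second sum collects Debye relaxations with their relaxation times, and the last term is a Drude contribution with a plasma frequency and scattering time.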
Abstract:
Research shows that poor indoor air quality (IAQ) in school buildings can cause a reduction in students’ performance as assessed by short-term computer-based tests, whereas good air quality in classrooms can enhance children's concentration and teachers’ productivity. Investigation of air quality in classrooms helps us to characterise pollutant levels and implement corrective measures. Outdoor pollution, ventilation equipment, furnishings, and human activities affect IAQ. In school classrooms, the occupancy density is high (1.8–2.4 m2/person) compared to offices (10 m2/person). Ventilation systems expend energy and there is a trend to save energy by reducing ventilation rates, so we need to establish the minimum acceptable level of fresh air required for the health of the occupants. This paper describes a project that aims to investigate the effect of IAQ and ventilation rates on pupils’ performance and health using psychological tests, in order to recommend suitable ventilation rates for classrooms and examine the suitability of the air quality guidelines for classrooms. Air quality, ventilation rates and pupils’ performance in classrooms will be evaluated in parallel measurements. In addition, Visual Analogue Scales will be used to assess subjective perception of the classroom environment and SBS symptoms. Pupil performance will be measured with Computerised Assessment Tests (CAT) and Pen and Paper Performance Tasks, while physical parameters of the classroom environment will be recorded using an advanced data logging system. A total of 20 primary schools in the Reading area are expected to participate in the investigation, with pupils in the 9–11 year age group. On completion of the project, recommendations for suitable ventilation rates for schools will be formulated based on the overall data.
Abstract:
Skill and risk taking are argued to be independent and to require different remedial programs. However, it is possible to contend that skill-based training could be associated with an increase, a decrease, or no change in risk taking behavior. In 3 experiments, the authors examined the influence of a skill-based training program (hazard perception) on the risk taking behavior of car drivers (using video-based driving simulations). Experiment 1 demonstrated a decrease in risk taking for novice drivers. In Experiment 2, the authors examined the possibilities that the skills training might operate through either a nonspecific reduction in risk taking or a specific improvement in hazard perception. Evidence supported the latter. These findings were replicated in a more ecological context in Experiment 3, which compared advanced and nonadvanced police drivers.
Abstract:
Two experiments implement and evaluate a training scheme for learning to apply frequency formats to probability judgements couched in terms of percentages. Results indicate that both conditional and cumulative probability judgements can be improved in this manner; however, the scheme is insufficient to promote any deeper understanding of the problem structure. In both experiments, training on one problem type only (either conditional or cumulative risk judgements) resulted in an inappropriate transfer of a learned method at test. The obstacles facing a frequency-based training programme for teaching appropriate use of probability data are discussed.
Abstract:
The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including being object oriented and platform independent, as well as having built-in network support and threads, have encouraged this view. Today, Java is being used in almost every type of computer-based system, ranging from sensor networks to high performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems, namely MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry, are then discussed. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.
Abstract:
Finding an estimate of the channel impulse response (CIR) by correlating a received known (training) sequence with the sent training sequence is commonplace. Where required, it is also common to truncate the longer correlation to a sub-set of correlation coefficients by finding the set of N sequential correlation coefficients with the maximum power. This paper presents a new approach to selecting the optimal set of N CIR coefficients from the correlation rather than relying on power. The algorithm reconstructs a set of predicted symbols using the training sequence and various sub-sets of the correlation, and finds the sub-set that results in the minimum mean squared error between the actual received symbols and the reconstructed symbols. The algorithm is applied in the context of the TDMA-based GSM/GPRS system, and the results presented in the paper demonstrate an improvement in system performance. The approach, however, lends itself to any training-sequence-based communication system, such as those often found within wireless consumer electronic devices.
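A minimal sketch of the selection idea described above follows: every window of N sequential correlation taps is used to reconstruct the received training portion, and the window giving the minimum mean squared reconstruction error is retained instead of the maximum-power window. The sequence length, N and the channel taps are illustrative assumptions, not GSM/GPRS burst parameters.

# Minimal sketch of selecting the N-tap CIR window by minimum reconstruction
# error instead of maximum power. Sequence length, N and the channel taps are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
train = rng.choice([-1.0, 1.0], size=64)           # known training sequence
h_true = np.array([0.1, 0.8, 0.5, 0.2, 0.05])      # unknown channel (simulation only)
rx = np.convolve(train, h_true)[:train.size] + 0.05 * rng.normal(size=train.size)

# Correlation-based channel estimate: one coefficient per lag 0..len(train)-1.
corr = np.correlate(rx, train, mode="full")[train.size - 1:] / train.size

def best_window(corr, train, rx, N):
    """Return (mse, start, taps) of the N-tap window minimising reconstruction MSE."""
    best = (np.inf, 0, None)
    for start in range(len(corr) - N + 1):
        taps = np.zeros_like(corr)
        taps[start:start + N] = corr[start:start + N]   # keep only this window
        rec = np.convolve(train, taps)[:rx.size]        # reconstructed symbols
        mse = np.mean(np.abs(rx - rec) ** 2)
        if mse < best[0]:
            best = (mse, start, corr[start:start + N])
    return best

mse, start, taps = best_window(corr, train, rx, N=5)
print("selected window start:", start, "reconstruction MSE:", round(mse, 4))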
Abstract:
In this paper a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters in the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated using singular value decomposition (SVD). Numerical examples, including a model-based controller, are used to demonstrate the efficacy of the proposed approach. The controller consists of the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole assignment controller.
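A simplified sketch of two-stage Hammerstein identification in this spirit is given below. It substitutes a fixed polynomial basis for the NURB network (so the PSO stage that tunes the shaping parameters is omitted) and uses an SVD rank-one factorisation of an over-parameterised least-squares estimate to separate the linear dynamics from the static nonlinearity; it illustrates the general technique, not the algorithm proposed in the paper.

# Simplified two-stage Hammerstein identification sketch: an over-parameterised
# least-squares estimate followed by an SVD rank-one factorisation to separate
# the linear dynamics from the static nonlinearity. A fixed polynomial basis is
# used instead of a NURB network, so the PSO shaping stage is omitted.
import numpy as np

rng = np.random.default_rng(2)
N = 2000
u = rng.uniform(-1.0, 1.0, N)

# "True" system used only to generate data: static nonlinearity then FIR dynamics.
def f(x):
    return x + 0.5 * x**2 - 0.3 * x**3

g_true = np.array([1.0, 0.6, 0.2])                  # linear FIR part
y = np.convolve(f(u), g_true)[:N] + 0.01 * rng.normal(size=N)

# Over-parameterised regression: y(t) ~ sum_i sum_j Theta[i, j] * u(t-i)**(j+1)
nb, deg = 3, 3
Phi = np.column_stack([np.roll(u, i) ** (j + 1)
                       for i in range(nb) for j in range(deg)])
theta, *_ = np.linalg.lstsq(Phi[nb:], y[nb:], rcond=None)   # drop wrapped samples

# Stage 2: Theta is (close to) rank one, Theta[i, j] = g[i] * c[j]; split by SVD.
Theta = theta.reshape(nb, deg)
U, s, Vt = np.linalg.svd(Theta)
g_est = U[:, 0] * np.sqrt(s[0])                     # linear dynamics (up to scaling)
c_est = Vt[0] * np.sqrt(s[0])                       # nonlinearity coefficients
print("estimated FIR part (normalised):", np.round(g_est / g_est[0], 3))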
Abstract:
The clinical skills of medical professionals rely strongly on the sense of touch, combined with anatomical and diagnostic knowledge. Haptic exploratory procedures allow the expert to detect anomalies via gross and fine palpation, squeezing, and contour following. Haptic feedback is also key to medical interventions, for example when an anaesthetist inserts an epidural needle, a surgeon makes an incision, a dental surgeon drills into a carious lesion, or a veterinarian sutures a wound. Yet, current trends in medical technology and training methods involve less haptic feedback to clinicians and trainees. For example, minimally invasive surgery removes the direct contact between the patient and clinician that gives rise to natural haptic feedback, and furthermore introduces scaling and rotational transforms that confuse the relationship between movements of the hand and the surgical site. Similarly, it is thought that computer-based medical simulation and training systems require high-resolution and realistic haptic feedback to the trainee for significant training transfer to occur. The science and technology of haptics thus has great potential to affect the performance of medical procedures and learning of clinical skills. This special section is about understanding
Abstract:
In any wide-area distributed system there is a need to communicate and interact with a range of networked devices and services, ranging from computer-based ones (CPU, memory and disk) to network components (hubs, routers, gateways) and specialised data sources (embedded devices, sensors, data-feeds). In order for the ensemble of underlying technologies to provide an environment suitable for virtual organisations to flourish, the resources that comprise the fabric of the Grid must be monitored in a seamless manner that abstracts away from the underlying complexity. Furthermore, as various competing Grid middleware offerings are released and evolve, an independent overarching monitoring service should act as a cornerstone that ties these systems together. GridRM is a standards-based approach that is independent of any given middleware and that can utilise legacy and emerging resource-monitoring technologies. The main objective of the project is to produce a standardised and extensible architecture that provides seamless mechanisms to interact with native monitoring agents across heterogeneous resources.
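The driver/adapter idea behind such a middleware-independent monitoring layer can be pictured with a short sketch: heterogeneous native agents are hidden behind one uniform query interface and located through a registry. The class and method names below are hypothetical and do not reproduce GridRM's actual API.

# Minimal sketch of a uniform monitoring interface over heterogeneous native
# agents, located through a registry. Names are illustrative assumptions only.
from abc import ABC, abstractmethod
from typing import Dict

class MonitoringDriver(ABC):
    """Uniform view over a native monitoring agent (SNMP, Ganglia, /proc, ...)."""

    @abstractmethod
    def query(self, resource: str) -> Dict[str, float]:
        """Return current metrics for a named resource."""

class ProcDriver(MonitoringDriver):
    """Example driver reading local CPU load averages from /proc (Linux only)."""

    def query(self, resource: str) -> Dict[str, float]:
        with open("/proc/loadavg") as fh:
            one, five, fifteen = fh.read().split()[:3]
        return {"load1": float(one), "load5": float(five), "load15": float(fifteen)}

class Registry:
    """Maps resource names to whichever driver knows how to monitor them."""

    def __init__(self):
        self._drivers: Dict[str, MonitoringDriver] = {}

    def register(self, resource: str, driver: MonitoringDriver) -> None:
        self._drivers[resource] = driver

    def query(self, resource: str) -> Dict[str, float]:
        return self._drivers[resource].query(resource)

registry = Registry()
registry.register("node01.cpu", ProcDriver())
# print(registry.query("node01.cpu"))   # e.g. {'load1': 0.42, ...} on Linux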
Abstract:
The use of antibiotics in birds and animals intended for human consumption within the European Union (EU) and elsewhere has been subject to regulation prohibiting the use of antimicrobials as growth promoters and the use of last resort antibiotics in an attempt to reduce the spread of multi-resistant Gram negative bacteria. Given the inexorable spread of antibiotic resistance there is an increasing need for improved monitoring of our food. Using selective media, Gram negative bacteria were isolated from retail chicken of UK-Intensively reared (n = 27), Irish-Intensively reared (n = 19) and UK-Free range (n = 30) origin and subjected to an oligonucleotide based array system for the detection of 47 clinically relevant antibiotic resistance genes (ARGs) and two integrase genes. High incidences of β-lactamase genes were noted in all sample types, acc (67%), cmy (80%), fox (55%) and tem (40%) while chloramphenicol resistant determinants were detected in bacteria from the UK poultry portions and were absent in bacteria from the Irish samples. Denaturing Gradient Gel Electrophoresis (DGGE) was used to qualitatively analyse the Gram negative population in the samples and showed the expected diversity based on band stabbing and DNA sequencing. The array system proved to be a quick method for the detection of antibiotic resistance gene (ARG) burden within a mixed Gram negative bacterial population.