877 results for Engineering, Electronics and Electrical|Artificial Intelligence
Abstract:
Current research on sleep using experimental animals is limited by the expense and time-consuming nature of traditional EEG/EMG recordings. We present here an alternative, noninvasive approach utilizing piezoelectric films configured as highly sensitive motion detectors. These film strips attached to the floor of the rodent cage produce an electrical output in direct proportion to the distortion of the material. During sleep, movement associated with breathing is the predominant gross body movement and, thus, the output from the piezoelectric transducer provides an accurate respiratory trace during sleep. During wake, respiratory movements are masked by other motor activities. An automatic pattern recognition system was developed to identify periods of sleep and wake using the piezoelectrically generated signal. Due to the complex and highly variable waveforms that result from subtle postural adjustments in the animals, traditional signal analysis techniques were not sufficient for accurate classification of sleep versus wake. Therefore, a novel pattern recognition algorithm was developed that successfully distinguished sleep from wake in approximately 95% of all epochs. This algorithm may have general utility for a variety of signals in biomedical and engineering applications. This automated system for monitoring sleep is noninvasive, inexpensive, and may be useful for large-scale sleep studies, including genetic approaches towards understanding sleep and sleep disorders, and the rapid screening of the efficacy of sleep- or wake-promoting drugs.
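As a rough illustration of epoch-based sleep/wake scoring from a motion signal, the sketch below labels fixed-length epochs by their gross-movement energy. The feature, epoch length and threshold are hypothetical choices for illustration only, not the authors' published algorithm.

```python
import numpy as np

def classify_epochs(signal, fs, epoch_s=8.0, power_thresh=1.0):
    """Label each epoch 'sleep' or 'wake' from a piezo motion signal.

    Sleep epochs are assumed to be dominated by low-amplitude, quasi-periodic
    breathing movements, wake epochs by larger, irregular movements.
    Both the feature (epoch signal power) and the threshold are illustrative.
    """
    n = int(epoch_s * fs)                 # samples per epoch
    labels = []
    for start in range(0, len(signal) - n + 1, n):
        epoch = signal[start:start + n]
        power = np.mean((epoch - epoch.mean()) ** 2)   # gross-movement energy
        labels.append("wake" if power > power_thresh else "sleep")
    return labels
```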
Abstract:
This proposed thesis is entitled "Plasma Polymerised Organic Thin Films: A Study on the Structural, Electrical, and Nonlinear Optical Properties for Possible Applications". Polymers and polymer-based materials find enormous application in the realm of electronics and optoelectronics. They are employed as both active and passive components in making various devices. Intensive research has been going on in this area for the last three decades or so, and many useful contributions have been made quite accidentally. Conducting polymers are one such discovery, and ever since the discovery of conducting polyacetylene, a new branch of science has emerged in the form of synthetic metals. Conducting polymers are useful materials for many applications such as polymer displays, high-density data storage, polymer FETs, polymer LEDs, photovoltaic devices and electrochemical cells. With the emergence of molecular electronics and its potential for useful applications, organic thin films are receiving unusual attention from scientists and engineers alike. This is evident from the vast literature pertaining to this field appearing in various journals. Recently, computer-aided design of organic molecules has added further impetus to the ongoing research activities in this area. Polymers, especially conducting polymers, can be prepared both in bulk and in thin-film form. However, many applications require that they be grown as thin films, either free standing or on appropriate substrates. Their bulk counterparts can be prepared by various polymerisation techniques such as chemical routes and electrochemical means. A survey of the literature reveals that polymers like polyaniline, polypyrrole and polythiophene have been investigated with a view to studying their structural, electrical and optical properties. Among the various alternative techniques employed for the preparation of polymer thin films, the method of plasma polymerisation deserves special attention in this context. Plasma polymerisation is an inexpensive method and often requires very little infrastructure. The method can employ ac, rf, dc, microwave and pulsed sources, which produce pinhole-free, homogeneous films on appropriate substrates under controlled conditions. In a conventional plasma polymerisation setup, the monomer is fed into an evacuated chamber and an ac/rf/dc/microwave/pulsed discharge is created, which dissociates the monomer species and leads to the formation of polymer thin films. However, it has been found that the structure, and hence the properties, exhibited by plasma polymerised thin films are quite different from those of their counterparts produced by other thin-film preparation techniques such as electrochemical deposition or spin coating. The properties of these thin films can be tuned only if the interrelationship between the structure and the other properties is understood from a fundamental point of view. Very often, therefore, a thorough evaluation of the various properties is a prerequisite for tailoring the properties of the thin films for applications. It has been found that conjugation is a necessary condition for enhancing the conductivity of polymer thin films. The rf technique of plasma polymerisation is an excellent tool for inducing conjugation, and this modifies the electrical properties too. Both oxidative and reductive doping can be employed to modify the electrical properties of polymer thin films for various applications.
This is where organic thin films based on polymers score over inorganic thin films: large-area devices can be fabricated with organic semiconductors, which is difficult to achieve with inorganic materials. For such applications, a variety of polymers have been synthesised, such as polyaniline, polythiophene and polypyrrole, and newer polymers are added to this family every now and then. There are many virgin areas into which plasma polymers are yet to make a foray, namely low-k dielectrics or potential nonlinear optical materials such as optical limiters. There are also many materials which have not yet been prepared by the method of plasma polymerisation; among those not yet dealt with are phenyl hydrazine and tea tree oil. The advantage of employing organic extracts like tea tree oil monomers as precursors for making plasma polymers is that value can be added to their existing uses, with the possibility of converting them into electronic-grade materials, especially semiconductors and optically active materials for photonic applications. One of the major motivations of this study is to synthesise plasma polymer thin films based on aniline, phenyl hydrazine, pyrrole, tea tree oil and eucalyptus oil by employing both rf and ac plasma polymerisation techniques. This will be carried out with the objective of growing thin films on various substrates such as glass, quartz and indium tin oxide (ITO) coated glass. Various properties, namely the structural, electrical, dielectric permittivity and nonlinear optical properties, are to be evaluated to establish the relationship between the structure and the other properties. Special emphasis will be laid on evaluating optical parameters such as the refractive index (n), the extinction coefficient (k), the real and imaginary components of the dielectric constant and the optical transition energies of the polymer thin films from spectroscopic ellipsometric studies. Apart from evaluating these physical constants, ellipsometric investigations also make it possible to predict whether a material exhibits nonlinear optical properties. Further studies using the open-aperture z-scan technique to evaluate the nonlinear optical properties of a few selected samples, which are potential nonlinear optical materials, form another objective of the present study. A further endeavour will be to offer an appropriate explanation for the nonlinear optical properties displayed by these films. Doping of plasma polymers is found to modify both the electrical conductivity and the optical properties. Iodine, in particular, is found to modify the properties of polymer thin films. However, in situ iodine doping is tricky, and the film often loses its stability because of the escape of iodine. An appropriate in situ technique will be developed to dope iodine into the plasma polymerised thin films. Doping polymer thin films with iodine results in improved and modified optical and electrical properties; however, tools like FTIR and UV-Vis-NIR spectroscopy are required to elucidate the structural and optical modifications imparted to the polymer films. This will be attempted here to establish the role of iodine in the modification of the properties exhibited by the films.
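For reference, the real and imaginary parts of the complex dielectric constant mentioned above follow directly from the ellipsometrically determined refractive index n and extinction coefficient k through the standard relation:

```latex
\varepsilon = (n + \mathrm{i}k)^{2}
            = \underbrace{(n^{2}-k^{2})}_{\varepsilon_{1}}
              \;+\; \mathrm{i}\,\underbrace{(2nk)}_{\varepsilon_{2}}
```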
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further rise in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer protection against intrusion attempts into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volume of incoming network data, continuously builds network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied in depth and substantially extended. In this dissertation, different approaches are proposed and discussed: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growth topology is increased through novel approaches for initialising the weight vectors and for strengthening the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously updated. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, owing to the concept drift phenomenon, network traffic data change constantly, which in real time leads to the generation of non-stationary network data. This phenomenon is better controlled by the update model. The EGHSOM model detects new anomalies effectively, and the NNB model adapts optimally to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
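A minimal sketch of the classification-confidence margin idea described above, assuming a set of already-trained SOM prototype vectors with class labels. The margin value and the hand-off to the normal-network-behaviour (NNB) model are illustrative, not the EGHSOM implementation from the dissertation.

```python
import numpy as np

def classify_connection(x, prototypes, labels, margin=0.2):
    """Classify one connection vector against trained SOM prototypes.

    prototypes: (m, d) array of unit weight vectors; labels: per-unit class
    ('normal' or an attack class). If the confidence margin between the best
    and second-best matching units is too small, the connection is handed to
    a separate normal-behaviour model for further inspection ('unknown').
    The margin value is illustrative only.
    """
    d = np.linalg.norm(prototypes - x, axis=1)       # distance to every unit
    best, second = np.argsort(d)[:2]
    confidence = (d[second] - d[best]) / (d[second] + 1e-12)
    if confidence < margin:
        return "unknown"       # low confidence -> pass to the NNB model
    return labels[best]
```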
Abstract:
The system described herein represents the first example of a recommender system in digital ecosystems where agents negotiate services on behalf of small companies. The small companies compete not only on price or quality, but on a wider service-by-service composition achieved by subcontracting with other companies. The final result of these offerings depends on negotiations at the scale of millions of small companies. This scale requires new platforms for supporting digital business ecosystems, as well as related services like open-id, trust management, monitors and recommenders. This is done in the Open Negotiation Environment (ONE), an open-source platform that allows agents, on behalf of small companies, to negotiate and use the ecosystem services, and that enables the development of new agent technologies. The methods and tools of cyber engineering are necessary to build Open Negotiation Environments that are stable, a basic condition for predictable and reliable business environments. Aiming to build stable digital business ecosystems by means of improved collective intelligence, we introduce a model of negotiation style dynamics from the point of view of computational ecology. This model inspires an ecosystem monitor as well as a novel negotiation style recommender. The ecosystem monitor provides hints to the negotiation style recommender to achieve greater stability of an open negotiation environment in a digital business ecosystem. The greater stability provides the small companies with higher predictability, and therefore better business results. The negotiation style recommender is implemented with a simulated annealing algorithm at a constant temperature, and its impact is shown by applying it to a real case of an open negotiation environment populated by Italian companies.
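A minimal sketch of a constant-temperature simulated-annealing recommender, assuming a hypothetical stability_score callback of the kind an ecosystem monitor might provide; it is not the ONE platform's implementation. Holding the temperature constant keeps a fixed level of exploration instead of letting the search freeze into a single style.

```python
import math, random

def recommend_style(styles, stability_score, temperature=1.0, steps=200):
    """Constant-temperature annealing over candidate negotiation styles.

    stability_score(style) -> higher is better (hypothetical callback).
    Worse candidates are still accepted with probability exp(delta / T),
    so the recommender keeps exploring at a constant rate.
    """
    current = random.choice(styles)
    best = current
    for _ in range(steps):
        candidate = random.choice(styles)
        delta = stability_score(candidate) - stability_score(current)
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = candidate
            if stability_score(current) > stability_score(best):
                best = current
    return best
```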
Abstract:
In this paper, we employ techniques from artificial intelligence, such as reinforcement learning and agent-based modeling, as building blocks of a computational model for an economy based on conventions. First we model the interaction among firms in the private sector. These firms behave in an information environment based on conventions, meaning that a firm is likely to behave as its neighbors do if it observes that their actions lead to a good payoff. On the other hand, we propose the use of reinforcement learning as a computational model for the role of the government in the economy, as the agent that determines fiscal policy and whose objective is to maximize the growth of the economy. We present the implementation of a simulator of the proposed model based on SWARM, which employs the SARSA(λ) algorithm combined with a multilayer perceptron as the function approximator for the action-value function.
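A compact sketch of SARSA(λ) with function approximation. A linear approximator is used here only to keep the example short, whereas the paper combines SARSA(λ) with a multilayer perceptron; the minimal environment API (reset/step) and the feature function are assumptions.

```python
import numpy as np

def sarsa_lambda(env, features, n_actions, episodes=500,
                 alpha=0.01, gamma=0.99, lam=0.9, eps=0.1):
    """SARSA(lambda) with linear function approximation.

    features(state, action) -> 1-D feature vector; env follows a minimal API
    with reset() -> state and step(action) -> (state, reward, done).
    """
    w = np.zeros_like(np.asarray(features(env.reset(), 0), dtype=float))

    def q(s, a):
        return float(np.dot(w, features(s, a)))

    def policy(s):
        if np.random.rand() < eps:                        # epsilon-greedy
            return np.random.randint(n_actions)
        return int(np.argmax([q(s, a) for a in range(n_actions)]))

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        z = np.zeros_like(w)                              # eligibility traces
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = policy(s2) if not done else None
            target = r + (gamma * q(s2, a2) if not done else 0.0)
            delta = target - q(s, a)                      # TD error
            z = gamma * lam * z + np.asarray(features(s, a), dtype=float)
            w += alpha * delta * z                        # in-place weight update
            s, a = s2, a2
    return w
```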
Abstract:
The activated sludge process - the main biological technology usually applied in wastewater treatment plants (WWTP) - directly depends on living organisms (microorganisms), and therefore on the unforeseen changes they produce. Good plant operation can be achieved if the supervisory control system is able to react to changes and deviations in the system and can take the necessary actions to restore the system's performance. These decisions are often based both on physical, chemical and microbiological principles (suitable to be modelled by conventional control algorithms) and on some knowledge (suitable to be modelled by knowledge-based systems). One of the key problems in knowledge-based control system design, however, is the development of an architecture able to manage the different elements of the process efficiently (integrated architecture), to learn from previous cases (specific experimental knowledge) and to acquire the domain knowledge (general expert knowledge). These problems increase when the process belongs to an ill-structured domain and is composed of several complex operational units. Therefore, an integrated and distributed AI architecture seems to be a good choice. This paper proposes an integrated and distributed supervisory multi-level architecture for the supervision of WWTP that overcomes some of the main troubles of classical control techniques and of knowledge-based systems applied to real-world systems.
Abstract:
The main contribution of this thesis is the proposal of a graded BDI agent model (g-BDI) that allows the specification of an agent architecture capable of representing and reasoning with graded mental attitudes. We consider that a more flexible BDI architecture will make it possible to develop agents that achieve better performance in uncertain and dynamic environments, in the service of other agents (human or not) that may have a set of graded motivations. In the g-BDI model, the agent's graded attitudes have an explicit and adequate representation. Degrees of belief represent the extent to which the agent believes a formula to be true; degrees in positive or negative desires allow the agent to establish, respectively, different levels of preference or rejection. Graduations in intentions also give a measure of preference, but in this case they model the cost/benefit that achieving a goal brings to the agent. Then, from the representation and interaction of these graded attitudes, agents that exhibit different types of behaviour can be modelled. The formalization of the g-BDI model is based on multi-context systems. Different many-valued modal logics have been proposed to represent and reason about beliefs, desires and intentions, in each case presenting a complete and consistent axiomatics. To deal with the operational semantics of the agent model, a calculus for the execution of multi-context systems, called Multi-context calculus, was first defined. Then, by means of this calculus, the g-BDI model was given a computational semantics. Furthermore, a methodology for the engineering of g-BDI agents in a multi-agent scenario has been presented. The aim of this proposal is to guide the design of multi-agent systems starting from a real-world problem. Through the development of a tourism recommender system as a case study, in which the recommender agent has a g-BDI architecture, it has been shown that this model is valuable for designing and implementing concrete agents. Finally, using this case study, experiments were carried out on the flexibility and performance of the g-BDI agent model, demonstrating that it is useful for developing agents that exhibit diverse behaviours. It has also been shown that the results obtained with these recommender agents modelled with graded attitudes are better than those achieved by agents with non-graded attitudes.
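A toy reading of the graded cost/benefit intention selection described above; the fields and scoring rule are hypothetical simplifications for illustration, not the multi-context logical formalization of the thesis.

```python
def select_intention(options):
    """Pick the intention with the best graded trade-off.

    `options` is a list of dicts with hypothetical fields:
      belief - degree to which the agent believes the plan achieves the goal
      desire - positive desire degree for the goal (0..1)
      cost   - normalized cost of executing the plan (0..1)
    The scoring rule below is a toy cost/benefit reading of the g-BDI idea.
    """
    score = lambda o: o["belief"] * o["desire"] - o["cost"]
    return max(options, key=score)

# Example: two candidate packages for a tourism recommender agent
options = [
    {"name": "beach", "belief": 0.8, "desire": 0.9, "cost": 0.3},
    {"name": "hiking", "belief": 0.6, "desire": 0.7, "cost": 0.1},
]
print(select_intention(options)["name"])   # -> "beach" (score 0.42 vs 0.32)
```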
Abstract:
Password Authentication Protocol (PAP) is widely used in the Wireless Fidelity Point-to-Point Protocol to authenticate a peer's identity and password. This paper uses a new knowledge-based framework to verify the PAP protocol and a fixed version of it. Flaws are found in both the original and the fixed versions. A new, enhanced protocol is provided and its security is proved. The whole process is implemented in a mechanical reasoning platform, Isabelle. It takes only a few seconds to find the flaws in the original and the fixed protocols and to verify that the enhanced version of the PAP protocol is secure.
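For context, a minimal model of the PAP exchange itself: the peer sends its identity and password in the clear and the authenticator checks them against its table. The names and credential table are illustrative, and the paper's actual verification is carried out in Isabelle rather than in executable code.

```python
# Minimal model of a PAP-style exchange: credentials travel unencrypted.
CREDENTIALS = {"peer-1": "s3cret"}   # hypothetical authenticator table

def authenticate_request(peer_id, password):
    """Return 'Authenticate-Ack' or 'Authenticate-Nak' for one request."""
    ok = CREDENTIALS.get(peer_id) == password
    return "Authenticate-Ack" if ok else "Authenticate-Nak"

print(authenticate_request("peer-1", "s3cret"))   # -> Authenticate-Ack
```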
Abstract:
Knowledge-elicitation is a common technique used to produce rules about the operation of a plant from the knowledge that is available from human expertise. Similarly, data-mining is becoming a popular technique to extract rules from the data available from the operation of a plant. In the work reported here, knowledge was required to enable the supervisory control of an aluminium hot strip mill through the determination of mill set-points. A method was developed to fuse knowledge-elicitation and data-mining so as to incorporate the best aspects of each technique, whilst avoiding known problems. Utilisation of the knowledge was through an expert system, which determined schedules of set-points and provided information to human operators. The results show that the method proposed in this paper was effective in producing rules for the on-line control of a complex industrial process. (C) 2005 Elsevier Ltd. All rights reserved.
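A minimal sketch of how an expert system might map strip parameters to mill set-points. The rules, keys and numerical values below are purely illustrative placeholders, not the elicited or mined rules of the paper.

```python
def mill_setpoints(strip):
    """Return set-points for one strip from simple if-then rules.

    `strip` is a dict with hypothetical keys 'alloy' and 'gauge_mm'.
    In the paper the rule base is built by fusing knowledge elicitation
    with rules mined from plant data; these rules are placeholders.
    """
    setpoints = {"roll_force_kN": 9000, "exit_temp_C": 330}   # default schedule
    if strip["alloy"] == "hard":
        setpoints["roll_force_kN"] += 1500    # elicited-style rule: harder alloy
    if strip["gauge_mm"] < 3.0:
        setpoints["exit_temp_C"] -= 20        # mined-style rule: thin gauge
    return setpoints

print(mill_setpoints({"alloy": "hard", "gauge_mm": 2.5}))
```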
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as the error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. Because of this, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing these aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods in the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
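As one concrete example of a graphical evaluation method of the kind such surveys cover (whether this particular method appears in the paper is an assumption), the sketch below computes a ROC curve with scikit-learn from toy classifier scores.

```python
from sklearn.metrics import roc_curve, auc

# Toy scores from a binary classifier (illustrative values only)
y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.5]

fpr, tpr, thresholds = roc_curve(y_true, y_score)    # operating points
print("operating points (FPR, TPR):", list(zip(fpr.round(2), tpr.round(2))))
print("area under the curve:", auc(fpr, tpr).round(2))
```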
Abstract:
Species' potential distribution modelling consists of building a representation of the fundamental ecological requirements of a species from the biotic and abiotic conditions where the species is known to occur. Such models can be valuable tools for understanding the biogeography of species and for supporting the prediction of their presence/absence under a particular environmental scenario. This paper investigates the use of different supervised machine learning techniques to model the potential distribution of 35 plant species from Latin America. Each technique was able to extract a different representation of the relations between the environmental conditions and the distribution profile of the species. The experimental results highlight the good performance of random trees classifiers, indicating this particular technique as a promising candidate for modelling species' potential distribution. (C) 2010 Elsevier Ltd. All rights reserved.
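A minimal sketch of the modelling step, using scikit-learn's RandomForestClassifier as a stand-in for the "random trees" classifier highlighted in the paper; the environmental features and presence/absence data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: rows are locations, columns are environmental
# layers (e.g. temperature, precipitation, elevation); y is 1 where the
# species was recorded as present and 0 for (pseudo-)absence points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
suitability = model.predict_proba(X)[:, 1]   # habitat-suitability score per location
print(suitability[:5])
```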
Abstract:
Credit scoring modelling comprises one of the leading formal tools for supporting the granting of credit. Its core objective consists of the generation of a score by means of which potential clients can be listed in order of their probability of default. A critical factor is whether a credit scoring model is accurate enough to provide correct classification of the client as a good or bad payer. In this context the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve the classification accuracy. In this paper we propose a new bagging-type variant procedure, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is driven by credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real credit-granting dataset, up to three successions of resamplings. We observed better classification accuracy for the two-bagged and the three-bagged models for all considered setups. These results give a strong indication that the poly-bagging approach may improve the modelling performance measures, while keeping a flexible and straightforward bagging-type structure that is easy to implement. (C) 2011 Elsevier Ltd. All rights reserved.
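One plausible reading of "combining predictors over a succession of resamplings" is to nest bagging ensembles, so that the two-bagged model bags already-bagged predictors. The sketch below follows that reading under stated assumptions and is not taken from the paper's formal definition.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def poly_bagging(level, n_estimators=10, random_state=0):
    """Build a `level`-bagged classifier by nesting bagging ensembles.

    level=1 is ordinary bagging of decision trees; level=2 bags the bagged
    ensembles again ('two-bagged'), and so on. This nesting is an
    interpretation, not the paper's exact procedure.
    Requires scikit-learn >= 1.2 for the `estimator` parameter name.
    """
    model = DecisionTreeClassifier(random_state=random_state)
    for _ in range(level):
        model = BaggingClassifier(estimator=model,
                                  n_estimators=n_estimators,
                                  random_state=random_state)
    return model

two_bagged = poly_bagging(level=2)   # fit with two_bagged.fit(X, y)
```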
Abstract:
A novel mathematical framework inspired by Morse Theory for the topological characterization of triangles in 2D meshes is introduced, useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced as a result of triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while keeping its topological integrity. An application is described in the implementation of an innovative approach for the detection of 2D objects from images that integrates the topological control enabled by geometric modeling with traditional image processing techniques. (C) 2008 Published by Elsevier B.V.