795 results for Slot-based task-splitting algorithms
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually first converted to categorical types and then classified using information gain concepts. Information gain is a very popular and useful concept which tells you whether any benefit occurs, as far as information content is concerned, after splitting on a given attribute. But this process is computationally intensive for large data sets. Moreover, popular decision tree algorithms like ID3 cannot handle numeric data sets. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point for attributes in completely numerical data sets. The new algorithm has been shown to be competitive with its information-gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA test in statistics. The specific advantages of this proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and that it avoids the time-consuming conversion of huge numeric data sets to categorical data. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information gain computations. It also blends two closely related fields, statistics and data mining.
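A minimal sketch of the proposed split criterion, assuming the score is within-partition variance reduction with the attribute mean as the split point (names and details are illustrative, not the paper's code):

```python
import numpy as np

def variance_split_score(X, attr):
    """Split a numeric attribute at its mean and score the split by the
    drop in within-partition variance (standing in for information gain).
    Illustrative only -- the paper's exact criterion may differ."""
    col = X[:, attr]
    threshold = col.mean()                       # statistical mean as split point
    left, right = col[col <= threshold], col[col > threshold]
    if len(left) == 0 or len(right) == 0:        # degenerate split
        return 0.0, threshold
    weighted = (len(left) * left.var() + len(right) * right.var()) / len(col)
    return col.var() - weighted, threshold       # variance reduction

def best_attribute(X):
    """Pick the attribute whose mean-split removes the most variance."""
    return int(np.argmax([variance_split_score(X, a)[0] for a in range(X.shape[1])]))
```

Because only means and variances are accumulated, no sorting, discretization, or entropy computation over the numeric attributes is required.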
Abstract:
Fingerprint-based authentication systems are a cost-effective biometric authentication technique employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength, and their local orientation vector is formulated with respect to the base line of the finger. Feature-level fusion is carried out and a 32-element feature template is obtained. A matching score is formulated for identification, and 100% accuracy was obtained for a database of 300 persons. The polygonal feature vector helps to reduce the size of the feature database from the present 70-100 minutiae features to just 32 features, and a lower matching threshold can be fixed compared to single-finger-based identification.
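A hypothetical sketch of how such template matching could work; the 32-element template and the idea of a matching threshold are taken from the abstract, while the distance-based score and all names are assumptions:

```python
import numpy as np

def match_score(template: np.ndarray, probe: np.ndarray) -> float:
    """Similarity between two 32-element feature templates.
    A hypothetical distance-based score -- the thesis defines its own
    formulation from singularity orientation vectors."""
    assert template.shape == probe.shape == (32,)
    return 1.0 / (1.0 + np.linalg.norm(template - probe))

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching identity whose score exceeds the threshold.
    'gallery' maps person IDs to enrolled 32-element templates."""
    best_id, best = None, threshold
    for person_id, template in gallery.items():
        s = match_score(template, probe)
        if s > best:
            best_id, best = person_id, s
    return best_id
```

The fixed decision threshold then trades false accepts against false rejects across the gallery.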
Abstract:
The main objective of this thesis is to design and develop spectral-signature-based chipless RFID tags. Multiresonators are an essential component of spectral-signature-based chipless tags. Enhancing the data coding capacity of such tags requires a large number of resonances in a limited bandwidth, so the frequencies of the resonators have to be close to each other. To achieve this, the quality factor of each resonance needs to be high. The thesis discusses various types of multiresonators, their practical implementation, and how they can be used in tag design. Encoding data into the spectral domain is another challenge in chipless tag design. Here, a presence-or-absence encoding technique is used: the presence of a resonance encodes logic 1 and the absence of a specific resonance encodes logic 0. Different types of multiresonators, such as open stub multiresonators, coupled bunch hairpin resonators, and shorted slot ground ring resonators, are proposed in this thesis.
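A toy illustration of the presence-or-absence encoding; the resonance frequencies, spacing, and tolerance below are invented values, not the thesis's designs:

```python
def decode_spectral_signature(resonance_freqs, detected_dips, tol=0.05e9):
    """Decode a chipless tag ID from its frequency response.
    Presence of a resonance dip encodes logic 1, absence logic 0."""
    bits = []
    for f in resonance_freqs:               # designed resonator slots
        present = any(abs(f - d) < tol for d in detected_dips)
        bits.append(1 if present else 0)
    return bits

# A hypothetical 6-bit tag with resonators spaced 0.5 GHz apart:
slots = [2.0e9 + i * 0.5e9 for i in range(6)]
print(decode_spectral_signature(slots, detected_dips=[2.0e9, 3.5e9, 4.5e9]))
# -> [1, 0, 0, 1, 0, 1]
```

Closely spaced, high-Q resonances matter here precisely because each additional distinguishable dip within the band adds one bit of capacity.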
Abstract:
From the early twentieth century, polyaniline (PANI), a well-known and extensively studied conducting polymer, has captured the attention of the scientific community owing to its interesting electrical and optical properties. From its structural properties to the currently pursued optical, electrical and electrochemical properties, extensive investigations on pure PANI and its composites remain highly relevant for exploring its potential to the maximum extent. The synthesis of highly crystalline PANI films with ordered structure and high electrical conductivity has not yet been pursued in depth. Recently, nanostructured PANI and its nanocomposites have attracted a great deal of research attention owing to possible applications in optical switching devices, optoelectronics and energy storage devices. The work presented in this thesis centers on the realization of highly conducting and structurally ordered PANI and its composites for applications mainly in the areas of nonlinear optics and electrochemical energy storage. Out of the vast variety of application fields of PANI, these two areas are specifically selected for the present studies because of the following observation: the non-linear optical properties and the energy-storing properties of PANI depend quite sensitively on the extent of conjugation of the polymer structure, the type and concentration of the dopants added, and the type and size of the nanoparticles selected for making the nanocomposites. The first phase of the work is devoted to the synthesis of highly ordered and conducting films of PANI doped with various dopants and their structural, morphological and electrical characterization, followed by the synthesis of metal-nanoparticle-incorporated PANI samples and detailed optical characterization in the linear and nonlinear regimes. The second phase comprises investigations into the prospects of PANI for realizing polymer-based rechargeable lithium-ion cells, exploiting the inherent structural flexibility of polymer systems and their environmental safety and stability. Secondary battery systems have become an inevitable part of daily life. They can be found in most portable electronic gadgets, and recently they have started powering automobiles, although the power generated is still low. The efficient storage of electrical energy generated from solar cells is also achieved using suitable secondary battery systems. The development of rechargeable battery systems having excellent charge storage capacity, cyclability, environmental friendliness and flexibility has yet to be realized in practice. Rechargeable Li-ion cells employing cathode active materials like LiCoO2, LiMn2O4 and LiFePO4 offer remarkable charge storage capacity with minimal charge leakage when not in use. However, material toxicity, the chance of cell explosion and the lack of an effective cell recycling mechanism pose significant risks which must be addressed seriously. These cells also lack flexibility in their design due to the structural characteristics of the electrode materials. Global research is directed towards identifying a new class of electrode materials with fewer risk factors and better structural stability and flexibility. Polymer-based electrode materials, with their inherent flexibility, stability and eco-friendliness, can be a suitable choice. One of the prime drawbacks of polymer-based cathode materials is their low electronic conductivity.
Hence the real task with this class of materials is to achieve better electronic conductivity together with good electrical storage capability. Electronic conductivity can be enhanced by using proper dopants. In designing rechargeable Li-ion cells with polymer-based cathode active materials, the key issue is to identify the optimum lithiation of the polymer cathode which ensures the highest possible electronic conductivity and specific charge capacity. The development of conducting-polymer-based rechargeable Li-ion cells with high specific capacity and excellent cycling characteristics is a highly competitive area among research and development groups worldwide. Polymer-based rechargeable batteries are specifically attractive due to their environmentally benign nature and the constructional flexibility they offer. Among polymers having electrical transport properties suitable for rechargeable battery applications, polyaniline is the most favoured one due to its tunable electrical conducting properties and the availability of cost-effective precursor materials for its synthesis. The performance of a battery depends significantly on the characteristics of its integral parts, the cathode, anode and electrolyte, which in turn depend on the materials used. Many research groups are involved in developing new electrode and electrolyte materials to enhance the overall performance efficiency of the battery. Currently explored electrolytes for Li-ion battery applications are in liquid or gel form, which makes well-defined sealing essential. The use of solid electrolytes eliminates the need for containment of liquid electrolytes, which certainly simplifies the cell design and improves safety and durability. Other advantages of polymer electrolytes include dimensional stability, safety and the ability to prevent lithium dendrite formation. One of the ultimate aims of the present work is to realize all-solid-state, flexible and environment-friendly Li-ion cells with high specific capacity and excellent cycling stability. Part of the present work is hence focused on identifying good polymer-based solid electrolytes essential for realizing all-solid-state polymer-based Li-ion cells. The present work is an attempt to study the versatile roles of polyaniline in two different fields of technological application, nonlinear optics and energy storage. Conducting, doped PANI films with a good extent of crystallinity have been realized using a level-surface-assisted casting method, in addition to the generally employed technique of spin coating. Metal-nanoparticle-embedded PANI offers a rich source for nonlinear optical studies, and hence gold and silver nanoparticles have been used for making the nanocomposites in bulk and thin-film forms. These PANI nanocomposites are found to exhibit quite dominant third-order optical non-linearity. The highlight of these studies is the observation of the interesting phenomenon of switching between saturable absorption (SA) and reverse saturable absorption (RSA) in the films of Ag/PANI and Au/PANI nanocomposites, which offers prospects for applications in optical switching. The investigations on the energy storage prospects of PANI were carried out on Li-enriched PANI, which was used as the cathode active material for assembling rechargeable Li-ion cells. For Li enrichment or Li doping of PANI, n-Butyllithium (n-BuLi) in hexanes was used. The Li doping as well as the Li-ion cell assembly were carried out in an argon-filled glove box.
Coin cells were assembled with Li-doped PANI with different doping concentrations as the cathode, LiPF6 as the electrolyte and Li metal as the anode. These coin cells are found to show a reasonably good specific capacity of around 22 mAh/g, excellent cycling stability and a coulombic efficiency around 99%. To improve the specific capacity, composites of Li-doped PANI with inorganic cathode active materials like LiFePO4 and LiMn2O4 were synthesized, and coin cells were assembled as mentioned earlier to assess their electrochemical capability. The cells assembled using the composite cathodes show a significant enhancement in specific capacity, to around 40 mAh/g. Another interesting observation is the complete blocking of the adverse effects of the Jahn-Teller distortion when the composite cathode PANI-LiMn2O4 is used for assembling the Li-ion cells. This distortion is generally observed near room temperature when LiMn2O4 is used as the cathode, and it significantly reduces the cycling stability of the cells.
Abstract:
This thesis first reviews the essential facts about skew polynomials, with a focus on shift and q-shift operators in characteristic zero. All concepts and algorithms necessary for arithmetic with these objects are given in the first chapter. Some of the data needed to determine solutions can be read off from the Newton polygon, a geometric figure associated with the operators. The derivation of these relationships is the topic of the second chapter; in particular for the q-shift case, this treatment is new in this form. The third chapter deals with the computation of polynomial and rational solutions of these operators, essentially following the presentation of Mark van Hoeij. The most interesting case for the factorization of (q-)shift operators is that of the so-called (q-)hypergeometric solutions, which correspond directly to first-order right factors. In the fourth chapter, the van Hoeij algorithm is carried over from the shift case to the q-shift case. In addition, a substantial improvement of the q-Petkovsek algorithm is derived with the help of the Newton polygon data. The fifth chapter is devoted to the computation of general factors; for this, the adjoint operator is first introduced, which allows the computation of left factors. Then an algorithm for computing right factors of arbitrary order is presented, although for practical use this is infeasible for higher orders. In almost all of the algorithms presented, solving linear systems over rational function fields appears as an intermediate step, and this is not handled satisfactorily in most computer algebra systems. For this reason, the last chapter presents an algorithm for this problem based on evaluation and interpolation, which clearly outperforms the standard algorithms in all systems tested. All algorithms in this thesis are implemented in a MuPAD package that accompanies the thesis and allows convenient handling of the objects involved. With this package, many problems can now be solved in MuPAD for which no functions existed before.
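For orientation, the standard operator relations behind this arithmetic (textbook notation, not taken verbatim from the thesis): the shift operator acts as $(\sigma f)(x) = f(x+1)$ and the q-shift as $(\sigma_q f)(x) = f(qx)$, so skew polynomials over $K(x)$ multiply by the non-commutative rules

$$\sigma \cdot x = (x+1) \cdot \sigma, \qquad \sigma_q \cdot x = qx \cdot \sigma_q.$$

A first-order right factor $\sigma - r(x)$ of an operator corresponds exactly to a hypergeometric solution with $f(x+1)/f(x) = r(x)$, which is why the (q-)hypergeometric case is the key to factorization.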
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
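A compact sketch of the evolutionary loop described above; all callables are assumed here, and in the thesis's setting the fitness function would run each candidate in randomized network simulations and score how closely the observed global behavior matches the specification:

```python
import random

def evolve(population, fitness, mutate, crossover, generations=100):
    """Minimal Genetic Programming loop (illustrative, not the thesis's code).
    'population' holds candidate distributed programs; assumes at least
    8 candidates so an elite of 2+ exists for crossover."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: len(scored) // 4]           # keep the best quarter
        children = []
        while len(children) < len(population) - len(elite):
            a, b = random.sample(elite, 2)           # select promising parents
            children.append(mutate(crossover(a, b))) # recombine and perturb
        population = elite + children
    return max(population, key=fitness)
```

The representation-specific parts (SGP, LGP, RBGP, ...) would plug in through the `mutate` and `crossover` operators, which is exactly where the six tested program representations differ.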
Abstract:
The scope of this work is the fundamental growth, tailoring and characterization of self-organized indium arsenide quantum dots (QDs) and their exploitation as the active region of diode lasers emitting in the 1.55 µm range. This wavelength regime is especially interesting for long-haul telecommunications, as optical fibers made from silica glass have their lowest optical absorption there. Molecular Beam Epitaxy (MBE) is utilized as the fabrication technique for the quantum dots and laser structures. The results presented in this thesis depict the first experimental work for which this reactor was used at the University of Kassel. Most research in the field of self-organized quantum dots has been conducted in the InAs/GaAs material system. It can be seen as the model system of self-organized quantum dots, but it is not suitable for the targeted emission wavelength: light emission from this system at 1.55 µm is hard to accomplish. To stay as close as possible to existing processing technology, the In(AlGa)As/InP (100) material system is deployed. Depending on the epitaxial growth technique and growth parameters, this system has the drawback of producing a wide range of nano species besides quantum dots. Best known are the elongated quantum dashes (QDashes). Such structures are preferentially formed if InAs is deposited on InP. This is related to the low lattice mismatch of 3.2 %, which is less than half of the value in the InAs/GaAs system. The task of creating round-shaped and uniform QDs is rendered more complex by exchange effects of arsenic and phosphorus as well as anisotropic effects on the surface that do not need to be dealt with in the InAs/GaAs case. While QDash structures have been studied fundamentally as well as in laser structures, they do not represent the theoretical ideal case of a zero-dimensional material. Creating round-shaped quantum dots on the InP(100) substrate remains a challenging task. Details of the self-organization process are still unknown and the formation of the QDs is not fully understood yet. In the course of the experimental work, a novel growth concept was discovered and analyzed that eases the fabrication of QDs. It is based on different crystal growth and ad-atom diffusion processes under supply of different modifications of the arsenic atmosphere in the MBE reactor. The reactor is equipped with special valved cracking effusion cells for arsenic and phosphorus. It represents an all-solid source configuration that does not rely on toxic gas supply. The cracking effusion cells are able to create different species of arsenic and phosphorus, which constitutes the basis of the growth concept. With this method, round-shaped QD ensembles with superior optical properties and record-low photoluminescence linewidth were achieved. By systematically varying the growth parameters and working out a detailed analysis of the experimental data, a range of parameter values for which the formation of QDs is favored was found. A qualitative explanation of the formation characteristics, based on the surface migration of In ad-atoms, is developed. Such tailored QDs are finally implemented as the active region in a self-designed diode laser structure. A basic characterization of the static and temperature-dependent properties was carried out. The QD lasers exceed a reference quantum well laser in terms of inversion conditions and temperature-dependent characteristics. Pulsed output powers of several hundred milliwatts were measured at room temperature.
In particular, the lasers feature a high modal gain that even allowed cw emission at room temperature from a processed ridge waveguide device as short as 340 µm, with output powers of 17 mW. Modulation experiments performed at the Israel Institute of Technology (Technion) showed a complex behavior of the QDs in the laser cavity. Despite the fact that the laser structure is not fully optimized for a high-speed device, data transmission capabilities of 15 Gb/s combined with low noise were achieved. To the best of the author's knowledge, this renders the lasers the fastest QD devices operating at 1.55 µm. The thesis starts with an introductory chapter that highlights the advantages of optical fiber communication in general. Chapter 2 introduces the fundamental knowledge that is necessary to understand the importance of the active region's dimensions for the performance of a diode laser. The novel growth concept and its experimental analysis are presented in chapter 3. Chapter 4 finally contains the work on diode lasers.
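As a quick check on the mismatch figures quoted above, using standard room-temperature lattice constants (which are not stated in the abstract itself):

$$f = \frac{a_{\mathrm{InAs}} - a_{\mathrm{sub}}}{a_{\mathrm{sub}}}, \qquad f_{\mathrm{InAs/InP}} = \frac{6.058 - 5.869}{5.869} \approx 3.2\,\%, \qquad f_{\mathrm{InAs/GaAs}} = \frac{6.058 - 5.653}{5.653} \approx 7.2\,\%,$$

consistent with the statement that the InAs/InP mismatch is less than half that of InAs/GaAs.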
Abstract:
The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses are largely inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those; conversely, ontologies suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of the two being regarded as competing paradigms, the obvious potential synergies of a combination motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Hereby, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
In order to complement the identification of suitable methods to capture semantic structures, we analyze as a next step several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then have a look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
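As an illustration of the kind of relatedness measure studied, here is one standard grounding, cosine similarity over tag co-occurrence vectors; the thesis evaluates several such measures, and all names below are illustrative:

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(posts):
    """Build tag co-occurrence vectors from (user, resource, tags) posts.
    Assumes tags are unique within a post; illustrative sketch only."""
    vecs = {}
    for _user, _resource, tags in posts:
        for t in tags:
            c = vecs.setdefault(t, Counter())
            c.update(u for u in tags if u != t)   # count co-assigned tags
    return vecs

def relatedness(vecs, t1, t2):
    """Cosine similarity between the co-occurrence vectors of two tags."""
    v1, v2 = vecs.get(t1, Counter()), vecs.get(t2, Counter())
    dot = sum(v1[k] * v2[k] for k in v1.keys() & v2.keys())
    norm = sqrt(sum(x * x for x in v1.values())) * sqrt(sum(x * x for x in v2.values()))
    return dot / norm if norm else 0.0
```

Measures of this family feed the downstream concept learning and hierarchy induction steps described above.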
Abstract:
This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots, e.g., by minimizing operation time, while ensuring human safety, e.g., by collision avoidance. To solve such problems, an online motion planning approach for robotic manipulators with HRI is proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies for the robotic manipulators considered in the thesis are performed directly in the workspace for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP). It is further effectively reformulated such that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by the human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions based on probability density functions in the form of single points, such as most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as regions possibly occupied by the human at future instances of time. The method is further enhanced by combining it with reachability analysis to increase the prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. This way one arrives at motion plans that are safe, i.e., plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the industrial robot is to drive its end-effector through a nominal sequence of gripping, motion and releasing operations while avoiding collision with the human arm.
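For orientation, the textbook big-M construction behind such mixed-integer workspace constraints (a standard formulation, not necessarily the thesis's exact reformulation, which further reduces the binaries): to keep the planned end-effector position $p$ outside an axis-aligned safety region $[x_{\min},x_{\max}]\times[y_{\min},y_{\max}]$, introduce binaries $b_i\in\{0,1\}$ and require

$$p_x \le x_{\min} + M b_1, \quad p_x \ge x_{\max} - M b_2, \quad p_y \le y_{\min} + M b_3, \quad p_y \ge y_{\max} - M b_4, \quad \sum_{i=1}^{4} b_i \le 3,$$

with $M$ sufficiently large. The cardinality constraint forces at least one face condition to be active (its binary set to 0), so $p$ lies outside the region at every planned time step; such disjunctions are what make the planning problem a MIP rather than a convex program.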
Abstract:
The increasing interconnection of information and communication systems leads to ever greater complexity and thus to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to detect unusual behavior and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of the normal network state (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is investigated intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced so that the model can be updated continuously. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, network traffic data change constantly due to the concept drift phenomenon, which produces non-stationary network data in real time. This phenomenon is handled by the update model. The EGHSOM model can detect new anomalies effectively, and the NNB model adapts optimally to the changes in the network data. In the experimental investigations the framework showed promising results. In the first experiment, the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (e.g., overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
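A minimal sketch of the classification-confidence idea, as one interpretation of the margin threshold described above (variable names and the exact rule are assumptions, not the dissertation's code):

```python
import numpy as np

def classify_connection(vec, units, labels, margin):
    """Classify a connection vector against trained SOM units.
    If the distance to the best-matching unit exceeds the
    classification-confidence margin threshold, the connection is
    flagged as unknown and handed to the normal-state (NNB) model
    for further checking. Illustrative sketch only."""
    dists = np.linalg.norm(units - vec, axis=1)   # distance to each unit
    bmu = int(np.argmin(dists))                   # best-matching unit
    if dists[bmu] > margin:
        return "unknown"                          # forwarded to the NNB model
    return labels[bmu]                            # e.g. "normal" or an attack class
```

Under concept drift, the update model would periodically retrain `units` and recalibrate `margin` so the rule tracks the changing traffic distribution.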
Abstract:
Free-word order languages have long posed significant problems for standard parsing algorithms. This thesis presents an implemented parser, based on Government-Binding (GB) theory, for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The words in a sentence of a free-word order language may swap about relatively freely with little effect on meaning: the permutations of a sentence mean essentially the same thing. It is assumed that this similarity in meaning is directly reflected in the syntax. The parser presented here properly processes free word order because it assigns the same syntactic structure to the permutations of a single sentence. The parser also handles fixed word order, as well as other phenomena. On the view presented here, there is no such thing as a "configurational" or "non-configurational" language. Rather, there is a spectrum of languages that are more or less ordered. The operation of this parsing system is quite different in character from that of more traditional rule-based parsing systems, e.g., context-free parsers. In this system, parsing is carried out via the construction of two different structures, one encoding precedence information and one encoding hierarchical information. This bipartite representation is the key to handling both free- and fixed-order phenomena. This thesis first presents an overview of the portion of Warlpiri that can be parsed. Following this is a description of the linguistic theory on which the parser is based. The chapter after that describes the representations and algorithms of the parser. In conclusion, the parser is compared to related work. The appendix contains a substantial list of test cases, both grammatical and ungrammatical, that the parser has actually processed.
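A minimal sketch of what such a bipartite representation might look like as a data structure (names and fields are illustrative, not the thesis's actual implementation): permutations of a sentence would share the same hierarchical structure while differing only in precedence.

```python
from dataclasses import dataclass, field

@dataclass
class BipartiteParse:
    """Two coupled structures, as described above: surface precedence
    and hierarchical (head-dependent) information. Illustrative only."""
    precedence: list[str] = field(default_factory=list)            # words in surface order
    hierarchy: dict[str, list[str]] = field(default_factory=dict)  # head -> dependents

# Two word orders of the same (toy) sentence share one hierarchy:
h = {"sees": ["man", "dog"]}
p1 = BipartiteParse(["man", "sees", "dog"], h)
p2 = BipartiteParse(["dog", "man", "sees"], h)
assert p1.hierarchy == p2.hierarchy   # same syntactic structure, free order
```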
Abstract:
In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
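For orientation, one rigid-alignment step of the ICP flavor that the hybrid tracker builds on, i.e., a standard nearest-neighbor pairing followed by a Kabsch least-squares fit (not the authors' code; brute-force neighbor search for clarity):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: pair each source point (N,3) with its nearest
    destination point (M,3), then solve for the least-squares rotation R
    and translation t mapping src toward dst (Kabsch algorithm)."""
    idx = np.argmin(((src[:, None] - dst[None]) ** 2).sum(-1), axis=1)
    pairs = dst[idx]                                  # matched points
    mu_s, mu_d = src.mean(0), pairs.mean(0)
    H = (src - mu_s).T @ (pairs - mu_d)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In the hybrid tracker described above, constraints of this kind are combined with the normal flow constraint, which is what gives the method its precision for small motions and robustness for large ones.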
Abstract:
This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features, such as degrees of openness and smile, in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches, this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
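A minimal stand-in for the supervised classification stages described above, using an SVM on synthetic feature vectors; the data here are random placeholders for Haar wavelet coefficients, and only the pipeline shape mirrors the paper:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class data standing in for wavelet-coefficient vectors
# of face vs. non-face patches (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 64)), rng.normal(2, 1, (100, 64))])
y = np.array([0] * 100 + [1] * 100)          # 0 = non-face, 1 = face

clf = SVC(kernel="rbf", C=1.0).fit(X[::2], y[::2])   # train on half the data
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```

The mouth-parameter stage would swap the classifier for a regressor over the same kind of sparse wavelet-coefficient features.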
Abstract:
A difficulty in the design of automated text summarization algorithms is objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank, through model selection, different summarization methods. This summary evaluation technique allows for broader comparison of summarization methods than the traditional techniques of summary evaluation. We present an empirical study of two simple, albeit widely used, summarization methods that shows the different usages of this automated task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
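One way to read the task-based idea as code (an interpretative sketch, not the paper's protocol): a summarization method scores well if a downstream classifier trained on its summaries still recovers document categories.

```python
def task_based_score(summarizer, documents, labels, train_classifier):
    """Score a summarization method by downstream classification accuracy
    on its summaries. All callables are assumed: 'summarizer' maps a
    document to a summary; 'train_classifier' returns a predict function.
    Illustrative sketch only."""
    summaries = [summarizer(d) for d in documents]
    half = len(summaries) // 2
    clf = train_classifier(summaries[:half], labels[:half])   # fit on one half
    correct = sum(clf(s) == y for s, y in zip(summaries[half:], labels[half:]))
    return correct / (len(summaries) - half)                  # held-out accuracy
```

Ranking methods by such scores at several summary lengths operationalizes the length-versus-information tradeoff without human judges.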
Abstract:
A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object-centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity of constructing and using object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.