813 results for feature based modelling


Relevance:

30.00%

Publisher:

Abstract:

Agriculture plays a central role in the Earth system. It contributes to the greenhouse effect through emissions of CO2, CH4 and N2O, can cause soil degradation and eutrophication, can alter regional water cycles, and will itself be strongly affected by climate change. Since all of these processes are closely linked through the underlying nutrient and water fluxes, they should be treated within a consistent modelling framework. Until recently, however, a lack of data and insufficient process understanding have prevented this at the global scale. This thesis presents the first version of such a consistent global modelling approach, with the emphasis on simulating agricultural yields and the resulting N2O emissions. This focus was chosen because a correct representation of crop growth is an essential prerequisite for simulating all other processes. Moreover, current and potential agricultural yields are important driving forces of land-use change and will be strongly affected by climate change. The second focus is the estimation of agricultural N2O emissions, since no process-based N2O model had previously been applied at the global scale. The existing agro-ecosystem model Daycent was chosen as the basis for the global modelling. In addition to setting up the simulation environment, the required global data sets for soil parameters, climate and agricultural management were first compiled. Because no global data set of planting dates is available so far, and because planting dates will shift with climate change, a routine for computing planting dates was developed. Its results agree well with the FAO crop calendars that are available for some crops and countries. The Daycent model was then parameterised and calibrated for yield simulations of wheat, rice, maize, soybean, millet, pulses, potato, cassava and cotton. The simulation results show that Daycent correctly captures the most important climate, soil and management effects on yield formation. Simulated country averages agree well with FAO data (R2 = 0.66 for wheat, rice and maize; R2 = 0.32 for soybean), and spatial yield patterns largely correspond to the observed distribution of crops and to subnational statistics. The modelling of agricultural N2O emissions with Daycent was preceded by a statistical analysis of N2O and NO emission measurements from natural and agricultural ecosystems. The parameters identified as significant for N2O (fertiliser amount, soil carbon content, soil pH, texture, crop type, fertiliser type) and for NO (fertiliser amount, soil nitrogen content, climate) largely agree with the results of an earlier analysis. For emissions from soils under natural vegetation, for which no such statistical analysis existed before, soil carbon content, soil pH, bulk density, drainage and vegetation type have a significant influence on N2O emissions, while NO emissions depend significantly on soil carbon content and vegetation type. Based on the statistical models derived from these results, global emissions from arable soils amount to 3.3 Tg N/yr for N2O and 1.4 Tg N/yr for NO.
Such statistical models are useful for deriving estimates and uncertainty ranges of N2O and NO emissions from a large number of measurements. The dynamics of soil nitrogen, however, in particular as influenced by crop growth, climate change and land-use change, can only be captured by applying process-oriented models. For the modelling of N2O emissions with Daycent, its trace gas module was first extended by a more detailed calculation of nitrification and denitrification and by accounting for freeze-thaw emissions. This revised model version was then tested against N2O emission measurements under different climates and crops. Both the dynamics and the totals of the N2O emissions are reproduced satisfactorily, with model efficiencies for monthly means between 0.1 and 0.66 for most sites. Based on the revised model version, N2O emissions were computed for the previously parameterised crops. Emission rates and crop-specific differences largely agree with values reported in the literature. Fertiliser-induced emissions, which the IPCC currently estimates at 1.25 +/- 1% of the applied fertiliser amount, range from 0.77% (rice) to 2.76% (maize). The sum of the computed emissions from agricultural soils for the mid-1990s is 2.1 Tg N2O-N/yr, which is consistent with estimates from other studies.
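As a simple illustration of how a fertiliser-induced emission factor of the kind quoted above is derived from paired plot measurements (a minimal sketch; the function and the example numbers are hypothetical and are not data from the thesis):

```python
def fertilizer_induced_ef(n2o_fertilized, n2o_unfertilized, n_applied):
    """Fertiliser-induced N2O emission factor in percent of applied N.

    n2o_fertilized   -- cumulative N2O-N flux from the fertilised plot (kg N/ha)
    n2o_unfertilized -- cumulative N2O-N flux from the unfertilised control (kg N/ha)
    n_applied        -- fertiliser nitrogen applied (kg N/ha)
    """
    return 100.0 * (n2o_fertilized - n2o_unfertilized) / n_applied

# Hypothetical example: 2.1 kg N2O-N/ha with fertiliser, 0.7 without, 100 kg N applied
print(fertilizer_induced_ef(2.1, 0.7, 100.0))  # -> 1.4 (% of applied N)
```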

Relevance:

30.00%

Publisher:

Abstract:

In today's semiconductor and MEMS technologies, photolithography is the workhorse for the fabrication of functional devices. The conventional way of microstructuring (the so-called top-down approach) starts with photolithography, followed by patterning the structures using etching, especially dry etching. The requirements for smaller and hence faster devices lead to a decrease of the feature size into the range of several nanometers. However, the production of devices in this size range needs photolithography equipment which must overcome the diffraction limit. Therefore, new photolithography techniques have been developed recently, but they are rather expensive and restricted to planar surfaces. Recently a new route has been presented, the so-called bottom-up approach, in which functional devices are built up starting from single atoms or molecules. This has created a new field, nanotechnology, which deals with structures with dimensions of 1-100 nm and which, through its integral part, self-assembly, has the potential to replace conventional photolithography. However, this technique requires additional, specialised equipment and is therefore not yet widely applicable. This work presents a general scheme for the fabrication of silicon and silicon dioxide structures with lateral dimensions of less than 100 nm that avoids high-resolution photolithography processes. For the self-aligned formation of extremely small openings in silicon dioxide layers at surface structures sharpened in depth, the angle-dependent etching rate of silicon dioxide under plasma etching with a fluorocarbon gas (CHF3) was exploited. Subsequent anisotropic plasma etching of the silicon substrate material through the perforated silicon dioxide masking layer results in high-aspect-ratio trenches of approximately the same lateral dimensions. The latter can be reduced and precisely adjusted between 0 and 200 nm by thermal oxidation of the silicon structures, owing to the volume expansion of silicon during oxidation. On this basis, a technology for the fabrication of SNOM calibration standards is presented. In addition, trenches formed in this way were used as templates for CVD deposition of diamond, resulting in high-aspect-ratio diamond knives. A lithography-free method for the production of periodic and non-periodic surface structures using the angular dependence of the etching rate is also presented. It combines the self-assembly of masking particles with the conventional plasma etching techniques known from microelectromechanical system technology. The method is generally applicable to bulk as well as layered materials. In this work, layers of glass spheres of different diameters were assembled on the sample surface, forming a mask against plasma etching. Silicon surface structures with a periodicity of 500 nm and feature dimensions of 20 nm were produced in this way. Thermal oxidation of the structured silicon substrate makes it possible to vary the fill factor of the periodic structure, owing to the volume expansion during oxidation, and also to define silicon dioxide surface structures by selective plasma etching. Similar structures can be obtained simply by structuring silicon dioxide layers on silicon. The method offers a simple route for bridging nano- and microtechnology and, moreover, an uncomplicated way of fabricating photonic crystals.
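As a rough back-of-the-envelope illustration of why thermal oxidation narrows such trenches, the sketch below uses the commonly quoted rule of thumb that roughly 44% of a grown oxide film's thickness consists of consumed silicon; the numbers are purely illustrative, not measurements from this work:

```python
SI_CONSUMED_FRACTION = 0.44  # ~44% of the oxide thickness is made from consumed silicon

def trench_width_after_oxidation(initial_width_nm, oxide_thickness_nm):
    """Estimate the remaining trench width after growing a thermal oxide.

    Each sidewall surface moves into the trench by the part of the oxide that
    grows outward from the original silicon surface, i.e. (1 - 0.44) * t_ox.
    """
    outward_growth_per_wall = (1.0 - SI_CONSUMED_FRACTION) * oxide_thickness_nm
    return max(initial_width_nm - 2.0 * outward_growth_per_wall, 0.0)

# Illustrative numbers: a 200 nm wide trench narrows to ~88 nm after growing 100 nm of oxide
print(trench_width_after_oxidation(200.0, 100.0))
```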

Relevance:

30.00%

Publisher:

Abstract:

In recent years, progress in the area of mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks offer ever increasing bit rates, mobile network operators provide more and more services, and at the same time costs for the usage of mobile services and bit rates are decreasing. However, mobile services today still lack functions that seamlessly integrate into users' everyday life. That is, service attributes such as context-awareness and personalisation are often either proprietary, limited or not available at all. In order to overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G for the provisioning of innovative mobile services. These service platforms are to support such service attributes by providing basic, service-independent functions such as billing, identity management, context management, user profile management, etc. Instead of developing their own solutions, developers of end-user services such as innovative messaging services or location-based services can utilise the platform-side functions for their own purposes. In doing so, the platform-side support for such functions takes complexity, development time and development costs away from service developers. Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments. The combination of context-awareness and personalisation features can also be described as situation-dependent personalisation of services. The support for this feature requires several processing steps. The focus of this doctoral thesis is on the processing step in which the user's current context is matched against situation-dependent user preferences in order to find the preferences that apply to the current situation. To achieve this, a user profile management system and corresponding functionality are required; these parts are also covered by this thesis. Altogether, this thesis provides the following contributions. The first part of the contribution is mainly architecture-oriented. First and foremost, we provide a user profile management system that addresses the specific requirements of service platforms in telecommunications environments. In particular, the user profile management system has to deal with situation-specific user preferences and with user information for various services. In order to structure the user information, we also propose a user profile structure and the corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding matching situation-dependent user preferences for the personalisation of services. This functionality is provided as a sub-module of the user profile management system. Contrary to existing solutions, our selection mechanism is based on ontology reasoning. This mechanism is evaluated in terms of runtime performance and in terms of supported functionality compared to other approaches. The results of the evaluation show the benefits and the drawbacks of ontology modelling and ontology reasoning in practical applications.
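To make the preference-selection step concrete, here is a minimal, purely illustrative sketch of matching a user's current context against situation-dependent preferences; it uses plain dictionary matching rather than the ontology reasoning developed in the thesis, and all names are hypothetical:

```python
def matches(situation, context):
    """A situation matches if every condition it specifies holds in the current context."""
    return all(context.get(key) == value for key, value in situation.items())

def select_preferences(profile, context):
    """Return the preference sets whose situation description matches the current context."""
    return [entry["preferences"] for entry in profile if matches(entry["situation"], context)]

# Hypothetical user profile: preference sets bound to situation descriptions
profile = [
    {"situation": {"location": "office", "activity": "meeting"},
     "preferences": {"ringtone": "silent", "messaging": "defer"}},
    {"situation": {"location": "home"},
     "preferences": {"ringtone": "loud", "messaging": "deliver"}},
]
current_context = {"location": "office", "activity": "meeting", "time": "10:00"}
print(select_preferences(profile, current_context))  # -> the "silent"/"defer" preference set
```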

Relevance:

30.00%

Publisher:

Abstract:

The present thesis describes the synthesis of 1,1'-ferrocenediyl-based pyridylphosphine ligands, the exploration of their fundamental coordination chemistry, and preliminary experiments with selected complexes aimed at potential applications. One main aspect is the synthesis of the bidentate ferrocene-based pyridylphosphine ligands 1-(Pyrid-2-yl)-1'-diphenylphosphinoferrocene, 1-(Pyrid-3-yl)-1'-diphenylphosphinoferrocene and 1-[(Pyrid-2-yl)methyl]-1'-diphenylphosphinoferrocene. A specific feature of these ligands is the ball-bearing-like flexibility of the ferrocene-based backbone. An additional element of flexibility is the rotation around the C–C single bonds. Consequently, the donor atoms can adopt a wide range of positions with respect to each other and are therefore able to adapt to the coordination requirements of different metal centres. The flexibility of the ligands also plays a role in another key aspect of this work, which concerns the coordination mode, i.e. bridging vs. chelating. In addition to the flexibility, the relative position of the donor atoms is important. This is largely determined by the position of the pyridyl nitrogen (pyrid-2-yl vs. pyrid-3-yl) and by the methylene group in 1-[(Pyrid-2-yl)methyl]-1'-diphenylphosphinoferrocene. Another interesting point is the combination of a soft phosphorus donor atom with a harder nitrogen donor atom, in the sense of the HSAB principle. This combination generates a unique binding profile, since the pi-acceptor character of the P site is able to stabilise a metal centre in a low oxidation state, while the nitrogen sigma-donor ability can make the metal more susceptible to oxidative addition reactions. A P,N-donor combination can afford hemilabile binding profiles, which would be ideal for catalysis. In contrast to 1,2-substituted ferrocene derivatives, which are quite successful in catalytic applications, 1,1'-derivatives are rather underrepresented. While a low-yield synthetic pathway to 1-(Pyrid-2-yl)-1'-diphenylphosphinoferrocene had already been described in the literature [I. R. Butler, Organometallics 1992, 11, 74], it was possible to find a new, improved and simplified synthetic pathway. Both other ligands were unknown prior to this work. Satisfactory results in the synthesis of 1-(Pyrid-3-yl)-1'-diphenylphosphinoferrocene could be achieved by working in analogy to the new synthetic procedure for 1-(Pyrid-2-yl)-1'-diphenylphosphinoferrocene. The synthesis of 1-[(Pyrid-2-yl)methyl]-1'-diphenylphosphinoferrocene was handled by the group of Prof. Petr Stepnicka at Charles University, Prague, Czech Republic. The synthesis of tridentate ligands with an analogous heterodentate arrangement was investigated briefly as a sideline of this study. The major part of this thesis deals with the fundamental coordination chemistry towards transition metals of groups 10, 11 and 12. Due to the well-established catalytic properties of analogous palladium complexes, the coordination chemistry towards palladium (group 10) is of particular interest. The metals zinc and cadmium (group 12) are also of substantial importance because they are redox-inert in their divalent state. This is relevant in view of electrochemical investigations concerning the utilisation of the ligands as molecular redox sensors. Mercury and the monovalent metals silver and gold (group 11) are also included because of their rich coordination chemistry.
A recurring question throughout is the ligands' coordination mode, which is addressed bearing the HSAB principle in mind.

Relevance:

30.00%

Publisher:

Abstract:

Enterprise Modeling (EM) is currently used either as a technique to represent and understand the structure and behavior of the enterprise, or as a technique to analyze business processes, and in many cases as a supporting technique for business process reengineering. However, EM architectures and methods for Enterprise Engineering can also be used to support new management techniques like SIX SIGMA, because these new techniques need a clear, transparent and integrated definition and description of the business activities of the enterprise in order to build up, optimize and operate a successful enterprise. The main goal of SIX SIGMA is to optimize the performance of processes. A still open question is: "What are the adequate Quality criteria and methods to ensure such performance? What must we do to achieve Quality governance?" This paper describes a method that combines an Enterprise Engineering method with the SIX SIGMA strategy to reach Quality Governance.

Relevance:

30.00%

Publisher:

Abstract:

The scope of this work is the fundamental growth, tailoring and characterization of self-organized indium arsenide quantum dots (QDs) and their exploitation as the active region of diode lasers emitting in the 1.55 µm range. This wavelength regime is especially interesting for long-haul telecommunications, as optical fibers made from silica glass have their lowest optical absorption there. Molecular beam epitaxy is utilized as the fabrication technique for the quantum dots and laser structures. The results presented in this thesis represent the first experimental work for which this reactor was used at the University of Kassel. Most research in the field of self-organized quantum dots has been conducted in the InAs/GaAs material system. It can be seen as the model system for self-organized quantum dots, but it is not suitable for the targeted emission wavelength: light emission from this system at 1.55 µm is hard to accomplish. To stay as close as possible to existing processing technology, the In(AlGa)As/InP (100) material system is employed. Depending on the epitaxial growth technique and growth parameters, this system has the drawback of producing a wide range of nano species besides quantum dots. Best known are the elongated quantum dashes (QDash). Such structures form preferentially if InAs is deposited on InP. This is related to the low lattice mismatch of 3.2%, which is less than half of the value in the InAs/GaAs system. The task of creating round-shaped and uniform QDs is further complicated by exchange effects between arsenic and phosphorus as well as by anisotropic surface effects that do not have to be dealt with in the InAs/GaAs case. While QDash structures have been studied fundamentally as well as in laser structures, they do not represent the theoretical ideal case of a zero-dimensional material. Creating round-shaped quantum dots on the InP(100) substrate remains a challenging task. Details of the self-organization process are still unknown and the formation of the QDs is not yet fully understood. In the course of the experimental work a novel growth concept was discovered and analyzed that eases the fabrication of QDs. It is based on different crystal growth and ad-atom diffusion processes under supply of different modifications of the arsenic atmosphere in the MBE reactor. The reactor is equipped with special valved cracking effusion cells for arsenic and phosphorus. It represents an all-solid-source configuration that does not rely on toxic gas supply. The cracking effusion cells are able to create different species of arsenic and phosphorus, which constitutes the basis of the growth concept. With this method, round-shaped QD ensembles with superior optical properties and record-low photoluminescence linewidth were achieved. By systematically varying the growth parameters and carrying out a detailed analysis of the experimental data, a range of parameter values for which the formation of QDs is favored was identified. A qualitative explanation of the formation characteristics, based on the surface migration of In ad-atoms, is developed. Such tailored QDs are finally implemented as the active region in a self-designed diode laser structure. A basic characterization of the static and temperature-dependent properties was carried out. The QD lasers outperform a reference quantum well laser in terms of inversion conditions and temperature-dependent characteristics. Pulsed output powers of several hundred milliwatts were measured at room temperature.
In particular, the lasers feature a high modal gain that even allowed cw emission at room temperature from a processed ridge waveguide device as short as 340 µm, with output powers of 17 mW. Modulation experiments performed at the Israel Institute of Technology (Technion) showed a complex behavior of the QDs in the laser cavity. Although the laser structure is not fully optimized as a high-speed device, data transmission at 15 Gb/s combined with low noise was achieved. To the best of the author's knowledge, this makes these lasers the fastest QD devices operating at 1.55 µm. The thesis starts with an introductory chapter that outlines the advantages of optical fiber communication in general. Chapter 2 introduces the fundamentals necessary to understand the importance of the dimensions of the active region for the performance of a diode laser. The novel growth concept and its experimental analysis are presented in Chapter 3. Chapter 4 finally contains the work on diode lasers.
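For reference, the lattice mismatch figures quoted above follow directly from the bulk lattice constants; the short script below is an illustrative check using commonly tabulated room-temperature values:

```python
# Room-temperature lattice constants in angstroms (commonly tabulated literature values)
A_INAS = 6.0583
A_INP = 5.8687
A_GAAS = 5.6533

def mismatch(a_epilayer, a_substrate):
    """Lattice mismatch f = (a_epi - a_sub) / a_sub."""
    return (a_epilayer - a_substrate) / a_substrate

print(f"InAs on InP:  {mismatch(A_INAS, A_INP):.1%}")   # ~3.2%
print(f"InAs on GaAs: {mismatch(A_INAS, A_GAAS):.1%}")  # ~7.2%, more than twice as large
```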

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the relationship between higher education and the requirements of the world of work, with an emphasis on the effect of problem-based learning (PBL) on graduates' competencies. The implementation of the full PBL method is costly (Albanese & Mitchell, 1993; Berkson, 1993; Finucane, Shannon, & McGrath, 2009). However, the implementation of PBL in a less than curriculum-wide mode is more achievable in a broader context (Albanese, 2000). This means that higher education institutions implement only a few PBL components in the curriculum, or that a teacher implements a few PBL components at the course level. For this kind of implementation there is a need to identify PBL components and their effects on particular educational outputs (Hmelo-Silver, 2004; Newman, 2003). So far, however, there has been little research on this topic. The main aims of this study were: (1) to identify individual PBL components, which was manifested in the development of a valid and reliable PBL implementation questionnaire, and (2) to determine the effect of each identified PBL component on specific graduates' competencies. The analysis was based on quantitative data collected in a survey of medicine graduates of Gadjah Mada University, Indonesia. A total of 225 graduates responded to the survey. The results of confirmatory factor analysis (CFA) showed that all individual constructs of PBL and graduates' competencies had acceptable goodness-of-fit (GOF) values. Additionally, the factor loadings (standardized loading estimates), the AVEs (average variance extracted), the CRs (construct reliability) and the ASVs (average shared squared variance) provided evidence of convergent and discriminant validity. All values indicated valid and reliable measurements. The investigation of the effects of PBL showed that each PBL component had specific effects on graduates' competencies. Interpersonal competencies were affected by Student-centred learning (β = .137; p < .05) and Small group components (β = .078; p < .05). Problem as stimulus affected Leadership (β = .182; p < .01). Real-world problems affected Personal and organisational competencies (β = .140; p < .01) and Interpersonal competencies (β = .114; p < .05). Teacher as facilitator affected Leadership (β = .142; p < .05). Self-directed learning affected Field-related competencies (β = .080; p < .05). These results can help higher education institutions and educators make informed choices about the implementation of PBL components. With this information, higher education institutions and educators can pursue their educational goals while working within their limited resources. This study seeks to improve on the research methods of prior studies in four major ways: (1) by identifying PBL components based on theory and empirical data; (2) by using latent variables in structural equation modelling instead of using a single variable as a proxy for a construct; (3) by using CFA to validate the latent structure of the measurement, thus providing better evidence of validity; and (4) by using graduate survey data, which is suitable for analysing PBL effects in the framework of the relationship between higher education and the world of work.
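For readers unfamiliar with the reliability statistics mentioned above, the following minimal sketch shows how AVE and construct reliability (CR) are conventionally computed from standardized factor loadings; the loadings shown are hypothetical, not the study's data:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings):
    """Composite/construct reliability: (sum of loadings)^2 over itself plus the error variances."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Hypothetical standardized loadings of one latent construct
loadings = [0.72, 0.68, 0.75, 0.70]
print(round(ave(loadings), 3))                    # ~0.51 (> .5 is usually read as convergent validity)
print(round(construct_reliability(loadings), 3))  # ~0.81 (> .7 is usually read as reliable)
```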

Relevance:

30.00%

Publisher:

Abstract:

The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long since ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous amounts of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework that consists of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and transformed into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Several approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased by novel approaches for the initialisation of the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, due to the concept-drift phenomenon, the network traffic data change continuously, which leads in real time to non-stationary network data. This phenomenon is handled by the update model. The EGHSOM model can detect new anomalies effectively and the NNB model adapts optimally to changes in the network data. In the experimental investigations the framework showed promising results. In the first experiment, the framework was evaluated in offline mode. OptiFilter was evaluated with offline, synthetic and realistic data. The adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully transformed the enormous amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: the processing of the collected network data, achieving the best performance (such as overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
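To illustrate the idea of a classification-confidence margin threshold in a SOM-style classifier, here is a simplified numpy sketch; it is not the EGHSOM implementation from the dissertation, and the prototypes, labels and threshold value are hypothetical:

```python
import numpy as np

def classify_with_margin(x, prototypes, labels, margin_threshold):
    """Assign x to the label of its best matching unit (BMU), unless the distance
    to the BMU exceeds the confidence margin, in which case the connection is
    flagged as 'unknown' for further inspection (e.g. by a normal-behaviour model)."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    bmu = int(np.argmin(distances))
    if distances[bmu] > margin_threshold:
        return "unknown"
    return labels[bmu]

# Hypothetical map prototypes with labels, and two incoming connection vectors
prototypes = np.array([[0.1, 0.2, 0.0], [0.9, 0.8, 1.0]])
labels = ["normal", "attack"]
print(classify_with_margin(np.array([0.12, 0.18, 0.05]), prototypes, labels, 0.5))  # -> normal
print(classify_with_margin(np.array([0.5, 0.5, 0.5]), prototypes, labels, 0.5))     # -> unknown
```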

Relevance:

30.00%

Publisher:

Abstract:

A modeling study of hippocampal pyramidal neurons is described. This study is based on simulations using HIPPO, a program that simulates the somatic electrical activity of these cells. HIPPO is based on a) descriptions of eleven non-linear conductances that have been either reported for this class of cell in the literature or postulated in the present study, and b) an approximation of the electrotonic structure of the cell that is derived in this thesis from data on the linear properties of these cells. HIPPO is used a) to integrate empirical data from a variety of sources on the electrical characteristics of this type of cell, b) to investigate the functional significance of the various elements that underlie the electrical behavior, and c) to provide the electrophysiologist with a tool that supplements direct observation of these cells and offers a method of testing speculations regarding parameters that are not experimentally accessible.
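As a toy illustration of the kind of conductance-based somatic model such a simulator integrates, the following sketch uses a single compartment with a leak and one voltage-gated conductance and made-up parameter values; it is far simpler than HIPPO's eleven non-linear conductances:

```python
import numpy as np

# Illustrative parameters (not HIPPO's): capacitance, leak, and one voltage-gated K conductance
C_M = 1.0                       # membrane capacitance (nF)
G_LEAK, E_LEAK = 0.05, -65.0    # leak conductance (uS) and reversal potential (mV)
G_K, E_K = 0.4, -80.0           # maximal K conductance (uS) and reversal potential (mV)

def n_inf(v):
    """Steady-state activation of the K conductance (illustrative Boltzmann curve)."""
    return 1.0 / (1.0 + np.exp(-(v + 30.0) / 10.0))

def simulate(i_inj=0.5, dt=0.025, t_end=100.0):
    """Forward-Euler integration of C dV/dt = -(I_leak + I_K) + I_inj."""
    v, n = -65.0, n_inf(-65.0)
    trace = []
    for _ in np.arange(0.0, t_end, dt):
        i_leak = G_LEAK * (v - E_LEAK)
        i_k = G_K * n ** 4 * (v - E_K)
        v += dt * (-(i_leak + i_k) + i_inj) / C_M
        n += dt * (n_inf(v) - n) / 5.0     # 5 ms activation time constant
        trace.append(v)
    return np.array(trace)

print(simulate()[-1])  # membrane potential reached under constant current injection
```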

Relevance:

30.00%

Publisher:

Abstract:

Freehand sketching is both a natural and crucial part of design, yet it is unsupported by current design automation software. We are working to combine the flexibility and ease of use of paper and pencil with the processing power of a computer to produce a design environment that feels as natural as paper, yet is considerably smarter. One of the most basic steps in accomplishing this is converting the original digitized pen strokes in the sketch into the intended geometric objects using feature point detection and approximation. We demonstrate how multiple sources of information can be combined for feature detection in strokes, and we apply this technique using two approaches to signal processing: one using simple average-based thresholding and a second using scale space.
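A minimal illustration of the average-based thresholding idea for finding candidate feature points in a digitized stroke is sketched below; it assumes the stroke is given as sampled (x, y) points and is not the system described above:

```python
import numpy as np

def curvature(points):
    """Turning angle at each interior sample of a polyline stroke."""
    d = np.diff(points, axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])
    turn = np.abs(np.diff(np.unwrap(angles)))
    return np.concatenate(([0.0], turn, [0.0]))   # pad so indices match the input points

def feature_points(points, factor=1.5):
    """Indices whose curvature exceeds `factor` times the average curvature."""
    c = curvature(points)
    return np.where(c > factor * c.mean())[0]

# Hypothetical stroke: an "L" shape sampled along two straight segments
stroke = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [3, 1], [3, 2], [3, 3]], dtype=float)
print(feature_points(stroke))  # -> [3], the corner sample
```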

Relevance:

30.00%

Publisher:

Abstract:

In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
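For background, the rigid-alignment step at the heart of an ICP-style tracker can be sketched as a standard SVD-based least-squares fit between corresponding 3D point sets; the code below is generic textbook material, not the hybrid tracker described above:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that R @ src_i + t ≈ dst_i."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance of centered points
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                   # guard against reflections
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = c_dst - r @ c_src
    return r, t

# Hypothetical check: recover a known small head rotation about the y axis plus a translation
rng = np.random.default_rng(0)
angle = np.deg2rad(5.0)
r_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
src = rng.random((50, 3))
dst = src @ r_true.T + np.array([0.01, 0.0, 0.02])
r_est, t_est = rigid_transform(src, dst)
print(np.allclose(r_est, r_true, atol=1e-6), np.round(t_est, 3))
```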

Relevance:

30.00%

Publisher:

Abstract:

A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object-centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal, which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity of constructing and using object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.

Relevance:

30.00%

Publisher:

Abstract:

We present a new method to select features for a face detection system using Support Vector Machines (SVMs). In the first step we reduce the dimensionality of the input space by projecting the data into a subset of eigenvectors. The dimension of the subset is determined by a classification criterion based on minimizing a bound on the expected error probability of an SVM. In the second step we select features from the SVM feature space by removing those that have low contributions to the decision function of the SVM.
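A rough sketch of this two-step idea using off-the-shelf tools is given below (scikit-learn here); the synthetic data are placeholders, and ranking projected features by the absolute weights of a linear SVM is a simplification of the selection criterion described above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                  # placeholder "image" feature vectors
y = (X[:, :3].sum(axis=1) > 0).astype(int)      # synthetic face / non-face labels

# Step 1: reduce dimensionality by projecting onto the leading eigenvectors
pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)

# Step 2: train an SVM and rank projected features by their contribution to the decision function
svm = SVC(kernel="linear").fit(Z, y)
ranking = np.argsort(-np.abs(svm.coef_[0]))
print("features ordered by contribution to the decision function:", ranking)
```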

Relevance:

30.00%

Publisher:

Abstract:

Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, whereas past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
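To give a flavour of the linear-programming step, the toy sketch below bounds one coordinate of a predicted feature location under a box constraint on the image-point errors; the sensitivity matrix and error bound are made up for illustration and this is not the formulation of the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Predicted feature location p = p0 + J @ e, with each error component bounded, |e_i| <= eps
p0 = np.array([120.0, 80.0])           # nominal image position (pixels)
J = np.array([[0.8, -0.3, 0.1, 0.4],   # made-up sensitivity of p to the error vector e
              [0.2, 0.9, -0.5, 0.3]])
eps = 1.5                               # bounded error on each matched image coordinate (pixels)

def extent(axis, sign):
    """Min (sign=+1) or max (sign=-1) of p[axis] over the error box, via a small LP."""
    res = linprog(c=sign * J[axis], bounds=[(-eps, eps)] * J.shape[1])
    return p0[axis] + sign * res.fun

x_min, x_max = extent(0, +1), extent(0, -1)
print(f"x extent of the uncertainty region: [{x_min:.2f}, {x_max:.2f}]")
```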

Relevance:

30.00%

Publisher:

Abstract:

This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at two hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros across the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables it may be a matter of chance whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but it may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
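One simple way to probe whether structural zeros depend on household-level variables, as asked above, is a logistic regression of the zero indicator on those variables; the sketch below uses synthetic data and statsmodels and is only an illustration, not the authors' model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
# Synthetic household-level covariates (placeholders for the 5 factors and 5 covariates)
income = rng.normal(0, 1, n)
household_size = rng.integers(1, 6, n)
# Synthetic indicator: 1 if the household records zero alcohol/tobacco spending ("teetotal")
p = 1 / (1 + np.exp(-(-0.5 - 0.8 * income + 0.3 * household_size)))
teetotal = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({"income": income, "household_size": household_size}))
model = sm.Logit(teetotal, X).fit(disp=False)
print(model.summary())  # significant coefficients suggest the zeros are not independent of the covariates
```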