956 results for Constraint based modeling


Relevance:

30.00%

Publisher:

Abstract:

Micro-optical filters are nowadays indispensable in many areas of telecommunications; important fields of application also include spectroscopic systems in medical, process and environmental engineering. This work deals with the technology development and fabrication of air-gap-based, surface-micromachined Fabry-Perot filters arranged vertically on a substrate. Two filter variants, based on two different material systems, are investigated in detail: on the one hand, the further development of continuously micromechanically tunable InP / air-gap filters; on the other, novel, low-cost silicon nitride / air-gap filters. The thesis is structured such that, after an introduction comparing the work with results of other research groups worldwide, theoretical fundamentals for calculating the spectral reflectivity and transmission of arbitrary optical layer stacks are first presented. In addition, a brief theoretical overview of important properties of Fabry-Perot filters and of the possibility of micromechanical tuning is given. This is followed by a chapter devoted to the fundamental technological aspects of fabricating air-gap-based filters, which establishes the connection to important reference works on which various further developments of this thesis are based. The two subsequent chapters then explain in detail the design, fabrication and characterization of the two filter variants mentioned above. Apart from the preceding epitaxy of InP / GaInAs layers, the fabrication of the InP / air-gap filters was carried out entirely in-house at the institute. The fabrication steps are explained in detail, with one focus on the dry-chemical etching of InP and of GaInAs, which served as the sacrificial-layer material for creating the air gaps. In the course of the work, important technical improvements were developed and deployed that led to a more efficient fabrication of the filters; these are documented in detail in the thesis. The fabricated, electrostatically actuated filters, designed for use in optical telecommunications, consist of two air-gap-based Bragg mirrors, each comprising three InP layers of 357 nm or 367 nm thickness (depending on the design). The filters consist of membranes arranged in parallel above one another at a defined spacing, attached to support posts by connecting bridges of varying number and length. Since these comparatively very thin 357 nm or 367 nm layers form free-standing structures of up to 140 µm in length while still having to maintain positioning accuracies in the nm range, these are very demanding micromechanical devices. To study the influence of the numerous geometric structural parameters, various lateral filter designs were implemented. The realized filters achieved an enormously wide spectral tuning range: depending on the lateral design, internationally leading values for tunable Fabry-Perot filters of more than 140 nm were reached.
Tuning was performed continuously with an applied voltage of only a few volts. Compared to previously reported results, both the wavelength tuning range and the required tuning voltage were thereby improved significantly. Owing to the high refractive-index contrast and the small layer thickness, the filters exhibit an advantageous, extremely wide stop band on the order of 550 nm. The chosen very short cavity lengths yield a free spectral range of the same order of magnitude, enabling a wide spectral operating range. During the work it became apparent that stress in the free-standing InP layers strongly influences, or even impedes, the operation of the micro-optical filters. In particular, under-etching of the support posts and the resulting bending of the corners at which the connecting bridges are attached led to large vertical membrane displacements that alter the filter characteristics. To achieve optimal results, the epitaxy must be improved further; however, by additionally employing a special protective mask, the under-etching of the support posts, and thus severe vertical deformations, could be reduced. The deformations resulting from stress, and the response of individual free-standing InP layers to an applied DC or AC voltage, were investigated in detail. Using white-light interferometry, laterally identical structures consisting of InP layers of different thickness (357 nm and 1065 nm) were compared. A further main part of the thesis concerns silicon nitride / air-gap filters, which are based on a new technological approach developed within this dissertation. These filters consist of two Bragg mirrors, each built from five 590 nm thick free-standing silicon nitride layers separated by 390 nm gaps, and were fabricated on glass substrates. The fabrication process is, however, compatible with many other materials and processes, so that integration with other devices is comparatively easy. The processes for these likewise surface-micromachined filters were consistently optimized for low fabrication cost. Amorphously deposited silicon was used as the sacrificial-layer material. The fabrication process comprises the deposition of stress-optimized layers (silicon and silicon nitride) by PECVD, lateral patterning by reactive ion etching with SF6 / CHF3 / Ar gases and photoresist as the mask, wet-chemical under-etching of the sacrificial layers with KOH, and critical-point drying of the samples. The optical characterization of the filters shows close agreement between the experimentally measured data and the corresponding theoretical model calculations. White-light interferometer measurements of the released structures show flat filter layers and confirm the high vertical positioning accuracy achievable with this technological approach.
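As a rough illustration of the transfer-matrix computation of spectral reflectivity mentioned in the abstract, the following Python sketch evaluates the normal-incidence reflectance of a simple layer stack. The stack values (an InP index of about 3.17, 357 nm membranes, quarter-wave air gaps around 1550 nm) are assumptions for the example, not design data from the thesis.

```python
import numpy as np

def reflectance(wavelength_nm, layers, n_in=1.0, n_sub=1.0):
    """Normal-incidence reflectance of a thin-film stack via the
    transfer-matrix (characteristic-matrix) method.
    layers: (refractive_index, thickness_nm), listed from the incidence side."""
    B, C = 1.0 + 0j, n_sub + 0j              # start at the substrate side
    for n, d in reversed(layers):
        delta = 2 * np.pi * n * d / wavelength_nm
        B, C = (np.cos(delta) * B + 1j * np.sin(delta) / n * C,
                1j * n * np.sin(delta) * B + np.cos(delta) * C)
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Hypothetical 3-period InP-membrane / quarter-wave air-gap mirror near 1550 nm
lam0 = 1550.0
stack = [(3.17, 357.0), (1.0, lam0 / 4)] * 3
wavelengths = np.linspace(1200, 1900, 351)
R = [reflectance(w, stack) for w in wavelengths]
print(f"max reflectance in band: {max(R):.3f}")
```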

Relevance:

30.00%

Publisher:

Abstract:

The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystem and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated in a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised around the requirements for integrating the simulation of forest changes driven by wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following key research tasks have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover. Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the Global Forest Products Model (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector under selected scenarios from the Millennium Ecosystem Assessment and the Global Environmental Outlook until 2050. Second, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data are used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and identify locations of deforestation. Using the previous results as input, a methodology to simulate the spatial dynamics of forests driven by wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas and allocates these on a five arc-minute grid.
In a first version, the model assumes present-day conditions throughout the entire study period and does not explicitly address forest age structure. Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is examined under three forest management scenarios; the module responds to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities remain, particularly the incorporation of plantation establishment, age structure dynamics, and the implementation of a new technology-change factor in the GFPM that would allow raw wood products (especially fuelwood) to be substituted by non-wood products.
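The allocation step described above (translating country-level demands into biomass requirements on a grid) can be illustrated with a deliberately simplified greedy scheme. The sketch below is not the actual LandSHIFT procedure, and all cell values are invented.

```python
import numpy as np

def allocate_wood_demand(biomass_t_per_cell, demand_t):
    """Harvest grid cells, richest first, until the demand is satisfied.
    Returns the harvested cell indices and the total biomass supplied."""
    order = np.argsort(biomass_t_per_cell)[::-1]   # richest cells first
    harvested, supplied = [], 0.0
    for idx in order:
        if supplied >= demand_t:
            break
        harvested.append(int(idx))
        supplied += biomass_t_per_cell[idx]
    return harvested, supplied

rng = np.random.default_rng(0)
cells = rng.gamma(2.0, 50.0, size=1000)            # hypothetical tonnes per cell
cells_used, total = allocate_wood_demand(cells, demand_t=20_000.0)
print(f"{len(cells_used)} cells harvested, {total:,.0f} t supplied")
```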

Relevance:

30.00%

Publisher:

Abstract:

Land use has become a force of global importance: 34% of the Earth's ice-free surface was covered by croplands or pastures in 2000. The expected increase in the global human population, together with imminent climate change and the associated search for energy sources other than fossil fuels, can, through land-use and land-cover changes (LUCC), increase the pressure on natural resources, further degrade ecosystem services, and disrupt other planetary systems of key importance to humanity. This thesis presents four modeling studies on the interplay between LUCC, increased production of biofuels and climate change in four selected world regions. In the first case study, two new crop types (sugarcane and jatropha) are parameterized in the LPJmL (Lund-Potsdam-Jena managed Land) dynamic global vegetation model to calculate their potential productivity. Country-wide spatial variation in the yields of sugarcane and jatropha leads to substantially different land requirements to meet the biofuel production targets for 2015 in Brazil and India, depending on the location of the plantations. In particular, the average land requirements for jatropha in India are considerably higher than previously estimated. These findings indicate that crop zoning is important to avoid excessive LUCC. In the second case study, the LandSHIFT model of land-use and land-cover changes is combined with life cycle assessments to investigate the occurrence and extent of biofuel-driven indirect land-use changes (ILUC) in Brazil by 2020. The results show that Brazilian biofuels can indeed cause considerable ILUC, especially by pushing the rangeland frontier into the Amazonian forests. The carbon debt caused by such ILUC would mean no net carbon savings (from using plant-based ethanol and biodiesel instead of fossil fuels) for 44 years for sugarcane ethanol and 246 years for soybean biodiesel. Intensification of livestock grazing could avoid such ILUC; we argue that it should be supported by the Brazilian biofuel sector, given the sector's own interest in minimizing carbon emissions. The third study develops a new method for crop allocation in LandSHIFT that accounts for the occurrence and capacity of specific infrastructure units. The method is applied in a first assessment of the potential availability of land for biogas production in Germany. The results indicate that Germany has enough land to supply virtually all (90 to 98%) of its current biogas plant capacity with cultivated feedstocks alone. Biogas plants located in the south and southwest of Germany might face more difficulties, and those in the north and northeast fewer, in meeting their capacities with cultivated feedstocks, since the feedstock transport distance to plants is a crucial issue for biogas production. In the fourth study, an adapted version of LandSHIFT is used to assess the impacts of contrasting scenarios of climate change and conservation targets on land use in the Brazilian Amazon. Model results show that severe climate change in some regions by 2050 can shift the deforestation frontier to areas that would experience low levels of human intervention under mild climate change (such as the western Amazon forests or parts of the Cerrado savannas). Halting deforestation of the Amazon and of the Brazilian Cerrado would require either a reduction in meat production or an intensification of livestock grazing in the region.
These findings point to the need for an integrated, multidisciplinary plan for adaptation to climate change in the Amazon. The overall conclusions of this thesis are that (i) biofuels must be analyzed and planned carefully in order to effectively reduce carbon emissions; (ii) climate change can have considerable impacts on the location and extent of LUCC; and (iii) intensification of livestock grazing represents a promising avenue for minimizing the impacts of future land-use and land-cover changes in Brazil.
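The carbon-debt payback figures quoted above (44 and 246 years) follow from a simple ratio: the one-off emissions from induced land-use change divided by the annual savings from substituting fossil fuels. A minimal sketch, with placeholder values chosen only so the ratio echoes the 44-year sugarcane-ethanol case; these are not the study's data:

```python
def carbon_payback_years(iluc_debt_tC_per_ha, annual_saving_tC_per_ha):
    """Years until avoided fossil emissions repay the one-off ILUC debt."""
    return iluc_debt_tC_per_ha / annual_saving_tC_per_ha

# Hypothetical values: 110 tC/ha debt, 2.5 tC/ha/yr avoided emissions.
print(carbon_payback_years(110.0, 2.5))   # -> 44.0 years
```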

Relevance:

30.00%

Publisher:

Abstract:

Enterprise Modeling (EM) is currently used either as a technique to represent and understand the structure and behavior of the enterprise, or as a technique to analyze business processes, and in many cases as a supporting technique for business process reengineering. However, EM architectures and methods for Enterprise Engineering can also be used to support newer management techniques like SIX SIGMA, because these techniques need a clear, transparent and integrated definition and description of the business activities of the enterprise in order to build up, optimize and operate a successful enterprise. The main goal of SIX SIGMA is to optimize the performance of processes. A still open question is: what are adequate quality criteria and methods to ensure such performance, and what must be done to achieve quality governance? This paper describes a method combining an Enterprise Engineering method and a SIX SIGMA strategy to reach Quality Governance.
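One concrete quality criterion that such a Quality Governance method could track is the textbook SIX SIGMA process metric: defects per million opportunities (DPMO) and the corresponding sigma level. The sketch below shows the standard computation (with the conventional 1.5-sigma shift); it is generic SIX SIGMA practice, not a method taken from the paper, and the input counts are invented.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return 1e6 * defects / (units * opportunities_per_unit)

def sigma_level(dpmo_value):
    """Short-term sigma level, using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1.0 - dpmo_value / 1e6) + 1.5

d = dpmo(defects=35, units=10_000, opportunities_per_unit=4)   # invented data
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
```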

Relevance:

30.00%

Publisher:

Abstract:

The rapid growth in high-data-rate communication systems has introduced new spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high-power amplifier (HPA) of the communication system's base transceiver station (BTS). To avoid spectral spreading due to high PAR, stringent linearity requirements must be met, which forces the HPA to operate at large power back-off at the expense of power efficiency. Consequently, high-power devices with high linearity and efficiency are fundamental for HPAs. Recent developments in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, offer higher power levels with a superior linearity-efficiency trade-off for microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) models of AlGaN/GaN HEMTs are essential to reflect the real device response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including trapping and self-heating dispersion, which account for nonlinear drain current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current model has been enhanced by addressing several issues, such as trapping and self-heating characterization. The thermal profile of the HEMT structure has also been investigated, and the corresponding thermal resistance extracted through thermal simulation and chuck-temperature-controlled pulsed I(V) and static DC measurements. A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements. The obtained time constants are represented by equivalent sub-circuits and integrated into the nonlinear drain current implementation to account for the dynamics of complex communication signals. Verification of this table-based large-size large-signal electrothermal model shows high accuracy in terms of output power, gain, efficiency and nonlinearity prediction for standard large-signal test signals.
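For reference, the PAR (often called PAPR) that drives the HPA back-off can be computed directly from a baseband signal; the sketch below does so for a toy OFDM-like multicarrier signal. The subcarrier count and QPSK mapping are assumptions for the example, not LTE-A parameters.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
n_sub = 256                                    # hypothetical subcarrier count
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sub)  # QPSK
x = np.fft.ifft(symbols)                       # multicarrier time-domain signal
print(f"PAPR = {papr_db(x):.1f} dB")           # typically around 10 dB for OFDM
```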

Relevance:

30.00%

Publisher:

Abstract:

The Upper Blue Nile River Basin (UBNRB), located in the western part of Ethiopia between 7°45' and 12°45'N and 34°05' and 39°45'E, has a total area of 174,962 km². More than 80% of the population in the basin is engaged in agricultural activities. Because of the particularly dry climate in the basin, as in most other regions of Ethiopia, agricultural productivity depends to a very large extent on the occurrence of the seasonal rains. This makes agriculture highly vulnerable to the potential climate hazards that are about to afflict Africa as a whole and Ethiopia in particular. To analyze these possible impacts of future climate change on the water resources of the UBNRB, the first part of the thesis develops climate projections for precipitation and for minimum and maximum temperatures in the basin, using downscaled predictors from three GCMs (ECHAM5, GFDL21 and CSIRO-MK3) under SRES scenarios A1B and A2. The two statistical downscaling models used are SDSM and LARS-WG, whereby SDSM is used to downscale ECHAM5 predictors alone and LARS-WG is applied both in mono-model mode with predictors from ECHAM5 and in multi-model mode with combined predictors from ECHAM5, GFDL21 and CSIRO-MK3. For the calibration/validation of the downscaling models, observed as well as NCEP climate data for the 1970-2000 reference period are used. The future projections are made for two time periods, 2046-2065 (2050s) and 2081-2100 (2090s). For the 2050s, the downscaled climate predictions indicate rises of 0.6°C to 2.7°C for the seasonal maximum temperatures Tmax and of 0.5°C to 2.44°C for the minimum temperatures Tmin. Similarly, during the 2090s the seasonal Tmax increases by 0.9°C to 4.63°C and Tmin by 1°C to 4.6°C, whereby these increases are generally higher for the A2 than for the A1B scenario. For most sub-basins of the UBNRB, the predicted changes of Tmin are larger than those of Tmax. For precipitation, both downscaling tools predict large changes which, depending on the GCM employed, are such that the spring and summer seasons will experience changes between -36% and +1%, and the autumn and winter seasons changes between -8% and +126%, for the two future time periods, regardless of the SRES scenario used. In the second part of the thesis, the semi-distributed, physically based hydrologic model SWAT (Soil Water Assessment Tool) is used to evaluate the impacts of the predicted future climate change on the hydrology and water resources of the UBNRB. The downscaled future predictors are used as input to the SWAT model to predict the streamflow of the Upper Blue Nile as well as other relevant water resources parameters in the basin. Calibration and validation of the streamflow model is again done on 1970-2000 measured discharge at the outlet gauge station Eldiem, whereby the most sensitive of the numerous "tuneable" calibration parameters in SWAT were selected by means of a sophisticated sensitivity analysis. A good calibration/validation model performance with a high NSE coefficient of 0.89 is obtained. The future simulations of streamflow in the basin, using both SDSM- and LARS-WG-downscaled output in SWAT, reveal a decline of 10% to 61% in future Blue Nile streamflow. Expectedly, these adverse effects on future UBNRB water availability are more exacerbated for the 2090s than for the 2050s, regardless of the SRES scenario.
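The NSE coefficient used above to judge the SWAT calibration is the Nash-Sutcliffe efficiency: one minus the ratio of the model error variance to the variance of the observations. A minimal sketch with made-up discharge values:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, <= 0 when the model
    is no better than the mean of the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))

q_obs = np.array([120.0, 340.0, 510.0, 280.0, 150.0])  # hypothetical m³/s
q_sim = np.array([130.0, 320.0, 495.0, 300.0, 160.0])
print(f"NSE = {nse(q_obs, q_sim):.2f}")
```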

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots (e.g., by minimizing operation time) while ensuring human safety (e.g., collision avoidance). To solve such problems, an online motion planning approach for robotic manipulators with HRI has been proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies for the robotic manipulators considered in the thesis are performed directly in the workspace for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP), which is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by the human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions from probability density functions in the form of single points, such as the most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as regions possibly occupied by the human at future instances of time. The method is further enhanced by combining it with reachability analysis to increase the prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. In this way one arrives at motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the industrial robot is to drive its end-effector through a nominal sequence of gripping, motion and releasing operations without collision with the human arm.
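The core trick of embedding obstacle avoidance in a mixed-integer program can be sketched with a standard big-M formulation: a waypoint must lie on at least one side of an axis-aligned obstacle box, enforced through binary variables. The single-step 2D example below (using the PuLP package) is only a schematic stand-in for the thesis' richer MPC formulation; the obstacle, goal and bounds are invented.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

M = 100.0                                            # big-M constant
obs = dict(xmin=2.0, xmax=4.0, ymin=2.0, ymax=4.0)   # hypothetical obstacle box
goal = (3.0, 3.0)                                    # goal inside the obstacle

prob = LpProblem("avoid_obstacle", LpMinimize)
x = LpVariable("x", -10, 10)
y = LpVariable("y", -10, 10)
# L1 distance to the goal via auxiliary variables
dx = LpVariable("dx", 0)
dy = LpVariable("dy", 0)
prob += dx + dy                                      # objective
prob += dx >= x - goal[0]; prob += dx >= goal[0] - x
prob += dy >= y - goal[1]; prob += dy >= goal[1] - y
# Disjunction: the waypoint must lie on at least one side of the box.
b = [LpVariable(f"b{i}", cat=LpBinary) for i in range(4)]
prob += x <= obs["xmin"] + M * (1 - b[0])            # left of the box
prob += x >= obs["xmax"] - M * (1 - b[1])            # right of the box
prob += y <= obs["ymin"] + M * (1 - b[2])            # below the box
prob += y >= obs["ymax"] - M * (1 - b[3])            # above the box
prob += lpSum(b) >= 1
prob.solve()
print(x.value(), y.value())   # lands on the obstacle boundary, not inside
```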

Relevance:

30.00%

Publisher:

Abstract:

The increasing interconnection of information and communication systems leads to ever greater complexity and thus to a further increase in security vulnerabilities. Classical protection mechanisms such as firewalls and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to detect unusual behavior and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volumes of network data and in the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of the normal network state (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events; these are then analyzed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the GHSOM artificial neural network is studied intensively and substantially extended. Several approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growing topology is increased by novel approaches to the initialization of the weight vectors and by reinforcing the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously up to date. Furthermore, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, network traffic changes constantly due to the concept-drift phenomenon, which in real time produces non-stationary network data; this phenomenon is handled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model optimally adapts to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 Gbit/s network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (e.g., overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
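The classification-confidence margin idea can be illustrated independently of the EGHSOM implementation: map a connection vector to its best-matching unit (BMU) in a SOM codebook and flag it as unknown when the quantization error exceeds a threshold learned from normal traffic. The codebook, labels and margin below are placeholders.

```python
import numpy as np

def classify(vec, codebook, labels, margin):
    """Return the BMU's label, or 'unknown' if the quantization error
    exceeds the confidence margin (to be examined further, e.g. by NNB)."""
    d = np.linalg.norm(codebook - vec, axis=1)   # distance to every neuron
    bmu = int(np.argmin(d))
    if d[bmu] > margin:
        return "unknown"
    return labels[bmu]

rng = np.random.default_rng(2)
codebook = rng.normal(size=(16, 8))              # toy 16-neuron map, 8 features
labels = ["normal"] * 12 + ["attack"] * 4        # hypothetical neuron labels
print(classify(rng.normal(size=8), codebook, labels, margin=1.5))
```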

Relevance:

30.00%

Publisher:

Abstract:

Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, methods for the identification and analysis of such subsystems were developed using approaches from machine learning, physics and graph theory. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and the approach was shown to score significantly higher than conventional methods when benchmarked against existing databases. Moreover, a mathematical treatment was developed for simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined for their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a nonintuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability found was not due to incomplete or biased data, but is likely to reflect biological structure.
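A minimal sketch of the NMF step on a genes-by-conditions expression matrix, using scikit-learn's implementation rather than the one developed in the work: each column of W can be read as a putative subsystem ("metagene"). The data here are random placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
X = rng.random((500, 40))                 # 500 genes x 40 conditions (toy data)
model = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)                # gene memberships per subsystem
H = model.components_                     # subsystem activity per condition
top_genes = np.argsort(W[:, 0])[::-1][:10]
print("genes most associated with subsystem 0:", top_genes)
```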

Relevance:

30.00%

Publisher:

Abstract:

We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position lies significantly far outside the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability to represent an object by a relatively small number of model images, for the purpose of cheap and fast viewers that can run on standard hardware.
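The warping primitive on which such reprojection rests, resampling an example image along a dense correspondence field toward the novel view, can be sketched as follows; producing that flow from the chain of trilinear tensors is the paper's actual contribution and is not reproduced here. The constant flow field below is a placeholder.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, flow_y, flow_x):
    """Backward-warp a grayscale image by a per-pixel displacement field."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + flow_y, xx + flow_x])   # sampling locations
    return map_coordinates(image, coords, order=1, mode="nearest")

img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # toy gradient
shifted = warp(img,
               flow_y=np.full((64, 64), 2.0),
               flow_x=np.full((64, 64), -3.0))
print(shifted.shape)
```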

Relevance:

30.00%

Publisher:

Abstract:

We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. A model then consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown, as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications, including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis and image compression.
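A toy version of the matching idea, fitting only the texture coefficients of a linear combination of prototypes by gradient descent on the reconstruction error (the full method also optimizes shape/correspondence parameters, which this sketch omits; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
prototypes = rng.random((10, 32 * 32))        # 10 prototype images, flattened
c_true = rng.random(10)
novel = c_true @ prototypes                   # synthetic "novel image"

c = np.zeros(10)                              # coefficients to estimate
lr = 1e-4                                     # step size for gradient descent
for step in range(2000):
    err = c @ prototypes - novel              # residual image
    c -= lr * (prototypes @ err)              # gradient of 0.5 * ||err||^2
print("max coefficient error:", np.abs(c - c_true).max())
```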