879 results for Physics Based Modeling


Abstract:

High-speed semiconductor lasers are an integral part of the implementation of high-bit-rate optical communication systems. They are compact, rugged, reliable, long-lived, and relatively inexpensive sources of coherent light. Due to the very low attenuation window of silica-based optical fiber at 1.55 μm and its zero-dispersion point at 1.3 μm, they have become the mainstay of optical fiber communication systems. For the fabrication of lasers with gratings, such as distributed Bragg reflector (DBR) or distributed feedback (DFB) lasers, etching is the most critical step: it defines the lateral dimensions of the structure, which determine the performance of the optoelectronic device. In this thesis, existing etching processes for InP were studied and a novel dry etching process was developed. The new process is based on Cl2/CH4/H2/Ar chemistry and yields very smooth surfaces and vertical sidewalls. With this process the grating definition was significantly improved compared to other technological developments in the field. A surface-defined grating approach is used in this work which requires no re-growth steps, making the whole fabrication process simpler and more cost-effective. Moreover, this grating fabrication process is fully compatible with nanoimprint lithography and can be used for high-throughput, low-cost manufacturing.

With the etching techniques reported previously, very deep etching is not possible because of aspect-ratio-dependent etching, a phenomenon in which the etch rate slows down with increasing etch depth, resulting in non-vertical sidewalls and footing effects. Although quite vertical sidewalls were achieved with our developed process, footing remained a problem. To overcome the challenges of grating definition and deep etching, a completely new three-step gas-chopping dry etching process was developed. This was the first demonstration of a time-multiplexed etching process for an InP-based material system. The developed gas-chopping process showed extraordinary results, including a high mask selectivity of 15, a moderate etch rate, very vertical sidewalls, and a record-high aspect ratio of 41. Both developed etching processes are fully compatible with nanoimprint lithography and can be used for low-cost, high-throughput fabrication.

A large number of broad-area lasers, ridge waveguide lasers, distributed feedback lasers, distributed Bragg reflector lasers, and coupled-cavity injection-grating lasers were fabricated using the developed one-step etching process. Extensive characterization was carried out to optimize all the important design and fabrication parameters. The fabricated devices show excellent performance: a side-mode suppression ratio of more than 52 dB, an output power of 17 mW per facet, a high efficiency of 0.15 W/A, stable operation over temperature and injection current, and a threshold current as low as 30 mA for an almost 1 mm long device. A record-high modulation bandwidth of 15 GHz with electron-photon resonance and open eye diagrams for 10 Gbps data transmission were also demonstrated.
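
As a quantitative aside, the two headline figures of the etching work, mask selectivity and aspect ratio, are simple ratios of measurable dimensions. The Python sketch below illustrates the arithmetic; the trench dimensions and mask loss are hypothetical, and only the selectivity of 15 and the aspect ratio of 41 come from the abstract.

```python
# Illustrative etch-process metrics; trench dimensions are hypothetical,
# only selectivity ~15 and aspect ratio ~41 come from the abstract.

def selectivity(etch_depth_nm: float, mask_eroded_nm: float) -> float:
    """Mask selectivity: material etched per unit of mask consumed."""
    return etch_depth_nm / mask_eroded_nm

def aspect_ratio(etch_depth_nm: float, trench_width_nm: float) -> float:
    """Aspect ratio: trench depth divided by its opening width."""
    return etch_depth_nm / trench_width_nm

# Hypothetical numbers consistent with the reported values:
depth, width, mask_loss = 4100.0, 100.0, 273.3   # nm
print(f"selectivity  ~ {selectivity(depth, mask_loss):.0f}")   # ~15
print(f"aspect ratio ~ {aspect_ratio(depth, width):.0f}")      # ~41
```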

Abstract:

The Upper Blue Nile River Basin (UBNRB), located in the western part of Ethiopia between 7°45' and 12°45'N and 34°05' and 39°45'E, has a total area of 174,962 km². More than 80% of the population in the basin is engaged in agricultural activities. Because of the particularly dry climate in the basin, as in most other regions of Ethiopia, agricultural productivity depends to a very large extent on the occurrence of the seasonal rains. This makes agriculture highly vulnerable to the potential climate hazards expected to afflict Africa as a whole and Ethiopia in particular. To analyze the possible impacts of future climate change on the water resources of the UBNRB, the first part of the thesis carries out climate projections for precipitation and for minimum and maximum temperatures in the basin, using downscaled predictors from three GCMs (ECHAM5, GFDL21 and CSIRO-MK3) under SRES scenarios A1B and A2. The two statistical downscaling models used are SDSM and LARS-WG, whereby SDSM is used to downscale ECHAM5 predictors alone, and LARS-WG is applied both in mono-model mode with predictors from ECHAM5 and in multi-model mode with combined predictors from ECHAM5, GFDL21 and CSIRO-MK3. For the calibration/validation of the downscaling models, observed as well as NCEP climate data for the 1970-2000 reference period are used. The future projections are made for two time periods: 2046-2065 (2050s) and 2081-2100 (2090s). For the 2050s, the downscaled climate predictions indicate rises of 0.6°C to 2.7°C for the seasonal maximum temperatures Tmax and of 0.5°C to 2.44°C for the minimum temperatures Tmin. Similarly, during the 2090s the seasonal Tmax increases by 0.9°C to 4.63°C and Tmin by 1°C to 4.6°C, whereby these increases are generally higher for the A2 than for the A1B scenario. For most sub-basins of the UBNRB, the predicted changes of Tmin are larger than those of Tmax. For precipitation, both downscaling tools predict large changes which, depending on the GCM employed, range from decreases of -36% to +1% in the spring and summer seasons to changes of -8% to +126% in the autumn and winter seasons for the two future time periods, regardless of the SRES scenario used.

In the second part of the thesis, the semi-distributed, physically based hydrologic model SWAT (Soil Water Assessment Tool) is used to evaluate the impacts of the predicted future climate change on the hydrology and water resources of the UBNRB. The downscaled future predictors are used as input to the SWAT model to predict the streamflow of the Upper Blue Nile as well as other relevant water-resources parameters in the basin. Calibration and validation of the streamflow model are again based on 1970-2000 measured discharge at the outlet gauge station Eldiem, whereby the most sensitive of the numerous "tuneable" calibration parameters in SWAT were selected by means of a sophisticated sensitivity analysis. A good calibration/validation model performance with a high NSE coefficient of 0.89 is obtained. The future streamflow simulations, using both SDSM- and LARS-WG-downscaled output in SWAT, reveal a decline of -10% to -61% in the future Blue Nile streamflow. Expectedly, these adverse effects on future UBNRB water availability are more exacerbated for the 2090s than for the 2050s, regardless of the SRES scenario.
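
The NSE coefficient quoted for the SWAT calibration is the Nash-Sutcliffe efficiency, which compares model error against the variance of the observations. A minimal sketch of its computation follows; the discharge series is invented, not the Eldiem record.

```python
import numpy as np

def nash_sutcliffe(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Made-up monthly discharges (m^3/s) standing in for a gauge record.
observed  = np.array([480., 1450., 3900., 5200., 2100., 760.])
simulated = np.array([510., 1380., 4050., 4900., 2250., 700.])
print(f"NSE = {nash_sutcliffe(observed, simulated):.2f}")
```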

Abstract:

This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots, e.g. by minimizing operation time, while ensuring human safety, e.g. by collision avoidance. To solve such problems, an online motion planning approach for robotic manipulators with HRI is proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies for the robotic manipulators considered in the thesis are performed directly in the workspace for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP), which is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by the human operators, are generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions from probability density functions in the form of single points, such as most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as regions possibly occupied by the human at future instances of time. The method is further enhanced by combining it with reachability analysis to increase the prediction accuracy. These safety-relevant regions subsequently serve as safety constraints when the motion is planned by optimization. This way one arrives at motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the robot is to drive its end-effector through a nominal sequence of gripping, motion, and releasing operations while avoiding collision with the human arm.
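
To make the mixed-integer idea concrete, the sketch below encodes the standard big-M disjunction that keeps a point (a stand-in for the end-effector position) outside an axis-aligned box representing a safety-relevant region, while staying as close as possible (in L1 norm) to a goal. This is an illustrative toy using the pulp package, not the thesis' actual MIP formulation; all names and numbers are invented.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, LpBinary,
                  lpSum, value, PULP_CBC_CMD)

# One-step toy of workspace planning as a mixed-integer program:
# stay outside a box (stand-in for a safety-relevant region) while
# getting as close as possible to a goal that lies inside the box.
M = 100.0                              # big-M constant
xlo, xhi, ylo, yhi = -1.0, 1.0, -1.0, 1.0   # hypothetical occupied box
gx, gy = 0.2, 0.0                           # hypothetical goal point

prob = LpProblem("avoid_region", LpMinimize)
x = LpVariable("x", -5, 5)
y = LpVariable("y", -5, 5)
b = [LpVariable(f"b{i}", cat=LpBinary) for i in range(4)]
# Epigraph variables for |x - gx| and |y - gy|
dxp, dxm = LpVariable("dxp", 0), LpVariable("dxm", 0)
dyp, dym = LpVariable("dyp", 0), LpVariable("dym", 0)
prob += dxp + dxm + dyp + dym          # L1 distance to the goal
prob += x - gx == dxp - dxm
prob += y - gy == dyp - dym
# Disjunction "left OR right OR below OR above the box" via big-M:
prob += x <= xlo + M * (1 - b[0])
prob += x >= xhi - M * (1 - b[1])
prob += y <= ylo + M * (1 - b[2])
prob += y >= yhi - M * (1 - b[3])
prob += lpSum(b) >= 1                  # at least one side must be active
prob.solve(PULP_CBC_CMD(msg=False))
print(value(x), value(y))  # a point on the box boundary, e.g. (1.0, 0.0)
```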

Abstract:

The increasing interconnection of information and communication systems leads to ever greater complexity and, with it, to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to detect unusual behavior and security violations automatically. While signature-based approaches can only detect already-known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of the normal network state (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Several approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections; the stability of the growth topology is increased by novel approaches for the initialization of the weight vectors and by strengthening the winner neurons; and a self-adaptive procedure is introduced to keep the model continuously up to date. Furthermore, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, because of the concept-drift phenomenon, network traffic data change constantly, producing non-stationary network data in real time. This phenomenon is handled by the update model: the EGHSOM model effectively detects new anomalies, and the NNB model optimally adapts to the changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic, and realistic data, and the accuracy of the adaptive classifier was estimated by 10-fold cross validation.

In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the best overall performance (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
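
As an illustration of the classification-confidence margin idea (not the EGHSOM implementation itself), the sketch below labels a connection vector by its best-matching unit and flags it as unknown, for hand-over to the NNB stage, whenever the margin between the best and second-best matches is too small. All vectors and the threshold are made up.

```python
import numpy as np

def classify_with_margin(x, units, labels, threshold=0.15):
    """Assign x the label of its best-matching unit, unless the
    confidence margin (second-best distance minus best distance,
    normalized) is too small; then flag it for the NNB stage."""
    d = np.linalg.norm(units - x, axis=1)
    best, second = np.partition(d, 1)[:2]
    margin = (second - best) / (second + 1e-12)
    if margin < threshold:
        return "unknown"                    # hand over to the NNB model
    return labels[int(np.argmin(d))]

units  = np.array([[0., 0.], [1., 1.], [4., 4.]])   # toy SOM weight vectors
labels = ["normal", "normal", "attack"]
print(classify_with_margin(np.array([3.9, 4.2]), units, labels))  # attack
print(classify_with_margin(np.array([0.5, 0.5]), units, labels))  # unknown
```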

Abstract:

In this work, we present an atomistic-continuum model for simulating ultrafast laser-induced melting processes in semiconductors, using silicon as an example. The kinetics of the transient non-equilibrium phase transition are addressed with molecular dynamics (MD) on the atomic level, whereas laser light absorption, the strong electron-phonon non-equilibrium it generates, fast heat conduction, and photo-excited free-carrier diffusion are accounted for with a continuum TTM-like model (called nTTM). First, we independently consider the applications of nTTM and MD for the description of silicon, and then construct the combined MD-nTTM model. Its development and thorough testing are followed by a comprehensive computational study of the fast non-equilibrium processes induced in silicon by ultrashort laser irradiation. The new model allowed us to investigate the effect of laser-induced pressure and lattice temperature on the melting kinetics. Two competing melting mechanisms, heterogeneous and homogeneous, were identified in our large-scale simulations. Apart from the classical heterogeneous melting mechanism, homogeneous nucleation of the liquid phase inside the material contributes significantly to the melting process. The simulations showed that, due to the open diamond structure of the crystal, the laser-generated internal compressive stresses reduce the crystal's stability against homogeneous melting; consequently, the latter can take on a massive character within several picoseconds of laser heating. Due to silicon's large negative volume of melting, the material contracts upon the phase transition, relaxing the compressive stresses, and the subsequent melting proceeds heterogeneously until the excess thermal energy is consumed. A series of simulations over a range of absorbed fluences allowed us to find the threshold fluence at which homogeneous liquid nucleation starts contributing to the classical heterogeneous propagation of the solid-liquid interface. A series of simulations over a range of material thicknesses showed that the sample width chosen in our simulations (800 nm) corresponds to a thick sample. Additionally, to support the main conclusions, the results were verified with a different interatomic potential. Possible improvements of the model to account for non-thermal effects are discussed, and certain restrictions on suitable interatomic potentials are identified. As a first step towards including these effects in MD-nTTM, we performed nanometer-scale MD simulations with a new interatomic potential designed to reproduce ab initio calculations at a laser-induced electronic temperature of 18946 K. The simulations demonstrated that, similarly to thermal melting, the non-thermal phase transition occurs through nucleation. A series of simulations showed that higher (lower) initial pressure reinforces (hinders) the creation and growth of non-thermal liquid nuclei. Using Si as an example, the laser melting kinetics of semiconductors was found to differ noticeably from that of metals with a face-centered cubic crystal structure. The results of this study therefore have important implications for the interpretation of experimental data on the melting kinetics of semiconductors.
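
The continuum half of such a combined model reduces, in its most stripped-down form, to two coupled heat equations for the electron and lattice subsystems. The sketch below integrates a bare-bones 1D two-temperature model with explicit finite differences; all material parameters are placeholders, not the silicon nTTM parametrization of the thesis.

```python
import numpy as np

# Bare-bones 1D two-temperature model (TTM): electron and lattice
# temperatures coupled through an electron-phonon exchange term.
# Parameters are placeholders, not the nTTM parametrization.
nx, dx, dt, steps = 200, 1e-9, 2e-17, 5000    # grid, spacing (m), step (s)
Ce, Cl = 2.0e4, 1.6e6      # heat capacities (J m^-3 K^-1)
ke = 100.0                 # electron thermal conductivity (W m^-1 K^-1)
g  = 1.0e17                # electron-phonon coupling (W m^-3 K^-1)

Te = np.full(nx, 300.0)
Tl = np.full(nx, 300.0)
Te[:20] = 5000.0           # crude stand-in for laser-heated electrons

for _ in range(steps):
    lap = np.zeros(nx)     # second spatial derivative of Te (interior only)
    lap[1:-1] = (Te[2:] - 2 * Te[1:-1] + Te[:-2]) / dx**2
    exch = g * (Te - Tl)   # energy flow from electrons to the lattice
    Te += dt * (ke * lap - exch) / Ce
    Tl += dt * exch / Cl

print(f"surface: Te = {Te[0]:.0f} K, Tl = {Tl[0]:.0f} K")
```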

Abstract:

We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is its ability to synthesize images whose viewing position lies significantly outside the viewing cone of the example images ("view extrapolation"), without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, demonstrating that an object can be represented by a relatively small number of model images, enabling cheap and fast viewers that run on standard hardware.
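
The trilinear-tensor machinery does not fit in a few lines, but the underlying idea of synthesizing a novel view purely by re-positioning corresponding image features can be caricatured. The sketch below linearly blends corresponding 2D points between two example views; this is a view-morphing-style simplification on invented data, not the tensor method of the paper.

```python
import numpy as np

def morph_points(p0, p1, t):
    """Blend corresponding 2D feature points of two example views.
    t = 0 gives view 0, t = 1 gives view 1; values outside [0, 1]
    caricature 'view extrapolation' beyond the example pair."""
    return (1 - t) * p0 + t * p1

# Toy correspondences: the same 3 features located in two example images.
view0 = np.array([[10., 20.], [40., 22.], [25., 60.]])
view1 = np.array([[14., 20.], [44., 21.], [30., 58.]])
print(morph_points(view0, view1, 0.5))   # interpolated view
print(morph_points(view0, view1, 1.5))   # extrapolated view
```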

Abstract:

We describe a method for modeling object classes (such as faces) using 2D example images, and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. A model then consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown, as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications, including the computation of correspondence between novel images of a known class, object recognition, image synthesis, and image compression.
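
A much-reduced sketch of the matching step: treat the novel image as a linear combination of prototype images and recover the coefficients by gradient descent on the reconstruction error. Plain batch gradient descent stands in for the paper's stochastic version, there is no shape/texture separation, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
prototypes = rng.random((5, 64))                    # 5 prototype images, 64 pixels
novel = 0.6 * prototypes[0] + 0.4 * prototypes[3]   # synthetic novel image

c = np.zeros(5)                     # linear-combination coefficients
lr = 0.5
for _ in range(2000):
    residual = c @ prototypes - novel           # current model minus target
    c -= lr * (prototypes @ residual) / 64      # gradient of 0.5*||residual||^2
print(np.round(c, 2))               # recovers ~[0.6, 0, 0, 0.4, 0]
```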

Abstract:

Most psychophysical studies of object recognition have focused on the recognition and representation of individual objects that subjects had previously been explicitly trained on. Correspondingly, modeling studies have often employed a 'grandmother'-type representation in which the objects to be recognized are represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiological studies support a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate-level discrimination in a continuous object class, images of computer-rendered cars, created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the class-specific object representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation which supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.

Abstract:

Tsunoda et al. (2001) recently studied the nature of object representation in monkey inferotemporal (IT) cortex using a combination of optical imaging and extracellular recordings. In particular, they examined IT neuron responses to complex natural objects and "simplified" versions thereof. In 42% of the cases, optical imaging revealed a decrease in the number of activation patches in IT as stimuli were simplified; in the other 58%, however, simplification of the stimuli actually led to the appearance of additional activation patches in IT. Based on these results, the authors propose a scheme in which an object is represented by combinations of active and inactive columns coding for individual features. We examine the patterns of activation caused by the same stimuli in our model of object recognition in cortex (Riesenhuber 99). We find that object-tuned units can show a pattern of appearance and disappearance of features identical to the experiment. Thus, the data of Tsunoda et al. appear to be in quantitative agreement with a simple object-based representation in which an object's identity is coded by its similarities to reference objects. Moreover, the agreement of simulations and experiment suggests that the simplification procedure used by Tsunoda et al. (2001) is not necessarily an accurate method to determine neuronal tuning.
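
The "similarities to reference objects" representation invoked here is easy to caricature: each reference object drives a Gaussian-tuned unit, and an object's code is the vector of unit activations. A minimal sketch with invented feature vectors:

```python
import numpy as np

def population_code(features, references, sigma=1.0):
    """Activations of Gaussian-tuned units centered on reference objects;
    an object's identity is carried by this similarity vector."""
    d2 = ((references - features) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

refs = np.array([[0., 0.], [1., 0.], [0., 1.]])   # toy reference objects
stim = np.array([0.9, 0.1])                       # a 'simplified' stimulus
print(np.round(population_code(stim, refs), 3))
# Removing features can raise some activations while lowering others,
# mirroring the appearance/disappearance of activation patches.
```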

Abstract:

Stock markets employ specialized traders, market-makers, who provide liquidity and volume to the market by standing ready to both buy and sell at all times. In this paper, we demonstrate a novel method for modeling the market as a dynamic system, together with a reinforcement learning algorithm that learns profitable market-making strategies when run on this model. We model the order flow, the sequence of buys and sells for a particular stock, as an Input-Output Hidden Markov Model (IOHMM) fit to historical data. Combined with the dynamics of the order book, this creates a highly non-linear and difficult dynamic system. Our reinforcement learning algorithm, based on likelihood ratios, is run on this partially observable environment. We demonstrate learning results for two separate real stocks.
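
The likelihood-ratio technique named here is the score-function (REINFORCE-style) gradient estimator: the policy gradient is estimated by weighting observed rewards with the derivative of the log-probability of the chosen action. A toy one-step sketch, not the paper's market-making setup:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0                      # policy parameter: P(buy) = sigmoid(theta)
lr = 0.1

def reward(action):              # made-up payoff: buying is better on average
    return 1.0 if action == 1 else 0.2

for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-theta))
    a = int(rng.random() < p)            # sample an action from the policy
    score = a - p                        # d/dtheta of log pi(a | theta)
    theta += lr * reward(a) * score      # likelihood-ratio gradient step
print(f"P(buy) ~= {1.0 / (1.0 + np.exp(-theta)):.2f}")   # approaches 1.0
```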

Abstract:

Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, as multiple parties may be involved in the development process. The Grid provides a platform for coordinated resource sharing and for application development and execution. In this paper, we survey existing technologies in modeling and simulation, focusing on the interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid, and discuss the issues involved in achieving composability.

Abstract:

We present a new approach to modeling and classifying breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and second, we use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texture and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact tissue representation based on their densities that is useful for discrimination in mammogram classification. We show the results of tissue classification on the MIAS and DDSM datasets, and we compare our method with approaches that classified these same datasets, showing the better performance of our proposal.
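
pLSA treats each image as a document, each texton as a visual word, and fits per-image aspect mixtures P(z|d) and per-aspect word distributions P(w|z) by EM. A compact sketch of the standard updates on an invented count matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.integers(0, 10, size=(6, 8)).astype(float)   # toy counts: 6 images x 8 textons
D, W, Z = 6, 8, 2                                    # images, visual words, aspects

p_w_z = rng.random((Z, W)); p_w_z /= p_w_z.sum(1, keepdims=True)   # P(w|z)
p_z_d = rng.random((D, Z)); p_z_d /= p_z_d.sum(1, keepdims=True)   # P(z|d)

for _ in range(100):
    # E-step: responsibilities P(z|d,w), shape (D, W, Z)
    joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
    post = joint / joint.sum(2, keepdims=True)
    # M-step: re-estimate both distributions from expected counts
    nz = n[:, :, None] * post
    p_w_z = nz.sum(0).T + 1e-12; p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = nz.sum(1) + 1e-12;   p_z_d /= p_z_d.sum(1, keepdims=True)

print(np.round(p_z_d, 2))   # per-image aspect mixture = compact tissue signature
```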

Abstract:

This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects highly related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion, and hardware design, among others. On the other hand, the idea is to implement useful navigation strategies so that the robot can serve as a mobile multimedia information point. It is in this context, when navigation strategies are oriented towards goal achievement, that a local model predictive control is attained. Such studies are therefore presented as a very interesting control strategy for developing the future capabilities of the system.
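
A local model predictive speed controller can be sketched in a few lines: given a discrete-time model of the drive, choose the command minimizing predicted tracking error over a short horizon and re-plan at every step. This is an illustrative toy with invented model constants, not the PRIM controller.

```python
import numpy as np

# Toy receding-horizon speed controller for a first-order drive model
# v[k+1] = a*v[k] + b*u[k]. Model constants and weights are invented.
a, b = 0.9, 0.1
N = 10                                   # prediction horizon
candidates = np.linspace(-1.0, 1.0, 21)  # admissible motor commands

def mpc_step(v, v_ref, lam=0.01):
    """Pick the constant command over the horizon that minimizes
    tracking error plus a small control-effort penalty."""
    best_u, best_cost = 0.0, np.inf
    for u in candidates:
        vp, cost = v, 0.0
        for _ in range(N):
            vp = a * vp + b * u
            cost += (vp - v_ref) ** 2 + lam * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u                        # apply the first move, then re-plan

v = 0.0
for k in range(40):
    v = a * v + b * mpc_step(v, v_ref=0.5)
print(f"speed after 40 steps: {v:.3f}")  # settles near the 0.5 reference
```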