990 results for modeling algorithms


Relevance:

30.00%

Publisher:

Abstract:

This paper presents an innovative background modeling technique that accurately segments foreground regions in RGB-D imagery (RGB plus depth). The technique is based on a Bayesian framework that efficiently fuses different sources of information to segment the foreground. In particular, the final segmentation is obtained by considering a prediction of the foreground regions, carried out by a novel Bayesian network with a depth-based dynamic model, together with two independent depth- and color-based mixture-of-Gaussians background models. The efficient Bayesian combination of all these data reduces the noise and uncertainty introduced by the color and depth features and the corresponding models. As a result, more compact segmentations and refined foreground object silhouettes are obtained. Experimental results on different databases suggest that the proposed technique outperforms existing state-of-the-art algorithms.
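The paper's exact Bayesian-network fusion is not reproduced here; purely as a rough sketch of the general idea of running independent colour- and depth-based mixture-of-Gaussians background models and combining their foreground evidence, the snippet below uses OpenCV's MOG2 subtractor and an assumed, simplistic weighted fusion in place of the Bayesian combination.

```python
# Sketch only: two independent mixture-of-Gaussians background models
# (colour and depth) whose foreground masks are fused with a simple
# weighted average. The weighting is an assumption, not the paper's
# Bayesian-network combination.
import cv2
import numpy as np

color_bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
depth_bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def segment(color_frame, depth_frame_8u, w_color=0.5, w_depth=0.5, thresh=0.5):
    """Both frames are expected as 8-bit images (depth already rescaled)."""
    m_color = color_bg.apply(color_frame).astype(np.float32) / 255.0
    m_depth = depth_bg.apply(depth_frame_8u).astype(np.float32) / 255.0
    fused = w_color * m_color + w_depth * m_depth
    return (fused >= thresh).astype(np.uint8) * 255
```

The fixed weights stand in for the per-pixel uncertainty handling that the paper's Bayesian framework provides, and a real depth stream would normally be rescaled to 8-bit before being fed to the subtractor.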

Relevance:

30.00%

Publisher:

Abstract:

In recent years, Ge has regained attention for integration into existing microelectronic technologies. Even though it is not thought to be a feasible full replacement for Si in the near future, it will likely serve as an excellent complement to enhance electrical properties in future devices, especially owing to its high carrier mobilities. This integration requires a significant upgrade of state-of-the-art manufacturing processes. Simulation techniques, such as kinetic Monte Carlo (KMC) algorithms, provide an appealing environment for research and innovation in the field, especially in terms of time and funding costs. In the present study, KMC techniques are used, for the first time, to understand Ge front-end processing, specifically damage accumulation and amorphization produced by ion implantation and Solid Phase Epitaxial Regrowth (SPER) of the amorphized layers. First, Binary Collision Approximation (BCA) simulations are used to calculate the damage caused by every ion. The evolution of this damage over time is simulated using non-lattice, or object, KMC (OKMC), in which only defects are considered. SPER is simulated through a Lattice KMC (LKMC) approach, which is able to follow the evolution of the lattice atoms forming the amorphous/crystalline interface. With the amorphization model developed in this work, implemented into a multi-material process simulator, all of these processes can be simulated. It has been possible to understand damage accumulation, from point-defect generation up to the formation of complete amorphous layers. This accumulation occurs in three well-differentiated regimes, starting with a slow formation rate of damage regions, followed by a fast local relaxation of areas into the amorphous phase where both crystalline and amorphous phases coexist, and ending in the full amorphization of extended layers, where the accumulation rate saturates. The transition occurs when the damage concentration exceeds a certain threshold value, which is independent of the implantation conditions. When ions are implanted at relatively high temperatures, dynamic annealing takes place, healing the previously induced damage and establishing a competition between damage generation and its dissolution. These effects become especially important for light ions, such as B, for which the created damage is more dilute, smaller, and differently distributed than that caused by implanting heavier ions, such as Ge. This description successfully reproduces the damage quantity and the extent of amorphous layers caused by ion implantation reported in the literature. The recrystallization velocity of a previously amorphized sample depends strongly on the substrate orientation.
The presented LKMC model explains these differences between orientations through a simple description, dominated by a single activation energy and different prefactors for the SPER rates depending on the neighboring configuration of the recrystallizing atoms. Twin-defect formation appears as a consequence of this description and is predominant for (111)-oriented Ge substrates. The model reproduces experimental results for different orientations, temperatures, and evolution times of the amorphous/crystalline interface reported by different authors. Preliminary parameterizations of the activation strain tensors also provide a good match between simulations and reported experimental SPER velocities at different temperatures under applied hydrostatic pressure. The studies presented in this thesis have helped to achieve a greater understanding of damage generation, its evolution, amorphization, and SPER mechanisms in Ge, and they also provide a useful tool to continue research in this field.
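Purely as a hedged illustration of the kinetic Monte Carlo machinery referred to above (not the multi-material simulator developed in the thesis), the sketch below runs a residence-time KMC loop with Arrhenius event rates; the event list, prefactors, and activation energies are placeholders.

```python
# Minimal residence-time (BKL) kinetic Monte Carlo loop with Arrhenius
# event rates. Event names, prefactors, and activation energies are
# placeholders, not parameters of the Ge damage/SPER model.
import math
import random

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(prefactor_hz, ea_ev, temp_k):
    return prefactor_hz * math.exp(-ea_ev / (KB_EV * temp_k))

def kmc(events, temp_k, t_end_s):
    """events: list of (name, prefactor_hz, activation_energy_ev)."""
    t = 0.0
    while t < t_end_s:
        rates = [arrhenius(nu0, ea, temp_k) for _, nu0, ea in events]
        total = sum(rates)
        if total == 0.0:
            break
        # choose an event with probability proportional to its rate
        r = random.random() * total
        acc = 0.0
        chosen = events[-1][0]
        for (name, _, _), k in zip(events, rates):
            acc += k
            if r <= acc:
                chosen = name
                break
        # advance the clock by an exponentially distributed waiting time
        t += -math.log(1.0 - random.random()) / total
        yield t, chosen

# placeholder event list: a point-defect migration and a recrystallization jump
for time_s, event in kmc([("I-migration", 1e13, 0.6), ("SPER-jump", 1e14, 2.0)], 823.0, 1e-6):
    pass  # a real OKMC/LKMC code would update the defect or lattice state here
```

In an OKMC or LKMC code, each selected event would move, create, or annihilate a defect or lattice-site object; here the state update is left as a stub.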

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

The design, development, and use of complex systems models raise a unique class of challenges and potential pitfalls, many of which are commonly recurring problems. Over time, researchers gain experience in this form of modeling, choosing algorithms, techniques, and frameworks that improve the quality, confidence level, and speed of development of their models. This increasing collective experience of complex systems modelers is a resource that should be captured. Fields such as software engineering and architecture have benefited from the development of generic solutions to recurring problems, called patterns. Using pattern development techniques from these fields, insights from communities such as learning and information processing, data mining, bioinformatics, and agent-based modeling can be identified and captured. Collections of such 'pattern languages' would allow knowledge gained through experience to be readily accessible to less-experienced practitioners and to other domains. This paper proposes a methodology for capturing the wisdom of computational modelers by introducing example visualization patterns and a pattern classification system for analyzing the relationship between micro and macro behavior in complex systems models. We anticipate that a new field of complex systems patterns will provide an invaluable resource for both practicing and future generations of modelers.

Relevance:

30.00%

Publisher:

Abstract:

The development of models in the Earth sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples to illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
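As a hedged illustration of the usage style described above, a minimal sketch of a Poisson-type problem on a Finley rectangle might look as follows; the class and keyword names follow the publicly documented esys-escript API, so the exact spelling may differ between versions, and this is not code from the talk itself.

```python
# Minimal sketch, assuming the esys-escript/Finley API: solve a simple
# Poisson-type problem -div(grad u) = 1 on the unit square with u = 0
# on the left edge. Not code from the talk; keyword names follow
# escript's documented LinearPDE interface.
from esys.escript import *                     # kronecker, whereZero, etc.
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)   # 40 x 40 element mesh
x = domain.getX()

pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain),   # coefficient of -div(A grad u)
             Y=1.0,                 # right-hand side
             q=whereZero(x[0]),     # Dirichlet mask: the x0 = 0 edge
             r=0.0)                 # Dirichlet value on that edge
u = pde.getSolution()
print(u)
```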

Relevance:

30.00%

Publisher:

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of both photons, whose mobility is used to efficiently establish long-range entanglement, and spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into/out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are promising building blocks of future scalable architectures and can be used for proof-of-principle experiments in quantum information processing and quantum simulation.
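The pulse-sequence simulations mentioned above amount to numerically integrating the master equation for the system density matrix. Not the authors' code, and with a placeholder single-qubit Hamiltonian and dephasing rate, a sketch of that kind of calculation with QuTiP might look like this.

```python
# Generic Lindblad master-equation evolution of a driven qubit with pure
# dephasing, illustrating the kind of density-matrix simulation described
# in the abstract. Hamiltonian and rates are placeholders.
import numpy as np
from qutip import basis, sigmax, sigmaz, mesolve

omega = 1.0          # qubit splitting (arbitrary units)
drive = 0.2          # drive amplitude
gamma_phi = 0.01     # pure dephasing rate

H = 0.5 * omega * sigmaz() + drive * sigmax()
rho0 = basis(2, 0) * basis(2, 0).dag()           # start in |0><0|
c_ops = [np.sqrt(gamma_phi) * sigmaz()]          # dephasing collapse operator
tlist = np.linspace(0.0, 50.0, 500)

result = mesolve(H, rho0, tlist, c_ops, e_ops=[sigmaz()])
print(result.expect[0][-1])   # <sigma_z> at the final time
```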

Relevance:

30.00%

Publisher:

Abstract:

The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to the domain experts (screening scientists, biologists, chemists, etc.), we developed software based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE). The software also provides global and local regression facilities. It supports regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function network (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of when creating a new model, and provides information about how to install and use the tool. The user manual does not require the reader to be familiar with the algorithms it implements. Basic computing skills are enough to operate the software.
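The GTM/HGTM implementations belong to the tool itself and are not reproduced here; as a rough stand-in for the conventional projection techniques the manual mentions, the sketch below projects a placeholder data matrix to two dimensions with PCA and Locally Linear Embedding using scikit-learn, an assumed substitute library rather than the software described.

```python
# Project a data matrix to 2-D with PCA and Locally Linear Embedding,
# two of the conventional techniques listed in the abstract. scikit-learn
# is used here purely as a stand-in; the data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # placeholder screening data

pca_2d = PCA(n_components=2).fit_transform(X)
lle_2d = LocallyLinearEmbedding(n_components=2, n_neighbors=12).fit_transform(X)

print(pca_2d.shape, lle_2d.shape)       # (200, 2) (200, 2)
```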

Relevance:

30.00%

Publisher:

Abstract:

Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide this integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), Neuroscale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of when creating a new model, and provides information about how to install and use the tool. The user manual does not require the reader to be familiar with the algorithms it implements. Basic computing skills are enough to operate the software.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. A methodology for noninvasively characterizing the three-dimensional (3-D) shape of the complete human eye is not currently available for research into ocular diseases that have a structural substrate, such as myopia. A novel application of a magnetic resonance imaging (MRI) acquisition and analysis technique is presented that, for the first time, allows the 3-D shape of the eye to be investigated fully. METHODS. The technique involves the acquisition of a T2-weighted MRI, which is optimized to reveal the fluid-filled chambers of the eye. Automatic segmentation and meshing algorithms generate a 3-D surface model, which can be shaded with morphologic parameters such as distance from the posterior corneal pole and deviation from sphericity. Full details of the method are illustrated with data from 14 eyes of seven individuals. The spatial accuracy of the calculated models is demonstrated by comparing the MRI-derived axial lengths with values measured in the same eyes using interferometry. RESULTS. The color-coded eye models showed substantial variation in the absolute size of the 14 eyes. Variations in the sphericity of the eyes were also evident, with some appearing approximately spherical whereas others were clearly oblate and one was slightly prolate. Nasal-temporal asymmetries were noted in some subjects. CONCLUSIONS. The MRI acquisition and analysis technique allows a novel way of examining 3-D ocular shape. The ability to stratify and analyze eye shape, ocular volume, and sphericity will further extend the understanding of which specific biometric parameters predispose emmetropic children subsequently to develop myopia. Copyright © Association for Research in Vision and Ophthalmology.
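As a loose illustration of one of the morphologic parameters mentioned above, deviation from sphericity, the sketch below fits a sphere to surface-model vertices by linear least squares and reports each vertex's signed distance to that sphere; the synthetic oblate test data and all numbers are placeholders, not the study's measurements.

```python
# Least-squares sphere fit to surface-model vertices, then per-vertex
# deviation from sphericity (signed distance to the fitted sphere).
# A generic illustration, not the authors' analysis code.
import numpy as np

def fit_sphere(points):
    """points: (N, 3) array of vertex coordinates."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def sphericity_deviation(points):
    center, radius = fit_sphere(points)
    return np.linalg.norm(points - center, axis=1) - radius

# synthetic, slightly oblate test surface (placeholder dimensions in mm)
rng = np.random.default_rng(1)
u, v = rng.uniform(0, np.pi, 1000), rng.uniform(0, 2 * np.pi, 1000)
pts = np.column_stack([12 * np.sin(u) * np.cos(v),
                       12 * np.sin(u) * np.sin(v),
                       11 * np.cos(u)])
print(sphericity_deviation(pts).max())
```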

Relevance:

30.00%

Publisher:

Abstract:

Wireless network optimisations can typically be designed and evaluated independently of one another under the assumption that they can be applied jointly or independently. In this paper, we analyse some rate algorithms in wireless networks. Wireless networks follow different IEEE standards with their own particular features, and data rate is one of the important parameters on which wireless network performance depends. The optimisation of such a network depends on the behaviour of a particular rate algorithm in a given network scenario. We consider some first- and second-generation rate algorithms, all of which are about selecting an appropriate data rate that any available wireless network can use for transmission in order to achieve good performance. We have designed and analysed a wireless network and present results obtained for several rate algorithms, such as ONOE and AARF.
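For readers unfamiliar with the algorithms named above, the sketch below implements the commonly described AARF behaviour (raise the rate after a run of consecutive successes, drop it after consecutive failures, and double the success threshold when a probe at the higher rate fails immediately); the rate set and thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal AARF-style rate controller. Rate set and constants are
# illustrative only; real implementations differ in detail.
RATES_MBPS = [1, 2, 5.5, 11]  # e.g. an 802.11b-like rate set

class AARF:
    def __init__(self):
        self.idx = 0
        self.successes = 0
        self.failures = 0
        self.success_threshold = 10
        self.just_probed = False

    @property
    def rate(self):
        return RATES_MBPS[self.idx]

    def report(self, ok):
        """Call after each transmission with ok=True on success."""
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.success_threshold and self.idx < len(RATES_MBPS) - 1:
                self.idx += 1              # probe the next higher rate
                self.successes = 0
                self.just_probed = True
            else:
                self.just_probed = False
        else:
            self.failures += 1
            self.successes = 0
            if self.just_probed:
                # probe failed immediately: back off and become more conservative
                self.success_threshold = min(self.success_threshold * 2, 50)
                self.idx -= 1
            elif self.failures >= 2 and self.idx > 0:
                self.success_threshold = 10
                self.idx -= 1
            self.just_probed = False
```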

Relevance:

30.00%

Publisher:

Abstract:

The focus of this study is the development of parallelised versions of severely sequential and iterative numerical algorithms on a multi-threaded parallel platform such as a graphics processing unit. This requires the design and development of a platform-specific numerical solution that can benefit from the parallel capabilities of the chosen platform. A graphics processing unit was chosen as the parallel platform for the design and development of a numerical solution for a specific physical model in non-linear optics. This problem appears in describing ultra-short pulse propagation in bulk transparent media, which has recently been the subject of several theoretical and numerical studies. The mathematical model describing this phenomenon is a challenging and complex problem, and its numerical modeling is limited on current workstations. Numerical modeling of this problem requires the parallelisation of essentially serial algorithms and the elimination of numerical bottlenecks. The main challenge to overcome is the parallelisation of the globally non-local mathematical model. This thesis presents a numerical solution that eliminates the numerical bottleneck associated with the non-local nature of the mathematical model. The accuracy and performance of the parallel code are assessed by back-to-back testing against a similar serial version.
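The thesis code is not reproduced here; purely as a generic illustration of moving an FFT-bound kernel of a pulse-propagation scheme onto a GPU, the sketch below evaluates one linear dispersion step of a split-step method with CuPy. The step, the parameters, and the Gaussian test pulse are placeholders and far simpler than the globally non-local model discussed above.

```python
# One linear dispersion step of a split-step pulse-propagation scheme,
# evaluated on the GPU via CuPy FFTs. Generic illustration only; not the
# thesis model or its numerical method.
import cupy as cp

def dispersion_step(field, dz, beta2, dt):
    """field: complex envelope sampled at spacing dt; dz: step length."""
    n = field.size
    omega = 2 * cp.pi * cp.fft.fftfreq(n, d=dt)
    phase = cp.exp(-0.5j * beta2 * omega ** 2 * dz)
    return cp.fft.ifft(cp.fft.fft(field) * phase)

t = cp.linspace(-5.0, 5.0, 4096)
pulse = cp.exp(-t ** 2)                      # placeholder Gaussian pulse
out = dispersion_step(pulse, dz=1e-3, beta2=1.0, dt=float(t[1] - t[0]))
print(float(cp.abs(out).max()))
```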

Relevance:

30.00%

Publisher:

Abstract:

Link adaptation is a critical component of IEEE 802.11 systems. In this paper, we analytically model a retransmission-based Auto Rate Fallback (ARF) link adaptation algorithm. Both packet collisions and packet corruptions are modeled along with the algorithm. The models can provide insights into the dynamics of link adaptation algorithms and the configuration of algorithm parameters. It is also observed that when the number of competing stations is high, packet collisions can largely affect the performance of ARF and make ARF operate at the lowest data rate, even when no packet corruption occurs. This is in contrast to the existing assumption that packet collisions will not affect the correct operation of ARF and can be ignored in its evaluation. The work presented in this paper can provide guidelines on configuring link adaptation algorithms and designing new link adaptation algorithms for future high-speed 802.11 systems. © 2006 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

A polyparametric intelligent information system for diagnosing the human functional state in medicine and public health has been developed. The essence of the system is the polyparametric description of the human functional state with a unified set of physiological parameters and the use of the polyparametric cognitive model, developed as a tool for the system analysis of multidimensional data and for the diagnosis of the human functional state. The model is built on the basis of general principles of geometry and symmetry using algorithms of artificial intelligence systems. The architecture of the system is presented. The model allows the analysis of traditional signs, the absolute values of electrophysiological parameters, and of new signs generated by the model, the relationships between them. The classification of multidimensional physiological data is performed with a transformer of the model. The results are presented to the physician in the form of a visual graph, a pattern of the individual functional state. This graph allows clinical syndrome analysis to be performed. The level of the human functional state is determined with respect to the developed standard (“ideal”) functional state. The complete formalization of the results makes it possible to accumulate physiological data and to analyze them by mathematical methods.

Relevance:

30.00%

Publisher:

Abstract:

Summarizing the experience accumulated over a long time in the polyparametric cognitive modeling of different physiological processes (electrocardiogram, electroencephalogram, electroreovasogram, and others), together with the diagnostic methods developed on this basis, gives grounds for formulating a new methodology of system analysis in biology. The gist of the methodology consists of the parametrization of fractals of electrophysiological processes, a matrix description of the functional state of an object with a unified set of parameters, and the construction of a polyparametric cognitive geometric model with artificial intelligence algorithms. The geometric model makes it possible to display parameter relationships in a way that is adequate to the requirements of the system approach. The objective character of the model elements and the high degree of formalization, which facilitates the use of mathematical methods, are advantages of these models. At the same time, the geometric images are easily interpreted in physiological and clinical terms. Polyparametric modeling is an object-oriented tool possessing advanced functional facilities and some distinctive features.

Relevance:

30.00%

Publisher:

Abstract:

The applications of micro-end-milling operations have increased recently. A Micro-End-Milling Operation Guide and Research Tool (MOGART) package has been developed for the study and monitoring of micro-end-milling operations. It includes an analytical cutting force model, neural network based data mapping and forecasting processes, and genetic algorithm based optimization routines. MOGART uses neural networks to estimate tool machinability and forecast tool wear from experimental cutting force data, and genetic algorithms with the analytical model to monitor tool wear, breakage, run-out, and cutting conditions from the cutting force profiles. The performance of MOGART has been tested on the experimental data of over 800 cases, and very good agreement has been observed between the theoretical and experimental results. The MOGART package has been applied to a micro-end-milling study at the Engineering Prototype Center of the Radio Technology Division of Motorola Inc.
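MOGART itself is not publicly available; purely as an illustration of the neural-network wear-forecasting step described above, the sketch below fits a small multilayer perceptron mapping cutting-force features to tool wear with scikit-learn, using synthetic placeholder data in place of the experimental cases.

```python
# Illustrative stand-in for the neural-network wear-forecasting step:
# a small MLP mapping cutting-force features to tool wear. Data are
# synthetic placeholders, not the experimental cases used by MOGART.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(size=(800, 3))            # e.g. mean force, peak force, variance
wear = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=800)

X_tr, X_te, y_tr, y_te = train_test_split(X, wear, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out cases:", model.score(X_te, y_te))
```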