907 results for computationally efficient algorithm


Relevance:

80.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 26A33 (primary), 35S15

Relevance:

80.00%

Publisher:

Abstract:

A new mesoscale simulation model for solids dissolution, based on a computationally efficient and versatile digital modelling approach (DigiDiss), is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, its use is explored for single crystals (sugars) and clusters. Single crystals and a cluster were first scanned using X-ray microtomography to obtain digital versions of their structures, and the digitised particles and clusters were used as structural input to the digital simulation. The same particles were then dissolved in water; the dissolution process was recorded by a video camera and analysed to yield overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce the experimental behaviour based on the known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements of the modelling approach will need to include other important effects, such as complex disintegration behaviour (particle ejection) and uncertainties in chemical properties. The nature of the digital modelling approach is well suited to future implementation on high-speed hybrid systems combining conventional processors (CPUs) and graphics processors (GPUs).
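
The voxel-based dissolution idea can be illustrated with a deliberately simple 2D sketch (not the DigiDiss code itself): solid voxels release mass in proportion to the local undersaturation of the surrounding liquid, and the dissolved species spreads by explicit finite-difference diffusion. The grid size, rate constant k, diffusivity D, and saturation concentration c_sat below are illustrative assumptions.

```python
import numpy as np

def dissolve_step(solid, conc, D=0.1, k=0.05, c_sat=1.0, dt=1.0):
    """One explicit update of a toy voxel dissolution model.

    solid : 2D array of remaining solid mass per voxel (0 = liquid).
    conc  : 2D array of dissolved concentration.
    """
    # Explicit 5-point Laplacian for diffusion of the dissolved species.
    lap = (np.roll(conc, 1, 0) + np.roll(conc, -1, 0) +
           np.roll(conc, 1, 1) + np.roll(conc, -1, 1) - 4 * conc)
    conc = conc + dt * D * lap

    # Noyes-Whitney-like flux: solid voxels dissolve in proportion to the
    # local undersaturation (c_sat - conc) until their mass runs out.
    interface = solid > 0
    flux = np.where(interface, k * np.clip(c_sat - conc, 0.0, None) * dt, 0.0)
    flux = np.minimum(flux, solid)          # cannot release more than remains
    return solid - flux, conc + flux

# Toy example: a square "crystal" dissolving in an initially pure liquid.
solid = np.zeros((64, 64)); solid[24:40, 24:40] = 5.0
conc = np.zeros_like(solid)
for _ in range(2000):
    solid, conc = dissolve_step(solid, conc)
print("remaining solid mass:", solid.sum())
```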

Relevance:

80.00%

Publisher:

Abstract:

One of the aims of Hungary's new electoral law is to draw voting districts more fairly than before. This is to be ensured by rules that are similar to, though somewhat more permissive than, the recommendations of the Venice Commission's Code of Good Practice in Electoral Matters. The rules fix the number of districts and require that districts neither split smaller municipalities nor cross county borders. The article shows that, under these rules, such an apportionment is mathematically impossible. It then proposes a principled optimal solution to the problem, studies the properties of the method, and uses the authors' efficient algorithm on data from the 2010 national elections to determine the best allocation of districts among the counties. Finally, the article examines the expected effects of demographic change and makes several recommendations for keeping within the constraints over the long term: increase the number of voting districts to about 130, allow the number of districts to change at each revision, and organize districts by region rather than by county.
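
The allocation step described in the abstract can be illustrated with a standard divisor (Sainte-Laguë-style) apportionment of a fixed number of districts among counties. This is not the authors' algorithm, and the county names and populations below are placeholders rather than the 2010 data.

```python
import heapq

def apportion(populations, n_districts):
    """Allocate n_districts among counties by a Sainte-Laguë-style divisor
    method: repeatedly give the next district to the county with the largest
    population per (2*seats + 1). Returns {county: number_of_districts}."""
    seats = {c: 0 for c in populations}
    heap = [(-pop / 1.0, c) for c, pop in populations.items()]   # max-heap
    heapq.heapify(heap)
    for _ in range(n_districts):
        _, c = heapq.heappop(heap)
        seats[c] += 1
        heapq.heappush(heap, (-populations[c] / (2 * seats[c] + 1), c))
    return seats

# Hypothetical county populations (placeholders, not census figures).
pops = {"County A": 1_200_000, "County B": 800_000,
        "County C": 450_000, "County D": 300_000}
alloc = apportion(pops, 30)
print(alloc)
print("people per district:",
      {c: round(pops[c] / max(n, 1)) for c, n in alloc.items()})
```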

Relevance:

80.00%

Publisher:

Abstract:

The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects, provided that a robust and efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, algorithms, and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance.

Extensions to SBM are introduced, and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A new Record data structure is proposed. Trade-offs of employing Fact, Record, and Bitmap data structures for storing information in a semantic database are analyzed.

A clustering ID distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analyzed, and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to issues associated with making the database engine multi-platform are presented. An improvement to the atomic update algorithm, suitable for certain scenarios of database recovery, is proposed.

Specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
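
The abstract does not spell out the ID encoding, so the following is only a generic illustration of clustering-friendly object IDs: each ID packs a category prefix and a per-category serial into a length-prefixed byte string whose lexicographic order keeps objects of the same category adjacent in an ordered store (e.g., a B-tree). The encoding and names are hypothetical, not the dissertation's scheme.

```python
def encode_id(category: int, serial: int) -> bytes:
    """Pack (category, serial) into a byte string whose lexicographic order
    sorts first by category, then by serial, so related objects cluster
    together in an ordered key-value store. Purely illustrative."""
    def varlen(n: int) -> bytes:
        body = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
        return bytes([len(body)]) + body   # length prefix preserves order
    return varlen(category) + varlen(serial)

def decode_id(key: bytes):
    """Inverse of encode_id."""
    def read(buf, pos):
        length = buf[pos]
        return int.from_bytes(buf[pos + 1:pos + 1 + length], "big"), pos + 1 + length
    category, pos = read(key, 0)
    serial, _ = read(key, pos)
    return category, serial

ids = [encode_id(c, s) for c in (1, 2) for s in (1, 2, 300)]
assert sorted(ids) == ids                 # same-category objects stay adjacent
assert decode_id(encode_id(7, 1234)) == (7, 1234)
```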

Relevance:

80.00%

Publisher:

Abstract:

Recent technological developments have made it possible to design various microdevices in which fluid flow and heat transfer are involved. For the proper design of such systems, the governing physics needs to be investigated. Because it is difficult to study complex geometries at micro scales using experimental techniques, computational tools are developed to analyze and simulate flow and heat transfer in microgeometries. However, conventional numerical methods based on the Navier-Stokes equations fail to predict some aspects of microflows, such as the nonlinear pressure distribution, increased mass flow rate, slip flow, and temperature jump at the solid boundaries. This necessitates the development of new computational methods, grounded in kinetic theory, that are both accurate and computationally efficient. In this study, the lattice Boltzmann method (LBM) was used to investigate flow and heat transfer in micro-sized geometries. The LBM is based on the Boltzmann equation, which is valid across the whole range of rarefaction regimes observed in microflows. Results were obtained for isothermal channel flows at Knudsen numbers higher than 0.01 at different pressure ratios. LBM solutions for micro-Couette and micro-Poiseuille flow were found to be in good agreement, for pressure distribution and velocity field, with the analytical solutions valid in the slip flow regime (0.01 < Kn < 0.1) and with direct simulation Monte Carlo solutions valid in the transition regime (0.1 < Kn < 10). The isothermal LBM was then extended to simulate flows including heat transfer. The method was first validated for continuum channel flows with and without constrictions by comparing the thermal LBM results against accurate solutions obtained from analytical equations and the finite element method. Finally, the capability of the thermal LBM was improved by adding the effect of rarefaction, and the method was used to analyze the behavior of gas flow in microchannels. The major finding of this research is that the newly developed particle-based method described here can be used as an alternative numerical tool for studying non-continuum effects observed in micro-electro-mechanical systems (MEMS).
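
A minimal D2Q9 BGK sketch of the lattice Boltzmann scheme the abstract builds on: body-force-driven Poiseuille flow between two bounce-back walls. The grid size, relaxation time, and forcing are arbitrary illustrative values, and no slip-flow or thermal boundary treatment is included.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

nx, ny, tau, g = 64, 33, 0.8, 1e-6      # grid, BGK relaxation time, body force

def equilibrium(rho, ux, uy):
    cu = 3.0 * (e[:, 0, None, None] * ux + e[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu ** 2 - usq)

f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(10000):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision plus a simple first-order forcing term along +x.
    f += -(f - equilibrium(rho, ux, uy)) / tau \
         + 3.0 * w[:, None, None] * e[:, 0, None, None] * rho * g
    # Remember populations leaving through the walls (for bounce-back).
    bottom_out = f[[4, 7, 8], :, 0].copy()     # moving down at the bottom row
    top_out = f[[2, 5, 6], :, -1].copy()       # moving up at the top row
    # Streaming (periodic wrap via np.roll; the walls overwrite the wrap in y).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    # Half-way bounce-back: reflect the saved populations back into the flow.
    f[[2, 5, 6], :, 0] = bottom_out            # opposites of 4, 7, 8
    f[[4, 7, 8], :, -1] = top_out              # opposites of 2, 5, 6

ux = (f * e[:, 0, None, None]).sum(axis=0) / f.sum(axis=0)
print("velocity profile across the channel:", np.round(ux.mean(axis=0), 6))
```

With these settings the x-velocity profile across the channel relaxes toward the expected parabolic (Poiseuille) shape.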

Relevance:

80.00%

Publisher:

Abstract:

Advances in multiscale material modeling of structural concrete have created an upsurge of interest in the accurate evaluation of the mechanical properties and volume fractions of its nano-constituents. The task is accomplished by analyzing the response of a material to indentation, obtained from a nanoindentation experiment, using a procedure called the Oliver and Pharr (OP) method. Despite its widespread use, the accuracy of this method is often questioned when it is applied to data from heterogeneous materials or from materials that pile up or sink in during indentation, which necessitates the development of an alternative method.

In this study, a model is developed within the framework of contact mechanics to compute the nanomechanical properties of a material from its indentation response. Unlike the OP method, indentation energies are employed, in the form of dimensionless constants, to evaluate the model parameters. Analysis of load-displacement data for a wide range of materials revealed that the energy constants can be used to determine the indenter tip bluntness, the hardness, and the initial unloading stiffness of the material. The proposed model has two main advantages: (1) it does not require computation of the contact area, a source of error in the existing method; and (2) it explicitly incorporates the effects of peak indentation load, dwell period, and indenter tip bluntness on the measured mechanical properties.

Indentation tests were also carried out on cement paste samples to validate the energy-based model developed herein by determining the elastic modulus and hardness of different phases of the paste. The model was found to compute mechanical properties in close agreement with those obtained by the OP method; a discrepancy, though insignificant, is observed more in the case of C-S-H than in the anhydrous phase. Nevertheless, the proposed method is computationally efficient, and it is therefore well suited to cases where the grid indentation technique must be performed. In addition, several empirical relations are developed that prove crucial for understanding the nanomechanical behavior of cementitious materials.
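
For reference, the baseline the proposed energy-based model is compared against, the Oliver-Pharr analysis of an unloading curve, can be sketched as follows; the fitting window, ideal Berkovich area function, and units are conventional assumptions, and this is not the thesis's energy-based procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def oliver_pharr(h_unload, P_unload, eps=0.75):
    """Standard Oliver-Pharr analysis of a nanoindentation unloading curve.

    h_unload, P_unload : displacement (nm) and load (mN) on unloading,
                         ordered from peak load downwards.
    Returns (hardness, reduced_modulus) in GPa for a Berkovich tip with the
    ideal area function A = 24.5 * hc**2.
    """
    hmax, Pmax = h_unload[0], P_unload[0]
    top = slice(0, max(3, len(h_unload) // 2))      # fit the upper unloading part
    power = lambda h, B, hf, m: B * (h - hf) ** m   # P = B (h - hf)^m
    lower = [0.0, 0.0, 1.01]
    upper = [np.inf, 0.999 * np.min(h_unload[top]), 3.0]
    (B, hf, m), _ = curve_fit(power, h_unload[top], P_unload[top],
                              p0=[Pmax / hmax, 0.5 * hmax, 1.5],
                              bounds=(lower, upper))
    S = B * m * (hmax - hf) ** (m - 1)              # unloading stiffness dP/dh
    hc = hmax - eps * Pmax / S                      # contact depth
    A = 24.5 * hc ** 2                              # contact area (nm^2)
    H = Pmax / A * 1e6                              # mN/nm^2 -> GPa
    Er = np.sqrt(np.pi) * S / (2.0 * np.sqrt(A)) * 1e6
    return H, Er
```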

Relevance:

80.00%

Publisher:

Abstract:

Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products, including hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of the underlying drive systems. Matching the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that offers the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems.

A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also used in the design process, in the form of a finite-element-based optimization process, and in a hardware-in-the-loop, finite-element-based optimization process. It was later employed in the design of very accurate and highly efficient physics-based customized observers, which are required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were tested numerically and experimentally to verify the effectiveness of the proposed techniques.

The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions.

The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail.

To investigate the performance of the developed design test-bed, software and hardware setups were constructed, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique over a range from medium to very low speed effectively demonstrates the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique remains effective under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
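
The abstract does not give the physics-based observer itself; as a generic point of reference, a textbook back-EMF-based rotor-angle estimate for a surface-mounted PMSM in the stationary alpha-beta frame looks like the sketch below. The machine parameters and signals are invented, and this is not the dissertation's observer.

```python
import numpy as np

def rotor_angle_backemf(v_ab, i_ab, R, L, dt):
    """Textbook back-EMF rotor angle estimate for a surface-mounted PMSM in
    the stationary alpha-beta frame (a generic baseline only).

    v_ab, i_ab : arrays of shape (N, 2) with alpha/beta voltages and currents.
    Returns the estimated electrical rotor angle per sample (rad).
    """
    di = np.gradient(i_ab, dt, axis=0)
    e = v_ab - R * i_ab - L * di            # estimated back-EMF (alpha, beta)
    # For e_alpha = -we*psi*sin(theta), e_beta = we*psi*cos(theta):
    return np.arctan2(-e[:, 0], e[:, 1])

# Synthetic check with made-up machine parameters.
R, L, psi, we, dt = 0.5, 1e-3, 0.1, 2 * np.pi * 50, 1e-4
t = np.arange(0, 0.1, dt); theta = we * t
i_ab = 2.0 * np.column_stack([np.cos(theta), np.sin(theta)])
e_ab = we * psi * np.column_stack([-np.sin(theta), np.cos(theta)])
v_ab = R * i_ab + L * np.gradient(i_ab, dt, axis=0) + e_ab
est = rotor_angle_backemf(v_ab, i_ab, R, L, dt)
print("max angle error (rad):",
      np.max(np.abs(np.angle(np.exp(1j * (est - theta))))))
```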

Relevance:

80.00%

Publisher:

Abstract:

Modern IT infrastructures are built from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient, so service providers often seek automatic or semi-automatic methodologies for detecting and resolving system issues in order to improve service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; and 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a large amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are built on event mining algorithms, which can automatically derive part of that knowledge from historical system logs, events, and tickets.

In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, the dissertation presents an efficient algorithm for discovering the temporal dependencies between system events, together with their time lags; these dependencies help administrators determine redundancies among deployed monitoring situations and dependencies among system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, the dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events, and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
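
A minimal sketch of the KNN-style ticket recommendation idea (not the dissertation's algorithms): historical tickets are embedded with TF-IDF, and the nearest neighbours of an incoming ticket by cosine distance are returned together with their resolutions. The ticket texts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Historical tickets with resolutions (invented examples).
history = [
    ("disk /var full on host-12", "rotated logs and extended the volume"),
    ("high CPU on db-03 after backup job", "rescheduled backup to off-peak"),
    ("service web-api returning 500 errors", "restarted app pool, cleared cache"),
    ("disk usage alert on /var partition", "purged temp files, added quota"),
]
texts = [t for t, _ in history]

# TF-IDF representation + cosine-distance KNN over the historical tickets.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(texts)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

def recommend(incoming_ticket, k=2):
    """Return the k most similar historical tickets and their resolutions."""
    dist, idx = knn.kneighbors(vec.transform([incoming_ticket]), n_neighbors=k)
    return [(texts[i], history[i][1], round(1 - d, 3))
            for d, i in zip(dist[0], idx[0])]

print(recommend("alert: /var partition almost full on host-07"))
```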

Relevance:

80.00%

Publisher:

Abstract:

Allocating resources optimally is a nontrivial task, especially when multiple self-interested agents with conflicting goals are involved. This dissertation uses techniques from game theory to study two classes of such problems: allocating resources to catch agents that attempt to evade them, and allocating payments to agents in a team in order to stabilize it. Besides discussing which allocations are optimal from various game-theoretic perspectives, we also study how to compute them efficiently and, when no such algorithms are found, what computational hardness results can be proved.

The first class of problems is inspired by real-world applications such as the TOEFL iBT test, course final exams, driver's license tests, and airport security patrols. We call them test games and security games. This dissertation first studies test games separately, and then proposes a framework of Catcher-Evader games (CE games) that generalizes both test games and security games. We show that the optimal test strategy can be efficiently computed for scored test games, but is hard to compute for many binary test games. Optimal Stackelberg strategies are hard to compute for CE games, but we give an empirically efficient algorithm for computing their Nash equilibria. We also prove that the Nash equilibria of a CE game are interchangeable.

The second class of problems involves how to split a reward that is collectively obtained by a team: for example, how a startup should distribute its shares, and what salary an enterprise should pay its employees. Several stability-based solution concepts in cooperative game theory, such as the core, the least core, and the nucleolus, are well suited to this purpose when the goal is to prevent coalitions of agents from breaking off. We show that some of these solution concepts can be justified as the most stable payments under noise. Moreover, by adjusting the noise models (to be arguably more realistic), we obtain new solution concepts, including the partial nucleolus, the multiplicative least core, and the multiplicative nucleolus. We then study the computational complexity of these solution concepts under the constraint of superadditivity. Our result is based on what we call Small-Issues-Large-Team games, and it applies to popular representation schemes such as MC-nets.
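
The least core mentioned above has a compact linear-programming characterization: minimize epsilon such that every proper, non-empty coalition receives at least its value minus epsilon, while the grand coalition's value is fully distributed. A minimal sketch on a made-up three-player game in explicit characteristic-function form (not a compact representation such as MC-nets):

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def least_core(players, v):
    """Least core of a cooperative game: minimize eps subject to
    sum_{i in S} x_i >= v(S) - eps for all proper non-empty S,
    and sum_i x_i = v(N). `v` maps frozensets of players to values."""
    n = len(players)
    idx = {p: k for k, p in enumerate(players)}
    c = np.zeros(n + 1); c[-1] = 1.0                  # minimize eps
    A_ub, b_ub = [], []
    for r in range(1, n):                             # proper, non-empty S
        for S in combinations(players, r):
            row = np.zeros(n + 1)
            for p in S:
                row[idx[p]] = -1.0                    # -x(S) - eps <= -v(S)
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(-v[frozenset(S)])
    A_eq = [np.ones(n + 1)]; A_eq[0][-1] = 0.0        # x(N) = v(N)
    b_eq = [v[frozenset(players)]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1))
    return dict(zip(players, res.x[:n])), res.x[-1]

# Toy 3-player game (illustrative values only).
players = ["a", "b", "c"]
v = {frozenset(S): val for S, val in [
    (("a",), 0), (("b",), 0), (("c",), 0),
    (("a", "b"), 4), (("a", "c"), 4), (("b", "c"), 4),
    (("a", "b", "c"), 6)]}
payments, eps = least_core(players, v)
print(payments, "eps =", round(eps, 3))
```

For this symmetric toy game the solver returns equal payments of 2 for each player and eps = 0, i.e., the core is non-empty.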

Relevance:

80.00%

Publisher:

Abstract:

The goal of this thesis is to explore the seismic potential of pulsating white dwarf stars, in particular those with hydrogen-rich atmospheres, the ZZ Ceti stars. The technique of asteroseismology exploits the information contained in the normal modes of vibration that can be excited during particular phases of a star's evolution. These modes modulate the emergent flux of the pulsating star and manifest themselves mainly as multi-periodic luminosity variations. Asteroseismology therefore consists of examining the luminosity of pulsating stars as a function of time in order to extract the periods, apparent amplitudes, and relative phases of the detected pulsation modes, using standard signal-processing methods such as Fourier techniques. The next step is to compare the observed pulsation periods with periods generated by a stellar model, searching for the optimal match with a physical model that reproduces the pulsating star as faithfully as possible. Ensuring an optimal search of the parameter space requires good physical models, an efficient period-matching optimization algorithm, and considerable computing power. The pulsation periods of white dwarf stellar models can generally be computed precisely and reliably on the basis of the linear theory of stellar pulsations in its adiabatic version. To fully define a static white dwarf model suitable for asteroseismological analysis, one must specify the surface gravity, the effective temperature, and various parameters describing the layered structure of the envelope. By using, in parallel, the information obtained independently (effective temperature and surface gravity) from the spectroscopic method, it becomes possible to verify the validity of the solution and to restrict the parameter space remarkably well. A successful asteroseismological exercise therefore leads to a precise determination of the parameters of the global structure of the pulsating star and provides unique information on its internal structure and evolutionary state. This thesis presents the complete, successful analysis, from frequency extraction to the seismic solution, of four pulsating white dwarf stars. It was possible to determine the structural parameters of these stars and to compare them remarkably well with all the independent constraints available in the literature, but also to draw inferences about their internal dynamics and to reconstruct their internal rotation profiles. We first analyze the pair of ZZ Ceti stars GD 165 and Ross 548, in order to understand the differences between their pulsation properties despite the fact that, spectroscopically speaking, they are similar stars in every respect. The seismic analysis reveals different internal structures and uncovers the sensitivity of certain pulsation modes to the internal composition of the stellar core. To handle this newly discovered sensitivity, and to keep pace with the exceptional-quality data provided by the Kepler and Kepler2 space missions, we develop a new parameterization of the chemical profiles in the core and validate the robustness of our technique and models through numerous tests.
With the new core parameterization in hand, we finally attain the "Holy Grail" of asteroseismology, being able for the first time to reproduce the observed periods to within the precision of the observations, in the seismic study of the stars KIC 08626021 and GD 1212.
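
The frequency-extraction step described above (periods, amplitudes, and phases from a light curve) can be illustrated with a plain FFT of an evenly sampled, synthetic multi-periodic signal; the periods, cadence, and noise level are invented, and no prewhitening loop is included.

```python
import numpy as np

# Synthetic light curve: two pulsation periods (seconds) plus noise.
rng = np.random.default_rng(0)
dt, n = 10.0, 8192                       # 10 s cadence, ~23 h of data
t = np.arange(n) * dt
flux = (0.010 * np.sin(2 * np.pi * t / 215.2)
        + 0.006 * np.sin(2 * np.pi * t / 271.0 + 0.8)
        + 0.004 * rng.standard_normal(n))

# Amplitude spectrum from the FFT of the evenly sampled series.
freqs = np.fft.rfftfreq(n, dt)
amp = 2.0 * np.abs(np.fft.rfft(flux)) / n

# Pick the strongest local maxima above an ad-hoc threshold (skip the DC bin).
peaks = [k for k in range(1, len(amp) - 1)
         if amp[k] > amp[k - 1] and amp[k] > amp[k + 1] and amp[k] > 0.003]
for k in sorted(peaks, key=lambda k: -amp[k])[:2]:
    print(f"period ~ {1.0 / freqs[k]:7.1f} s, amplitude ~ {amp[k]:.4f}")
```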

Relevance:

80.00%

Publisher:

Abstract:

Atrial fibrillation (AF) is a major global health issue, as it is the most prevalent sustained supraventricular arrhythmia. Catheter-based ablation of selected parts of the atria is considered an effective treatment for AF. The main objective of this research is to analyze atrial intracardiac electrograms (IEGMs) and extract insightful information for the ablation therapy. Throughout this thesis we propose several computationally efficient algorithms that take streams of IEGMs from different atrial sites as input signals, sequentially analyze them in various domains (e.g., time and frequency), and create color-coded three-dimensional maps of the atria to be used in the ablation therapy.
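
One common frequency-domain quantity behind such color-coded maps is the dominant frequency of each site's electrogram. A minimal sketch using scipy's Welch estimator on a synthetic signal follows; the sampling rate, frequency band, and signal are illustrative, not the thesis's pipeline.

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(iegm, fs, band=(3.0, 15.0)):
    """Dominant frequency (Hz) of one IEGM channel: the peak of the Welch
    power spectral density inside a physiological AF band."""
    f, pxx = welch(iegm, fs=fs, nperseg=min(len(iegm), 4 * int(fs)))
    mask = (f >= band[0]) & (f <= band[1])
    return f[mask][np.argmax(pxx[mask])]

# Synthetic 10 s electrogram sampled at 1 kHz: 6.5 Hz activity plus noise.
fs = 1000
t = np.arange(0, 10, 1 / fs)
iegm = np.sign(np.sin(2 * np.pi * 6.5 * t)) + 0.3 * np.random.randn(t.size)
print("dominant frequency ~", dominant_frequency(iegm, fs), "Hz")
```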

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling, and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies.

In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit, and forecasting accuracy compared with classical and Realized GARCH models.

In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution, and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set.

Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics.

Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde, and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
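
As a small illustration of the realized measures these chapters build on, the basic synchronous, noise-free realized covariance estimator is simply the sum of outer products of intraday log-return vectors; the sketch below uses simulated prices, not the data sets described in the abstract, and applies none of the corrections studied in chapter 3.

```python
import numpy as np

def realized_covariance(prices):
    """Realized covariance matrix from synchronous intraday prices.

    prices : array of shape (m+1, d) with m intraday observations of d assets.
    Returns the d x d sum of outer products of log-returns (the basic
    realized estimator, with no noise or asynchronicity corrections)."""
    r = np.diff(np.log(prices), axis=0)          # intraday log-returns
    return r.T @ r

# Simulate one trading day of 390 one-minute prices for 3 correlated assets.
rng = np.random.default_rng(1)
true_cov = np.array([[1.0, 0.5, 0.2], [0.5, 1.2, 0.3], [0.2, 0.3, 0.8]]) * 1e-4
returns = rng.multivariate_normal(np.zeros(3), true_cov / 390, size=390)
prices = 100 * np.exp(np.vstack([np.zeros(3), np.cumsum(returns, axis=0)]))

rc = realized_covariance(prices)
corr = rc / np.sqrt(np.outer(np.diag(rc), np.diag(rc)))
print(np.round(corr, 2))                         # close to the true correlations
```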

Relevance:

80.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in driving up the cost of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted; commonalities between these test-case sequence covers are then extracted, processed, analyzed, and presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases that fail for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm's running time and the output subsequence length.
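
The core computation sketched above, extracting subsequences common to the sequence covers of failing tests, can be illustrated with a pairwise longest-common-subsequence reduction. The trace identifiers are invented, and the dissertation's actual algorithm and optimizations are not reproduced here.

```python
from functools import lru_cache

def lcs(a, b):
    """Longest common subsequence of two event/statement sequences (DP)."""
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a) or j == len(b):
            return ()
        if a[i] == b[j]:
            return (a[i],) + solve(i + 1, j + 1)
        left, right = solve(i + 1, j), solve(i, j + 1)
        return left if len(left) >= len(right) else right
    return list(solve(0, 0))

def common_subsequence(covers):
    """Fold the pairwise LCS over all failing-test sequence covers.
    (A heuristic: the result is common to all covers, though not necessarily
    the longest such subsequence.)"""
    acc = covers[0]
    for cover in covers[1:]:
        acc = lcs(acc, cover)
    return acc

# Invented statement-level covers of three failing test cases.
covers = [
    ["open", "parse", "validate", "write", "close"],
    ["open", "seek", "parse", "validate", "close"],
    ["init", "open", "parse", "validate", "flush", "close"],
]
print(common_subsequence(covers))   # candidate faulty path: open, parse, validate, close
```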