991 results for computational costs


Relevance: 70.00%

Abstract:

Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions are related to capturing a specific property that enables secure outsourcing of cryptographic tasks, in particular partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme that has this property and a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
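To make the cost asymmetry concrete, below is a minimal sketch of a repeated-squaring puzzle in the style of the modular-exponentiation puzzles discussed above (Rivest-Shamir-Wagner time-lock puzzles): solving takes t sequential squarings, while a verifier holding the factorisation trapdoor needs only one short exponentiation. This is a generic illustration, not the thesis's provably secure scheme, and the toy primes are for demonstration only.

```python
import math
import secrets

def make_puzzle(p, q, t):
    """Server side: p and q are secret primes; t controls puzzle hardness."""
    n = p * q
    phi = (p - 1) * (q - 1)                 # trapdoor known only to the server
    while True:
        x = secrets.randbelow(n - 2) + 2    # random challenge in [2, n-1]
        if math.gcd(x, n) == 1:             # ensure Euler's theorem applies
            return n, x, t, phi

def solve(n, x, t):
    """Client side: t sequential modular squarings -- the imposed effort."""
    y = x % n
    for _ in range(t):
        y = (y * y) % n
    return y

def verify(n, x, t, phi, y):
    """Server side: cheap verification via the trapdoor phi(n)."""
    e = pow(2, t, phi)                      # reduce the exponent mod phi(n)
    return y == pow(x, e, n)

# Toy parameters only; a real deployment would use large random primes.
n, x, t, phi = make_puzzle(10007, 10009, t=100000)
assert verify(n, x, t, phi, solve(n, x, t))
```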

Relevance: 70.00%

Abstract:

In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An almost optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results are presented for the computational costs of a balanced algorithm for computing the bilinear form of a matrix power, i.e., an algorithm for which the probability and systematic errors are of the same order, and these costs are compared with the computational cost of a corresponding deterministic method.
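As a concrete illustration of the core primitive, here is a minimal Monte Carlo estimator for the bilinear form (v, A^k h) using MAO-style random walks whose transition probabilities are proportional to |a_ij|. It is a generic sketch of the technique, not the paper's balanced algorithm; the test matrix and walk count are illustrative.

```python
import numpy as np

def mc_bilinear_form(A, v, h, k, n_walks=50000, seed=0):
    """Estimate (v, A^k h) by random walks of length k with MAO-style
    transition probabilities proportional to |a_ij|."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p0 = np.abs(v) / np.abs(v).sum()           # initial density from |v|
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                        # importance weight at the start
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]              # update the weight along the walk
            i = j
        total += w * h[i]
    return total / n_walks

# Usage: compare with the deterministic value v @ A^k @ h.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (20, 20)) / 20.0
v, h = rng.standard_normal(20), rng.standard_normal(20)
print(mc_bilinear_form(A, v, h, k=3), v @ np.linalg.matrix_power(A, 3) @ h)
```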

Relevance: 60.00%

Abstract:

We present a method for topological SLAM that specifically targets loop closing for edge-ordered graphs. Instead of using a heuristic approach to accept or reject loop closing, we propose a probabilistically grounded multi-hypothesis technique that relies on the incremental construction of a map/state hypothesis tree. Loop closing is introduced automatically within the tree expansion, and likely hypotheses are chosen based on their posterior probability after a sequence of sensor measurements. Careful pruning of the hypothesis tree keeps the growing number of hypotheses under control and a recursive formulation reduces storage and computational costs. Experiments are used to validate the approach.
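The mechanics of the hypothesis tree can be sketched generically: each measurement expands every surviving hypothesis (no loop versus candidate loop closures), posteriors are updated recursively, and pruning bounds the tree's growth. In the sketch below, `expand` and `likelihood` are hypothetical placeholders for a concrete motion/sensor model; this is an illustration of the bookkeeping, not the paper's algorithm.

```python
import math

def update_hypotheses(hypotheses, measurement, expand, likelihood, keep=50):
    """hypotheses: list of (map_state, log_posterior) pairs."""
    children = []
    for state, log_post in hypotheses:
        # Each expansion proposes 'no loop' plus candidate loop closures.
        for child in expand(state, measurement):
            children.append((child, log_post + math.log(likelihood(child, measurement))))
    # Normalize in log space, then prune improbable branches to bound growth.
    m = max(lp for _, lp in children)
    z = m + math.log(sum(math.exp(lp - m) for _, lp in children))
    children = [(s, lp - z) for s, lp in children]
    children.sort(key=lambda c: c[1], reverse=True)
    return children[:keep]
```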

Relevance: 60.00%

Abstract:

Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly pressing when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive.

Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time-delays. These distributions can be obtained either from a single in silico experiment or from a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDEs) that can be used in scenarios of high molecular concentrations and low noise propagation.

Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding master equations and presented elsewhere.
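A delay stochastic simulation algorithm augments Gillespie's method with a queue of pending delayed events. The sketch below shows the "consuming" variant for a toy reaction, with the sampled delay standing in for the spatial process (e.g. a diffusion or translocation time); the reaction set and delay distribution are illustrative, not the paper's model.

```python
import heapq, math, random

def dssa(x, reactions, t_end, seed=0):
    """x: species counts. Each reaction is (propensity, consume, produce,
    delay): reactants are removed at initiation and products appear after the
    sampled delay (the 'consuming' variant of the delay SSA)."""
    rng = random.Random(seed)
    t, pending = 0.0, []                           # heap of (completion_time, produce)
    while t < t_end:
        props = [a(x) for a, _, _, _ in reactions]
        a0 = sum(props)
        tau = rng.expovariate(a0) if a0 > 0 else math.inf
        if pending and pending[0][0] <= t + tau:
            t, produce = heapq.heappop(pending)    # a delayed product arrives first
            for i, d in produce:
                x[i] += d
            continue
        if a0 == 0:
            break
        t += tau
        r, k = rng.uniform(0, a0), 0               # pick a reaction by propensity
        while r > props[k]:
            r -= props[k]
            k += 1
        _, consume, produce, delay = reactions[k]
        for i, d in consume:                       # initiation: remove reactants now
            x[i] += d
        heapq.heappush(pending, (t + delay(), produce))
    return x

# Toy usage: A -> B where the product appears after a lognormal transport delay.
rxn = [(lambda s: 0.1 * s[0], [(0, -1)], [(1, +1)], lambda: random.lognormvariate(0.0, 0.5))]
print(dssa([100, 0], rxn, t_end=100.0))
```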

Relevance: 60.00%

Abstract:

This paper gives a modification of a class of stochastic Runge–Kutta methods proposed by Komori (2007). The slight modification significantly reduces the computational costs of the methods.
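For context, a generic derivative-free stochastic Runge–Kutta step (Platen's explicit strong order-1.0 scheme) for a scalar Itô SDE dX = a(X)dt + b(X)dW is sketched below; the per-step cost is dominated by drift/diffusion evaluations and random-number draws, which is exactly what such modifications aim to trim. This is a textbook scheme, not Komori's method.

```python
import math, random

def srk_step(x, a, b, h, rng):
    """One step of Platen's explicit strong order-1.0 scheme: one drift and
    two diffusion evaluations plus one normal draw per step."""
    dW = rng.gauss(0.0, math.sqrt(h))
    support = x + a(x) * h + b(x) * math.sqrt(h)      # supporting stage value
    return (x + a(x) * h + b(x) * dW
            + (b(support) - b(x)) * (dW * dW - h) / (2.0 * math.sqrt(h)))

# Usage: geometric Brownian motion dX = mu X dt + sigma X dW.
rng, x = random.Random(1), 1.0
for _ in range(1000):
    x = srk_step(x, lambda v: 0.05 * v, lambda v: 0.2 * v, 1e-3, rng)
print(x)
```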

Relevance: 60.00%

Abstract:

The study of the relationship between macroscopic traffic parameters, such as flow, speed and travel time, is essential to the understanding of the behaviour of freeway and arterial roads. However, the temporal dynamics of these parameters are difficult to model, especially for arterial roads, where the process of traffic change is driven by a variety of variables. The introduction of Bluetooth technology into the transportation area has proven exceptionally useful for monitoring vehicular traffic, as it allows reliable estimation of travel times and traffic demands. In this work, we propose an approach based on Bayesian networks for analyzing and predicting the complex dynamics of flow or volume, based on travel time observations from Bluetooth sensors. The spatio-temporal relationship between volume and travel time is captured through a first-order transition model and a univariate Gaussian sensor model. The two models are trained and tested on travel time and volume data from an arterial link, collected over a period of six days. To reduce the computational costs of the inference tasks, volume is converted into a discrete variable; the discretization process is carried out through a Self-Organizing Map. Preliminary results show that a simple Bayesian network can effectively estimate and predict the complex temporal dynamics of arterial volumes from the travel time data. Not only is the model well suited to producing posterior distributions over single past, current and future states, but it also allows the computation of joint distributions over sequences of states. Furthermore, the Bayesian network can achieve excellent prediction even when the stream of travel time observations is partially incomplete.
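The structure described (discrete volume states, a first-order transition model, and a Gaussian travel-time sensor model) admits a standard forward filter. The sketch below is a minimal illustration with invented toy numbers, not the paper's trained model; note how a missing observation simply skips the update step, which is how a partially incomplete travel-time stream is handled.

```python
import numpy as np

def forward_filter(T, mu, sigma, prior, observations):
    """T[i, j] = P(state j | state i); mu/sigma define the Gaussian travel-time
    likelihood per volume state; observations may contain None (missing data)."""
    belief = prior.copy()
    for y in observations:
        belief = belief @ T                           # predict (transition model)
        if y is not None:                             # update (sensor model)
            lik = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / sigma
            belief *= lik
            belief /= belief.sum()
        yield belief

# Toy usage with three volume levels (e.g. codebook states from a Self-Organizing Map).
T = np.array([[.8, .2, 0.], [.1, .8, .1], [0., .2, .8]])
mu, sigma = np.array([60., 90., 140.]), np.array([10., 15., 25.])
for b in forward_filter(T, mu, sigma, np.ones(3) / 3, [65, None, 130, 135]):
    print(np.round(b, 3))
```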

Relevance: 60.00%

Abstract:

The objective of this research is to further our understanding of how and why individuals enter and leave coresidential relationships. We develop and estimate an economic model of nonmarital cohabitation, marriage, and divorce that is consistent with current data on the formation and dissolution of relationships. Jovanovic's (Journal of Political Economy 87 (1979), 972-90) theoretical matching model is extended to help explain household formation and dissolution behavior. Implications of the model reveal what factors influence the decision to start a relationship, what form this relationship will take, and the relative stability of the various types of unions. The structural parameters of the model are estimated using longitudinal data from a sample of female high school seniors from the United States. New numerical methods are developed to reduce computational costs associated with estimation. The empirical results have interesting interpretations given the structural model. They show that a significant cause of cohabitation is the need to learn about potential partners and to hedge against future bad shocks. The estimated parameters are used to conduct several comparative dynamic experiments. For example, we show that policy experiments changing the cost of divorce have little effect on relationship choices.

Relevance: 60.00%

Abstract:

For systems which can be decomposed into slow and fast subsystems, a near-optimum linear state regulator consisting of two subsystem regulators can be developed. Depending upon the desired criteria, either a short-term controller (fast controller) or a long-term controller (slow controller) can easily be designed at minimum computational cost. Using this approach, an example of a power system supplying a cyclic load is studied and the performances of the different controllers are compared.
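The computational saving comes from solving two small Riccati equations in place of one large one. The sketch below shows the pattern with SciPy's continuous-time algebraic Riccati solver; the matrices are illustrative placeholders, not the paper's power-system model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def subsystem_lqr(A, B, Q, R):
    """Standard LQR gain K = R^-1 B^T P for one (slow or fast) subsystem."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

A_slow, B_slow = np.array([[-0.2]]), np.array([[1.0]])   # slow dynamics
A_fast, B_fast = np.array([[-8.0]]), np.array([[2.0]])   # fast dynamics
K_slow = subsystem_lqr(A_slow, B_slow, np.eye(1), np.eye(1))
K_fast = subsystem_lqr(A_fast, B_fast, np.eye(1), np.eye(1))
# The near-optimum composite control is u = -K_slow x_slow - K_fast x_fast.
```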

Relevance: 60.00%

Abstract:

Results are presented from applying multi-time-scale analysis, using the singular perturbation technique, to the long-time simulation of power system problems. A linear system represented in state-space form can be decoupled into slow and fast subsystems. These subsystems can be simulated with different time steps and then recombined to obtain the system response. Simulation results with a two-time-scale analysis of a power system show a large saving in computational costs.
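A minimal sketch of the two-time-scale idea: diagonalize the state matrix, march the slow modes with a large step and the fast modes with a small one, then recombine through the eigenvector matrix. The system and thresholds below are illustrative, not the paper's power-system model.

```python
import numpy as np

A = np.array([[-0.5, 0.1], [0.2, -50.0]])      # mixed slow/fast dynamics
x0 = np.array([1.0, 1.0])
lam, V = np.linalg.eig(A)
z0 = np.linalg.solve(V, x0)                    # modal (decoupled) coordinates
slow, fast = np.abs(lam) < 5.0, np.abs(lam) >= 5.0

def march(lam_sub, z_sub, h, t_end):
    """Explicit Euler on the decoupled modes dz/dt = lam * z with step h."""
    for _ in range(int(t_end / h)):
        z_sub = z_sub + h * lam_sub * z_sub
    return z_sub

t_end = 1.0
z = np.empty_like(z0)
z[slow] = march(lam[slow], z0[slow], h=0.1, t_end=t_end)    # few large steps
z[fast] = march(lam[fast], z0[fast], h=0.001, t_end=t_end)  # many small steps
x = (V @ z).real                               # recombined system response
print(x)
```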

Relevance: 60.00%

Abstract:

Various structural, dynamic and thermodynamic properties of water molecules confined in single-wall carbon nanotubes (CNTs) are investigated using both polarizable and non-polarizable water models. The inclusion of polarizability quantitatively affects the nature of hydrogen bonding, which governs many properties of confined water molecules. Polarizable water leads to tighter hydrogen bonding and makes the distance between neighboring water molecules shorter than that for non-polarizable water. Stronger hydrogen bonding also decreases the rotational entropy and makes the diffusion constant smaller than in TIP3P and TIP3PM water models. The reorientational dynamics of the water molecules is governed by a jump mechanism, the barrier for the jump being highest for the polarizable water model. Our results highlight the role of polarizability in governing the dynamics of confined water and demonstrate that the inclusion of polarizability is necessary to obtain agreement with the results of ab initio simulations for the distributions of waiting and jump times. The SPC/E water model is found to predict various water properties in close agreement with the results of polarizable water models with much lower computational costs.
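The diffusion constants compared above are conventionally obtained from the Einstein relation D = MSD(t)/(6t). Below is a minimal sketch of that estimate from a stored trajectory; the array layout and the synthetic test data are illustrative assumptions, not tied to the paper's simulations.

```python
import numpy as np

def diffusion_constant(positions, dt, t_lag):
    """Estimate D from the mean-squared displacement at lag time t_lag.
    positions: array of shape (n_frames, n_molecules, 3)."""
    lag = int(t_lag / dt)
    disp = positions[lag:] - positions[:-lag]      # displacements at this lag
    msd = (disp ** 2).sum(axis=-1).mean()          # average over frames and molecules
    return msd / (6.0 * t_lag)

# Usage on synthetic Brownian paths (expected D = 0.1**2 / 2 = 0.005).
rng = np.random.default_rng(0)
traj = rng.normal(0, 0.1, (5000, 50, 3)).cumsum(axis=0)
print(diffusion_constant(traj, dt=1.0, t_lag=100.0))
```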

Relevance: 60.00%

Abstract:

Since the 1960s, owing to its relevance to the oil industry, the numerical simulation of petroleum reservoirs has become a standard tool and an intense area of research. The main goal of computational modelling and of the use of numerical methods for petroleum reservoir simulation is to enable better management of the producing field, so as to maximize hydrocarbon recovery. The main objective of this work is to parallelize, using the OpenMP (Open Multi-Processing) application programming interface, the numerical method used to solve the algebraic system resulting from the discretization of the equation that describes single-phase flow in a gas reservoir, in terms of the pressure variable. The set of governing equations comprises the continuity equation, an expression for the momentum balance, and an equation of state. The hydraulic diffusivity equation (EDH) for the pressure variable is obtained from this set of fundamental equations and is then discretized using the finite difference method, with an implicit formulation. Different numerical tests are carried out in order to study the computational efficiency of the parallelized versions of the iterative methods of Jacobi, Gauss-Seidel, successive over-relaxation, conjugate gradients (CG), biconjugate gradient (BiCG), and stabilized biconjugate gradient (BiCGStab), with a view to their future application in gas reservoir simulation. It should be stressed that the presence of heterogeneities in the reservoir rock and/or the nonlinearities present in the EDH for gas flow increase the need for computationally efficient methods, such as strategies using OpenMP.
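The thesis parallelizes these solvers with OpenMP in a compiled language; as a stand-in, here is a minimal Jacobi iteration for an implicit finite-difference pressure system. The sweep over grid points is the loop an OpenMP pragma would distribute across threads (NumPy vectorizes it here), and every component update within a sweep is independent, which is what makes Jacobi the easiest of the listed methods to parallelize.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=10000):
    """Solve A x = b with the Jacobi method (A diagonally dominant)."""
    d = np.diag(A)
    R = A - np.diagflat(d)                      # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d                 # all updates use the old iterate
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Usage on a 1D diffusion-like tridiagonal system.
n = 100
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
print(np.allclose(jacobi(A, np.ones(n)), np.linalg.solve(A, np.ones(n))))
```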

Relevance: 60.00%

Abstract:

The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data. However, some of the most popular models suffer from (a) overfitting problems and multiple local optima, (b) failure to capture shifts in market conditions, and (c) large computational costs. To address these problems, we introduce a novel dynamic model for time-changing covariances. Overfitting and local optima are avoided by following a Bayesian approach instead of computing point estimates. Changes in market conditions are captured by assuming a diffusion process in parameter values, and finally, computationally efficient and scalable inference is performed using particle filters. Experiments with financial data show excellent performance of the proposed method relative to current standard models.
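A bootstrap particle filter for a univariate stochastic-volatility state, where the log-variance follows a random-walk diffusion, is the scalar analogue of the inference described above. The model choices and parameters in this sketch are illustrative, not the paper's multivariate covariance model.

```python
import numpy as np

def particle_filter(returns, n_particles=2000, q=0.05, seed=0):
    """Track a latent log-variance h_t = h_{t-1} + N(0, q), with
    observations y_t ~ N(0, exp(h_t)); returns posterior-mean variances."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, 1.0, n_particles)                  # log-variance particles
    estimates = []
    for y in returns:
        h = h + rng.normal(0.0, np.sqrt(q), n_particles)   # diffuse the parameter
        w = np.exp(-0.5 * y * y * np.exp(-h) - 0.5 * h)    # N(0, e^h) likelihood
        w /= w.sum()
        estimates.append(np.exp(h) @ w)                    # posterior-mean variance
        h = h[rng.choice(n_particles, n_particles, p=w)]   # resample particles
    return np.array(estimates)

# Usage: rolling variance estimates for a synthetic return series.
rng = np.random.default_rng(1)
print(particle_filter(rng.standard_normal(100) * 0.02)[-5:])
```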

Relevance: 60.00%

Abstract:

Previous studies have reported that different schemes for coupling Monte Carlo (MC) neutron transport with burnup and thermal-hydraulic feedbacks may potentially be numerically unstable. This issue can be resolved by the application of implicit methods, such as the stochastic implicit mid-point (SIMP) methods. In order to assure numerical stability, the new methods do require additional computational effort. The instability issue, however, is problem-dependent and does not necessarily occur in all cases. Therefore, blind application of the unconditionally stable coupling schemes, and thus incurring extra computational costs, may not always be necessary. In this paper, we attempt to develop an intelligent diagnostic mechanism which will monitor the numerical stability of the calculations and, if necessary, switch from a simple and fast coupling scheme to a more computationally expensive but unconditionally stable one. To illustrate this diagnostic mechanism, we performed a coupled burnup and thermal-hydraulic (TH) analysis of a single BWR fuel assembly. The results indicate that the developed algorithm can be easily implemented in any MC-based code for the monitoring of numerical instabilities. The proposed monitoring method has a negligible impact on the calculation time, even for realistic 3D multi-region full-core calculations.
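A schematic of the diagnostic idea: watch the iterate-to-iterate changes of a coupled quantity (e.g. a nodal power or density field) and, when they alternate in sign without decaying, the signature of an unstable explicit coupling, switch to the unconditionally stable implicit scheme. The detector below is a generic sketch under those assumptions, not the paper's algorithm.

```python
import numpy as np

def is_oscillating(history, window=4):
    """history: list of successive iterates (NumPy arrays) of the monitored field."""
    if len(history) < window + 1:
        return False
    diffs = [history[i + 1] - history[i] for i in range(-window - 1, -1)]
    # Alternating sign on most grid points between consecutive changes...
    flips = all((diffs[i] * diffs[i + 1] < 0).mean() > 0.5 for i in range(window - 1))
    # ...and an amplitude that is not dying out -> flag as unstable.
    growing = np.linalg.norm(diffs[-1]) >= 0.9 * np.linalg.norm(diffs[0])
    return flips and growing

def coupled_step(state, explicit_step, implicit_step, history):
    """Run the cheap explicit scheme until the monitor trips, then switch."""
    scheme = implicit_step if is_oscillating(history) else explicit_step
    new_state = scheme(state)
    history.append(new_state)
    return new_state
```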

Relevance: 60.00%

Abstract:

We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
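In the OED spirit, with fixed output noise the expected information gain of a candidate query grows with the model's predictive variance there, so the learner queries where its uncertainty is largest. The sketch below uses ensemble disagreement as a stand-in for the Hessian-based variance estimates of Fedorov and MacKay; the committee of linear models is an illustrative assumption.

```python
import numpy as np

def select_query(ensemble_predict, candidates):
    """ensemble_predict: maps inputs of shape (n, d) to predictions of shape
    (n_members, n); returns the candidate with maximal predictive variance."""
    preds = ensemble_predict(candidates)
    return candidates[np.argmax(preds.var(axis=0))]   # highest disagreement

# Toy usage with a 'committee' of perturbed linear models.
rng = np.random.default_rng(0)
W = rng.normal(1.0, 0.3, (10, 2))                     # 10 members, 2 input dims
cands = rng.uniform(-1, 1, (200, 2))
x_next = select_query(lambda X: W @ X.T, cands)       # query the most uncertain point
print(x_next)
```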

Relevance: 60.00%

Abstract:

Credal networks relax the precise probability requirement of Bayesian networks, enabling a richer representation of uncertainty in the form of closed convex sets of probability measures. The increase in expressiveness comes at the expense of higher computational costs. In this paper, we present a new variable elimination algorithm for exactly computing posterior inferences in extensively specified credal networks, which is empirically shown to outperform a state-of-the-art algorithm. The algorithm is then turned into a provably good approximation scheme, that is, a procedure that for any input is guaranteed to return a solution not worse than the optimum by a given factor. Remarkably, we show that when the networks have bounded treewidth and bounded number of states per variable the approximation algorithm runs in time polynomial in the input size and in the inverse of the error factor, thus being the first known fully polynomial-time approximation scheme for inference in credal networks.
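A minimal illustration of what closed convex sets of probability measures buy: for a tiny two-node network X -> Y with separately specified credal sets, exact lower and upper posteriors are obtained by optimizing over the vertices of each set. Variable elimination organizes this optimization efficiently on larger networks; here brute force over vertex combinations suffices. The numbers are invented for the example.

```python
from itertools import product

# Vertices of the credal sets K(X) and K(Y | X); each row is a candidate pmf.
K_X = [[0.3, 0.7], [0.5, 0.5]]
K_Y_given_X = {0: [[0.9, 0.1], [0.8, 0.2]],
               1: [[0.4, 0.6], [0.3, 0.7]]}

def posterior_x0_given_y0(px, py0, py1):
    """Bayes' rule for P(X=0 | Y=0) under one vertex combination."""
    joint0 = px[0] * py0[0]             # P(X=0, Y=0)
    joint1 = px[1] * py1[0]             # P(X=1, Y=0)
    return joint0 / (joint0 + joint1)

values = [posterior_x0_given_y0(px, py0, py1)
          for px, py0, py1 in product(K_X, K_Y_given_X[0], K_Y_given_X[1])]
print(min(values), max(values))        # lower / upper posterior P(X=0 | Y=0)
```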