960 results for Melnikov chaos prediction theory
Abstract:
Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a line of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we had known the empirically best learning rate from a large range that includes both conservative small values and values much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
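A minimal sketch of the exponential weights update and a naive offline grid search over learning rates (illustrative only: the paper's method selects the rate online in linear time; all function names here are hypothetical):

```python
import numpy as np

def exponential_weights(losses, eta):
    """Run exponential weights at a fixed learning rate eta.

    losses: (T, K) array of per-round losses for K experts.
    Returns the algorithm's cumulative expected loss.
    """
    T, K = losses.shape
    log_w = np.zeros(K)           # log-weights, uniform prior
    total = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        p = w / w.sum()           # probability distribution over experts
        total += p @ losses[t]    # expected loss this round
        log_w -= eta * losses[t]  # multiplicative weight update
    return total

# Naive "learn the learning rate": pick the best eta on a grid in hindsight.
# (The paper achieves a comparable guarantee online; this offline grid
# search is only for intuition.)
def best_on_grid(losses, grid):
    return min(grid, key=lambda eta: exponential_weights(losses, eta))
```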
Abstract:
Newly licenced drivers are disproportionately represented in traffic injuries and crash statistics. Despite the implementation of countermeasures designed to improve safety, such as graduated driver licencing (GDL) schemes, many young drivers do not comply with road rules. This study used a reconceptualised deterrence theory framework to investigate young drivers’ perceptions of the enforcement of road rules in general and those more specifically related to GDL. A total of 236 drivers aged 17–24 completed a questionnaire assessing their perceptions of various deterrence mechanisms (personal and vicarious) and their compliance with both GDL-specific and general road rules. Hierarchical multiple regressions conducted to explore noncompliant behaviour revealed that, contrary to theoretical expectations, neither personal nor vicarious punishment experiences affected compliance in the expected direction. Instead, the most influential factors contributing to noncompliance were licence type (P2) and, counterintuitively, having previously been exposed to enforcement. Parental enforcement was also significant in the prediction of transient rule violations, but not fixed rule violations or overall noncompliance. Findings are discussed in light of several possibilities, including an increase in violations due to more time spent on the road, an ‘emboldening effect’ noted in prior studies and possible conceptual constraints regarding the deterrence variables examined in this study.
Abstract:
Two-dimensional topological insulators are critically important for realizing novel topological applications. A quantum-spin-Hall (QSH) state has been achieved experimentally, albeit at a low critical temperature because of the narrow band gap of the bulk material. Using density functional theory (DFT) with the state-of-the-art hybrid functional method, we demonstrate that hydrogenated GaBi bilayers (HGaBi) form a stable topological insulator with a large nontrivial band gap of 0.320 eV, making it feasible to achieve QSH states at room temperature. The nontrivial topological character of the HGaBi lattice is also confirmed by the appearance of gapless edge states in the nanoribbon structure. Our results provide a versatile platform for hosting nontrivial topological states usable for nanoelectronic device applications.
Abstract:
This paper studies the problem of selecting users in an online social network for targeted advertising so as to maximize the adoption of a given product. In previous work, two families of models have been considered to address this problem: direct targeting and network-based targeting. The former approach targets users with the highest propensity to adopt the product, while the latter targets users with the highest influence potential – that is, users whose adoption is most likely to be followed by subsequent adoptions by peers. This paper proposes a hybrid approach that combines a notion of propensity and a notion of influence into a single utility function. We show that targeting a fixed number of high-utility users results in more adoptions than targeting either highly influential users or users with high propensity.
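As a loose illustration of combining the two signals into one utility, here is a hypothetical additive form (the abstract does not specify the paper's actual utility function; the field names and the weighting are assumptions):

```python
def hybrid_utility(user, alpha=0.5):
    # user: dict with 'propensity' (own adoption probability) and
    # 'influence' (expected follow-on adoptions among peers).
    # alpha trades off direct adoption against network effects.
    return user["propensity"] + alpha * user["influence"]

def select_targets(users, k):
    """Target the k users with the highest hybrid utility."""
    return sorted(users, key=hybrid_utility, reverse=True)[:k]

users = [{"id": 1, "propensity": 0.9, "influence": 0.2},
         {"id": 2, "propensity": 0.3, "influence": 2.5},
         {"id": 3, "propensity": 0.6, "influence": 1.0}]
print(select_targets(users, k=2))
```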
Abstract:
The phenomenon of drop formation at conical tips under near-zero flow conditions has been investigated using a theoretical approach. The analysis permits the prediction of the drop profile and drop volume up to the onset of instability. A semiempirical approach based on the similarity of drop shapes has been adopted to predict the detaching drop volumes at conical tips. The effects of the base diameter of the cone, the cone angle, the interfacial tension, and the densities of the drop and the surrounding fluid on the maximum and detached drop volumes are predicted.
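For orientation, the classical first approximation to the detached drop volume at a tip is Tate's law, a simple static force balance (quoted as standard background, not the paper's semiempirical similarity method):

```latex
% Tate's law: at detachment the drop weight balances the
% surface-tension force acting along the tip perimeter of radius r:
\[
  \Delta\rho\, g\, V \;\approx\; 2\pi r \gamma
  \quad\Longrightarrow\quad
  V \;\approx\; \frac{2\pi r \gamma}{\Delta\rho\, g},
\]
% with gamma the interfacial tension and Delta-rho the density
% difference between drop and surrounding fluid; in practice a
% Harkins--Brown correction factor multiplies this estimate.
```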
Abstract:
A model has been developed to predict heat transfer rates and the sizes of bubbles generated during nucleate pool boiling. The model assumes conduction and a natural convective heat transfer mechanism through the liquid layer under the bubble, and transient conduction from the bulk liquid. The temperature of the bulk liquid in the vicinity of the bubble is obtained by assuming a turbulent natural convection process from the hot plate to the liquid bulk. The shape of the bubble is obtained by an equilibrium analysis, and the bubble departure condition is predicted by a force balance equation. Good agreement is found between the bubble radii predicted by the present theory and those obtained experimentally.
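A commonly used simplified form of such a departure force balance equates buoyancy with the surface-tension retaining force (illustrative only; the paper's balance may include additional terms such as drag and liquid inertia):

```latex
% Simplified departure balance: buoyancy on a bubble of departure
% radius R_d versus the surface-tension force along the contact line
% of radius R_c (sigma = surface tension, theta = contact angle):
\[
  \tfrac{4}{3}\pi R_d^{3}\,(\rho_l - \rho_v)\,g
  \;\approx\;
  2\pi R_c\, \sigma \sin\theta .
\]
```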
Abstract:
Bacterial persistent infections are responsible for a significant share of human morbidity and mortality. Unlike acute bacterial infections, persistent bacterial infections (e.g. tuberculosis) are very difficult to treat. Knowledge about the location of pathogenic bacteria during persistent infection will help in treating such conditions by guiding the design of novel drugs that can reach those locations. In this study, events of bacterial persistent infection were analyzed using game theory. A game was defined in which the pathogen and the host are the two players with a conflict of interest. Criteria for the establishment of a Nash equilibrium were derived for this game. This theoretical model, which is simple and heuristic, predicts that during persistent infections pathogenic bacteria occupy both intracellular and extracellular compartments of the host. The result implies that a bacterium must be able to survive in both intracellular and extracellular compartments of the host in order to cause persistent infection. This explains why persistent infections are more often caused by intracellular pathogens such as Mycobacterium and Salmonella. Moreover, this prediction is consistent with the results of previous experimental studies.
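A toy version of such a host-pathogen game, with hypothetical zero-sum payoffs (the abstract does not give the paper's payoff structure), shows how an interior mixed equilibrium corresponds to the pathogen occupying both compartments:

```python
import numpy as np

# Rows: pathogen resides intracellularly / extracellularly.
# Cols: host directs its immune response intracellularly / extracellularly.
# Entries: hypothetical pathogen payoffs (host payoffs are the negatives,
# a zero-sum abstraction of the conflict of interest).
A = np.array([[-1.0,  2.0],
              [ 3.0, -2.0]])

# Mixed equilibrium of a 2x2 zero-sum game via the indifference condition:
# the pathogen mixes so that the host is indifferent between its responses.
p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
print(f"pathogen plays intracellular with probability {p:.3f}")
# p strictly between 0 and 1 means the pathogen splits its presence
# across both compartments at equilibrium.
```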
Abstract:
The determination of the overconsolidation ratio (OCR) of clay deposits is an important task in geotechnical engineering practice. This paper examines the potential of the support vector machine (SVM) for predicting the OCR of clays from piezocone penetration test data. The SVM is a statistical learning technique based on the structural risk minimization principle, which minimizes both the error and a weight (regularization) term. The five input variables used for the SVM model for prediction of OCR are the corrected cone resistance (qt), vertical total stress (σv), hydrostatic pore pressure (u0), pore pressure at the cone tip (u1), and pore pressure just above the cone base (u2). A sensitivity analysis has been performed to investigate the relative importance of each input parameter; it shows that qt is the in situ measurement most strongly influenced by OCR, followed by σv, u0, u2, and u1. A comparison between the SVM and some traditional interpretation methods is also presented. The results of this study show that the SVM approach has the potential to be a practical tool for determining OCR.
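A minimal sketch of such a regression, using scikit-learn's SVR as a stand-in for the paper's SVM formulation (the arrays below are placeholders, not values from the study):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: qt, sigma_v, u0, u1, u2 (placeholder piezocone readings, MPa).
X = np.array([[1.2, 0.10, 0.05, 0.30, 0.25],
              [2.5, 0.22, 0.11, 0.55, 0.48],
              [3.1, 0.35, 0.18, 0.70, 0.61]])
y = np.array([1.5, 2.8, 4.2])  # placeholder OCR values

# Scale features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print(model.predict(X))
```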
Abstract:
The significance of treating rainfall as a chaotic system instead of a stochastic system for a better understanding of the underlying dynamics has been taken up by various recent studies. However, an important limitation of these approaches is their dependence on a single method for identifying the chaotic nature and the parameters involved. Many of them aim only at analyzing the chaotic nature, not at prediction. In the present study, an attempt is made to identify chaos using several techniques, and prediction is also carried out by generating ensembles in order to quantify the uncertainty involved. Daily rainfall data of three regions with contrasting characteristics (mainly in the spatial area covered) – Malaprabha, Mahanadi and All-India – for the period 1955-2000 are used for the study. Auto-correlation and mutual information methods are used to determine the delay time for the phase space reconstruction. The optimum embedding dimension is determined using the correlation dimension, the false nearest neighbour algorithm and nonlinear prediction methods. The low embedding dimensions obtained from these methods indicate the existence of low-dimensional chaos in the three rainfall series. The correlation dimension method is applied to the phase-randomized and first-derivative versions of the data series to check whether the saturation of the dimension is due to the inherent linear correlation structure or to low-dimensional dynamics. Positive Lyapunov exponents confirm the exponential divergence of the trajectories and hence the limited predictability. A surrogate data test is also performed to further confirm the nonlinear structure of the rainfall series. A range of plausible parameters is used for generating an ensemble of rainfall predictions for each year separately for the period 1996-2000, using the data up to the preceding year. To analyze the sensitivity to initial conditions, predictions are made from two different months in a year, viz., from the beginning of January and of June. The reasonably good predictions obtained indicate the efficiency of the nonlinear prediction method for predicting the rainfall series. Also, the rank probability skill score and the rank histograms show that the ensembles generated are reliable with a good spread and skill. A comparison of the results for the three regions indicates that, although all three are chaotic in nature, spatial averaging over a large area can increase the dimension and improve the predictability, thus destroying the chaotic nature.
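A compact sketch of the core machinery, delay embedding followed by nearest-neighbour prediction (the delay, embedding dimension and neighbour count below are placeholders, not the parameters fitted in the study):

```python
import numpy as np

def embed(x, m, tau):
    """Delay-embed series x into m-dimensional phase-space vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def nn_predict(x, m=4, tau=3, k=5):
    """Predict the next value as the mean one-step-ahead image of the
    k nearest neighbours of the current state in the reconstructed space."""
    V = embed(x, m, tau)
    current, history = V[-1], V[:-1]
    d = np.linalg.norm(history - current, axis=1)
    idx = np.argsort(d)[:k]                # k nearest past states
    targets = x[idx + (m - 1) * tau + 1]   # their next observed values
    return targets.mean()

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(500)) + 0.1 * rng.standard_normal(500)
print(nn_predict(x))
```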
Abstract:
Masonry strength depends upon the characteristics of the masonry unit, the mortar and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper focuses on developing a three-dimensional non-linear FE model based on a micro-modelling approach to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the triaxial behaviour of concrete, has been adopted to model the failure of the masonry materials. The post-failure regime has been modelled by applying orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with the values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism. The FE model predicts the ultimate failure compressive stress to be close to 85% of the mean experimental compressive strength value.
Abstract:
A simple new series, using an expansion of the velocity profile in parabolic cylinder functions, has been developed to describe the nonlinear evolution of a steady, laminar, incompressible wake from a given arbitrary initial profile. The first term in this series is itself found to provide a very satisfactory prediction of the decay of the maximum velocity defect in the wake behind a flat plate or aft of the recirculation zone behind a symmetric blunt body. A detailed analysis, including higher-order terms, has been made of the flat-plate wake with a Blasius profile at the trailing edge. The same method yields, as a special case, complete results for the development of linearized wakes with arbitrary initial profile under the influence of arbitrary pressure gradients. Finally, for purposes of comparison, a simple approximate solution is obtained using momentum integral methods and is found to predict satisfactorily the decay of the maximum velocity defect.
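For reference, the classical linearized far-wake behaviour that the leading term of such a series must reproduce (standard boundary-layer theory, quoted here for orientation rather than taken from the paper):

```latex
% Linearized far wake behind a two-dimensional body: the maximum
% velocity defect decays as the inverse square root of the downstream
% distance, with a Gaussian profile across the wake:
\[
  \frac{\Delta u_{\max}(x)}{U_\infty} \;\propto\; x^{-1/2},
  \qquad
  \Delta u(x, y) \;\approx\; \Delta u_{\max}(x)\,
  \exp\!\left(-\frac{U_\infty\, y^{2}}{4 \nu x}\right).
\]
```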
Abstract:
One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, among the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time-varying does not further improve the density forecasts but, on the contrary, makes them slightly worse. In Essay 3 a new model incorporating conditional variance, skewness and kurtosis, based on the Normal Inverse Gaussian (NIG) distribution, is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis does not appear to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
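A minimal sketch of the GARCH(1,1) conditional variance recursion underlying Essay 1 (parameter values are placeholders; estimation and the alternative error distributions are omitted):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """One-step-ahead conditional variance path:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()          # common initialization choice
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(1000)   # placeholder daily returns
print(garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.9)[-1])
```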
Abstract:
A microscopic theory of the equilibrium solvation and solvation dynamics of a classical, polar solute molecule in a dipolar solvent is presented. Density functional theory is used to explicitly calculate the polarization structure around a solvated ion. The calculated solvent polarization structure differs from the continuum-model prediction in several respects. The value of the polarization at the surface of the ion is less than the continuum value. The solvent polarization also exhibits small oscillations in space near the ion. We show that, under certain approximations, our linear equilibrium theory reduces to the nonlocal electrostatic theory, with the dielectric function ε(k) of the liquid now wave vector (k) dependent. It is further shown that the nonlocal electrostatic estimate of the solvation energy, with a microscopic ε(k), is close to the estimates of linearized equilibrium theories of polar liquids. The study of solvation dynamics is based on a generalized Smoluchowski equation with a mean-field force term to take into account the effects of intermolecular interactions. This study incorporates the local distortion of the solvent structure near the ion and also the effects of the translational modes of the solvent molecules. The latter contribution, if significant, can considerably accelerate the relaxation of the solvent polarization and can even give rise to a long-time decay that agrees with the continuum-model prediction. The significance of these results is discussed.
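For comparison, the continuum (Born) solvation energy and its nonlocal generalization with a wave-vector-dependent dielectric function, in schematic form for an ion of charge q and radius a (standard background, not the paper's derivation):

```latex
% Continuum (Born) solvation energy of an ion of charge q and radius a:
\[
  E_{\mathrm{Born}} \;=\; -\frac{q^{2}}{2a}\left(1 - \frac{1}{\varepsilon_{0}}\right).
\]
% Nonlocal generalization: smear the charge over the ion surface and
% let the dielectric function depend on the wave vector k:
\[
  E_{\mathrm{solv}} \;=\; -\frac{q^{2}}{\pi}\int_{0}^{\infty}
  \frac{\sin^{2}(ka)}{(ka)^{2}}
  \left(1 - \frac{1}{\varepsilon(k)}\right)\mathrm{d}k,
\]
% which reduces to the Born result when \varepsilon(k) is replaced by
% the constant static value \varepsilon_0.
```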
Abstract:
A molecular theory of dielectric relaxation in a dense binary dipolar liquid is presented. The theory takes into account the effects of intra- and interspecies intermolecular interactions. It is shown that the relaxation is, in general, nonexponential. In certain limits, we recover the biexponential form traditionally used to analyze the experimental data of dielectric relaxation in a binary mixture. However, the relaxation times are widely different from the predictions of the noninteracting rotational diffusion model of Debye for a binary system. A detailed numerical evaluation of the frequency-dependent dielectric function ε(ω) is carried out by using the known analytic solution of the mean spherical approximation (MSA) model for the two-particle direct correlation function of a polar mixture. A microscopic expression for the wave vector (k) and frequency (ω) dependent dielectric function ε(k, ω) of a binary mixture is also presented. The theoretical predictions for ε(ω) (= ε(k = 0, ω)) have been compared with the available experimental results. In particular, the present theory offers a molecular explanation of the phenomenon of the fusing of the two relaxation channels of the neat liquids, observed by Schallamach many years ago.
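The noninteracting reference against which such results are usually compared is the single Debye form and its biexponential (two-Debye) generalization for a mixture (standard textbook expressions, quoted for orientation):

```latex
% Single Debye relaxation with static and infinite-frequency limits
% \varepsilon_0 and \varepsilon_\infty, and relaxation time \tau_D:
\[
  \varepsilon(\omega) \;=\; \varepsilon_{\infty}
  + \frac{\varepsilon_{0} - \varepsilon_{\infty}}{1 + i\omega\tau_{D}},
\]
% and the biexponential (two-Debye) form for a binary mixture:
\[
  \varepsilon(\omega) \;=\; \varepsilon_{\infty}
  + \sum_{j=1}^{2} \frac{\Delta\varepsilon_{j}}{1 + i\omega\tau_{j}}.
\]
```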
Abstract:
Representing and quantifying uncertainty in climate change impact studies is a difficult task. Several sources of uncertainty arise in studies of the hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from the various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measure of belief and plausibility in the results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, and are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in the projected frequencies of the SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change.
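A minimal sketch of Dempster's rule of combination for two basic probability assignments over drought-state hypotheses (the set labels and masses are illustrative; the paper also evaluates alternative combination rules):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two bpas (dicts mapping frozenset hypotheses to mass)
    with Dempster's rule: intersect focal elements, then renormalize
    by the total non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Hypotheses: D = drought, N = normal, W = wet (illustrative bpas,
# e.g. evidence from two different GCMs).
D, N, W = frozenset("D"), frozenset("N"), frozenset("W")
m_gcm1 = {D: 0.6, N | W: 0.4}
m_gcm2 = {D | N: 0.7, W: 0.3}
print(dempster_combine(m_gcm1, m_gcm2))
```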