952 results for Ruin Probability
Abstract:
We consider secret sharing with binary shares. This model allows us to use the well-developed theory of cryptographically strong boolean functions. We prove that for a given secret sharing, the average cheating probability over all cheating and original vectors, i.e., ρ̄ = 1/(n·2^n) ∑c=1..n ∑α∈Vn ρc,α, satisfies ρ̄ ≥ 1/2, and the equality holds if and only if ρc,α = 1/2 for every cheating vector δc and every original vector α. In this case the secret sharing is said to be cheating immune. We further establish a relationship between cheating-immune secret sharing and cryptographic criteria of boolean functions. This enables us to construct cheating-immune secret sharing.
Abstract:
The occurrence of extreme water level events along low-lying, highly populated and/or developed coastlines can lead to devastating impacts on coastal infrastructure. Therefore it is very important that the probabilities of extreme water levels are accurately evaluated to inform flood and coastal management and for future planning. The aim of this study was to provide estimates of present day extreme total water level exceedance probabilities around the whole coastline of Australia, arising from combinations of mean sea level, astronomical tide and storm surges generated by both extra-tropical and tropical storms, but exclusive of surface gravity waves. The study has been undertaken in two main stages. In the first stage, a high-resolution (~10 km along the coast) hydrodynamic depth averaged model has been configured for the whole coastline of Australia using the Danish Hydraulics Institute’s Mike21 modelling suite of tools. The model has been forced with astronomical tidal levels, derived from the TPX07.2 global tidal model, and meteorological fields, from the US National Center for Environmental Prediction’s global reanalysis, to generate a 61-year (1949 to 2009) hindcast of water levels. This model output has been validated against measurements from 30 tide gauge sites around Australia with long records. At each of the model grid points located around the coast, time series of annual maxima and the several highest water levels for each year were derived from the multi-decadal water level hindcast and fitted to extreme value distributions to estimate exceedance probabilities. Stage 1 provided a reliable estimate of the present day total water level exceedance probabilities around southern Australia, which is mainly impacted by extra-tropical storms. However, as the meteorological fields used to force the hydrodynamic model only weakly include the effects of tropical cyclones, the resultant water level exceedance probabilities were underestimated around western, northern and north-eastern Australia at higher return periods. Even if the resolution of the meteorological forcing were adequate to represent tropical cyclone-induced surges, multi-decadal periods would yield insufficient instances of tropical cyclones to enable the use of traditional extreme value extrapolation techniques. Therefore, in the second stage of the study, a statistical model of tropical cyclone tracks and central pressures was developed using historic observations. This model was then used to generate synthetic events that represented 10,000 years of cyclone activity for the Australia region, with characteristics based on the observed tropical cyclones over the last ~40 years. Wind and pressure fields, derived from these synthetic events using analytical profile models, were used to drive the hydrodynamic model to predict the associated storm surge response. A random time period was chosen during the tropical cyclone season, and astronomical tidal forcing for this period was included to account for non-linear interactions between the tidal and surge components. For each model grid point around the coast, annual maximum total levels for these synthetic events were calculated and used to estimate exceedance probabilities. The exceedance probabilities from stages 1 and 2 were then combined to provide a single estimate of present day extreme water level probabilities around the whole coastline of Australia.
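To make the extreme value step concrete, the following is a minimal sketch of fitting annual maxima to a generalised extreme value (GEV) distribution and reading off exceedance probabilities, assuming Python with scipy; the synthetic 61-year series is a hypothetical stand-in for the hindcast water levels at one coastal grid point.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical stand-in for a multi-decadal hindcast of annual maximum water levels (m)
rng = np.random.default_rng(0)
annual_maxima = 1.5 + 0.2 * rng.gumbel(size=61)   # 61 years, as in the 1949-2009 hindcast

# Fit a GEV distribution to the annual maxima
shape, loc, scale = genextreme.fit(annual_maxima)

# Exceedance probability of a given level, and the level with a 1% annual exceedance
# probability (the "100-year" return level)
level = 2.0
p_exceed = genextreme.sf(level, shape, loc=loc, scale=scale)
level_100yr = genextreme.isf(0.01, shape, loc=loc, scale=scale)
print(f"P(annual max > {level} m) = {p_exceed:.4f}, 100-year level = {level_100yr:.2f} m")
```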
Abstract:
A well-known attack on RSA with a low secret exponent d was given by Wiener about 15 years ago. Wiener showed that, using continued fractions, one can efficiently recover the secret exponent d from the public key (N, e) as long as d < N^{1/4}. Interestingly, Wiener stated that his attack may sometimes also work when d is slightly larger than N^{1/4}. This raises the question of how much larger d can be: could the attack work with non-negligible probability for d = N^{1/4 + ρ} for some constant ρ > 0? We answer this question in the negative by proving a converse to Wiener’s result. Our result shows that, for any fixed ε > 0 and all sufficiently large modulus lengths, Wiener’s attack succeeds with negligible probability over a random choice of d < N^δ (in an interval of size Ω(N^δ)) as soon as δ > 1/4 + ε. Thus Wiener’s success bound d < N^{1/4} cannot be substantially improved.
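For illustration, here is a minimal sketch of the continued-fraction step behind Wiener's attack: each convergent k/d of e/N is tested as a candidate for k/d in e·d = k·φ(N) + 1. The toy key at the end is a hypothetical textbook-sized example, not a realistic modulus.

```python
from math import isqrt

def convergents(e, N):
    """Convergents of the continued fraction expansion of e/N."""
    a, b = e, N
    num0, num1, den0, den1 = 0, 1, 1, 0
    while b:
        q = a // b
        a, b = b, a - q * b
        num0, num1 = num1, q * num1 + num0
        den0, den1 = den1, q * den1 + den0
        yield num1, den1                    # candidate (k, d) with k/d close to e/N

def wiener_attack(e, N):
    """Try to recover a small secret exponent d from (N, e); returns d or None."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k              # candidate value of phi(N)
        s = N - phi + 1                     # would equal p + q if phi is correct
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                        # p and q are integer roots, so d is the key
    return None

# Hypothetical toy key (p = 239, q = 379, d = 5, well below N^(1/4))
print(wiener_attack(17993, 90581))          # prints 5
```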
Abstract:
Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds Nr. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt’00, and show their inefficiency. Then we introduce a new method called XSL that uses the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in Nr, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always requires more work than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.
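To illustrate what "describable by an overdefined system of quadratic equations" means, the sketch below counts the independent quadratic relations satisfied by a hypothetical 3-bit S-box (not Rijndael's or Serpent's): all monomials of degree at most 2 in the input and output bits are evaluated over every input, and each vector in the GF(2) null space of the resulting matrix is an equation that holds with probability 1.

```python
from itertools import combinations

def bits(v, n):
    """Little-endian list of the n low bits of v."""
    return [(v >> i) & 1 for i in range(n)]

def monomials(x, y):
    """Values of all monomials of degree <= 2 in the input bits x and output bits y."""
    v = x + y
    return [1] + v + [a & b for a, b in combinations(v, 2)]

def gf2_nullity(rows, ncols):
    """Null-space dimension over GF(2) of the matrix whose columns are the monomials."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return ncols - rank

# Hypothetical 3-bit S-box (a toy stand-in, not a real cipher component)
sbox = [7, 6, 0, 4, 2, 5, 1, 3]
n = 3
rows = [monomials(bits(x, n), bits(sbox[x], n)) for x in range(2 ** n)]
ncols = len(rows[0])                       # 22 quadratic monomials for 3 input + 3 output bits
print(f"{gf2_nullity(rows, ncols)} independent quadratic equations hold with probability 1")
# At least ncols - 2^n = 14 such equations exist, so the system is heavily overdefined.
```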
Abstract:
We present a distinguishing attack against SOBER-128 with linear masking. We found a linear approximation which has a bias of 2^−8.8 for the non-linear filter. The attack applies the observation made by Ekdahl and Johansson that there is a sequence of clocks for which the linear combination of some states vanishes. This linear dependency allows the linear masking method to be applied. We also show that the bias of the distinguisher can be improved (or estimated more precisely) by considering quadratic terms of the approximation. The probability bias of the quadratic approximation used in the distinguisher is estimated to be O(2^−51.8), so we claim that SOBER-128 is distinguishable from a truly random cipher by observing O(2^103.6) keystream words.
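As an illustration of how such a probability bias can be estimated, the sketch below measures the bias of a linear approximation of a toy nonlinear filter empirically by sampling; the filter function and the input/output masks are hypothetical stand-ins, not SOBER-128's.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_filter(x):
    """Hypothetical nonlinear filter on 8-bit words (a stand-in, not SOBER-128's filter)."""
    return ((x * 167 + 13) ^ (x >> 3)) & 0xFF

def parity8(v):
    """Bitwise parity of 8-bit values (array-friendly)."""
    v ^= v >> 4
    v ^= v >> 2
    v ^= v >> 1
    return v & 1

# Linear approximation: parity(in_mask & x) == parity(out_mask & filter(x))
in_mask, out_mask = 0x2D, 0x53                      # hypothetical masks
x = rng.integers(0, 256, size=1_000_000, dtype=np.uint16)
lhs = parity8(in_mask & x)
rhs = parity8(out_mask & toy_filter(x))
bias = np.mean(lhs == rhs) - 0.5
print(f"estimated bias = {bias:+.5f}")              # ~1/bias^2 samples are needed to detect it
```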
Abstract:
This article presents the field applications and validations for the controlled Monte Carlo data generation scheme. This scheme was previously derived to assist the Mahalanobis squared distance–based damage identification method to cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as the test objects, which are then shown to be in need of enhancement to consolidate their conditions. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation and statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against those generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve the data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes this scheme well adapted to any type of input data with any (original) distributional condition.
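A minimal sketch of the Mahalanobis squared distance computation at the core of such identification methods is shown below; the baseline and test data are synthetic stand-ins rather than the civil engineering datasets of the study, and the 95th-percentile threshold is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: baseline (healthy-state) feature vectors and a batch of test vectors
baseline = rng.normal(size=(200, 6))                 # 200 observations, 6 damage-sensitive features
test = rng.normal(loc=0.8, size=(20, 6))             # shifted mean mimics a damaged state

mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(x):
    """Mahalanobis squared distance of each row of x from the baseline distribution."""
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# A simple threshold taken from the baseline itself; exceedances on test data flag potential damage
threshold = np.percentile(mahalanobis_sq(baseline), 95)
flags = mahalanobis_sq(test) > threshold
print(f"{flags.sum()} of {len(test)} test observations flagged")
```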
Abstract:
Textured silicon surfaces are widely used in the manufacturing of solar cells because they increase the light absorption probability and also provide antireflection properties. However, these Si surfaces have a high density of surface defects that need to be passivated. In this study, the effect of the microscopic surface texture on the plasma surface passivation of solar cells is investigated. The movement of 10^5 H+ ions in the texture-modified plasma sheath is studied by Monte Carlo numerical simulation. The hydrogen ions are driven by the combined electric field of the plasma sheath and the textured surface. The ion dynamics is simulated, and the relative ion distribution over the textured substrate is presented. This distribution can be used to interpret the quality of Si dangling bond saturation and, consequently, the direct plasma surface passivation.
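A much-simplified sketch of the Monte Carlo ion-tracking idea is given below, assuming a uniform sheath field rather than the combined field of the sheath and the textured surface used in the study; all parameter values and the reduced ion count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Constants (SI)
Q_E, M_H = 1.602e-19, 1.673e-27          # proton charge and mass

# Illustrative sheath parameters (assumptions, not taken from the study)
E_SHEATH = 2.0e4                         # V/m, mean field pointing toward the surface
DT, STEPS = 1.0e-10, 2000                # time step and number of steps
N_IONS = 1000                            # reduced stand-in for the 10^5 ions of the study

# Launch ions at the sheath edge with small random transverse velocities and integrate
# their motion toward the surface under the uniform field.
x = rng.uniform(0.0, 1.0e-5, size=N_IONS)            # lateral start position (m)
z = np.full(N_IONS, 1.0e-4)                           # height above the surface (m)
vx = rng.normal(0.0, 500.0, size=N_IONS)              # m/s
vz = np.zeros(N_IONS)

for _ in range(STEPS):
    vz -= (Q_E * E_SHEATH / M_H) * DT                 # acceleration toward the surface
    x += vx * DT
    z += vz * DT

# With the texture-modified field included, the landing positions would give the relative
# ion distribution over the textured substrate.
landed = z <= 0.0
print(f"{landed.sum()} of {N_IONS} ions reached the surface")
```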
Abstract:
Realizing the promise of molecularly targeted inhibitors for cancer therapy will require a new level of knowledge about how a drug target is wired into the control circuitry of a complex cellular network. Here we review general homeostatic principles of cellular networks that enable the cell to be resilient in the face of molecular perturbations, while at the same time being sensitive to subtle input signals. Insights into such mechanisms may facilitate the development of combination therapies that take advantage of the cellular control circuitry, with the aim of achieving higher efficacy at a lower drug dosage and with a reduced probability of drug-resistance development.
Abstract:
The results of comprehensive experimental studies of the operation, stability, and plasma parameters of the low-frequency (0.46 MHz) inductively coupled plasmas sustained by the internal oscillating rf current are reported. The rf plasma is generated by using a custom-designed configuration of the internal rf coil that comprises two perpendicular sets of eight currents in each direction. Various diagnostic tools, such as magnetic probes, optical emission spectroscopy, and an rf-compensated Langmuir probe, were used to investigate the electromagnetic, optical, and global properties of the argon plasma in wide ranges of the applied rf power and gas feedstock pressure. It is found that the uniformity of the electromagnetic field inside the plasma reactor is improved as compared to the conventional sources of inductively coupled plasmas with the external flat coil configuration. A reasonable agreement between the experimental data and the computed electromagnetic field topography inside the chamber is reported. The Langmuir probe measurements reveal that the spatial profiles of the electron density, the effective electron temperature, the plasma potential, and the electron energy distribution/probability functions feature a high degree of radial and axial uniformity and a weak azimuthal dependence, which is consistent with earlier theoretical predictions. As the input rf power increases, the azimuthal dependence of the global plasma parameters vanishes. The obtained results demonstrate that by introducing the internal oscillating rf currents one can noticeably improve the uniformity of the electromagnetic field topography, rf power deposition, and the plasma density in the reactor.
Abstract:
Objective To summarise how costs and health benefits will change with the adoption of total laparoscopic hysterectomy compared to total abdominal hysterectomy for the treatment of early stage endometrial cancer. Design Cost-effectiveness modelling using the information from a randomised controlled trial. Participants Two hypothetical modelled cohorts of 1000 individuals undergoing total laparoscopic hysterectomy and total abdominal hysterectomy. Outcome measures Surgery costs; hospital bed days used; total healthcare costs; quality-adjusted life years; and net monetary benefits. Results For 1000 individuals receiving total laparoscopic hysterectomy, surgery costs were $509 575 higher, 3548 fewer hospital bed days were used, and total health services costs were reduced by $3 746 221. There were 39.13 more quality-adjusted life years over the 5 year period following surgery. Conclusions The adoption of total laparoscopic hysterectomy is almost certainly a good decision for health services policy makers. There is a 100% probability that it will be cost saving to health services, an 86.8% probability that it will increase health benefits, and a 99.5% probability that it returns net monetary benefits greater than zero.
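A minimal worked computation of the net monetary benefit implied by the figures above is shown below; the willingness-to-pay threshold is an assumed illustrative value, not taken from the study.

```python
# Figures quoted in the abstract, per 1000 patients over 5 years
delta_cost = -3_746_221          # total health services costs reduced by $3 746 221
delta_qaly = 39.13               # additional quality-adjusted life years

# Net monetary benefit = WTP * delta_QALY - delta_cost; the willingness-to-pay
# threshold below is an assumed illustrative value, not taken from the study.
wtp = 50_000                     # $ per QALY (assumption)
nmb = wtp * delta_qaly - delta_cost
print(f"Net monetary benefit per 1000 patients: ${nmb:,.0f}")   # positive favours adoption
```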
Abstract:
Control and diagnostics of low-frequency (∼500 kHz) inductively coupled plasmas for chemical vapor deposition (CVD) of nano-composite carbon nitride-based films are reported. The relation between the discharge control parameters, the plasma electron energy distribution/probability functions (EEDF/EEPF), and the elemental composition of the deposited C-N based thin films is investigated. The Langmuir probe technique is employed to monitor the plasma density and potential, the effective electron temperature, and the EEDFs/EEPFs in Ar + N2 + CH4 discharges. It is revealed that by varying the RF power and the gas composition/pressure one can engineer the EEDFs/EEPFs to enhance the desired plasma-chemical gas-phase reactions, thus controlling the film chemical structure. Auxiliary diagnostic tools for the study of the RF power deposition, plasma composition, stability, and optical emission are discussed as well.
Abstract:
The paper investigates the design of secret sharing that is immune against cheating (as defined by the Tompa-Woll attack). We examine secret sharing with binary shares and secrets. Bounds on the probability of successful cheating are given for two cases. The first case relates to secret sharing based on bent functions and results in a non-perfect scheme. The second case considers perfect secret sharing built on highly nonlinear balanced Boolean functions.
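Since such constructions rely on standard cryptographic criteria of boolean functions (bentness, balance, high nonlinearity), the sketch below checks those criteria for a candidate defining function via the Walsh-Hadamard transform; the 4-variable function used is a hypothetical example, not one of the paper's constructions.

```python
def walsh_hadamard(f_table, n):
    """Walsh-Hadamard spectrum of a boolean function given by its truth table."""
    spectrum = []
    for w in range(2 ** n):
        s = 0
        for x in range(2 ** n):
            dot = bin(w & x).count("1") & 1          # w . x over GF(2)
            s += (-1) ** (f_table[x] ^ dot)
        spectrum.append(s)
    return spectrum

# Hypothetical 4-variable defining function f(x1,...,x4) = x1 x2 XOR x3 x4 (a bent function)
n = 4
f_table = [((x >> 3 & 1) & (x >> 2 & 1)) ^ ((x >> 1 & 1) & (x & 1)) for x in range(2 ** n)]

W = walsh_hadamard(f_table, n)
balanced = (W[0] == 0)                               # W_f(0) = 0  <=>  f is balanced
nonlinearity = 2 ** (n - 1) - max(abs(v) for v in W) // 2
print(balanced, nonlinearity)                        # bent: not balanced, nonlinearity 2^(n-1) - 2^(n/2 - 1) = 6
```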
Abstract:
The paper addresses cheating prevention in secret sharing. We consider secret sharing with binary shares. The secret is also binary. This model allows us to use results and constructions from the well-developed theory of cryptographically strong boolean functions. In particular, we prove that for a given secret sharing, the average cheating probability over all cheating vectors and all original vectors, i.e., 1/(n·2^n) ∑c=1..n ∑α∈Vn ρc,α, denoted by ρ̄, satisfies ρ̄ ≥ ½, and the equality holds if and only if ρc,α = ½ for every cheating vector δc and every original vector α. In this case the secret sharing is said to be cheating immune. We further establish a relationship between cheating-immune secret sharing and cryptographic criteria of boolean functions. This enables us to construct cheating-immune secret sharing.
Abstract:
In the electricity market environment, load-serving entities (LSEs) will inevitably face risks in purchasing electricity because there are a plethora of uncertainties involved. To maximize profits and minimize risks, LSEs need to develop an optimal strategy to reasonably allocate the purchased electricity amount among different electricity markets such as the spot market, bilateral contract market, and options market. Because risks originate from uncertainties, an approach is presented to address the risk evaluation problem by the combined use of the lower partial moment and information entropy (LPME). The lower partial moment is used to measure the amount and probability of the loss, whereas the information entropy is used to represent the uncertainty of the loss. Electricity purchasing is a repeated procedure; therefore, the model presented represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio in different markets subject to the chance constraint that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. Then, the particle swarm optimization (PSO) algorithm is employed to solve the optimization model. Finally, an example is used to illustrate the basic features of the developed model and method.
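A minimal sketch of the lower-partial-moment and entropy risk measures described above is given below; the simulated profit distribution, the profit target and the LPM order are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated profit scenarios for a candidate purchasing portfolio (illustrative only)
profit = rng.normal(loc=100.0, scale=30.0, size=10_000)

target = 90.0      # profit target below which outcomes count as losses
order = 2          # LPM order (2 penalises larger shortfalls quadratically)

# Lower partial moment: mean of the shortfall below the target, raised to the given order
shortfall = np.maximum(target - profit, 0.0)
lpm = np.mean(shortfall ** order)

# Information entropy of the loss: histogram the shortfalls of the losing scenarios
losses = shortfall[shortfall > 0]
p, _ = np.histogram(losses, bins=20)
p = p[p > 0] / p.sum()
entropy = -np.sum(p * np.log(p))

print(f"LPM_{order}(target={target}) = {lpm:.2f}, loss entropy = {entropy:.3f} nats")
```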
Abstract:
This thesis presents four essays in the political economy of elections and reforms. The first study exploits discontinuities around school entry cut-off dates to show that early childhood conditions can affect the probability of becoming a top-flight politician. The second study provides empirical estimates of the effect of sequential voting on turnout and bandwagon voting outside the laboratory. The third work describes a novel nonparametric strategy to identify tactical voting patterns directly from balloting results using British election data. Finally, a study is put forward that examines the political feasibility of reforms.