844 results for RANDOM-CLUSTER MODEL
Abstract:
We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
Abstract:
In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time-invariant safety. Since the time-invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempts have been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking on how accurately the modified EB method estimates safety in the presence of time-variant safety and regression-to-the-mean (RTM) effects. This study derives the closed-form marginal distribution, and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method by simultaneously accounting for the RTM and time-variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of time-variant safety and RTM effects.
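To make the EB idea above concrete, the following is a minimal sketch of the standard (unmodified) Poisson-gamma empirical Bayes estimator that the modified method builds on. It assumes a gamma(α, α/μ) prior around a regression prediction μ; the function name and all numbers are illustrative, not from the study.

```python
# Minimal sketch of the standard empirical Bayes (EB) safety estimate
# under a Poisson-gamma (negative binomial) model. Hypothetical values.

def eb_estimate(y, mu, alpha):
    """Posterior-mean crash frequency for one site.

    y     : observed crash count at the site
    mu    : model-predicted mean crash count (e.g. from an NB regression)
    alpha : gamma shape hyper-parameter controlling over-dispersion
    """
    # With a gamma(alpha, alpha/mu) prior and a Poisson likelihood, the
    # posterior mean is a weighted average of prediction and observation.
    w = alpha / (alpha + mu)
    return w * mu + (1.0 - w) * y

print(eb_estimate(y=12, mu=5.0, alpha=2.0))  # shrinks 12 toward 5
```

The weight w tends to 1 as over-dispersion vanishes (α grows), so sites with reliable predictions are shrunk more strongly toward the model; the modified EB / NM framework in the abstract extends this setup to time-variant safety.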
Abstract:
The Poisson distribution has often been used for count data such as accident records. The Negative Binomial (NB) distribution has been adopted for count data to deal with the over-dispersion problem. However, the Poisson and NB distributions are incapable of accounting for some unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the presence of excess zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivation for the Random Effect models. This paper proposes an effective way of fitting the ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
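For reference, the dual-state likelihood the ZIP model uses can be sketched as follows; this is the generic ZIP probability mass function, not the paper's random-effects extension, and the parameter names (`pi`, `lam`) are illustrative.

```python
import math

def zip_logpmf(y, pi, lam):
    """Log-probability of an accident count y under a zero-inflated Poisson.

    pi  : probability of the structural-zero (safe) state
    lam : Poisson mean in the count-generating state
    """
    if y == 0:
        # A zero can come from the structural-zero state or from Poisson(lam).
        return math.log(pi + (1.0 - pi) * math.exp(-lam))
    # Positive counts can only come from the Poisson state.
    return math.log(1.0 - pi) + y * math.log(lam) - lam - math.lgamma(y + 1)
```

The random-effects version described in the abstract would add a location-specific term to `lam` (and possibly `pi`), which is part of why Bayesian estimation is recommended for calibration.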
Abstract:
Client puzzles are cryptographic problems that are neither easy nor hard to solve. Most puzzles are based on either number-theoretic or hash-inversion problems. Hash-based puzzles are very efficient but so far have been shown secure only in the random oracle model; number-theoretic puzzles, while secure in the standard model, tend to be inefficient. In this paper, we solve the problem of constructing cryptographic puzzles that are secure in the standard model and are very efficient. We present an efficient number-theoretic puzzle that satisfies the puzzle security definition of Chen et al. (ASIACRYPT 2009). To prove the security of our puzzle, we introduce a new variant of the interval discrete logarithm assumption which may be of independent interest, and show this new problem to be hard under reasonable assumptions. Our experimental results show that, for a 512-bit modulus, the solution verification time of our proposed puzzle can be up to 50x and 89x faster than the Karame-Capkun puzzle and Rivest et al.'s time-lock puzzle, respectively. In particular, the solution verification time of our puzzle is only 1.4x slower than that of Chen et al.'s efficient hash-based puzzle.
Abstract:
Cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. The rich sources of prior information in IGRT are incorporated into a hidden Markov random field model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk. The voxel labels are estimated using iterated conditional modes. The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom. The mean voxel-wise misclassification rate was 6.2%, with a Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
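The labeling step described above (iterated conditional modes on a Markov random field) can be illustrated in miniature. The toy below segments a 2-D grid under a Potts smoothing prior and assumes the class means equal the label indices; it is a sketch of the general technique, not the paper's CBCT model, and `beta` is a hypothetical smoothing weight.

```python
def icm_labels(obs, n_labels=2, beta=1.0, sweeps=5):
    """Assign a label to each pixel of a 2-D grid by iterated conditional
    modes (ICM) under a Potts prior. Assumes class k has mean intensity k."""
    h, w = len(obs), len(obs[0])
    # Initialise each pixel from the nearest class mean.
    labels = [[min(range(n_labels), key=lambda k: (obs[i][j] - k) ** 2)
               for j in range(w)] for i in range(h)]
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                def energy(k):
                    data = (obs[i][j] - k) ** 2          # squared-error data term
                    smooth = sum(labels[a][b] != k       # Potts neighbour term
                                 for a, b in ((i - 1, j), (i + 1, j),
                                              (i, j - 1), (i, j + 1))
                                 if 0 <= a < h and 0 <= b < w)
                    return data + beta * smooth
                # ICM: greedily pick the lowest-energy label for this pixel.
                labels[i][j] = min(range(n_labels), key=energy)
    return labels
```

A single isolated noisy pixel is absorbed into its neighbourhood after one sweep, which is exactly the smoothing behaviour a spatial prior contributes to segmentation.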
Abstract:
We describe a short signature scheme that is strongly existentially unforgeable under an adaptive chosen message attack in the standard security model. Our construction works in groups equipped with an efficient bilinear map, or, more generally, an algorithm for the Decision Diffie-Hellman problem. The security of our scheme depends on a new intractability assumption we call Strong Diffie-Hellman (SDH), by analogy to the Strong RSA assumption with which it shares many properties. Signature generation in our system is fast and the resulting signatures are as short as DSA signatures for comparable security. We give a tight reduction proving that our scheme is secure in any group in which the SDH assumption holds, without relying on the random oracle model.
Abstract:
This study analyses and compares the cost efficiency of Japanese steam power generation companies using the fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity in modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO2 emissions can lead to a decrease in total cost. The study finally discusses the efficiency variations between the energy companies under analysis, and elaborates on the managerial and policy implications of the results.
Abstract:
Policy makers, urban planners and economic geographers readily acknowledge the potential value of industrial clustering. Clusters attract policy makers’ interest because it is widely held that they are a way of connecting agglomeration to innovation and human capital to investment. Urban planners view clustering as a way of enticing creative human capital, the so-called ‘creative class’, that is, creative people are predisposed to live where there is a range of cultural infrastructure and amenities. Economists and geographers have contrived to promote clustering as a solution to stalled regional development. In the People’s Republic of China, over the past decade the cluster has become the default setting of the cultural and creative industries, the latter a composite term applied to the quantifiable outputs of artists, designers and media workers as well as related service sectors such as tourism, advertising and management. The thinking behind many cluster projects is to ‘pick winners’. In this sense the rapid expansion in the number of cultural and creative clusters in China over the past decade is not so very different from the early 1990s, a period that saw an outbreak of innovation parks, most of which inevitably failed to deliver measurable innovation and ultimately served as revenue-generating sources for district governments via real estate speculation. Since the early years of the first decade of the new millennium the cluster model has been pressed into the service of cultural development.
Abstract:
Vertebral fracture risk is a heritable complex trait. The aim of this study was to identify genetic susceptibility factors for osteoporotic vertebral fractures applying a genome-wide association study (GWAS) approach. The GWAS discovery was based on the Rotterdam Study, a population-based study of elderly Dutch individuals aged >55 years, comprising 329 cases and 2666 controls with radiographic scoring (McCloskey-Kanis) and genetic data. Replication of one top-associated SNP was pursued by de novo genotyping of 15 independent studies across Europe, the United States, and Australia and one Asian study. Radiographic vertebral fracture assessment was performed using McCloskey-Kanis or Genant semi-quantitative definitions. SNPs were analyzed in relation to vertebral fracture using logistic regression models corrected for age and sex. Fixed effects inverse variance and Han-Eskin alternative random effects meta-analyses were applied. Genome-wide significance was set at p < 5×10⁻⁸. In the discovery, a SNP (rs11645938) on chromosome 16q24 was associated with the risk for vertebral fractures at p = 4.6×10⁻⁸. However, the association was not significant across 5720 cases and 21,791 controls from 14 studies. The fixed-effects meta-analysis summary estimate was 1.06 (95% CI: 0.98-1.14; p = 0.17), displaying a high degree of heterogeneity (I² = 57%; Qhet p = 0.0006). Under the Han-Eskin alternative random effects model the summary effect was significant (p = 0.0005). The SNP maps to a region previously found associated with lumbar spine bone mineral density (LS-BMD) in two large meta-analyses from the GEFOS consortium. A false positive association in the GWAS discovery cannot be excluded; yet the low-powered setting of the discovery and replication (appropriate to identify a risk effect size >1.25) may still be consistent with an effect size <1.10, more of the type expected in complex traits.
Larger effort in studies with standardized phenotype definitions is needed to confirm or reject the involvement of this locus on the risk for vertebral fractures.
Abstract:
Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as examining the role of the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles have a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant.
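For contrast with the exact analysis described in the abstract, here is the kind of crowded-transport simulation it refers to, for the simplest case of a point agent and single-site obstacles. All parameters (lattice size, walker and step counts, seed) are illustrative; D is recovered from the mean squared displacement via MSD = 4Dt in two dimensions.

```python
import random

def estimate_diffusivity(obstacle_density, steps=200, walkers=2000, size=50):
    """Estimate the Fickian diffusivity D of a point agent on a periodic 2-D
    square lattice with immobile single-site obstacles (a simulation sketch;
    the paper's analysis computes D exactly, without simulation)."""
    random.seed(0)  # fixed seed so the illustration is reproducible
    occupied = {(i, j) for i in range(size) for j in range(size)
                if random.random() < obstacle_density}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = 0.0
    for _ in range(walkers):
        x = y = 0
        for _ in range(steps):
            dx, dy = random.choice(moves)
            # A move onto an obstacle site is aborted (blocking rule).
            if ((x + dx) % size, (y + dy) % size) not in occupied:
                x, y = x + dx, y + dy
        msd += x * x + y * y
    msd /= walkers
    # In 2-D, MSD = 4 D t with unit step length and unit time step.
    return msd / (4.0 * steps)

print(estimate_diffusivity(0.0))  # ≈ 0.25 for an obstacle-free lattice
```

With no obstacles the estimate approaches the unobstructed value D = 1/4 (unit steps, unit time), and it decreases as the obstacle density grows, which is the hindrance effect the abstract quantifies exactly.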
Abstract:
Potassium disilicate glass and melt have been investigated by using a new partial charge based potential model in which nonbridging oxygens are differentiated from bridging oxygens by their charges. The model reproduces the structural data pertaining to the coordination polyhedra around potassium and the various bond angle distributions excellently. The dynamics of the glass has been studied by using space and time correlation functions. It is found that K ions migrate by a diffusive mechanism in the melt and by hops below the glass transition temperature. They are also found to migrate largely through nonbridging oxygen-rich sites in the silicate matrix, thus providing support to the predictions of the modified random network model.
Abstract:
We have imaged the H92α and H75α radio recombination line (RRL) emissions from the starburst galaxy NGC 253 with a resolution of ~4 pc. The peak of the RRL emission at both frequencies coincides with the unresolved radio nucleus. Both lines observed toward the nucleus are extremely wide, with FWHMs of ~200 km s⁻¹. Modeling the RRL and radio continuum data for the radio nucleus shows that the lines arise in gas whose density is ~10⁴ cm⁻³ and mass is a few thousand M☉, which requires an ionizing flux of (6-20)×10⁵¹ photons s⁻¹. We consider a supernova remnant (SNR) expanding in a dense medium, a star cluster, and also an active galactic nucleus (AGN) as potential ionizing sources. Based on dynamical arguments, we rule out an SNR as a viable ionizing source. A star cluster model is considered, and the dynamics of the ionized gas in a stellar-wind driven structure are investigated. Such a model is only consistent with the properties of the ionized gas for a cluster younger than ~10⁵ yr. The existence of such a young cluster at the nucleus seems improbable. The third model assumes the ionizing source to be an AGN at the nucleus. In this model, it is shown that the observed X-ray flux is too weak to account for the required ionizing photon flux. However, the ionization requirement can be explained if the accretion disk is assumed to have a big blue bump in its spectrum. Hence, we favor an AGN at the nucleus as the source responsible for ionizing the gas that produces the observed RRLs. A hybrid model consisting of an inner advection-dominated accretion flow disk and an outer thin disk is suggested, which could explain the radio, UV, and X-ray luminosities of the nucleus.
Abstract:
We study the coverage in sensor networks having two types of nodes, sensor and backbone nodes. Each sensor is capable of transmitting information over relatively small distances. The backbone nodes collect information from the sensors. This information is processed and communicated over an ad hoc network formed by the backbone nodes, which are capable of transmitting over much larger distances. We consider two modes of deployment of sensors, one a Poisson-Poisson cluster model and the other a dependently-thinned Poisson point process. We deduce limit laws for functionals of vacancy in both models using properties of association for random measures.