989 results for lattice Boltzmann method
Abstract:
Detecting anomalies in online social networks is a significant task, as it helps reveal useful and interesting information about user behavior on the network. This paper proposes a rule-based hybrid method that uses graph theory, fuzzy clustering and fuzzy rules to model the user relationships inherent in online social networks and to identify anomalies. Fuzzy C-Means clustering is used to cluster the data, and a fuzzy inference engine is used to generate rules based on the cluster behavior. The proposed method achieves improved accuracy in identifying anomalies compared with existing methods.
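A minimal numpy sketch of the Fuzzy C-Means step described above, using the standard FCM update equations; the cluster count, fuzzifier m, toy data, and the low-membership anomaly flag at the end are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard Fuzzy C-Means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        # u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, U

# Toy illustration: points whose maximum membership is low sit between
# clusters and could be handed to a rule-based stage for inspection.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
candidates = np.where(U.max(axis=1) < 0.6)[0]
```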
Abstract:
We present a technique for delegating a short lattice basis that has the advantage of keeping the lattice dimension unchanged upon delegation. Building on this result, we construct two new hierarchical identity-based encryption (HIBE) schemes, with and without random oracles. The resulting systems are very different from earlier lattice-based HIBEs and in some cases result in shorter ciphertexts and private keys. We prove security from classic lattice hardness assumptions.
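For context, the fixed-dimension delegation can be sketched roughly as follows; the notation here is assumed, not quoted from the paper:

```latex
% Fixed-dimension delegation (assumed notation): a short basis T_A of
% Lambda_q^perp(A), A in Z_q^{n x m}, together with a low-norm
% Z_q-invertible R in Z^{m x m}, yields a short basis of the delegated
% lattice without growing m, whereas concatenation-based delegation
% appends columns and grows the dimension at every level.
\[
  \mathrm{BasisDel}(A, R, T_A) \;\longmapsto\;
  T_B \text{ a short basis of } \Lambda_q^{\perp}(B),
  \qquad B = A R^{-1} \in \mathbb{Z}_q^{n \times m}.
\]
```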
Abstract:
We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively secure IBE and a hierarchical IBE.
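A rough sketch of the two-trapdoor mechanism in the style of this line of constructions; the notation is assumed here and details in the paper may differ:

```latex
% Lattice family indexed by identities (assumed notation).
% Real system: a short basis of Lambda_q^perp(A) samples short vectors
% in Lambda_q^perp(F_id) for every id.
% Simulation: set A_1 = A R - H(id*) B for low-norm R, so that
% F_id = [ A | A R + (H(id) - H(id*)) B ]; a trapdoor for B then yields
% short vectors for every id except the challenge identity id*.
\[
  F_{id} = \bigl[\, A \;\big|\; A_1 + H(id)\, B \,\bigr]
         \in \mathbb{Z}_q^{n \times 2m}.
\]
```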
Abstract:
We propose a framework for adaptive security from hard random lattices in the standard model. Our approach borrows from the recent Agrawal-Boneh-Boyen families of lattices, which can admit reliable and punctured trapdoors, respectively used in reality and in simulation. We extend this idea to make the simulation trapdoors cancel not for a specific forgery but on a non-negligible subset of the possible challenges. Conceptually, we build a compactly representable, large family of input-dependent “mixture” lattices, set up with trapdoors that “vanish” for a secret subset which we hope the forger will target. Technically, we tweak the lattice structure to achieve “naturally nice” distributions for arbitrary choices of subset size. The framework is very general. Here we obtain fully secure signatures, and also IBE, that are compact, simple, and elegant.
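The "mixture" lattices and vanishing trapdoors can be sketched roughly as follows, with assumed notation; the paper's actual distributions and parameters are more subtle:

```latex
% Mixture lattices indexed by message bits M = (M_1, ..., M_k)
% (assumed notation). In simulation, C_i = A R_i + h_i B for secret
% scalars h_i, so F_M = [ A | A R_M + h(M) B ]; the B-trapdoor
% "vanishes" exactly on the secret subset { M : h(M) = 0 (mod q) },
% which the reduction hopes the forger will target.
\[
  F_M = \Bigl[\, A \;\Big|\; C_0 + \textstyle\sum_{i=1}^{k} M_i\, C_i \,\Bigr],
  \qquad
  h(M) = h_0 + \textstyle\sum_{i=1}^{k} M_i\, h_i .
\]
```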
Abstract:
The notion of certificateless public-key encryption (CL-PKE) was introduced by Al-Riyami and Paterson in 2003 to avoid the drawbacks of both traditional PKI-based public-key encryption (i.e., the need to establish a public-key infrastructure) and identity-based encryption (i.e., key escrow). Thus CL-PKE, like identity-based encryption, is certificate-free, and, unlike identity-based encryption, is key-escrow-free. In this paper, we introduce a simple and efficient CCA-secure CL-PKE scheme based on (hierarchical) identity-based encryption. Our construction is of both theoretical and practical interest. First, our generic transformation gives a new way of constructing CCA-secure CL-PKE. Second, instantiating our transformation with lattice-based primitives yields a more efficient CCA-secure CL-PKE scheme than its counterpart introduced by Dent in 2008.
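A structural sketch of one folklore way to combine an IBE with a user-chosen key pair into a certificateless scheme; the interfaces below are assumptions, and the paper's actual transformation (and its CCA hardening) may differ from this naive nesting:

```python
from typing import Protocol, Tuple

class IBE(Protocol):
    def extract(self, msk: bytes, identity: str) -> bytes: ...
    def enc(self, mpk: bytes, identity: str, msg: bytes) -> bytes: ...
    def dec(self, d_id: bytes, ct: bytes) -> bytes: ...

class PKE(Protocol):
    def gen(self) -> Tuple[bytes, bytes]: ...            # (upk, usk)
    def enc(self, upk: bytes, msg: bytes) -> bytes: ...
    def dec(self, usk: bytes, ct: bytes) -> bytes: ...

def cl_encrypt(ibe: IBE, pke: PKE, mpk: bytes, identity: str,
               upk: bytes, msg: bytes) -> bytes:
    # Nest the two encryptions: decryption needs BOTH the KGC-issued
    # partial key d_id (so no certificate for upk is required) and the
    # user's usk (so the KGC cannot decrypt, i.e. no key escrow).
    return ibe.enc(mpk, identity, pke.enc(upk, msg))

def cl_decrypt(ibe: IBE, pke: PKE, d_id: bytes, usk: bytes,
               ct: bytes) -> bytes:
    return pke.dec(usk, ibe.dec(d_id, ct))
```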
Abstract:
Dose-finding trials are clinical trials whose primary objective is to estimate the optimum dose of an investigational new drug when given to patients. This thesis develops and explores three novel dose-finding design methodologies. All design methodologies presented in this thesis are pragmatic: they use statistical models, incorporate clinicians' prior knowledge efficiently, and can stop a trial prematurely for safety or futility reasons. Designing actual dose-finding trials with these methodologies will minimize practical difficulties, improve the efficiency of dose estimation, allow flexible early stopping, and reduce possible patient discomfort or harm.
Abstract:
2,2′-Biphenols are a large and diverse group of compounds with exceptional properties both as ligands and as bioactive agents. Traditional methods for their synthesis by oxidative dimerisation are often problematic and lead to mixtures of ortho- and para-connected regioisomers. To compound these issues, an intermolecular dimerisation strategy is often inappropriate for the synthesis of heterodimers. The ‘acetal method’ provides a solution to these problems: stepwise tethering of two monomeric phenols enables heterodimer synthesis, enforces ortho regioselectivity and allows relatively facile and selective intramolecular reactions to take place. The resulting dibenzo[1,3]dioxepines have been analysed by quantum chemical calculations to obtain information about the activation barrier for the ring flip between the enantiomers. Hydrolytic removal of the dioxepine acetal unit revealed the 2,2′-biphenol target.
Abstract:
Musculoskeletal pain is commonly reported by police officers. A potential cause of officer discomfort is a mismatch between vehicle seats and the method used for carrying appointments. Twenty-five police officers rated their discomfort while seated in: (1) a standard police vehicle seat, and (2) a vehicle seat custom-designed for police use. Discomfort was recorded in both seats while wearing police appointments on: (1) a traditional appointments belt, and (2) a load-bearing vest/belt combination (LBV). Sitting in the standard vehicle seat and carrying appointments on a traditional appointments belt were both associated with significantly elevated discomfort. Four vehicle seat features were most implicated as contributing to discomfort: backrest bolster prominence, lumbar region support, seat cushion width, and seat cushion bolster depth. Authorising the carriage of appointments on an LBV is a lower-cost solution with the potential to reduce officer discomfort. Furthermore, the introduction of custom-designed vehicle seats should be considered.
Abstract:
Spatially explicit modelling of grassland classes is important for site-specific planning to improve grassland and environmental management over large areas. In this study, a climate-based grassland classification model, the Comprehensive and Sequential Classification System (CSCS), was integrated with spatially interpolated climate data to classify grassland in Gansu province, China. The study area is characterized by complex topographic features imposed by plateaus, high mountains, basins and deserts. To improve the quality of the interpolated climate data and of the spatial classification over this complex topography, three linear-regression interpolation methods for the climate variables were evaluated: an analytic method based on multiple regression and residuals (AMMRR); a modification of AMMRR that adds the effects of slope and aspect to the interpolation analysis (M-AMMRR); and a method that replaces the inverse-distance-weighted (IDW) residual interpolation of M-AMMRR with ordinary kriging (I-AMMRR). The outcomes from the best interpolation method were then used in the CSCS model to classify the grassland in the study area. The interpolated climate variables were annual cumulative temperature and annual total precipitation. The results indicated that the AMMRR and M-AMMRR methods generated acceptable climate surfaces, but the best model fit and cross-validation result were achieved by I-AMMRR. Twenty-six grassland classes were identified for the study area. The four grassland vegetation classes that together covered more than half of the total study area were "cool temperate-arid temperate zonal semi-desert", "cool temperate-humid forest steppe and deciduous broad-leaved forest", "temperate-extra-arid temperate zonal desert", and "frigid per-humid rain tundra and alpine meadow". The vegetation classification map generated in this study provides spatial information on the locations and extents of the different grassland classes. This information can be used to facilitate government agencies' decision-making in land-use planning and environmental management, and for vegetation and biodiversity conservation. It can also assist land managers in estimating safe carrying capacities, helping to prevent overgrazing and land degradation.
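A minimal numpy sketch of the regression-plus-residual-interpolation idea shared by the AMMRR family; the covariates, IDW exponent, and function names are illustrative assumptions (the paper's variants add slope/aspect terms, and I-AMMRR swaps the IDW step for ordinary kriging):

```python
import numpy as np

def fit_trend(covariates, values):
    """Multiple linear regression of a climate variable on site
    covariates (e.g. longitude, latitude, elevation)."""
    X = np.column_stack([np.ones(len(values)), covariates])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    residuals = values - X @ beta
    return beta, residuals

def idw(coords, residuals, targets, power=2.0):
    """Inverse-distance-weighted interpolation of the regression
    residuals (the step that I-AMMRR replaces with kriging)."""
    d = np.linalg.norm(targets[:, None, :] - coords[None, :, :], axis=2)
    w = 1.0 / np.fmax(d, 1e-9) ** power
    return (w @ residuals) / w.sum(axis=1)

def predict(beta, coords, residuals, target_cov, target_xy):
    """Prediction at a new site = regression trend + interpolated residual."""
    trend = np.concatenate([[1.0], target_cov]) @ beta
    return trend + idw(coords, residuals, target_xy[None, :])[0]
```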
Abstract:
Disjoint top-view camera networks are among the most commonly deployed camera networks in many applications. One of the open questions in their study is the computation of the extrinsic parameters (positions and orientations), known as extrinsic calibration or camera localization. Current approaches either rely on strict assumptions about the object motion to obtain accurate results, or fail to provide high accuracy without such motion assumptions. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which the object will pass opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and the object trajectory, based on the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of the field of view), is presented to assess the quality of the calibration results in an indoor object-tracking context. Finally, results demonstrate the superior performance of the proposed method over a state-of-the-art algorithm, in terms of both the presented MABR and a classical evaluation metric, in simulations and real experiments.
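In symbols, the joint inference has roughly the following MAP shape; the notation here is assumed, not taken from the paper:

```latex
% Joint MAP over camera extrinsics Theta and object trajectory X,
% given the cameras' observations Z and the known locations L; the
% known locations enter through the trajectory prior p(X | L) and
% constrain the otherwise under-determined extrinsics.
\[
  (\hat{\Theta}, \hat{X})
    = \arg\max_{\Theta,\, X}\;
      p(Z \mid \Theta, X)\; p(X \mid L)\; p(\Theta).
\]
```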
Abstract:
The invention of asymmetric encryption back in the seventies was a conceptual leap that vastly increased the expressive power of the encryption of the time. For the first time, it allowed the sender of a message to designate the intended recipient in a cryptographic way, expressed as a “public key” that was related to but distinct from the “private key” that, alone, embodied the ability to decrypt. This made large-scale encryption a practical and scalable endeavour, and more than anything else—save the internet itself—led to the advent of electronic commerce as we know and practice it today.
Abstract:
This paper presents ongoing work toward constructing an efficient completely non-malleable public-key encryption scheme based on lattices in the standard (common reference string) model. An encryption scheme is completely non-malleable if attackers have only negligible advantage, even when they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti proposed two inefficient constructions of completely non-malleable schemes: one in the common reference string model using non-interactive zero-knowledge proofs, and another using interactive encryption schemes. Recently, two efficient public-key encryption schemes have been proposed, both based on pairing-based identity-based encryption.
Abstract:
A sub-domain smoothed Galerkin method is proposed to integrate the advantages of the mesh-free Galerkin method and the FEM. Arbitrarily shaped sub-domains are predefined in the problem domain with mesh-free nodes. In each sub-domain, based on the mesh-free Galerkin weak formulation, the local discrete equations are obtained using moving Kriging interpolation, in a manner similar to the discretization of high-order finite elements. A strain smoothing technique is then applied to the nodal integration of each sub-domain by dividing it into several smoothing cells. Moreover, condensation of degrees of freedom can be introduced into the local discrete equations to improve computational efficiency. The global governing equations of the present method are obtained by assembling all local discrete equations of the sub-domains, following the FEM scheme. The mesh-free properties of the Galerkin method are retained in each sub-domain. Several 2D elastic problems have been solved with the newly proposed method to validate its computational performance. These numerical examples show that the proposed sub-domain smoothed Galerkin method is a robust technique for solving solid mechanics problems, offering high computational efficiency, good accuracy, and good convergence.
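The strain smoothing step admits a compact statement; this is the standard strain-smoothing identity (via the divergence theorem), with $\Omega_C$, $\Gamma_C$, $A_C$ and $\mathbf{n}$ denoting a smoothing cell, its boundary, its area, and the outward normal:

```latex
% Smoothed strain over a smoothing cell: domain integration of strains
% reduces to boundary integration, which keeps nodal integration cheap.
\[
  \tilde{\varepsilon}_{ij}(\mathbf{x}_C)
    = \frac{1}{A_C} \int_{\Omega_C} \varepsilon_{ij}\, d\Omega
    = \frac{1}{2A_C} \oint_{\Gamma_C} \bigl(u_i n_j + u_j n_i\bigr)\, d\Gamma.
\]
```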
Abstract:
The increasing importance and use of infrastructure such as bridges demands more effective structural health monitoring (SHM) systems. SHM addresses damage detection through several methods, such as modal strain energy (MSE) methods. Many of the available MSE methods either have been validated only for limited types of structures, such as beams, or do not perform satisfactorily; they therefore require further improvement and validation for different types of structures. In this study, an MSE method was mathematically improved to quantify structural damage precisely at an early stage of formation. First, the MSE equation was accurately formulated by accounting for the damaged stiffness, and this formulation was then used to derive a more accurate sensitivity matrix. The improved method was verified on two plane structures: a steel truss bridge model and a concrete frame bridge model, representative of short- and medium-span bridges. Two damage scenarios, single- and multiple-damage, were considered for each structure. For each structure, both intact and damaged, modal analysis was performed using STRAND7. The effects of up to 5 per cent noise were also included. The simulated mode shapes and natural frequencies were then imported into a MATLAB code. The results indicate that the improved method converges quickly, agrees well with the numerical assumptions, and requires few computational cycles. It also performs well in the presence of noise. The findings of this study can be extended numerically to 2D infrastructure, particularly short- and medium-span bridges, to detect and quantify damage more accurately. The method is capable of providing proper SHM that facilitates timely maintenance of bridges to minimise the possible loss of lives and property.
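For reference, element modal strain energy and a common damage indicator built from it take roughly the following form; this is a standard formulation, which the paper's damaged-stiffness reformulation and sensitivity matrix refine:

```latex
% Modal strain energy of element i in mode j (K_i: element stiffness
% matrix, phi_j: j-th mode shape), and an intact-vs-damaged indicator
% in which superscript d marks damaged-state mode shapes; beta_i
% significantly above 1 flags candidate damage in element i.
\[
  \mathrm{MSE}_{ij} = \boldsymbol{\phi}_j^{\top} \mathbf{K}_i\, \boldsymbol{\phi}_j,
  \qquad
  \beta_i = \frac{\sum_j \boldsymbol{\phi}_j^{d\top} \mathbf{K}_i\, \boldsymbol{\phi}_j^{d}}
                 {\sum_j \boldsymbol{\phi}_j^{\top} \mathbf{K}_i\, \boldsymbol{\phi}_j}.
\]
```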
Abstract:
This study used a homogeneous water-equivalent model of an electronic portal imaging device (EPID), contoured as a structure in a radiotherapy treatment plan, to produce reference dose images for comparison with in vivo EPID dosimetry images. Head and neck treatments were chosen as the focus of this study, due to the heterogeneous anatomies involved and the consequent difficulty of rapidly obtaining reliable reference dose images by other means. A phantom approximating the size and heterogeneity of a typical neck, with a maximum radiological thickness of 8.5 cm, was constructed for use in this study. This phantom was CT scanned, and a simple treatment including five square test fields and one off-axis IMRT field was planned. To allow the treatment planning system to calculate dose in a model EPID positioned downstream from the phantom at a source-to-detector distance (SDD) of 150 cm, the CT images were padded with air and the phantom’s “body” contour was extended to encompass the EPID contour. Comparison of dose images obtained from treatment planning calculations and from experimental irradiations showed good agreement, with more than 90% of points in all fields passing a gamma evaluation at γ(3%, 3 mm). Similar agreement was achieved when the phantom was overwritten with air in the treatment plan and removed from the experimental beam, suggesting that the water-equivalent EPID model at 150 cm SDD is capable of providing accurate reference images for comparison with clinical IMRT treatment images, for patient anatomies with radiological thicknesses ranging from 0 up to approximately 9 cm. This methodology therefore has the potential to be used for in vivo dosimetry during treatments of tissues in the neck, as well as the oral and nasal cavities, in the head-and-neck region.
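For reference, the γ(3%, 3 mm) criterion used above is the standard gamma index of Low et al., comparing an evaluated dose distribution against a reference one:

```latex
% Gamma index: dose-difference tolerance DeltaD = 3%, distance-to-
% agreement Deltad = 3 mm; a reference point r_r passes when
% gamma(r_r) <= 1 over the evaluated distribution D_e.
\[
  \Gamma(\mathbf{r}_r, \mathbf{r}_e) =
    \sqrt{\frac{\lVert \mathbf{r}_e - \mathbf{r}_r \rVert^2}{\Delta d^2}
        + \frac{\bigl(D_e(\mathbf{r}_e) - D_r(\mathbf{r}_r)\bigr)^2}{\Delta D^2}},
  \qquad
  \gamma(\mathbf{r}_r) = \min_{\mathbf{r}_e} \Gamma(\mathbf{r}_r, \mathbf{r}_e).
\]
```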