126 results for Independent-particle shell model
Abstract:
NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme that has been standardized by IEEE, but until recently its security analysis relied only on heuristic arguments. Recently, Stehlé and Steinfeld showed that a slight variant (which we call pNE) can be proven secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE, called NTRUCCA, that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and that incurs only a constant-factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model based on standard cryptographic assumptions. As an intermediate step, we present a construction of an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
Abstract:
The Fluid–Structure Interaction (FSI) problem is significant in science and engineering and poses challenges for computational mechanics. The coupled Finite Element–Smoothed Particle Hydrodynamics (FE-SPH) model is a robust technique for the simulation of FSI problems. However, two important steps of the coupled FE-SPH model, neighbor searching and contact searching, are extremely time-consuming. The Point-In-Box (PIB) searching algorithm was developed by Swegle to improve the efficiency of searching, but its efficiency can be significantly affected by the distribution of the points (nodes in FEM and particles in SPH). In this paper, a novel Striped-PIB (S-PIB) searching algorithm is proposed to overcome the point-distribution shortcoming of the PIB algorithm, and the two time-consuming steps of neighbor searching and contact searching are integrated into a single searching step. The accuracy and efficiency of the newly developed searching algorithm are studied through efficiency tests and FSI problems. The newly developed model is found to significantly improve computational efficiency and is believed to be a powerful tool for FSI analysis.
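For illustration only, the sketch below shows a generic uniform-grid (cell-binning) neighbour search of the kind such coupled FE-SPH codes depend on; it is not the S-PIB algorithm itself, and all names and parameters are placeholder assumptions.

import numpy as np
from collections import defaultdict

def grid_neighbours(points, radius):
    """Return, for each 2D point, the indices of points within `radius`."""
    cells = (points // radius).astype(int)              # integer cell coordinates
    buckets = defaultdict(list)
    for i, c in enumerate(map(tuple, cells)):
        buckets[c].append(i)
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    neighbours = [[] for _ in range(len(points))]
    for i, c in enumerate(map(tuple, cells)):
        for dx, dy in offsets:                           # scan the 3x3 block of cells
            for j in buckets.get((c[0] + dx, c[1] + dy), []):
                if i != j and np.linalg.norm(points[i] - points[j]) <= radius:
                    neighbours[i].append(j)
    return neighbours

pts = np.random.rand(1000, 2)                            # mixed FE nodes / SPH particles
nbrs = grid_neighbours(pts, radius=0.05)

Binning the points first keeps the pairwise distance checks local, which is the same cost-reduction idea that PIB-style searching exploits.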
Abstract:
Plant tissue has a complex cellular structure, an aggregate of individual cells bonded by the middle lamella. During drying, plant tissue undergoes extreme deformations that are mainly driven by moisture removal and turgor loss. Numerical modelling of this problem is challenging because conventional grid-based techniques such as Finite Element Methods (FEM) and Finite Difference Methods (FDM) suffer from grid-based limitations under such deformations. This work presents a meshfree approach to model and simulate the deformation of plant tissue during drying, and demonstrates the fundamental capability of meshfree methods to handle extreme deformations of multiphase systems. A simplified 2D tissue model is developed by aggregating individual cells while accounting for the stiffness of the middle lamella. Each individual cell is treated as consisting of two main components: cell fluid and cell wall. The cell fluid is modelled using Smoothed Particle Hydrodynamics (SPH) and the cell wall is modelled using a Discrete Element Method (DEM). During drying, moisture removal is accounted for by reducing the cell fluid and wall mass, which causes local shrinkage of cells and eventually leads to tissue-scale shrinkage. The cellular deformations are quantified using several cellular geometrical parameters, and good agreement is observed when compared to experiments on apple tissue. The model is also capable of visually replicating dry tissue structures. The proposed model can be used as a step towards developing complex tissue models that simulate extreme deformations during drying.
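As a minimal ingredient-level sketch (not the authors' full SPH-DEM tissue model), the following computes the standard SPH density summation that a cell-fluid discretisation of this kind relies on; the particle counts, masses and smoothing length below are placeholder assumptions.

import numpy as np

def cubic_spline_w(r, h):
    """Standard 2D cubic-spline kernel with normalisation 10 / (7*pi*h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    """rho_i = sum_j m_j * W(|x_i - x_j|, h) over all fluid particles."""
    rho = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(len(x)):
            rho[i] += m[j] * cubic_spline_w(np.linalg.norm(x[i] - x[j]), h)
    return rho

x = np.random.rand(200, 2) * 1e-4      # particle positions inside one cell (m)
m = np.full(200, 1e-9)                 # particle masses (kg); reduced as moisture is removed
rho = sph_density(x, m, h=1e-5)

Reducing the particle masses m over the drying history is one simple way that moisture removal, and hence cell shrinkage, can enter such a discretisation.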
Abstract:
There is considerable interest internationally in developing product libraries to support the use of BIM. Product library initiatives are driven by national bodies, manufacturers and private companies who see their potential. A major issue with the production and distribution of product information for BIM is that separate library objects need to be produced for all of the different software systems that are going to use the library. This increases the cost of populating product libraries and also increases the difficulty in maintaining consistency between the representations for the different software over time. This paper describes a project which uses “software transformation” technology from the field of software engineering to support the definition of a single generic representation of a product which can then be automatically converted to the format required by receiving software. The paper covers the current state of implementation of the product library, the technology underlying the transformations for the currently supported software and the business model for creating a national library in Australia. This is placed within the context of other current product library systems to highlight the differences. The responsibilities of the various actors involved in supporting the product library are also discussed.
Abstract:
We have isolated a series of sublines of the hormone-dependent MCF-7 human breast cancer cell line after selection, both in vivo and in vitro, for growth in the presence of subphysiological concentrations of estrogens. These sublines represent a model system for studying the processes leading to hormonal autonomy. The cells form growing tumors in ovariectomized athymic nude mice in the absence of estrogen supplementation but retain some responsiveness to estrogen, as determined by stimulation of the rate of tumor growth in vivo and by induction of progesterone receptor. An ovarian-independent but hormone-responsive phenotype may occur early in the natural progression to hormone-independent and unresponsive growth in breast cancer. We observed no change in the affinity, or decrease in the level of expression, of estrogen receptors and progesterone receptors between the sublines and the parental cells. Epidermal growth factor receptors are not overexpressed in the ovarian-independent cells. Thus, altered hormone receptor expression may be a late event in the acquisition of a hormone-independent and unresponsive phenotype. Sublines isolated by in vivo but not in vitro selection are more invasive than the parental cells, both in vivo and across an artificial basement membrane in vitro. Thus, as yet unknown tumor-host interactions may be important in the development of an invasive phenotype. Furthermore, acquisition of the ovarian-independent and invasive phenotypes can occur independently.
Abstract:
Advanced grid-stiffened composite cylindrical shells are widely adopted in advanced structures due to their exceptional mechanical properties. Buckling is a main failure mode of advanced grid-stiffened structures in engineering and calls for increasing attention. In this paper, the buckling response of an advanced grid-stiffened structure is investigated by three different means: an equivalent stiffness model, a finite element model, and a hybrid model (H-model) that combines the equivalent stiffness model with the finite element model. A buckling experiment is carried out on an advanced grid-stiffened structure to validate the different modelling methods. Based on the comparison, the characteristics of the different methods are evaluated independently. It is argued that, by accounting for material defects, the finite element model is a suitable numerical tool for the buckling analysis of advanced grid-stiffened structures.
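For context, equivalent-stiffness (smeared-stiffener) approaches of this kind are usually benchmarked against the classical axial buckling stress of an isotropic thin cylindrical shell; the formula below is that classical result, quoted for orientation rather than taken from the paper:

\[
\sigma_{\mathrm{cr}} \;=\; \frac{E\,h}{R\,\sqrt{3\,(1-\nu^{2})}},
\]

where E is Young's modulus, h the wall thickness, R the shell radius and \nu Poisson's ratio. Smeared-stiffener models replace the membrane and bending stiffnesses by effective values that account for the grid of ribs.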
Abstract:
Purpose Ethnographic studies of cyber attacks typically aim to explain a particular profile of attackers in qualitative terms. The purpose of this paper is to formalise some of these approaches to build a Cyber Attacker Model Profile (CAMP) that can be used to characterise and predict cyber attacks. Design/methodology/approach The paper builds a model using social and economic independent or predictive variables from several Eastern European countries and benchmarks indicators of cybercrime within the Australian financial services system. Findings The paper found a very strong link between perceived corruption and GDP in two distinct groups of countries: corruption in Russia was closely linked to the GDP of Belarus, Moldova and Russia, while corruption in Lithuania was linked to GDP in Estonia, Latvia, Lithuania and Ukraine. At the same time, corruption in Russia and Ukraine were also closely linked. These results support previous research that indicates a strong link between the legitimate economy and the black economy in many countries of Eastern Europe and the Baltic states. The results of the regression analysis suggest that a highly skilled, mobile workforce operating in an environment of high perceived corruption in the target countries is related to increases in cybercrime, even within Australia. It is important to note that the data used for the dependent and independent variables were gathered over a seven-year period that included large economic shocks such as the global financial crisis. Originality/value This is the first paper to use a modelling approach to directly show the relationship between various social, economic and demographic factors in the Baltic states and Eastern Europe and the level of card-skimming and card-not-present fraud in Australia.
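As a schematic of the kind of regression described above (the variable names and data here are hypothetical placeholders, not the paper's dataset), an ordinary least-squares fit could look like:

import numpy as np

rng = np.random.default_rng(0)
n_years = 7
corruption = rng.normal(size=n_years)        # perceived-corruption index (source countries)
gdp = rng.normal(size=n_years)               # GDP indicator (source countries)
mobility = rng.normal(size=n_years)          # skilled-workforce mobility proxy
fraud = 0.8 * corruption + 0.3 * mobility + rng.normal(scale=0.1, size=n_years)  # synthetic target

X = np.column_stack([np.ones(n_years), corruption, gdp, mobility])
beta, *_ = np.linalg.lstsq(X, fraud, rcond=None)   # [intercept, b_corruption, b_gdp, b_mobility]
print(dict(zip(["intercept", "corruption", "gdp", "mobility"], beta.round(3))))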
Abstract:
A global, or averaged, model for complex low-pressure argon discharge plasmas containing dust grains is presented. The model consists of particle and power balance equations taking into account power loss on the dust grains and the discharge wall. The electron energy distribution is determined by a Boltzmann equation. The effects of the dust and the external conditions, such as the input power and neutral gas pressure, on the electron energy distribution, the electron temperature, the electron and ion number densities, and the dust charge are investigated. It is found that the dust subsystem can strongly affect the stationary state of the discharge by dynamically modifying the electron energy distribution, the electron temperature, the creation and loss of the plasma particles, as well as the power deposition. In particular, the power loss to the dust grains can take up a significant portion of the input power, often even exceeding the loss to the wall.
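Schematically, a global (volume-averaged) model of this kind balances particle creation against losses to the wall and to the dust, and the absorbed power against the corresponding sinks; the paper's detailed expressions, including the Boltzmann-equation treatment of the electron energy distribution, are more involved than this sketch:

\[
K_{\mathrm{iz}}\, n_{\mathrm{g}}\, n_{\mathrm{e}}\, V \;=\; \Gamma_{\mathrm{wall}} A_{\mathrm{wall}} \;+\; \Gamma_{\mathrm{dust}} A_{\mathrm{dust}},
\qquad
P_{\mathrm{in}} \;=\; P_{\mathrm{coll}} \;+\; P_{\mathrm{wall}} \;+\; P_{\mathrm{dust}},
\]

where K_iz is the ionization rate coefficient, n_g and n_e the neutral and electron densities, V the plasma volume, \Gamma the ion fluxes to the wall and dust surfaces, and the P terms the collisional, wall and dust power losses.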
Abstract:
The issue of using informative priors for estimation of mixtures at multiple time points is examined. Several different informative priors and an independent prior are compared using samples of actual and simulated aerosol particle size distribution (PSD) data. Measurements of aerosol PSDs refer to the concentration of aerosol particles in terms of their size, which is typically multimodal in nature and collected at frequent time intervals. The use of informative priors is found to better identify component parameters at each time point and more clearly establish patterns in the parameters over time. Some caveats to this finding are discussed.
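One common way such informative priors are set up (shown here only as an illustration; the paper's specific prior choices may differ) is to model the PSD at each time point t as a finite mixture and to centre the prior for the component parameters on the estimates from the previous time point:

\[
p(x \mid \theta_t) \;=\; \sum_{k=1}^{K} w_{t,k}\, \mathcal{N}\!\left(x \mid \mu_{t,k}, \sigma_{t,k}^{2}\right),
\qquad
\mu_{t,k} \sim \mathcal{N}\!\left(\hat{\mu}_{t-1,k}, \tau^{2}\right),
\]

whereas an independent prior would use the same vague distribution, e.g. \mathcal{N}(\mu_{0}, \tau_{0}^{2}), at every time point.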
Abstract:
This paper presents a numerical model for understanding particle transport and deposition in metal foam heat exchangers. Two-dimensional steady and unsteady numerical simulations of a standard single-row metal foam-wrapped tube bundle are performed for different particle size distributions, i.e. uniform and normal distributions. The effects of different particle sizes and fluid inlet velocities on the overall particle transport inside and outside the foam layer are also investigated. It was noted that a simplification made in previously published numerical works, e.g. uniform particle deposition in the foam, is not necessarily accurate, at least for the cases considered here. The results highlight the preferential particle deposition areas both along the tube walls and inside the foam using a particle deposition likelihood matrix developed here. This likelihood matrix is based on three criteria: local particle velocity, time spent in the foam, and volume fraction. It was noted that the particles tend to deposit near both the front and rear stagnation points. The former is explained by the higher momentum and direct exposure of the particles to the foam, while the latter only accommodates small particles, which can be entrained in the recirculation region formed behind the foam-wrapped tubes.
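The sketch below illustrates how a deposition-likelihood score could be assembled from the three criteria named above; the weighting, reference scales and random input fields are hypothetical placeholders, not the paper's calibrated matrix.

import numpy as np

def deposition_likelihood(local_velocity, residence_time, volume_fraction,
                          v_ref=1.0, t_ref=0.1, w=(0.4, 0.3, 0.3)):
    """Score in [0, 1]: low local velocity, long residence time and high local
    particle volume fraction all push the score towards 1 (likely deposition)."""
    s_v = np.clip(1.0 - local_velocity / v_ref, 0.0, 1.0)   # slower -> more likely
    s_t = np.clip(residence_time / t_ref, 0.0, 1.0)         # longer -> more likely
    s_f = volume_fraction / volume_fraction.max()            # denser -> more likely
    return w[0] * s_v + w[1] * s_t + w[2] * s_f

# Example: score every cell of a (ny, nx) field around a foam-wrapped tube.
ny, nx = 50, 100
rng = np.random.default_rng(1)
likelihood = deposition_likelihood(rng.uniform(0.0, 2.0, (ny, nx)),
                                   rng.uniform(0.0, 0.2, (ny, nx)),
                                   rng.uniform(0.0, 1e-3, (ny, nx)))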
Abstract:
The melting temperature of a nanoscaled particle is known to decrease as the curvature of the solid-melt interface increases. This relationship is most often modelled by a Gibbs–Thomson law, with the decrease in melting temperature proposed to be a product of the curvature of the solid-melt interface and the surface tension. Such a law must break down for sufficiently small particles, since the curvature becomes singular in the limit that the particle radius vanishes. Furthermore, the use of this law as a boundary condition for a Stefan-type continuum model is problematic because it leads to a physically unrealistic form of mathematical blow-up at a finite particle radius. By numerical simulation, we show that the inclusion of nonequilibrium interface kinetics in the Gibbs–Thomson law regularises the continuum model, so that the mathematical blow-up is suppressed. As a result, the solution continues until complete melting, and the corresponding melting temperature remains finite for all time. The results of the adjusted model are consistent with experimental findings of abrupt melting of nanoscaled particles. This small-particle regime appears to be closely related to the problem of melting a superheated particle.
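In one common convention (quoted here for orientation; constants and signs vary between formulations), the Gibbs–Thomson condition for a spherical particle of radius R, augmented with linear interface kinetics, reads

\[
T_{\mathrm{int}} \;=\; T_m\!\left(1 - \frac{2\sigma}{\rho_s L\, R}\right) \;-\; \frac{1}{\mu}\,\frac{\mathrm{d}R}{\mathrm{d}t},
\]

where \sigma is the surface tension, \rho_s the solid density, L the latent heat and \mu a kinetic coefficient; since dR/dt < 0 during melting, the kinetic term raises the interface temperature above the curvature-depressed equilibrium value.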
Abstract:
The along-track stereo images of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, with 15 m resolution, were used to generate a Digital Elevation Model (DEM) of an area with low, near Mean Sea Level (MSL) elevation in Johor, Malaysia. The absolute DEM was generated using the Rational Polynomial Coefficient (RPC) model, run in ENVI 4.8 software. To generate the absolute DEM, 60 Ground Control Points (GCPs) with vertical accuracy of less than about 10 meters were extracted from the topographic map of the study area. The assessment was carried out on the uncorrected and corrected DEMs using dozens of Independent Check Points (ICPs). The uncorrected DEM showed an RMSEz of ±26.43 meters, which decreased to an RMSEz of ±16.49 meters for the corrected DEM after post-processing. Overall, the corrected DEM of the ASTER stereo images met expectations.
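The vertical accuracy figures quoted above follow the standard definition of RMSEz over the n Independent Check Points:

\[
\mathrm{RMSE}_z \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z_i^{\mathrm{DEM}} - z_i^{\mathrm{ICP}}\right)^{2}},
\]

where z_i^DEM is the DEM elevation and z_i^ICP the reference elevation at the i-th check point.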
Abstract:
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises for example when the repository covers multiple variants of the same processes or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, meaning the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
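A minimal sketch of the clustering step (shown only to make the setup concrete: the pairwise distances below are random placeholders rather than real process-model fragment distances, and the eps/min_samples values are arbitrary):

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
n = 40                                   # number of process-model fragments
d = rng.uniform(0.0, 1.0, (n, n))
dist = (d + d.T) / 2.0                   # symmetric pairwise distance matrix
np.fill_diagonal(dist, 0.0)              # zero self-distance

labels = DBSCAN(eps=0.3, min_samples=3, metric="precomputed").fit_predict(dist)
# label -1 marks noise; each non-negative label is a candidate cluster of approximate clones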
Abstract:
A generalised bidding model is developed to calculate a bidder's expected profit and the auctioneer's expected revenue/payment for both a General Independent Value and an Independent Private Value (IPV) kmth-price sealed-bid auction (in which the mth bidder wins at the kth bid payment) using a linear (affine) mark-up function. The Common Value (CV) assumption, and high-bid and low-bid symmetric and asymmetric First Price Auctions and Second Price Auctions, are included as special cases. The optimal n-bidder symmetric analytical results are then provided for the uniform IPV and CV models in equilibrium. Final comments concern implications, the assumptions involved and prospects for further research.
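Schematically (and only as an illustration of the set-up, not the paper's general kmth-price formulation), with an affine mark-up b = \alpha + \beta c applied to a bidder's cost or value c, the bidder's expected profit in the first-price special case is

\[
\mathbb{E}[\pi(b)] \;=\; (b - c)\,\Pr[\text{win} \mid b],
\qquad b = \alpha + \beta c,
\]

and the optimisation is over the mark-up parameters \alpha and \beta; the general model replaces the winning rule and payment with the kmth-price ones.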
Abstract:
There has been an increasing focus on the development of test methods to evaluate the durability performance of concrete. This paper contributes to this focus by presenting a study that evaluates the effect of water-accessible porosity and oven-dry unit weight on the resistance of both normal and lightweight concrete to chloride-ion penetration. Based on the experimental results and regression analyses, empirical models are established to correlate the total charge passed and the chloride migration coefficient with basic properties of concrete such as water-accessible porosity, oven-dry unit weight, and compressive strength. These equations can be broadly applied to both normal and lightweight aggregate concretes. The models were also validated by an independent set of experimental results from two different concrete mixtures, and they provide a very good estimate of the concrete's durability performance with respect to resistance to chloride-ion penetration.
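As a purely illustrative template (the actual functional forms and fitted coefficients are those reported in the paper and may be nonlinear), such empirical models regress the two durability indicators on the basic concrete properties:

\[
Q \;=\; \alpha_0 + \alpha_1\,\phi + \alpha_2\,\gamma_{\mathrm{od}} + \alpha_3\,f_c,
\qquad
D_{\mathrm{m}} \;=\; \beta_0 + \beta_1\,\phi + \beta_2\,\gamma_{\mathrm{od}} + \beta_3\,f_c,
\]

where Q is the total charge passed, D_m the chloride migration coefficient, \phi the water-accessible porosity, \gamma_od the oven-dry unit weight and f_c the compressive strength.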