956 results for elliptic curve based chameleon hashing
Abstract:
There exists a maximum in the products of the saturation properties T(p_c − p) and p(T_c − T) in the vapour-liquid coexistence region for all liquids. The magnitudes of those maxima in reduced coordinates provide insight into the molecular complexity of the liquid. It is shown that the gradients of the vapour pressure curve at the temperatures where those maxima occur are given directly by simple relations involving the reduced pressure and temperature at that point. A linear relation between the maximum values of those products, of the form [p_r(1 − T_r)]_max = 0.2095 − 0.2415 [T_r(1 − p_r)]_max, has been found based on a study of 55 liquids ranging from non-polar monatomic cryogenic liquids to polar high-boiling-point liquids.
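As a quick illustration, the reported linear correlation can be evaluated directly; this is only a sketch, the coefficients are the ones quoted in the abstract and the sample input value is hypothetical.

```python
# Sketch: evaluate the reported linear correlation between the two reduced-property maxima.
# The 0.2095 and 0.2415 coefficients come from the abstract; the input value is hypothetical.

def pr_one_minus_tr_max(tr_one_minus_pr_max: float) -> float:
    """Predict [p_r(1 - T_r)]_max from [T_r(1 - p_r)]_max via the reported fit."""
    return 0.2095 - 0.2415 * tr_one_minus_pr_max

print(pr_one_minus_tr_max(0.25))  # hypothetical input, in reduced units
```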
Abstract:
A method is presented to establish the theoretical dispersion curve used in the inverse analysis of Rayleigh wave propagation. The proposed formulation is similar to one available in the literature and is based on a finite difference formulation of the governing partial differential equations of motion. The method is framed in such a way that it ultimately leads to an eigenvalue problem whose solution can be obtained quite easily with respect to the unknown frequency. The maximum absolute value of the vertical displacement at the ground surface forms the basis for deciding the governing mode of propagation. With the proposed technique, numerical solutions were generated for a variety of problems, comprising a number of different layers, associated with both ground and pavements. The results are found to be generally satisfactory. (C) 2011 Elsevier Ltd. All rights reserved.
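A minimal sketch of the kind of generalized eigenvalue problem such a finite-difference discretization produces is shown below; the small matrices are hypothetical placeholders, not the paper's formulation.

```python
# Sketch: solving a generalized eigenvalue problem of the form K x = omega^2 M x,
# as arises from a finite-difference discretization of the equations of motion.
# The small matrices below are hypothetical placeholders, not the paper's formulation.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])   # stiffness-like matrix
M = np.diag([1.0, 1.0, 0.5])          # mass-like matrix

omega_sq, modes = eigh(K, M)          # squared eigenfrequencies and mode shapes
print(np.sqrt(omega_sq))
```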
Abstract:
Modeling the performance behavior of parallel applications to predict execution times for larger problem sizes and numbers of processors has been an active area of research for several years. Existing curve-fitting strategies for performance modeling use data from experiments conducted under uniform loading conditions; hence the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
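A minimal sketch of fitting a rational-polynomial model to measured execution times follows; the model form, processor counts, and timing data are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: fit a simple rational-polynomial model t(p) ~ (a + b*p) / (1 + c*p)
# to measured execution times. Model form and data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def rational_model(p, a, b, c):
    return (a + b * p) / (1.0 + c * p)

procs = np.array([2, 4, 8, 16, 32], dtype=float)    # processor counts
times = np.array([120.0, 65.0, 38.0, 24.0, 18.0])   # measured runtimes (s), hypothetical

params, _ = curve_fit(rational_model, procs, times, p0=[100.0, 1.0, 0.1])
print(rational_model(64.0, *params))                # predicted runtime on 64 processors
```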
Abstract:
The highest levels of security can be achieved through the use of more than one type of cryptographic algorithm for each security function. In this paper, the REDEFINE polymorphic architecture is presented as an architecture framework that can optimally support a varied set of crypto algorithms without sacrificing high performance. The presented solution is capable of accelerating the advanced encryption standard (AES) and elliptic curve cryptography (ECC) protocols, while still supporting different flavors of these algorithms as well as different underlying finite field sizes. A compelling feature of this cryptosystem is its ability to provide acceleration support for new field sizes as well as new (possibly proprietary) cryptographic algorithms adopted after the cryptosystem is deployed.
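As a hedged illustration of the kind of elliptic-curve arithmetic such an accelerator targets, here is a toy double-and-add scalar multiplication over a tiny prime field; the curve parameters are made up for illustration and this is not the REDEFINE implementation.

```python
# Toy sketch of elliptic-curve scalar multiplication (double-and-add) over a tiny
# prime field; illustrative only, not the REDEFINE hardware implementation.
P_MOD, A, B = 97, 2, 3          # toy curve y^2 = x^3 + 2x + 3 over F_97

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                        # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

print(ec_mul(13, (3, 6)))       # 13 * (3, 6) on the toy curve
```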
Abstract:
Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross sections of a cylinder, with the background absorption and reduced scattering coefficients chosen as μ_a^b = 0.01 mm⁻¹ and μ_s'^b = 1.0 mm⁻¹, respectively. We also assume μ_a = 0.02 mm⁻¹ within the inhomogeneity (for the single-inhomogeneity case) and μ_a = 0.02 and 0.03 mm⁻¹ (for the two-inhomogeneity case). The reconstruction results from the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown μ_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed data. Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also shown to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
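A minimal sketch of the circular-harmonics (Fourier) parameterization of a closed boundary curve used in such shape-based reconstructions is given below; the coefficient values are hypothetical.

```python
# Sketch: reconstruct a closed 2-D boundary r(theta) from circular-harmonic
# (Fourier) coefficients, as in shape-based parameterizations. The coefficient
# values below are hypothetical.
import numpy as np

def boundary_from_ch(a0, a, b, n_points=256):
    """r(theta) = a0 + sum_k [a_k cos(k theta) + b_k sin(k theta)]."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = np.full_like(theta, a0)
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        r += ak * np.cos(k * theta) + bk * np.sin(k * theta)
    return r * np.cos(theta), r * np.sin(theta)    # Cartesian boundary points

x, y = boundary_from_ch(a0=5.0, a=[0.8, 0.0], b=[0.0, 0.4])   # hypothetical shape
```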
Abstract:
We present a simple route for the synthesis of Y2O3 for both photoluminescent (PL) and thermoluminescent (TL) applications. We show that by simply switching the fuel from ethylenediaminetetraacetic acid (EDTA) to its disodium derivative (Na-2-EDTA), we obtain a better photoluminescent material. On the other hand, use of EDTA aids in the formation of Y2O3 that is a better thermoluminescent material. In both cases pure cubic nano-Y2O3 is obtained. For both material systems, structural characterization, photoluminescence, thermoluminescence, and absorbance spectra are reported and analyzed. Use of EDTA results in nano-Y2O3 with a crystallite size of ~10 nm. Crystallinity improves, and the crystallite size is larger (~30 nm), when Na-2-EDTA is used. The TL response of Y2O3 nanophosphors prepared with both fuels is examined using UV radiation. Samples prepared with EDTA show a well-resolved glow curve at 140 °C, while samples prepared with Na-2-EDTA show a glow curve at 155 °C. The effect of UV exposure time on the TL characteristics is investigated. The TL kinetic parameters are also calculated using the glow curve shape method. Results indicate that the TL behavior of both samples follows a second-order kinetic model. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
A micro-newton static force sensor is presented here as a packaged product. The sensor, which is based on the mechanics of deformable objects, consists of a compliant mechanism that amplifies the displacement caused by the force that is to be measured. The output displacement, captured using a digital microscope and analyzed using image processing techniques, is used to calculate the force from a precalibrated force-displacement curve. Images are scanned in real time at a frequency of 15 frames per second and sampled at around half the scanning frequency. The sensor was built, packaged, calibrated, and tested. It has simulated and measured stiffness values of 2.60 N/m and 2.57 N/m, respectively. The smallest force it can reliably measure in the presence of noise is about 2 μN over a range of 1.4 mN. The off-the-shelf digital microscope aside, all of its other components are purely mechanical; they are inexpensive and can be easily made using simple machines. Another highlight of the sensor is that its movable and delicate components are easily replaceable. The sensor can be used in aqueous environments as it does not use electric, magnetic, thermal, or any other fields. Currently, it can only measure static forces or forces that vary at less than 1 Hz, because its response time and bandwidth are limited by the speed of imaging with a camera. With the universal serial bus (USB) connection of its digital microscope, a custom-developed graphical user interface (GUI), and related software, the sensor is fully developed as a readily usable product.
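A minimal sketch of the displacement-to-force conversion such a calibration implies is given below, assuming a linear calibration; the stiffness value is the measured one quoted in the abstract, and the displacement reading is hypothetical.

```python
# Sketch: convert a measured output displacement to force using a precalibrated
# linear force-displacement relation. The stiffness is the measured value quoted
# in the abstract; the displacement reading below is hypothetical.
STIFFNESS_N_PER_M = 2.57            # measured stiffness of the compliant mechanism

def force_from_displacement(displacement_m: float) -> float:
    """F = k * x for a linear calibration; returns force in newtons."""
    return STIFFNESS_N_PER_M * displacement_m

print(force_from_displacement(5e-6))   # about 12.85 micronewtons for a 5 um reading
```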
Abstract:
Several papers have studied fault attacks on computing a pairing value e(P, Q), where P is a public point and Q is a secret point. In this paper, we observe that these attacks are in fact effective only on a small number of pairing-based protocols, and even then only when the protocols are implemented with specific symmetric pairings. We demonstrate the effectiveness of the fault attacks on a public-key encryption scheme, an identity-based encryption scheme, and an oblivious transfer protocol when implemented with a symmetric pairing derived from a supersingular elliptic curve with embedding degree 2.
Abstract:
The information-theoretic approach to security entails harnessing the correlated randomness available in nature to establish security. It uses tools from information theory and coding and yields provable security, even against an adversary with unbounded computational power. However, the feasibility of this approach in practice depends on the development of efficiently implementable schemes. In this paper, we review a special class of practical schemes for information-theoretic security that are based on 2-universal hash families. Specific cases of secret key agreement and wiretap coding are considered, and general themes are identified. The scheme presented for wiretap coding is modular and can be implemented easily by including an extra preprocessing layer over the existing transmission codes.
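As a small illustration of the building block named in the abstract, here is a standard 2-universal hash family of the affine-modular form; the parameters and toy input are assumptions for illustration, not the paper's construction.

```python
# Sketch: a standard 2-universal hash family h_{a,b}(x) = ((a*x + b) mod p) mod m,
# of the kind used in privacy amplification / key extraction. Parameters below are
# illustrative; p must be a prime larger than the input universe.
import random

P = (1 << 61) - 1          # a Mersenne prime, a convenient modulus

def draw_hash(m):
    """Draw a random member of the 2-universal family mapping integers to [0, m)."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = draw_hash(m=256)       # extract one byte from a (toy) shared secret
print(h(0xDEADBEEF))
```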
Abstract:
Reliable turbulent channel flow databases at several Reynolds numbers have been established by large eddy simulation (LES), with two of them validated by comparison with typical direct numerical simulation (DNS) results. Furthermore, statistics such as velocity profiles, turbulent intensities, and shear stress were obtained, as well as the temporal and spatial structure of turbulent bursts. Based on the available LES databases, conditional sampling methods are used to detect the structures of burst events. A method to determine the grouping parameter from the probability distribution function (pdf) curve of the time separation between ejection events is proposed to avoid errors in the detected results; thus, the dependence of the average burst period on thresholds is considerably weakened. Meanwhile, the average burst-to-bed area ratios are detected. It is found that the Reynolds number has little effect on the burst period and the burst-to-bed area ratio.
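A minimal sketch of one common conditional-sampling detector, the quadrant method for ejection (Q2) events, is shown below; it illustrates the general idea only, and the signals and hole threshold are toy assumptions rather than the paper's exact detector.

```python
# Sketch: detect ejection (Q2) events with the standard quadrant method
# (u' < 0, v' > 0, |u'v'| above a "hole" threshold times u_rms * v_rms).
# Illustrative only; not the paper's exact detection scheme.
import numpy as np

def ejection_mask(u_fluct, v_fluct, hole=1.0):
    """Boolean mask marking samples that qualify as Q2 (ejection) events."""
    threshold = hole * u_fluct.std() * v_fluct.std()
    return (u_fluct < 0) & (v_fluct > 0) & (np.abs(u_fluct * v_fluct) > threshold)

rng = np.random.default_rng(0)
u_p, v_p = rng.standard_normal(10_000), rng.standard_normal(10_000)  # toy fluctuation signals
print(ejection_mask(u_p, v_p, hole=1.25).sum(), "ejection samples detected")
```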
Abstract:
This paper first presents a stochastic structural model to describe the random geometrical features of rock and soil aggregates. The stochastic structural model uses the mixture ratio, rock size, and rock shape to construct the microstructures of aggregates, and introduces two types of structural elements (block element and jointed element) and three types of material elements (rock element, soil element, and weaker jointed element) for this microstructure. Then, a continuum-based discrete element method is used to study the deformation and failure mechanism of rock and soil aggregates through a series of loading tests. It is found that the stress-strain curve of rock and soil aggregates is nonlinear, and that failure usually initiates at the weaker jointed elements. Finally, factors such as the mixture ratio, rock size, and rock shape are studied in detail. The numerical results are in good agreement with in situ tests. Therefore, the current model is effective for simulating the mechanical behavior of rock and soil aggregates.
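A heavily simplified sketch of generating a random aggregate microstructure to a target mixture ratio follows; it uses circular inclusions only and is purely illustrative, since the paper's stochastic structural model also controls rock shape and the element types described above.

```python
# Sketch: place non-overlapping circular "rock" inclusions in a rectangular domain
# until a target mixture ratio (rock area fraction) is reached. Purely illustrative.
import math
import random

def generate_rocks(domain=(100.0, 100.0), target_ratio=0.3, r_range=(2.0, 8.0)):
    width, height = domain
    rocks, rock_area = [], 0.0
    while rock_area / (width * height) < target_ratio:
        r = random.uniform(*r_range)
        x, y = random.uniform(r, width - r), random.uniform(r, height - r)
        if all((x - xo) ** 2 + (y - yo) ** 2 > (r + ro) ** 2 for xo, yo, ro in rocks):
            rocks.append((x, y, r))
            rock_area += math.pi * r * r
    return rocks

print(len(generate_rocks()), "non-overlapping rock inclusions placed")
```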
Abstract:
The stability of a soil slope is usually analyzed by limit equilibrium methods, in which the identification of the critical slip surface is of principal importance. In this study, a spline curve in conjunction with a genetic algorithm is used to search for the critical slip surface, and Spencer's method is employed to calculate the factor of safety. Three examples are presented to illustrate the reliability and efficiency of the method. Slip surfaces defined by a series of straight lines are compared with those defined by spline curves, and the results indicate that the use of spline curves yields better results for a given number of slip surface nodal points than the approximation using straight-line segments.
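A toy sketch of the overall search structure, a spline through nodal points evolved by a simple genetic-style loop, is given below; the objective function is a hypothetical placeholder standing in for Spencer's method, and all node positions and GA settings are assumptions.

```python
# Sketch: represent a trial slip surface by a spline through a few nodal points and
# search nodal depths with a toy evolutionary loop. The factor-of-safety function
# here is a hypothetical placeholder standing in for Spencer's method.
import numpy as np
from scipy.interpolate import CubicSpline

X_NODES = np.linspace(0.0, 30.0, 6)              # fixed horizontal node positions (m)

def factor_of_safety(depths):
    """Placeholder objective; in practice Spencer's method would be evaluated here."""
    surface = CubicSpline(X_NODES, depths)        # smooth trial slip surface through the nodes
    return 1.0 + float(np.mean(np.abs(surface(X_NODES, 1))))   # toy: penalize steep surfaces

rng = np.random.default_rng(1)
population = [rng.uniform(1.0, 10.0, X_NODES.size) for _ in range(40)]
for _ in range(100):                              # simple elitist evolution loop
    population.sort(key=factor_of_safety)
    parents = population[:10]
    children = [p + rng.normal(0.0, 0.5, p.size) for p in parents for _ in range(3)]
    population = parents + children

best = min(population, key=factor_of_safety)
print("minimum factor of safety (toy objective):", round(factor_of_safety(best), 3))
```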
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users can log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the use of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles in which defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two software packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
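A minimal sketch of how damping can be extracted from a free-vibration decay record via the logarithmic decrement is shown below; the synthetic signal is illustrative only and is not output from the STEEL/ETABS comparisons.

```python
# Sketch: estimate the damping ratio from successive peaks of a free-vibration
# decay using the logarithmic decrement. The synthetic signal is illustrative,
# not output from the STEEL/ETABS comparisons.
import numpy as np

t = np.linspace(0.0, 10.0, 5000)
zeta_true, wn = 0.03, 2.0 * np.pi          # assumed 3% damping, 1 Hz natural frequency
x = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)

peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
delta = np.log(x[peaks[0]] / x[peaks[1]])          # logarithmic decrement between peaks
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)
print(f"estimated damping ratio: {zeta_est:.3f}")
```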
Following this, a final study was conducted on Hall's U20 structure [1], in which the structure was analyzed in all three software packages and their results compared. The pushover curves from each package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
The von Bertalanffy growth function (VBGF) is used for length-based analysis of growth and mortality patterns in the management of fisheries. However, certain fish have growth patterns that the VBGF may not be able to describe adequately, e.g., Acanthurus lineatus in Samoa. In such cases a two-phase VBGF may be a useful approach.
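For reference, the standard single-phase VBGF can be evaluated with a few lines of code; the parameter values below are hypothetical and chosen only for illustration.

```python
# Sketch: the standard von Bertalanffy growth function
# L(t) = L_inf * (1 - exp(-K * (t - t0))). Parameter values are hypothetical.
import math

def vbgf_length(t, l_inf=35.0, k=0.6, t0=-0.2):
    """Length at age t under the single-phase VBGF (hypothetical parameters)."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

print([round(vbgf_length(age), 1) for age in range(0, 6)])   # lengths at ages 0-5
```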
Abstract:
An assessment of the total biomass of shortbelly rockfish (Sebastes jordani) off the central California coast is presented that is based on a spatially extensive but temporally restricted ichthyoplankton survey conducted during the 1991 spawning season. Contemporaneous samples of adults were obtained by trawl sampling in the study region. Daily larval production (7.56 × 10^10 larvae/d) and the larval mortality rate (Z=0.11/d) during the cruise were estimated from a larval “catch curve,” wherein the logarithm of total age-specific larval abundance was regressed against larval age. For this analysis, larval age compositions at each of the 150 sample sites were determined by examination of otolith microstructure from subsampled larvae (n=2203), which were weighted by the polygonal Sette-Ahlstrom area surrounding each station. Female population weight-specific fecundity was estimated through a life table analysis that incorporated sex-specific differences in adult growth rate, female maturity, fecundity, and natural mortality (M). The resulting statistic (102.17 larvae/g) was insensitive to errors in estimating M and to the pattern of recruitment. Together, the two analyses indicated that a total biomass equal to 1366 metric tons (t)/d of age-1+ shortbelly rockfish (sexes combined) was needed to account for the observed level of spawning output during the cruise. Given the long-term seasonal distribution of spawning activity in the study area, as elucidated from a retrospective examination of California Cooperative Oceanic Fisheries Investigation (CalCOFI) ichthyoplankton samples from 1952 to 1984, the “daily” total biomass was expanded to an annual total of 67,392 t. An attempt to account for all sources of error in the derivation of this estimate was made by application of the delta-method, which yielded a coefficient of variation of 19%. The relatively high precision of this larval production method, and the rapidity with which an absolute biomass estimate can be obtained, establish that, for some species of rockfish (Sebastes spp.), it is an attractive alternative to traditional age-structured stock assessments.
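A minimal sketch of the catch-curve step, regressing log age-specific abundance on age so that the slope gives −Z, is shown below; the abundance series is synthetic, constructed only to illustrate the regression.

```python
# Sketch: estimate a daily larval mortality rate Z from a "catch curve" by regressing
# log age-specific abundance on age; the slope equals -Z. Abundances are synthetic.
import numpy as np

ages = np.arange(1, 16, dtype=float)            # larval ages (days)
abundance = 8.0e10 * np.exp(-0.11 * ages)       # synthetic exponential decline in abundance
slope, intercept = np.polyfit(ages, np.log(abundance), 1)
print(f"estimated Z = {-slope:.3f} per day")    # ~0.11/d for this synthetic series
```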