947 results for Nonnegative sine polynomial


Relevance:

10.00%

Publisher:

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
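As a rough illustration of the pipeline described above (feature extraction, linear randomization, quantization, binary encoding), here is a minimal sketch. It is not the dissertation's algorithm: the block-mean feature extractor, the key value and all parameters are illustrative placeholders.

```python
import numpy as np

def robust_hash(image, key=42, n_bits=32):
    """Minimal sketch: features -> keyed randomization -> quantize -> encode."""
    # 1. Feature extraction (placeholder: 8x8 block means, tolerant of the
    #    minor input changes the abstract describes)
    h, w = image.shape
    features = image.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3)).ravel()
    # 2. Randomization: key-seeded linear compressive projection; linear, as
    #    the abstract notes most schemes are, and compressive/non-invertible
    #    because n_bits < features.size
    proj = np.random.default_rng(key).normal(size=(n_bits, features.size))
    randomized = proj @ features
    # 3. Quantization and binary encoding: a one-bit median-threshold
    #    quantizer, the kind of learned threshold the work analyses
    return (randomized > np.median(randomized)).astype(np.uint8)

# Small perturbations of the input should usually leave the hash unchanged:
img = np.random.default_rng(0).random((64, 64))
assert np.array_equal(robust_hash(img), robust_hash(img + 1e-6))
```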

Relevance:

10.00%

Publisher:

Abstract:

Information and communications technologies are a significant component of the healthcare domain, and electronic health records play a major role within it. As a result, it is important that they are accepted en masse by healthcare professionals. How healthcare professionals perceive the usefulness of electronic health records, and their attitudes towards them, have been shown to have significant effects on their overall acceptance. This paper investigates the role of perceived usefulness and attitude on the intention to use electronic health records by future healthcare professionals, using polynomial regression with response surface analysis. Results show that the relationship is more complex than predicted in prior research. The paper concludes that the predictive properties of the above determinants must be further investigated to clearly understand their role in predicting the intention to use electronic health records and in designing systems that are better adopted by the healthcare professionals of the future.
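For readers unfamiliar with the method named above, a hedged sketch of quadratic polynomial regression as used in response surface analysis follows. The variable names (PU for perceived usefulness, ATT for attitude) and the synthetic data are illustrative stand-ins, not the paper's measures or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
PU, ATT = rng.normal(size=(2, 200))  # two standardized predictors
intention = 0.5 * PU + 0.3 * ATT - 0.2 * (PU - ATT) ** 2 + rng.normal(0, 0.1, 200)

# intention ~ b0 + b1*PU + b2*ATT + b3*PU^2 + b4*PU*ATT + b5*ATT^2
X = np.column_stack([np.ones_like(PU), PU, ATT, PU**2, PU * ATT, ATT**2])
b, *_ = np.linalg.lstsq(X, intention, rcond=None)
# Response surface analysis then inspects the surface these six coefficients
# define, e.g. its slope and curvature along the congruence line PU = ATT.
```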

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a polynomial time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation, which removes local neighborhood configurations that cannot occur in any pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, together with a detailed complexity and accuracy analysis. In the case of time complexity, it is shown that the average case time complexity of the algorithm is \Theta(n^2), while the best and worst cases are \Omega(n) and O(n^3) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average case performance.
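The paper's algorithm targets graph cellular automata; as a simpler illustration of the underlying idea of eliminating locally infeasible neighborhoods, here is a linear-time pre-image test for an elementary one-dimensional CA with open boundaries. This sketch is an analogue, not the paper's algorithm.

```python
def has_preimage(config, rule):
    """Return False iff config is a Garden-of-Eden configuration.

    config: iterable of output cell values (0/1)
    rule:   dict mapping a neighborhood (a, b, c) to the successor of b
    """
    # Feasible (left, centre) windows of a candidate pre-image; windows
    # inconsistent with the observed output are eliminated as we scan.
    states = {(a, b) for a in (0, 1) for b in (0, 1)}
    for out in config:
        states = {(b, c)
                  for (a, b) in states
                  for c in (0, 1)
                  if rule[(a, b, c)] == out}
        if not states:
            return False  # no local neighborhood survives: no pre-image
    return True

# Example with Wolfram rule 110:
rule110 = {(a, b, c): (110 >> (a * 4 + b * 2 + c)) & 1
           for a in (0, 1) for b in (0, 1) for c in (0, 1)}
print(has_preimage([1, 0, 1, 1, 0], rule110))
```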

Relevance:

10.00%

Publisher:

Abstract:

An analytical evaluation of the higher ac harmonic components derived from large amplitude Fourier transformed voltammetry is provided for the reversible oxidation of ferrocenemethanol (FcMeOH) and the oxidation of uric acid by an EEC mechanism in a pH 7.4 phosphate buffer at a glassy carbon (GC) electrode. The small background current in the analytically optimal fifth harmonic is predominantly attributed to faradaic current associated with the presence of electroactive functional groups on the GC electrode surface, rather than to the capacitive current that dominates the background in the dc and the first three ac harmonics. The detection limits for the dc and the first to fifth harmonic ac components are 1.9, 5.89, 2.1, 2.5, 0.8 and 0.5 µM for FcMeOH, respectively, using a sine wave modulation of 100 mV at 21.46 Hz and a dc sweep rate of 111.76 mV s−1. Analytical performance then progressively deteriorates in the sixth and higher harmonics. For the determination of uric acid, the capacitive background current was enhanced and the reproducibility lowered by the presence of surface-active uric acid, but the rapid overall 2e− rather than 1e− electron transfer process gives rise to a significantly enhanced fifth harmonic faradaic current, enabling a detection limit of 0.3 µM, similar to that reported using chemically modified electrodes. Resolution of overlapping voltammetric signals for a mixture of uric acid and dopamine is also achieved using the higher fourth or fifth harmonic components, under very low background current conditions. The use of the fourth and fifth harmonics, which exhibit highly favorable faradaic-to-background (noise) current ratios, should therefore be considered in analytical applications where the electron transfer rate is fast.
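A generic numpy sketch of how individual harmonics are isolated in Fourier transformed voltammetry: transform the current record, keep a band around n times the modulation frequency, and invert. The 21.46 Hz modulation follows the abstract; the sampling rate and band half-width are illustrative assumptions, and this is not the authors' processing code.

```python
import numpy as np

def nth_harmonic(current, fs, f_mod=21.46, n=5, half_width=5.0):
    """Time-domain n-th harmonic component of a sampled current record.

    current: 1-D array of current samples; fs: sampling rate in Hz.
    """
    spectrum = np.fft.rfft(current)
    freqs = np.fft.rfftfreq(current.size, d=1.0 / fs)
    keep = np.abs(freqs - n * f_mod) <= half_width  # band around n*f_mod
    return np.fft.irfft(np.where(keep, spectrum, 0.0), n=current.size)
```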

Relevance:

10.00%

Publisher:

Abstract:

Purpose: Contrast adaptation has been speculated to be an error signal for emmetropization. Myopic children exhibit higher contrast adaptation than emmetropic children. This study aimed to determine whether contrast adaptation varies with the type of text viewed by emmetropic and myopic young adults. Methods: Baseline contrast sensitivity was determined in 25 emmetropic and 25 spectacle-corrected myopic young adults for 0.5, 1.2, 2.7, 4.4 and 6.2 cycles per degree (cpd) horizontal sine wave gratings. The adults spent periods looking at a 6.2 cpd high-contrast horizontal grating and reading lines of English and Chinese text (the texts comprised 1.2 cpd row and 6 cpd stroke frequencies). The effects of these near tasks on contrast sensitivity were determined, with decreases in sensitivity indicating contrast adaptation. Results: Contrast adaptation was affected by the near task (F(2,672) = 43.0; P < 0.001). Adaptation was greater for the grating task (0.13 ± 0.17 log unit, averaged across all frequencies) than for the reading tasks, but there was no significant difference between the two reading tasks (English 0.05 ± 0.13 log unit versus Chinese 0.04 ± 0.13 log unit). The myopic group showed significantly greater adaptation (by 0.04, 0.04 and 0.05 log units for the English, Chinese and grating tasks, respectively) than the emmetropic group (F(1,48) = 5.0; P = 0.03). Conclusions: In young adults, reading Chinese text induced contrast adaptation similar to that from reading English text. Myopes exhibited greater contrast adaptation than emmetropes. Contrast adaptation, independent of text type, might be associated with myopia development.

Relevance:

10.00%

Publisher:

Abstract:

The irradiance profile around the receiver tube (RT) of a parabolic trough collector (PTC) is a key outcome of optical performance that affects the overall energy performance of the collector. Thermal performance evaluation of the RT relies on the appropriate determination of the irradiance profile. This article explains a technique in which empirical equations were developed to calculate the local irradiance as a function of angular location around the RT of a standard PTC, using a rigorously verified Monte Carlo ray tracing model. A large range of test conditions, including daily normal insolation, spectral selective coatings and glass envelope conditions, were selected from the data published by Dudley et al. [1]. The R² values of the equations are excellent, varying between 0.9857 and 0.9999. Therefore, these equations can be used confidently to produce realistic non-uniform boundary heat flux profiles around the RT at normal incidence for conjugate heat transfer analyses of the collector. The required inputs to the equations are the daily normal insolation and the spectral selective properties of the collector components. Since the equations are polynomial functions, data processing software can be employed to calculate the flux profile very easily and quickly. The ultimate goal of this research is to help make concentrating solar power technology cost-competitive with conventional energy technology by facilitating ongoing research.
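A sketch of how such polynomial fits are applied downstream: evaluate the local irradiance at each angular location around the RT to build the non-uniform boundary heat flux profile for a conjugate heat transfer model. The coefficients below are illustrative placeholders only, not values fitted in the article.

```python
import numpy as np

# Hypothetical cubic fit for one test condition (highest power first);
# the article derives separate fits per insolation/coating/envelope case.
coeffs = [-6.0e-4, 0.0, 1.5e1, 4.0e4]    # local flux in W/m^2 vs angle in deg

theta = np.linspace(-180.0, 180.0, 361)  # angular location around the RT
flux = np.polyval(coeffs, theta)         # boundary heat flux profile
```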

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To investigate the differences between, and variations across time in, corneal topography and ocular wavefront aberrations in young Singaporean myopes and emmetropes. Methods: We used a videokeratoscope and wavefront sensor to measure the ocular surface topography and the wavefront aberrations of the total eye optics in the morning, mid-day and late afternoon on two separate days. Topography data were used to derive the corneal surface wavefront aberrations. Both the corneal and total wavefronts were analysed up to the 4th radial order of the Zernike polynomial expansion, and were centred on the entrance pupil (5 mm). The participants included 12 young progressing myopes, 13 young stable myopes and 15 young age-matched emmetropes. Results: For all subjects considered together there were significant changes in some of the aberration terms across the day, such as spherical aberration (Z_4^0) and vertical coma (Z_3^{-1}) (repeated measures ANOVA, p < 0.05). The magnitude of positive spherical aberration (Z_4^0) was significantly lower in the progressing myope group than in the stable myope (p = 0.04) and emmetrope (p = 0.02) groups. There were also significant interactions between refractive group and time of day for with/against-the-rule astigmatism (Z_2^2). Significantly lower 4th order RMS of ocular wavefront aberrations was found in the progressing myope group compared with the stable myopes and emmetropes (p < 0.01). Conclusions: These differences and variations in the corneal and total aberrations may have significance for our understanding of refractive error development and for clinical applications requiring accurate wavefront measurements.
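A minimal sketch of the 4th-order RMS metric compared across groups: with ANSI-normalised Zernike coefficients, the RMS of a radial order is the root-sum-of-squares of its terms. The coefficient values below are hypothetical, not the study's measurements.

```python
import math

# (n, m) -> coefficient in microns over a 5 mm pupil (illustrative values)
zernike = {(4, 0): 0.05, (4, 2): 0.01, (4, -2): 0.02,
           (4, 4): 0.01, (4, -4): 0.00}

# Root-sum-of-squares of all 4th radial order terms
rms_4th = math.sqrt(sum(c**2 for (n, _), c in zernike.items() if n == 4))
```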

Relevance:

10.00%

Publisher:

Abstract:

In hypercompetition, firms that are agile, sensing and responding better to customer requirements, tend to be more successful and achieve supernormal profits. In spite of the widely accepted importance of customer agility, research on this construct is limited. The limited research has also focused predominantly on the firm's perspective of agility. However, we propose that customers are better positioned to determine how well a firm is responding to their requirements (i.e., a firm's customer agility). Taking the customers' standpoint, we address the issue of sense-and-respond alignment from two perspectives: matching and mediating. Based on data collected from customers in a field study, we tested hypotheses pertaining to the two methods of alignment using polynomial regression and response surface methodology. The results provide a good explanation for the role of both forms of alignment on customer satisfaction. Implications for research and practice are discussed.

Relevance:

10.00%

Publisher:

Abstract:

We construct two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map. Selective-identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in an adaptive-identity attack the adversary is allowed to choose this identity adaptively. Our first system—BB1—is based on the well-studied decisional bilinear Diffie–Hellman assumption, and extends naturally to systems with hierarchical identities, or HIBE. Our second system—BB2—is based on a stronger assumption which we call the Bilinear Diffie–Hellman Inversion assumption and provides another approach to building IBE systems. Our first system, BB1, is very versatile and well suited for practical applications: the basic hierarchical construction can be efficiently secured against chosen-ciphertext attacks, and further extended to support efficient non-interactive threshold decryption, among others, all without using random oracles. Both systems, BB1 and BB2, can be modified generically to provide “full” IBE security (i.e., against adaptive-identity attacks), either using random oracles, or in the standard model at the expense of a non-polynomial but easy-to-compensate security reduction.

Relevance:

10.00%

Publisher:

Abstract:

Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n^2) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
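The conversion subprotocol is the paper's contribution and is not reproduced here; the sketch below only shows the two target representations over a prime field (the field and names are illustrative). Intuitively, once inputs are multiplicatively shared, each party can evaluate a monomial by purely local multiplication, which is why the cost scales with the monomial count ℓ.

```python
import math
import secrets

P = 2**61 - 1  # prime field, an illustrative choice

def additive_shares(x, n):
    """n shares summing to x (mod P)."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % P]

def multiplicative_shares(x, n):
    """n shares multiplying to x (mod P); x must be non-zero, matching the
    paper's restriction of inputs to non-zero field elements."""
    parts = [1 + secrets.randbelow(P - 1) for _ in range(n - 1)]  # non-zero
    inv = pow(math.prod(parts), P - 2, P)  # Fermat inverse of the product
    return parts + [x * inv % P]
```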

Relevance:

10.00%

Publisher:

Abstract:

Recently, several classes of permutation polynomials of the form (x^2 + x + δ)^s + x over F_{2^m} have been discovered. They are related to Kloosterman sums. In this paper, the permutation behavior of polynomials of the form (x^p − x + δ)^s + L(x) over F_{p^m} is investigated, where L(x) is a linearized polynomial with coefficients in F_p. Six classes of permutation polynomials over F_{2^m} are derived. Three classes of permutation polynomials over F_{3^m} are also presented.
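For concreteness, the permutation behavior of such maps can be checked by brute force on a small field. Below is a sketch over F_{2^4} with modulus x^4 + x + 1; the ranges of δ and s are illustrative search parameters, not the classes derived in the paper.

```python
M = 4
MOD = 0b10011          # x^4 + x + 1
FIELD = range(1 << M)  # field elements as 4-bit integers

def gmul(a, b):
    """Carry-less multiplication reduced modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        e >>= 1
        a = gmul(a, a)
    return r

def f(x, delta, s):
    # (x^2 + x + delta)^s + x, with + being XOR in characteristic 2
    return gpow(gmul(x, x) ^ x ^ delta, s) ^ x

def is_permutation(delta, s):
    return len({f(x, delta, s) for x in FIELD}) == 1 << M

good = [(d, s) for d in FIELD for s in range(1, 16) if is_permutation(d, s)]
print(len(good), good[:5])
```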

Relevance:

10.00%

Publisher:

Abstract:

We study the multicast stream authentication problem when an opponent can drop, reorder and introduce data packets into the communication channel. In such a model, packet overhead and computing efficiency are two parameters to be taken into account when designing a multicast stream protocol. In this paper, we propose to use two families of erasure codes to deal with this problem, namely rateless codes and maximum distance separable codes. Our constructions will have the following advantages. First, our packet overhead will be small. Second, the number of signature verifications to be performed at the receiver is O(1). Third, every receiver will be able to recover all the original data packets emitted by the sender despite losses and injections occurring during the transmission of information.
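A sketch of the MDS property the construction leverages: a Reed-Solomon-style code over a prime field in which any k of the n encoded symbols recover the data, so receivers tolerate dropped packets. The field and the packet-to-symbol mapping are simplifications for illustration; the signing layer is omitted.

```python
P = 2**31 - 1  # prime field

def lagrange_at(points, x0):
    """Evaluate the unique degree < k polynomial through `points` at x0 (mod P)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    # Systematic encoding: data at points 1..k, parity at points k+1..n
    pts = list(enumerate(data, start=1))
    return pts + [(x, lagrange_at(pts, x)) for x in range(len(data) + 1, n + 1)]

def decode(received, k):
    # Any k surviving (point, value) pairs reconstruct the data symbols
    return [lagrange_at(received[:k], i) for i in range(1, k + 1)]

packets = encode([10, 20, 30], n=6)
survivors = [packets[1], packets[3], packets[5]]  # any 3 of 6 suffice
assert decode(survivors, k=3) == [10, 20, 30]
```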

Relevance:

10.00%

Publisher:

Abstract:

Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt '04 to the distributed setting. By using a secret sharing scheme, we provide a distributed solution of the FNP private matching, called distributed private matching. In our distributed private matching scheme, we use a polynomial to represent one party's dataset, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to the distributed set intersection and the cardinality of the intersection, and we further show how to apply distributed private matching to compute the distributed subset relation. Our work extends the primitives of private matching and set intersection by Freedman et al. Our distributed construction may be of great value when the dataset is outsourced and its privacy is the main concern. In such cases, our distributed solutions keep the utility of those set operations while the dataset privacy is not compromised. Compared with previous work, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
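A sketch of the FNP-style representation reused here: a set becomes the monic polynomial whose roots are its elements, so p(x) = 0 iff x is in the set. The homomorphic encryption of the coefficients, and their distribution to servers via secret sharing, which make this private, are omitted; the field is an illustrative choice.

```python
P = 2**61 - 1

def set_to_poly(elems):
    """Coefficients (lowest degree first) of prod_{e in elems} (x - e) mod P."""
    coeffs = [1]
    for e in elems:
        shifted = [0] + coeffs                          # x * p(x)
        scaled = [(-e * c) % P for c in coeffs] + [0]   # -e * p(x)
        coeffs = [(a + b) % P for a, b in zip(shifted, scaled)]
    return coeffs

def contains(coeffs, x):
    # Horner-style check could be used; direct evaluation for clarity
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P == 0

poly = set_to_poly([3, 7, 11])
assert contains(poly, 7) and not contains(poly, 5)
```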

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a method for the estimation of thrust model parameters of uninhabited airborne systems using specific flight tests. Particular tests are proposed to simplify the estimation. The proposed estimation method is based on three steps. The first step uses a regression model in which the thrust is assumed constant; this allows us to obtain biased initial estimates of the aerodynamic coefficients of the surge model. In the second step, a robust nonlinear state estimator is implemented using the initial parameter estimates, and the model is augmented by treating the thrust as a random walk. In the third step, the estimate of the thrust obtained by the observer is used to fit a polynomial model in terms of the propeller advance ratio. We consider a numerical example based on Monte Carlo simulations to quantify the sampling properties of the proposed estimator under realistic flight conditions.
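A sketch of the third step: fit a polynomial thrust model in the propeller advance ratio J to the thrust estimates produced by the observer. The data below are synthetic stand-ins, and the polynomial order is a modelling choice, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.linspace(0.1, 0.9, 60)  # advance ratio samples from the flight test
# Hypothetical observer output: thrust estimates with measurement noise
T_hat = 40 * (1 - J) * (1 + 0.3 * J) + rng.normal(0, 0.5, J.size)

coeffs = np.polyfit(J, T_hat, deg=2)  # least-squares quadratic fit
thrust_model = np.poly1d(coeffs)      # T(J), usable in the surge model
```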

Relevance:

10.00%

Publisher:

Abstract:

The relationship between temperature and mortality is non-linear, and the effect estimates depend on the threshold temperatures selected. However, little is known about whether threshold temperatures differ with age or cause of death in the Southern Hemisphere. We fitted polynomial distributed lag non-linear models to assess the threshold temperatures for mortality for all ages (Dall), ages 15–64 (D15-64), 65–84 (D65-84) and ≥85 years (D85+), and for respiratory (RD) and cardiovascular diseases (CVD) in Brisbane, Australia, 1996–2004. We examined both hot and cold thresholds, with lags of up to 15 days for cold effects and 3 days for hot effects. Results show that for the current day, the cold threshold was 20°C and the hot threshold was 28°C for the Dall, D15-64 and D85+ groups. The cold threshold was higher (23°C) for the D65-84 group and lower (21°C) for the CVD group. The hot threshold was higher (29°C) for the D65-84 group and lower (27°C) for the RD group. Compared to the current day, for the cold effects at lags of up to 15 days, the threshold was lower for the D15-64 group and higher for the D65-84, D85+, RD and CVD groups; for the hot effects at 3-day lags, the threshold was higher for the D15-64 group and lower for the D65-84 and RD groups. Temperature thresholds appeared to differ with age and death category. The elderly, and deaths from RD and CVD, were more sensitive to temperature stress than the adult group. These findings may have implications for the assessment of temperature-related mortality and the development of weather/health warning systems.
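A minimal sketch of the threshold ("hockey-stick") temperature terms such models estimate effects from: excess risk accrues only beyond each threshold. The values are the abstract's current-day thresholds for all-age mortality; the lag structure of the full distributed lag non-linear model is omitted.

```python
import numpy as np

COLD, HOT = 20.0, 28.0  # current-day thresholds (degrees Celsius), per the abstract

def threshold_terms(temp):
    """Two regressors for a mortality model from a daily temperature series."""
    cold_excess = np.maximum(COLD - temp, 0.0)  # degrees below the cold threshold
    hot_excess = np.maximum(temp - HOT, 0.0)    # degrees above the hot threshold
    return np.column_stack([cold_excess, hot_excess])
```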