972 results for Strictly hyperbolic polynomial


Relevance: 10.00%

Abstract:

Purpose: To investigate the differences between, and variations across time in, corneal topography and ocular wavefront aberrations in young Singaporean myopes and emmetropes. Methods: We used a videokeratoscope and wavefront sensor to measure the ocular surface topography and the wavefront aberrations of the total eye optics in the morning, mid-day and late afternoon on two separate days. Topography data were used to derive the corneal surface wavefront aberrations. Both the corneal and total wavefronts were analysed up to the 4th radial order of the Zernike polynomial expansion and were centred on the entrance pupil (5 mm). The participants included 12 young progressing myopes, 13 young stable myopes and 15 young age-matched emmetropes. Results: For all subjects considered together, there were significant changes in some of the aberration terms across the day, such as spherical aberration and vertical coma (repeated measures ANOVA, p<0.05). The magnitude of positive spherical aberration was significantly lower in the progressing myope group than in the stable myope group (p=0.04) and the emmetrope group (p=0.02). There were also significant interactions between refractive group and time of day for with/against-the-rule astigmatism. Significantly lower 4th-order RMS of ocular wavefront aberrations was found in the progressing myope group compared with the stable myopes and emmetropes (p<0.01). Conclusions: These differences and variations in the corneal and total aberrations may have significance for our understanding of refractive error development and for clinical applications requiring accurate wavefront measurements.
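As a hedged aside, the "4th-order RMS" reported above is, for orthonormal Zernike polynomials, just the root-sum-square of the coefficients of radial order 4. A minimal Python sketch, assuming coefficients are supplied in OSA/ANSI single-index order; the coefficient values are made up for the example:

```python
import math

def radial_order(j: int) -> int:
    """Radial order n of the OSA/ANSI single-index Zernike term j."""
    n = 0
    while (n + 1) * (n + 2) // 2 <= j:
        n += 1
    return n

def rms_of_order(coeffs, order: int) -> float:
    """RMS contribution of all Zernike terms of one radial order.

    For orthonormal Zernike polynomials the wavefront RMS is the
    root-sum-square of the coefficients (microns here).
    """
    return math.sqrt(sum(c * c for j, c in enumerate(coeffs)
                         if radial_order(j) == order))

coeffs = [0.0] * 15          # hypothetical terms up to the 4th radial order
coeffs[12] = 0.08            # Z(4, 0): spherical aberration
coeffs[7] = -0.05            # Z(3, -1): vertical coma
print(f"4th-order RMS: {rms_of_order(coeffs, 4):.3f} um")
```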

Relevance: 10.00%

Abstract:

In hypercompetition, agile firms, those that sense and respond better to customer requirements, tend to be more successful and achieve supernormal profits. In spite of the widely accepted importance of customer agility, research on this construct is limited. The limited research has also predominantly focussed on the firm's perspective of agility. However, we propose that customers are better positioned to determine how well a firm is responding to their requirements (i.e., a firm's customer agility). Taking the customers' standpoint, we address the issue of sense-and-respond alignment from two perspectives: matching and mediating. Based on data collected from customers in a field study, we tested hypotheses pertaining to the two methods of alignment using polynomial regression and response surface methodology. The results provide a good explanation of the role of both forms of alignment in customer satisfaction. Implications for research and practice are discussed.
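A minimal sketch of the polynomial regression and response surface machinery mentioned above, on synthetic data; the variable names (sensing, responding, satisfaction) are illustrative stand-ins for the study's constructs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
S = rng.uniform(1, 7, n)                     # perceived sensing
R = rng.uniform(1, 7, n)                     # perceived responding
# Toy outcome: satisfaction peaks when sensing and responding align.
sat = 5 - 0.4 * (S - R) ** 2 + rng.normal(0, 0.3, n)

# Second-order polynomial model:
# sat = b0 + b1*S + b2*R + b3*S^2 + b4*S*R + b5*R^2 + e
X = np.column_stack([np.ones(n), S, R, S**2, S * R, R**2])
(b0, b1, b2, b3, b4, b5), *_ = np.linalg.lstsq(X, sat, rcond=None)

# Standard response-surface features along the lines of congruence
# (S = R) and incongruence (S = -R):
print("slope along S=R:      a1 =", b1 + b2)
print("curvature along S=R:  a2 =", b3 + b4 + b5)
print("slope along S=-R:     a3 =", b1 - b2)
print("curvature along S=-R: a4 =", b3 - b4 + b5)
```

A matching (fit) effect shows up as negative curvature along the incongruence line (a4 < 0), which is what the toy data above are constructed to produce.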

Relevance: 10.00%

Abstract:

We construct two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map. Selective-identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in an adaptive-identity attack the adversary is allowed to choose this identity adaptively. Our first system—BB1—is based on the well studied decisional bilinear Diffie–Hellman assumption, and extends naturally to systems with hierarchical identities, or HIBE. Our second system—BB2—is based on a stronger assumption which we call the Bilinear Diffie–Hellman Inversion assumption and provides another approach to building IBE systems. Our first system, BB1, is very versatile and well suited for practical applications: the basic hierarchical construction can be efficiently secured against chosen-ciphertext attacks, and further extended to support efficient non-interactive threshold decryption, among others, all without using random oracles. Both systems, BB1 and BB2, can be modified generically to provide “full” IBE security (i.e., against adaptive-identity attacks), either using random oracles, or in the standard model at the expense of a non-polynomial but easy-to-compensate security reduction.
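To make the committed-identity restriction concrete, here is a hedged sketch of the selective-identity game flow; `setup`, `extract`, and `encrypt` stand for an arbitrary IBE scheme's algorithms (placeholders, not the BB1/BB2 constructions):

```python
import secrets

def selective_id_game(setup, extract, encrypt, adversary) -> bool:
    """One run of the selective-identity (IND-sID-CPA style) game.

    The adversary commits to its target identity *before* seeing the
    public parameters; in the adaptive-identity game it could instead
    pick the target after its key-extraction queries.
    """
    id_star = adversary.commit_identity()        # up-front commitment
    params, master_key = setup()

    def extract_oracle(identity):
        assert identity != id_star, "no private key for the target"
        return extract(master_key, identity)

    m0, m1 = adversary.choose_messages(params, extract_oracle)
    b = secrets.randbits(1)
    challenge = encrypt(params, id_star, (m0, m1)[b])
    return adversary.guess(challenge, extract_oracle) == b
```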

Relevance: 10.00%

Abstract:

Classical results on unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants such that no set of t < n/2 participants learns any additional information beyond what they can derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup ('semi-ideal') model, in which the participants are supplied with some auxiliary information (which is random and independent of the participants' inputs) ahead of the protocol execution (such information can be purchased as a "commodity" well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field into a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ·n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
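As a hedged aside, the two sharing forms the conversion subprotocol moves between are simple to state in code; the Mersenne prime modulus and function names below are illustrative, not the paper's notation:

```python
import secrets
from functools import reduce

P = 2**61 - 1  # a prime modulus; any prime field works

def share_additive(secret: int, n: int) -> list[int]:
    """n shares that sum to `secret` mod P; any n-1 of them look random."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def share_multiplicative(secret: int, n: int) -> list[int]:
    """n non-zero shares whose product is `secret` mod P.

    The product form explains the restriction to non-zero inputs:
    zero has no multiplicative sharing of this kind.
    """
    assert secret % P != 0
    shares = [secrets.randbelow(P - 1) + 1 for _ in range(n - 1)]
    partial = reduce(lambda a, b: a * b % P, shares, 1)
    shares.append(secret * pow(partial, P - 2, P) % P)  # Fermat inverse
    return shares

a = share_additive(42, 5)
print(reduce(lambda x, y: (x + y) % P, a, 0))   # 42
m = share_multiplicative(42, 5)
print(reduce(lambda x, y: x * y % P, m, 1))     # 42
```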

Relevance: 10.00%

Abstract:

Recently, several classes of permutation polynomials of the form (x^2 + x + δ)^s + x over F_{2^m} have been discovered. They are related to Kloosterman sums. In this paper, the permutation behavior of polynomials of the form (x^p − x + δ)^s + L(x) over F_{p^m} is investigated, where L(x) is a linearized polynomial with coefficients in F_p. Six classes of permutation polynomials over F_{2^m} are derived. Three classes of permutation polynomials over F_{3^m} are also presented.
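A hedged brute-force sketch of checking the permutation property on a small field: it implements GF(2^3) with the irreducible polynomial x^3 + x + 1 (an arbitrary choice for illustration) and tests whether x -> (x^2 + x + δ)^s + x is a bijection:

```python
M = 3              # GF(2^3): field elements are 3-bit integers 0..7
IRRED = 0b1011     # x^3 + x + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication reduced modulo the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= IRRED
    return r

def gf_pow(a: int, e: int) -> int:
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def is_permutation(delta: int, s: int) -> bool:
    """Does x -> (x^2 + x + delta)^s + x permute the field?

    Addition in characteristic 2 is XOR, so '+' becomes '^'.
    """
    images = {gf_pow(gf_mul(x, x) ^ x ^ delta, s) ^ x
              for x in range(1 << M)}
    return len(images) == 1 << M

for delta in range(1 << M):
    good = [s for s in range(1, 1 << M) if is_permutation(delta, s)]
    print(f"delta={delta}: permutation for s in {good}")
```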

Relevance: 10.00%

Abstract:

We study the multicast stream authentication problem when an opponent can drop, reorder and inject data packets into the communication channel. In such a model, packet overhead and computing efficiency are two parameters to be taken into account when designing a multicast stream protocol. In this paper, we propose to use two families of erasure codes to deal with this problem, namely rateless codes and maximum distance separable codes. Our constructions have the following advantages. First, the packet overhead is small. Second, the number of signature verifications to be performed at the receiver is O(1). Third, every receiver is able to recover all the original data packets emitted by the sender despite the losses and injections that occur during transmission.
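As a toy illustration of the erasure-coding idea (deliberately not the paper's rateless or general MDS constructions), the simplest maximum distance separable code is a single XOR parity packet, which lets a receiver rebuild any one lost packet out of n+1 sent:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets: list[bytes]) -> bytes:
    """XOR of all equal-length packets: an [n+1, n] single-parity MDS code."""
    return reduce(xor_bytes, packets)

def recover_missing(received: dict[int, bytes], parity: bytes, n: int):
    """Rebuild the single missing packet from the others plus parity."""
    missing = next(i for i in range(n) if i not in received)
    return missing, reduce(xor_bytes, received.values(), parity)

packets = [b"pkt0....", b"pkt1....", b"pkt2...."]
parity = add_parity(packets)
# Packet 1 is dropped in transit:
print(recover_missing({0: packets[0], 2: packets[2]}, parity, n=3))
# -> (1, b'pkt1....')
```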

Relevance: 10.00%

Abstract:

Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt'04 to the distributed setting. Using a secret sharing scheme, we provide a distributed solution to the FNP private matching problem, called distributed private matching. As in FNP, we use a polynomial to represent one party's dataset and then distribute the polynomial to multiple servers. We extend our solution to distributed set intersection and the cardinality of the intersection, and we further show how to apply distributed private matching to compute the distributed subset relation. Our work extends the private matching and set intersection primitives of Freedman et al. Our distributed construction may be of great value when the dataset is outsourced and its privacy is the main concern; in such cases, our distributed solutions retain the utility of these set operations without compromising dataset privacy. Compared with previous work, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
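The FNP polynomial representation referenced above is easy to show in the clear: a set becomes the roots of a polynomial, so membership shows up as a zero evaluation (the real protocols evaluate this under homomorphic encryption, and the distributed scheme additionally shares the coefficients across servers). The prime modulus below is illustrative:

```python
P = 2**31 - 1  # illustrative prime modulus

def set_polynomial(dataset: list[int]) -> list[int]:
    """Coefficients (low degree first) of f(x) = prod_{a in set} (x - a) mod P."""
    coeffs = [1]
    for a in dataset:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] = (new[i + 1] + c) % P   # x * f(x)
            new[i] = (new[i] - a * c) % P       # -a * f(x)
        coeffs = new
    return coeffs

def evaluate(coeffs: list[int], y: int) -> int:
    acc = 0
    for c in reversed(coeffs):                  # Horner's rule
        acc = (acc * y + c) % P
    return acc

f = set_polynomial([3, 17, 42])
print(evaluate(f, 17))   # 0 -> 17 is in the set
print(evaluate(f, 5))    # non-zero -> 5 is not
```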

Relevance: 10.00%

Abstract:

The finite element method (FEM) relies on an approximate function fitted to a governing equation, minimizing the residual error in the integral sense to generate solutions to boundary value problems (nodal solutions). Because of this, FEM does not simultaneously deliver accurate displacement and force solutions both at the nodes and along an element, especially under element loads, which are ubiquitous. If the displacement and force solutions are strictly confined to an element's or member's ends (the nodal response), structural safety along the element (member) is inevitably ignored, which can hinder the design of a structure for both serviceability and ultimate limit states. Although the continuous element deflection and force solutions can be transformed into discrete nodal solutions by mesh refinement of an element (member), this workaround hinders effective and efficient structural assessment as well as whole-domain accuracy for the structural safety of a structure. To this end, this paper presents an effective, robust and innovative approach to generating accurate nodal and element solutions in both the displacement and force fields; its salient and unique feature is its versatility in accounting for accurate linear and second-order elastic displacement and force solutions continuously along an element as well as at its nodes. The significance of this paper lies in shifting from nodal responses alone (robust global system analysis) to both nodal and element responses (sophisticated element formulation).
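A hedged one-dimensional illustration of the nodal-versus-interior gap described above: for -u'' = 1 with linear elements, the Galerkin solution happens to be exact at the nodes yet misses the true solution inside each element under the distributed load. The problem and mesh are made up for the example:

```python
import numpy as np

# Solve -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact u(x) = x(1 - x)/2.
n_el = 4
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = nodes[1] - nodes[0]

K = np.zeros((n_el + 1, n_el + 1))   # global stiffness matrix
F = np.zeros(n_el + 1)               # consistent load vector
for e in range(n_el):
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    F[e:e + 2] += np.array([0.5, 0.5]) * h   # integral of f * shape functions

u = np.zeros(n_el + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])   # Dirichlet BCs

exact = nodes * (1 - nodes) / 2
print("max nodal error:   ", np.abs(u - exact).max())   # ~ machine zero
mid = (nodes[:-1] + nodes[1:]) / 2
u_mid = (u[:-1] + u[1:]) / 2                            # FE interpolation
print("max midpoint error:", np.abs(u_mid - mid * (1 - mid) / 2).max())  # O(h^2)
```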

Relevance: 10.00%

Abstract:

The last few years have brought an increasing interest in the chemistry of the interstellar and circumstellar environs. Many of the molecular species discovered in remote galactic regions have been dubbed 'non-terrestrial' because of their unique structures (Thaddeus et al., 1993). These findings have provided a challenge to chemists in many differing fields to attempt to generate these unusual species in the laboratory. Of particular recent interest have been the unsaturated hydrocarbon families, CnH and CnH2, which have been pursued by a number of diverse methodologies. A wide range of heterocumulenes, including CnO, HCnO, CnN, HCnN, CnS, HCnS, CnSi and HCnSi, have also provided intriguing targets for laboratory experiments. Strictly, the term cumulene refers to a class of compounds that possess a series of adjacent double bonds, with allene representing the simplest example (H2C=C=CH2). However, for many of the non-terrestrial molecules presented here, the carbon chain cannot be described in terms of a single simple valence structure, and so we use the terms cumulene and heterocumulene in a more general sense: to describe molecular species that contain an unsaturated polycarbon chain. Mass spectrometry has proved an invaluable tool in the quest for interstellar cumulenes and heterocumulenes in the laboratory: it has the ability, in its many forms, to (i) generate charged analogs of these species in the gas phase, (ii) probe their connectivity, ion chemistry and thermochemistry, and (iii) in some cases, elucidate the neutrals themselves. Here, we discuss the progress of these studies to date. (C) 1999 John Wiley & Sons, Inc.

Relevance: 10.00%

Abstract:

This paper presents a method for estimating the thrust model parameters of uninhabited airborne systems using specific flight tests. Particular tests are proposed to simplify the estimation. The proposed estimation method is based on three steps. The first step uses a regression model in which the thrust is assumed constant; this yields biased initial estimates of the aerodynamic coefficients of the surge model. In the second step, a robust nonlinear state estimator is implemented using the initial parameter estimates, and the model is augmented by treating the thrust as a random walk. In the third step, the thrust estimate obtained by the observer is used to fit a polynomial model in terms of the propeller advance ratio. We consider a numerical example based on Monte Carlo simulations to quantify the sampling properties of the proposed estimator under realistic flight conditions.
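A hedged sketch of the third step only, fitting a polynomial thrust model against propeller advance ratio by least squares; the quadratic form and the synthetic observer output are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observer output: thrust estimates T at advance ratios J.
J = np.linspace(0.2, 0.9, 60)
T_true = 40.0 * (1.0 - 0.6 * J - 0.8 * J**2)    # toy thrust curve
T_est = T_true + rng.normal(0.0, 0.5, J.size)   # state-estimator noise

# Fit T(J) = c0 + c1*J + c2*J^2 (np.polyfit returns highest degree first).
c2, c1, c0 = np.polyfit(J, T_est, deg=2)
print(f"T(J) ~ {c0:.2f} + {c1:.2f} J + {c2:.2f} J^2")
```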

Relevance: 10.00%

Abstract:

This paper presents a framework for the joint design of a motion controller and a control allocation strategy for dynamic positioning of marine vehicles. The key aspect of the proposed designs is a systematic approach to dealing with actuator saturation and informing the motion controller about it. The proposed system uses a mapping that translates the actuator constraint sets into constraint sets at the motion-controller level. Hence, while the motion controller addresses the constraints, the control allocation algorithm can solve an unconstrained optimisation problem. The constrained control design is approached using a multivariable anti-windup strategy for strictly proper controllers, which is applicable to the implementation of PI- and PID-type motion controllers.
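A hedged single-input sketch of the idea: a PI controller with back-calculation anti-windup, one common way to inform the controller about saturation (the paper's multivariable scheme for strictly proper controllers is more general). Gains, limits and the toy plant are illustrative:

```python
def simulate(t_end=10.0, dt=0.01, setpoint=1.0,
             kp=2.0, ki=1.5, kb=4.0, u_min=-1.2, u_max=1.2):
    """PI with back-calculation anti-windup on the plant y' = -y + u."""
    y, integ = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        u_raw = kp * e + ki * integ
        u = min(max(u_raw, u_min), u_max)   # actuator constraint set
        # Back-calculation: the saturation excess drains the integrator,
        # so it stops winding up while the actuator sits at its limit.
        integ += (e + kb * (u - u_raw)) * dt
        y += (-y + u) * dt                  # first-order plant step
    return y

print(simulate())   # settles near the setpoint despite saturation
```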

Relevance: 10.00%

Abstract:

The relationship between temperature and mortality is non-linear, and the effect estimates depend on the threshold temperatures selected. However, little is known about whether threshold temperatures differ with age or cause of death in the Southern Hemisphere. We used polynomial distributed lag non-linear models to assess the threshold temperatures for mortality at all ages (Dall), ages 15-64 (D15-64), 65-84 (D65-84) and ≥85 years (D85+), and from respiratory (RD) and cardiovascular (CVD) diseases in Brisbane, Australia, 1996-2004. We examined both hot and cold thresholds, with lags of up to 15 days for cold effects and 3 days for hot effects. Results show that for the current day, the cold threshold was 20°C and the hot threshold was 28°C for the Dall, D15-64 and D85+ groups. The cold threshold was higher (23°C) for the D65-84 group and lower (21°C) for the CVD group. The hot threshold was higher (29°C) for the D65-84 group and lower (27°C) for the RD group. Compared with the current day, for cold effects at up to 15-day lags the threshold was lower for the D15-64 group and higher for the D65-84, D85+, RD and CVD groups, while for hot effects at 3-day lags the threshold was higher for the D15-64 group and lower for the D65-84 and RD groups. Temperature thresholds thus appeared to differ with age and death category. The elderly, and deaths from RD and CVD, were more sensitive to temperature stress than the adult group. These findings may have implications for the assessment of temperature-related mortality and the development of weather/health warning systems.
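A hedged, heavily simplified sketch of threshold estimation: a piecewise-linear ("hockey-stick") fit of log mortality with separate cold and hot excess terms, scanning candidate thresholds on synthetic data (the paper's distributed lag non-linear models are considerably richer):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily temperature and deaths with a V-shaped risk profile.
T = rng.uniform(10, 35, 1500)
log_rate = 3.0 + 0.04 * np.maximum(20 - T, 0) + 0.08 * np.maximum(T - 28, 0)
deaths = rng.poisson(np.exp(log_rate))

def sse(cold_thr: float, hot_thr: float) -> float:
    """Residual of regressing log-deaths on cold and heat excess terms."""
    X = np.column_stack([np.ones_like(T),
                         np.maximum(cold_thr - T, 0),   # cold excess
                         np.maximum(T - hot_thr, 0)])   # heat excess
    _, res, *_ = np.linalg.lstsq(X, np.log(deaths + 0.5), rcond=None)
    return res[0]

grid = [(c, h) for c in range(15, 26) for h in range(26, 33)]
print("estimated (cold, hot) thresholds:", min(grid, key=lambda ch: sse(*ch)))
# -> close to (20, 28)
```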

Relevance: 10.00%

Abstract:

Information and communications technologies are a significant component of the healthcare domain, and electronic health records play a major role in it. It is therefore important that they are accepted en masse by healthcare professionals. Healthcare professionals' perceived usefulness of electronic health records, and their attitudes towards them, have been shown to have significant effects on overall acceptance in many healthcare systems around the world. This paper investigates the role of perceived usefulness and attitude in the intention of future healthcare professionals to use electronic health records, using polynomial regression with response surface analysis. Results show that the relationships between these variables are more complex than predicted in prior research. The paper concludes that the properties of the above determinants must be further investigated to clearly understand (i) their role in predicting the intention to use electronic health records, and (ii) how to design systems that are better adopted by the healthcare professionals of the future.

Relevance: 10.00%

Abstract:

We first classify the state-of-the-art stream authentication schemes for the multicast environment, grouping them into signing and MAC approaches. A new approach to authenticating digital streams using threshold techniques is then introduced. Its main advantages are tolerance of packet loss, up to a threshold number, and minimal space overhead. It is most suitable for multicast applications running over lossy, unreliable communication channels that must, at the same time, preserve the security requirements. We use linear equations based on Lagrange polynomial interpolation together with combinatorial design methods.
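A hedged sketch of the Lagrange-interpolation building block: a value is spread over n packets so that any k of them suffice to recover it, which is what gives tolerance to packet loss up to a threshold. The prime modulus and packetization are illustrative:

```python
import secrets

P = 2**31 - 1  # illustrative prime field

def split(value: int, k: int, n: int) -> list[tuple[int, int]]:
    """Encode `value` into n points of a random degree-(k-1) polynomial.

    Any k points determine the polynomial, hence the value; up to
    n - k packets may be lost.
    """
    coeffs = [value] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any k received points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = split(123456789, k=3, n=5)
print(recover(shares[:3]))   # 123456789, from any 3 of the 5
print(recover(shares[2:]))   # also 123456789
```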

Relevance: 10.00%

Abstract:

Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds Nr. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt'00, and show their inefficiency. We then introduce a new method called XSL that exploits the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in Nr, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always costs more than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.
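To make the "overdefined system" hypothesis concrete, here is a hedged sketch that counts the linearly independent GF(2) relations of degree at most 2 holding with probability 1 between the input and output bits of a toy 3-bit S-box (x -> x^3 over GF(2^3) with irreducible x^3 + x + 1); it illustrates the hypothesis, it is not the XSL attack:

```python
from itertools import combinations

def quadratic_relations(sbox: list[int], n_in: int, n_out: int) -> int:
    """Number of independent quadratic GF(2) equations true for all inputs.

    One row per input: values of all monomials of degree <= 2 in the
    input and output bits. Relations = #monomials - rank of that matrix.
    """
    def bits(v, n):
        return [(v >> i) & 1 for i in range(n)]

    rows = []
    for x in range(1 << n_in):
        v = bits(x, n_in) + bits(sbox[x], n_out)
        monos = [1] + v + [a & b for a, b in combinations(v, 2)]
        rows.append(int("".join(map(str, monos)), 2))   # pack row as bitmask

    t = 1 + (n_in + n_out) + (n_in + n_out) * (n_in + n_out - 1) // 2
    rank = 0
    for bit in reversed(range(t)):          # Gaussian elimination over GF(2)
        i = next((i for i, r in enumerate(rows) if (r >> bit) & 1), None)
        if i is None:
            continue
        pivot = rows.pop(i)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return t - rank

cube = [0, 1, 3, 4, 5, 6, 7, 2]          # x -> x^3 in GF(2^3), a power mapping
print(quadratic_relations(cube, 3, 3))   # at least 14: 22 monomials, 8 inputs
```

For such a tiny S-box the count is trivially positive (22 monomials against only 8 inputs); for cryptographically sized S-boxes no such guarantee exists, which is why the algebraic structure of the Rijndael and Serpent S-boxes matters.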