92 results for Irreducible polynomial
Abstract:
In this work, a Langevin dynamics model of the diffusion of water in articular cartilage was developed. Numerical simulations of the translational dynamics of water molecules and their interaction with collagen fibers were used to study the quantitative relationship between the organization of the collagen fiber network and the diffusion tensor of water in model cartilage. Langevin dynamics was used to simulate water diffusion in both ordered and partially disordered cartilage models. In addition, an analytical approach was developed to estimate the diffusion tensor for a network comprising a given distribution of fiber orientations. The key findings are that (1) an approximately linear relationship was observed between collagen volume fraction and the fractional anisotropy of the diffusion tensor in fiber networks of a given degree of alignment, (2) for any given fiber volume fraction, fractional anisotropy follows a fiber alignment dependency similar to the square of the second Legendre polynomial of cos(θ), with the minimum anisotropy occurring at approximately the magic angle (θ_MA), and (3) a decrease in the principal eigenvalue and an increase in the transverse eigenvalues are observed as the fiber orientation angle θ progresses from 0° to 90°. The corresponding diffusion ellipsoids are prolate for θ < θ_MA, spherical for θ ≈ θ_MA, and oblate for θ > θ_MA. Expansion of the model to include discrimination between the combined effects of alignment disorder and collagen fiber volume fraction on the diffusion tensor is discussed.
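A quick numerical sketch of the magic-angle claim above (standard textbook formulas, not the paper's simulation): the fractional anisotropy (FA) of a diffusion tensor is computed from its eigenvalues, and the second Legendre polynomial P2(cos θ) vanishes at θ_MA ≈ 54.7°.

```python
import numpy as np

def p2(x):
    """Second Legendre polynomial: P2(x) = (3x^2 - 1) / 2."""
    return 0.5 * (3.0 * x**2 - 1.0)

def fractional_anisotropy(evals):
    """Standard FA from the three eigenvalues of a diffusion tensor."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()  # mean diffusivity
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam**2))

# Magic angle: the zero of P2(cos θ)
theta_ma = np.degrees(np.arccos(1.0 / np.sqrt(3.0)))
print(theta_ma, p2(np.cos(np.radians(theta_ma))))  # ~54.74, ~0.0

# Example: a prolate tensor (one dominant eigenvalue) has FA > 0
print(fractional_anisotropy([2.0e-3, 0.5e-3, 0.5e-3]))
```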
Abstract:
In order to drive sustainable financial profitability, service firms make significant investments in creating service environments that consumers will prefer over the environments of their competitors. To date, servicescape research has focused largely on understanding consumers’ emotional and physiological responses to servicescape attributes, rather than taking a holistic view of how consumers cognitively interpret servicescapes. This thesis argues that consumers cognitively ascribe symbolic meanings to servicescapes and then evaluate whether those meanings are congruent with their sense of Self in order to form a preference for a servicescape. Consequently, this thesis takes a Self Theory approach to servicescape symbolism to address the following broad research question: How do ascribed symbolic meanings influence servicescape preference? Using a three-study, mixed-method approach, this thesis investigates the symbolic meanings consumers ascribe to servicescapes and empirically tests whether the joint effects of congruence between consumer Self and the symbolic meanings ascribed to servicescapes influence consumers’ servicescape preference. First, Study One identifies the symbolic meanings ascribed to salient servicescape attributes using a combination of repertory tests and laddering techniques within 19 semi-structured individual depth interviews. Study Two modifies an existing scale to create a symbolic servicescape meaning scale to measure the symbolic meanings ascribed to servicescapes. Finally, Study Three utilises the Self-Congruity Model to empirically examine the joint effects of consumer Self and servicescape on consumers’ preference for servicescapes. Using polynomial regression with response surface analysis, 14 joint-effect models demonstrate that both Self-Servicescape incongruity and congruity influence consumers’ preference for servicescapes. Combined, the findings of the three studies suggest that the symbolic meanings ascribed to servicescapes, and their (in)congruities with consumers’ sense of Self, can be used to predict consumers’ preferences for servicescapes. These findings make several key theoretical and practical contributions to services marketing.
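Polynomial regression with response surface analysis recurs in several abstracts on this page; here is a minimal sketch on synthetic data (variable names and data are illustrative assumptions, not the thesis's measures): fit the quadratic surface z = b0 + b1·x + b2·y + b3·x² + b4·xy + b5·y², then read off the slopes along the congruence (x = y) and incongruence (x = −y) lines.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)   # e.g. consumer Self rating (assumed)
y = rng.normal(size=200)   # e.g. ascribed servicescape meaning (assumed)
z = 2 + x + y - 0.5 * (x - y) ** 2 + rng.normal(scale=0.3, size=200)

# Design matrix for the full quadratic model
X = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
b, *_ = np.linalg.lstsq(X, z, rcond=None)

# Response-surface test values along the two reference lines
a1, a2 = b[1] + b[2], b[3] + b[4] + b[5]   # congruence line: linear, quadratic
a3, a4 = b[1] - b[2], b[3] - b[4] + b[5]   # incongruence line: linear, quadratic
print(a1, a2, a3, a4)
```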
Abstract:
We consider the problem of how to maximize secure connectivity of multi-hop wireless ad hoc networks after deployment. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first is based on establishing a secret key using only the links that are already secured by secret keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second is based on increasing the power level between a pair of nodes that share a secret key to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We show that both problems are NP-hard and MAX-SNP-hard (i.e., it is NP-hard to approximate them within a factor of 1 + ε for some ε > 0) via a reduction from the MAX3SAT problem. Thus, we design and implement a fully distributed algorithm for authenticated key establishment in wireless sensor networks where each sensor knows only its one-hop neighborhood. Our witness-based approaches find witnesses in the multi-hop neighborhood to authenticate key establishment between two sensor nodes that do not share a key and are not connected through a secure path.
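As an illustration of the witness idea (a centralised sketch under assumed data structures, not the paper's distributed protocol): nodes reachable from both endpoints over already-secured links within a bounded number of hops can serve as witnesses for authenticating a new key.

```python
from collections import deque

def within_hops(key_graph, src, h):
    """Nodes reachable from src over secure (shared-key) links in <= h hops."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == h:
            continue
        for nbr in key_graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

def witnesses(key_graph, u, v, h=2):
    """Candidates with secure paths (<= h hops) to both u and v."""
    return (within_hops(key_graph, u, h) & within_hops(key_graph, v, h)) - {u, v}

# Example: u and v share no key, but w holds keys with both
g = {"u": {"w"}, "v": {"w"}, "w": {"u", "v"}}
print(witnesses(g, "u", "v"))  # {'w'}
```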
Abstract:
We consider the problem of maximizing the secure connectivity in wireless ad hoc networks, and analyze the complexity of the post-deployment key establishment process constrained by physical layer properties such as connectivity, energy consumption and interference. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first is based on establishing a secret key using only the links that are already secured by shared keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second extends the first by increasing the power level between a pair of nodes that share a secret key to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We prove that both problems are NP-hard and MAX-SNP-hard via a reduction from the MAX3SAT problem.
Abstract:
Information and communications technologies are a significant component of the healthcare domain, and electronic health records play a major role within it. As a result, it is important that such records are accepted en masse by healthcare professionals. How healthcare professionals perceive the usefulness of electronic health records, and their attitudes towards them, have been shown to have significant effects on overall acceptance. Using polynomial regression with response surface analysis, this paper investigates the roles of perceived usefulness and attitude in future healthcare professionals' intention to use electronic health records. Results show that the relationship is more complex than predicted in prior research. The paper concludes that the predictive properties of these determinants must be further investigated to clearly understand their role in predicting the intention to use electronic health records and in designing systems that are better adopted by the healthcare professionals of the future.
Abstract:
In this paper, a polynomial-time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation, which removes local neighborhood configurations that cannot be used in a pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, and a detailed complexity and accuracy analysis is also given. In the case of time complexity, it is shown that the average-case time complexity of the algorithm is Θ(n²), and the best and worst cases are Ω(n) and O(n³) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average-case performance.
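For intuition, here is a minimal pre-image existence check, sketched for a one-dimensional elementary CA with periodic boundary (an assumption for brevity; the paper's graph-CA setting is more general). It propagates the set of neighborhood pairs that can still extend a pre-image and discards those that cannot, in the spirit of the neighborhood elimination operation described above.

```python
def has_preimage(y, rule):
    """y: tuple of 0/1 cells (periodic); rule: dict (l, c, r) -> 0/1."""
    n = len(y)
    for start in [(a, b) for a in (0, 1) for b in (0, 1)]:
        states = {start}  # surviving (x_i, x_{i+1}) pairs
        for i in range(1, n):
            states = {(b, c) for (a, b) in states for c in (0, 1)
                      if rule[(a, b, c)] == y[i]}
        # close the cycle: f(x_{n-1}, x_0, x_1) must equal y[0]
        if any(b == start[0] and rule[(a, b, start[1])] == y[0]
               for (a, b) in states):
            return True
    return False  # no pre-image: y is a Garden-of-Eden configuration

# Example: rule 110 as a lookup table
rule110 = {(l, c, r): (110 >> (l * 4 + c * 2 + r)) & 1
           for l in (0, 1) for c in (0, 1) for r in (0, 1)}
is_eden = not has_preimage((0, 1, 0, 1, 1, 0), rule110)
```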
Abstract:
In the expanding literature on creative practice research, art and design are often described as a unified field. They are bracketed together (art-and-design), referred to as interchangeable terms (art/design), and nested together, as if the practices of one domain encompass the other. However, it is possible to establish substantial differences in research approaches. In this chapter we argue that core distinctions arise out of the goals of the research, intentions invested in the resulting “artefacts” (creative works, products, events), and the knowledge claims made for the research outcomes. Moreover, these fundamental differences give rise to a number of contingent attributes of the research such as the forming contexts, methodological approaches, and ways of evidencing and reporting new knowledge. We do not strictly ascribe these differences to disciplinary contexts. Rather, we use the terms effective practice research and evocative practice research to describe the spirit of the two distinctive research paradigms we identify. In short, effective practice research (often pursued in design fields) seeks a solution (or resolution) to a problem identified with a particular community, and it produces an artefact that addresses this problem by effecting change (making a situation, product or process more efficient or effective in some way). On the other hand, evocative practice research (often pursued in creative arts fields) is driven by individual preoccupations, cultural concerns or human experience more broadly. It produces artefacts that evoke affect and resonance, and are poetically irreducible in meaning. We cite recent examples of creative research projects that illustrate the distinctions we identify. We then go on to describe projects that integrate these modes of research. In this way, we map out a creative research spectrum, with distinct poles as well as multiple hybrid possibilities. The hybrid projects we reference are not presented as evidence of an undifferentiated field. Instead, we argue that they integrate research modes in deliberate, purposeful and distinctive ways: employing effective practice research methods in the production of evocative artefacts or harnessing evocative (as well as effective) research paradigms to effect change.
Abstract:
Irradiance profile around the receiver tube (RT) of a parabolic trough collector (PTC) is a key aspect of optical performance that affects the overall energy performance of the collector. Thermal performance evaluation of the RT relies on the appropriate determination of the irradiance profile. This article explains a technique in which empirical equations were developed to calculate the local irradiance as a function of angular location on the RT of a standard PTC, using a rigorously verified Monte Carlo ray tracing model. A large range of test conditions, including daily normal insolation, spectral selective coatings and glass envelope conditions, was selected from the data published by Dudley et al. [1]. The R² values of the equations are excellent, varying between 0.9857 and 0.9999. Therefore, these equations can be used confidently to produce realistic non-uniform boundary heat flux profiles around the RT at normal incidence for conjugate heat transfer analyses of the collector. The required inputs to the equations are the daily normal insolation and the spectral selective properties of the collector components. Since the equations are polynomial functions, data-processing software can be used to calculate the flux profile easily and quickly. The ultimate goal of this research is to help make concentrating solar power technology cost-competitive with conventional energy technology by facilitating its ongoing research.
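Because the fitted equations are ordinary polynomials, evaluating a flux profile is a one-liner in any data-processing package. A sketch with placeholder coefficients (the fitted values themselves are given in the article, not here):

```python
import numpy as np

# Hypothetical cubic fit of local irradiance vs. angular location (degrees)
coeffs = [1.2e-6, -3.4e-4, 2.1e-2, 0.95]   # highest to lowest order
phi = np.linspace(0.0, 360.0, 721)          # angular location around the RT
local_flux = np.polyval(coeffs, phi)        # boundary heat flux profile
```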
Abstract:
Purpose: To investigate the differences between, and variations across time in, corneal topography and ocular wavefront aberrations in young Singaporean myopes and emmetropes. Methods: We used a videokeratoscope and wavefront sensor to measure the ocular surface topography and the wavefront aberrations of the total eye optics in the morning, at mid-day and in the late afternoon on two separate days. Topography data were used to derive the corneal surface wavefront aberrations. Both the corneal and total wavefronts were analysed up to the 4th radial order of the Zernike polynomial expansion, and were centred on the entrance pupil (5 mm). The participants included 12 young progressing myopes, 13 young stable myopes and 15 young age-matched emmetropes. Results: For all subjects considered together, there were significant changes in some of the aberration terms across the day, such as spherical aberration and vertical coma (repeated measures ANOVA, p<0.05). The magnitude of positive spherical aberration was significantly lower in the progressing myope group than in the stable myope group (p=0.04) and the emmetrope group (p=0.02). There were also significant interactions between refractive group and time of day for with/against-the-rule astigmatism. Significantly lower 4th-order RMS of ocular wavefront aberrations was found in the progressing myope group compared with the stable myopes and emmetropes (p<0.01). Conclusions: These differences and variations in the corneal and total aberrations may have significance for our understanding of refractive error development and for clinical applications requiring accurate wavefront measurements.
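For reference, with orthonormal Zernike coefficients the reported 4th-order RMS is simply the root sum of squares of the 4th-radial-order coefficients; a minimal sketch with hypothetical coefficient values:

```python
import math

def rms(coeffs):
    """RMS wavefront error from orthonormal Zernike coefficients."""
    return math.sqrt(sum(c * c for c in coeffs))

# Hypothetical 4th-radial-order coefficients (micrometres)
fourth_order = [0.05, -0.02, 0.11, 0.01, -0.03]
print(rms(fourth_order))
```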
Abstract:
In hypercompetition, firms that are agile, sensing and responding better to customer requirements, tend to be more successful and achieve supernormal profits. In spite of the widely accepted importance of customer agility, research on this construct is limited. The limited research has also predominantly focussed on the firm’s perspective of agility. However, we propose that customers are better positioned to determine how well a firm is responding to their requirements (that is, a firm’s customer agility). Taking the customers’ standpoint, we address the issue of sense-and-respond alignment from two perspectives: matching and mediating. Based on data collected from customers in a field study, we tested hypotheses pertaining to the two methods of alignment using polynomial regression and response surface methodology. The results provide a good explanation of the role of both forms of alignment in customer satisfaction. Implications for research and practice are discussed.
Abstract:
We construct two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map. Selective-identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in an adaptive-identity attack the adversary is allowed to choose this identity adaptively. Our first system—BB1—is based on the well studied decisional bilinear Diffie–Hellman assumption, and extends naturally to systems with hierarchical identities, or HIBE. Our second system—BB2—is based on a stronger assumption which we call the Bilinear Diffie–Hellman Inversion assumption and provides another approach to building IBE systems. Our first system, BB1, is very versatile and well suited for practical applications: the basic hierarchical construction can be efficiently secured against chosen-ciphertext attacks, and further extended to support efficient non-interactive threshold decryption, among others, all without using random oracles. Both systems, BB1 and BB2, can be modified generically to provide “full” IBE security (i.e., against adaptive-identity attacks), either using random oracles, or in the standard model at the expense of a non-polynomial but easy-to-compensate security reduction.
Abstract:
Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup (‘semi-ideal’) model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a “commodity” well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ · n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
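The paper's additive-to-multiplicative conversion subprotocol is its own contribution; as background, here is the standard trusted-setup ("commodity") pattern it builds on: Beaver-triple multiplication of additively shared secrets, sketched for two parties over a hypothetical prime field.

```python
import random

P = 2**61 - 1  # prime field modulus (assumption for the sketch)

def share(v):
    """Additively share v between two parties."""
    r = random.randrange(P)
    return r, (v - r) % P

# Trusted setup distributes shares of a random triple (a, b, c = a*b)
a, b = random.randrange(P), random.randrange(P)
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share(a * b % P)

x, y = 12345, 67890      # the secrets to be multiplied
x0, x1 = share(x)
y0, y1 = share(y)

# Each party opens its shares of d = x - a and e = y - b
# (d and e reveal nothing about x and y, since a and b are random)
d = (x0 - a0 + x1 - a1) % P
e = (y0 - b0 + y1 - b1) % P

# Local computation of shares of x*y; party 0 adds the public term d*e
z0 = (c0 + d * b0 + e * a0 + d * e) % P
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == x * y % P
```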
Abstract:
Recently, several classes of permutation polynomials of the form (x^2 + x + δ)^s + x over F_{2^m} have been discovered. They are related to Kloosterman sums. In this paper, the permutation behavior of polynomials of the form (x^p − x + δ)^s + L(x) over F_{p^m} is investigated, where L(x) is a linearized polynomial with coefficients in F_p. Six classes of permutation polynomials over F_{2^m} are derived. Three classes of permutation polynomials over F_{3^m} are also presented.
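A brute-force way to observe such permutation behavior on a small field (an illustrative check, not the paper's characterisation): implement GF(2^m) arithmetic modulo an irreducible polynomial and test whether f(x) = (x^2 + x + δ)^s + x hits every field element.

```python
def gf_mul(a: int, b: int, m: int, poly: int) -> int:
    """Multiply in GF(2^m), reducing modulo the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(a: int, e: int, m: int, poly: int) -> int:
    """Square-and-multiply exponentiation in GF(2^m)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a, m, poly)
        a = gf_mul(a, a, m, poly)
        e >>= 1
    return r

def permutes(s: int, d: int, m: int = 4, poly: int = 0b10011) -> bool:
    """Does f(x) = (x^2 + x + d)^s + x permute GF(2^m)?  (x^4 + x + 1 here)"""
    images = {gf_pow(gf_mul(x, x, m, poly) ^ x ^ d, s, m, poly) ^ x
              for x in range(1 << m)}
    return len(images) == 1 << m

# Example: list the (s, d) pairs that yield permutations of GF(16)
print([(s, d) for s in range(1, 16) for d in range(16) if permutes(s, d)])
```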
Abstract:
We study the multicast stream authentication problem when an opponent can drop, reorder and introduce data packets into the communication channel. In such a model, packet overhead and computing efficiency are two parameters to be taken into account when designing a multicast stream authentication protocol. In this paper, we propose to use two families of erasure codes to deal with this problem, namely rateless codes and maximum distance separable codes. Our constructions will have the following advantages. First, our packet overhead will be small. Second, the number of signature verifications to be performed at the receiver is O(1). Third, every receiver will be able to recover all the original data packets emitted by the sender despite losses and injections occurring during the transmission of information.
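A toy illustration of the MDS property relied on above (Reed-Solomon-style coding over a hypothetical prime field, not the paper's authentication protocol): any k of the n coded symbols recover the original k data symbols by Lagrange interpolation.

```python
P = 2**31 - 1  # Mersenne prime modulus (assumption for the sketch)

def interpolate(points, x):
    """Lagrange interpolation evaluated at x over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Treat data as values at x = 1..k; emit coded values at x = 1..n."""
    pts = list(enumerate(data, start=1))
    return [(x, interpolate(pts, x)) for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the k data symbols from any k distinct shares."""
    return [interpolate(shares[:k], x) for x in range(1, k + 1)]

data = [7, 11, 13]
shares = encode(data, 6)
assert decode(shares[2:5], 3) == data  # symbols 3..5 alone suffice
```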
Abstract:
Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt’04 to the distributed setting. By using a secret sharing scheme, we provide a distributed solution of the FNP private matching, called distributed private matching. In our distributed private matching scheme, we use a polynomial to represent one party’s dataset, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to the distributed set intersection and the cardinality of the intersection, and further we show how to apply the distributed private matching in order to compute the distributed subset relation. Our work extends the primitives of private matching and set intersection by Freedman et al. Our distributed construction might be of great value when the dataset is outsourced and its privacy is the main concern. In such cases, our distributed solutions keep the utility of those set operations while the dataset privacy is not compromised. Compared with previous works, we achieve a more efficient solution in terms of computation. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
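The core FNP-style encoding referenced above, stripped of the homomorphic-encryption and secret-sharing layers (a plain sketch over a hypothetical prime field): a dataset is represented by the polynomial whose roots are its elements, so membership corresponds to the polynomial evaluating to zero.

```python
P = 2**61 - 1  # prime field modulus (assumption for the sketch)

def to_polynomial(dataset):
    """Coefficients (low to high order) of prod_i (x - s_i) over GF(P)."""
    coeffs = [1]
    for s in dataset:
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] = (nxt[i] - s * c) % P       # the (-s) * c term
            nxt[i + 1] = (nxt[i + 1] + c) % P   # the x * c term
        coeffs = nxt
    return coeffs

def evaluate(coeffs, x):
    """Horner evaluation of the polynomial at x over GF(P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

poly = to_polynomial([3, 14, 159])
print(evaluate(poly, 14) == 0)   # True: 14 is in the set
print(evaluate(poly, 15) == 0)   # False: 15 is not
```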