983 results for gap bilinear diffie hellman problem
Abstract:
Flow patterns and aerodynamic characteristics behind three side-by-side square cylinders are found to depend on the unequal gap spacings (g1 = s1/d and g2 = s2/d) between the three cylinders and on the Reynolds number (Re), using the Lattice Boltzmann method. The effect of the Reynolds number on the flow behind the three cylinders is studied numerically for 75 ≤ Re ≤ 175 at chosen unequal gap spacings (g1, g2) = (1.5, 1), (3, 4) and (7, 6). We also investigate the effect of g2 while keeping g1 fixed for Re = 150. It is found that the Reynolds number has a strong effect on the flow at the small unequal gap spacing (g1, g2) = (1.5, 1.0). It is also found that the secondary cylinder interaction frequency contributes significantly at this unequal gap spacing for all chosen Reynolds numbers. At the intermediate unequal gap spacing (g1, g2) = (3, 4), the primary vortex shedding frequency plays the major role and the effect of the secondary cylinder interaction frequencies almost disappears. Some vortices merge near the exit, and as a result a small modulation is found in the drag and lift coefficients. This indicates that increasing the Reynolds number and the unequal gap spacing weakens the wake interaction between the cylinders. At the large unequal gap spacing (g1, g2) = (7, 6), the flow is fully periodic and no small modulation is found in the drag and lift coefficient signals. It is found that the jet flows through the unequal gaps strongly influence the wake interaction as the Reynolds number varies. These unequal gap spacings separate the wake patterns for different Reynolds numbers into: flip-flopping, in-phase and anti-phase modulation synchronized, and in-phase and anti-phase synchronized. It is also observed that in the case of equal gap spacing between the cylinders the effect of gap spacing is stronger than that of the Reynolds number, whereas in the case of unequal gap spacing the wake patterns depend strongly on both the unequal gap spacing and the Reynolds number. The vorticity contour visualization, time histories of the drag and lift coefficients, power spectra of the lift coefficient, and force statistics are systematically discussed for all chosen unequal gap spacings and Reynolds numbers to fully understand this valuable and practical problem.
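For readers who want to reproduce the kind of power-spectrum analysis this abstract mentions, here is a minimal sketch of extracting the primary shedding frequency from a lift-coefficient time history; the signal is synthetic (not the paper's data), and the cylinder size d and inflow speed U are assumed to be unity in lattice units.

```python
import numpy as np

# Synthetic lift-coefficient trace: a primary shedding component plus a
# weaker secondary "cylinder interaction" component and noise.
dt = 0.01                                   # non-dimensional time step
t = np.arange(0.0, 200.0, dt)
cl = 0.8 * np.sin(2 * np.pi * 0.16 * t) + 0.1 * np.sin(2 * np.pi * 0.05 * t)
cl += 0.02 * np.random.default_rng(0).standard_normal(t.size)

# Power spectrum of the fluctuating lift: the tallest peak marks the
# primary vortex-shedding frequency; lower secondary peaks would mark
# gap-flow (cylinder interaction) frequencies.
freqs = np.fft.rfftfreq(t.size, dt)
power = np.abs(np.fft.rfft(cl - cl.mean())) ** 2
f_shed = freqs[np.argmax(power)]

d, U = 1.0, 1.0     # cylinder size and inflow speed (assumed lattice units)
print(f"primary shedding frequency ~ {f_shed:.3f}, St = f*d/U ~ {f_shed * d / U:.3f}")
```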
Abstract:
In this study I consider what kind of perspective on the mind-body problem is taken, and can be taken, by the philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) everything that exists is physical; (b) mental phenomena cannot be reduced to states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright also claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in contemporary philosophy of mind, the ontology of mind is still an uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian - von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena like beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism - the irreducibility of the mental - and the problem of mental causation are closely related. If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can one tell when - or whether - the mental causes in terms of which human actions are explained are actually effective? I argue that this - how to decide when mental causes really are effective - is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason for defending a non-reductive view comes down to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically explored in the future.
Abstract:
In this paper, we present a growing and pruning radial basis function based no-reference (NR) image quality model for JPEG-coded images. The quality of the images is estimated without reference to the original images. The features for predicting perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality estimation involves computing a functional relationship between the HVS features and subjective test scores. Here, the problem of quality estimation is transformed into a function approximation problem and solved using a GAP-RBF network. The GAP-RBF network uses a sequential learning algorithm to approximate the functional relationship. The computational complexity and memory requirement of the GAP-RBF algorithm are lower than those of batch learning algorithms. Moreover, the GAP-RBF algorithm finds a compact image quality model and does not require retraining when new image samples are presented. Experimental results show that the GAP-RBF image quality model emulates the mean opinion score (MOS) well. The subjective test results of the proposed metric are compared with a JPEG no-reference image quality index as well as a full-reference structural similarity image quality index, and it is observed to outperform both.
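As a rough illustration of the function-approximation step, the sketch below fits a plain batch Gaussian-RBF model mapping the four HVS features to MOS. It stands in for, and is much simpler than, the sequential growing-and-pruning GAP-RBF learning described in the paper; all data are synthetic.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations for feature rows X."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
# Feature rows: [edge amplitude, edge length, background activity, background luminance]
X_train = rng.uniform(0, 1, size=(200, 4))
mos_train = 5 - 3 * X_train[:, 0] + rng.normal(0, 0.1, 200)  # synthetic MOS

centers = X_train[rng.choice(200, size=15, replace=False)]   # fixed centers
Phi = rbf_design(X_train, centers, width=0.3)
weights, *_ = np.linalg.lstsq(Phi, mos_train, rcond=None)    # batch least-squares fit

X_test = rng.uniform(0, 1, size=(5, 4))
mos_pred = rbf_design(X_test, centers, 0.3) @ weights        # predicted quality scores
print(np.round(mos_pred, 2))
```

The GAP-RBF algorithm of the paper would instead add, adjust, or prune the centers one sample at a time, which is what keeps the model compact and avoids retraining.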
Abstract:
Proper formulation of stress-strain relations, particularly in tension-compression situations for isotropic bimodulus materials, is an unresolved problem. Ambartsumyan's model [8] and Jones' weighted compliance matrix model [9] do not satisfy the principle of coordinate invariance. Shapiro's first stress invariant model [10] is too simple a model to describe the behavior of real materials. In fact, Rigbi [13] has raised a question about the compatibility of bimodularity with isotropy in a solid. Medri [2] has opined that linear principal strain-principal stress relations are fictitious, and warned that the bilinear approximation of uniaxial stress-strain behavior leads to an ill-working bimodulus material model under combined loading. In the present work, a general bilinear constitutive model is presented and described in the biaxial principal stress plane with zonewise linear principal strain-principal stress relations. The elastic coefficients in the model are characterized based on the signs of (i) the principal stresses, (ii) the principal strains, and (iii) on the strain energy component ratio ER being greater or less than unity. The last criterion is used in tension-compression and compression-tension situations to account for different shear moduli in the pure shear stress and pure shear strain states as well as for unequal cross compliances.
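A toy sketch of the zonewise idea follows, using only criterion (i) (the signs of the principal stresses) and made-up moduli; the paper's full characterization also involves the principal-strain signs and the energy ratio ER.

```python
def bilinear_strains(sigma1, sigma2, Et, Ec, nu_t, nu_c):
    """Zonewise principal strains for a bimodulus material (toy sketch).

    Each principal direction uses the tensile modulus Et when its
    principal stress is non-negative and the compressive modulus Ec
    otherwise; the Poisson (cross-compliance) term follows the zone of
    the *other* principal stress, so tension-compression quadrants get
    unequal cross compliances.
    """
    E1, nu1 = (Et, nu_t) if sigma1 >= 0 else (Ec, nu_c)
    E2, nu2 = (Et, nu_t) if sigma2 >= 0 else (Ec, nu_c)
    eps1 = sigma1 / E1 - nu2 * sigma2 / E2
    eps2 = sigma2 / E2 - nu1 * sigma1 / E1
    return eps1, eps2

# Tension-compression quadrant: different moduli act on each axis.
print(bilinear_strains(100.0, -50.0, Et=200e3, Ec=150e3, nu_t=0.30, nu_c=0.25))
```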
Abstract:
The Cubic Sieve Method for solving the Discrete Logarithm Problem in prime fields requires a nontrivial solution to the Cubic Sieve Congruence (CSC) $x^3 \equiv y^2 z \pmod{p}$, where $p$ is a given prime number. A nontrivial solution must also satisfy $x^3 \neq y^2 z$ and $1 \le x, y, z < p^{\alpha}$, where $\alpha$ is a given real number such that $1/3 < \alpha \le 1/2$. The CSC problem is to find an efficient algorithm to obtain a nontrivial solution to CSC. CSC can be parametrized as $x \equiv v^2 z \pmod{p}$ and $y \equiv v^3 z \pmod{p}$. In this paper, we give a deterministic polynomial-time ($O(\ln^3 p)$ bit-operations) algorithm to determine, for a given $v$, a nontrivial solution to CSC, if one exists. Previously it took $\tilde{O}(p^{\alpha})$ time in the worst case to determine this. We relate the CSC problem to the gap problem of fractional part sequences, where we need to determine the non-negative integers $N$ satisfying the fractional part inequality $\{\theta N\} < \phi$ ($\theta$ and $\phi$ are given real numbers). The correspondence between the CSC problem and the gap problem is that determining the parameter $z$ in the former corresponds to determining $N$ in the latter. We also show, in the $\alpha = 1/2$ case of CSC, that for a certain class of primes the CSC problem can be solved deterministically in $\tilde{O}(p^{1/3})$ time, compared to the previous best of $\tilde{O}(p^{1/2})$. It is empirically observed that about one out of three primes is covered by this class.
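The parametrization is easy to verify computationally: since $x \equiv v^2 z$ and $y \equiv v^3 z \pmod{p}$ give $x^3 \equiv v^6 z^3 \equiv y^2 z \pmod{p}$ identically, solving CSC amounts to finding a $z$ that brings $x$, $y$, $z$ under the bound. The brute-force toy below (not the paper's polynomial-time algorithm) searches a small prime; nontriviality requires actual modular wraparound, since without reduction $x^3 = y^2 z$ holds exactly.

```python
# Toy brute force over the CSC parametrization x = v^2 z, y = v^3 z (mod p).
p, alpha = 1009, 0.5
bound = int(p ** alpha)                           # here p^(1/2) ~ 31

hits = []
for v in range(2, p):
    for z in range(1, bound + 1):
        x, y = (v * v * z) % p, (pow(v, 3, p) * z) % p
        # keep only nontrivial solutions: congruence without equality in Z
        if 1 <= x <= bound and 1 <= y <= bound and x ** 3 != y * y * z:
            assert pow(x, 3, p) == (y * y * z) % p  # CSC holds mod p
            hits.append((v, x, y, z))
print(hits[:5])   # any nontrivial (v, x, y, z) found within the bound
```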
Abstract:
This paper suggests a method for identification in the ν-gap metric. For a finite number of frequency response samples, a problem of identification in the ν-gap metric is formulated and an approximate solution is described. It uses an iterative technique for obtaining an L2-gap approximation, where each stage of the iteration involves solving an LMI optimisation. Given a known stabilising controller and the L2-gap approximation, it is shown how to derive a ν-gap approximation.
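For intuition, the sketch below evaluates the pointwise chordal distance underlying these gap metrics for a pair of SISO frequency-response samples. The supremum over the samples gives an L∞-type gap estimate; the true ν-gap additionally requires a winding-number condition that this sketch ignores, and the plants here are illustrative stand-ins.

```python
import numpy as np

def chordal_distance(p1, p2):
    """Pointwise chordal distance between two SISO frequency responses:
    kappa(w) = |P1 - P2| / (sqrt(1 + |P1|^2) * sqrt(1 + |P2|^2))."""
    num = np.abs(p1 - p2)
    den = np.sqrt(1 + np.abs(p1) ** 2) * np.sqrt(1 + np.abs(p2) ** 2)
    return num / den

w = np.logspace(-2, 2, 400)          # frequency grid (rad/s)
P1 = 1.0 / (1j * w + 1.0)            # sampled plant
P2 = 1.1 / (1j * w + 0.9)            # candidate model from identification
print(f"sup-of-samples gap estimate: {chordal_distance(P1, P2).max():.4f}")
```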
Abstract:
Using the theory of Eliashberg and Nambu for strong-coupling superconductors, we have calculated the gap function for a model superconductor and a selection of real superconductors including the elements Al, Sn, Tl, Nb, In, Pb and Hg and one alloy, Bi2Tl. We have determined the temperature-dependent gap edge in each and found that in materials with strong electron-phonon coupling ($\lambda \gtrsim 1.20$), not only is the gap edge double valued but it also departs significantly from the BCS form and develops a shoulder-like structure which may, in some cases, denote a gap edge exceeding the $T = 0$ value. These computational results support the insights obtained by Leavens in an analytic consideration of the general problem. Both the shoulder and the double value arise from a common origin seated in the form of the gap function in strongly coupled materials at finite temperatures. From the calculated gap function, we can determine the densities of states in the materials and the form of the tunneling current-voltage characteristics for junctions with these materials as electrodes. By way of illustration, results are shown for the contrasting cases of Sn ($\lambda = 0.74$) and Hg ($\lambda = 1.63$). The reported results are distinct in several ways from BCS predictions and provide an incentive for determinative experimental studies with techniques such as tunneling and far-infrared absorption.
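As a point of reference for these departures, here is a minimal sketch of the standard weak-coupling (BCS) quasiparticle density of states, $N(E)/N(0) = \mathrm{Re}\,(|E|/\sqrt{E^2 - \Delta^2})$, which is the baseline the reported results deviate from; in the Eliashberg treatment $\Delta$ becomes a complex, energy-dependent gap function, which is what produces the shoulder structure. This is the textbook formula, not the paper's calculation.

```python
import numpy as np

def bcs_dos(E, delta):
    """BCS quasiparticle density of states, normalized to N(0).

    Zero inside the gap (|E| < delta), square-root divergence at the
    gap edge, approaching 1 at high energies.
    """
    Ec = E.astype(complex)
    return np.real(np.abs(Ec) / np.sqrt(Ec ** 2 - delta ** 2))

E = np.linspace(0.0, 3.0, 300)        # energy in units of Delta
print(bcs_dos(E, delta=1.0)[::50])    # 0 below the gap, peak near E = Delta
```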
Abstract:
To explore the idea of education to close the ingenuity gap, I use Thomas Homer-Dixon's work to define ingenuity. The notion that the supply of ingenuity to solve our technical and social problems is not keeping pace with the ingenuity required to solve those problems is called the ingenuity gap. Man-made technological developments are increasing the density, intensity, and pace of globalisation. People must reorganise decision-making organisations and problem-solving methods to pragmatically combat the growing ingenuity gap. John Dewey's work illustrates the fundamental attitudes for the thinking and judgment associated with educating for ingenuity. Howard Gardner's idea that truth, beauty, and morality ought to form the core values and tenets of the philosophy of educating for ingenuity is integral to this thesis. The act of teaching facilitates the invitation to the communication necessary to foster ingenuity. John Novak discusses the five relationships of educational leadership that enhance an environment of ingenuity. The International Baccalaureate (IB) is an existing model of global education, one that defines some of the school experiences and academic development of the core values of educating for ingenuity. Expanding upon the structure of the IB and other research within this thesis, I speculate on what my school, where educating for ingenuity so as to close the ingenuity gap is the goal, would be like.
Abstract:
This thesis explores the debate and issues regarding the status of visual inferences in the optical writings of Rene Descartes, George Berkeley and James J. Gibson. It gathers arguments from across their works and synthesizes an account of visual depth perception that accurately reflects the larger, metaphysical implications of their philosophical theories. Chapters 1 and 2 address the Cartesian and Berkeleyan theories of depth perception, respectively. For Descartes and Berkeley the debate can be put in the following way: how is it possible that we experience objects as appearing outside of us, at various distances, if objects appear inside of us, in the representations of the individual's mind? Thus, the Descartes-Berkeley component of the debate takes place exclusively within a representationalist setting. Representational theories of depth perception are rooted in the scientific discovery that objects project a merely two-dimensional patchwork of forms on the retina. I call this the "flat image" problem. This poses the problem of depth in terms of a difference between two- and three-dimensional orders (i.e., a gap to be bridged by one inferential procedure or another). Chapter 3 addresses Gibson's ecological response to the debate. Gibson argues that the perceiver cannot be flattened out into a passive, two-dimensional sensory surface. Perception is possible precisely because the body and the environment already have depth. Accordingly, the problem cannot be reduced to a gap between two- and three-dimensional givens, a gap crossed with a projective geometry. The crucial difference is not one of dimensional degree. Chapter 3 explores this theme and attempts to excavate the empirical and philosophical suppositions that lead Descartes and Berkeley to their respective theories of indirect perception. Gibson argues that the notion of visual inference, which is necessary to substantiate representational theories of indirect perception, is highly problematic. To elucidate this point, the thesis steps into the representationalist tradition, in order to show that problems that arise within it demand a turn toward Gibson's information-based doctrine of ecological specificity (which is to say, the theory of direct perception). Chapter 3 concludes with a careful examination of Gibsonian affordances as the sole objects of direct perceptual experience. The final section provides an account of affordances that locates the moving, perceiving body at the heart of the experience of depth; an experience which emerges in the dynamical structures that cross the body and the world.
Abstract:
The capacitated location-routing problem (CLRP) is a key problem in the design of distribution networks. It generalizes the capacitated facility location problem (CFLP) and the multi-depot vehicle routing problem (MDVRP), the former by adding routing decisions and the latter by adding depot location decisions. In this thesis we develop tools for solving the CLRP using mathematical programming. In Chapter 3, we introduce three new models for the CLRP based on vehicle flows and commodity flows, and we show how these dominate, in terms of the quality of the lower bound, the original two-index formulation [19]. New valid inequalities are developed and added to the models, together with known ones. New separation algorithms are also developed which in most cases generalize those found in the literature. Numerical results show that these flow models are in fact useful for solving small to medium-size instances. In Chapter 4, we present a new column generation method based on a set-partitioning formulation. The subproblem is a capacitated shortest path problem. In particular, we use a relaxation of this problem in which routes with cycles of length three or more may be produced. This is complemented by new cuts that further reduce the integrality gap while discouraging the appearance of cycles in the routes. The results suggest that this method provides the best exact method for the CLRP. In Chapter 5, we introduce a new heuristic method for the CLRP. First, a GRASP-type randomized method is run to find an initial set of good-quality solutions. The solutions in this set are then combined so as to improve them. Finally, a destroy-and-repair method is run, based on the solution of a new location-reallocation model that generalizes the reassignment problem [48].
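As a rough illustration of the heuristic's first phase, here is a generic GRASP construction skeleton for a location-allocation simplification of the problem; data structures, costs, and the single-assignment routing are toy stand-ins, not the thesis's actual model.

```python
import random

def grasp_construct(customers, depots, demand, cap, cost, iters=100, rcl_size=3):
    """GRASP-style randomized construction: repeatedly build a solution
    with a restricted candidate list (RCL) and keep the best one found."""
    best, best_cost = None, float("inf")
    for _ in range(iters):
        unassigned, sol, total = set(customers), {d: [] for d in depots}, 0.0
        while unassigned:
            # RCL: the cheapest feasible (depot, customer) assignments
            cand = sorted(
                (cost[d][c], d, c) for d in depots for c in unassigned
                if sum(demand[x] for x in sol[d]) + demand[c] <= cap[d]
            )[:rcl_size]
            if not cand:
                break                         # infeasible partial solution
            _, d, c = random.choice(cand)     # randomized greedy choice
            sol[d].append(c)
            total += cost[d][c]
            unassigned.discard(c)
        if not unassigned and total < best_cost:
            best, best_cost = sol, total
    return best, best_cost

customers, depots = [0, 1, 2, 3], ["A", "B"]
demand = {0: 2, 1: 3, 2: 1, 3: 2}
cap = {"A": 5, "B": 5}
cost = {"A": {0: 1, 1: 4, 2: 2, 3: 3}, "B": {0: 3, 1: 1, 2: 4, 3: 1}}
print(grasp_construct(customers, depots, demand, cap, cost))
```

In the thesis the construction is followed by solution combination and a destroy-and-repair phase, which this sketch omits.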
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. All types of restarting automata considered in the literature up to now accept at least the deterministic context-free languages. Here we introduce and study a new type of restarting automaton, the so-called t-RL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size one only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. Here we study the gap-complexity of these automata. The membership problem for a language that is accepted by a t-RL-automaton with a bounded number of gaps can be solved in polynomial time. On the other hand, t-RL-automata with an unbounded number of gaps accept NP-complete languages.
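A toy reduction system in the spirit of a t-RL-automaton follows: window of size one, up to t = 2 delete steps per cycle, and a minimal acceptance condition (only the empty word is accepted directly). It recognizes { aⁿbⁿ : n ≥ 0 } and is meant to illustrate analysis by reduction, not the paper's formal model.

```python
def t_rl_accepts(word, t=2):
    """Cycle-based analysis by reduction: each cycle deletes up to t
    single symbols (here: the leftmost 'a' and the rightmost 'b'),
    then restarts; acceptance only on reaching the empty word."""
    while word:
        i, j = word.find("a"), word.rfind("b")
        if i == -1 or j == -1 or i > j:
            return False                     # no valid reduction applies
        word = word[:i] + word[i + 1:j] + word[j + 1:]   # one cycle, 2 deletes
    return True                              # minimal acceptance: empty word

print(t_rl_accepts("aaabbb"), t_rl_accepts("aabbb"))  # True False
```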
Abstract:
In this paper we present an error analysis for a Monte Carlo algorithm for evaluating bilinear forms of matrix powers. An Almost Optimal Monte Carlo (MAO) algorithm for solving this problem is formulated. Results on the structure of the probability error are presented, and the construction of robust and interpolation Monte Carlo algorithms is discussed. Results are presented comparing the performance of the Monte Carlo algorithm with that of a corresponding deterministic algorithm. The two algorithms are tested on a well-balanced matrix, and the effects of perturbing this matrix, by small and large amounts, are then studied.
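A minimal sketch of the underlying estimator for the bilinear form v^T A^k h, with MAO-style sampling densities proportional to |v_i| for the initial state and |a_ij| for the transitions (signs absorbed into the random-walk weight); the data are synthetic, and the paper's robust and interpolation variants are not shown.

```python
import numpy as np

def mc_bilinear(v, A, h, k, n_walks=20000, seed=0):
    """Monte Carlo estimate of v^T A^k h via weighted random walks."""
    rng = np.random.default_rng(seed)
    n = len(v)
    p0 = np.abs(v) / np.abs(v).sum()                      # initial density ~ |v_i|
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)  # transitions ~ |a_ij|
    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                                  # importance weight
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]
            i = j
        total += w * h[i]
    return total / n_walks                                # E[w * h_ik] = v^T A^k h

rng = np.random.default_rng(42)
A = rng.uniform(-0.5, 0.5, size=(6, 6))
v, h = rng.uniform(-1, 1, 6), rng.uniform(-1, 1, 6)
# Estimate vs exact value; they agree to within the Monte Carlo error.
print(mc_bilinear(v, A, h, k=3), v @ np.linalg.matrix_power(A, 3) @ h)
```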
Abstract:
1. Comparative analyses are used to address the key question of what makes a species more prone to extinction by exploring the links between vulnerability and intrinsic species' traits and/or extrinsic factors. This approach requires comprehensive species data, but information is rarely available for all species of interest. As a result, comparative analyses often rely on subsets of relatively few species that are assumed to be representative samples of the overall studied group. 2. Our study challenges this assumption and quantifies the taxonomic, spatial, and data type biases associated with the quantity of data available for 5415 mammalian species using the freely available life-history database PanTHERIA. 3. Moreover, we explore how existing biases influence the results of comparative analyses of extinction risk by using subsets of data that attempt to correct for the detected biases. In particular, we focus on links between four species' traits commonly linked to vulnerability (distribution range area, adult body mass, population density and gestation length) and conduct univariate and multivariate analyses to understand how biases affect model predictions. 4. Our results show important biases in data availability, with c. 22% of mammals completely lacking data. Missing data, which appear to be not missing at random, occur frequently in all traits (14–99% of cases missing). Data availability is explained by intrinsic traits, with larger mammals occupying bigger range areas being the best studied. Importantly, we find that existing biases affect the results of comparative analyses by overestimating the risk of extinction and changing which traits are identified as important predictors. 5. Our results raise concerns over our ability to draw general conclusions regarding what makes a species more prone to extinction. Missing data represent a prevalent problem in comparative analyses, and unfortunately, because data are not missing at random, conventional approaches to filling data gaps are not valid or present important challenges. These results show the importance of making appropriate inferences from comparative analyses by focusing on the subset of species for which data are available. Ultimately, addressing the data bias problem requires greater investment in data collection and dissemination, as well as the development of methodological approaches to effectively correct existing biases.
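A toy audit of this kind of bias is sketched below: per-trait missingness and a check of whether missingness is related to body size. The column names mimic PanTHERIA-style traits but are hypothetical, and the data frame is synthetic, not the real database.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
mass = rng.lognormal(mean=5, sigma=2, size=n)
df = pd.DataFrame({
    "adult_body_mass_g": mass,
    # small-bodied species are made more likely to lack data,
    # mimicking data that are not missing at random
    "gestation_length_d": np.where(rng.random(n) < 1 / (1 + mass / 500),
                                   np.nan, rng.normal(120, 30, n)),
})

print(df.isna().mean())   # fraction of species missing each trait
# median body mass of species with vs without gestation data:
print(df.groupby(df["gestation_length_d"].isna())["adult_body_mass_g"].median())
```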
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The BCS superconductivity to Bose condensation crossover problem is studied in two dimensions in S, P, and D waves, for a simple anisotropic pairing, with a finite-range separable potential at zero temperature. The gap parameter and the chemical potential as a function of the Cooper-pair binding $B_c$ exhibit universal scaling. In the BCS limit the results for the coherence length $\xi$ and the critical temperature $T_c$ are appropriate for high-$T_c$ cuprate superconductors and also exhibit universal scaling as a function of $B_c$.
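For orientation, the zero-range s-wave case in two dimensions has textbook closed forms that display exactly this kind of universal scaling in the pair binding energy, $\mu = \epsilon_F - B_c/2$ and $\Delta = \sqrt{2\,\epsilon_F B_c}$; the paper's finite-range separable potential and its P- and D-wave channels modify the details, so the sketch below is only the standard reference case.

```python
import numpy as np

# Crossover trajectory in units of the Fermi energy (eps_F = 1):
# weak binding gives the BCS-like regime (mu > 0), strong binding
# drives mu negative, signalling Bose condensation of tight pairs.
Bc = np.logspace(-3, 2, 6)        # pair binding energy / eps_F
mu = 1.0 - Bc / 2.0               # chemical potential / eps_F
delta = np.sqrt(2.0 * Bc)         # gap parameter / eps_F
for b, m, d in zip(Bc, mu, delta):
    regime = "BCS-like" if m > 0 else "Bose-like"
    print(f"B_c={b:8.3f}  mu={m:8.3f}  Delta={d:7.3f}  ({regime})")
```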