926 results for Number Theory
Abstract:
We present a general formalism for deriving bounds on the shape parameters of the weak and electromagnetic form factors, using as input correlators calculated from perturbative QCD and exploiting analyticity and unitarity. The values resulting from the symmetries of QCD at low energies or from lattice calculations at special points inside the analyticity domain can be included in an exact way. We write down the general solution of the corresponding Meiman problem for an arbitrary number of interior constraints and the integral equations that allow one to include the phase of the form factor along a part of the unitarity cut. A formalism that includes the phase and some information on the modulus along a part of the cut is also given. For illustration we present constraints on the slope and curvature of the K_l3 scalar form factor and discuss our findings in some detail. The techniques are useful for checking the consistency of various inputs and for controlling the parameterizations of the form factors entering precision predictions in flavor physics.
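The analyticity-plus-unitarity machinery referred to above follows a standard pattern, which can be sketched as follows (the notation, i.e. the threshold t_+, the weight rho(t), the outer function phi, and the bound chi, is generic and assumed here, not taken from this abstract):

```latex
% Generic sketch of a unitarity bound (standard setup; details assumed).
% Perturbative QCD gives a number \chi(Q^2) bounding a weighted integral
% of |F|^2 along the unitarity cut t \ge t_+:
\frac{1}{\pi}\int_{t_+}^{\infty} \rho(t)\,\lvert F(t)\rvert^{2}\,dt
  \;\le\; \chi(Q^{2}).
% Mapping the cut t-plane onto the unit disk,
z(t) \;=\; \frac{\sqrt{t_+ - t}-\sqrt{t_+ - t_0}}
               {\sqrt{t_+ - t}+\sqrt{t_+ - t_0}},
% and absorbing \rho into an outer function \phi, the product
% g(z) = \phi(z)\,F(t(z)) is analytic in |z|<1, so with
g(z) \;=\; \sum_{k\ge 0} a_k z^{k}
% the bound becomes a quadratic constraint on the coefficients,
\sum_{k\ge 0} a_k^{2} \;\le\; \chi(Q^{2}),
% which in turn constrains the Taylor coefficients of F at t = t_0,
% i.e., shape parameters such as the slope and curvature.
```

Interior values or phase information of the kind described in the abstract enter as additional linear conditions on the a_k; solving for the allowed shape-parameter region under all such conditions is the Meiman problem the abstract refers to.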
Abstract:
Let G(V, E) be a simple, undirected graph where V is the set of vertices and E is the set of edges. A b-dimensional cube is a Cartesian product l(1) x l(2) x ... x l(b), where each l(i) is a closed interval of unit length on the real line. The cubicity of G, denoted by cub(G), is the minimum positive integer b such that the vertices in G can be mapped to axis-parallel b-dimensional cubes in such a way that two vertices are adjacent in G if and only if their assigned cubes intersect. An interval graph is a graph that can be represented as the intersection of intervals on the real line, i.e. the vertices of an interval graph can be mapped to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals overlap. Suppose S(m) denotes a star graph on m+1 nodes. We define the claw number psi(G) of the graph to be the largest positive integer m such that S(m) is an induced subgraph of G. It can be easily shown that the cubicity of any graph is at least ⌈log2 psi(G)⌉. In this article, we show that for an interval graph G, ⌈log2 psi(G)⌉ <= cub(G) <= ⌈log2 psi(G)⌉ + 2. It is not clear whether the upper bound of ⌈log2 psi(G)⌉ + 2 is tight: till now we are unable to find any interval graph with cub(G) > ⌈log2 psi(G)⌉. We also show that for an interval graph G, cub(G) <= ⌈log2 alpha⌉, where alpha is the independence number of G. Therefore, in the special case of psi(G) = alpha, cub(G) is exactly ⌈log2 alpha⌉. The concept of cubicity can be generalized by considering boxes instead of cubes. A b-dimensional box is a Cartesian product l(1) x l(2) x ... x l(b), where each l(i) is a closed interval on the real line. The boxicity of a graph, denoted box(G), is the minimum b such that G is the intersection graph of b-dimensional boxes. It is clear that box(G) <= cub(G). From the above result, it follows that for any graph G, cub(G) <= box(G) ⌈log2 alpha⌉. (C) 2010 Wiley Periodicals, Inc. J Graph Theory 65: 323-333, 2010
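Both the claw number and the stated cubicity bounds are easy to evaluate from an explicit interval representation. The small sketch below (Python; the interval encoding and helper names are ours, not the paper's) finds psi(G) by taking, for each interval, a maximum set of pairwise disjoint intervals that intersect it, via the classic greedy-by-right-endpoint rule:

```python
import math

def overlaps(a, b):
    # closed intervals [l, r] intersect iff each starts before the other ends
    return a[0] <= b[1] and b[0] <= a[1]

def claw_number(intervals):
    """Claw number psi(G) of the interval graph defined by `intervals`:
    the largest m such that some vertex has m pairwise non-adjacent
    neighbours, i.e. an induced star S(m)."""
    best = 0
    for idx, v in enumerate(intervals):
        nbrs = [u for j, u in enumerate(intervals) if j != idx and overlaps(u, v)]
        # maximum independent set among intervals: greedy by right endpoint
        nbrs.sort(key=lambda iv: iv[1])
        count, last_end = 0, -math.inf
        for l, r in nbrs:
            if l > last_end:
                count, last_end = count + 1, r
        best = max(best, count)
    return best

def cubicity_bounds(intervals):
    """The abstract's bounds: ceil(log2 psi) <= cub(G) <= ceil(log2 psi) + 2."""
    psi = claw_number(intervals)
    lo = math.ceil(math.log2(psi)) if psi > 0 else 0
    return lo, lo + 2
```

For example, the star S(4) given by a center [0, 10] and leaves [0, 1], [2, 3], [4, 5], [6, 7] has claw number 4, so the bounds are (2, 4).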
Abstract:
The modern subject is what we can call a self-subjecting individual: someone in whose inner reality a more permanent governability has been implanted, a governability that works inside the agent. Michel Foucault's genealogy of the modern subject is the history of its constitution by power practices. By a flight of imagination, suppose that this history is not an evolving social structure or cultural phenomenon, but one of those insects (a moth, say) whose life cycle consists of three stages or moments: crawling larva, encapsulated pupa, and flying adult. Foucault's history of power practices presents the same kind of miracle of total metamorphosis. The main forces in the general field of power can be apprehended through a generalisation of three rationalities functioning side by side in the plurality of different practices of power: domination, normalisation and the law. Domination is a force functioning by the rationality of reason of state: the state's essence is power, power is firm domination over people, and people are the state's resource by which the state's strength is measured. Normalisation is a force that takes hold of people from the inside of society: it imposes society's own reality, its empirical verity, as a norm on people through silently working jurisdictional operations that exclude pathological individuals too far from the average of the population as a whole. The law is a counterforce to both domination and normalisation. Accounting for elements of legal practice as omnihistorical is not possible without a view of the general field of power. Without this view, and only in terms of the operations and tactical manoeuvres of the practice of law, nothing of the kind can be seen: the only thing that practice manifests is constant change itself. However, the backdrop of law's tacit dimension, that is, the power relations between law, domination and normalisation, allows one to see more.
In the general field of power, the function of law is exactly to maintain the constant possibility of change. Whereas domination and normalisation would stabilise society, the law makes it move. The European individual has a reality as a problem. What is a problem? A problem is something that allows entry into the field of thought, said Foucault. "To be a problem, it is necessary for a certain number of factors to have made it uncertain, to have made it lose familiarity, or to have provoked a certain number of difficulties around it." Entering the field of thought through problematisations of the European individual (human forms, power and knowledge), one is able to glimpse the historical backgrounds of our present being. These were produced, and then again buried, in intersections between practices of power and games of truth. In the problem of the European individual one has suitable circumstances that bring to light forces that have passed through the individual through the centuries.
Abstract:
Measurements of the electrical resistivity of thin potassium wires at temperatures near 1 K have revealed a minimum in the resistivity as a function of temperature. By proposing that the electrons in these wires have undergone localization, albeit with large localization length, and that inelastic-scattering events destroy the coherence of that state, we can explain both the magnitude and shape of the temperature-dependent resistivity data. Localization of electrons in these wires is to be expected because, due to the high purity of the potassium, the elastic mean free path is comparable to the diameters of the thinnest samples, making the Thouless length l_T (or inelastic diffusion length) much larger than the diameter, so that the wire is effectively one dimensional. The inelastic events effectively break the wire into a series of localized segments, whose resistances can be added to obtain the total resistance of the wire. The ensemble-averaged resistance for all possible segmented wires, weighted with a Poisson distribution of inelastic-scattering lengths along the wire, yields a length dependence for the resistance that is proportional to L^3/l_in(T), provided that l_in(T) <= L, where L is the sample length and l_in(T) is some effective temperature-dependent one-dimensional inelastic-scattering length. A more sophisticated approach using a Poisson distribution in inelastic-scattering times, which takes into account the diffusive motion of the electrons along the wire through the Thouless length, yields a length- and temperature-dependent resistivity proportional to (L/l_T)^4 under appropriate conditions. Inelastic-scattering lifetimes are inferred from the temperature-dependent bulk resistivities (i.e., those of thicker, effectively three-dimensional samples), assuming that a minimum amount of energy must be exchanged for a collision to be effective in destroying the phase coherence of the localized state.
If the dominant inelastic mechanism is electron-electron scattering, then our result, given the appropriate choice of the channel-number parameter, is consistent with the data. If electron-phason scattering were of comparable importance, our results would remain consistent; however, the inelastic-scattering lifetime inferred from bulk resistivity data would then be too short, because the electron-phason mechanism would dominate the inelastic-scattering rate even though the two mechanisms may be of comparable importance for the bulk resistivity. Possible reasons why the electron-phason mechanism might be less effective in thin wires than in bulk are discussed.
Abstract:
Numerous reports from several parts of the world have confirmed that on calm clear nights a minimum in air temperature can occur just above ground, at heights of the order of 1/2 m or less. This phenomenon, first observed by Ramdas & Atmanathan (1932), carries the associated paradox of an apparently unstable layer that sustains itself for several hours, and has not so far been satisfactorily explained. We formulate here a theory that considers energy balance between radiation, conduction and free or forced convection in humid air, with surface temperature, humidity and wind incorporated into an appropriate mathematical model as parameters. A complete numerical solution of the coupled air-soil problem is used to validate an approach that specifies the surface temperature boundary condition through a cooling rate parameter. Utilizing a flux-emissivity scheme for computing radiative transfer, the model is numerically solved for various values of turbulent friction velocity. It is shown that a lifted minimum is predicted by the model for values of ground emissivity not too close to unity, and for sufficiently low surface cooling rates and eddy transport. Agreement with observation for reasonable values of the parameters is demonstrated. A heuristic argument is offered to show that radiation substantially increases the critical Rayleigh number for convection, thus circumventing or weakening Rayleigh-Benard instability. The model highlights the key role played by two parameters generally ignored in explanations of the phenomenon, namely surface emissivity and soil thermal conductivity, and shows that it is unnecessary to invoke the presence of such particulate constituents as haze to produce a lifted minimum.
Abstract:
We propose and develop here a phenomenological Ginzburg-Landau-like theory of cuprate high-temperature superconductivity. The free energy of a cuprate superconductor is expressed as a functional F of the complex spin-singlet pair amplitude psi(ij), equivalently psi(m) = Delta(m) exp(i phi(m)), where i and j are nearest-neighbor sites of the square planar Cu lattice in which the superconductivity is believed to primarily reside, and m labels the site located at the center of the bond between i and j. The system is modeled as a weakly coupled stack of such planes. We hypothesize a simple form F[Delta, phi] = Sigma(m) [A Delta(m)^2 + (B/2) Delta(m)^4] + C Sigma(<mn>) Delta(m) Delta(n) cos(phi(m) - phi(n)) for the functional, where m and n are nearest-neighbor sites on the bond-center lattice. This form is analogous to the original continuum Ginzburg-Landau free-energy functional; the coefficients A, B, and C are determined from comparison with experiments. A combination of analytic approximations, numerical minimization, and Monte Carlo simulations is used to work out a number of consequences of the proposed functional for specific choices of A, B, and C as functions of hole density x and temperature T. There can be a rapid crossover of
Abstract:
This paper deals with the use of Stern theory as applied to a clay-water electrolyte system, which is more realistic for understanding the force system at the micro level than the Gouy-Chapman theory. The influence of the Stern layer on the potential-distance relationship has been presented quantitatively for certain specified clay-water systems, and the results are compared with the Gouy-Chapman model. A detailed parametric study concerning the number of adsorption spots on the clay platelet, the thickness of the Stern layer, the specific adsorption potential and the value of the dielectric constant of the pore fluid in the Stern layer was carried out. The study shows that the potential obtained at any distance using the Stern theory is higher than that obtained by the Gouy-Chapman theory. The hydrated size of the ion is found to have a significant influence on the potential-distance relationship for a given clay, pore fluid characteristics and valence of the exchangeable ion.
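As a point of reference for the diffuse (Gouy-Chapman) part of the double layer, the linearized Debye-Hueckel form of the potential decay is easy to evaluate. The sketch below (Python) uses that textbook form only; it does not implement the Stern-layer corrections studied in the paper, and the parameter values in the usage example are generic assumptions:

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B      = 1.380649e-23      # Boltzmann constant, J/K
EPS0     = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_length(n0, z, eps_r, T=298.15):
    """Debye length 1/kappa for a symmetric z:z electrolyte at ion
    number density n0 (m^-3) in a solvent of relative permittivity eps_r."""
    kappa_sq = 2.0 * n0 * (z * E_CHARGE) ** 2 / (eps_r * EPS0 * K_B * T)
    return 1.0 / math.sqrt(kappa_sq)

def gouy_chapman_potential(psi0, x, n0, z, eps_r, T=298.15):
    """Linearized diffuse-layer potential psi(x) = psi0 * exp(-kappa * x)
    at distance x (m) from the charged surface at potential psi0 (V)."""
    return psi0 * math.exp(-x / debye_length(n0, z, eps_r, T))
```

For a 1 mM 1:1 aqueous electrolyte (n0 about 6.0e23 m^-3, eps_r about 78.5) this gives a Debye length of roughly 9.6 nm, and the potential decays monotonically with distance; the paper's point is that the Stern correction shifts this profile upward at every distance.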
Abstract:
The problem of electromagnetic wave propagation in a rectangular waveguide containing a thick iris is considered and solved completely by reducing it to two suitable integral equations, one of the first kind and the other of the second kind. These integral equations are solved approximately, using truncated Fourier series for the unknown functions. The reflection coefficient is computed numerically from the two integral-equation approaches, and almost the same numerical results are obtained. It is also depicted graphically against the wave number and compared with thin-iris results, which are computed using complementary formulations coupled with Galerkin approximations. While the reflection coefficient for a thin iris steadily increases with the wave number, for a thick iris it fluctuates and zero reflection occurs at certain wave numbers. The number of zeros of the reflection coefficient for a thick iris increases with the thickness. Thus a thick iris becomes completely transparent for some discrete wave numbers. This phenomenon may be significant in the modelling of rectangular waveguides.
Abstract:
The particle and fluid velocity fluctuations in a turbulent gas-particle suspension are studied experimentally using two-dimensional particle image velocimetry with the objective of comparing the experiments with the predictions of fluctuating force simulations. Since the fluctuating force simulations employ force distributions which do not incorporate the modification of fluid turbulence due to the particles, it is of importance to quantify the turbulence modification in the experiments. For experiments carried out at a low volume fraction of 9.15 x 10^-5 (mass loading 0.19), where the viscous relaxation time is small compared with the time between collisions, it is found that the gas-phase turbulence is not significantly modified by the presence of particles. Owing to this, quantitative agreement is obtained between the results of experiments and fluctuating force simulations for the mean velocity and the root mean square of the fluctuating velocity, provided that the polydispersity in the particle size is incorporated in the simulations. This is because the polydispersity results in a variation in the terminal velocity of the particles which could induce collisions and generate fluctuations; this mechanism is absent if all of the particles are of equal size. It is found that there is some variation in the particle mean velocity very close to the wall depending on the wall-collision model used in the simulations, and agreement with experiments is obtained only when the tangential wall-particle coefficient of restitution is 0.7. The mean particle velocity is in quantitative agreement for locations more than 10 wall units from the wall of the channel. However, there are systematic differences between the simulations and experiments for the particle concentrations, possibly due to inadequate control over the particle feeding at the entrance.
The particle velocity distributions are compared both at the centre of the channel and near the wall, and the shape of the distribution function near the wall obtained in experiments is accurately predicted by the simulations. At the centre, there is some discrepancy between simulations and experiment for the distribution of the fluctuating velocity in the flow direction, where the simulations predict a bi-modal distribution whereas only a single maximum is observed in the experiments, although both distributions are skewed towards negative fluctuating velocities. At a much higher particle mass loading of 1.7, where the time between collisions is smaller than the viscous relaxation time, there is a significant increase in the turbulent velocity fluctuations, by roughly 1-2 orders of magnitude. Therefore, it becomes necessary to incorporate the modified fluid-phase intensity in the fluctuating force simulation; with this modification, the mean and mean-square fluctuating velocities are within 20-30% of the experimental values.
Abstract:
This article presents the buckling analysis of orthotropic nanoplates such as graphene using the two-variable refined plate theory and nonlocal small-scale effects. The two-variable refined plate theory accounts for transverse shear effects and a parabolic distribution of the transverse shear strains through the thickness of the plate, hence it is unnecessary to use shear correction factors. Nonlocal governing equations of motion for the monolayer graphene are derived from the principle of virtual displacements. The closed-form solution for the buckling load of a simply supported rectangular orthotropic nanoplate subjected to in-plane loading has been obtained using Navier's method. Numerical results obtained by the present theory are compared with first-order shear deformation theory for various shear correction factors. It has been proven that the nondimensional buckling load of the orthotropic nanoplate is always smaller than that of the isotropic nanoplate. It is also shown that small-scale effects contribute significantly to the mechanical behavior of orthotropic graphene sheets and cannot be neglected. Further, the buckling load decreases as the nonlocal scale parameter increases. The effects of the mode number, compression ratio and aspect ratio on the buckling load of the orthotropic nanoplate are also captured and discussed in detail. The results presented in this work may provide useful guidance for the design and development of orthotropic graphene-based nanodevices that make use of the buckling properties of orthotropic nanoplates.
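As a rough numerical illustration of why the buckling load falls as the nonlocal parameter grows, the sketch below combines the classical Navier-type buckling load of a simply supported isotropic Kirchhoff plate with a commonly quoted Eringen-type nonlocal scaling factor. The exact orthotropic, refined-plate expressions of the article are not reproduced here; both formulas below are textbook/assumed forms, not the paper's:

```python
import math

def local_buckling_load(D, a, b, m=1, n=1, k=1.0):
    """Classical critical load of a simply supported isotropic Kirchhoff
    plate (flexural rigidity D, sides a x b) under in-plane loads
    N_x = N, N_y = k*N, for buckling mode (m, n):
        N = D * (am + bn)^2 / (am + k*bn),
    with am = (m*pi/a)^2 and bn = (n*pi/b)^2 (standard plate-theory result)."""
    am, bn = (m * math.pi / a) ** 2, (n * math.pi / b) ** 2
    return D * (am + bn) ** 2 / (am + k * bn)

def nonlocal_buckling_load(D, a, b, mu, m=1, n=1, k=1.0):
    """Eringen-type nonlocal rescaling (assumed form): the local load is
    divided by 1 + mu*(am + bn), where mu is the nonlocal scale parameter,
    so the load decreases monotonically as mu increases."""
    am, bn = (m * math.pi / a) ** 2, (n * math.pi / b) ** 2
    return local_buckling_load(D, a, b, m, n, k) / (1.0 + mu * (am + bn))
```

With mu = 0 the two agree, and any mu > 0 lowers the load, which is the qualitative trend reported in the abstract.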
Abstract:
An attempt is made to study the two-dimensional (2D) effective electron mass (EEM) in quantum wells (QWs), inversion layers (ILs) and NIPI superlattices of Kane-type semiconductors in the presence of strong external photoexcitation, on the basis of newly formulated electron dispersion laws within the framework of the k.p formalism. It has been found, taking InAs and InSb as examples, that the EEM in QWs, ILs and superlattices increases with increasing concentration, light intensity and wavelength of the incident light waves, respectively, and the numerical magnitudes in each case are band-structure dependent. The EEM in ILs is quantum-number dependent, exhibiting quantum jumps for specified values of the surface electric field; in NIPI superlattices, the EEM is a function of the Fermi energy and the subband index characterizing such 2D structures. The appearance of the humps in the respective curves is due to the redistribution of the electrons among the quantized energy levels when the quantum number corresponding to the highest occupied level changes from one fixed value to another. Although the EEM varies in various manners with all the variables, as evident from all the curves, the rates of variation depend entirely on the specific dispersion relation of the particular 2D structure. Under certain limiting conditions, all the results derived in this paper reduce to well-known formulas for the EEM and the electron statistics in the absence of external photoexcitation, thus confirming the compatibility test. The results of this paper find three applications in the field of microstructures. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
High temperature superconductivity in the cuprates remains one of the most widely investigated, constantly surprising and poorly understood phenomena in physics. Here, we describe briefly a new phenomenological theory inspired by the celebrated description of superconductivity due to Ginzburg and Landau and believed to describe its essence. This posits a free energy functional for the superconductor in terms of a complex order parameter characterizing it. We propose that there is, for superconducting cuprates, a similar functional of the complex, in-plane, nearest-neighbor spin-singlet bond (or Cooper) pair amplitude psi(ij). Further, we suggest that a crucial part of it is a (short range) positive interaction between nearest-neighbor bond pairs, of strength J'. Such an interaction leads to nonzero long wavelength phase stiffness or superconductive long range order, with the observed d-wave symmetry, below a temperature T_c ~ zJ', where z is the number of nearest neighbors; d-wave superconductivity is thus an emergent, collective consequence. Using the functional, we calculate a large range of properties, e.g., the pseudogap transition temperature T* as a function of hole doping x, the transition curve T_c(x), the superfluid stiffness rho_s(x, T), the specific heat (without and with a magnetic field) due to the fluctuating pair degrees of freedom, and the zero temperature vortex structure. We find remarkable agreement with experiment. We also calculate the self-energy of electrons hopping on the square cuprate lattice and coupled to electrons of nearly opposite momenta via inevitable long wavelength Cooper pair fluctuations formed of these electrons.
The ensuing results for the electron spectral density are successfully compared with recent experimental results for angle-resolved photoemission spectroscopy (ARPES), and comprehensively explain strange features such as the temperature-dependent Fermi arcs above T_c and the "bending" of the superconducting gap below T_c.
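A minimal Monte Carlo treatment of a lattice functional of the form F[Delta, phi] = Sigma_m [A Delta_m^2 + (B/2) Delta_m^4] + C Sigma_<mn> Delta_m Delta_n cos(phi_m - phi_n), the form hypothesized in these two abstracts, can be sketched as follows. This is toy Python: a plain periodic square lattice stands in for the bond-center lattice, and the move sizes, sign conventions and parameter values are our assumptions, not the authors':

```python
import math, random

def total_free_energy(delta, phi, A, B, C):
    """F[Delta, phi] = Sigma_m [A*Delta_m^2 + (B/2)*Delta_m^4]
    + C * Sigma_<mn> Delta_m*Delta_n*cos(phi_m - phi_n), evaluated on an
    L x L periodic square lattice (a toy stand-in for the bond-center
    lattice of the functional above)."""
    L = len(delta)
    F = 0.0
    for i in range(L):
        for j in range(L):
            d = delta[i][j]
            F += A * d * d + 0.5 * B * d ** 4
            for di, dj in ((1, 0), (0, 1)):  # count each nn bond once
                ni, nj = (i + di) % L, (j + dj) % L
                F += C * d * delta[ni][nj] * math.cos(phi[i][j] - phi[ni][nj])
    return F

def metropolis_sweep(delta, phi, A, B, C, T, rng=random):
    """One Metropolis sweep over the (Delta_m, phi_m) fields.  Move sizes
    are arbitrary, and clipping Delta at zero makes this a rough sketch
    rather than an exactly detailed-balanced sampler."""
    L = len(delta)
    for i in range(L):
        for j in range(L):
            old_d, old_p = delta[i][j], phi[i][j]
            F0 = total_free_energy(delta, phi, A, B, C)  # O(L^2); fine for a toy
            delta[i][j] = max(0.0, old_d + rng.uniform(-0.1, 0.1))
            phi[i][j] = old_p + rng.uniform(-0.5, 0.5)
            dF = total_free_energy(delta, phi, A, B, C) - F0
            if dF > 0.0 and rng.random() >= math.exp(-dF / T):
                delta[i][j], phi[i][j] = old_d, old_p  # reject the move
```

With A < 0, B > 0 and C < 0 (so that uniform amplitudes and aligned phases are favored), repeated low-temperature sweeps relax toward a phase-ordered configuration; in a study like the one described, T_c would be located from such runs by monitoring the long-wavelength phase stiffness.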
Abstract:
The van der Waals and Platteuw (vdWP) theory has been successfully used to model the thermodynamics of gas hydrates. However, earlier studies have shown that this could be due to the presence of a large number of adjustable parameters whose values are obtained through regression with experimental data. To test this assertion, we carry out a systematic and rigorous study of the performance of various models of vdWP theory that have been proposed over the years. The hydrate phase equilibrium data used for this study are obtained from Monte Carlo molecular simulations of methane hydrates. The parameters of the vdWP theory are regressed from this equilibrium data and compared with their true values obtained directly from simulations. This comparison reveals that (i) methane-water interactions beyond the first cage and methane-methane interactions make a significant contribution to the partition function and thus cannot be neglected, (ii) rigorous Monte Carlo integration should be used to evaluate the Langmuir constant instead of the spherical smoothed cell approximation, (iii) the parameter values describing the methane-water interactions cannot be correctly regressed from the equilibrium data using the vdWP theory in its present form, (iv) the regressed empty hydrate property values closely match their true values irrespective of the level of rigor in the theory, and (v) the flexibility of the water lattice forming the hydrate phase needs to be incorporated in the vdWP theory. Since methane is among the simplest of hydrate-forming molecules, the conclusions from this study should also hold true for more complicated hydrate guest molecules.
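Point (ii) above, Monte Carlo integration of the Langmuir constant over the full cage volume rather than the spherical smoothed-cell approximation, can be sketched as follows. This is illustrative Python only: the guest-cage potential passed in is a toy placeholder, not the methane-water interaction of the study, and the sampling region is taken to be a sphere for simplicity:

```python
import math, random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def langmuir_constant_mc(w, R, T, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the Langmuir constant
        C(T) = (1/kT) * Integral_cage exp(-w(x, y, z)/kT) d^3r,
    sampling points uniformly in a sphere of radius R around the cage
    center.  `w` is the guest-cage potential in joules (a toy stand-in
    here for the real cage potential)."""
    rng = rng or random.Random(0)
    beta = 1.0 / (K_B * T)
    volume = 4.0 / 3.0 * math.pi * R ** 3
    acc, n = 0.0, 0
    while n < n_samples:
        x, y, z = (rng.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:  # rejection-sample the sphere
            acc += math.exp(-beta * w(x, y, z))
            n += 1
    return beta * volume * acc / n
```

For a spherically symmetric potential the estimate should reproduce the one-dimensional radial integral used by the smoothed-cell picture; the point of the full 3D integration is that it stays correct when the potential is not spherically symmetric.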
Abstract:
The rainbow connection number of a connected graph is the minimum number of colors needed to color its edges so that every pair of its vertices is connected by at least one path in which no two edges are colored the same. In this article we show that for every connected graph on n vertices with minimum degree delta, the rainbow connection number is upper bounded by 3n/(delta + 1) + 3. This solves an open problem from Schiermeyer (Combinatorial Algorithms, Springer, Berlin/Heidelberg, 2009, pp. 432-437), improving the previously best known bound of 20n/delta (J Graph Theory 63 (2010), 185-191). This bound is tight up to additive factors by a construction mentioned in Caro et al. (Electr J Combin 15(R57) (2008), 1). As an intermediate step we obtain an upper bound of 3n/(delta + 1) - 2 on the size of a connected two-step dominating set in a connected graph of order n and minimum degree delta. This bound is tight up to an additive constant of 2, and the result may be of independent interest. We also show that for every connected graph G with minimum degree at least 2, the rainbow connection number rc(G) is upper bounded by gamma_c(G) + 2, where gamma_c(G) is the connected domination number of G. Bounds of the form diameter(G) <= rc(G) <= diameter(G) + c, 1 <= c <= 4, for many special graph classes follow as easy corollaries from this result. This includes interval graphs, asteroidal triple-free graphs, circular arc graphs, threshold graphs, and chain graphs, all with minimum degree delta at least 2 and connected. We also show that every bridgeless chordal graph G has rc(G) <= 3 radius(G). In most of these cases, we also demonstrate the tightness of the bounds.
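Rainbow connectivity itself is straightforward to verify by brute force on small graphs, which is handy for checking candidate colorings against bounds like those above. A small exponential-time checker (Python; illustration only, names are ours):

```python
from itertools import combinations

def is_rainbow_connected(n, edges, color):
    """Check whether every pair of vertices of the graph on {0, ..., n-1}
    with edge list `edges` is joined by a path whose edges all receive
    distinct colors under `color` (a dict: frozenset edge -> color).
    Brute-force path search; only suitable for small graphs."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def rainbow_path(u, t, used_colors, visited):
        # DFS over simple paths whose edge colors are pairwise distinct
        if u == t:
            return True
        for w in adj[u]:
            c = color[frozenset((u, w))]
            if w not in visited and c not in used_colors:
                if rainbow_path(w, t, used_colors | {c}, visited | {w}):
                    return True
        return False

    return all(rainbow_path(s, t, frozenset(), {s})
               for s, t in combinations(range(n), 2))
```

For the path 0-1-2-3, coloring the three edges with three distinct colors is rainbow connected, while reusing a color on the two end edges is not, matching the fact that rc of a path equals its diameter.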
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
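The hop-distance-versus-Euclidean-distance approximation is easy to probe in simulation: build a random geometric graph and compare BFS hop counts from an anchor with the true distances (Python sketch; the node count and radius below are arbitrary assumptions). One direction is exact by the triangle inequality along the path: a node h hops from the anchor is at Euclidean distance at most h times the connection radius.

```python
import math, random
from collections import deque

def random_geometric_graph(n, radius, rng):
    """n points uniform in the unit square; edge iff distance <= radius."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return pts, adj

def hop_counts(adj, anchor=0):
    """BFS hop distance from the anchor to every node (-1 if unreachable)."""
    h = [-1] * len(adj)
    h[anchor] = 0
    queue = deque([anchor])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if h[v] < 0:
                h[v] = h[u] + 1
                queue.append(v)
    return h
```

Histogramming the true distances of all nodes at a fixed hop count h reproduces the kind of conditional density the analysis above approximates, and in dense deployments that density concentrates, which is what justifies treating hop distance as a proxy for Euclidean distance.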