995 results for quasi-arithmetic means


Relevance: 30.00%

Abstract:

We show here a 2^(Ω(√d·log N)) size lower bound for homogeneous depth-four arithmetic formulas. That is, we give an explicit family of polynomials of degree d on N variables (with N = d^3 in our case) with 0/1-coefficients such that for any representation of a polynomial f in this family of the form f = Σ_i Π_j Q_ij, where the Q_ij are homogeneous polynomials (recall that a polynomial is said to be homogeneous if all its monomials have the same degree), it must hold that Σ_{i,j} (number of monomials of Q_ij) ≥ 2^(Ω(√d·log N)). The above-mentioned family, which we refer to as the Nisan-Wigderson design-based family of polynomials, is in the complexity class VNP. Our work builds on the recent lower bound results [1], [2], [3], [4], [5] and yields an improved quantitative bound compared to the quasi-polynomial lower bound of [6] and the N^(Ω(log log N)) lower bound in the independent work of [7].

Relevance: 30.00%

Abstract:

Potential energy can be approximated by "pair-functional" potentials, which are composed of pair potentials and an embedding energy. The pair potentials are grouped according to the discrete directions of atomic bonds, so that each group is represented by an orientational component; a second kind of component, the volumetric one, is derived from the embedding energy. Damage and fracture are, at the most fundamental level, the changing and breaking of atomic bonds, and they are reflected in the changing properties of these components. The material is therefore treated as an assembly of components, and its constitutive equations are formed by assembling the response functions of the two kinds of components. This material model is referred to as the component assembling model. Theoretical analysis and numerical computation indicate that the proposed model reproduces some results satisfactorily, with the advantages of physical explicitness and intrinsically induced anisotropy.

Relevance: 30.00%

Abstract:

The damage-induced anisotropy of quasi-brittle materials is investigated in this study using the component assembling model. Damage-induced anisotropy is a significant characteristic of quasi-brittle materials, coupled with nonlinearity and strain softening, and formulating such complicated phenomena has remained a difficult problem. The present model is based on the component assembling concept, in which the constitutive equations of a material are formed by assembling the response functions of two kinds of components. These two kinds of components, orientational and volumetric, are abstracted from pair-functional potentials and the Cauchy-Born rule. Macroscopic damage of quasi-brittle materials is reflected by the stiffness changes of the orientational components, which represent grouped atomic bonds along discrete directions, while anisotropic behaviour is captured by the naturally directional property of the orientational components. The initial damage surface in the axial-shear stress space is calculated and analyzed. Furthermore, the anisotropic quasi-brittle damage behaviour of concrete under uniaxial, proportional, and nonproportional combined loading is analyzed to elucidate the utility and limitations of the present damage model. The numerical results show good agreement with experimental data and with the predictions of classical anisotropic damage models.


Relevance: 30.00%

Abstract:

We investigate the ground-state energy spectrum and the quasi-particle excitation spectrum of hard-core bosons, which behave much like spinless noninteracting fermions, in optical lattices by means of a perturbation expansion and the Bogoliubov approach. The results show that the energy spectrum has a single-band structure, with lower energy near zero momentum; the excitation spectrum exhibits a corresponding energy gap, and the system is in a Mott-insulating state in the Tonks limit. The analytic energy spectrum agrees well with that calculated from the Green's function in the strong-correlation limit.
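The fermion-like behaviour mentioned above can be made concrete in one dimension, where hard-core bosons map onto spinless free fermions (via the Jordan-Wigner transformation), so the ground-state energy follows from filling the lowest levels of a single tight-binding band. The following sketch is illustrative only; the hopping amplitude J, lattice size, and filling are assumptions, and boundary-condition subtleties (periodic vs. anti-periodic sectors) are ignored.

```python
import numpy as np

def hardcore_boson_ground_energy(L, N, J=1.0):
    """Ground-state energy of N hard-core bosons on an L-site 1D ring.

    Uses the free-fermion mapping: fill the N lowest levels of the
    single-band tight-binding dispersion e(k) = -2 J cos(k).
    """
    k = 2.0 * np.pi * np.arange(L) / L   # allowed lattice momenta
    eps = -2.0 * J * np.cos(k)           # single-band dispersion, minimum at k = 0
    return np.sort(eps)[:N].sum()        # occupy the N lowest levels

# Half filling on 100 sites; E/L approaches -2J/pi in the large-L limit.
E = hardcore_boson_ground_energy(L=100, N=50)
```

The band minimum at zero momentum and the single-band structure match the qualitative picture in the abstract; the Mott gap at integer filling is not captured by this one-band toy.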

Relevance: 30.00%

Abstract:

Mavron, Vassili; McDonough, T.P.; Shrikhande, M.S. (2003), 'Quasi-symmetric designs with good blocks and intersection number one', Designs, Codes and Cryptography 28(2), pp. 147-162.

Relevance: 30.00%

Abstract:

The interaction between supernova ejecta and circumstellar matter, arising from previous episodes of mass loss, provides us with a means of constraining the progenitors of supernovae. Radio observations of a number of supernovae show quasi-periodic deviations from a strict power-law decline at late times. Although several possibilities have been put forward to explain these modulations, no single explanation has proven entirely satisfactory. Here we suggest that luminous blue variables undergoing S Doradus-type variations give rise to enhanced phases of mass loss that are imprinted on the immediate environment of the exploding star as a series of density enhancements. The variations in mass loss arise from changes in the ionization balance of Fe, the dominant ion that drives the wind. With this idea, we find that both the recurrence timescale of the variability and the amplitude of the modulations are in line with the observations. Our scenario thus provides a natural, single-star explanation for the observed behaviour that is, in fact, expected on theoretical grounds.

Relevance: 30.00%

Abstract:

Recent efforts in the finite element modelling of delamination have concentrated on the development of cohesive interface elements. These are characterised by a bilinear constitutive law, with an initial high positive stiffness until a threshold stress level is reached, followed by a negative tangent stiffness representing softening (or damage evolution). Complete decohesion occurs when the work done per unit area of crack surface equals a critical strain energy release rate. It is difficult to achieve a stable, oscillation-free solution beyond the onset of damage using standard implicit quasi-static methods, unless a very refined mesh is used. In the present paper, a new solution strategy is proposed based on a pseudo-transient formulation and demonstrated through the modelling of a double cantilever beam undergoing Mode I delamination. A detailed analysis of the sensitivity to the user-defined parameters is also presented. Comparisons with other published solutions using a quasi-static formulation show that the pseudo-transient formulation gives improved accuracy and oscillation-free results with coarser meshes.
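The bilinear traction-separation law described above can be sketched directly. This is a minimal illustration, not the paper's implementation: the initial stiffness K, peak traction t0, and critical energy release rate Gc below are made-up values, and the law is written so that the area under the curve equals Gc = 0.5 * t0 * delta_f.

```python
def bilinear_traction(delta, K=1e5, t0=30.0, Gc=0.3):
    """Traction for opening displacement delta under a bilinear cohesive law.

    Elastic branch up to t0, then linear softening (negative tangent
    stiffness) to zero traction at complete decohesion, delta_f.
    """
    delta0 = t0 / K              # displacement at damage onset
    delta_f = 2.0 * Gc / t0      # displacement at complete decohesion
    if delta <= delta0:
        return K * delta         # initial high-stiffness elastic branch
    if delta >= delta_f:
        return 0.0               # fully decohered: no traction transferred
    # softening branch: traction decays linearly from t0 to 0
    return t0 * (delta_f - delta) / (delta_f - delta0)
```

The steep softening branch is what makes implicit quasi-static solvers oscillate with coarse meshes, which motivates the pseudo-transient strategy the abstract describes.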

Relevance: 30.00%

Abstract:

Data compression is the computing technique that aims to reduce the size of information in order to minimize the required storage space and to speed up data transmission over bandwidth-limited networks. Several compression techniques, such as LZ77 and its variants, suffer from a problem that we call redundancy caused by the multiplicity of encodings. Multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest case, ME occurs when a compression technique has the possibility, during the encoding process, of coding a symbol in different ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results obtained lead to better compression (a reduction of about 9% in the size of files compressed by Gzip when exploiting ME). Dubé and Beaudoin pointed out that their technique may not perfectly minimize the redundancy caused by ME, because it is built on Huffman coding, which cannot handle codewords of fractional length; that is, it can only generate codewords of integral length. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional length. Furthermore, over recent decades arithmetic codes have attracted many researchers, since they are more powerful and more flexible than Huffman codes.
Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We have addressed this problem through our four (published) contributions, which are presented in this thesis and can be summarized as follows. First, we propose a new technique for adapting Huffman-based bit recycling (HuBR) to arithmetic coding, named arithmetic-coding-based bit recycling (ACBR); it describes the framework and principles of adapting HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be removed by HuBR and ACBR in applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR does so only in very specific cases. Second, the problem with the aforementioned ACBR technique is that it requires arbitrary-precision computations, which demand unlimited (or infinite) resources. To make it practical, we propose a new finite-precision version. The technique thus becomes efficient and applicable on computers with conventional fixed-size registers, and can easily be interfaced with applications that suffer from ME. Third, we propose using HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code. We have shown, theoretically and experimentally, that both techniques yield a significant improvement (less redundancy). In this respect, ACBR outperforms HuBR and covers a wider class of binary sources that can benefit from a plurally parsable dictionary. We also show that ACBR is more flexible than HuBR in practice.
Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. To compare the performance of HuBR and ACBR, the corresponding theoretical results for HuBR and ACBR are presented. The results show that both techniques achieve almost the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
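The multiplicity of encodings (ME) at the heart of this thesis can be shown with a toy example. The simplified LZ77-style decoder below is an assumption for illustration, not Gzip's actual token format: the same string admits several distinct token sequences, and it is exactly this freedom of choice that bit recycling exploits to convey extra information at no cost.

```python
def lz77_decode(tokens):
    """Decode a toy LZ77 stream: strings are literals, tuples are
    (offset, length) back-references into the already-decoded output."""
    out = []
    for tok in tokens:
        if isinstance(tok, str):
            out.append(tok)                 # literal symbol
        else:
            offset, length = tok
            for _ in range(length):
                out.append(out[-offset])    # copy from `offset` back
    return "".join(out)

# Two different valid encodings of the same string "ababab":
enc1 = ["a", "b", (2, 4)]                   # literals a, b, then copy 4 symbols
enc2 = ["a", "b", "a", "b", (2, 2)]         # a redundant alternative parsing
assert lz77_decode(enc1) == lz77_decode(enc2) == "ababab"
```

Whenever the encoder may choose between enc1 and enc2, that choice itself can carry roughly one bit of information; recovering those bits is what HuBR and ACBR formalize.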

Relevance: 30.00%

Abstract:

This project is a quasi-experimental study involving eight classrooms in two senior elementary schools in St. Catharines, Ontario, which received a Project Business Program and were pre- and post-tested to determine the growth of knowledge acquisition in the area of business concepts. Four classrooms received the Project Business treatment while four classrooms acted as a control. The Project Business Program is sponsored by Junior Achievement of Canada; it ran over a twelve-week period, February to May 1981, and is delivered by business consultants who, through Action, Dialogue and Career Exploration, teach children about economics and business-related topics. The consultants were matched with teacher co-ordinators in whose classrooms they taught and with whom they discussed field trips, students, lesson planning, etc. The statistical analysis of pre- and post-test means revealed statistically significant growth in knowledge acquisition on the part of those students who received the Project Business Program, confirming that Project Business makes a difference. A search of the literature appears to advocate economic programs like Project Business, which are broadly based, relevant and process-oriented. This program recommends itself as a model for other areas of co-operative curricular interactions and as a bridge to future trends; as a result, several fruitful areas of research are suggested.

Relevance: 30.00%

Abstract:

A quasi-experimental study was carried out to compare the effect on physical workload of a technological and work-organization intervention among coke-oven workers performing coke extraction in Colombia. Physical workload was measured by heart rate and the relative cardiac cost index in a population of workers exposed (37) and not exposed (66) to a technological intervention. Heart rate was monitored with 7 properly calibrated Polar RS800cx heart rate monitors. Numerical variables were described by the arithmetic mean, standard deviation, and range. One-way analysis of variance was applied to evaluate the differences between group means in resting, mean, and maximum heart rate, relative cardiac cost index, and work energy expenditure. A statistical significance level of α = 0.05 was set a priori. Statistically significant differences were found between the study groups in mean heart rate, maximum heart rate, and relative cardiac cost index. It was concluded that this study validates heart rate as a sensitive variable for measuring physical-workload risk and demonstrates its usefulness in evaluating ergonomic interventions. The study showed that the ergonomic intervention controlled physical workload, with a significant reduction in heart rate in the intervention group.
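The one-way analysis of variance used in the study compares group means by forming the ratio of between-group to within-group mean squares. The sketch below implements the standard F-statistic from scratch; the heart-rate values are invented for illustration and are not the study's data.

```python
def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)                # df_between = k - 1
    ms_within = ss_within / (n - k)                  # df_within  = n - k
    return ms_between / ms_within

# Illustrative mean heart rates (bpm) for intervention vs. control workers:
intervention = [95, 98, 92, 96, 94]
control = [108, 112, 105, 110, 107]
F = one_way_anova_F([intervention, control])
```

A large F relative to the F-distribution with (k-1, n-k) degrees of freedom indicates a difference between group means, which is the criterion the study applies at α = 0.05.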

Relevance: 30.00%

Abstract:

A design tool is developed for the damage-tolerance analysis of composites. The tool can predict the onset and propagation of interlaminar cracks, and can also be used to assess and plan the need to repair or replace components during their service life. The model developed can be used to simulate both static and fatigue loading. The proposed model is a thermodynamically consistent damage model that simulates delamination in composites under variable loading. It is formulated within the framework of damage mechanics, making use of cohesive zone models. A methodology is presented for determining the parameters of the constitutive model that allows coarser finite element meshes than those typically usable. Finally, the model is also able to simulate delamination produced by fatigue loading.

Relevance: 30.00%

Abstract:

The interannual variability of the stratospheric polar vortex during winter in both hemispheres is observed to correlate strongly with the phase of the quasi-biennial oscillation (QBO) in tropical stratospheric winds. It follows that the lack of a spontaneously generated QBO in most atmospheric general circulation models (AGCMs) adversely affects the nature of polar variability in such models. This study examines QBO–vortex coupling in an AGCM in which a QBO is spontaneously induced by resolved and parameterized waves. The QBO–vortex coupling in the AGCM compares favorably to that seen in reanalysis data [from the 40-yr ECMWF Re-Analysis (ERA-40)], provided that careful attention is given to the definition of QBO phase. A phase angle representation of the QBO is employed that is based on the two leading empirical orthogonal functions of equatorial zonal wind vertical profiles. This yields a QBO phase that serves as a proxy for the vertical structure of equatorial winds over the whole depth of the stratosphere and thus provides a means of subsampling the data to select QBO phases with similar vertical profiles of equatorial zonal wind. Using this subsampling, it is found that the QBO phase that induces the strongest polar vortex response in early winter differs from that which induces the strongest late-winter vortex response. This is true in both hemispheres and for both the AGCM and ERA-40. It follows that the strength and timing of QBO influence on the vortex may be affected by the partial seasonal synchronization of QBO phase transitions that occurs both in observations and in the model. This provides a mechanism by which changes in the strength of QBO–vortex correlations may exhibit variability on decadal time scales. In the model, such behavior occurs in the absence of external forcings or interannual variations in sea surface temperatures.
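The phase-angle construction described above can be sketched numerically: project equatorial zonal-wind profiles onto their two leading empirical orthogonal functions (EOFs) and define the QBO phase as the angle between the two principal components. The code below is an illustration on synthetic, idealized QBO-like data (a downward-propagating oscillation plus noise), not ERA-40 or the AGCM output; the 28-month period and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_levels = 600, 20
t = np.arange(n_time)
z = np.linspace(0.0, 2.0 * np.pi, n_levels)

# Idealized QBO: an oscillation whose phase descends with height, plus noise.
winds = (np.cos(2.0 * np.pi * t[:, None] / 28.0 - z[None, :])
         + 0.1 * rng.standard_normal((n_time, n_levels)))

anom = winds - winds.mean(axis=0)           # remove the time mean per level
cov = anom.T @ anom / (n_time - 1)          # level-by-level covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
eofs = eigvecs[:, ::-1][:, :2]              # two leading EOFs (spatial patterns)
pcs = anom @ eofs                           # principal-component time series
phase = np.arctan2(pcs[:, 1], pcs[:, 0])    # QBO phase angle in (-pi, pi]
```

Because the two leading EOFs capture the quadrature pair of a downward-propagating wave, the phase angle tracks the vertical structure of the equatorial winds over the whole depth of the domain, which is what permits the subsampling by similar wind profiles described in the abstract.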

Relevance: 30.00%

Abstract:

A method is presented for solving a quasi-geostrophic two-layer model that includes the variation of static stability. The divergent part of the wind is incorporated by means of an iterative procedure. The procedure is rather fast, and the computation time is only 60-70% longer than for the usual two-layer model. The method of solution is justified by the conservation of the difference between the gross static stability and the kinetic energy. To eliminate side-boundary conditions, the experiments were performed on a zonal channel model. The investigation falls mainly into three parts. The first part (section 5) contains a discussion of the significance of some physically inconsistent approximations. It is shown that physical inconsistencies are rather serious, and for the inconsistent models studied, the total kinetic energy increased faster than the gross static stability. In the next part (section 6) we study the effect of a Jacobian difference operator which conserves the total kinetic energy. The use of this operator in two-layer models gives a slight improvement but probably has no practical use in short-period forecasts. It is also shown that the energy-conservative operator changes the wave speed erroneously if the wavenumber or the grid length is large in the meridional direction. In the final part (section 7) we investigate the behaviour of baroclinic waves for several different initial states and for two energy-consistent models, one with constant and one with variable static stability. According to linear theory, the waves adjust rather rapidly in such a way that the temperature wave lags behind the pressure wave independently of the initial configuration. Thus, both models give rise to baroclinic development even if the initial state is quasi-barotropic.
The effect of the variation of static stability is very small; qualitative differences in the development are observed only during the first 12 hours. For an amplifying wave we obtain a stabilization over the troughs and a destabilization over the ridges.

Relevance: 30.00%

Abstract:

We investigate the problem of averaging values on lattices, and in particular on discrete product lattices. This problem arises in image processing when several color values given in RGB, HSL, or another coding scheme need to be combined. We show how the arithmetic mean and the median can be constructed by minimizing appropriate penalties. We also discuss which of these coincide with the Cartesian product of the standard mean and median.
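The penalty-based construction mentioned above is easy to illustrate on real numbers: the arithmetic mean minimizes the sum of squared deviations, and the median minimizes the sum of absolute deviations. The brute-force grid minimizer below is a sketch for illustration only (the data and grid resolution are assumptions); on product lattices the same penalties would be minimized componentwise.

```python
def penalty_minimizer(values, penalty):
    """Return the grid point minimizing the total penalty over the data range."""
    lo, hi = min(values), max(values)
    candidates = [lo + (hi - lo) * i / 1000 for i in range(1001)]
    return min(candidates, key=lambda a: sum(penalty(x, a) for x in values))

data = [1.0, 2.0, 2.0, 3.0, 10.0]
# Squared-error penalty recovers (approximately) the arithmetic mean, 3.6:
mean_like = penalty_minimizer(data, lambda x, a: (x - a) ** 2)
# Absolute-error penalty recovers (approximately) the median, 2.0:
median_like = penalty_minimizer(data, lambda x, a: abs(x - a))
```

Note how the outlier 10.0 pulls the mean-like value well above the median-like one, which is the usual argument for the median in color averaging.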