946 results for Clique irreducible graphs
Abstract:
The magnetic structures and the magnetic phase transitions in the Mn-doped orthoferrite TbFeO3 studied using neutron powder diffraction are reported. Magnetic phase transitions are identified at T_N(Fe/Mn) ≈ 295 K, where a paramagnetic-to-antiferromagnetic transition occurs in the Fe/Mn sublattice; T_SR(Fe/Mn) ≈ 26 K, where a spin-reorientation transition occurs in the Fe/Mn sublattice; and T_N(R) ≈ 2 K, where Tb ordering starts to manifest. At 295 K, the magnetic structure of the Fe/Mn sublattice in TbFe0.5Mn0.5O3 belongs to the irreducible representation Γ4 (GxAyFz or Pb'n'm). A mixed-domain structure of (Γ1 + Γ4) is found at 250 K, which remains stable down to the spin-reorientation transition at T_SR(Fe/Mn) ≈ 26 K. Below 26 K and above 250 K, the majority phase (>80%) is that of Γ4. Below 10 K, the high-temperature phase Γ4 remains stable down to 2 K. At 2 K, Tb develops a magnetic moment of 0.6(2) μB/f.u. and orders long-range in Fz, compatible with the Γ4 representation. Our study confirms the magnetic phase transitions already reported in a single crystal of TbFe0.5Mn0.5O3 and, in addition, reveals the presence of mixed magnetic domains. The ratio of these magnetic domains as a function of temperature is estimated from Rietveld refinement of the neutron diffraction data. Indications of short-range magnetic correlations are present in the low-Q region of the neutron diffraction patterns at T < T_SR(Fe/Mn). These results should motivate further experimental work devoted to measuring the electric polarization and magnetocapacitance of TbFe0.5Mn0.5O3. (C) 2016 AIP Publishing LLC.
Abstract:
The effectiveness of Oliver & Pharr's (O&P's) method, Cheng & Cheng's (C&C's) method, and a new method developed by our group for estimating Young's modulus and hardness from instrumented indentation was evaluated for the case of a yield stress to reduced Young's modulus ratio σy/Er ≥ 4.55 × 10⁻⁴ and a hardening coefficient n ≤ 0.45. The dimensional theorem and finite element simulations were applied to produce reference results for this purpose. Both O&P's and C&C's methods overestimated the Young's modulus under some conditions, whereas the error could be controlled within ±16% if the formulation was modified with appropriate correction functions. No similar modification was introduced into our method for determining Young's modulus, yet the maximum error of its results was around ±13%. The errors of the hardness values obtained from all three methods could be even larger and were irreducible with any correction scheme. It is therefore suggested that when hardness values of different materials are of concern, relative comparison of data obtained from a single standard measurement technique would be more practically useful. It is noted that the ranges of error derived from the analysis could differ if different ranges of the material parameters σy/Er and n were considered.
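For reference, methods of the Oliver & Pharr type build on two textbook relations: the reduced modulus Er = (√π / 2β) · S/√A obtained from the unloading stiffness S and projected contact area A, and the hardness H = Pmax/A. The sketch below shows this arithmetic; all input values and the diamond-indenter constants are illustrative assumptions, not numbers from the paper.

```python
import math

def reduced_modulus(S, A, beta=1.05):
    """Reduced modulus E_r from unloading stiffness S (N/m) and
    projected contact area A (m^2); beta is a tip-geometry factor."""
    return math.sqrt(math.pi) / (2.0 * beta) * S / math.sqrt(A)

def youngs_modulus(E_r, nu, E_i=1141e9, nu_i=0.07):
    """Specimen modulus E solved from 1/E_r = (1-nu^2)/E + (1-nu_i^2)/E_i,
    assuming a diamond indenter (E_i ~ 1141 GPa, nu_i ~ 0.07)."""
    return (1.0 - nu**2) / (1.0 / E_r - (1.0 - nu_i**2) / E_i)

def hardness(P_max, A):
    """Indentation hardness H = P_max / A."""
    return P_max / A

# Illustrative numbers only (hypothetical, not from the paper):
S, A, P_max = 5.0e5, 1.0e-12, 10e-3   # N/m, m^2, N
E_r = reduced_modulus(S, A)
print(E_r, youngs_modulus(E_r, nu=0.3), hardness(P_max, A))
```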
Abstract:
The low-density parity-check codes whose performance is closest to the Shannon limit are `Gallager codes' based on irregular graphs. We compare alternative methods for constructing these graphs and present two results. First, we find a `super-Poisson' construction which gives a small improvement in empirical performance over a random construction. Second, whereas Gallager codes normally take N² time to encode, we investigate constructions of regular and irregular Gallager codes that allow more rapid encoding and have smaller memory requirements in the encoder. We find that these `fast encoding' Gallager codes have equally good performance.
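To make the encoding-cost remark concrete: encoding through a dense generator matrix costs O(N²) operations, whereas a parity-check matrix arranged as H = [H1 | T], with T a sparse lower-triangular "staircase", lets the parity bits be filled in by back-substitution in time linear in the number of nonzeros. The sketch below illustrates this generic idea only; it is not the paper's specific constructions, and the toy H1 is dense where a real Gallager code's would be sparse.

```python
import numpy as np

def encode_staircase(H1, msg):
    """Encode msg given H = [H1 | T], where T is the lower-bidiagonal
    'staircase' matrix (1s on the diagonal and first subdiagonal).
    Parity bits follow by back-substitution over GF(2), so the cost is
    linear in the number of nonzeros of H rather than O(N^2)."""
    m = H1.shape[0]
    s = H1 @ msg % 2                 # syndrome contribution of message bits
    p = np.zeros(m, dtype=int)
    p[0] = s[0]
    for i in range(1, m):            # row i: s[i] + p[i-1] + p[i] = 0 (mod 2)
        p[i] = (s[i] + p[i - 1]) % 2
    return np.concatenate([msg, p])

# Toy check of the construction:
rng = np.random.default_rng(0)
m, k = 4, 8
H1 = rng.integers(0, 2, size=(m, k))
T = np.eye(m, dtype=int) + np.eye(m, k=-1, dtype=int)   # staircase
H = np.hstack([H1, T])
c = encode_staircase(H1, rng.integers(0, 2, size=k))
assert not (H @ c % 2).any()         # c is a valid codeword
```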
Abstract:
We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. © 2004 Elsevier B.V.
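In the terminology used in this line of work, a (w, v) near-codeword is a weight-w vector x whose syndrome z = Hx has small weight v; v = 0 makes x a true codeword. A small helper for checking this property (a sketch, not code from the paper):

```python
import numpy as np

def near_codeword_profile(H, x):
    """Return (w, v): the weight of x and the weight of its syndrome
    H @ x over GF(2). Small v > 0 marks a near-codeword, which can
    trap the sum-product decoder; v == 0 means x is a true codeword."""
    return int(x.sum()), int((H @ x % 2).sum())
```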
Abstract:
Michael Behe and William Dembski are two of the leaders of Intelligent Design theory, a proposal that emerged as a response to the evolutionist, anti-finalist models prevalent in certain academic and intellectual circles, especially in the English-speaking world. Behe's speculations rest on the concept of the "irreducibly complex system", understood as an ordered set of parts whose functionality depends strictly on its structural integrity, and whose origin is therefore refractory to gradualist explanations. According to Behe, such systems are present in living beings, which would allow one to infer that they are not the product of blind, random mechanisms but rather the result of design. Dembski, for his part, has approached the problem from a more quantitative perspective, developing a probabilistic algorithm known as the "explanatory filter", which, according to the author, would make it possible to infer scientifically the presence of design in both artificial and natural entities. Going beyond the blanket dismissals issued by neo-Darwinism, we examine these authors' proposal from the philosophical foundations of the Thomist school. In our view, there are some valuable intuitions in their work, which nevertheless tend to go unnoticed because of the scant formality with which they are presented and because of the eminently mechanistic, artifact-centered approach both authors take to the question. It is precisely at making such intuitions explicit that this article aims.
Abstract:
A set of scaling criteria for a polymer-flooding reservoir is derived from the governing equations, which involve gravity and capillary forces; compressibility of water, oil, and rock; non-Newtonian behavior of the polymer solution; and adsorption, dispersion, and diffusion. A numerical approach to quantify the degree of dominance of each dimensionless parameter is proposed. With this approach, the sensitivity factor of each dimensionless parameter is evaluated. The results show that in polymer flooding the sensitivity factors range in order from 10⁻⁵ to 10⁰, and that the dominant dimensionless parameters are generally the ratio of the oil permeability at irreducible water saturation to the water permeability at residual oil saturation, the density and viscosity ratios between water and oil, the reduced initial oleic-phase saturation, and the shear-rate exponent of the polymer solution. It is also revealed that the dominant dimensionless parameters may differ from case to case. The effect of physical variables such as oil viscosity, injection rate, and permeability on the degree of dominance of the dimensionless parameters is analyzed, and the dominant parameters are determined for different cases.
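As an illustration of the kind of dimensionless groups the abstract names, the sketch below computes them from hypothetical fluid and rock properties; every symbol and value here is an assumption for illustration, not the paper's scaling criteria or data.

```python
# Hypothetical end-point properties (all values illustrative)
k_ro_Swi = 0.8     # oil relative permeability at irreducible water saturation
k_rw_Sor = 0.25    # water relative permeability at residual oil saturation
rho_w, rho_o = 1000.0, 900.0    # densities, kg/m^3
mu_w, mu_o = 1.0e-3, 50e-3      # viscosities, Pa.s
S_oi, S_or, S_wi = 0.75, 0.25, 0.2   # initial oil, residual oil, irreducible water
n_shear = 0.6      # shear-rate (power-law) exponent of the polymer solution

perm_ratio = k_ro_Swi / k_rw_Sor       # end-point permeability ratio
density_ratio = rho_w / rho_o          # water/oil density ratio
viscosity_ratio = mu_w / mu_o          # water/oil viscosity ratio
reduced_S_o = (S_oi - S_or) / (1.0 - S_wi - S_or)  # reduced initial oil saturation

print(perm_ratio, density_ratio, viscosity_ratio, reduced_S_o, n_shear)
```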
Abstract:
This manual describes the interactive handling of graphs in the 3D environment provided by the Xglore program (http://sourceforge.net/projects/xglore/). It forms part of the project "Nerthusv2: Base de datos léxica en 3D del inglés antiguo" (a 3D lexical database of Old English), funded by the Ministerio de Ciencia e Innovación (grant no. FFI08-04448/FILO).
Abstract:
This report summarizes municipal use of water in 138 selected municipalities in Florida as of December 1970 and includes the following: (1) a tabulation of water-use data for each listed municipality; (2) a tabulation of chemical analyses of water for each listed municipality; and (3) graphs of pumpage, included when available. Also included are selected recent references relating to the geology, hydrology, and water resources of the areas in which the municipalities are located. (218 page document)
Abstract:
Email addresses of the authors: Edurne Ortiz de Elguea (txintxe1989@holmail.com) and Priscila García (pelukina06@hotmail.com).
Abstract:
The study of complex networks has attracted the attention of the scientific community for many obvious reasons. A vast number of systems, from the brain to ecosystems, the power grid, and the Internet, can be represented as large complex networks, i.e., assemblies of many interacting components with nontrivial topological properties. The links between these components can describe global behaviour such as Internet traffic, electricity supply service, market trends, etc. One of the most relevant topological features of graphs representing these complex systems is community structure, whose study aims to identify the modules and, possibly, their hierarchical organization, using only the information encoded in the graph topology. Deciphering network community structure is not only important for characterizing the graph topologically; it also gives information both on the formation of the network and on its functionality.
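As a concrete illustration of detecting community structure from topology alone, the sketch below applies greedy modularity maximization from networkx to a standard benchmark graph; this is one possible method among many, not one tied to any specific result above.

```python
import networkx as nx
from networkx.algorithms import community

# Zachary's karate club: a classic benchmark with known communities.
G = nx.karate_club_graph()

# Greedy modularity maximization uses only the graph topology.
communities = community.greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")

# Modularity scores how well the partition separates dense modules.
print("modularity:", community.modularity(G, communities))
```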
Abstract:
In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition for eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework, and we present a clausal temporal resolution method for PLTL. Finally, we make use of this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages. The key element in deduction systems for temporal logic is dealing with eventualities and with the hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional handling of information in order to detect cyclic branches that contain unfulfilled eventualities. In traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled by means of inference rules (mainly, invariant-based rules or infinitary rules) that complicate their automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional handling of information in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to carry over the classical correspondence between tableaux and sequents. In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, builds a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. For satisfiable sets of formulas, TTM builds tableaux where each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these freeness properties and is finitary, sound, and complete for PLTL. Therefore, we show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, which started with the seminal paper of M. Fisher, deals with PLTL and requires generating invariants for performing resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants for performing resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. Our resolution method involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence gives rise to a new decision procedure for PLTL. In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property of PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is given up and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in logic programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads as well as in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this degree of expressiveness. Since the tableau method presented in this thesis is able to detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
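For context, the customary inductive (fixpoint) definition of the "until" eventuality, which the thesis's non-customary definition departs from, can be written as:

```latex
\varphi \,\mathcal{U}\, \psi \;\equiv\; \psi \,\vee\, \bigl(\varphi \wedge \bigcirc(\varphi \,\mathcal{U}\, \psi)\bigr)
```

A tableau rule based on this unfolding can postpone ψ forever around a cycle, which is why, as the abstract notes, two-pass methods need a second pass (or extra bookkeeping) to check that eventualities are actually fulfilled; the dual systems above avoid this by building on a different inductive definition.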
Abstract:
This atlas presents information on fish eggs and temperature data collected from broadscale ichthyoplankton surveys conducted off the U.S. northeast coast from 1977 to 1987. Distribution and abundance information is provided for 33 taxa in the form of graphs and contoured egg-density maps by month and survey. Comments are included on interannual and interseasonal trends in spawning intensity. Data on 14 additional but less numerous taxa are provided in tabular form. (PDF file contains 316 pages.)
Abstract:
The rate of growth of tropical tunas has been studied by various investigators using diverse methods. Hayashi (1957) examined methods to determine the age of tunas by interpreting growth patterns on the bony or hard parts, but the results proved unreliable. Moore (1951), Hennemuth (1961), and Davidoff (1963) studied the age and growth of yellowfin tuna by the analysis of size-frequency distributions. Schaefer, Chatwin and Broadhead (1961), and Fink (ms.), estimated the rate of growth of yellowfin tuna from tagging data; their estimates gave a somewhat slower rate of growth than that obtained by the study of length-frequency distributions. For the yellowfin tuna, modal groups representing age groups can be identified and followed for relatively long periods of time in length-frequency graphs. This may not be possible, however, for other tropical tunas whose modal groups may not represent identifiable age groups; this appears to be the case for skipjack tuna (Schaefer, 1962). It is necessary, therefore, to devise a method of estimating the growth rates of such species without identifying the year classes. The technique described in this study, hereafter called the "increment technique", employs the measurement of the change in length per unit of time, with respect to mean body length, without the identification of year classes. This technique is applied here as a method of estimating the growth rate of yellowfin tuna from the entire Eastern Tropical Pacific, and from the Commission's northern statistical areas (Areas 01-04 and 08) as shown in Figure 1. The growth rates of yellowfin tuna from Area 02 (Hennemuth, 1961) and from the northern areas (Davidoff, 1963) have been described by the technique of tracing modal progressions of year classes, hereafter termed the "year class technique". The growth-rate analyses performed by both techniques apply to the segment of the population which is captured by tuna fishing vessels. The results obtained by both methods are compared in this report.
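One common way to operationalize an increment technique of this kind (a sketch under the assumption of von Bertalanffy growth, with made-up data; not the Commission's exact procedure) is to regress the observed change in length per unit time on mean body length, from which the growth coefficient K and the asymptotic length L∞ follow:

```python
import numpy as np

# Hypothetical observations: mean body length L_bar (cm) over an
# interval, and the growth increment per unit time dL/dt (cm/month).
L_bar = np.array([40.0, 55.0, 70.0, 85.0, 100.0, 115.0])
dL_dt = np.array([4.1, 3.4, 2.7, 2.1, 1.4, 0.8])

# Under von Bertalanffy growth, dL/dt = K * (L_inf - L_bar),
# i.e. a straight line in L_bar with slope -K and intercept K * L_inf.
slope, intercept = np.polyfit(L_bar, dL_dt, 1)
K = -slope
L_inf = intercept / K
print(f"K = {K:.3f} per month, L_inf = {L_inf:.1f} cm")
```

No year classes need to be identified: only paired (mean length, increment) observations enter the regression, which is the essence of the increment technique described above.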
Abstract:
Growth data obtained from a ten-year collection of scales from Maryland freshwater fish are presented in this report, in graphs and tables especially designed to be useful for Maryland fishery management. (PDF contains 40 pages)