Abstract:
We formulate a natural model of loops and isolated vertices for arbitrary planar graphs, which we call the monopole-dimer model. We show that the partition function of this model can be expressed as a determinant. We then extend the method of Kasteleyn and Temperley-Fisher to calculate the partition function exactly in the case of rectangular grids. This partition function turns out to be a square of a polynomial with positive integer coefficients when the grid lengths are even. Finally, we analyse this formula in the infinite volume limit and show that the local monopole density, free energy and entropy can be expressed in terms of well-known elliptic functions. Our technique is a novel determinantal formula for the partition function of a model of isolated vertices and loops for arbitrary graphs.
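The determinantal method of Kasteleyn and Temperley-Fisher that the abstract extends can be illustrated on the classical dimer model (perfect matchings only, no monopoles): for a planar graph with a Pfaffian orientation, the partition function equals the square root of the determinant of a signed adjacency matrix. The sketch below applies the standard column-alternating Kasteleyn orientation to an m x n grid; it is the classical case, not the monopole-dimer model itself.

```python
import numpy as np

def dimer_partition_function(m, n):
    """Partition function (number of dimer covers) of an m x n grid,
    computed Kasteleyn-style as Z = sqrt(|det K|), where K is the signed
    adjacency matrix of a Pfaffian orientation of the grid."""
    if (m * n) % 2:
        return 0  # odd vertex count: no perfect matching exists
    idx = lambda i, j: i * n + j
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            if j + 1 < n:          # horizontal edges carry weight +1
                K[idx(i, j), idx(i, j + 1)] = 1.0
            if i + 1 < m:          # vertical edges alternate sign by column:
                K[idx(i, j), idx(i + 1, j)] = (-1.0) ** j  # a Pfaffian orientation
    K = K - K.T                    # antisymmetrize: K[v, u] = -K[u, v]
    return round(abs(np.linalg.det(K)) ** 0.5)
```

For instance, the 2 x 2 grid has 2 dimer covers and the 8 x 8 board gives the classical count 12988816.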
Abstract:
We study the free fermion theory in 1+1 dimensions deformed by chemical potentials for holomorphic, conserved currents at finite temperature and on a spatial circle. For a spin-three chemical potential mu, the deformation is related at high temperatures to a higher spin black hole in hs[0] theory on AdS(3) spacetime. We calculate the order mu^2 corrections to the single interval Renyi and entanglement entropies on the torus using the bosonized formulation. A consistent result, satisfying all checks, emerges upon carefully accounting for both perturbative and winding mode contributions in the bosonized language. The order mu^2 corrections involve integrals that are finite but potentially sensitive to contact term singularities. We propose and apply a prescription for defining such integrals which matches the Hamiltonian picture and passes several non-trivial checks for both thermal corrections and the Renyi entropies at this order. The thermal corrections are given by a weight six quasi-modular form, whilst the Renyi entropies are controlled by quasi-elliptic functions of the interval length with modular weight six. We also point out the well known connection between the perturbative expansion of the partition function in powers of the spin-three chemical potential and the Gross-Taylor genus expansion of large-N Yang-Mills theory on the torus. We note the absence of winding mode contributions in this connection, which suggests qualitatively different entanglement entropies for the two systems.
Abstract:
The present paper reports a new class of Co-based superalloys that have a gamma-gamma' microstructure and exhibit much lower density compared to other commercially available Co superalloys, including Co-Al-W based alloys. The basic composition is Co-10Al-5Mo (at%) with the addition of 2 at% Ta for stabilization of the gamma' phase. The gamma-gamma' microstructure evolves through solutionising and aging treatment. Using first principles calculations, we observe that Ta plays a crucial role in stabilizing the gamma' phase. By addition of Ta to the basic stoichiometric composition Co-3(Al, Mo), the enthalpy of formation (Delta H-f) of the L1(2) structure (gamma' phase) becomes more negative in comparison to the DO19 structure. The Delta H-f of the L1(2) structure becomes even more negative with the occupancy of Ni and Ti atoms in the lattice, suggesting an increase in the stability of the gamma' precipitates. Among the large number of alloys studied experimentally, the paper presents results of detailed investigations on Co-10Al-5Mo-2Ta, Co-30Ni-10Al-5Mo-2Ta and Co-30Ni-10Al-5Mo-2Ta-2Ti. To evaluate the role of alloying elements, atom probe tomography investigations were carried out to obtain partition coefficients for the constituent elements. The results show strong partitioning of Ni, Al, Ta and Ti in the ordered gamma' precipitates.
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle the problem of dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time, without significantly affecting the denoising performance.
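The split-and-merge structure described above can be sketched end to end. In this toy version a deliberately tiny 1-sparse learner (a spherical-k-means-style update, standing in for K-SVD or whatever learner the authors actually use) is reused three times: to partition the data, to learn the local dictionaries, and to merge the pooled local atoms into the global dictionary. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_dictionary(X, n_atoms, n_iter=15):
    """Toy dictionary learner: 1-sparse coding (best single atom per sample)
    alternated with a power-iteration atom update. A stand-in for K-SVD."""
    D = X[:, rng.choice(X.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        assign = np.argmax(np.abs(D.T @ X), axis=0)   # sparse coding step
        for a in range(n_atoms):                      # dictionary update step
            S = X[:, assign == a]
            if S.shape[1]:
                u = S @ (S.T @ D[:, a])               # one power iteration
                D[:, a] = u / (np.linalg.norm(u) + 1e-12)
    return D

def split_and_merge(X, n_clusters, atoms_per_cluster, n_global_atoms):
    """Split: cluster the samples and learn a local dictionary per cluster.
    Merge: learn a global dictionary on the pooled local atoms."""
    centroids = learn_dictionary(X, n_clusters, n_iter=10)
    assign = np.argmax(np.abs(centroids.T @ X), axis=0)
    local = [learn_dictionary(X[:, assign == c], atoms_per_cluster)
             for c in range(n_clusters)
             if np.count_nonzero(assign == c) >= atoms_per_cluster]
    pooled = np.hstack(local)          # treat local atoms as training data
    return learn_dictionary(pooled, n_global_atoms)
```

The memory saving comes from the merge step: the global problem sees only n_clusters x atoms_per_cluster pooled atoms instead of the full dataset.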
Abstract:
Despite significant advances in recent years, structure-from-motion (SfM) pipelines suffer from two important drawbacks. Apart from requiring significant computational power to solve the large-scale computations involved, such pipelines sometimes fail to correctly reconstruct when the accumulated error in incremental reconstruction is large or when the number of 3D to 2D correspondences is insufficient. In this paper we present a novel approach to mitigate the above-mentioned drawbacks. Using an image match graph based on matching features, we partition the image data set into smaller sets or components, which are reconstructed independently. Following such reconstructions, we utilise the available epipolar relationships that connect images across components to correctly align the individual reconstructions in a global frame of reference. This results in both a significant speed up of at least one order of magnitude and also mitigates the problems of reconstruction failures with a marginal loss in accuracy. The effectiveness of our approach is demonstrated on some large-scale real world data sets.
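The final registration step, placing independently reconstructed components into one global frame, can be sketched as estimating a similarity transform (scale, rotation, translation) between 3D points the two components have in common. The closed-form Umeyama/Kabsch-style solver below is a generic stand-in for the paper's epipolar-based alignment, not the authors' method.

```python
import numpy as np

def align_similarity(P, Q):
    """Estimate s, R, t with Q ~= s * R @ P + t  (P, Q: 3 x N point sets).

    Closed-form Umeyama-style similarity alignment; here a hedged stand-in
    for registering two component reconstructions that share 3D points."""
    mu_p = P.mean(axis=1, keepdims=True)
    mu_q = Q.mean(axis=1, keepdims=True)
    X, Y = P - mu_p, Q - mu_q
    H = Y @ X.T / P.shape[1]              # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                        # optimal rotation
    var_p = (X ** 2).sum() / P.shape[1]
    s = np.trace(np.diag(S) @ D) / var_p  # optimal scale
    t = mu_q - s * (R @ mu_p)             # optimal translation
    return s, R, t
```

On noiseless shared points the transform is recovered exactly; in practice one would wrap this in a RANSAC loop over the cross-component correspondences.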
Abstract:
Homogeneous temperature regions are necessary for use in hydrometeorological studies. The regions are often delineated by analysing statistics derived from time series of maximum, minimum or mean temperature, rather than attributes influencing temperature. This practice cannot yield meaningful regions in data-sparse areas. Further, independent validation of the delineated regions for homogeneity in temperature is not possible, as temperature records form the basis to arrive at the regions. To address these issues, a two-stage clustering approach is proposed in this study to delineate homogeneous temperature regions. The first stage of the approach involves (1) determining the correlation structure between observed temperature over the study area and possible predictors (large-scale atmospheric variables) influencing the temperature and (2) using the correlation structure as the basis to delineate sites in the study area into clusters. The second stage of the approach involves analysis on each of the clusters to (1) identify potential predictors (large-scale atmospheric variables) influencing temperature at sites in the cluster and (2) partition the cluster into homogeneous fuzzy temperature regions using the identified potential predictors. Application of the proposed approach to India yielded 28 homogeneous regions that were demonstrated to be effective when compared to an alternative set of 6 regions previously delineated over the study area. Inter-site cross-correlations of monthly maximum and minimum temperatures in the existing regions were found to be weak and negative for several months, which is undesirable. This problem was not found in the case of regions delineated using the proposed approach. The utility of the proposed regions in arriving at estimates of potential evapotranspiration for ungauged locations in the study area is also demonstrated.
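The second-stage fuzzy partitioning can be sketched with a generic fuzzy c-means routine: each row of the feature matrix would hold one site's identified potential predictors, and the soft memberships define the fuzzy regions. This is a textbook fuzzy c-means sketch under that assumption, not the study's actual implementation.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Generic fuzzy c-means on the rows of X (sites x predictors).

    A hedged stand-in for the paper's second-stage partitioning of a
    cluster of sites into homogeneous fuzzy temperature regions."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)           # memberships sum to 1 per site
    for _ in range(n_iter):
        W = U ** m                              # fuzzified memberships
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))             # standard FCM membership update
        U /= U.sum(axis=0, keepdims=True)
    return U, centers
```

Sites with high membership in one region are assigned crisply; sites with split memberships mark the fuzzy boundaries between regions.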
Abstract:
We study N = 2 compactifications of heterotic string theory on the CHL orbifold (K3 x T-2)/Z(N) with N = 2, 3, 5, 7. Z(N) acts as an automorphism on K3 together with a shift of 1/N along one of the circles of T-2. These compactifications generalize the example of the heterotic string on K3 x T-2 studied in the context of dualities in string theories. We evaluate the new supersymmetric index for these theories and show that their expansion can be written in terms of the McKay-Thompson series associated with the Z(N) automorphism embedded in the Mathieu group M-24. We then evaluate the difference in one-loop threshold corrections to the non-Abelian gauge couplings with Wilson lines and show that their moduli dependence is captured by Siegel modular forms related to dyon partition functions of N = 4 string theories.
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm that combines these two ideas.
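The AIS idea can be made concrete on an RBM small enough to check against brute force: anneal from the zero-parameter base RBM (whose partition function is known in closed form) to the target RBM, accumulating importance weights along a path of intermediate temperatures. The sketch below is a minimal illustration under these assumptions; CSL and RAISE are omitted, and all sizes are toy-scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_unnorm(v, W, b, c, beta):
    """log of the unnormalized visible marginal of an RBM with all
    parameters scaled by beta (hidden units summed out analytically)."""
    return beta * (v @ b) + np.logaddexp(0.0, beta * (v @ W + c)).sum(axis=-1)

def exact_log_z(W, b, c):
    """Brute-force log Z by enumerating all visible states (tiny RBMs only)."""
    n_v = len(b)
    V = np.array([[(s >> i) & 1 for i in range(n_v)]
                  for s in range(2 ** n_v)], float)
    return np.logaddexp.reduce(log_unnorm(V, W, b, c, 1.0))

def ais_log_z(W, b, c, n_chains=100, n_betas=1000):
    """Annealed importance sampling estimate of log Z, annealing from the
    zero-parameter base RBM (log Z0 = (n_v + n_h) * log 2) to the target."""
    n_v, n_h = W.shape
    betas = np.linspace(0.0, 1.0, n_betas)
    v = (rng.random((n_chains, n_v)) < 0.5).astype(float)  # exact base samples
    log_w = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        log_w += log_unnorm(v, W, b, c, b1) - log_unnorm(v, W, b, c, b0)
        # one Gibbs sweep leaving the intermediate RBM (at b1) invariant
        h = (rng.random((n_chains, n_h)) <
             1.0 / (1.0 + np.exp(-b1 * (v @ W + c)))).astype(float)
        v = (rng.random((n_chains, n_v)) <
             1.0 / (1.0 + np.exp(-b1 * (h @ W.T + b)))).astype(float)
    log_z0 = (n_v + n_h) * np.log(2.0)
    return log_z0 + np.logaddexp.reduce(log_w) - np.log(n_chains)
```

With small weights the intermediate distributions overlap strongly and the AIS estimate tracks the enumerated log Z closely; for realistically large RBMs only the AIS side of this comparison remains feasible.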
Abstract:
The polyhedral model provides an expressive intermediate representation that is convenient for the analysis and subsequent transformation of affine loop nests. Several heuristics exist for achieving complex program transformations in this model. However, there is also considerable scope to utilize this model to tackle the problem of automatic memory footprint optimization. In this paper, we present a new automatic storage optimization technique which can be used to achieve both intra-array and inter-array storage reuse with a pre-determined schedule for the computation. Our approach works by finding statement-wise storage partitioning hyperplanes that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our heuristic is driven by a fourfold objective function which not only minimizes the dimensionality and storage requirements of arrays required for each high-level statement, but also maximizes inter-statement storage reuse. The storage mappings obtained using our heuristic can be asymptotically better than those obtained by any existing technique. We implement our technique and demonstrate its practical impact by evaluating its effectiveness on several benchmarks chosen from the domains of image processing, stencil computations, and high-performance computing.
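The flavor of intra-array storage reuse can be seen on a 1-D Jacobi stencil: with the schedule fixed, values at time step t are live only until step t+1, so the storage mapping t -> t mod 2 (a simple storage partitioning hyperplane in the time dimension) shrinks the time dimension from `steps + 1` arrays to two buffers without changing the computed result. This is a minimal hand-written illustration of modulo storage mapping, not the paper's automatic heuristic.

```python
import numpy as np

def jacobi_full(x, steps):
    """Reference: store every time step of a 1-D Jacobi smoothing stencil."""
    A = [x.copy()]
    for _ in range(steps):
        prev, nxt = A[-1], A[-1].copy()
        nxt[1:-1] = (prev[:-2] + prev[1:-1] + prev[2:]) / 3.0
        A.append(nxt)
    return A[-1]

def jacobi_mod2(x, steps):
    """Same computation under the storage mapping t -> t mod 2: values whose
    live ranges do not overlap share one of only two buffers."""
    buf = [x.copy(), x.copy()]
    for t in range(1, steps + 1):
        src, dst = buf[(t - 1) % 2], buf[t % 2]
        dst[0], dst[-1] = src[0], src[-1]      # boundary values carried over
        dst[1:-1] = (src[:-2] + src[1:-1] + src[2:]) / 3.0
    return buf[steps % 2]
```

The memory footprint drops from O(steps * n) to O(n) while both versions produce identical output.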
Abstract:
“Deborah Numbers”, Coupling Multiple Space and Time Scales and Governing Damage Evolution to Failure
Abstract:
Two different spatial levels are involved in damage accumulation to eventual failure, characterized by the nucleation and growth rates of microdamage, n_N* and V*. It is found that the trans-scale length ratio c*/L does not directly affect the process. Instead, two independent dimensionless numbers play the key role in damage accumulation to failure: the trans-scale one De* = a c*/(sigma V*), which couples the imposed loading to mesoscopic parameters, and the intrinsic one D* = n_N* c*^5/V*, which includes mesoscopic parameters only. The above implies that there are three time scales involved in the process: the macroscopic imposed time scale t_im = sigma/a (with sigma the stress amplitude and a the loading rate) and two mesoscopic time scales of damage nucleation and growth, t_N = 1/(n_N* c*^4) and t_V = c*/V*. Clearly, the dimensionless number De* = t_V/t_im is the ratio of the microdamage growth time scale to the macroscopically imposed time scale, analogous to the definition of the Deborah number in rheology as the ratio of a relaxation time to an externally imposed one. We therefore call De* the imposed Deborah number; it represents the competition and coupling between microdamage growth and the macroscopically imposed wave loading. In stress-wave induced tensile failure (spallation) De* < 1; this means that microdamage has enough time to grow during the macroscopic wave loading, so microdamage growth appears to be the predominant mechanism governing failure. Moreover, the dimensionless number D* = t_V/t_N characterizes the ratio of the two intrinsic mesoscopic time scales, growth over nucleation; we accordingly call D* the intrinsic Deborah number. Both of these time scales are relevant to intrinsic relaxation rather than to the imposed loading. Furthermore, the intrinsic Deborah number D* implies a certain characteristic damage. In particular, it is derived that D* is a proper indicator of the macroscopic critical damage to damage localization, with D* ~ (10^-3 to 10^-2) in spallation. More importantly, we found that this small intrinsic Deborah number D* indicates the energy partition of microdamage dissipation relative to bulk plastic work. This explains why spallation cannot be formulated by a macroscopic energy criterion and must be treated by multi-scale analysis.
Abstract:
Accurate and precise estimates of age and growth rates are essential parameters in understanding the population dynamics of fishes. Some of the more sophisticated stock assessment models, such as virtual population analysis, require age and growth information to partition catch data by age. Stock assessment efforts by regulatory agencies are usually directed at specific fisheries which are being heavily exploited and are suspected of being overfished. Interest in stock assessment of some of the oceanic pelagic fishes (tunas, billfishes, and sharks) has developed only over the last decade, during which exploitation has increased steadily in response to increases in worldwide demand for these resources. Traditionally, estimating the age of fishes has been done by enumerating growth bands on skeletal hardparts, through length-frequency analysis, through tag-and-recapture studies, and by raising fish in enclosures. However, the problems related to determining the age of some of the oceanic pelagic fishes are unique compared with other species. For example, sampling is difficult for these large, highly mobile fishes because of their size, extensive distributions throughout the world's oceans, and, for some, such as the marlins, infrequent catches. In addition, movements of oceanic pelagic fishes often transect temperate as well as tropical oceans, making interpretation of growth bands on skeletal hardparts more difficult than with more sedentary temperate species. Many oceanic pelagics are also long-lived, attaining ages in excess of 30 yr, and more often than not, their life cycles do not lend themselves easily to artificial propagation and culture. These factors contribute to the difficulty of determining ages and are generally characteristic of this group: the tunas, billfishes, and sharks. Accordingly, the rapidly growing international concern in managing oceanic pelagic fishes, as well as the unique difficulties in ageing these species, prompted us to hold this workshop.
Our two major objectives for this workshop are to: 1) encourage the interchange of ideas on this subject, and 2) establish the "state of the art." A total of 65 scientists from 10 states in the continental United States and Hawaii, three provinces in Canada, France, Republic of Senegal, Spain, Mexico, Ivory Coast, and New South Wales (Australia) attended the workshop held at the Southeast Fisheries Center, Miami, Fla., 15-18 February 1982. Our first objective, encouraging the interchange of ideas, is well illustrated in the summaries of the Round Table Discussions and in the Glossary, which defines terms used in this volume. The majority of the workshop participants agreed that the lack of validation of age estimates, and of the means to accomplish the same, are serious problems preventing advancements in assessing the age and growth of fishes, particularly oceanic pelagics. The alternatives relating to the validation problem were exhaustively reviewed during the Round Table Discussions and are a major highlight of this workshop. How well we accomplished our second objective, to establish the "state of the art" on age determination of oceanic pelagic fishes, will probably best be judged on the basis of these proceedings and whether future research efforts are directed at the problem areas we have identified. In order to produce high-quality papers, workshop participants served as referees for the manuscripts published in this volume. Several papers given orally at the workshop, and included in these proceedings, were summarized from full-length manuscripts which have been submitted to or published in other scientific outlets; these papers are designated as SUMMARY PAPERS. In addition, the SUMMARY PAPER designation was also assigned to workshop papers that represented very preliminary or initial stages of research, cursory progress reports, papers that were data-shy, or papers that provide only brief reviews on general topics.
Bilingual abstracts were included for all papers that required translation. We gratefully acknowledge the support of everyone involved in this workshop. Funding was provided by the Southeast Fisheries Center, and Jack C. Javech did the scientific illustrations appearing on the cover, between major sections, and in the Glossary.
Abstract:
Background: Gene expression technologies have opened up new ways to diagnose and treat cancer and other diseases. Clustering algorithms are a useful approach with which to analyze genome expression data. They attempt to partition the genes into groups exhibiting similar patterns of variation in expression level. An important problem associated with gene classification is to discern whether the clustering process can find a relevant partition as well as the identification of new genes classes. There are two key aspects to classification: the estimation of the number of clusters, and the decision as to whether a new unit (gene, tumor sample ... ) belongs to one of these previously identified clusters or to a new group. Results: ICGE is a user-friendly R package which provides many functions related to this problem: identify the number of clusters using mixed variables, usually found by applied biomedical researchers; detect whether the data have a cluster structure; identify whether a new unit belongs to one of the pre-identified clusters or to a novel group, and classify new units into the corresponding cluster. The functions in the ICGE package are accompanied by help files and easy examples to facilitate its use. Conclusions: We demonstrate the utility of ICGE by analyzing simulated and real data sets. The results show that ICGE could be very useful to a broad research community.
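The second key decision ICGE supports, whether a new unit joins a pre-identified cluster or starts a novel group, can be illustrated with a generic distance-based atypicality test. This is a deliberately simple stand-in for ICGE's actual criterion (the package's INCA-based statistic); the threshold rule and all names here are illustrative only.

```python
import numpy as np

def assign_or_new(x, clusters, q=0.95):
    """Decide whether a new unit x joins an existing cluster or is novel:
    join the nearest centroid unless x lies farther from it than the
    q-quantile of that cluster's own centroid distances.

    A generic distance-based stand-in for the atypicality test implemented
    in ICGE; the quantile threshold rule is illustrative, not the package's.
    """
    best, best_d, best_pts = None, np.inf, None
    for k, pts in enumerate(clusters):
        centroid = pts.mean(axis=0)
        d = np.linalg.norm(x - centroid)
        if d < best_d:
            best, best_d, best_pts = k, d, pts
    within = np.linalg.norm(best_pts - best_pts.mean(axis=0), axis=1)
    return best if best_d <= np.quantile(within, q) else -1  # -1: novel group
```

A unit well inside an existing cluster is assigned to it; a unit far from every cluster is flagged as a candidate new class, mirroring the workflow the abstract describes.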
Abstract:
ENGLISH: Morphometric data from yellowfin tuna, Thunnus albacares, were collected from various locations in the eastern Pacific Ocean from 1974 to 1976 to assess geographic and temporal variation of morphometric characters. The data were statistically adjusted, using allometric formulae to partition out the effects of size. Discriminant analyses were applied to the adjusted morphometric characters. Yellowfin sampled from north of 15°N-20°N were different from those sampled from south of 15°N-20°N. The absence of any clinal relationship between morphometric characters and latitude or longitude suggests a pattern of somewhat distinct regional groups. These results clearly demonstrate geographic variation in morphometric characters of yellowfin in the eastern Pacific Ocean, which suggests differences between the life histories of the northern and southern groups. SPANISH: Entre 1974 y 1976 se tomaron datos morfométricos de atunes aleta amarilla, Thunnus albacares, de varios lugares en el Océano Pacífico oriental, a fin de evaluar la variación geográfica y temporal de los caracteres morfométricos. Se ajustaron los datos estadísticamente, usando fórmulas alométricas para eliminar los efectos del tamaño. Se aplicaron análisis discriminantes a los caracteres morfométricos ajustados. Los aletas amarillas muestreados provenientes del norte de 15°N-20°N eran diferentes a aquellos muestreados del sur de 15°N-20°N. La falta de una relación clinal entre los caracteres morfométricos y la latitud o longitud sugiere la existencia de grupos regionales algo distintos. Estos resultados demuestran claramente una variación geográfica en los caracteres morfométricos del aleta amarilla en el Océano Pacífico oriental, la cual sugiere diferencias en los ciclos vitales de los grupos del norte y del sur.
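The allometric size adjustment mentioned in the abstract is conventionally done by fitting the exponent b in Y = a * L^b on log-log axes and rescaling each measurement to a common reference body length. The sketch below shows that standard procedure; the reference length L0 and variable names are illustrative, not the study's actual parameters.

```python
import numpy as np

def allometric_adjust(Y, L, L0=100.0):
    """Remove body-size effects from a morphometric character Y measured on
    fish of length L: fit Y = a * L**b by log-log regression, then rescale
    each value to a common reference length L0 via Y * (L0 / L)**b.

    Standard allometric adjustment; L0 here is an illustrative choice."""
    b = np.polyfit(np.log(L), np.log(Y), 1)[0]   # allometric exponent
    return Y * (L0 / L) ** b
```

After adjustment the character is uncorrelated with body length by construction, so discriminant analyses on the adjusted characters compare shape rather than size.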