Abstract:
Periglacial processes act in cold, non-glacial regions where landscape development is controlled mainly by frost activity. Circa 25 percent of Earth's surface can be considered periglacial. Geographical Information Systems, combined with advanced statistical modeling methods, provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were to: 1) model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modeling, 2) investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) compare two widely used statistical modeling methods, Generalized Linear Models and Generalized Additive Models, 4) study the effect of modeling resolution on prediction and 5) study how a spatially continuous prediction can be obtained from point data. The observational data of this study consist of 369 points collected during the summers of 2009 and 2010 in the study area in Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbations, slope processes, weathering, deflation, nivation and fluvial processes. The features were modeled using Generalized Linear Models (GLM) and Generalized Additive Models (GAM) based on Poisson errors. Based on these models, the abundance of periglacial features was predicted onto a spatial grid with a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modeling resolution was investigated in a small independent study area with a spatial resolution of 0.01 hectare. The models explained 45-70 % of the occurrence of periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data.
Spearman's correlation between the observed and predicted values varied from 0.258 to 0.754. Based on explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modeling resolution was clear: a too-coarse resolution caused a loss of information, while a finer resolution brought out more localized variation. The ability of the models to explain and predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modeling methods were small, although the explained deviance was higher with the GLM models than with the GAMs. In turn, GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which correlated strongly with many other explanatory variables. Ongoing global warming will have a great impact especially on cold environments at high latitudes, and for this reason an important research topic in the near future will be the response of periglacial environments to a warming climate.
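The modeling pipeline described above (a Poisson-error GLM, explained deviance, and Spearman correlation between observed and predicted counts) can be sketched as follows. This is a minimal illustration, not the study's actual model: the two predictors, their value ranges and the synthetic counts are invented, and ties are broken arbitrarily in the rank correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the field data: counts of periglacial features
# as a function of two hypothetical predictors (mean altitude, mean
# slope angle). The real study used 369 field points.
n = 369
X = np.column_stack([np.ones(n),
                     rng.uniform(500, 1100, n),   # altitude (m), assumed range
                     rng.uniform(0, 30, n)])      # slope angle (deg), assumed
beta_true = np.array([-2.0, 0.003, 0.02])
y = rng.poisson(np.exp(X @ beta_true))

# Poisson GLM with log link, fitted by iteratively reweighted least squares.
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta)
    W = mu                                # Poisson working weights
    z = X @ beta + (y - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

# Explained deviance (the study reports 45-70 %).
def poisson_deviance(y, mu):
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

mu_hat = np.exp(X @ beta)
null_mu = np.full(n, y.mean())
expl_dev = 1 - poisson_deviance(y, mu_hat) / poisson_deviance(y, null_mu)

# Spearman correlation between observed and predicted counts
# (ties broken arbitrarily in this sketch).
def spearman(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

print(f"explained deviance: {expl_dev:.2f}")
print(f"Spearman rho: {spearman(y, mu_hat):.2f}")
```

A GAM would replace the linear terms with smooth functions of each predictor; the deviance and correlation diagnostics are computed the same way.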
Abstract:
We present an explicit solution of the problem of two coupled spin-1/2 impurities, interacting with a band of conduction electrons. We obtain an exact effective bosonized Hamiltonian, which is then treated by two different methods (low-energy theory and mean-field approach). Scale invariance is explicitly shown at the quantum critical point. The staggered susceptibility behaves like ln(T(K)/T) at low T, whereas the magnetic susceptibility and the spin correlation <S1.S2> are well behaved at the transition. The divergence of C(T)/T when approaching the transition point is also studied. The non-Fermi-liquid (actually marginal-Fermi-liquid) critical point is shown to arise because of the existence of anomalous correlations, which lead to degeneracies between bosonic and fermionic states of the system. The methods developed in this paper are of interest for studying more physically relevant models, for instance, for high-T(c) cuprates.
Abstract:
Indian logic has a long history. It somewhat covers the domains of two of the six schools (darsanas) of Indian philosophy, namely, Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in Computer Science, in particular in Artificial Intelligence, have got researchers in these areas interested in the basic problems of language, logic and cognition over the past three decades. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and transferring that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base of the system, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help the computer scientist understand the deeper implications of the terms and concepts he is currently using and attempting to develop.
Explicit and Optimal Exact-Regenerating Codes for the Minimum-Bandwidth Point in Distributed Storage
Abstract:
In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading beta units of data from each. Dimakis et al. show that the repair bandwidth d beta can be considerably reduced if each node stores slightly more than the minimum required, and they characterize the tradeoff between the amount of storage per node and the repair bandwidth. In the exact regeneration variant, unlike functional regeneration, the replacement for a failed node is required to store data identical to that in the failed node. This greatly reduces the complexity of system maintenance. The main result of this paper is an explicit construction of codes for all values of the system parameters at one of the two most important and extreme points of the tradeoff: the Minimum Bandwidth Regenerating point, which performs optimal exact regeneration of any failed node. A second result is a non-existence proof showing that, with one possible exception, no other point on the tradeoff can be achieved for exact regeneration.
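At the Minimum Bandwidth Regenerating (MBR) point of the tradeoff characterized by Dimakis et al., a file of B units gives a per-helper download of beta = 2B/(k(2d-k+1)) and per-node storage alpha = d*beta, so storage exactly equals repair bandwidth. A small sketch of this arithmetic (the parameter values below are chosen only for illustration):

```python
from fractions import Fraction

def mbr_point(B, n, k, d):
    """Storage per node (alpha) and download per helper (beta) at the
    minimum-bandwidth regenerating (MBR) point, for a file of B units
    stored on n nodes, any k of which suffice to decode, with repairs
    contacting d surviving nodes."""
    assert k <= d <= n - 1
    beta = Fraction(2 * B, k * (2 * d - k + 1))  # download per helper node
    alpha = d * beta                             # storage per node
    return alpha, beta

# At MBR the per-node storage alpha equals the total repair bandwidth
# d * beta: a replacement node stores exactly what it downloads.
alpha, beta = mbr_point(B=12, n=5, k=3, d=4)
print(alpha, 4 * beta)
```

Using exact rationals keeps the identity alpha = d*beta visible without floating-point noise.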
Abstract:
A simple one-dimensional inertial model is presented for the transient response analysis of notched beams under impact and for extracting dynamic initiation toughness values. The model includes the effects of striker-mass interactions and contact deformations of the beam. The displacement time history of the striker mass is applied to the model as the forcing function. The model is validated by comparison with experimental investigations on ductile aluminium 6061 alloy and the brittle polymer PMMA.
Abstract:
Optimum design of dynamic fracture test rigs demands a thorough appreciation of beam vibration under impact. Analyses invariably presume rigid anvils and neglect overhang effects. The beam response predicted analytically and numerically in this paper highlights the significant role of anvil rigidity and beam overhangs in the impact dynamics of three-point-bend (3PB) specimens.
Abstract:
We present analytic results to show that the Schwinger-boson hole-fermion mean-field state exhibits non-Fermi-liquid behavior due to spin-charge separation. The physical electron Green's function consists of three additive components: (a) a Fermi-liquid component associated with the Bose condensate; (b) a non-Fermi-liquid component which has a logarithmic peak and a long tail that gives rise to a linear density of states that is symmetric about the Fermi level, and a momentum distribution function with a logarithmic discontinuity at the Fermi surface; (c) a second non-Fermi-liquid component associated with the thermal bosons, which leads to a constant density of states. It is shown that zero-point fluctuations associated with the spin degrees of freedom are responsible for the logarithmic instabilities and the restoration of particle-hole symmetry close to the Fermi surface.
Abstract:
Our ability to infer the protein quaternary structure automatically from atom and lattice information is inadequate, especially for weak complexes and heteromeric quaternary structures. Several approaches exist, but they have limited performance. Here, we present a new scheme to infer protein quaternary structure from lattice and protein information, with all-around coverage for strong, weak and very weak affinity homomeric and heteromeric complexes. The scheme combines a naive Bayes classifier and point-group symmetry under a Boolean framework to detect quaternary structures in the crystal lattice. It consistently produces >= 90% coverage across diverse benchmarking data sets, including a notably superior 95% coverage for recognition of heteromeric complexes, compared with 53% on the same data set by the current state-of-the-art method. A detailed study of a limited number of prediction-failed cases offers interesting insights into the intriguing nature of protein contacts in the lattice. The findings have implications for the accurate inference of quaternary states of proteins, especially weak affinity complexes.
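As an illustration of the classifier half of such a scheme only, here is a minimal Bernoulli naive Bayes over Boolean contact features. The feature set, labels and training examples are entirely hypothetical and not taken from the paper, which combines the classifier with point-group symmetry detection.

```python
import math

# Toy Bernoulli naive Bayes over Boolean crystal-contact features.
# Features (e.g. "large interface", "symmetric contact", "conserved
# patch") and all training data below are invented for illustration.
def train(X, y):
    n = len(y)
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, lab in zip(X, y) if lab == c]
        prior = math.log(len(rows) / n)
        # Laplace-smoothed probability that each Boolean feature is true
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def predict(model, x):
    def score(c):
        prior, probs = model[c]
        return prior + sum(math.log(p if xi else 1 - p)
                           for p, xi in zip(probs, x))
    return max(model, key=score)

# 1 = biological contact, 0 = lattice artifact (labels hypothetical).
X = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (0, 0, 0), (0, 1, 0), (0, 0, 1)]
y = [1, 1, 1, 0, 0, 0]
m = train(X, y)
print(predict(m, (1, 1, 1)), predict(m, (0, 0, 0)))
```

The Boolean framework in the paper would then combine such per-contact verdicts with symmetry constraints before declaring a quaternary structure.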
Abstract:
It is shown that, although the mathematical analysis of the Alfven-wave equation does not show any variation at non-zero or zero singular points, the role of surface waves in the physical mechanism of resonant absorption of Alfven waves is very different at these points. This difference becomes even greater when resistivity is taken into account. At the neutral point the zero-frequency surface waves that are symmetric surface modes of the structured neutral layer couple to the tearing mode instability of the layer. The importance of this study for the energy balance in tearing modes and the association of surface waves with driven magnetic reconnection is also pointed out.
Abstract:
We propose a model for concentrated emulsions based on the speculation that a macroscopic shear strain does not produce an affine deformation in the randomly close-packed droplet structure. The model yields an anomalous contribution to the complex dynamic shear modulus that varies as the square root of frequency. We test this prediction using a novel light scattering technique to measure the dynamic shear modulus, and directly observe the predicted behavior over six decades of frequency and a wide range of volume fractions.
Abstract:
This paper describes an algorithm for constructing the solid model (boundary representation) from point data measured from the faces of the object. The point data is assumed to be clustered for each face. This algorithm does not require any computer model of the part to exist and does not require any topological information about the part to be input by the user. The property that a convex solid can be constructed uniquely from geometric input alone is utilized in the current work. Any object can be represented as a combination of convex solids. The proposed algorithm attempts to construct convex polyhedra from the given input. The polyhedra so obtained are then checked against the input data for containment, and those polyhedra that satisfy this check are combined (using the Boolean union operation) to realise the solid model. Results of implementation are presented.
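A two-dimensional analogue of the hull-and-containment step can be sketched as follows. The actual algorithm works with 3-D convex polyhedra and Boolean unions; here a convex hull is built for one cluster of measured points and the input points are then checked for containment, with the cluster coordinates made up for illustration.

```python
# 2-D sketch: build a convex hull per cluster, then verify that all
# measured points of the cluster lie inside the hull before accepting it.

def cross(o, a, b):
    """z-component of (a - o) x (b - o); >0 for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone-chain convex hull, counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def contains(hull, p, eps=1e-9):
    """Point-in-convex-polygon test: p must lie left of every CCW edge."""
    m = len(hull)
    return all(cross(hull[i], hull[(i + 1) % m], p) >= -eps
               for i in range(m))

cluster = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]  # one "face" cluster
hull = convex_hull(cluster)
# Containment check: every measured point must lie inside its hull.
assert all(contains(hull, p) for p in cluster)
print(hull)
```

In 3-D the same pattern applies with polyhedral hulls and half-space containment tests, after which the accepted polyhedra are unioned into the solid model.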
Abstract:
The general equation for one-dimensional wave propagation at low flow Mach numbers (M ≤ 0.2) is derived and solved analytically for conical and exponential shapes. The transfer matrices are derived and shown to be self-consistent. Comparison is also made with the relevant data available in the literature. The transmission-loss behaviour of conical and exponential pipes, and of mufflers involving these shapes, is studied. Analytical expressions for the same are given for the case of a stationary medium. Mufflers involving conical and exponential pipes are shown to be inferior to simple expansion chambers (of similar dimensions) at higher frequencies from the point of view of noise abatement, as was observed earlier experimentally.
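The simple expansion chamber used as the comparison baseline has the classical plane-wave transmission-loss formula TL = 10 log10[1 + (1/4)(m - 1/m)^2 sin^2(kL)] for a stationary medium, where m is the area expansion ratio, L the chamber length and k = 2*pi*f/c. A sketch of this baseline, with the chamber geometry chosen only for illustration:

```python
import math

def tl_expansion_chamber(f, L, m, c=343.0):
    """Transmission loss (dB) of a simple expansion chamber of length L
    and area expansion ratio m at frequency f, stationary medium,
    classical plane-wave result."""
    k = 2 * math.pi * f / c
    return 10 * math.log10(1 + 0.25 * (m - 1 / m) ** 2
                           * math.sin(k * L) ** 2)

# TL peaks where sin(kL) = 1 (kL = pi/2, 3pi/2, ...) and vanishes at the
# pass frequencies kL = n*pi, where the chamber is acoustically
# transparent. Geometry below is illustrative only.
L, m = 0.3, 9.0
f_peak = 343.0 / (4 * L)   # kL = pi/2
f_pass = 343.0 / (2 * L)   # kL = pi
print(round(tl_expansion_chamber(f_peak, L, m), 2),
      round(tl_expansion_chamber(f_pass, L, m), 2))
```

The conical and exponential mufflers studied in the paper replace the abrupt area change with a gradual one, which is why their attenuation falls below this baseline at higher frequencies.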
Abstract:
We use the BBGKY hierarchy equations to calculate, perturbatively, the lowest order nonlinear correction to the two-point correlation and the pair velocity for Gaussian initial conditions in a critical density matter-dominated cosmological model. We compare our results with the results obtained using the hydrodynamic equations that neglect pressure and find that the two match, indicating that there are no effects of multistreaming at this order of perturbation. We analytically study the effect of small scales on the large scales by calculating the nonlinear correction for a Dirac delta function initial two-point correlation. We find that the induced two-point correlation has an x^(-6) behavior at large separations. We have considered a class of initial conditions where the initial power spectrum at small k has the form k^n with 0 < n ≤ 3 and have numerically calculated the nonlinear correction to the two-point correlation, its average over a sphere and the pair velocity over a large dynamical range. We find that at small separations the effect of the nonlinear term is to enhance the clustering, whereas at intermediate scales it can act to either increase or decrease the clustering. At large scales we find a simple formula that gives a very good fit for the nonlinear correction in terms of the initial function. This formula explicitly exhibits the influence of small scales on large scales, and because of this coupling the perturbative treatment breaks down at large scales much before one would expect it to if the nonlinearity were local in real space. We physically interpret this formula in terms of a simple diffusion process. We have also investigated the case n = 0, and we find that it differs from the other cases in certain respects.
We investigate a recently proposed scaling property of gravitational clustering, and we find that the lowest order nonlinear terms cause deviations from the scaling relations that are strictly valid in the linear regime. The approximate validity of these relations in the nonlinear regime in N-body simulations cannot be understood at this order of evolution.
Abstract:
Reflection electron energy-loss spectra are reported for the family of compounds TiOx over the entire homogeneity range (0.8 < x < 1.3). The spectra exhibit a plasmon feature on the low-energy side, while several interband transitions are prominent at higher energies. The real and imaginary parts of the dielectric functions and the optical conductivity for these compounds are determined using the Kramers-Kronig analysis. The results exhibit systematic behavior with varying oxygen stoichiometry.
Abstract:
We combine multiple scattering and renormalization group methods to calculate the leading order dimensionless virial coefficient k(s) for the friction coefficient of dilute polymer solutions under conditions where the osmotic second virial coefficient vanishes (i.e., at the theta point T-theta). Our calculations are formulated in terms of coupled kinetic equations for the polymer and solvent, in which the polymers are modeled as continuous chains whose configurations evolve under the action of random forces in the velocity field of the solvent. To lowest order in epsilon = 4 - d, we find that k(s) = 1.06. This result compares satisfactorily with existing experimental estimates of k(s), which are in the range 0.7-0.8. It is also in good agreement with other theoretical results on chains and suspensions at T-theta. Our calculated k(s) is also found to be identical to the leading order virial coefficient of the tracer friction coefficient at the theta point. We discuss possible reasons for the difficulties encountered when attempting to evaluate k(s) by extrapolating prior renormalization group calculations from semidilute concentrations to the infinitely dilute limit. (C) 1996 American Institute of Physics.