1000 results for Vertex Models
Abstract:
Similar quantum phase diagrams and transitions are found for three classes of one-dimensional models with equally spaced sites, singlet ground states (GS), inversion symmetry at sites and a bond order wave (BOW) phase in some sectors. The models are frustrated spin-1/2 chains with variable range exchange, half-filled Hubbard models with spin-independent interactions and modified Hubbard models with site energies for describing organic charge transfer salts. In some range of parameters, the models have a first order quantum transition at which the GS expectation value of the sublattice spin ⟨S_A^2⟩ of odd or even-numbered sites is discontinuous. There is an intermediate BOW phase for other model parameters that lead to two continuous quantum transitions with continuous ⟨S_A^2⟩. Exact diagonalization of finite systems and symmetry arguments provide a unified picture of familiar 1D models that have appeared separately in widely different contexts.
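To make the method concrete, the following is a minimal sketch (not the authors' code) of exact diagonalization for a short frustrated J1-J2 spin-1/2 chain, evaluating the sublattice-spin expectation value ⟨S_A^2⟩ in the ground state; the chain length, couplings and open boundary conditions are illustrative assumptions.

# Minimal sketch: exact diagonalization of a short frustrated J1-J2 spin-1/2 chain
# and evaluation of the sublattice spin <S_A^2> on even-numbered sites.
import numpy as np

N = 8                      # number of sites (kept small: Hilbert space is 2^N)
J1, J2 = 1.0, 0.4          # nearest- and next-nearest-neighbour exchange (illustrative values)

# single-site spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i):
    """Embed a single-site operator at site i in the full 2^N-dimensional space."""
    mats = [id2] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

S = [[site_op(o, i) for o in (sx, sy, sz)] for i in range(N)]

# open-chain Hamiltonian H = sum_i [ J1 S_i.S_{i+1} + J2 S_i.S_{i+2} ]
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):
    H += J1 * sum(S[i][a] @ S[i + 1][a] for a in range(3))
for i in range(N - 2):
    H += J2 * sum(S[i][a] @ S[i + 2][a] for a in range(3))

evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]                                    # ground state

# sublattice spin on even-numbered sites: S_A = sum_{i even} S_i
SA = [sum(S[i][a] for i in range(0, N, 2)) for a in range(3)]
SA2 = sum(op @ op for op in SA)
print("GS energy:", evals[0].real)
print("<S_A^2>  :", np.vdot(gs, SA2 @ gs).real)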
Abstract:
In this work, we consider two-dimensional (2-D) binary channels in which the 2-D error patterns are constrained so that errors cannot occur in adjacent horizontal or vertical positions. We consider probabilistic and combinatorial models for such channels. A probabilistic model is obtained from a 2-D random field defined by Roth, Siegel and Wolf (2001). Based on the conjectured ergodicity of this random field, we obtain an expression for the capacity of the 2-D non-adjacent-errors channel. We also derive an upper bound for the asymptotic coding rate in the combinatorial model.
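As an illustration of the combinatorial model (not taken from the paper), the sketch below counts 2-D binary error patterns with no horizontally or vertically adjacent errors (the hard-square constraint) by a row-by-row transfer-matrix recursion, and reports the normalized rate (1/mn)·log2(count); the grid sizes are arbitrary.

# Count m x n binary error patterns in which no two errors are horizontally or
# vertically adjacent, by conditioning on the last placed row (transfer matrix).
import math
from itertools import product

def valid_rows(n):
    """All length-n binary rows with no two horizontally adjacent 1s."""
    return [r for r in product((0, 1), repeat=n)
            if all(not (a and b) for a, b in zip(r, r[1:]))]

def count_patterns(m, n):
    """Number of m x n binary arrays with no two adjacent 1s (horizontally or vertically)."""
    rows = valid_rows(n)
    # two stacked rows are compatible if they never share a 1 in the same column
    compat = [[all(not (a and b) for a, b in zip(r1, r2)) for r2 in rows]
              for r1 in rows]
    counts = [1] * len(rows)                 # arrays of height 1, grouped by their last row
    for _ in range(m - 1):
        counts = [sum(counts[i] for i in range(len(rows)) if compat[i][j])
                  for j in range(len(rows))]
    return sum(counts)

for m, n in [(4, 4), (6, 6), (8, 8)]:
    c = count_patterns(m, n)
    print(f"{m}x{n}: {c} admissible error patterns, rate = {math.log2(c) / (m * n):.4f}")

The growth rate of this count is the combinatorial ingredient that enters bounds of the kind discussed in the abstract; it is not itself the channel capacity.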
Abstract:
We address the task of mapping a given textual domain model (e.g., an industry-standard reference model) for a given domain (e.g., ERP) to the source code of an independently developed application in the same domain. This has applications in improving the understandability of an existing application, migrating it to a more flexible architecture, or integrating it with other related applications. We use the vector-space model to abstractly represent domain model elements as well as source-code artifacts. The key novelty in our approach is to leverage the relationships between source-code artifacts in a principled way to improve the mapping process. We describe experiments wherein we apply our approach to the task of matching two real, open-source applications to corresponding industry-standard domain models. We demonstrate the overall usefulness of our approach, as well as the role of our propagation techniques in improving the precision and recall of the mapping task.
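A minimal sketch of the general idea (not the authors' implementation): domain-model elements and code artifacts are embedded in a TF-IDF vector space, candidate mappings are scored by cosine similarity, and each artifact's score is then blended with the scores of artifacts it is related to. The example data, relationship graph and blending weight alpha are hypothetical.

# Vector-space matching of domain-model elements to code artifacts, with a simple
# similarity propagation step over artifact relationships (calls / containment).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

domain_elements = {
    "SalesOrder": "sales order customer order line item quantity price",
    "Invoice":    "invoice billing payment due amount customer",
}
code_artifacts = {
    "OrderService.java":   "create order add line item compute total price customer",
    "BillingManager.java": "generate invoice payment schedule amount due",
    "OrderDao.java":       "persist order entity database save load",
}
related = {"OrderService.java": ["OrderDao.java"],      # hypothetical relationship graph
           "BillingManager.java": ["OrderService.java"],
           "OrderDao.java": []}

docs = list(domain_elements.values()) + list(code_artifacts.values())
tfidf = TfidfVectorizer().fit(docs)
D = tfidf.transform(domain_elements.values())
C = tfidf.transform(code_artifacts.values())
sim = cosine_similarity(D, C)            # rows: domain elements, cols: artifacts

# propagation: an artifact inherits part of the score of the artifacts it relates to
alpha = 0.3
names = list(code_artifacts)
idx = {n: j for j, n in enumerate(names)}
prop = sim.copy()
for j, name in enumerate(names):
    nbrs = [idx[n] for n in related[name]]
    if nbrs:
        prop[:, j] = (1 - alpha) * sim[:, j] + alpha * sim[:, nbrs].mean(axis=1)

for i, elem in enumerate(domain_elements):
    best = max(range(len(names)), key=lambda j: prop[i, j])
    print(f"{elem:10s} -> {names[best]}  (score {prop[i, best]:.2f})")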
Abstract:
Co-crystal screening of the anti-HIV drug lamivudine was carried out with dicarboxylic acids as co-formers, and three of the resulting crystalline solids, two salts and a co-crystal, were studied with SCXRD, PXRD and FTIR spectroscopy. Salts of cytosine, a molecule that incorporates critical structural features of lamivudine, with the same co-formers were taken as model systems for IR spectroscopic studies of the synthons in the salts of lamivudine. It is shown that different systems with the same synthon exhibit very similar spectral signatures in the regions corresponding to the synthon absorptions. This again demonstrates the modular nature of the supramolecular synthon.
Abstract:
The formulation of higher order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher order structural models. The algorithm developed in this work sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom based automatic assembly algorithm, automatic differentiation and higher dimensional quadrature, the relevant finite element matrices are directly computed from the variational statement of elasticity and the higher order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are implicitly handled for arbitrary higher order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity. (C) 2013 Elsevier Inc. All rights reserved.
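The following sketch illustrates the underlying idea under simplifying assumptions: the element stiffness matrix of a two-node linear bar is obtained directly from its strain-energy functional by differentiating it twice, with Gauss quadrature over the element. A finite-difference Hessian stands in here for the paper's automatic differentiation, and the material and section data are placeholders.

# Element stiffness obtained as the Hessian of the stored-energy functional,
# rather than from explicitly derived governing equations.
import numpy as np

E, A, L = 210e9, 1e-4, 1.0          # illustrative material, section and element data

def strain_energy(u):
    """U(u) = integral of (EA/2)(du/dx)^2 over the element, by 2-point Gauss quadrature."""
    gauss_pts = [(-1 / np.sqrt(3), 1.0), (1 / np.sqrt(3), 1.0)]
    U = 0.0
    for xi, w in gauss_pts:
        dN_dx = np.array([-1.0, 1.0]) / L      # derivatives of the linear shape functions
        strain = dN_dx @ u
        U += w * (L / 2) * 0.5 * E * A * strain**2
    return U

def hessian(f, x, h=1e-6):
    """Central finite-difference Hessian (stand-in for automatic differentiation)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

K = hessian(strain_energy, np.zeros(2))
print(K)            # approximately (EA/L) * [[1, -1], [-1, 1]]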
Abstract:
Tuberculosis (TB) is a life-threatening disease caused by infection with Mycobacterium tuberculosis (Mtb). Since most TB strains have become resistant to various existing drugs, the development of effective novel drug candidates to combat this disease is a pressing need. In spite of intensive research world-wide, the success rate of discovering a new anti-TB drug is very poor. Therefore, novel drug discovery methods have to be tried. We have used a rule-based computational method that utilizes a vertex index, named the 'distance exponent index' D^x (taken here with x = -4), for predicting the anti-TB activity of a series of acid alkyl ester derivatives. The method is meant to identify activity-related substructures from a series of compounds and predict the activity of a compound on that basis. The high degree of successful prediction in the present study suggests that this method may be useful in discovering effective anti-TB compounds. It is also apparent that substructural approaches may be leveraged for a wide range of purposes in computer-aided drug design.
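As an illustration only, the sketch below implements one plausible reading of a distance-based vertex index, D^x = sum over vertex pairs of d(i,j)^x with x = -4, on a hydrogen-suppressed molecular graph; the precise definition of the index and the rule-based activity prediction should be taken from the paper itself.

# One plausible reading of a distance exponent index on a molecular graph.
import networkx as nx

def distance_exponent_index(g, x=-4):
    """Sum of d(i,j)**x over all unordered vertex pairs of the graph."""
    dist = dict(nx.all_pairs_shortest_path_length(g))
    nodes = list(g)
    return sum(dist[u][v] ** x
               for i, u in enumerate(nodes)
               for v in nodes[i + 1:])

# hydrogen-suppressed graph of n-butane: C-C-C-C
butane = nx.path_graph(4)
print(distance_exponent_index(butane))   # 3*1 + 2*(1/2**4) + 1/3**4 ~ 3.137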
Abstract:
Eleven GCMs (BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCMI, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5 degrees x 2.5 degrees) for the climate variable `precipitation rate' using 5 performance indicators. Performance indicators used were the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias correction methodology to remove the systematic biases in GCM simulations. The Entropy method was employed to obtain weights of these 5 indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method of Enrichment Evaluation). An equal weight scenario (assigning 0.2 weight for each indicator) was also used to rank the GCMs. An effort was also made to rank GCMs for 4 river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the Entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCCM2.0, UKMO-HADCM3, MPIECHAM4 and UKMO-HADGEM1 is suitable for India. The methodology proposed can be extended to rank GCMs for any selected region.
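A compact sketch of the two decision-analysis steps named in the abstract, with made-up numbers: entropy weights are derived from a normalized indicator matrix, and alternatives are then ranked by PROMETHEE-2 net flows using the "usual" (strict-preference) criterion. All indicators are treated here as benefit-type, which assumes error-based indicators were already inverted.

# Entropy weighting of indicators followed by PROMETHEE-2 net-flow ranking.
import numpy as np

# rows = GCMs, columns = indicators (illustrative positive scores only)
X = np.array([[0.80, 0.70, 0.60, 0.65, 0.75],
              [0.60, 0.80, 0.70, 0.60, 0.65],
              [0.90, 0.60, 0.65, 0.70, 0.60]])
m, n = X.shape

# entropy weights: low-entropy (more discriminating) indicators get higher weight
P = X / X.sum(axis=0)
e = -(P * np.log(P)).sum(axis=0) / np.log(m)
w = (1 - e) / (1 - e).sum()

# PROMETHEE-2 with the usual criterion: preference 1 if strictly better, else 0
pi = np.zeros((m, m))
for a in range(m):
    for b in range(m):
        if a != b:
            pi[a, b] = np.sum(w * (X[a] > X[b]))
phi = (pi.sum(axis=1) - pi.sum(axis=0)) / (m - 1)      # net outranking flows
print("weights :", np.round(w, 3))
print("net flow:", np.round(phi, 3), "-> ranking:", np.argsort(-phi))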
Abstract:
Using numerical diagonalization we study the crossover among different random matrix ensembles (Poissonian, Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE) and Gaussian symplectic ensemble (GSE)) realized in two different microscopic models. The specific diagnostic tool used to study the crossovers is the level spacing distribution. The first model is a one-dimensional lattice model of interacting hard-core bosons (or equivalently spin 1/2 objects) and the other a higher dimensional model of non-interacting particles with disorder and spin-orbit coupling. We find that the perturbation causing the crossover among the different ensembles scales to zero with system size as a power law with an exponent that depends on the ensembles between which the crossover takes place. This exponent is independent of microscopic details of the perturbation. We also find that the crossover from the Poissonian ensemble to the other three is dominated by the Poissonian to GOE crossover which introduces level repulsion while the crossover from GOE to GUE or GOE to GSE associated with symmetry breaking introduces a subdominant contribution. We also conjecture that the exponent is dependent on whether the system contains interactions among the elementary degrees of freedom or not and is independent of the dimensionality of the system.
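For orientation, the sketch below reproduces the diagnostic used here on generic random matrices rather than the paper's microscopic models: it compares nearest-neighbour level spacing distributions for uncorrelated (Poissonian) levels and for the GOE, where level repulsion suppresses P(s) near s = 0. The crude mean-spacing normalization stands in for proper spectral unfolding.

# Level spacing distributions: Poissonian levels vs GOE matrices.
import numpy as np

rng = np.random.default_rng(0)
dim, samples = 200, 200

def spacings(levels):
    s = np.diff(np.sort(levels))
    return s / s.mean()                         # crude unfolding: unit mean spacing

goe, poisson = [], []
for _ in range(samples):
    A = rng.normal(size=(dim, dim))
    H = (A + A.T) / np.sqrt(2)                  # GOE matrix
    ev = np.linalg.eigvalsh(H)
    bulk = ev[dim // 4: 3 * dim // 4]           # keep the bulk of the spectrum
    goe.extend(spacings(bulk))
    poisson.extend(spacings(rng.uniform(size=dim)))

# GOE spacings follow the Wigner surmise ~ (pi/2) s exp(-pi s^2/4): level repulsion;
# Poissonian spacings follow exp(-s): no repulsion.
hist_goe, edges = np.histogram(goe, bins=30, range=(0, 3), density=True)
hist_poi, _ = np.histogram(poisson, bins=30, range=(0, 3), density=True)
print("P(s) near s=0 :  GOE ~ %.2f,  Poisson ~ %.2f" % (hist_goe[0], hist_poi[0]))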
Abstract:
We compute the one loop corrections to the CP-even Higgs mass matrix in the supersymmetric inverse seesaw model to single out the different cases where the radiative corrections from the neutrino sector could become important. It is found that there could be a significant enhancement in the Higgs mass even for Dirac neutrino masses of O(30) GeV if the left-handed sneutrino soft mass is comparable to or larger than the right-handed neutrino mass. In the case where right-handed neutrino masses are significantly larger than the supersymmetry breaking scale, the corrections can at most account for an upward shift of 3 GeV. For very heavy multi-TeV sneutrinos, the corrections replicate the stop corrections at 1-loop. We further show that general gauge mediation with the inverse seesaw model naturally accommodates a 125 GeV Higgs with TeV scale stops. (C) 2014 The Authors. Published by Elsevier B.V.
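For orientation, the familiar leading-log one-loop stop contribution that the heavy-sneutrino corrections are said to replicate can be written (a standard textbook result quoted for context, not taken from this paper; conventions with v ≈ 174 GeV):

\Delta m_h^2 \simeq \frac{3\, m_t^4}{4\pi^2 v^2}\left[\ln\frac{M_S^2}{m_t^2} + \frac{X_t^2}{M_S^2}\left(1 - \frac{X_t^2}{12\, M_S^2}\right)\right], \qquad X_t = A_t - \mu\cot\beta, \quad M_S^2 = m_{\tilde t_1} m_{\tilde t_2}.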
Abstract:
The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable, even for restricted graph classes like bipartite, co-bipartite and split graphs, within an O(n^(1-ε)) factor for any ε > 0 in polynomial time, unless NP = ZPP. For any well known graph class of unbounded boxicity, no polynomial-time algorithm is known that computes boxicity to within an n^(1-ε) factor, for any ε > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs, i.e., intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which could be as large as Ω(n). We give a (2 + 1/k)-factor (resp. (2 + ⌈log n⌉/k)-factor) polynomial time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k >= 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive two approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both these cases, and in O(mn + kn^2) = O(n^3) time we also get their corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive two approximation algorithm directly works for any proper circular arc graph, since their NCA models can be computed in polynomial time. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
The problem of finding an optimal vertex cover in a graph is a classic NP-complete problem, and is a special case of the hitting set question. On the other hand, the hitting set problem, when asked in the context of induced geometric objects, often turns out to be exactly the vertex cover problem on restricted classes of graphs. In this work we explore a particular instance of such a phenomenon. We consider the problem of hitting all axis-parallel slabs induced by a point set P, and show that it is equivalent to the problem of finding a vertex cover on a graph whose edge set is the union of two Hamiltonian paths. We show the latter problem to be NP-complete, and also give an algorithm to find a vertex cover of size at most k, on graphs of maximum degree four, whose running time is 1.2637^k n^O(1).
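For contrast with the 1.2637^k bound, the sketch below shows the classical O(2^k)-time bounded-search-tree test for a vertex cover of size at most k (pick an uncovered edge and branch on which endpoint enters the cover); it is a baseline illustration, not the paper's refined branching for maximum-degree-4 graphs. The example instance is a union of two Hamiltonian paths, the structure studied in the paper.

# Classical 2^k branching for parameterized vertex cover.
def vertex_cover_at_most_k(edges, k):
    """Return a vertex cover of size <= k for the given edge list, or None."""
    uncovered = list(edges)
    if not uncovered:
        return set()
    if k == 0:
        return None
    u, v = uncovered[0]                      # some edge must be covered by u or v
    for chosen in (u, v):
        rest = [e for e in uncovered if chosen not in e]
        sub = vertex_cover_at_most_k(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

# union of two Hamiltonian paths on 6 vertices
path1 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
path2 = [(0, 2), (2, 4), (4, 1), (1, 3), (3, 5)]
print(vertex_cover_at_most_k(path1 + path2, 4))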
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting the station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. The principal component analysis (PCA) and fuzzy c-means clustering method (FCM) are used as part of the downscaling model to reduce the dimensionality of the dataset and identify the clusters in the data, respectively. Evaluation of the performances of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) compared to the spatial interpolation technique used in earlier studies.
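A skeleton of this type of downscaling pipeline (illustrative only, with random numbers standing in for NCEP/GCM predictor fields and observed rainfall, and with the fuzzy c-means clustering step omitted): large-scale predictors are reduced with PCA, and a support-vector regression is calibrated to estimate monthly rainfall.

# PCA + SVR downscaling skeleton with synthetic stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_months, n_predictors = 360, 60          # e.g. 30 years x 12 months, gridded predictors
X = rng.normal(size=(n_months, n_predictors))                 # MSLP, temperature, humidity, winds ...
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=n_months)   # "monthly rainfall"

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVR(C=10.0, epsilon=0.5))
model.fit(X[:300], y[:300])               # calibrate on the first 25 "years"
pred = model.predict(X[300:])
print("validation RMSE:", np.sqrt(np.mean((pred - y[300:]) ** 2)))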
Abstract:
Vernacular dwellings are well-suited climate-responsive designs that adopt local materials and skills to support comfortable indoor environments in response to local climatic conditions. These naturally ventilated passive dwellings have enabled civilizations to sustain themselves even in extreme climatic conditions. The design and the physiological resilience of the inhabitants have coevolved to be attuned to local climatic and environmental conditions. Such adaptations have perplexed modern theories of human thermal comfort, which evolved in the era of electricity and air-conditioned buildings. Local vernacular building elements like rubble walls and mud roofs are giving way to burnt brick walls and reinforced cement concrete and tin roofs. Over 60% of the Indian population is rural, and the implications of such transitions for thermal comfort and energy in buildings are crucial to understand. The types of energy use associated with a building's life cycle include its embodied energy, operational and maintenance energy, and demolition and disposal energy. Embodied energy (EE) represents the total energy consumed in constructing the building, i.e., the embodied energy of building materials, material transportation energy and building construction energy. The embodied energy of building materials is the major contributor to the embodied energy of buildings. Operational energy (OE) in buildings, mainly contributed by space conditioning and lighting requirements, depends on the climatic conditions of the region and the comfort requirements of the building occupants. Less energy-intensive natural materials are used for traditional buildings, and the EE of traditional buildings is therefore low. The transition in materials has a significant impact on the embodied energy of vernacular dwellings: the use of manufactured, energy-intensive materials like brick, cement, steel and glass contributes to high embodied energy in these dwellings. This paper studies the increase in EE of the dwelling attributable to the change in wall materials. Climatic location significantly influences operational energy in dwellings: buildings located in regions experiencing extreme climatic conditions require more operational energy to satisfy heating and cooling energy demands throughout the year. Traditional buildings adopt passive techniques or non-mechanical methods for space conditioning to overcome the vagaries of extreme climatic variations and hence require less operational energy. This study assesses operational energy in a traditional dwelling with regard to change in wall material and climatic location; OE in the dwellings has been assessed for hot-dry, warm-humid and moderate climatic zones. The choice of thermal comfort model is yet another factor that greatly influences operational energy assessment in buildings. The paper adopts two popular thermal comfort models, viz., the ASHRAE comfort standards and the TSI of Sharma and Ali, to investigate thermal comfort aspects and the impact of these comfort models on OE assessment in traditional dwellings. A naturally ventilated vernacular dwelling in Sugganahalli, a village close to Bangalore (India), set in a warm-humid climate, is considered for the present investigations on the impact of transitions in building materials, change in climatic location and choice of thermal comfort model on energy in buildings. The study includes rigorous real-time monitoring of the thermal performance of the dwelling. Dynamic simulation models validated by the measured data have also been adopted to determine the impact of the transition from vernacular to modern material configurations. The results of the study and an appraisal of appropriate thermal comfort standards for computing operational energy are presented and discussed in this paper. (c) 2014 K.I. Praseeda. Published by Elsevier Ltd.
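A back-of-the-envelope sketch of the embodied-energy bookkeeping involved when a wall material changes; the energy coefficients and wall area below are purely hypothetical placeholders, not values from the study.

# Embodied-energy comparison for two wall-material options (placeholder data).
wall_options = {
    # material: embodied energy of materials per m2 of wall, in MJ/m2 (hypothetical)
    "random rubble masonry": 350.0,
    "burnt brick masonry":   1200.0,
}
wall_area_m2 = 90.0          # hypothetical dwelling wall area

for material, ee_per_m2 in wall_options.items():
    total_GJ = ee_per_m2 * wall_area_m2 / 1000.0
    print(f"{material:22s}: {total_GJ:6.1f} GJ embodied energy in walls")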
Abstract:
We carry out an extensive numerical study of the dynamics of spiral waves of electrical activation, in the presence of periodic deformation (PD) in two-dimensional simulation domains, in the biophysically realistic mathematical models of human ventricular tissue due to (a) ten Tusscher and Panfilov (the TP06 model) and (b) ten Tusscher, Noble, Noble, and Panfilov (the TNNP04 model). We first consider simulations in cable-type domains, in which we calculate the conduction velocity θ and the wavelength λ of a plane wave; we show that PD leads to a periodic, spatial modulation of θ and a temporally periodic modulation of λ; both these modulations depend on the amplitude and frequency of the PD. We then examine three types of initial conditions for both the TP06 and TNNP04 models and show that the imposition of PD leads to a rich variety of spatiotemporal patterns in the transmembrane potential, including states with a single rotating spiral (RS) wave, a spiral-turbulence (ST) state with a single meandering spiral, an ST state with multiple broken spirals, and a state (SA) in which all spirals are absorbed at the boundaries of our simulation domain. We find, for both the TP06 and TNNP04 models, that spiral-wave dynamics depends sensitively on the amplitude and frequency of PD and on the initial condition. We examine how these different types of spiral-wave states can be eliminated in the presence of PD by the application of low-amplitude pulses by square- and rectangular-mesh suppression techniques. We suggest specific experiments that can test the results of our simulations.
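As a much-simplified stand-in for the TP06/TNNP04 ionic models, the sketch below evolves the two-variable Barkley excitable-medium model on a static 2-D domain (no periodic deformation) with explicit Euler time stepping; this is the classical minimal setting in which a broken wave front is intended to curl up into a rotating spiral.

# Minimal 2-D excitable-medium (Barkley) simulation with a spiral-initiating
# cross-field initial condition; illustrative only.
import numpy as np

nx_, ny_ = 200, 200
a, b, eps = 0.75, 0.06, 0.02          # Barkley parameters in the excitable regime
D, dt, dx = 1.0, 0.02, 0.5

u = np.zeros((ny_, nx_)); v = np.zeros((ny_, nx_))
u[:, :nx_ // 2] = 1.0                  # excited left half (broken plane wave) ...
v[ny_ // 2:, :] = a / 2.0              # ... with a refractory lower half

def laplacian(f):
    f = np.pad(f, 1, mode="edge")      # zero-flux boundary conditions
    return (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
            - 4 * f[1:-1, 1:-1]) / dx**2

for step in range(4000):
    uth = (v + b) / a                  # excitation threshold set by the slow variable
    du = (u * (1 - u) * (u - uth)) / eps + D * laplacian(u)
    dv = u - v
    u += dt * du
    v += dt * dv

print("max(u) after the run:", u.max())   # stays O(1) if re-entrant activity persists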
Abstract:
The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
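A small sketch of the kind of measures discussed, with hypothetical numbers and simplified definitions (not the paper's exact formulae): a scale-independent error (coefficient of variation of the RMSE), a reliability measure (fraction of predictions within a tolerance band), and an asymmetric cost measure that penalizes under-prediction more heavily, as a demand-response operator might.

# Application-sensitive evaluation of an energy-use prediction (placeholder data).
import numpy as np

observed  = np.array([10.0, 12.5, 9.0, 14.0, 11.0])   # e.g. hourly energy use (kWh)
predicted = np.array([11.0, 12.0, 8.0, 15.5, 10.5])

err = predicted - observed
cvrmse = np.sqrt(np.mean(err ** 2)) / observed.mean()            # scale independence
reliability = np.mean(np.abs(err) <= 0.1 * observed)             # within a 10% band
cost = np.where(err < 0, 2.0 * np.abs(err), 1.0 * err).sum()     # under-prediction costs 2x

print(f"CVRMSE={cvrmse:.3f}  reliability={reliability:.2f}  cost={cost:.1f}")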