949 results for Kähler-Einstein Metrics
Abstract:
In the presence of a synthetic non-Abelian gauge field that produces a Rashba-like spin-orbit interaction, a collection of weakly interacting fermions undergoes a crossover from a Bardeen-Cooper-Schrieffer (BCS) ground state to a Bose-Einstein condensate (BEC) ground state when the strength of the gauge field is increased (Vyasanakere et al. 2011 Phys. Rev. B 84 014512). The BEC obtained at large gauge coupling strengths is a condensate of tightly bound bosonic fermion pairs. The properties of these bosons are solely determined by the Rashba gauge field; hence they are called rashbons. In this paper, we conduct a systematic study of the properties of rashbons and their dispersion. This study reveals a new qualitative aspect of the problem of interacting fermions in non-Abelian gauge fields, namely that the rashbon state ceases to exist when the center-of-mass momentum of the fermions exceeds a critical value of the order of the gauge coupling strength. The study allows us to estimate the transition temperature of the rashbon BEC and suggests a route to enhance the exponentially small transition temperature of a system with fixed weak attraction to the order of the Fermi temperature by tuning the strength of the non-Abelian gauge field. The nature of the rashbon dispersion, and in particular the absence of rashbon states at large momenta, suggests a regime in parameter space where the normal state of the system will be a dynamical mixture of uncondensed rashbons and unpaired helical fermions. Such a state should show many novel features, including pseudogap physics.
Abstract:
Convergence of the vast sequence space of proteins into a highly restricted fold/conformational space suggests a simple yet unique underlying mechanism of protein folding that has been the subject of much debate in the last several decades. One of the major challenges in understanding protein folding, and in in silico protein structure prediction, is the discrimination of non-native structures/decoys from the native structure. Applications of knowledge-based potentials to attain this goal have been extensively reported in the literature. Scoring functions based on accessible surface area and amino acid neighbourhood considerations have also been used to discriminate decoys from native structures. In this article, we explore the potential of protein structure network (PSN) parameters to validate native proteins against a large number of decoy structures generated by diverse methods. We are guided by two principles: (a) the PSNs capture local properties from a global perspective and (b) inclusion of non-covalent interactions at the all-atom level, including the side-chain atoms, in the network construction accommodates sequence-dependent features. Several network parameters, such as the size of the largest cluster, community size and clustering coefficient, are evaluated and scored on the basis of the rank of the native structures and the Z-scores. The network analysis of decoy structures highlights the importance of the global properties contributing to the uniqueness of native structures. The analysis also shows that the network parameters can be used as metrics to identify native structures and to filter out non-native structures/decoys across a large number of data-sets; they thus also have potential applications in the protein `structure prediction' problem.
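The global network parameters named above can be sketched in a few lines. This is a minimal illustration on a toy contact graph; the paper's actual PSNs are built from all-atom non-covalent interactions, with edge criteria not reproduced here:

```python
from collections import defaultdict

def largest_cluster_size(edges):
    """Size of the largest connected component of a contact graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            comp += 1
            for nb in adj[node] - seen:
                seen.add(nb)
                stack.append(nb)
        best = max(best, comp)
    return best

def clustering_coefficient(edges, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))
```

For a toy residue-contact graph such as `[(1, 2), (2, 3), (1, 3), (3, 4), (5, 6)]`, the largest cluster has four nodes, and node 3 has clustering coefficient 1/3, since only one of its three neighbour pairs is itself in contact.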
Abstract:
Niche differentiation has been proposed as an explanation for rarity in species assemblages. To test this hypothesis requires quantifying the ecological similarity of species. This similarity can potentially be estimated by using phylogenetic relatedness. In this study, we predicted that if niche differentiation does explain the co-occurrence of rare and common species, then rare species should contribute greatly to the overall community phylogenetic diversity (PD), abundance should have phylogenetic signal, and common and rare species should be phylogenetically dissimilar. We tested these predictions by developing a novel method that integrates species rank abundance distributions with phylogenetic trees and trend analyses, to examine the relative contribution of individual species to the overall community PD. We then supplement this approach with analyses of phylogenetic signal in abundances and measures of phylogenetic similarity within and between rare and common species groups. We applied this analytical approach to 15 long-term temperate and tropical forest dynamics plots from around the world. We show that the niche differentiation hypothesis is supported in six of the nine gap-dominated forests but is rejected in the six disturbance-dominated and three gap-dominated forests. We also show that the three metrics utilized in this study each provide unique but corroborating information regarding the phylogenetic distribution of rarity in communities.
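A species' contribution to community PD can be sketched under Faith's PD (the total branch length spanned by a set of tips). The tip-to-root edge encoding below is an illustrative assumption; the paper's method additionally integrates rank abundance distributions and trend analyses, which are not reproduced here:

```python
def faith_pd(tip_edges, tips):
    """Faith's PD: total branch length spanned by a set of tips.
    tip_edges maps each tip to the (edge_id, length) pairs on its
    path to the root; shared edges are counted once via the union."""
    edges = {}
    for t in tips:
        for eid, length in tip_edges[t]:
            edges[eid] = length
    return sum(edges.values())

def pd_contribution(tip_edges, species):
    """Loss in community PD when one species is removed."""
    all_tips = set(tip_edges)
    return faith_pd(tip_edges, all_tips) - faith_pd(tip_edges, all_tips - {species})
```

Removing a tip that sits on a long, unshared branch removes that entire branch length from the community PD, which is why phylogenetically distinct rare species can contribute disproportionately.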
Abstract:
Urbanisation is a dynamic, complex phenomenon involving large-scale changes in land use at local levels. Analyses of land-use changes in urban environments provide a historical perspective of land use and an opportunity to assess the spatial patterns, correlations, trends, rate and impacts of the change, which would help in better regional planning and good governance of the region. The main objective of this research is to quantify the urban dynamics using temporal remote sensing data with the help of well-established landscape metrics. Bangalore, one of the most rapidly urbanising landscapes in India, has been chosen for this investigation. The complex process of urban sprawl was modelled using spatio-temporal analysis. Land use analyses show 584% growth in built-up area during the last four decades, with a decline of vegetation by 66% and of water bodies by 74%. Analyses of the temporal data reveal an increase in urban built-up area of 342.83% (during 1973-1992), 129.56% (during 1992-1999), 106.7% (1999-2002), 114.51% (2002-2006) and 126.19% from 2006 to 2010. The study area was divided into four zones, and each zone was further divided into 17 concentric circles of incrementally increasing (1 km) radius, to understand the patterns and extent of urbanisation at local levels. The urban density gradient illustrates the radial pattern of urbanisation for the period 1973-2010: Bangalore grew radially from 1973 to 2010, indicating that urbanisation has intensified from the central core and reached the periphery of Greater Bangalore. Shannon's entropy and alpha and beta population densities were computed to understand the level of urbanisation at local levels. Shannon's entropy values of recent times confirm dispersed, haphazard urban growth in the city, particularly in its outskirts. This also illustrates the extent of influence of the drivers of urbanisation in various directions. Landscape metrics provided in-depth knowledge about the sprawl.
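Shannon's entropy as used in sprawl analysis can be sketched as follows. The zone values are hypothetical, and the paper's exact normalisation (log base, comparison against log n) is not reproduced here:

```python
import math

def shannon_entropy(builtup_by_zone):
    """Shannon's entropy of built-up area across n zones.

    Values close to log(n) indicate dispersed (sprawling) growth;
    values near 0 indicate growth concentrated in a few zones.
    """
    total = sum(builtup_by_zone)
    ps = [b / total for b in builtup_by_zone if b > 0]
    return -sum(p * math.log(p) for p in ps)
```

An entropy close to log(n) for n zones signals built-up area spread evenly across the zones (dispersed sprawl), while an entropy near zero signals compact growth in a few zones.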
Principal component analysis helped in prioritising the metrics for detailed analyses. The results clearly indicate that the whole landscape is aggregating into a single large patch in 2010, compared to earlier years, which were dominated by several small patches. The large-scale conversion of small patches into a single large patch can be seen from 2006 to 2010. In 2010, patches are maximally aggregated, indicating that the city is becoming more compact and more urbanised in recent years. Bangalore was the most sought-after destination for its climatic conditions and the availability of various facilities (land availability, economy, political factors) compared to other cities. The growth into a single urban patch can be attributed to rapid urbanisation coupled with industrialisation. Monitoring of growth through landscape metrics helps to maintain and manage the natural resources. (C) 2012 Elsevier B.V. All rights reserved.
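One common way PCA is used to prioritise metrics is to rank them by the magnitude of their loadings on the leading principal component; the sketch below assumes that convention (the paper does not state its exact criterion), with hypothetical metric names:

```python
import numpy as np

def prioritise_metrics(X, names):
    """Rank metrics by the magnitude of their loading on the first
    principal component of the standardised data matrix X
    (rows = observations, columns = metrics)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # Eigen-decomposition of the (scaled) correlation matrix
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]  # loading vector of PC1
    order = np.argsort(-np.abs(pc1))
    return [names[i] for i in order]
```

With toy data in which two metrics vary together and dominate PC1, a third, weakly related metric is ranked last, mirroring how PCA singles out the metrics that carry most of the landscape's variation.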
Abstract:
The impact of gate-to-source/drain overlap length on the performance and variability of 65 nm CMOS is presented. The device and circuit variability is investigated as a function of three significant process parameters, namely gate length, gate oxide thickness and halo dose. The comparison is made for three different values of gate-to-source/drain overlap length, namely 5 nm, 0 nm and -5 nm, and at two different leakage currents of 10 nA and 100 nA. A worst-case-analysis approach is used to study the inverter delay fluctuations at the process corners. The drive current of the device is taken as the performance metric for device robustness, and the stage delay of an inverter for circuit robustness. The design trade-off between performance and variability is demonstrated at both the device and circuit levels: a larger overlap length leads to better performance, while a smaller overlap length results in better variability, so performance trades off against variability as the overlap length is varied. An optimal overlap length of 0 nm is recommended at 65 nm gate length, for a reasonable combination of performance and variability.
Abstract:
We compute a certain class of corrections to (specific) screening lengths in strongly coupled non-Abelian plasmas using the AdS/CFT correspondence. In this holographic framework, these corrections arise from various higher-curvature interactions modifying the leading Einstein gravity action. The changes in the screening lengths are perturbative in inverse powers of the 't Hooft coupling or of the number of colors, as can be made precise in the context where the dual gauge theory is superconformal. We also compare the results of these holographic calculations to lattice results for the analogous screening lengths in QCD. In particular, we apply these results within the program of making quantitative comparisons between the strongly coupled quark-gluon plasma and holographic descriptions of conformal field theory. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
The generation and study of synthetic gauge fields have enhanced the possibility of using cold atom systems as quantum emulators of condensed matter Hamiltonians. In this article we describe the physics of interacting spin-1/2 fermions in synthetic non-Abelian gauge fields which induce a Rashba spin-orbit interaction on the motion of the fermions. We show that the fermion system can evolve into a Bose-Einstein condensate of a novel boson which we call the rashbon. The rashbon-rashbon interaction is shown to be independent of the interaction between the constituent fermions. We also show that spin-orbit coupling can help enhance the superfluid transition temperature of weak superfluids to the order of the Fermi temperature. A non-Abelian gauge field, when used in conjunction with another potential, can generate interesting Hamiltonians such as that of a magnetic monopole.
Abstract:
In a communication system in which K nodes communicate with a central sink node, the following problem of selection often occurs. Each node maintains a preference number called a metric, which is not known to other nodes. The sink node must find the `best' node with the largest metric. The local nature of the metrics requires the selection process to be distributed. Further, the selection needs to be fast in order to increase the fraction of time available for data transmission using the selected node and to handle time-varying environments. While several selection schemes have been proposed in the literature, each has its own shortcomings. We propose a novel, distributed selection scheme that generalizes the best features of the timer scheme, which requires minimal feedback but does not guarantee successful selection, and the splitting scheme, which requires more feedback but guarantees successful selection. The proposed scheme introduces several new ideas into the design of the timer and splitting schemes. It explicitly accounts for feedback overheads and guarantees selection of the best node. We analyze and optimize the performance of the scheme and show that it is scalable, reliable, and fast. We also present new insights about the optimal timer scheme.
Abstract:
In this paper we consider the downlink of an OFDM cellular system. The objective is to maximise the system utility by means of fractional frequency reuse and interference planning. The problem is a joint scheduling and power allocation problem. Using a gradient scheduling scheme, it is transformed into the problem of maximising a weighted sum-rate at each time slot. At each slot, an iterative scheduling and power allocation algorithm is employed to address the weighted sum-rate maximisation problem. The power allocation problem in this algorithm is a nonconvex optimisation problem, and we study several algorithms that can tackle it. We propose two modifications to these algorithms to address practical and computational feasibility. Finally, we compare the performance of our algorithm with some existing algorithms on the basis of the achieved system utility metrics, and show that the practical considerations do not adversely affect the system performance.
Abstract:
We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke width variance, pulse width ratio, transient pixel density, entropy and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to estimate a priori the expected performance of the OCR. The suggested measures have been evaluated on many document images in different scripts. The quality of a document image is manually annotated by users to create a ground truth. The idea is to correlate the values of the measures with the user-annotated data: if the calculated measure matches the annotated description, the metric is accepted; else it is rejected. Of the set of metrics proposed, some are accepted and the rest are rejected. We have defined metrics that are easy to estimate. The metrics proposed in this paper are based on the feedback of home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the scripts and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Sometimes a document image containing broken characters is rated as a good document image by the evaluated metrics; this remains an unsolved challenge. The proposed measures work on grayscale document images and fail to provide reliable information on binarized document images.
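Among the listed measures, the entropy metric admits a compact sketch; grey-level histogram entropy is an assumption about the precise definition the paper uses:

```python
import math

def image_entropy(gray):
    """Shannon entropy (bits) of the grey-level histogram of a
    document image, given as a 2-D list of 0-255 intensities."""
    pixels = [p for row in gray for p in row]
    n = len(pixels)
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in hist.values())
```

A page with a broad spread of grey levels (noise, bleed-through, aged paper) yields higher entropy than a clean, near-bimodal text image, which is what makes entropy usable as a quality indicator.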
Abstract:
Mobile WiMAX is a burgeoning network technology with diverse applications, one of which is its use in VANETs. Performance metrics such as mean throughput and packet loss ratio for VANETs adopting 802.16e are computed through simulation. We then evaluate the same performance metrics for VANETs employing 802.11p, also known as WAVE (Wireless Access in Vehicular Environments). The proposed simulation model is close to reality, as we generated mobility traces for both cases using a traffic simulator (SUMO) and fed them into a network simulator (NS2), based on their operation in a typical urban VANET scenario. Subsequently, a VANET application called `Street Congestion Alert' was developed to assess the performance of the two technologies. For this application, TraCI is used to couple SUMO and NS2 in a feedback loop to set up a realistic simulation scenario. Our results show that Mobile WiMAX performs better than WAVE for larger network sizes.
Abstract:
Recent advances in the generation of synthetic gauge fields in cold atomic systems have stimulated interest in the physics of interacting bosons and fermions in them. In this paper, we discuss interacting two-component fermionic systems in uniform non-Abelian gauge fields that produce a spin-orbit interaction and uniform spin potentials. The two classes of gauge fields discussed are those that produce a Rashba spin-orbit interaction and the type of gauge fields (SM gauge fields) realised in experiments by the Shanxi and MIT groups. For high-symmetry Rashba gauge fields, a two-particle bound state exists even for a vanishingly small attractive interaction described by a scattering length. Upon increasing the strength of a Rashba gauge field, a finite density of weakly interacting fermions undergoes a crossover from a BCS-like ground state to a BEC state of a new kind of boson called the rashbon, whose properties are determined solely by the gauge field and not by the interaction between the fermions. The rashbon Bose-Einstein condensate (RBEC) is an intriguing state in which the rashbon-rashbon interactions are independent of the fermion-fermion interactions (scattering length). Furthermore, we show that the RBEC has a transition temperature of the order of the Fermi temperature, suggesting routes to enhance the transition temperatures of weakly interacting superfluids by tuning the spin-orbit coupling. For the SM gauge fields, we show that in one regime of parameters a pair of particles with finite centre-of-mass momentum is the most strongly bound. In other regimes of centre-of-mass momenta, there is no two-body bound state, but a resonance-like feature appears in the scattering continuum. In the many-body setting, this results in flow-enhanced pairing. Also, strongly interacting normal states utilising the scattering resonance can be created, opening the possibility of studying the properties of helical Fermi liquids.
This paper contains a general discussion of the physics of Feshbach resonance in a non-Abelian gauge field, where several novel features such as centre-of-mass-momentum-dependent effective interactions are shown. It is also shown that a uniform non-Abelian gauge field in conjunction with a spatial potential can be used to generate novel Hamiltonians; we discuss an explicit example of the generation of a monopole Hamiltonian.
Abstract:
A new approach that can easily incorporate any generic penalty function into diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions used include the quadratic (l(2)), absolute (l(1)), Cauchy and Geman-McClure penalties. The regularization parameter in each case was obtained automatically using the generalized cross-validation method. The reconstruction results were systematically compared with each other using quantitative metrics, such as relative error and Pearson correlation. The results indicate that, while the quadratic penalty may provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparseness-promoting penalties, such as l(1), Cauchy and Geman-McClure, have better utility in reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty performing best. (C) 2013 Optical Society of America
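The four penalty functions compared above have standard robust-estimation forms, sketched below; the scale parameter c and the exact normalisations are assumptions, since the paper's precise definitions are not reproduced here:

```python
import math

def l2(x):
    """Quadratic penalty: grows without bound, smooths strongly."""
    return 0.5 * x * x

def l1(x):
    """Absolute penalty: sparseness-promoting."""
    return abs(x)

def cauchy(x, c=1.0):
    """Cauchy (Lorentzian) penalty: grows only logarithmically."""
    return 0.5 * c * c * math.log(1.0 + (x / c) ** 2)

def geman_mcclure(x, c=1.0):
    """Geman-McClure penalty: saturates (bounded) for large residuals."""
    return 0.5 * x * x / (c * c + x * x)
```

The quadratic penalty grows without bound and so penalises sharp features heavily, while the Geman-McClure penalty saturates at 0.5 for large residuals, which is consistent with its reported advantage for high-contrast, complex-shaped targets.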
Abstract:
An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, which has the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. It uses a mapping from the channel gain, or, in general, a real-valued metric, to a timer value. The mapping is such that timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the inter-relationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple-access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
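The timer backoff idea can be sketched with one simple monotone mapping, the scaled complementary CDF of the metric. This choice is illustrative only: the paper derives the optimal mapping for discrete rate adaptation, and collisions between nearly simultaneous expiries are ignored here:

```python
def timer_value(metric, t_max, f_cdf):
    """Map a node's local metric to a timer value so that nodes with
    higher metrics transmit earlier. f_cdf is the metric's CDF."""
    return t_max * (1.0 - f_cdf(metric))

def select_best(metrics, t_max, f_cdf):
    """Each node starts its timer locally; the first timer to expire
    wins (collisions between near-simultaneous expiries ignored)."""
    return min(range(len(metrics)),
               key=lambda i: timer_value(metrics[i], t_max, f_cdf))
```

With metrics uniform on [0, 1] and the identity as CDF, a node with metric 0.9 gets the shortest timer, so selection completes without any node revealing its metric to the others.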
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents. Moreover, determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model developed based on real-world data obtained using a survey on human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property which we call expected weak insensitivity. We demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties, which makes our methodology attractive for scalable preference aggregation over large scale social networks. We conclude that our approach is superior to random polling while aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
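As a concrete instance of preference aggregation, here is a Borda-count sketch; the paper studies aggregation rules satisfying expected weak insensitivity in general, and Borda is used here only as a familiar example:

```python
from collections import defaultdict

def borda_aggregate(preferences):
    """Aggregate rankings with the Borda rule: in a ballot over m
    alternatives, the alternative at position pos (0 = best) earns
    m - 1 - pos points; alternatives are ranked by total score."""
    scores = defaultdict(int)
    for ranking in preferences:  # each ranking: best -> worst
        m = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] += m - 1 - pos
    # Sort by descending score, breaking ties alphabetically
    return sorted(scores, key=lambda a: (-scores[a], a))
```

In the paper's setting, the ballots would come only from the elicited critical nodes rather than from every agent, with the network model filling in for the rest.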