Abstract:
Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight < 1.5 kDa). The prediction accuracy of DEPTH is comparable to that of other geometry-based prediction methods, including LIGSITE, SURFNET and Pocket-Finder (all with Matthews correlation coefficient of ~0.4), over a testing set of 225 single- and multi-chain protein structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. Users can tune the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA values and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analyses based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small molecule docking exercises, in the context of protein functional annotation and drug discovery.
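To make the depth-SASA idea concrete, here is a minimal sketch (an assumed illustration, not the server's actual parameterization): the likelihood that a residue lines a binding cavity is estimated by binning the depth-SASA plane over labeled training residues. The function names, bin count and Laplace smoothing are hypothetical choices.

```python
import numpy as np

def fit_likelihood_table(depth, sasa, is_cavity, bins=20):
    """Bin the depth-SASA plane and estimate P(cavity | bin) per cell."""
    d_edges = np.linspace(depth.min(), depth.max(), bins + 1)
    s_edges = np.linspace(sasa.min(), sasa.max(), bins + 1)
    total, _, _ = np.histogram2d(depth, sasa, bins=[d_edges, s_edges])
    hits, _, _ = np.histogram2d(depth[is_cavity], sasa[is_cavity],
                                bins=[d_edges, s_edges])
    # Laplace smoothing avoids zero estimates in empty cells.
    table = (hits + 1.0) / (total + 2.0)
    return table, d_edges, s_edges

def cavity_likelihood(d, s, table, d_edges, s_edges):
    """Look up the estimated cavity likelihood for one residue's (depth, SASA)."""
    i = np.clip(np.searchsorted(d_edges, d) - 1, 0, table.shape[0] - 1)
    j = np.clip(np.searchsorted(s_edges, s) - 1, 0, table.shape[1] - 1)
    return table[i, j]
```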
Abstract:
This paper compares and analyzes the performance of distributed cophasing techniques for uplink transmission over wireless sensor networks. We focus on a time-division duplexing approach and exploit channel reciprocity to reduce the channel feedback requirement. We consider periodic broadcast of known pilot symbols by the fusion center (FC), and maximum likelihood estimation of the channel by the sensor nodes for the subsequent uplink cophasing transmission. We assume carrier and phase synchronization across the participating nodes for analytical tractability. We study binary signaling over frequency-flat fading channels, and quantify system performance measures such as the expected gain in received signal-to-noise ratio (SNR) and the average probability of error at the FC, as functions of the number of sensor nodes and the pilot overhead. Our results show that a modest amount of accumulated pilot SNR is sufficient to realize a large fraction of the maximum possible beamforming gain. We also investigate the performance gains obtained by censoring transmission at the sensors based on the estimated channel state, and the benefits obtained by using maximum ratio transmission (MRT) and truncated channel inversion (TCI) at the sensors in addition to cophasing transmission. Simulation results corroborate the theoretical expressions and show the relative performance benefits offered by the various schemes.
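A rough simulation sketch of the pilot-based cophasing idea (an assumed setup, not the paper's exact system model): each sensor forms an ML estimate of its channel from K pilot observations, pre-rotates its transmission by the estimated phase, and the received-SNR beamforming gain at the FC is averaged over fading realizations. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def beamforming_gain(N=50, K=4, pilot_snr_db=0.0, trials=2000):
    """Mean received-SNR gain of pilot-based cophasing with N sensors."""
    pilot_snr = 10 ** (pilot_snr_db / 10)
    gains = []
    for _ in range(trials):
        # Rayleigh fading channel per sensor.
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        # K unit pilots per sensor; the ML channel estimate is the mean of the
        # observations, so the phase estimate is the angle of their sum.
        noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) \
                / np.sqrt(2 * pilot_snr)
        y = h[:, None] + noise
        phase_hat = np.angle(y.sum(axis=1))
        # Cophasing only (equal gain): each sensor pre-rotates by -phase_hat.
        combined = np.sum(np.abs(h) * np.exp(1j * (np.angle(h) - phase_hat)))
        gains.append(np.abs(combined) ** 2)
    return np.mean(gains)

for snr_db in (-10, 0, 10):
    print(snr_db, beamforming_gain(pilot_snr_db=snr_db))
```

As the accumulated pilot SNR (K times the per-pilot SNR) grows, the printed gain approaches the perfect-phase-knowledge value E[(sum_i |h_i|)^2], consistent with the abstract's observation that modest pilot overhead captures most of the beamforming gain.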
Abstract:
Growing concern over the status of global and regional bioenergy resources has necessitated the analysis and monitoring of land cover and land use parameters on spatial and temporal scales. Knowledge of land cover and land use is very important in understanding natural resource utilization, conversion and management. Land cover, land use intensity and land use diversity are land quality indicators for sustainable land management. Optimal management of resources aids in maintaining the ecosystem balance and thereby ensures the sustainable development of a region. Thus, sustainable development of a region requires a synoptic ecosystem approach to the management of natural resources that relates the dynamics of natural variability to the effects of human intervention on key indicators of biodiversity and productivity. Spatial and temporal tools such as remote sensing (RS), geographic information systems (GIS) and the global positioning system (GPS) provide spatial and attribute data at regular intervals, and the functionalities of a decision support system aid in visualisation, querying and analysis, all of which support the sustainable management of natural resources. Remote sensing data and GIS technologies play an important role in spatially evaluating bioresource availability and demand. This paper explores various land cover and land use techniques that could be used for bioresource monitoring, considering the spatial data of Kolar district, Karnataka state, India. Slope and distance based vegetation indices are computed for qualitative and quantitative assessment of land cover using remote spectral measurements. Different-scale mapping of the land use pattern in Kolar district is done using supervised classification approaches. Slope based vegetation indices show the area under vegetation ranging from 47.65% to 49.05%, while distance based vegetation indices show a range from 40.40% to 47.41%. Land use analyses using the maximum likelihood classifier indicate that 46.69% is agricultural land, 42.33% is wasteland (barren land), 4.62% is built up, 3.07% is plantation, 2.77% is natural forest and 0.53% is water bodies. The comparative analysis of the various classifiers indicates that the Gaussian maximum likelihood classifier has the least errors. The computation of taluk-wise bioresource status shows that Chikballapur taluk has better availability of resources compared to other taluks in the district.
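As an illustration of a slope-based vegetation index, the following sketch computes NDVI from red and near-infrared reflectance bands and thresholds it to estimate the vegetated-area fraction; the band arrays and the 0.2 threshold are placeholders, not values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, a slope-based VI."""
    return (nir - red) / (nir + red + eps)

# Placeholder reflectance rasters standing in for satellite bands.
red = rng.random((512, 512), dtype=np.float32)
nir = rng.random((512, 512), dtype=np.float32)

vi = ndvi(nir, red)
veg_fraction = np.mean(vi > 0.2)   # pixels counted as vegetation
print(f"Vegetated area: {100 * veg_fraction:.2f}%")
```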
Abstract:
Urbanisation is the increase in the population of cities in proportion to the region's rural population. Urbanisation in India is very rapid, with the urban population growing at around 2.3 per cent per annum. Urban sprawl refers to dispersed development along highways, around the city and in the rural countryside, with implications such as the loss of agricultural land, open space and ecologically sensitive habitats. Sprawl is thus a pattern and pace of land use in which the rate of land consumed for urban purposes exceeds the rate of population growth, resulting in an inefficient and consumptive use of land and its associated resources. This unprecedented urbanisation trend due to a burgeoning population has posed serious challenges to decision makers in the city planning and management process, involving a plethora of issues such as infrastructure development, traffic congestion and basic amenities (electricity, water and sanitation). In this context, to aid decision makers in taking holistic approaches to city and urban planning, the analysis and visualisation of urban growth patterns and their impact on natural resources have gained importance. This communication analyses urbanisation patterns and trends using temporal remote sensing data, based on supervised learning using maximum likelihood estimation of multivariate normal density parameters and a Bayesian classification approach. The technique is implemented for Greater Bangalore, one of the fastest growing cities in the world, with Landsat data of 1973, 1992 and 2000, IRS LISS-3 data of 1999 and 2006, and MODIS data of 2002 and 2007. The study shows that there has been a growth of 466% in the urban areas of Greater Bangalore across 35 years (1973 to 2007). The study unravels the pattern of growth in Greater Bangalore and its implications for the local climate and natural resources, necessitating appropriate strategies for sustainable management.
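A minimal sketch of the classification approach described above, assuming per-band pixel features: class-conditional multivariate normal parameters are ML-estimated from training pixels, and each pixel is assigned to the class with the highest posterior under equal priors. Function names are hypothetical.

```python
import numpy as np

def fit_gaussian_classes(X_train, y_train):
    """ML estimates: per-class mean vector and covariance matrix."""
    params = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def classify(X, params):
    """Assign each pixel (row of X) to the class maximizing the log-density."""
    scores = []
    classes = sorted(params)
    for c in classes:
        mu, cov = params[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * (logdet + maha))   # log-likelihood up to a constant
    return np.array(classes)[np.argmax(scores, axis=0)]
```

With equal priors, maximizing the class-conditional likelihood is equivalent to Bayesian (maximum a posteriori) classification, which is why the two are used together in the abstract.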
Abstract:
A construction of a new family of distributed space-time codes (DSTCs) having full diversity and low maximum likelihood (ML) decoding complexity is provided for the two-phase-based cooperative diversity protocols of Jing-Hassibi and the recently proposed Generalized Non-orthogonal Amplify and Forward (GNAF) protocol of Rajan et al. The salient feature of the proposed DSTCs is that they satisfy the extra constraints imposed by the protocols and are also four-group ML decodable, which leads to a significant reduction in ML decoding complexity compared to all existing DSTC constructions. Moreover, these codes have a uniform distribution of power among the relays as well as in time. Simulation results also indicate that these codes perform better than the only known DSTC with the same rate and decoding complexity, namely the Coordinate Interleaved Orthogonal Design (CIOD). Furthermore, they perform very close to DSTCs from field extensions, which have the same rate but higher decoding complexity.
Abstract:
A Space-Time Block Code (STBC) in K symbols (variables) is called a g-group decodable STBC if its maximum-likelihood decoding metric can be written as a sum of g terms such that each term is a function of a subset of the K variables and each variable appears in only one term. In this paper we provide a general structure of the weight matrices of multi-group decodable codes using Clifford algebras. Without assuming the number of variables in each group to be the same, a method of explicitly constructing the weight matrices of full-diversity, delay-optimal g-group decodable codes is presented for an arbitrary number of antennas. For the special case of N_t = 2^a we construct two subclasses of codes: (i) a class of 2a-group decodable codes with rate a/2^(a-1), which is, equivalently, a class of Single-Symbol Decodable codes, and (ii) a class of (2a-2)-group decodable codes with rate (a-1)/2^(a-2), i.e., a class of Double-Symbol Decodable (DSD) codes. Simulation results show that the DSD codes of this paper perform better than previously known Quasi-Orthogonal Designs.
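In symbols, the defining decomposition reads as follows (a restatement of the definition above, with Gamma_i denoting the index set of group i and the Frobenius-norm ML metric assumed):

```latex
% g-group decodability: with received matrix Y, channel H, and codeword
% X(x_1,\dots,x_K), the ML metric splits over disjoint symbol groups
% \Gamma_1,\dots,\Gamma_g that partition \{1,\dots,K\}:
\| Y - X(x_1,\dots,x_K)\, H \|_F^2
  = \sum_{i=1}^{g} f_i\bigl( Y, H, \{ x_k : k \in \Gamma_i \} \bigr)
% Each group can therefore be ML-decoded independently, reducing the search
% from |\mathcal{A}|^K to \sum_{i} |\mathcal{A}|^{|\Gamma_i|} for a symbol
% alphabet \mathcal{A}.
```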
Abstract:
The Generalized Distributive Law (GDL) is a message passing algorithm which can efficiently solve a certain class of computational problems, and includes as special cases Viterbi's algorithm, the BCJR algorithm, the fast Fourier transform, and Turbo and LDPC decoding algorithms. In this paper, GDL-based maximum-likelihood (ML) decoding of Space-Time Block Codes (STBCs) is introduced and a sufficient condition for an STBC to admit low GDL decoding complexity is given. Fast decoding and multigroup decoding are the two algorithms used in the literature to ML decode STBCs with low complexity. An algorithm which exploits the advantages of both of these is called Conditional ML (CML) decoding. It is shown in this paper that the GDL decoding complexity of any STBC is upper bounded by its CML decoding complexity, and that there exist codes for which the GDL complexity is strictly less than the CML complexity. Explicit examples of two such families of STBCs are given in this paper. Thus the CML is in general suboptimal in reducing the ML decoding complexity of a code, and one should design codes with low GDL complexity rather than low CML complexity.
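A toy illustration of the distributive-law principle underlying the GDL (not the STBC decoder itself): minimizing a metric that factors as f1(x, y) + f2(y, z) by eliminating variables one at a time, which reduces the cost from O(A^3) to O(A^2) for alphabet size A. The random factor tables are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
A = 8                       # alphabet size for each variable
f1 = rng.random((A, A))     # f1[x, y]
f2 = rng.random((A, A))     # f2[y, z]

# Brute force: enumerate all (x, y, z) triples, O(A^3).
brute = min(f1[x, y] + f2[y, z]
            for x in range(A) for y in range(A) for z in range(A))

# GDL-style elimination: push min over x and z inside the sum, O(A^2).
m1 = f1.min(axis=0)         # m1[y] = min_x f1[x, y]
m2 = f2.min(axis=1)         # m2[y] = min_z f2[y, z]
gdl = (m1 + m2).min()       # min_y of the combined messages

assert np.isclose(brute, gdl)
print(brute, gdl)
```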
Abstract:
It has been shown recently that the maximum rate of 2-real-symbol (single-complex-symbol) maximum likelihood (ML) decodable square space-time block codes (STBCs) with unitary weight matrices is 2a/2^a complex symbols per channel use (cspcu) for 2^a transmit antennas [1]. These STBCs are obtained from Unitary Weight Designs (UWDs). In this paper, we show that the maximum rates for 3- and 4-real-symbol (2-complex-symbol) ML decodable square STBCs from UWDs, for 2^a transmit antennas, are 3(a-1)/2^a and 4(a-1)/2^a cspcu, respectively. STBCs achieving this maximum rate are constructed. A set of sufficient conditions on the signal set, required for these codes to achieve full diversity, is derived, along with expressions for their coding gain.
Abstract:
For a family/sequence of Space-Time Block Codes (STBCs) C1, C2, ..., with an increasing number of transmit antennas N_i and rates R_i complex symbols per channel use (cspcu), i = 1, 2, ..., the asymptotic normalized rate is defined as lim_{i→∞} R_i/N_i. A family of STBCs is said to be asymptotically-good if the asymptotic normalized rate is non-zero, i.e., when the rate scales as a non-zero fraction of the number of transmit antennas, and the family of STBCs is said to be asymptotically-optimal if the asymptotic normalized rate is 1, which is the maximum possible value. In this paper, we construct a new class of full-diversity STBCs that have the least maximum-likelihood (ML) decoding complexity among all known codes for any number of transmit antennas N > 1 and rates R > 1 cspcu. For a large set of (R, N) pairs, the new codes have lower ML decoding complexity than the codes already available in the literature. Among the new codes, the class of full-rate codes (R = N) is asymptotically-optimal and fast-decodable, and for N > 5 has lower ML decoding complexity than all other families of asymptotically-optimal, fast-decodable, full-diversity STBCs available in the literature. The construction of the new STBCs is facilitated by the following further contributions of this paper: (i) construction of a new class of asymptotically-good, full-diversity multigroup ML decodable codes, which not only includes STBCs for a larger set of antennas, but also either matches in rate or contains as a proper subset all other high-rate or asymptotically-good, delay-optimal, multigroup ML decodable codes available in the literature; (ii) construction of a new class of fast-group-decodable codes (codes that combine the low ML decoding complexity properties of multigroup ML decodable codes and fast-decodable codes) for all even numbers of transmit antennas and rates 1 < R ≤ 5/4; (iii) given a design with full-rank linear dispersion matrices, we show that a full-diversity STBC can be constructed from this design by encoding the real symbols independently using only regular PAM constellations.
Abstract:
Land cover (LC) and land use (LU) dynamics induced by human and natural processes play a major role in global as well as regional patterns of landscapes, influencing biodiversity, hydrology, ecology and climate. Changes in LC features resulting in forest fragmentation have posed direct threats to biodiversity, endangering the sustainability of ecological goods and services. Habitat fragmentation is of added concern as the residual spatial patterns mitigate or exacerbate edge effects. LU dynamics are obtained by classifying temporal remotely sensed satellite imagery of different spatial and spectral resolutions. This paper reviews five different image classification algorithms using spatio-temporal data of a temperate watershed in Himachal Pradesh, India. The Gaussian maximum likelihood classifier was found to be apt for analysing spatial patterns at the regional scale, based on accuracy assessment through error matrices and ROC (receiver operating characteristic) curves. The LU information thus derived was then used to assess spatial changes from temporal data using principal component analysis and correspondence analysis based image differencing. Forest area dynamics were further studied by analysing the different types of fragmentation through forest fragmentation models. The computed forest fragmentation and landscape metrics show a decline of interior intact forests, with a substantial increase in patch forest during 1972-2007.
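A short sketch of the error-matrix accuracy assessment mentioned above (the labels and counts are placeholders): the confusion matrix is built from reference versus predicted classes, and overall accuracy and the kappa coefficient are derived from it.

```python
import numpy as np

def error_matrix(reference, predicted, n_classes):
    """Confusion matrix: rows = reference class, columns = predicted class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        m[r, p] += 1
    return m

def accuracy_and_kappa(m):
    n = m.sum()
    po = np.trace(m) / n                                  # overall accuracy
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)                       # (accuracy, kappa)

ref = np.array([0, 0, 1, 1, 2, 2, 2, 1])    # placeholder ground-truth labels
pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])   # placeholder classifier output
m = error_matrix(ref, pred, 3)
print(m)
print(accuracy_and_kappa(m))
```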
Abstract:
Background & objectives: There is a need to develop an affordable and reliable tool for the hearing screening of neonates in resource constrained, medically underserved areas of developing nations. This study evaluates a strategy of health worker based screening of neonates using a low cost mechanical calibrated noisemaker, followed up with parental monitoring of age appropriate auditory milestones, for detecting severe-profound hearing impairment in infants by 6 months of age. Methods: A trained health worker under the supervision of a qualified audiologist screened 425 neonates, of whom 20 had confirmed severe-profound hearing impairment. Mechanical calibrated noisemakers of 50, 60, 70 and 80 dB (A) were used to elicit the behavioural responses. The parents of screened neonates were instructed to monitor the normal language and auditory milestones till 6 months of age. This strategy was validated against a reference standard consisting of a battery of tests, namely auditory brain stem response (ABR), otoacoustic emissions (OAE) and behavioural assessment at 2 years of age. Bayesian prevalence weighted measures of screening were calculated. Results: The sensitivity and specificity were high, with the fewest false positive referrals for the 70 and 80 dB (A) noisemakers. All the noisemakers had 100 per cent negative predictive value. The 70 and 80 dB (A) noisemakers had high positive likelihood ratios of 19 and 34, respectively. The differences between pre- and post-test positive probabilities were 43 and 58 for the 70 and 80 dB (A) noisemakers, respectively. Interpretation & conclusions: In a controlled setting, health workers with primary education can be trained to use a mechanical calibrated noisemaker made of locally available material to reliably screen for severe-profound hearing loss in neonates. The monitoring of auditory responses could be done by informed parents. Multi-centre field trials of this strategy need to be carried out to examine the feasibility of community health care workers using it in resource constrained settings of developing nations to implement an effective national neonatal hearing screening programme.
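For concreteness, a small sketch of the prevalence-weighted screening measures reported above, computed from a 2x2 table; the counts here are placeholders, not the study's data.

```python
def screening_measures(tp, fp, fn, tn):
    """Standard screening metrics from a 2x2 table of screen vs. reference."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)                 # positive likelihood ratio
    npv = tn / (tn + fn)                       # negative predictive value
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    # Post-test probability of impairment given a positive screen (Bayes).
    post_pos = (sens * prevalence /
                (sens * prevalence + (1 - spec) * (1 - prevalence)))
    return dict(sensitivity=sens, specificity=spec, lr_pos=lr_pos,
                npv=npv, pre_test=prevalence, post_test_positive=post_pos)

# Placeholder counts: 20 impaired among 425 screened is taken from the
# abstract, but the split into tp/fp/fn/tn is assumed for illustration.
print(screening_measures(tp=19, fp=8, fn=1, tn=397))
```

The "probability difference" quoted in the abstract is post_test_positive minus pre_test, the gain in certainty contributed by a positive screen.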
Abstract:
The constraint complexity of a graphical realization of a linear code is the maximum dimension of the local constraint codes in the realization. The treewidth of a linear code is the least constraint complexity of any of its cycle-free graphical realizations. This notion provides a useful parameterization of the maximum-likelihood decoding complexity for linear codes. In this paper, we show the surprising fact that for maximum distance separable codes and Reed-Muller codes, treewidth equals trelliswidth, which, for a code, is defined to be the least constraint complexity (or branch complexity) of any of its trellis realizations. From this, we obtain exact expressions for the treewidth of these codes, which constitute the only known explicit expressions for the treewidth of algebraic codes.
Abstract:
The magnetorotational instability (MRI) is a crucial mechanism of angular momentum transport in a variety of astrophysical accretion disks. In systems accreting at well below the Eddington rate, such as the central black hole in the Milky Way (Sgr A*), the plasma in the disk is essentially collisionless. We present a nonlinear study of the collisionless MRI using first-principles particle-in-cell plasma simulations. We focus on local two-dimensional (axisymmetric) simulations, deferring more realistic three-dimensional simulations to future work. For simulations with net vertical magnetic flux, the MRI continuously amplifies the magnetic field, B, until the Alfvén velocity, v_A, is comparable to the speed of light, c (independent of the initial value of v_A/c). This is consistent with the lack of saturation of MRI channel modes in analogous axisymmetric MHD simulations. The amplification of the magnetic field by the MRI generates a significant pressure anisotropy in the plasma (with the pressure perpendicular to B being larger than the parallel pressure). We find that this pressure anisotropy in turn excites mirror modes and that the volume-averaged pressure anisotropy remains near the threshold for mirror mode excitation. Particle energization is due to both reconnection and viscous heating associated with the pressure anisotropy. Reconnection produces a distinctive power-law component in the energy distribution function of the particles, indicating the likelihood of non-thermal ion and electron acceleration in collisionless accretion disks. This has important implications for interpreting the observed emission, from the radio to the gamma-rays, of systems such as Sgr A*.
Abstract:
Trypanosomatids cause deadly diseases in humans. Of the various biochemical pathways in trypanosomatids, glycolysis has received special attention because it is sequestered in peroxisome-like organelles critical for the survival of the parasites. This study focuses on phosphoglycerate kinase (PGK) from Leishmania spp., which exists in two isoforms, the cytoplasmic PGKB and the glycosomal PGKC, differing in their biochemical properties. Computational analysis predicted the likelihood of a transmembrane helix only in the glycosomal isoform PGKC, of approximate length 20 residues within the 62-residue extension, ending at arginine residues R471 and R472. From experimental studies using circular dichroism and NMR with deuterated sodium dodecyl sulfate, we find that the transmembrane helix spans residues 448 +/- 2 to 476 in Leishmania mexicana PGKC. The significance of this observation is discussed in the context of glycosomal transport and substrate tunneling.
Abstract:
We reconsider standard uniaxial fatigue test data obtained from handbooks. Many S-N curve fits to such data represent the median life and exclude load-dependent variance in life. Presently available approaches for incorporating probabilistic aspects explicitly within the S-N curves have some shortcomings, which we discuss. We propose a new linear S-N fit with a prespecified failure probability, load-dependent variance, and reasonable behavior at extreme loads. We fit our parameters using maximum likelihood, show the reasonableness of the fit using Q-Q plots, and obtain standard error estimates via Monte Carlo simulations. The proposed fitting method may be used for obtaining S-N curves from the same data as already available, with the same mathematical form, but in cases in which the failure probability is smaller, say, 10% instead of 50%, and in which the fitted line is not parallel to the 50% (median) line.
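A minimal sketch of the kind of fit described above, under an assumed parameterization (the paper's exact model is not reproduced here): log-life is taken as normal with a mean linear in log-load and a standard deviation that also varies linearly with log-load, and the four parameters are obtained by maximizing the likelihood. The synthetic data and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, logS, logN):
    """Negative log-likelihood of logN ~ Normal(a + b*logS, c + d*logS)."""
    a, b, c, d = theta
    mu = a + b * logS
    sigma = c + d * logS
    if np.any(sigma <= 0):
        return np.inf        # reject parameter sets with non-positive scatter
    return np.sum(np.log(sigma) + 0.5 * ((logN - mu) / sigma) ** 2)

# Synthetic placeholder data, not handbook values: scatter grows at low load.
rng = np.random.default_rng(2)
logS = rng.uniform(2.0, 3.0, size=60)
logN = 12.0 - 3.0 * logS + (0.05 + 0.1 * (3.0 - logS)) * rng.standard_normal(60)

res = minimize(neg_log_likelihood, x0=[12.0, -3.0, 0.2, 0.0],
               args=(logS, logN), method='Nelder-Mead')
a, b, c, d = res.x
print(res.x)
```

A fixed failure-probability curve, say 10%, is then mu(S) + z*sigma(S) with z the corresponding normal quantile (about -1.2816 for 10%), which is not parallel to the median line whenever d is non-zero, matching the behavior described in the abstract.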