22 results for Comparative Literature
Abstract:
A major question in current network science is how to understand the relationship between the structure and functioning of real networks. Here we present a comparative network analysis of 48 wasp and 36 human social networks. We have compared the centralisation and small-world character of these interaction networks and have studied how these properties change over time. We compared the interaction networks of (1) two congeneric wasp species (Ropalidia marginata and Ropalidia cyathiformis), (2) the queen-right (with the queen) and queen-less (without the queen) networks of wasps, (3) the four network types obtained by combining (1) and (2) above, and (4) wasp networks with the social networks of children in 36 classrooms. We have found perfect (100%) centralisation in a queen-less wasp colony and nearly perfect centralisation in several other queen-less wasp colonies. Note that a perfectly centralised interaction network is unique in the literature on real-world networks. Differences between the interaction networks of the two wasp species are smaller than differences between the networks describing their different colony conditions. Also, the differences between different colony conditions are larger than the differences between wasp and children networks. For example, the structure of queen-right R. marginata colonies is more similar to the children's social networks than to that of their queen-less colonies. We conclude that network architecture depends more on the functioning of the particular community than on taxonomic differences (either between two wasp species or between wasps and humans).
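For illustration, the degree of centralisation discussed above can be made concrete with Freeman's classical degree-centralization index, under which a star-shaped interaction network (one individual interacting with everyone, and no other interactions) scores exactly 100%. The abstract does not state which index the authors used, so the following is only a minimal sketch under that assumption:

    def degree_centralization(adjacency):
        """adjacency: dict mapping node -> set of neighbours (undirected graph)."""
        n = len(adjacency)
        degrees = [len(nbrs) for nbrs in adjacency.values()]
        max_deg = max(degrees)
        # Sum of differences from the most connected node, normalised by the
        # maximum possible sum, which is attained by a star graph: (n-1)(n-2).
        return sum(max_deg - d for d in degrees) / ((n - 1) * (n - 2))

    # Toy, perfectly centralised colony: wasp 0 interacts with every other
    # wasp and no other pair interacts (a star graph).
    star = {0: {1, 2, 3, 4, 5}}
    for w in range(1, 6):
        star[w] = {0}
    print(degree_centralization(star))  # -> 1.0, i.e. 100% centralisation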
Abstract:
The topology optimization problem for the synthesis of compliant mechanisms has been formulated in many different ways in the last 15 years, but there is not yet a definitive formulation that is universally accepted. Furthermore, there are two unresolved issues in this problem. In this paper, we present a comparative study of five distinctly different formulations that are reported in the literature. Three benchmark examples are solved with these formulations using the same input and output specifications and the same numerical optimization algorithm. A total of 35 different synthesis examples are implemented. The examples are limited to a desired instantaneous output direction for a prescribed input force direction. Hence, this study is limited to linear elastic modeling with small deformations. Two design parameterizations, namely the frame-element-based ground structure and the density approach using continuum elements, are used. The obtained designs are evaluated with all the other objective functions and are compared with each other. The checkerboard patterns, point flexures, the ability to converge from an unbiased uniform initial guess, and the computation time are analyzed. Some observations are noted based on the extensive implementation done in this study. Complete details of the benchmark problems and the results are included. The computer codes related to this study are made available on the internet for ready access.
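As background for what a "formulation" means here, one commonly cited density-based statement of the compliant mechanism synthesis problem (an illustrative example, not necessarily one of the five formulations compared in the paper) maximizes the output-port displacement of the discretized structure subject to equilibrium and a volume constraint:

    \begin{aligned}
    \max_{\boldsymbol{\rho}} \quad & u_{\text{out}} = \mathbf{l}^{T}\mathbf{u} \\
    \text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}_{\text{in}},\qquad
    \sum_{e} \rho_{e}\,v_{e} \le V^{*},\qquad 0 < \rho_{\min} \le \rho_{e} \le 1,
    \end{aligned}

where the \rho_e are element densities, \mathbf{K}(\boldsymbol{\rho}) is the stiffness matrix, \mathbf{f}_{\text{in}} is the prescribed input force, and \mathbf{l} selects the output degree of freedom.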
Abstract:
Factors influencing the effectiveness of democratic institutions, and the processes involved at the local governance level, have been of interest in the literature, given the presence of various context-specific advocacies and networks. This paper seeks to understand the adaptability issues related to governance, given these complexities, through a comparative analysis of diversified regions. We adopted a two-stage methodology combining clustering with regression for this purpose. The results show that the formation of advocacies and networks depends on the context and the institutional framework. The paper concludes by exploring the different strategies and dynamics involved in network governance and stresses the importance of governing the networks for structural reformation through regional policy making.
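For readers unfamiliar with the two-stage approach, the sketch below shows the general "cluster first, then regress within clusters" pattern using scikit-learn; the governance indicators, sample size, and number of clusters are hypothetical placeholders, not the variables used in the paper:

    # Minimal sketch of a two-stage "cluster, then regress" workflow.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 4))        # e.g. regional governance indicators (synthetic)
    y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + rng.normal(scale=0.1, size=120)

    # Stage 1: group regions into clusters of similar contexts.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: fit a separate regression within each cluster and compare the fits.
    for k in range(3):
        mask = labels == k
        model = LinearRegression().fit(X[mask], y[mask])
        print(f"cluster {k}: n={mask.sum()}, R^2={model.score(X[mask], y[mask]):.2f}")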
Abstract:
The non-availability of high-spatial-resolution thermal data from satellites on a consistent basis has led to the development of different models for sharpening coarse-spatial-resolution thermal data. Thermal sharpening models that are based on the relationship between land-surface temperature (LST) and a vegetation index (VI) such as the normalized difference vegetation index (NDVI) or fraction vegetation cover (FVC) have gained much attention due to their simplicity, physical basis, and operational capability. However, there are hardly any studies in the literature that comprehensively examine VIs other than NDVI and FVC, which may be better suited for thermal sharpening over agricultural and natural landscapes. The aim of this study is to compare the relative performance of five different VIs, namely NDVI, FVC, the normalized difference water index (NDWI), the soil adjusted vegetation index (SAVI), and the modified soil adjusted vegetation index (MSAVI), for thermal sharpening using the DisTrad thermal sharpening model over agricultural and natural landscapes in India. Multi-temporal LST data from Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Moderate Resolution Imaging Spectroradiometer (MODIS) sensors obtained over two different agro-climatic grids in India were disaggregated from 960 m to 120 m spatial resolution. The sharpened LST was compared with the reference LST estimated from the Landsat data at 120 m spatial resolution. In addition, MODIS LST was disaggregated from 960 m to 480 m and compared with ground measurements at five sites in India. It was found that NDVI and FVC performed better only under wet conditions, whereas under drier conditions the performance of NDWI was superior to that of the other indices and produced accurate results. SAVI and MSAVI always produced poorer results than NDVI/FVC and NDWI for the wet and dry cases, respectively.
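The five indices have standard definitions, and DisTrad rests on a simple regression-plus-residual scheme. The sketch below uses widely cited forms of the indices and a simplified version of the sharpening step; the band names, the FVC scaling, and the 8x8 block replication for 960 m to 120 m are assumptions for illustration, not the authors' exact processing chain:

    import numpy as np

    def indices(red, nir, swir, L=0.5, ndvi_soil=0.15, ndvi_veg=0.85):
        """Widely used forms of the five vegetation indices compared above."""
        ndvi  = (nir - red) / (nir + red)
        fvc   = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0, 1) ** 2
        ndwi  = (nir - swir) / (nir + swir)          # Gao-type NDWI
        savi  = (1 + L) * (nir - red) / (nir + red + L)
        msavi = (2*nir + 1 - np.sqrt((2*nir + 1)**2 - 8*(nir - red))) / 2
        return {"NDVI": ndvi, "FVC": fvc, "NDWI": ndwi, "SAVI": savi, "MSAVI": msavi}

    def distrad_sharpen(lst_coarse, vi_coarse, vi_fine):
        """Fit LST ~ a + b*VI at coarse scale, predict at fine scale,
        and add back the coarse residual so the coarse LST is conserved."""
        b, a = np.polyfit(vi_coarse.ravel(), lst_coarse.ravel(), 1)
        residual = lst_coarse - (a + b * vi_coarse)
        # Broadcast each coarse residual over its fine-resolution pixels
        # (simple 8x8 block replication for a 960 m -> 120 m example).
        residual_fine = np.kron(residual, np.ones((8, 8)))
        return a + b * vi_fine + residual_fine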
Abstract:
The DL- and L-arginine complexes of oxalic acid are made up of zwitterionic, positively charged amino acid molecules and semi-oxalate ions. In the former, the dissimilar molecules aggregate into separate alternating layers. The basic unit in the arginine layer is a centrosymmetric dimer, while the semi-oxalate ions form hydrogen-bonded strings in their layer. In the L-arginine complex, each semi-oxalate ion is surrounded by arginine molecules and the complex can be described as an inclusion compound. The oxalic acid complexes of basic amino acids exhibit a variety of ionization states and stoichiometries. They illustrate the effect of aggregation and chirality on ionization state and stoichiometry, and that of molecular properties on aggregation. The semi-oxalate/oxalate ions tend to be planar, but large departures from planarity are possible. The amino acid aggregation patterns in the different oxalic acid complexes do not resemble one another significantly, but the aggregation of a particular amino acid in its oxalic acid complex tends to have similarities with its aggregation in other structures. Also, semi-oxalate ions aggregate into similar strings in four of the six oxalic acid complexes. Thus, the intrinsic aggregation propensities of individual molecules tend to be retained in the complexes.
Abstract:
The thermal degradation processes of two sulfur polymers, poly(xylylene sulfide) (PXM) and poly(xylylene disulfide) (PXD), were investigated in parallel by direct pyrolysis mass spectrometry (DPMS) and flash pyrolysis GC/MS (Py-GC/MS). Thermogravimetric data showed that these polymers decompose in two separate steps, in the temperature ranges of 250-280 and 600-650 degrees C, leaving a high amount of residue (about 50% at 800 degrees C). The pyrolysis products detected by DPMS in the first degradation step of PXM and PXD were terminated by three types of end groups, -CH3, -CH2SH, and -CH=S, originating from thermal cleavage reactions involving a series of homolytic chain scissions followed by hydrogen transfer reactions, generating several oligomers containing some intact xylylene sulfide repeating units. The presence of pyrolysis compounds containing some stilbene-like units in the first degradation step has also been observed. Their formation has been accounted for by a parallel cleavage involving the elimination of H2S from the PXM main chains. These unsaturated units can undergo cross-linking at higher temperatures, producing the high amount of char residue observed. The thermal degradation compounds detected by DPMS in the second decomposition step at about 600-650 degrees C consisted of condensed aromatic molecules containing dihydrophenanthrene and phenanthrene units. These compounds might be generated from the polymer chains containing stilbene units, by isomerization and dehydrogenation reactions. The pyrolysis products obtained in the Py-GC/MS of PXM and PXD at 610 degrees C are almost identical. The relative abundances in the pyrolysate and the spectral properties of the main pyrolysis products were found to be in generally good agreement with those obtained by DPMS. Polycyclic aromatic hydrocarbons (PAHs) were also detected by Py-GC/MS, but in minor amounts with respect to DPMS. This apparent discrepancy was due to the simultaneous detection of PAHs together with all the pyrolysis products in Py-GC/MS, whereas in DPMS they were detected in the second thermal degradation step, separately from most of the pyrolysis compounds generated in the first degradation step. The results obtained by the DPMS and Py-GC/MS experiments thus provided complementary data on the degradation of PXM and PXD and, therefore, allowed the unequivocal formulation of the thermal degradation mechanism for these sulfur-containing polymers.
Abstract:
Studies on the melt rheological properties of blends of low density polyethylene (LDPE) with selected grades of linear low density polyethylene (LLDPE), which differ widely in their melt flow indices, are reported. The data obtained in a capillary rheometer are presented to describe the effects of blend composition and shear rate on the flow behavior index, melt viscosity, and melt elasticity. In general, blending LLDPE I, which has a low melt flow index (2 g/10 min), with LDPE results in a decrease of its melt viscosity, processing temperature, and tendency towards extrudate distortion, depending on the blending ratio. A blending ratio of around 20-30% LLDPE I seems optimum from the point of view of the desirable improvement in processability. On the other hand, blending LLDPE II, which has a high melt flow index (10 g/10 min), with LDPE offers a distinct advantage in increasing the pseudoplasticity of LDPE/LLDPE II blends.
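For context, the flow behavior index mentioned above is conventionally obtained by fitting capillary-rheometer data to the power-law (Ostwald-de Waele) model, shear stress = K * (shear rate)^n, so that viscosity = K * (shear rate)^(n-1). The sketch below uses synthetic illustrative numbers, not the measured LDPE/LLDPE data:

    import numpy as np

    shear_rate = np.array([10., 30., 100., 300., 1000.])      # 1/s
    viscosity  = np.array([2500., 1400., 800., 450., 260.])   # Pa.s (synthetic)

    # log(viscosity) = log(K) + (n - 1) * log(shear_rate)
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    n, K = slope + 1.0, np.exp(intercept)
    print(f"flow behavior index n = {n:.2f}, consistency K = {K:.0f} Pa.s^n")
    # n < 1 indicates pseudoplastic (shear-thinning) behavior; per the abstract,
    # blending with the high-MFI LLDPE II increases pseudoplasticity (lowers n).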
Abstract:
Background: Phosphorylation by protein kinases is a common event in many cellular processes. Furthermore, many kinases perform specialized roles and are regulated by non-kinase domains tethered to the kinase domain. Perturbation of the regulation of kinases leads to malignancy. We have identified and analysed putative protein kinases encoded in the genome of the chimpanzee, a close evolutionary relative of humans. Results: The shared core biology of chimpanzee and human is characterized by many orthologous protein kinases involved in conserved pathways. Domain architectures specific to chimp/human kinases have been observed. Chimp kinases with unique domain architectures are characterized by the deletion of one or more of the non-kinase domains found in the human kinases. Interestingly, the counterparts of some of the multi-domain human kinases in chimp are characterized by identical domain architectures but with a kinase-like non-kinase domain. Remarkably, for 160 of the 587 chimpanzee kinases, no human orthologue with greater than 95% sequence identity could be identified. Variations in chimpanzee kinases compared to human kinases are also brought about by differences in the functions of the domains tethered to the catalytic kinase domain. For example, the heterodimer-forming PB1 domain, related to the fold of the ubiquitin/Ras-binding domain, is seen uniquely tethered to a PKC-like chimpanzee kinase. Conclusion: Though the chimpanzee and human are evolutionarily very close, there are chimpanzee kinases with no close counterpart in the human, suggesting differences in their functions. This analysis provides a direction for experimental analysis of human and chimpanzee protein kinases in order to enhance our understanding of their specific biological roles.
Abstract:
Background: Tuberculosis still remains one of the largest killers among infectious diseases, warranting the identification of newer targets and drugs. Identification and validation of appropriate targets for designing drugs are critical steps in drug discovery and are at present major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of the targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data that are now available, computational approaches can be powerful means of obtaining short-lists of possible targets for further experimental validation. Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses, and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for the survival of M. tuberculosis are first identified, followed by comparative genomics with the host, finally incorporating a novel structural analysis of the binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, the shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed. Conclusion: The pipeline developed provides a rational schema for drug target identification that is likely to have high rates of success, which is expected to save enormous amounts of money, resources, and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of the integrated approach used in our study, highlighting the importance of systems-level analyses in particular. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
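One step of the pipeline, flux balance analysis, reduces to a linear program: maximise an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below solves a toy three-reaction network with scipy; it is illustrative only and is not the M. tuberculosis reactome used in the study:

    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix: metabolites (rows) x reactions (columns):
    # uptake -> A, A -> B, B -> biomass drain.
    S = np.array([[ 1, -1,  0],    # metabolite A: produced by uptake, consumed
                  [ 0,  1, -1]])   # metabolite B: produced, drained by biomass
    bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for each reaction
    c = np.array([0, 0, 1])                # objective: maximise the biomass flux

    # linprog minimises, so negate the objective; S v = 0 is the equality system.
    res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds,
                  method="highs")
    print("optimal biomass flux:", -res.fun)   # -> 10.0 for this toy network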
Abstract:
The potential energy surfaces of the HCN ↔ HNC and LiCN ↔ LiNC isomerization processes were determined by ab initio theory using fully optimized triple-zeta double-polarization basis sets. Both MP2 corrections and QCISD-level calculations were performed to account for electron correlation. The results show that electron correlation has a considerable influence on the energetics and structures. Analysis of the intramolecular bond rearrangement processes reveals that, in both cases, H (or Li⁺) migrates along an almost elliptic path in the plane of the molecule. In HCN ↔ HNC, the migrating hydrogen interacts with the in-plane π,π* orbitals of CN, leading to a decrease in the C-N bond order. In LiCN ↔ LiNC, Li⁺ does not interact with the corresponding π,π* orbitals of CN.
Abstract:
The interaction of tetrathiafulvalene (TTF) and tetracyanoethylene (TCNE) with few-layer graphene samples prepared by the exfoliation of graphite oxide (EG), the conversion of nanodiamond (DG), and the arc-evaporation of graphite in hydrogen (HG) has been investigated by Raman spectroscopy to understand the role of the graphene surface. The position and full-width at half maximum of the Raman G-band are affected by the interaction with TTF and TCNE, and the effect is largest with EG and smallest with HG. The effect of TTF and TCNE on the 2D-band is also largest with EG. The magnitude of the interaction with the donor/acceptor molecules varies in the same order as the surface areas of the graphenes.
Abstract:
M_r = 975.9, orthorhombic, Pnna, a = 20.262 (3), b = 15.717 (2), c = 15.038 (1) Å, V = 4788.97 Å³, Z = 4, D_x = 1.35 Mg m⁻³, Cu Kα radiation, λ = 1.5418 Å, μ = 2.79 mm⁻¹, F(000) = 2072, T = 293 K, R = 0.08 for 3335 observed reflections. The molecular structure and the crystal packing are similar to those observed in the nonactin complexes of sodium thiocyanate and potassium thiocyanate. The eight metal-O distances are nearly the same in the potassium complex, whereas the four distances involving carbonyl O atoms are shorter than the remaining four involving the tetrahydrofuran-ring O atoms in the Na and Ca complexes. This observation can be explained in terms of the small ionic radii of Na⁺ and Ca²⁺, and leads to a plausible structural rationale for the stronger affinity of nonactin for K⁺ than for the other two metal ions.
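As a quick consistency check (not part of the abstract), the calculated density follows from the cell contents:

    D_x = \frac{Z\,M_r}{N_A\,V}
        = \frac{4 \times 975.9\ \text{g mol}^{-1}}{6.022\times10^{23}\ \text{mol}^{-1} \times 4788.97\times10^{-24}\ \text{cm}^{3}}
        \approx 1.35\ \text{g cm}^{-3} = 1.35\ \text{Mg m}^{-3},

in agreement with the reported value.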
Abstract:
A variety of data structures, such as the inverted file, multi-lists, quad tree, k-d tree, range tree, polygon tree, quintary tree, multidimensional tries, segment tree, doubly chained tree, the grid file, d-fold tree, super B-tree, Multiple Attribute Tree (MAT), etc., have been studied for multidimensional searching and related problems. Physical database organization, which is an important application of multidimensional searching, is traditionally and mostly handled by employing the inverted file. This study proposes the MAT data structure for bibliographic file systems, by illustrating the superiority of the MAT data structure over the inverted file. Both methods are compared in terms of preprocessing, storage, and query costs. A worst-case complexity analysis of both methods, for a partial match query, is carried out in two cases: (a) when the directory resides in main memory, and (b) when the directory resides in secondary memory. In both cases, the MAT data structure is shown to be more efficient than the inverted file method. Arguments are given to illustrate the superiority of the MAT data structure in the average case as well. An efficient adaptation of the MAT data structure, which exploits the special features of the MAT structure and of bibliographic files, is proposed for bibliographic file systems. In this adaptation, suitable techniques for fixing and ranking the attributes for the MAT data structure are proposed. Conclusions and proposals for future research are presented.
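For illustration, the inverted-file baseline against which MAT is compared answers a partial match query (some attributes specified, others left free) by intersecting posting lists, as in the minimal sketch below; the bibliographic attributes and records are hypothetical examples, not those of the paper:

    from collections import defaultdict

    records = [
        {"author": "rao", "year": "1986", "lang": "en"},
        {"author": "rao", "year": "1990", "lang": "en"},
        {"author": "kim", "year": "1986", "lang": "fr"},
    ]

    # Build the inverted file: (attribute, value) -> set of record ids.
    postings = defaultdict(set)
    for rid, rec in enumerate(records):
        for attr, val in rec.items():
            postings[(attr, val)].add(rid)

    def partial_match(query):
        """query: dict of specified attributes; unspecified attributes match anything."""
        sets = [postings.get(item, set()) for item in query.items()]
        return set.intersection(*sets) if sets else set(range(len(records)))

    print(partial_match({"author": "rao", "year": "1986"}))   # -> {0}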
Abstract:
This paper deals with new results obtained with regard to the reconstruction properties of side-band Fresnel holograms (SBFH) of self-imaging-type objects (for example, gratings), as compared with those of general objects. The major finding is that a distribution I2, which appears on the real-image plane along with the conventional real image I1, remains a 2Z distribution (where 2Z is the axial distance between the object and its self-imaging plane) under a variety of situations, while its nature and focusing properties differ from one situation to another. It is demonstrated that the two distributions I1 and I2 can be used in the development of a novel technique for image subtraction.