30 results for Intensive and extensive margin


Abstract:

Graphene has generated great excitement due to its remarkable properties, and extensive research is being pursued on single- as well as bi- and few-layer graphenes. In this Perspective, we highlight some aspects of graphene synthesis; surface, magnetic, and mechanical properties; and effects of doping, and we indicate a few useful directions for future research.

Abstract:

Modern wireline and wireless communication devices are multimode and multifunctional. To support multiple standards on a single platform, it is necessary to develop a reconfigurable architecture that provides the required flexibility and performance. The channel decoder is one of the most compute-intensive and essential elements of any communication system, and most standards require a reconfigurable channel decoder capable of performing both Viterbi and Turbo decoding, in several configurations of each. In this paper, we propose a reconfigurable channel decoder that can be configured for standards such as WCDMA, CDMA2000, IEEE 802.11, DAB, DVB and GSM. Parameters such as code rate, constraint length, generator polynomials and truncation length can be configured to map to any of these standards. A multiprocessor approach is followed to provide higher throughput and scalable power consumption across the various configurations of the reconfigurable Viterbi and Turbo decoders, and we propose a hybrid register-exchange approach for the multiprocessor architecture to minimize power consumption.
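
The decoder described above is a hardware architecture, but the parameters it exposes (code rate, constraint length, generator polynomials, truncation length) are easiest to see in software. The sketch below is a plain hard-decision Viterbi decoder in Python, not the paper's multiprocessor or hybrid register-exchange design; the K = 3, rate-1/2 code with octal generators (7, 5) is an illustrative assumption.

```python
# Hard-decision Viterbi decoder sketch. The paper describes a
# reconfigurable multiprocessor hardware architecture; this is only a
# software illustration of the parameters it names. The K = 3,
# rate-1/2 code with octal generators (7, 5) is an assumed example.

def conv_encode(bits, K=3, polys=(0o7, 0o5)):
    """Rate 1/len(polys) convolutional encoder."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # K-bit shift register
        out += [bin(state & p).count("1") & 1 for p in polys]
    return out

def viterbi_decode(received, K=3, polys=(0o7, 0o5)):
    """Maximum-likelihood decoding with Hamming-distance branch metrics.
    A hardware decoder would use traceback with a configurable
    truncation length instead of storing full survivor paths."""
    n_states, n_out = 1 << (K - 1), len(polys)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)           # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), n_out):
        chunk = received[t:t + n_out]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)
                nxt = full & (n_states - 1)
                expect = [bin(full & p).count("1") & 1 for p in polys]
                m = metric[s] + sum(x != y for x, y in zip(chunk, expect))
                if m < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 1] + [0, 0]      # two tail bits flush the encoder
coded = conv_encode(msg)
coded[3] ^= 1                             # inject a single channel error
assert viterbi_decode(coded)[:7] == msg[:7]
```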

Abstract:

The solution conformation of alamethicin, a 20-residue antibiotic peptide, has been investigated using two-dimensional n.m.r. spectroscopy. Complete proton resonance assignments of this peptide have been carried out using COSY, SUPERCOSY, RELAY COSY and NOESY two-dimensional spectroscopy. Observation of a large number of nuclear Overhauser effects between sequential backbone amide protons, between backbone amide protons and CβH protons of preceding residues, and of extensive intramolecular hydrogen-bonding patterns of NH protons has established that this polypeptide adopts a largely helical conformation. This result is in conformity with earlier reported solid-state X-ray results and a recent n.m.r. study in methanol solution (Esposito et al. (1987) Biochemistry 26, 1043-1050), but is at variance with an earlier study which favored an extended conformation for the C-terminal half of alamethicin (Bannerjee et al.).

Abstract:

The presence of DNA-specific IgG4 antibodies was demonstrated in the sera of patients with systemic lupus erythematosus (SLE) by a microtiter solid-phase radioimmunoassay. A patient with distal interphalangeal swelling and extensive ulcers in the oral cavity, seronegative for anti-DNA antibodies of the IgG isotype, was found to have anti-DNA autoantibodies exclusively of the IgG4 subclass. These autoantibodies, directed against the dsDNA conformation, cross-reacted with chondroitin sulfate, dermatan sulfate and heparin.

Abstract:

Gaussian Processes (GPs) are promising Bayesian methods for classification and regression problems, and they have also been used for semi-supervised learning tasks. In this paper, we propose a new algorithm for solving the semi-supervised binary classification problem using sparse GP regression (GPR) models. It is closely related to semi-supervised learning based on support vector regression (SVR) and maximum margin clustering. The proposed algorithm is simple and easy to implement. Unlike the SVR-based algorithm, it gives a sparse solution directly, and the hyperparameters are estimated easily without resorting to expensive cross-validation. Use of a sparse GPR model makes the proposed algorithm scalable. Preliminary results on synthetic and real-world data sets demonstrate the efficacy of the new algorithm.
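
The abstract does not spell out the algorithm, so the following sketch only illustrates the general family it belongs to: treating ±1 labels as GP regression targets and pseudo-labeling unlabeled points through a margin-style confidence filter. It uses scikit-learn's dense GaussianProcessRegressor rather than a sparse GPR model, and the self-training loop is an assumption for illustration, not the authors' method.

```python
# Loose illustration of GP-regression-based semi-supervised binary
# classification: fit GPR on labeled points with targets in {-1, +1},
# then iteratively pseudo-label the most confident unlabeled points.
# NOT the paper's sparse-GPR algorithm; only a sketch of the idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Two Gaussian blobs; only 3 labeled points per class.
X_all = np.vstack([rng.normal(+2.0, 1.0, size=(50, 2)),
                   rng.normal(-2.0, 1.0, size=(50, 2))])
y_true = np.r_[np.ones(50), -np.ones(50)]
labeled = np.r_[np.arange(3), 50 + np.arange(3)]
unlabeled = np.setdiff1d(np.arange(100), labeled)

X_l, y_l = X_all[labeled], y_true[labeled]
for _ in range(5):                           # a few self-training rounds
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1)
    gpr.fit(X_l, y_l)
    mean = gpr.predict(X_all[unlabeled])
    confident = np.abs(mean) > 0.5           # margin-style confidence filter
    if not confident.any():
        break
    X_l = np.vstack([X_l, X_all[unlabeled][confident]])
    y_l = np.r_[y_l, np.sign(mean[confident])]
    unlabeled = unlabeled[~confident]

pred = np.sign(gpr.predict(X_all))
print("accuracy:", (pred == y_true).mean())
```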

Abstract:

Channel assignment in multi-channel multi-radio wireless networks poses a significant challenge due to the scarcity of channels in the wireless spectrum. Additional care must be taken to account for the interference characteristics of the nodes, especially when nodes lie in different collision domains. This work views channel assignment in multi-channel multi-radio networks with multiple collision domains as a non-cooperative game in which each player maximizes its individual utility by minimizing its interference. Necessary and sufficient conditions are derived for a channel assignment to be a Nash equilibrium (NE), and the efficiency of the NE is analyzed by deriving a lower bound on the price of anarchy of this game. A new fairness measure for the multiple-collision-domain context is proposed, and necessary and sufficient conditions for NE outcomes to be fair are derived. The equilibrium conditions are then applied to solve the channel assignment problem through three proposed algorithms, based on perfect or imperfect information, which rely on explicit communication between the players to arrive at an NE. A no-regret learning algorithm, the Freund and Schapire Informed algorithm, which has the additional advantage of low information-exchange overhead, is also proposed, and its convergence to stabilizing outcomes is studied. New performance metrics are proposed, and extensive MATLAB simulations are used to obtain a thorough understanding of the performance of these algorithms on various topologies with respect to these metrics. The proposed algorithms were observed to converge well to NE, resulting in efficient channel assignment strategies.
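
The no-regret scheme referred to as the Freund and Schapire algorithm is the multiplicative-weights (Hedge) update. The sketch below applies it to channel selection on an assumed ring topology, with loss counted as the fraction of neighbors sharing the channel; the topology, utility and step size are illustrative assumptions, not the paper's model.

```python
# Sketch of no-regret (Hedge / multiplicative-weights) channel selection.
# Each radio keeps a weight per channel, samples a channel in proportion
# to the weights, and discounts each channel's weight by the interference
# it would have seen there. Topology and utility are assumed for
# illustration, not taken from the paper.
import random

N_NODES, N_CHANNELS, ETA, ROUNDS = 6, 3, 0.2, 2000
# Ring topology: node i interferes with i-1 and i+1 (mod N).
neighbors = {i: [(i - 1) % N_NODES, (i + 1) % N_NODES] for i in range(N_NODES)}
weights = [[1.0] * N_CHANNELS for _ in range(N_NODES)]

def sample(w):
    """Draw an index with probability proportional to the weights."""
    r, acc = random.uniform(0, sum(w)), 0.0
    for ch, wi in enumerate(w):
        acc += wi
        if r <= acc:
            return ch
    return len(w) - 1

for _ in range(ROUNDS):
    choice = [sample(w) for w in weights]
    for i in range(N_NODES):
        for ch in range(N_CHANNELS):
            # Counterfactual loss of playing `ch`: neighbors on the same channel.
            loss = sum(choice[j] == ch for j in neighbors[i]) / len(neighbors[i])
            weights[i][ch] *= (1 - ETA) ** loss    # Hedge update
print("final channels:",
      [max(range(N_CHANNELS), key=w.__getitem__) for w in weights])
```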

Abstract:

Linear elastic fracture mechanics (LEFM) has been widely used for fatigue crack growth studies, but it is acceptable only in situations within small-scale yielding (SSY). In many practical structural components, SSY conditions may be violated, and one has to look to fracture criteria based on elasto-plastic analysis. The crack closure phenomenon, one of the most striking discoveries concerning inelastic deformation during crack growth, has a significant effect on the fatigue crack growth rate; its numerical simulation is computationally intensive and involved but has been implemented successfully. Stress intensity factors and strain energy release rates lose their meaning beyond SSY, J-integral (or incremental J) values are applicable only in specific situations, and alternative path-independent integrals have been proposed in the literature for use with elasto-plastic fracture mechanics (EPFM) criteria. This paper presents salient features of two independent finite element (numerical) studies relevant to fatigue crack growth where elasto-plastic analysis becomes significant. Such problems are tractable only in the current computational environment and would have been a dream just a few years ago.
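
For reference, the path-independent J-integral contrasted above with stress intensity factors is Rice's contour integral; this is the standard textbook form, not a formula specific to this paper:

```latex
% Rice's J-integral for a crack lying along the x-axis:
% \Gamma is a contour around the crack tip, W the strain-energy
% density, T_i the traction vector and u_i the displacement field.
J \;=\; \int_{\Gamma} \Big( W \, \mathrm{d}y \;-\; T_i \,
        \frac{\partial u_i}{\partial x} \, \mathrm{d}s \Big)
```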

Abstract:

We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M' such that more people prefer M' to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied by Abraham et al. (SIAM J. Comput. 37(4):1030-1045, 2007). If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity: the unpopularity factor u(M) and the unpopularity margin g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if G does not admit a popular matching, then u(M) ≥ 2 for all matchings M in G. Here we show that a matching M achieving u(M) = 2 can be computed in O(m√n) time (where m is the number of edges in G and n is the number of nodes), provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs H = H_2, H_3, ..., H_k such that if H_k admits a matching that matches all people, then we can compute in O(km√n) time a matching M such that u(M) ≤ k - 1 and g(M) ≤ n(1 - 2/k). Simulation results suggest that our algorithm finds a matching with low unpopularity in random instances.
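
The popularity comparison underlying u(M) and g(M) is simple to state in code: for two matchings, count how many people prefer each. The sketch below uses hypothetical data structures (strict preference lists, dictionaries for matchings) purely to illustrate the definition; g(M) is then the maximum of this vote margin over all alternative matchings M'.

```python
# Sketch of the popularity comparison behind u(M) and g(M): given two
# matchings, count how many people prefer each. Preference lists are
# strict, lower rank = more preferred; being unmatched is worst.
# The data structures here are illustrative, not from the paper.

def compare(pref, M1, M2):
    """Return (#people preferring M1, #people preferring M2)."""
    def rank(person, matching):
        job = matching.get(person)
        return pref[person].index(job) if job in pref[person] else len(pref[person])
    m1_votes = m2_votes = 0
    for person in pref:
        r1, r2 = rank(person, M1), rank(person, M2)
        if r1 < r2:
            m1_votes += 1
        elif r2 < r1:
            m2_votes += 1
    return m1_votes, m2_votes

pref = {"a": ["j1", "j2"], "b": ["j1"], "c": ["j1", "j2"]}
M = {"a": "j2", "b": "j1"}          # c unmatched
M_alt = {"a": "j1", "c": "j2"}      # b unmatched
for_alt, against = compare(pref, M_alt, M)
print("margin of M_alt over M:", for_alt - against)   # prints 1
```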

Abstract:

The Community Climate System Model (CCSM) is a Multiple Program Multiple Data (MPMD) parallel global climate model comprising atmosphere, ocean, land, ice and coupler components. The simulations have time-steps on the order of tens of minutes and are typically performed for periods on the order of centuries. These climate simulations are highly computationally intensive and can take several days to weeks to complete on most of today's multi-processor systems. Executing CCSM on grids could potentially lead to a significant reduction in simulation times due to the increase in the number of processors. However, to obtain performance gains on grids, several challenges have to be met. In this work, we describe our load-balancing efforts in CCSM to make it suitable for grid enabling, and we identify the various challenges in executing CCSM on grids. Since CCSM is an MPI application, we also describe our current work on building an MPI implementation for grids to grid-enable CCSM.

Abstract:

Most Java programmers would agree that Java is a language that promotes a philosophy of “create and go forth”. By design, temporary objects are meant to be created on the heap, possibly used, and then abandoned to be collected by the garbage collector. Excessive generation of temporary objects is termed “object churn” and is a form of software bloat that often leads to performance and memory problems. To mitigate this problem, many compiler optimizations aim at identifying objects that may be allocated on the stack. However, most such optimizations miss large opportunities for memory reuse when dealing with objects inside loops or with container objects. In this paper, we describe a novel algorithm that detects bloat caused by the creation of temporary container and String objects within a loop. Our analysis determines which objects created within a loop can be reused; we then describe a source-to-source transformation that efficiently reuses such objects. Empirical evaluation indicates that our solution can eliminate up to 40% of temporary object allocations in large programs, yielding a performance improvement as high as a 20% reduction in run time, particularly when a program has a high churn rate or is memory-intensive and triggers the GC often.
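
The paper's transformation targets Java containers and String builders; the before/after shape of the optimization can nevertheless be sketched in Python (used here for consistency with the other sketches): hoist the temporary container out of the loop and reset it each iteration instead of reallocating.

```python
# Sketch of the loop-churn pattern the paper targets and the reuse
# transformation it applies. The paper operates on Java containers and
# Strings via a source-to-source transformation; this Python analogy
# only illustrates the before/after shape of the rewrite.

def churny(batches):
    results = []
    for batch in batches:
        tmp = []                 # fresh temporary container every iteration
        for x in batch:
            tmp.append(x * x)
        results.append(sum(tmp))
    return results

def reusing(batches):
    results = []
    tmp = []                     # hoisted: one container reused across iterations
    for batch in batches:
        tmp.clear()              # reset instead of reallocate
        for x in batch:
            tmp.append(x * x)
        results.append(sum(tmp))
    return results

batches = [[1, 2, 3], [4, 5]]
assert churny(batches) == reusing(batches) == [14, 41]
```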

Abstract:

Precise information on streamflows is of major importance for planning and monitoring water resources schemes related to hydropower, water supply, irrigation and flood control, and for maintaining ecosystems. Engineers face challenges when streamflow data are unavailable or inadequate at target locations, which has motivated methodologies for predicting streamflow at ungauged sites. Conventionally, time-intensive and data-exhaustive rainfall-runoff models are used to arrive at streamflow at ungauged sites; more recent studies show improved methods based on regionalization using flow duration curves (FDCs). An FDC is a graphical representation of streamflow variability: a plot of streamflow values against their corresponding exceedance probabilities, determined using a plotting-position formula, which shows the percentage of time any specified magnitude of streamflow is equaled or exceeded. The present study assesses the effectiveness of two methods of predicting streamflow at ungauged sites by application to catchments in the Mahanadi river basin, India: (i) the regional flow duration curve method and (ii) the area-ratio method. The first method involves (a) developing regression relationships between percentile flows and attributes of catchments in the study area, (b) using those relationships to construct a regional FDC for the ungauged site, and (c) using a spatial interpolation technique to decode the information in the FDC into a streamflow time series for the ungauged site. The area-ratio method is conventionally used to transfer streamflow information from gauged to ungauged sites. Attributes considered in the analysis include variables representing hydrology, climatology, topography, land-use/land-cover and soil properties of catchments in the study area. Effectiveness of the presented methods is assessed using jackknife cross-validation. Conclusions based on the study are presented and discussed.
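
Two of the building blocks named above are compact enough to sketch: constructing an FDC from a plotting-position formula, and the area-ratio transfer. The Weibull plotting position p = i/(n+1) and a unit exponent on the drainage-area ratio are common defaults, assumed here for illustration.

```python
# Sketch of two building blocks named in the abstract: a flow duration
# curve via the Weibull plotting position p = i/(n+1), and the
# area-ratio transfer Q_ungauged = (A_ungauged / A_gauged) * Q_gauged.
# The plotting position and unit area exponent are common defaults,
# assumed here for illustration.
import numpy as np

def flow_duration_curve(q):
    """Return (exceedance probabilities, sorted flows), highest flow first."""
    q_sorted = np.sort(np.asarray(q))[::-1]
    n = len(q_sorted)
    p_exceed = np.arange(1, n + 1) / (n + 1)    # Weibull plotting position
    return p_exceed, q_sorted

def area_ratio_transfer(q_gauged, area_gauged_km2, area_ungauged_km2):
    """Scale gauged streamflow to an ungauged site by drainage-area ratio."""
    return np.asarray(q_gauged) * (area_ungauged_km2 / area_gauged_km2)

q = [12.0, 3.5, 7.8, 1.2, 25.0, 9.9]            # hypothetical daily flows, m3/s
p, fdc = flow_duration_curve(q)
print("Q exceeded ~50% of the time:", np.interp(0.5, p, fdc))
print("transferred flows:", area_ratio_transfer(q, 450.0, 300.0))
```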

Abstract:

Practical orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE), exploit multi-user diversity using very limited feedback. The best-m feedback scheme is one such limited feedback scheme, in which users report only the gains of their m best subchannels (SCs) and their indices. While the scheme has been extensively studied and adopted in standards such as LTE, an analysis of its throughput for the practically important case in which the SCs are correlated has received less attention. We derive new closed-form expressions for the throughput when the SC gains of a user are uniformly correlated. We analyze the performance of the greedy but unfair frequency-domain scheduler and the fair round-robin scheduler for the general case in which the users see statistically non-identical SCs. An asymptotic analysis is then developed to gain further insights. The analysis and extensive numerical results bring out how correlation reduces throughput.
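
A small Monte Carlo sketch makes the best-m setup concrete: each user reports only its m strongest subchannels, and a greedy scheduler serves, per subchannel, the best reported user. Uniform correlation is generated with a shared complex Gaussian component; all parameters are illustrative assumptions, not the paper's analytical model.

```python
# Monte Carlo sketch of best-m feedback with uniformly correlated
# subchannel (SC) gains, h_k = sqrt(rho)*h0 + sqrt(1-rho)*g_k, so any
# two SCs of a user have correlation rho. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_SC, N_USERS, M_BEST, RHO, TRIALS = 16, 8, 4, 0.5, 2000

throughput = 0.0
for _ in range(TRIALS):
    h0 = rng.standard_normal((N_USERS, 1)) + 1j * rng.standard_normal((N_USERS, 1))
    g = rng.standard_normal((N_USERS, N_SC)) + 1j * rng.standard_normal((N_USERS, N_SC))
    h = np.sqrt(RHO) * h0 + np.sqrt(1 - RHO) * g     # uniformly correlated
    snr = 0.5 * np.abs(h) ** 2                       # unit-mean exponential gains
    # Best-m feedback: each user reports only its M_BEST strongest SCs.
    reported = np.full_like(snr, -np.inf)
    top = np.argsort(snr, axis=1)[:, -M_BEST:]
    rows = np.arange(N_USERS)[:, None]
    reported[rows, top] = snr[rows, top]
    # Greedy scheduler: per SC, serve the user with the best reported gain.
    best_user = reported.argmax(axis=0)
    sched = reported[best_user, np.arange(N_SC)]
    sched[np.isinf(sched)] = 0.0                     # SC with no feedback: idle
    throughput += np.log2(1 + sched).sum()

print("avg sum throughput (bits/s/Hz):", throughput / TRIALS)
```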

Abstract:

In this work, Mode-I fracture experiments are conducted using notched compact tension specimens machined from a rolled AZ31 Mg alloy plate having near-basal texture, with load applied along the rolling direction (RD) and transverse direction (TD). A moderately high notched fracture toughness of J_c ≈ 46 N/mm is obtained for both RD and TD specimens. The fracture surface shows crack tunneling at specimen mid-thickness and extensive shear lips near the free surface. Dimples observed in SEM fractographs suggest ductile fracture, and EBSD analysis shows profuse tensile twinning in the ligament ahead of the notch. It is shown that tensile twinning plays a dual role in enhancing the toughness of the notched fracture specimens with reduced triaxiality: it provides significant dissipation in the background plastic zone, and it imparts hardening to the material surrounding the fracture process zone via the operation of several mechanisms, which retards micro-void growth and coalescence.

Abstract:

The dry sliding wear behavior of epoxy-matrix syntactic foams filled with 20, 40 and 60 wt% fly ash cenospheres is reported based on response surface methodology. Empirical models are constructed and validated using analysis of variance. Results show that the syntactic foams have higher wear resistance than the matrix resin. Among the parameters studied, the applied normal load (F) has the most prominent effect on wear rate, specific wear rate (w_s) and coefficient of friction (μ): with increasing F, the wear rate increases, whereas w_s and μ decrease. With increasing filler content, the wear rate and w_s decrease, while μ increases. With increasing sliding velocity as well as sliding distance, the wear rate and w_s show decreasing trends. Microscopy reveals broken cenospheres forming debris and extensive deformation marks on the wear surface.
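
The empirical models referred to above are response-surface (quadratic polynomial) fits; the sketch below fits one by ordinary least squares on synthetic data with hypothetical coefficients, purely to illustrate the method.

```python
# Minimal sketch of a response-surface fit like those in the abstract:
# a full quadratic model in normal load F and sliding velocity v, fit
# by ordinary least squares. The data are synthetic and the true
# coefficients hypothetical; only the method is illustrated.
import numpy as np

rng = np.random.default_rng(2)
F = rng.uniform(10, 50, 30)        # normal load, N
v = rng.uniform(0.5, 3.0, 30)      # sliding velocity, m/s
wear = (2.0 + 0.15 * F - 0.8 * v + 0.002 * F**2 + 0.05 * F * v
        + rng.normal(0, 0.5, 30))  # synthetic wear response with noise

# Design matrix for the quadratic response surface.
X = np.column_stack([np.ones_like(F), F, v, F**2, v**2, F * v])
beta, *_ = np.linalg.lstsq(X, wear, rcond=None)
wear_hat = X @ beta
ss_res = ((wear - wear_hat) ** 2).sum()
ss_tot = ((wear - wear.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 4))
print("R^2:", 1 - ss_res / ss_tot)  # ANOVA-style goodness of fit
```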

Abstract:

A fundamental question in protein folding is whether the coil-to-globule collapse transition occurs during the initial stages of folding (the burst phase) or simultaneously with the folding transition. Single-molecule fluorescence resonance energy transfer (FRET) and small-angle X-ray scattering (SAXS) experiments disagree on whether the Protein L collapse transition occurs during the burst phase of folding. We study Protein L folding using a coarse-grained model and molecular dynamics simulations, and find the collapse transition to be concomitant with the folding transition. In the burst phase of folding, FRET experiments overestimate the radius of gyration, R_g, of the protein because a Gaussian polymer-chain end-to-end distribution is applied to extract R_g from the FRET efficiency. FRET experiments estimate a ≈ 6 Å decrease in R_g when the actual decrease is ≈ 3 Å upon dilution of the guanidinium chloride denaturant from 7.5 to 1 M, thereby suggesting pronounced compaction of the protein dimensions in the burst phase. The ≈ 3 Å decrease is close to the statistical uncertainty of the R_g data measured in SAXS experiments, which suggest no compaction, leading to the disagreement with the FRET experiments. The transition-state ensemble (TSE) structures in Protein L folding are globular and extensive, in agreement with ψ-analysis experiments. The results support the hypothesis that the TSE of single-domain proteins depends on protein topology and is not stabilized by local interactions alone.
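
The R_g extraction the abstract critiques proceeds by averaging the FRET efficiency E(r) = 1/(1 + (r/R0)^6) over the Gaussian-chain end-to-end distribution and inverting for the mean-square end-to-end distance, with R_g^2 = <r^2>/6. The sketch below performs that inversion numerically; the Förster radius and the measured efficiencies are illustrative assumptions.

```python
# Sketch of the Gaussian-chain inversion the abstract critiques: match
# a measured mean FRET efficiency <E> to the average of
# E(r) = 1/(1 + (r/R0)^6) over the Gaussian end-to-end distribution
# P(r), then take R_g = sqrt(<r^2>/6). R0 and <E> are assumed values.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R0 = 55.0  # assumed Foerster radius, Angstrom

def mean_efficiency(r2_mean):
    """<E> for a Gaussian chain with mean-square end-to-end distance r2_mean."""
    a = 1.5 / r2_mean                       # 3 / (2 <r^2>)
    def integrand(r):
        p = 4 * np.pi * r**2 * (a / np.pi) ** 1.5 * np.exp(-a * r**2)
        return p / (1 + (r / R0) ** 6)
    val, _ = quad(integrand, 0, 10 * np.sqrt(r2_mean))
    return val

def rg_from_fret(E_measured):
    """Invert <E> -> <r^2> numerically, then R_g = sqrt(<r^2>/6)."""
    r2 = brentq(lambda r2m: mean_efficiency(r2m) - E_measured, 1.0, 1e6)
    return np.sqrt(r2 / 6.0)

print("apparent R_g at <E>=0.55:", round(rg_from_fret(0.55), 1), "Angstrom")
print("apparent R_g at <E>=0.45:", round(rg_from_fret(0.45), 1), "Angstrom")
```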