43 results for DETERMINISTIC TRENDS
Abstract:
We develop a simulation-based, two-timescale actor-critic algorithm for infinite-horizon Markov decision processes with finite state and action spaces, under a discounted reward criterion. The algorithm is of the gradient ascent type and performs a search in the space of stationary randomized policies. It uses certain simultaneous deterministic perturbation stochastic approximation (SDPSA) gradient estimates for enhanced performance. We show an application of our algorithm to a problem of mortgage refinancing. Our algorithm obtains the optimal refinancing strategies in a computationally efficient manner.
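As a rough illustration of the SDPSA idea named in the abstract, the sketch below forms a two-measurement gradient estimate along deterministic ±1 perturbation directions cycled from a Hadamard matrix, and uses it for gradient ascent on a toy concave objective. Every detail here is an assumption for illustration (the Hadamard-based perturbation sequence, the synthetic objective `f`, the step sizes); it is not the authors' two-timescale actor-critic algorithm.

```python
import numpy as np

def hadamard_perturbations(dim):
    """Cycle through columns of a Hadamard matrix (entries +/-1),
    a common choice of deterministic perturbation sequence."""
    n, H = 1, np.array([[1.0]])
    while n < dim + 1:                      # grow until >= dim+1 rows
        H = np.block([[H, H], [H, -H]])
        n *= 2
    rows = H[1:dim + 1, :]                  # drop the all-ones row
    k = 0
    while True:
        yield rows[:, k % rows.shape[1]]
        k += 1

def sdpsa_gradient(f, theta, delta, perturbations):
    """Two-measurement SDPSA estimate of the gradient of f at theta."""
    d = next(perturbations)
    diff = f(theta + delta * d) - f(theta - delta * d)
    return diff / (2.0 * delta) * (1.0 / d)  # componentwise 1/d_i = d_i

# toy usage: ascend a concave objective with maximum at theta = 1
f = lambda x: -np.sum((x - 1.0) ** 2)
theta = np.zeros(3)
pert = hadamard_perturbations(3)
for _ in range(2000):
    theta += 0.01 * sdpsa_gradient(f, theta, delta=0.1, perturbations=pert)
```

Relative to randomized SPSA, the perturbation directions here cycle deterministically, so the cross-terms in the estimate cancel over each cycle rather than only in expectation.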
Abstract:
Third World hinterlands provide most of the settings in which the quality of human life has improved the least over the decade since Our Common Future was published. This low quality of life promotes a desire for large numbers of offspring, fuelling population growth and an exodus to the urban centres of the Third World. Enhancing the quality of life of these people in ways compatible with the health of their environments is therefore the most significant of the challenges from the perspective of sustainable development. Human quality of life may be viewed in terms of access to goods, services and a satisfying social role. The ongoing processes of globalization are enhancing flows of goods worldwide, but these hardly reach the poor of Third World countrysides. But processes of globalization have also vastly improved everybody's access to information, and there are excellent opportunities for putting this to good use to enhance the quality of life of the people of Third World countrysides through better access to education and health. More importantly, better access to information could promote a more satisfying social role through strengthening grass-roots involvement in development planning and management of natural resources. I illustrate these possibilities with the help of a series of concrete experiences from the south Indian state of Kerala. Such an effort does not call for large-scale material inputs; rather, it calls for a culture of inform-and-share in place of the prevalent culture of control-and-command. It calls for openness and transparency in transactions involving government agencies, NGOs, and national and transnational business enterprises. It calls for acceptance of accountability by such agencies.
Abstract:
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° x 0.1° (approximately 10 x 10 km), and the hazard parameters were calculated at the center of each cell by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. Hazard evaluation without the logic-tree approach has also been carried out for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.
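The grid-based deterministic computation described above can be caricatured in a few lines: for each grid point, take the maximum ground motion predicted over all sources within the control radius. Everything below is a placeholder (three hypothetical point sources, a toy attenuation law with made-up coefficients, a flat-earth distance approximation); it is not the paper's MATLAB code and not one of its 12 published attenuation relations.

```python
import numpy as np

# hypothetical point sources: (lat, lon, maximum moment magnitude)
sources = [(13.0, 77.5, 6.5), (15.2, 76.0, 5.8), (12.1, 78.9, 6.1)]

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Flat-earth approximation, adequate for a short illustration."""
    dlat = (lat2 - lat1) * 111.2
    dlon = (lon2 - lon1) * 111.2 * np.cos(np.radians(0.5 * (lat1 + lat2)))
    return np.hypot(dlat, dlon)

def pha_g(magnitude, dist_km, c=(-1.02, 0.249, 1.0, 7.3)):
    """Toy attenuation relation of the generic form
    log10(PHA) = c0 + c1*M - c2*log10(R + c3).
    The coefficients are placeholders, NOT a published relation."""
    c0, c1, c2, c3 = c
    return 10 ** (c0 + c1 * magnitude - c2 * np.log10(dist_km + c3))

def deterministic_hazard(grid_lat, grid_lon, radius_km=300.0):
    """Deterministic hazard at a grid point: the maximum PHA over
    all sources within the control radius (zero if none are)."""
    best = 0.0
    for slat, slon, mmax in sources:
        r = epicentral_distance_km(grid_lat, grid_lon, slat, slon)
        if r <= radius_km:
            best = max(best, pha_g(mmax, r))
    return best
```

Evaluating `deterministic_hazard` at the center of every 0.1° x 0.1° cell would give the kind of hazard grid the contour maps are drawn from.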
Abstract:
The rapid emergence of infectious diseases calls for immediate attention to determine practical solutions for intervention strategies. To this end, it becomes necessary to obtain a holistic view of the complex host-pathogen interactome. Advances in omics and related technology have resulted in massive generation of data for the interacting systems at unprecedented levels of detail. Systems-level studies with the aid of mathematical tools contribute to a deeper understanding of biological systems, where intuitive reasoning alone does not suffice. In this review, we discuss different aspects of host-pathogen interactions (HPIs) and the available data resources and tools used to study them. We discuss in detail models of HPIs at various levels of abstraction, along with their applications and limitations. We also present a few case studies, which incorporate different modeling approaches, providing significant insights into disease. (c) 2013 Wiley Periodicals, Inc.
Abstract:
In this paper, we estimate the trends and variability in Advanced Very High Resolution Radiometer (AVHRR)-derived terrestrial net primary productivity (NPP) over India for the period 1982-2006. We find an increasing trend of 3.9% per decade (r = 0.78, R² = 0.61) during the analysis period. A multivariate linear regression of NPP with temperature, precipitation, atmospheric CO2 concentration, soil water and surface solar radiation (r = 0.80, R² = 0.65) indicates that the increasing trend is partly driven by increasing atmospheric CO2 concentration and the consequent CO2 fertilization of the ecosystems. However, human interventions may have also played a key role in the NPP increase: non-forest NPP growth is largely driven by increases in irrigated area and fertilizer use, while forest NPP is influenced by plantation and forest conservation programs. A similar multivariate regression of interannual NPP anomalies with temperature, precipitation, soil water, solar radiation and CO2 anomalies suggests that the interannual variability in NPP is primarily driven by precipitation and temperature variability. Mean seasonal NPP is largest during post-monsoon and lowest during the pre-monsoon period, thereby indicating the importance of soil moisture for vegetation productivity.
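The multivariate linear regression described above can be sketched with ordinary least squares. The driver series below are randomly generated stand-ins (not the AVHRR NPP record or the actual climate data), and the coefficients in the synthetic `npp` formula are arbitrary; the point is only the mechanics of fitting NPP against several drivers at once and reading off an R².

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 25                                  # 1982-2006, as in the study

# synthetic driver anomalies (placeholders for the real datasets)
temp = rng.normal(size=n_years)
precip = rng.normal(size=n_years)
co2 = np.linspace(341, 381, n_years)          # rough ppm trend over the period
npp = (0.5 * precip + 0.2 * temp
       + 0.01 * (co2 - co2.mean())
       + 0.05 * rng.normal(size=n_years))     # small noise term

# design matrix: intercept, temperature, precipitation, centered CO2
X = np.column_stack([np.ones(n_years), temp, precip, co2 - co2.mean()])
beta, *_ = np.linalg.lstsq(X, npp, rcond=None)

fitted = X @ beta
ss_res = np.sum((npp - fitted) ** 2)
ss_tot = np.sum((npp - npp.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot             # coefficient of determination
```

Repeating the same fit on detrended anomalies, as the abstract does, separates the long-term trend attribution from the interannual-variability attribution.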
Abstract:
The first regional synthesis of long-term (back to ~25 years at some stations) primary data (from direct measurement) on aerosol optical depth from the ARFINET (network of aerosol observatories established under the Aerosol Radiative Forcing over India (ARFI) project of Indian Space Research Organization over the Indian subcontinent) have revealed a statistically significant increasing trend with a significant seasonal variability. Examining the current values of turbidity coefficients with those reported ~50 years ago reveals the phenomenal nature of the increase in aerosol loading. Seasonally, the rate of increase is consistently high during the dry months (December to March) over the entire region whereas the trends are rather inconsistent and weak during the premonsoon (April to May) and summer monsoon period (June to September). The trends in the spectral variation of aerosol optical depth (AOD) reveal the significance of anthropogenic activities on the increasing trend in AOD. Examining these with climate variables such as seasonal and regional rainfall, it is seen that the dry season depicts a decreasing trend in the total number of rainy days over the Indian region. The insignificant trend in AOD observed over the Indo-Gangetic Plain, a regional hot spot of aerosols, during the premonsoon and summer monsoon season is mainly attributed to the competing effects of dust transport and wet removal of aerosols by the monsoon rain. Contributions of different aerosol chemical species to the total dust, simulated using the Goddard Chemistry Aerosol Radiation and Transport model over the ARFINET stations, showed an increasing trend for all the anthropogenic components and a decreasing trend for dust, consistent with the inference deduced from the trend in Angstrom exponent.
Abstract:
Cobalt ferrite (CoFe2O4) is an engineering material used for applications such as magnetic cores, magnetic switches, hyperthermia-based tumor treatment, and contrast agents for magnetic resonance imaging. The utility of ferrite nanoparticles hinges on their size, dispersibility in solutions, and synthetic control over their coercivity. In this work, we establish correlations between room-temperature co-precipitation conditions and these crucial materials parameters. Furthermore, post-synthesis annealing conditions are correlated with morphology, changes in crystal structure and magnetic properties. We disclose the synthesis and process conditions helpful in obtaining easily sinterable CoFe2O4 nanoparticles with coercive magnetic flux density (H_c) in the range 5.5-31.9 kA/m and saturation magnetization (M_s) in the range 47.9-84.9 A·m²·kg⁻¹. At a grain size of ~54 ± 2 nm (corresponding to a 1073 K sintering temperature), multi-domain behavior sets in, as indicated by a decrease in H_c. In addition, we observe an increase in lattice constant with grain size, the inverse of what is expected in ferrites. Our results suggest that oxygen deficiency plays a crucial role in explaining this inverse trend. We expect the method disclosed here to be a viable and scalable alternative to thermal-decomposition-based CoFe2O4 synthesis. The magnetic trends reported will aid in the optimization of functional CoFe2O4 nanoparticles.
Abstract:
This work presents novel achievable schemes for the 2-user symmetric linear deterministic interference channel with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. The proposed achievable scheme consists of a combination of interference cancelation, relaying of the other user's data bits, time sharing, and transmission of random bits, depending on the rate of the cooperative link and the relative strengths of the signal and the interference. The results show, for example, that the proposed scheme achieves the same rate as the capacity without the secrecy constraints, in the initial part of the weak interference regime. Also, sharing random bits through the cooperative link can achieve a higher secrecy rate compared to sharing data bits, in the very high interference regime. The results highlight the importance of limited transmitter cooperation in facilitating secure communications over 2-user interference channels.
Abstract:
This paper derives outer bounds for the 2-user symmetric linear deterministic interference channel (SLDIC) with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. Five outer bounds are derived, under different assumptions of providing side information to receivers and partitioning the encoded message/output depending on the relative strength of the signal and the interference. The usefulness of these outer bounds is shown by comparing the bounds with the inner bound on the achievable secrecy rate derived by the authors in a previous work. Also, the outer bounds help to establish that sharing random bits through the cooperative link can achieve the optimal rate in the very high interference regime.
Abstract:
Retransmission protocols such as HDLC and TCP are designed to ensure reliable communication over noisy channels (i.e., channels that can corrupt messages). Thakkar et al. [15] have recently presented an algorithmic verification technique for deterministic streaming string transducer (DSST) models of such protocols. The verification problem is posed as equivalence checking between the specification and protocol DSSTs. In this paper, we argue that more general models need to be obtained using non-deterministic streaming string transducers (NSSTs). However, equivalence checking is undecidable for NSSTs. We present two classes where the models belong to a sub-class of NSSTs for which it is decidable. (C) 2015 Elsevier B.V. All rights reserved.
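A bounded equivalence check between a specification transducer and an implementation transducer conveys the flavor of the problem. The two Mealy-style machines below are toy echo transducers invented for illustration, and testing all inputs up to a length bound is only a stand-in for the decidable equivalence procedures the paper is about (streaming string transducers are strictly more expressive than Mealy machines).

```python
from itertools import product

# two toy deterministic (Mealy-style) transducers over alphabet {'0', '1'}:
# the spec echoes its input; the impl echoes it via two states.
spec = {("q", "0"): ("q", "0"), ("q", "1"): ("q", "1")}
impl = {("a", "0"): ("b", "0"), ("a", "1"): ("b", "1"),
        ("b", "0"): ("a", "0"), ("b", "1"): ("a", "1")}

def run(trans, start, word):
    """Run a transducer on a word, returning its output string."""
    state, out = start, []
    for ch in word:
        state, o = trans[(state, ch)]
        out.append(o)
    return "".join(out)

def bounded_equiv(t1, s1, t2, s2, max_len=6):
    """Compare outputs on every input up to max_len symbols."""
    for n in range(max_len + 1):
        for word in product("01", repeat=n):
            if run(t1, s1, word) != run(t2, s2, word):
                return False
    return True
```

For actual DSSTs, equivalence is decidable without any length bound, which is precisely what makes the DSST modeling of [15] attractive.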
Abstract:
In this work, we study the well-known r-DIMENSIONAL k-MATCHING ((r, k)-DM) and r-SET k-PACKING ((r, k)-SP) problems. Given a universe U := U_1 ∪ ... ∪ U_r and an r-uniform family F ⊆ U_1 × ... × U_r, the (r, k)-DM problem asks if F admits a collection of k mutually disjoint sets. Given a universe U and an r-uniform family F ⊆ 2^U, the (r, k)-SP problem asks if F admits a collection of k mutually disjoint sets. We employ techniques based on dynamic programming and representative families. This leads to a deterministic algorithm with running time O(2.851^((r-1)k) · |F| · n · log² n · log W) for the weighted version of (r, k)-DM, where W is the maximum weight in the input, and a deterministic algorithm with running time O(2.851^((r-0.5501)k) · |F| · n · log² n · log W) for the weighted version of (r, k)-SP. Thus, we significantly improve the previous best known deterministic running times for (r, k)-DM and (r, k)-SP and the previous best known running times for their weighted versions. We rely on structural properties of (r, k)-DM and (r, k)-SP to develop algorithms that are faster than those that can be obtained by a standard use of representative sets. Incorporating the principles of iterative expansion, we obtain a better algorithm for (3, k)-DM, running in time O(2.004^(3k) · |F| · n · log² n). We believe that this algorithm demonstrates an interesting application of representative families in conjunction with more traditional techniques. Furthermore, we present kernels of size O(e^r · r · (k-1)^r · log W) for the weighted versions of (r, k)-DM and (r, k)-SP, improving the previous best known kernels of size O(r! · r · (k-1)^r · log W) for these problems.
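For contrast with the representative-families algorithms above, the exponential brute-force baseline for (r, k)-DM is easy to state: try every k-subset of the family and test mutual disjointness. The (3, k)-DM instance below is made up for illustration; the point is the problem definition, not the paper's algorithm.

```python
from itertools import combinations

def has_k_disjoint(family, k):
    """Brute-force (r, k)-DM / (r, k)-SP check: does `family`
    (a list of r-tuples or r-sets) contain k mutually disjoint
    members?  Runs in time O(|F|^k), the baseline the
    representative-families algorithms improve upon."""
    for cand in combinations(family, k):
        seen = set()
        ok = True
        for s in cand:
            if any(x in seen for x in s):
                ok = False              # overlaps an earlier member
                break
            seen.update(s)
        if ok:
            return True
    return False

# hypothetical 3-dimensional instance: tuples from U_1 x U_2 x U_3,
# with the three coordinate universes kept disjoint by naming.
family = [("a1", "b1", "c1"), ("a1", "b2", "c2"),
          ("a2", "b2", "c1"), ("a2", "b3", "c3")]
```

Here `("a1", "b1", "c1")` and `("a2", "b3", "c3")` form a 2-matching, but no 3-matching exists since every other pair of tuples shares a coordinate.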