1000 results for pastoral applications
Abstract:
Let $X(t)$ be a right-continuous, temporally homogeneous Markov process, $T_t$ the corresponding semigroup and $A$ the weak infinitesimal generator. Let $g(t)$ be absolutely continuous and $\tau$ a stopping time satisfying $E_x\!\left(\int_0^\tau |g(t)|\,dt\right) < \infty$ and $E_x\!\left(\int_0^\tau |g'(t)|\,dt\right) < \infty$. Then for $f \in \mathscr{D}(A)$ with $f(X(t))$ right continuous, the identity
$$E_x\!\left[g(\tau)f(X(\tau))\right] - g(0)f(x) = E_x\!\left(\int_0^\tau g'(s)f(X(s))\,ds\right) + E_x\!\left(\int_0^\tau g(s)Af(X(s))\,ds\right)$$
is a simple generalization of Dynkin's identity (the case $g(t) \equiv 1$). With further restrictions on $f$ and $\tau$ the following identity is obtained as a corollary:
$$E_x\!\left[f(X(\tau))\right] = f(x) + \sum_{k=1}^{n-1} \frac{(-1)^{k+1}}{k!}\,E_x\!\left[\tau^k A^k f(X(\tau))\right] + \frac{(-1)^{n+1}}{(n-1)!}\,E_x\!\left(\int_0^\tau u^{n-1} A^n f(X(u))\,du\right).$$
These identities are applied to processes with stationary independent increments to obtain a number of new and known results relating the moments of stopping times to the moments of the stopped processes.
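As an illustration of how the corollary follows from the generalized identity (this is our own reconstruction of the argument, not a step quoted from the abstract), consider the case $n = 2$. Dynkin's identity gives
$$E_x[f(X(\tau))] = f(x) + E_x\!\left(\int_0^\tau Af(X(s))\,ds\right),$$
and applying the generalized identity with $g(s) = s$ to $Af$ (so that $g(0) = 0$ and $g'(s) = 1$) gives
$$E_x[\tau\,Af(X(\tau))] = E_x\!\left(\int_0^\tau Af(X(s))\,ds\right) + E_x\!\left(\int_0^\tau s\,A^2 f(X(s))\,ds\right).$$
Eliminating the common integral yields
$$E_x[f(X(\tau))] = f(x) + E_x[\tau\,Af(X(\tau))] - E_x\!\left(\int_0^\tau s\,A^2 f(X(s))\,ds\right),$$
which is the stated expansion with $n = 2$; iterating with $g(s) = s^k/k!$ produces the general alternating-sign form, assuming the relevant integrability conditions hold at each step.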
Abstract:
As the virtual world grows more complex, finding a standard way to store data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that propose different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
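The central idea of the abstract, cryptographically verifiable references that let the same data item live on many machines, can be sketched in a few lines. The snippet below is our own illustration of content addressing combined with a detached signature; the item layout and function names are hypothetical and are not Peerscape's HTTP API. It assumes the pyca/cryptography package for Ed25519 signatures.

```python
# Illustrative sketch (not the Peerscape API): content-addressed, signed data items.
# A data item is named by the hash of its content, so any copy can be verified,
# and a detached signature protects it against forgery without encrypting it.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_item(content: bytes, signing_key: Ed25519PrivateKey) -> dict:
    """Wrap raw content into a verifiable item: hash-based name plus signature."""
    return {
        "name": hashlib.sha256(content).hexdigest(),   # identity follows the content
        "content": content,
        "signature": signing_key.sign(content),        # forgery protection, no encryption
    }

def verify_item(item: dict, public_key) -> bool:
    """Check that the name matches the content and that the signature is valid."""
    if hashlib.sha256(item["content"]).hexdigest() != item["name"]:
        return False
    try:
        public_key.verify(item["signature"], item["content"])
        return True
    except InvalidSignature:
        return False

# Example: any replica (a relative's machine, another of the user's computers,
# an online service) can re-check an item before serving or synchronizing it.
key = Ed25519PrivateKey.generate()
photo = make_item(b"family photo album entry", key)
assert verify_item(photo, key.public_key())
```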
Abstract:
This article describes recent developments in the design and implementation of strategies for developing novel therapeutics using first principles from biology and chemistry. Strategies for multi-target therapeutics and network analysis, with a focus on cancer and HIV, are discussed. Methods for gene and siRNA delivery are presented, along with challenges and opportunities for siRNA therapeutics. Advances in protein design methodology and screening are described, with a focus on their application to the design of antibody-based therapeutics. Future advances in this area relevant to vaccine design are also mentioned.
Abstract:
We derive the heat kernel for arbitrary tensor fields on S^3 and (Euclidean) AdS_3 using a group-theoretic approach. We use these results to also obtain the heat kernel on certain quotients of these spaces. In particular, we give a simple, explicit expression for the one-loop determinant for a field of arbitrary spin s in thermal AdS_3. We apply this to the calculation of the one-loop partition function of N = 1 supergravity on AdS_3. We find that the answer factorizes into left- and right-moving super-Virasoro characters built on the SL(2,C)-invariant vacuum, as argued by Maloney and Witten on general grounds.
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together and draw conclusions from some normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency, however, behaves differently in bull and bear markets: it is strongly positive in rising markets, whereas in bear markets returns are closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is at times non-stationary. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data. It is not well behaved in terms of either stability or dependency over time. Based on these observations, we would recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment, we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
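Since essays two and three build on realized volatility, a minimal sketch of the textbook realized-variance estimator may be a useful point of reference. The code below is our own illustration on simulated 5-minute data; the sampling interval and simulated price path are assumptions, not the specifications used in the thesis.

```python
# Standard realized-variance estimator: daily variance estimated as the sum of
# squared intraday log returns.
import numpy as np

def realized_variance(prices: np.ndarray) -> float:
    """Sum of squared log returns over one day of intraday prices."""
    log_returns = np.diff(np.log(prices))
    return float(np.sum(log_returns ** 2))

# Example: one trading day of 5-minute prices from a simple random-walk model.
rng = np.random.default_rng(0)
n_intervals = 78                                   # ~6.5 trading hours at 5-minute sampling
true_daily_vol = 0.02
steps = rng.normal(0.0, true_daily_vol / np.sqrt(n_intervals), size=n_intervals)
prices = np.concatenate(([100.0], 100.0 * np.exp(np.cumsum(steps))))
rv = realized_variance(prices)
print(f"realized variance: {rv:.6f} (true daily variance: {true_daily_vol**2:.6f})")
```

In practice the choice of sampling frequency matters: as the third essay argues, microstructure effects such as autocorrelated returns bias the estimate, which is why coarser sampling is often used despite the loss of precision.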
Abstract:
Topics in Spatial Econometrics – With Applications to House Prices
Spatial effects in data occur when geographical closeness of observations influences the relation between the observations. When two points on a map are close to each other, the observed values of a variable at those points tend to be similar. The further away the two points are from each other, the less similar the observed values tend to be. Recent technical developments, such as geographical information systems (GIS) and global positioning systems (GPS), have brought about a renewed interest in spatial matters. For instance, it is possible to observe the exact location of an observation and combine it with other characteristics. Spatial econometrics integrates spatial aspects into econometric models and analysis. The thesis concentrates mainly on methodological issues, but the findings are illustrated by empirical studies on house price data. The thesis consists of an introductory chapter and four essays. The introductory chapter presents an overview of topics and problems in spatial econometrics. It discusses spatial effects, spatial weights matrices, especially k-nearest neighbours weights matrices, and various spatial econometric models, as well as estimation methods and inference. Further, the problem of omitted variables, a few computational and empirical aspects, the bootstrap procedure and the spatial J-test are presented. In addition, a discussion of hedonic house price models is included. In the first essay a comparison is made between spatial econometrics and time series analysis. By restricting attention to unilateral spatial autoregressive processes, it is shown that a unilateral spatial autoregression can be defined which enjoys properties similar to those of an autoregression in time series. Through an empirical study on house price data, the second essay shows that it is possible to form coordinate-based, spatially autoregressive variables which are able, at least to some extent, to replace the spatial structure in a spatial econometric model. In the third essay a strategy for specifying a k-nearest neighbours weights matrix by applying the spatial J-test is suggested, studied and demonstrated. In the fourth and final essay the properties of the asymptotic spatial J-test are further examined. A simulation study shows that the spatial J-test can be used to distinguish between general spatial models with different k-nearest neighbours weights matrices. A bootstrap spatial J-test is suggested to correct the size of the asymptotic test in small samples.
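Because the essays lean heavily on k-nearest neighbours weights matrices, a short sketch of how such a matrix is typically constructed may help. The code below is our own illustration of the standard row-standardized construction, not the thesis's implementation; the coordinates and prices are hypothetical.

```python
# Row-standardized k-nearest-neighbours spatial weights matrix W from coordinates:
# W[i, j] = 1/k if j is one of the k nearest neighbours of i, and 0 otherwise.
import numpy as np

def knn_weights(coords: np.ndarray, k: int) -> np.ndarray:
    """coords: (n, 2) array of locations; returns an (n, n) row-standardized W."""
    n = coords.shape[0]
    # Pairwise Euclidean distances between observations.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # an observation is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(d[i])[:k]  # indices of the k closest observations
        W[i, neighbours] = 1.0 / k         # row-standardization: each row sums to 1
    return W

# Example: the spatial lag W @ y averages house prices over each house's k neighbours,
# which is the kind of spatially autoregressive variable discussed in the essays.
rng = np.random.default_rng(1)
coords = rng.uniform(size=(6, 2))          # hypothetical house locations
prices = rng.uniform(100, 500, size=6)     # hypothetical house prices
W = knn_weights(coords, k=3)
spatial_lag = W @ prices
```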
Abstract:
We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that, even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is nonempty (i.e., has packets to send), their channel attempt behaviour is obtained from the corresponding fixed point analysis of the saturated system. We will present our experiences in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that, with this simple approximation, we can model TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting.
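For readers unfamiliar with the decoupling approach, the sketch below iterates the classical single-class Bianchi saturation fixed point, in which a node's attempt probability and conditional collision probability determine each other. It is a simplified stand-in for, not a reproduction of, the multi-class 802.11e analysis surveyed here, and the parameter values and damping scheme are our own illustrative choices.

```python
# Bianchi-style saturation fixed point for n saturated nodes with minimum
# contention window W and m backoff stages, solved by damped iteration.

def saturation_fixed_point(n: int, W: int = 16, m: int = 6,
                           iters: int = 500, damping: float = 0.5):
    """Return (tau, p): per-slot attempt probability and conditional collision
    probability of a saturated node under the decoupling approximation."""
    tau = 0.1                                            # initial guess
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)                 # collision seen by a node
        tau_new = (2.0 * (1.0 - 2.0 * p) /
                   ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
        tau = (1.0 - damping) * tau + damping * tau_new  # damped update for stability
    return tau, p

# Example: 10 saturated nodes with CWmin = 16 and 6 backoff stages.
tau, p = saturation_fixed_point(n=10)
print(f"attempt probability ~ {tau:.4f}, collision probability ~ {p:.4f}")
```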
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since that increases the TCO of the data center and hence the cost per compute unit. In this paper we look into projecting the impact of hardware failures on SLAs and into the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors and failure events. This in turn prompts an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
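To make the health-vector idea concrete, the sketch below scores a resource's recent hardware error events and maps the score to a failure probability that a middleware could compare against an SLA-driven threshold. This is purely illustrative and is not the paper's prediction model; the event types, weights, decay rate, and threshold are all hypothetical assumptions.

```python
# Illustrative only: decay-weighted error score -> failure probability.
import math
import time

ERROR_WEIGHTS = {"corrected_ecc": 0.5, "disk_retry": 1.0, "machine_check": 5.0}

def failure_probability(events: list[tuple[str, float]],
                        now: float, half_life_s: float = 3600.0) -> float:
    """events: (error_type, unix_timestamp) pairs observed for one resource.
    Recent, severe errors contribute more; the score is squashed into (0, 1)."""
    score = 0.0
    for error_type, ts in events:
        age = now - ts
        decay = 0.5 ** (age / half_life_s)          # older events count less
        score += ERROR_WEIGHTS.get(error_type, 1.0) * decay
    return 1.0 - math.exp(-score / 10.0)            # map score to a (0, 1) probability

# Example: if the predicted probability crosses a threshold, an availability-aware
# middleware could trigger proactive recovery before the application's SLA is hit.
now = time.time()
events = [("corrected_ecc", now - 7200), ("disk_retry", now - 600),
          ("machine_check", now - 60)]
if failure_probability(events, now) > 0.4:
    print("proactive recovery recommended for this resource")
```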
Abstract:
Partition of unity methods, such as the extended finite element method, allow discontinuities to be simulated independently of the mesh (Int. J. Numer. Meth. Engng. 1999; 45:601-620). This eliminates the need for the mesh to be aligned with the discontinuity, or for cumbersome re-meshing as the discontinuity evolves. However, to compute the stiffness matrix of the elements intersected by the discontinuity, a subdivision of the elements into quadrature subcells aligned with the discontinuity is commonly adopted. In this paper, we use a simple integration technique, proposed for polygonal domains (Int. J. Numer. Meth. Engng 2009; 80(1):103-134. DOI: 10.1002/nme.2589), to suppress the need for element subdivision. Numerical results presented for a few benchmark problems in the context of linear elastic fracture mechanics and a multi-material problem show that the proposed method yields accurate results. Owing to its simplicity, the proposed integration technique can be easily incorporated into any existing code.
Abstract:
In this paper a theory for two-person zero-sum multicriterion differential games is presented. Various solution concepts based upon the notions of Pareto optimality (efficiency), security and equilibrium are defined. These are shown to have interesting applications in the formulation and analysis of two-target, or combat, differential games. The methods for obtaining outcome regions in the state space, feedback strategies for the players and the mode of play are discussed in the framework of bicriterion zero-sum differential games. The treatment is conceptual rather than rigorous.
Abstract:
A compact, high-brightness 13.56 MHz inductively coupled plasma ion source without any axial or radial multicusp magnetic fields is designed for the production of a focused ion beam. An argon ion current of density more than 30 mA/cm^2 at 4 kV potential is extracted from this ion source and is characterized by measuring the ion energy spread and brightness. The ion energy spread is measured by a variable-focusing retarding field energy analyzer that minimizes the errors due to divergence of the ion beam inside the analyzer. The brightness of the ion beam is determined from the emittance measured by a fully automated, locally developed electrostatic sweep scanner. By optimizing various ion source parameters such as RF power, gas pressure and the Faraday shield, ion beams with an energy spread of less than 5 eV and a brightness of 7100 A m^-2 sr^-1 eV^-1 have been produced. Here, we briefly report the details of the ion source, and the measurement and optimization of the energy spread and brightness of the ion beam.