818 results for LDPC, CUDA, GPGPU, computing, GPU, DVB, S2, SDR
Abstract:
The location management problem that arises in mobile computing networks is addressed. One method used in location management is to designate some of the cells in the network as "reporting cells"; the other cells in the network are "non-reporting cells". Finding an optimal set of reporting cells (or reporting cell configuration) for a given network is a difficult combinatorial optimization problem; in fact, it was shown to be NP-complete in an earlier study. In this paper, we use the selective paging strategy together with an ant colony optimization method to obtain the best/optimal set of reporting cells for a given network.
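As a purely illustrative sketch of an ant colony optimization loop for this problem (not the paper's algorithm: the encoding of a configuration as a binary vector, the parameters and the externally supplied cost function combining movement-update and paging costs are all assumptions), one could write:

    import random

    def aco_reporting_cells(num_cells, cost, n_ants=20, n_iters=100,
                            evaporation=0.1, q0=0.9):
        # Pheromone trail for choosing "reporting" (1) vs "non-reporting" (0) per cell.
        pheromone = [[1.0, 1.0] for _ in range(num_cells)]
        best_config, best_cost = None, float("inf")
        for _ in range(n_iters):
            for _ in range(n_ants):
                # Each ant builds a reporting-cell configuration cell by cell.
                config = []
                for c in range(num_cells):
                    p0, p1 = pheromone[c]
                    if random.random() < q0:          # exploit the stronger trail
                        config.append(1 if p1 >= p0 else 0)
                    else:                             # explore, pheromone-proportionally
                        config.append(1 if random.random() < p1 / (p0 + p1) else 0)
                c_cost = cost(config)
                if c_cost < best_cost:
                    best_config, best_cost = config, c_cost
            # Evaporate all trails and reinforce the best configuration found so far.
            for c in range(num_cells):
                for v in (0, 1):
                    pheromone[c][v] *= (1.0 - evaporation)
                pheromone[c][best_config[c]] += 1.0 / (1.0 + best_cost)
        return best_config, best_cost

Here cost(config) stands in for the location-management cost of a configuration on the given network; the paper's actual cost model, paging strategy and ACO variant may differ.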
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms relies on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
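For reference, the NML criterion mentioned above assigns to data x^n, under a model class \mathcal{M} with maximum-likelihood parameters \hat{\theta}, the value

    P_{\mathrm{NML}}(x^n \mid \mathcal{M}) =
        \frac{P(x^n \mid \hat{\theta}(x^n), \mathcal{M})}
             {\sum_{y^n} P(y^n \mid \hat{\theta}(y^n), \mathcal{M})},

with code length -\log P_{\mathrm{NML}}(x^n \mid \mathcal{M}). The normalizing sum in the denominator (the parametric complexity) is the computationally hard part, and it is this term that efficient algorithms for the multinomial, Naive Bayes and Bayesian forest model classes must handle.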
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. This computationalization and informationalization of everyday activities increases not only our reach, efficiency and capabilities, but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other hand, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied.

The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments.

The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. The usage of the system is analyzed, based on several long-term field studies, in the light of how users make inferences about others from real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy in ubiquitous computing, deriving a set of design guidelines for such systems.

The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but more generally to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but such self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to let them work with the data themselves, rather than just being passive subjects of data gathering.
Abstract:
There is an increase in the uptake of cloud computing services (CCS). CCS is adopted in the form of a utility, and it incorporates business risks of the service providers and intermediaries. Thus, the adoption of CCS will change the risk profile of an organization. In this situation, organizations need to develop competencies by reconsidering their IT governance structures to achieve a desired level of IT-business alignment and maintain their risk appetite to source business value from CCS. We use resource-based theories to suggest that collaborative board oversight of CCS, competencies relating to CCS information and financial management, and a CCS-related continuous audit program can contribute to business process performance improvements and overall firm performance. Using survey data, we find evidence of a positive association between these IT governance considerations and business process performance. We also find evidence of a positive association between business process performance improvements and overall firm performance. The results suggest that the proposed IT governance considerations can contribute to CCS-related IT-business alignment and lead to the anticipated business value from CCS. This study provides guidance to organizations on the competencies required to secure business value from CCS.
Abstract:
The concept of cloud computing services (CCS) is appealing to small and medium enterprises (SMEs). However, while various authorities are pushing SMEs strongly to adopt CCS, knowledge of the key considerations for adopting CCS is very limited. We use the technology-organization-environment (TOE) framework to suggest that a strategic and incremental intent, an understanding of the organizational structure and culture, an understanding of the external factors, and consideration of human resource capacity can contribute to sustainable business value from CCS. Using survey data, we find evidence of a positive association between these considerations and the CCS-related business objectives. We also find evidence of a positive association between the CCS-related business objectives and the CCS-related financial objectives. The results suggest that the proposed considerations can ensure sustainable business value from CCS. This study provides guidance to SMEs on a path to adopting CCS with the intention of a long-term commitment and of achieving sustainable business value from these services.
Abstract:
Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many Integrated Core (MIC) accelerators.
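For context, the Heston model describes the asset price S_t and its instantaneous variance v_t by the coupled stochastic differential equations

    dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S},
    dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{v},
    d\langle W^{S}, W^{v}\rangle_t = \rho\,dt,

where \kappa is the mean-reversion rate, \theta the long-run variance, \xi the volatility of volatility and \rho the correlation between the two Brownian motions. Each particle in the filter corresponds to an independently simulated discretized path of these equations, which is why the workload maps naturally onto the OpenCL devices mentioned above.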
Abstract:
Among the iterative schemes for computing the Moore-Penrose inverse of a well-conditioned matrix, only those with an order of convergence of two or three are computationally efficient. A Fortran programme for these schemes is provided.
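The second- and third-order schemes in question belong to the hyper-power (Schulz) family; the following NumPy sketch is a minimal illustration of those iterations, not the Fortran programme referred to above:

    import numpy as np

    def pinv_hyperpower(A, order=2, tol=1e-12, max_iter=100):
        """Approximate the Moore-Penrose inverse of a well-conditioned matrix A
        with the order-2 (Schulz) or order-3 hyper-power iteration."""
        # Standard convergent starting guess: X0 = A^T / (||A||_1 * ||A||_inf).
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(A.shape[0])
        for _ in range(max_iter):
            R = I - A @ X                   # residual of the current approximation
            if np.linalg.norm(R) < tol:
                break
            if order == 2:
                X = X @ (I + R)             # X_{k+1} = X_k (2I - A X_k)
            else:
                X = X @ (I + R + R @ R)     # X_{k+1} = X_k (3I - 3 A X_k + (A X_k)^2)
        return X

Informally, the order-2 update roughly doubles and the order-3 update roughly triples the number of correct digits per step, while higher-order members of the family require proportionally more matrix multiplications without a matching gain, which is consistent with the abstract's observation that only orders two and three are computationally efficient.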
Abstract:
Abstract is not available.
Abstract:
A rank-augmented LU-algorithm is suggested for computing a generalized inverse of a matrix. Initially, suitable diagonal corrections are introduced in (the symmetrized form of) the given matrix to facilitate decomposition; a backward-correction scheme then yields a desired generalized inverse.
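For reference, a matrix G is a generalized inverse of A in the weakest (inner-inverse) sense if it satisfies the first of the Penrose conditions

    A G A = A, \qquad G A G = G, \qquad (A G)^{*} = A G, \qquad (G A)^{*} = G A,

while the Moore-Penrose inverse is the unique G satisfying all four; the "desired generalized inverse" of the abstract is an inverse in this family.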
Abstract:
We present here a theoretical approach to compute the molecular magnetic anisotropy parameters, D_M and E_M, for single molecule magnets in any given spin eigenstate of the exchange spin Hamiltonian. We first describe a hybrid constant-M_S valence bond (VB) technique for solving spin Hamiltonians employing full spatial and spin symmetry adaptation, and we illustrate this technique by solving the exchange Hamiltonian of the Cu6Fe8 system. Treating the anisotropy Hamiltonian as a perturbation, we compute the D_M and E_M values for various eigenstates of the exchange Hamiltonian. Since the dipolar contribution to the magnetic anisotropy is negligibly small, we calculate the molecular anisotropy from the single-ion anisotropies of the metal centers. We have studied the variation of D_M and E_M upon rotating the single-ion anisotropies in the case of the Mn12Ac and Fe8 SMMs, in the ground and a few low-lying excited states of the exchange Hamiltonian. In both systems, we find that the molecular anisotropy changes drastically when the single-ion anisotropies are rotated. While in the Mn12Ac SMM the D_M value depends strongly on the spin of the eigenstate, it is almost independent of the spin of the eigenstate in the Fe8 SMM. We also find that the D_M value is almost insensitive to the orientation of the anisotropy of the core Mn(IV) ions. The dependence of D_M on the energy gap between the ground and the excited states in both systems has also been studied by using different sets of exchange constants.
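The parameters D_M and E_M above are the axial and rhombic constants of the standard molecular anisotropy Hamiltonian

    H_{\mathrm{aniso}} = D_M\left[\hat{S}_z^{2} - \tfrac{1}{3}S(S+1)\right] + E_M\left(\hat{S}_x^{2} - \hat{S}_y^{2}\right),

treated here as a perturbation on the eigenstates of the exchange Hamiltonian, with the molecular values assembled from the projected single-ion anisotropies of the metal centers as described above.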
Abstract:
With the development of wearable and mobile computing technology, more and more people are starting to use sleep-tracking tools to collect personal sleep data on a daily basis, aiming to understand and improve their sleep. While sleep quality is influenced by many factors in a person’s lifestyle context, such as exercise, diet and steps walked, existing tools simply visualize the sleep data per se on a dashboard rather than analyse those data in combination with contextual factors. Hence many people find it difficult to make sense of their sleep data. In this paper, we present a cloud-based intelligent computing system named SleepExplorer that incorporates sleep domain knowledge and association rule mining for automated analysis of personal sleep data in light of contextual factors. Experiments show that the same contextual factors can play a distinct role in the sleep of different people, and that SleepExplorer can help users discover the factors that are most relevant to their personal sleep.
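As a toy illustration of this kind of association-rule analysis (not SleepExplorer's implementation: the factor names, data and thresholds below are hypothetical), one could mine rules of the form "context factors -> good sleep" from daily records:

    from itertools import combinations

    # Each day is a set of boolean context/sleep tags (hypothetical names).
    days = [
        {"exercised", "steps>8000", "good_sleep"},
        {"exercised", "late_caffeine"},
        {"steps>8000", "good_sleep"},
        {"exercised", "steps>8000", "good_sleep"},
        {"late_caffeine"},
    ]

    def rules(days, target="good_sleep", min_support=0.3, min_confidence=0.6):
        n = len(days)
        factors = sorted(set().union(*days) - {target})
        found = []
        # Consider single factors and pairs of factors as rule antecedents.
        for size in (1, 2):
            for antecedent in combinations(factors, size):
                matching = [d for d in days if set(antecedent) <= d]
                support = len(matching) / n
                if support < min_support:
                    continue
                confidence = sum(target in d for d in matching) / len(matching)
                if confidence >= min_confidence:
                    found.append((antecedent, round(support, 2), round(confidence, 2)))
        return found

    print(rules(days))

Rules are kept when their support and confidence exceed the thresholds; SleepExplorer's actual mining, factor taxonomy and sleep-domain knowledge are of course richer than this sketch.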
Abstract:
We compute the throughput obtained by a TCP connection in a UMTS environment. For downloading data at a mobile terminal, the packets of each TCP connection are stored in separate queues at the base station (node B). Also, due to fragmentation of the TCP packets into Protocol Data Units (PDUs) and link-layer retransmissions of PDUs, there can be significant delays at the queue of the node B. In such a scenario the existing models of TCP may not be sufficient. Thus, we provide a new approximate TCP model and obtain new closed-form expressions for the mean window size. Using these, we obtain the throughput of a TCP connection, which matches simulations quite well.
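The paper's closed-form expressions are not given in the abstract; as background only, the classical square-root approximation for a long-lived TCP connection with segment size MSS, round-trip time RTT and packet loss probability p is

    E[W] \approx \sqrt{\tfrac{3}{2p}}, \qquad \text{throughput} \approx \frac{MSS}{RTT}\sqrt{\frac{3}{2p}}.

The abstract's point is that node-B queueing and link-layer retransmission of PDUs change the delay and loss process seen by TCP enough that such existing models may not be sufficient, motivating the new approximate model and window-size expressions.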
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of the resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and into the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors/failure events. This in turn influences an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
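A minimal, purely illustrative sketch of this kind of health-vector-based prediction (the event types, weights and thresholds below are assumptions, not the HP-UX prototype's model):

    from collections import Counter
    from math import exp

    # Hypothetical per-event weights reflecting how strongly each observed
    # hardware event is taken to indicate an impending failure.
    EVENT_WEIGHTS = {"ecc_corrected": 0.1, "ecc_uncorrected": 1.5,
                     "disk_retry": 0.3, "fan_degraded": 0.8}

    def failure_probability(health_vector):
        """Map a resource's health vector (counts of recent hardware events)
        to a failure probability via a logistic squashing function."""
        score = sum(EVENT_WEIGHTS.get(evt, 0.0) * n for evt, n in health_vector.items())
        return 1.0 / (1.0 + exp(-(score - 2.0)))   # 2.0: assumed decision midpoint

    def should_act(health_vector, sla_headroom, threshold=0.7):
        """Trigger proactive recovery when the predicted failure risk is high
        and little SLA headroom remains (both thresholds are illustrative)."""
        return failure_probability(health_vector) > threshold and sla_headroom < 0.2

    node = Counter({"ecc_corrected": 12, "disk_retry": 4})
    print(failure_probability(node), should_act(node, sla_headroom=0.1))

In the paper's framework, such a predicted probability would feed the availability-aware middleware, which decides whether proactive recovery is worthwhile given the application's SLA and recoverability.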
Abstract:
Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
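As a rough illustration of the construction underlying test-purpose-driven testing (a plain synchronous product of a flat specification and a test purpose, not Jeron and Morel's CTG algorithm nor the hierarchical version proposed in the paper; the dictionary encoding is an assumption of the sketch):

    from collections import deque

    def synchronous_product(spec, purpose, spec_init, purpose_init, accept):
        """Breadth-first exploration of the product of a specification FSM and a
        test-purpose automaton.  Both are dicts: state -> {action: next_state}.
        'accept' is the set of accepting (goal) states of the test purpose."""
        start = (spec_init, purpose_init)
        seen, transitions, pass_states = {start}, [], set()
        queue = deque([start])
        while queue:
            s, p = queue.popleft()
            if p in accept:
                pass_states.add((s, p))   # goal of the test purpose reached
                continue                  # no need to explore beyond the goal
            for action, s2 in spec.get(s, {}).items():
                # If the purpose does not constrain this action, it stays put.
                p2 = purpose.get(p, {}).get(action, p)
                nxt = (s2, p2)
                transitions.append(((s, p), action, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return transitions, pass_states

Roughly speaking, a complete test graph is obtained from such a product by keeping only the behaviour from which the accepting purpose states remain reachable and attaching pass/fail/inconclusive verdicts; the paper's contribution is to do this while preserving the hierarchy of a hierarchical state machine, together with an online oracle that avoids materializing the CTG.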