994 results for deterministic fractals


Relevance:

10.00%

Publisher:

Abstract:

Shared clusters represent an excellent platform for the execution of parallel applications given their low price/performance ratio and the presence of cluster infrastructure in many organisations. The focus of recent research efforts is on parallelism management, transport and efficient access to resources, and making clusters easy to use. In this thesis, we examine reliable parallel computing on clusters. The aim of this research is to demonstrate the feasibility of developing an operating system facility that provides transparent fault tolerance using existing, enhanced and newly built operating system services for supporting parallel applications. In particular, we use existing process duplication and process migration services, and synthesise a group communications facility for use in a transparent checkpointing facility. This research is carried out using the methods of experimental computer science.

To provide a foundation for the synthesis of the group communications and checkpointing facilities, we survey and review related work in both fields. For group communications, we examine the V Distributed System, the x-kernel and Psync, the ISIS Toolkit, and Horus, and identify a need for services that consider the placement of processes on computers in the cluster. For checkpointing, we examine Manetho, KeyKOS, libckpt, and Diskless Checkpointing, and observe the use of remote computer memories for storing checkpoints and of copy-on-write mechanisms to reduce the time needed to create a checkpoint of a process.

We propose a group communications facility providing two sets of services: user-oriented services, which provide transparency and target applications, and system-oriented services, which supplement the user-oriented services to support other operating system services and do not provide transparency. Additional flexibility is achieved by providing delivery and ordering semantics independently.

An operating system facility providing transparent checkpointing is synthesised using coordinated checkpointing. To ensure that a consistent set of checkpoints is generated, the facility blocks only non-deterministic events rather than blindly blocking the processes of a parallel application, allowing those processes to continue execution during the checkpoint operation. Checkpoints are created by adapting process duplication mechanisms, and checkpoint data is transferred to remote computer memories and disk for storage using the mechanisms of process migration. The services of the group communications facility are used to coordinate the checkpoint operation and to transport checkpoint data to remote computer memories and disk.

Both the group communications facility and the checkpointing facility have been implemented in the GENESIS cluster operating system and provide proof of concept. GENESIS uses a microkernel and client-server based operating system architecture and is demonstrated to provide an appropriate environment for the development of these facilities. We design a number of experiments to test the performance of both facilities and to provide proof of performance, and we present our approach to testing, the challenges it raised, and how we overcame them. For group communications, we examine the performance of a number of delivery semantics. Good speed-ups are observed, and system-oriented group communication services are shown to provide significant performance advantages over user-oriented semantics in the presence of packet loss. For checkpointing, we examine the scalability of the facility under different levels of resource usage and a variable number of computers; low overheads are observed for checkpointing a parallel application. This research makes clear that a microkernel and client-server based cluster operating system provides an ideal environment for developing a high performance group communications facility and a transparent checkpointing facility, yielding a platform for reliable parallel computing on clusters.
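To make the central idea concrete, the sketch below models coordinated checkpointing in which only non-deterministic events (here, message delivery) are held back while deterministic computation continues; the class and method names are illustrative stand-ins, not the GENESIS interfaces.

```python
# Schematic model, assuming a simple message-passing process abstraction:
# during a checkpoint only the non-deterministic events (message delivery)
# are blocked, while the process keeps computing. Names are illustrative,
# not the GENESIS API.
import copy
import queue
import threading


class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = {"pid": pid, "counter": 0}
        self.inbox = queue.Queue()            # source of non-deterministic events
        self._deliverable = threading.Event()
        self._deliverable.set()

    def compute(self):
        # Deterministic work is never blocked by a checkpoint.
        self.state["counter"] += 1

    def deliver(self):
        # Message delivery is the non-deterministic event: held while checkpointing.
        self._deliverable.wait()
        if not self.inbox.empty():
            self.state["last_msg"] = self.inbox.get()

    def checkpoint(self):
        # Block delivery, duplicate the state (a stand-in for process duplication
        # with copy-on-write), resume delivery, and hand back the copy for storage.
        self._deliverable.clear()
        snapshot = copy.deepcopy(self.state)
        self._deliverable.set()
        return snapshot


def coordinated_checkpoint(group):
    # Coordinator role played by the group communications facility:
    # every member snapshots, yielding a consistent set of checkpoints.
    return {p.pid: p.checkpoint() for p in group}


if __name__ == "__main__":
    group = [Process(i) for i in range(4)]
    for p in group:
        p.compute()
    print(coordinated_checkpoint(group))
```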

Relevance:

10.00%

Publisher:

Abstract:

The fact that an occurrence of a unit root in real output is inconsistent with the notion that business cycles are stationary fluctuations around a deterministic trend makes this an important topic for empirical investigation. We examine this issue for 24 Chinese provinces using the recently developed Lagrange multiplier panel unit root test which allows for a structural break. Our main finding is that real gross domestic product (GDP) and real GDP per capita for Chinese provinces are stationary fluctuations around a deterministic trend.
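As a rough illustration of the stationarity question (not the Lagrange multiplier panel unit root test with a structural break used in the study, which is not available in standard Python libraries), the sketch below applies an augmented Dickey-Fuller test with constant and trend to a simulated trend-stationary series.

```python
# Illustrative stand-in only: an ADF test with a deterministic trend applied
# to a simulated trend-stationary series, showing the question the paper asks
# of provincial GDP data. The series and test choice are assumptions.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
t = np.arange(200)
# Stationary fluctuations around a deterministic trend (the alternative hypothesis).
y = 0.05 * t + rng.normal(scale=1.0, size=t.size)

stat, pvalue, *_ = adfuller(y, regression="ct")  # constant + trend
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A small p-value rejects the unit root in favour of trend stationarity.
```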

Relevance:

10.00%

Publisher:

Abstract:

Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms can be used to extract the principal component and, with a simple change of sign, can also serve as minor component extractors, which is of practical significance for implementation. Convergence of the existing unified algorithms is guaranteed only under the condition that the learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA/MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. These theoretical results lay a solid foundation for applications of the proposed algorithm.
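The sketch below illustrates the sign-flip idea with a normalised Oja-style update; this is a generic stand-in, not the specific unified algorithm analysed in the paper. The same rule extracts the principal direction with a positive sign and the minor direction with a negative sign.

```python
# Minimal sketch of "one rule, two tasks": a single Hebbian-style update
# whose sign selects between principal and minor component extraction.
# Normalised Oja-like iteration for illustration only.
import numpy as np

rng = np.random.default_rng(1)
# Data with a clear dominant direction and a clear weakest direction.
C = np.diag([5.0, 1.0, 0.2])
X = rng.multivariate_normal(np.zeros(3), C, size=5000)

def extract(X, sign, eta=0.01, epochs=20):
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += sign * eta * y * x      # +1: Hebbian (PCA), -1: anti-Hebbian (MCA)
            w /= np.linalg.norm(w)       # keep the weight vector on the unit sphere
    return w

print("principal direction ~", np.round(extract(X, +1), 2))
print("minor direction     ~", np.round(extract(X, -1), 2))
```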

Relevance:

10.00%

Publisher:

Abstract:

Electronic document editor software is traditionally complex to design and difficult to implement. This research resulted in REDDL, a language for the specification of electronic document editors at a simplified declarative level, employing a deterministic storage model. This approach allows rapid and simplified development of this class of software.

Relevance:

10.00%

Publisher:

Abstract:

Constraint-based tools for architectural design exploration need to satisfy aesthetic and functional criteria as well as combine discrete and continuous modes of exploration. In this paper, we examine the possibilities for stochastic processes in design space exploration.

Specifically, we address the application of a stochastic wind motion model to the subdivision of an external building envelope into smaller discrete components. Instead of deterministic subdivision constraints, we introduce explicit uncertainty into the system of subdivision. To address these aims, we develop a model of stochastic wind motion, create a subdivision scheme governed by the wind model, and explore the design space of a facade subdivision problem. A discrete version of the facade, composed of light strips and panels and based on bamboo elements deformed by continuous wind motion, is developed. The results of the experiments are presented in the paper.
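A toy version of this idea is sketched below, assuming an AR(1) "gust" signal as a stand-in for the paper's wind-motion model and made-up facade dimensions: strip widths are modulated by the stochastic signal, so every run yields a different subdivision.

```python
# Toy illustration of replacing a deterministic subdivision rule with a
# stochastic one: strip widths along a facade are driven by an autocorrelated
# gust signal. The AR(1) process and dimensions are assumptions, not the
# wind-motion model of the paper.
import numpy as np

rng = np.random.default_rng(7)
facade_width = 12.0      # metres, assumed
base_strip = 0.6         # nominal strip width, assumed

def gust_signal(n, phi=0.9, sigma=0.15):
    # Simple AR(1) process standing in for the wind-motion model.
    g = np.zeros(n)
    for i in range(1, n):
        g[i] = phi * g[i - 1] + rng.normal(scale=sigma)
    return g

def subdivide(width, base, gust):
    # Each strip's width is the nominal width modulated by the gust value,
    # so the subdivision differs on every run instead of being fixed.
    edges, x, i = [0.0], 0.0, 0
    while x < width and i < len(gust):
        step = max(0.2, base * (1.0 + gust[i]))
        x = min(width, x + step)
        edges.append(x)
        i += 1
    return edges

edges = subdivide(facade_width, base_strip, gust_signal(100))
print(len(edges) - 1, "strips:", np.round(np.diff(edges), 2))
```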

Relevance:

10.00%

Publisher:

Abstract:

In this study, we develop deterministic metamodels to quickly and precisely predict the future behaviour of a technically complex system: a large baggage handling system represented by a stochastic, discrete-event simulation model. The highly detailed simulation model is used to conduct experiments and log data, which are then used to train artificial neural network metamodels. Results show that the developed metamodels predict different performance measures related to the travel time of bags within this system well. In contrast to the simulation models, which are computationally expensive and require considerable expertise to develop, run, and maintain, the artificial neural network metamodels can serve as real-time decision-aiding tools that are considerably faster, precise, simple to use, and reliable.
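The workflow can be sketched as follows, with synthetic data standing in for the logged simulation runs and hypothetical scenario parameters (arrival rate, active conveyors, screening time) standing in for the real inputs.

```python
# Sketch of the metamodelling workflow: fit a neural network to
# (scenario parameters -> simulated travel time) pairs and use it as a fast
# surrogate. The data below is synthetic; in the study the pairs are logged
# from the detailed baggage-handling simulation model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical inputs: bag arrival rate, active conveyors, screening time.
X = rng.uniform([50, 2, 5], [300, 10, 30], size=(2000, 3))
# Hypothetical noisy "simulated" mean travel time for those settings.
y = 4.0 + 0.02 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
metamodel = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
metamodel.fit(X_train, y_train)
print("R^2 on held-out scenarios:", round(metamodel.score(X_test, y_test), 3))
```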

Relevance:

10.00%

Publisher:

Abstract:

Successfully determining competitive optimal schedules for electricity generation hinges on accurate load forecasts. The nonstationarity and high volatility of loads make their accurate prediction problematic, and the presence of uncertainty in the data significantly degrades the accuracy of point predictions produced by deterministic load forecasting models. Operation planning based on such predictions will therefore be unreliable. This paper aims at developing prediction intervals rather than exact point predictions. Prediction intervals are theoretically more reliable and practical than predicted values. The delta and Bayesian techniques for constructing prediction intervals for forecasted loads are implemented here. To objectively and comprehensively assess the quality of the constructed prediction intervals, a new index based on their length and coverage probability is developed. In experiments with real data, and through calculation of global statistics, it is shown that neural network point prediction performance is unreliable. In contrast, prediction intervals developed using the delta and Bayesian techniques are satisfactorily narrow, with a high coverage probability.
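The abstract does not give the index's formula; the sketch below shows one plausible way such an assessment could be computed, combining coverage probability and normalised interval width with a penalty when coverage falls below the nominal level (the combination and the eta parameter are assumptions).

```python
# Sketch of assessing prediction intervals by coverage probability and width.
# The exact index proposed in the paper is not reproduced; this coverage-width
# style combination is an illustrative assumption.
import numpy as np

def interval_quality(y, lower, upper, nominal=0.9, eta=50.0):
    y, lower, upper = map(np.asarray, (y, lower, upper))
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()                                  # coverage probability
    pinaw = np.mean(upper - lower) / (y.max() - y.min())   # normalised average width
    # Penalise intervals whose coverage falls below the nominal level.
    penalty = np.exp(-eta * (picp - nominal)) if picp < nominal else 0.0
    return picp, pinaw, pinaw * (1.0 + penalty)

rng = np.random.default_rng(5)
load = rng.normal(1000, 50, size=500)                      # pretend actual loads (MW)
lower = load - rng.uniform(60, 90, 500)
upper = load + rng.uniform(60, 90, 500)
picp, pinaw, index = interval_quality(load, lower, upper)
print(f"PICP={picp:.3f}  PINAW={pinaw:.3f}  index={index:.3f}")
```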

Relevance:

10.00%

Publisher:

Abstract:

Propagation of Peer-to-Peer (P2P) worms in the Internet poses a serious challenge to network security research because of P2P worms' increasing complexity and sophistication. Due to this complexity, no existing work has solved the problem of modeling the propagation of P2P worms, especially when quarantine of peers is enforced. This paper presents a study on modeling the propagation of P2P worms, and presents our applications of the proposed approach in worm propagation research.

Motivated by our aspiration to invent an easy-to-employ instrument for worm propagation research, the proposed approach models the propagation processes of P2P worms by difference equations of a logic matrix, which are essentially discrete-time deterministic propagation models of P2P worms. To the best of our knowledge, we are the first to use a logic matrix in network security research in general and in worm propagation modeling in particular.

Our major contributions in this paper are fourfold: first, we propose a novel logic matrix approach to modeling the propagation of P2P worms under three different conditions; second, we identify the impacts of two different topologies on a P2P worm's attack performance; third, we identify the impacts of network-related characteristics on a P2P worm's attack performance in structured P2P networks; and fourth, we identify the impacts of two different quarantine tactics on the propagation characteristics of P2P worms in unstructured P2P networks. The approach's ease of employment, demonstrated by its applications in our simulation experiments, makes it an attractive instrument for conducting worm propagation research.
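In the same spirit, a minimal discrete-time propagation step over a boolean connection matrix might look like the sketch below; the topology, infection rule, and 5% quarantine tactic are all assumptions for illustration, not the difference equations or tactics studied in the paper.

```python
# Schematic discrete-time propagation over a boolean (logic) connection
# matrix: a peer becomes infected if any infected neighbour reaches it and it
# is not quarantined. Topology, infection rule, and quarantine rate are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
n = 200
A = rng.random((n, n)) < 0.03          # random unstructured P2P overlay (assumed)
A = A | A.T
np.fill_diagonal(A, False)

infected = np.zeros(n, dtype=bool)
infected[0] = True                     # a single initially infected peer
quarantined = np.zeros(n, dtype=bool)

for t in range(1, 11):
    # Logic-matrix step: logical OR over AND of connectivity and infection.
    exposed = (A.astype(int) @ infected.astype(int)) > 0
    infected = (infected | exposed) & ~quarantined
    # Simple quarantine tactic (assumed): isolate 5% of infected peers per step.
    quarantined |= infected & (rng.random(n) < 0.05)
    infected &= ~quarantined
    print(f"t={t:2d}  infected={infected.sum():3d}  quarantined={quarantined.sum():3d}")
```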

Relevance:

10.00%

Publisher:

Abstract:

The Recursive Auto-Associative Memory (RAAM) has come to dominate connectionist investigations into representing compositional structure. Although RAAM is an adequate model when dealing with limited data, its capacity to scale up to real-world tasks has frequently been questioned. RAAM networks are difficult to train (due to the moving target effect) and, as such, training times can be lengthy. Investigations into RAAM have produced many variants in an attempt to overcome such limitations. We outline how one such model, (S)RAAM, is able to quickly produce context-sensitive representations that may be used to aid a deterministic parsing process. By substituting (S)RAAM for the symbolic stack in an existing hybrid parser, we show that it is more than capable of encoding the real-world data sets employed. We conclude by suggesting that models such as (S)RAAM offer valuable insights into the features of connectionist compositional representations.

Relevance:

10.00%

Publisher:

Abstract:

This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research shows that convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, computing these thresholds requires information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable when extracting the minor component online from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.

Relevance:

10.00%

Publisher:

Abstract:

Random fluctuations of the electrical quantities (electrode potential and cell current) in electrochemical systems are commonly referred to as electrochemical noise (ECN). The ECN signal for the corrosion of mild steel in a reinforced concrete specimen was analyzed with the Continuous Wavelet Transform (CWT). The original signal was transformed into a time-frequency phase plane, with colors representing the coefficients of the CWT. The signal shows a self-similar structure in the phase plane; in this way, the chaotic nature of the corrosion process is manifested.
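The analysis step can be sketched as below with PyWavelets, using a synthetic noise-like record in place of the measured ECN signal (the sampling rate and the Morlet wavelet choice are assumptions).

```python
# Sketch of the analysis pipeline: a continuous wavelet transform of a
# noise-like signal, giving coefficients over a time-scale (time-frequency)
# plane. The signal here is synthetic, standing in for the measured
# electrochemical-noise record from the concrete specimen.
import numpy as np
import pywt

rng = np.random.default_rng(2)
fs = 1.0                                   # 1 Hz sampling, assumed
t = np.arange(2048) / fs
# Synthetic stand-in for an ECN record: drifting baseline plus bursts of noise.
signal = 0.001 * t + 0.05 * rng.standard_normal(t.size)
signal[800:850] += 0.3 * rng.standard_normal(50)   # a transient "event"

scales = np.arange(1, 128)
coefficients, frequencies = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
print("time-scale plane shape:", coefficients.shape)    # (scales, samples)
print("frequency range: %.4f - %.4f Hz" % (frequencies.min(), frequencies.max()))
```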

Relevance:

10.00%

Publisher:

Abstract:

A grid computing system consists of a group of programs and resources that are spread across the machines in the grid. A grid system has a dynamic environment and decentralized, distributed resources, so it is important to provide efficient scheduling for applications. Task scheduling is an NP-hard problem; deterministic algorithms are inadequate, so heuristic algorithms such as particle swarm optimization (PSO) are needed to solve it. PSO is a simple parallel algorithm that can be applied in different ways to resolve optimization problems, but it searches the problem space globally and needs to be combined with other methods to search locally as well. In this paper, we propose a hybrid scheduling algorithm to solve the independent task-scheduling problem in grid computing: we combine PSO with the gravitational emulation local search (GELS) algorithm to form a new method, PSO–GELS. Our experimental results demonstrate the effectiveness of PSO–GELS compared to other algorithms.
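A minimal sketch of the PSO half of such a scheduler is shown below; particles encode task-to-machine assignments and are moved to minimise makespan. The task costs and PSO parameters are made up, and the GELS local-search step described in the paper is not included.

```python
# Minimal PSO sketch for independent task scheduling: each particle is a
# vector of task-to-machine assignments, and fitness is the makespan.
# Costs and parameters are assumed; the GELS component is omitted.
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_machines = 30, 5
run_time = rng.uniform(1, 10, size=(n_tasks, n_machines))   # assumed task costs

def makespan(position):
    assign = np.clip(position.round().astype(int), 0, n_machines - 1)
    loads = np.zeros(n_machines)
    for task, machine in enumerate(assign):
        loads[machine] += run_time[task, machine]
    return loads.max()

n_particles, iters, w, c1, c2 = 40, 200, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_machines - 1, size=(n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_machines - 1)
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan found:", round(pbest_val.min(), 2))
```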

Relevance:

10.00%

Publisher:

Abstract:

Aim: To examine the exploitation, recovery and current status of green turtles (Chelonia mydas) nesting at Ascension Island.

Location: Ascension Island (UK) (7°57′ S, 14°22′ W), South Atlantic Ocean.

Methods: We analysed records of the harvest of green turtles nesting at Ascension Island between 1822 and 1935, illustrating the decline in numbers over this period. Using a deterministic age-class structured model we predict the initial number of breeding females present in the population prior to the recorded harvest and compare this to our estimate of the current population based upon our recent annual surveys (1999–2004).

Results: Prior to 1822 we estimate the nesting population of green turtles to have been at least 19,000–22,000 individuals in order for the population to have survived the level of harvest recorded. From recent data (1999–2004), we estimate the current breeding population of green turtles at this site to be 11,000–15,000 females. Our results illustrate a dramatic recovery of the population, which is still increasing exponentially and shows no evidence of slowing, suggesting it has not reached 50% of its carrying capacity.

Main conclusions: We estimate that, since the 1970s, the Ascension Island population of green turtles has increased by 285% and question the recent listing of this species as endangered by the IUCN (World Conservation Union), in particular in the Atlantic Ocean, where 75% of the populations assessed by the IUCN are increasing. Indeed, we estimate the global population of this species to be in excess of 2.2 million individuals. We suggest that the IUCN's global listing process detracts attention from those populations that are truly threatened with extinction and should not, in its present form, be applied to globally distributed long-lived species such as marine turtles.
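The kind of deterministic age-class projection involved can be sketched as below; the stage structure, survival and fecundity values are illustrative placeholders rather than the turtle demographic rates estimated in the paper.

```python
# Schematic age-class structured projection of the kind used to reconstruct
# a pre-harvest population. Stage definitions and rates are placeholders,
# not the values used in the study.
import numpy as np

# Three stages: juveniles, subadults, breeding females (assumed structure).
leslie = np.array([
    [0.0,   0.0,  80.0],   # hatchlings produced per breeding female per year
    [0.012, 0.7,  0.0],    # survival into and within the subadult stage
    [0.0,   0.05, 0.85],   # recruitment to and survival of breeders
])

def project(population, years, harvest_of_breeders=0.0):
    pop = np.array(population, dtype=float)
    for _ in range(years):
        pop = leslie @ pop
        pop[2] *= (1.0 - harvest_of_breeders)   # annual removal of nesting females
    return pop

initial = [500000.0, 40000.0, 20000.0]          # assumed starting stage sizes
print("no harvest, 50 yr:  breeding females ~", int(project(initial, 50)[2]))
print("10% harvest, 50 yr: breeding females ~", int(project(initial, 50, 0.10)[2]))
```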

Relevance:

10.00%

Publisher:

Abstract:

In this article we describe how concepts of risk are both generated by and used to reinforce a neoliberal agenda in relation to the health and well-being of young people. We examine how risk may be used as a tool to advance ideals such as rational choice and individual responsibility, and how this can further disadvantage young people living within the contexts of structural disadvantage (such as geographic areas of long-term unemployment; communities that experience racial discrimination). We also identify the ways in which risk is applied in uneven ways within structurally disadvantaged contexts. To suggest a way forward, we articulate a set of principles and strategies that offer up a means of resisting neoliberal imperatives and suggest how these might play out at the micro-, meso- and macro-levels. To do this, we discuss examples from the UK, Canadian and Australian contexts to illustrate how young people resist being labelled as risky, and how it is possible to engage in health equity-enhancing actions, despite seemingly deterministic forces. The cases we describe reveal some of the vulnerabilities (and hence opportunities) within the seemingly impenetrable world view and powers of neoliberals, and point towards the potential to formulate an agenda of resistance and new directions for young people's health promotion.