77 results for Discrete Time Branching Processes
Abstract:
With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that must benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real-time on the cloud and incur extra costs even before an application is deployed.
In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real-time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on the Docker container technology, which allows a user-defined portion (such as memory size and the number of CPU cores) of the VM to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data is used along with the first mode as a hybrid to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
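The idea of benchmarking only a user-defined slice of a VM can be sketched with the standard Docker CLI resource flags (`--memory` and `--cpus`). This is a minimal illustration of the container-based approach, not DocLite's actual code; the image name and benchmark script below are hypothetical:

```python
import subprocess

def benchmark_container(image, memory_mb, cpus, command, dry_run=False):
    """Run a benchmark inside a container limited to a slice of the VM.

    `--memory` and `--cpus` are standard Docker CLI flags that cap the
    container's RAM and CPU share, which is how a container-based tool
    can benchmark only a user-defined portion of the VM."""
    cmd = [
        "docker", "run", "--rm",
        "--memory", f"{memory_mb}m",   # cap container RAM
        "--cpus", str(cpus),           # cap CPU cores available
        image,
    ] + command
    if dry_run:
        return cmd                     # just return the command for inspection
    return subprocess.run(cmd, capture_output=True, text=True)

# Benchmark a 512 MB, 2-core slice of the VM (hypothetical image/script names).
print(" ".join(benchmark_container("benchmark-image", 512, 2,
                                   ["./run-benchmark.sh"], dry_run=True)))
```

Ranking VMs then reduces to running the same capped container on each candidate VM and comparing the benchmark scores.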
Abstract:
Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite - Docker Container-based Lightweight Benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real-time. DocLite is built on the Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods - the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks, and the second uses historic benchmark data along with the native method as a hybrid to generate VM ranks. The proposed methods are evaluated on three use-cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
Abstract:
Two direct sampling correlator-type receivers for differential chaos shift keying (DCSK) communication systems under frequency non-selective fading channels are proposed. These receivers operate on the same hardware platform with different architectures. In the first scheme, namely the sum-delay-sum (SDS) receiver, the sum of all samples in a chip period is correlated with its delayed version. The correlation value obtained in each bit period is then compared with a fixed threshold to decide the binary value of the recovered bit at the output. In the second scheme, namely the delay-sum-sum (DSS) receiver, the correlation of all samples with their delayed versions is calculated in a chip period. The sum of correlation values in each bit period is then compared with the threshold to recover the data. The conventional DCSK transmitter, the frequency non-selective Rayleigh fading channel, and the two proposed receivers are mathematically modelled in the discrete-time domain. The authors evaluate the bit error rate performance of the receivers by means of both theoretical analysis and numerical simulation. The performance comparison shows that the two proposed receivers perform well under the studied channel, where performance improves as the number of paths increases and the DSS receiver outperforms the SDS one.
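The difference between the two architectures, sum-then-correlate (SDS) versus correlate-then-sum (DSS), can be illustrated in a toy simulation. This is a hedged sketch, not the authors' model: the chaotic reference is replaced by a Gaussian sequence, the channel is simple additive noise rather than Rayleigh fading, and all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def dcsk_frame(bit, chips=32, samples_per_chip=4, snr_db=10):
    """One DCSK bit: a reference half followed by +/- that reference.
    A Gaussian sequence stands in for the chaotic reference signal."""
    ref = np.repeat(rng.standard_normal(chips), samples_per_chip)
    data = ref if bit == 1 else -ref
    frame = np.concatenate([ref, data])
    noise = rng.standard_normal(frame.size) * 10 ** (-snr_db / 20)
    return frame + noise

def sds_decide(frame, samples_per_chip=4):
    """Sum-delay-sum: sum the samples within each chip FIRST, then
    correlate the data-half chip sums with the delayed reference-half sums."""
    chip_sums = frame.reshape(-1, samples_per_chip).sum(axis=1)
    half = chip_sums.size // 2
    return 1 if chip_sums[half:] @ chip_sums[:half] > 0 else 0

def dss_decide(frame, samples_per_chip=4):
    """Delay-sum-sum: correlate each data sample with its delayed
    reference counterpart FIRST, then sum the per-sample products."""
    half = frame.size // 2
    return 1 if np.sum(frame[half:] * frame[:half]) > 0 else 0

bits = rng.integers(0, 2, 200)
sds_ok = sum(sds_decide(dcsk_frame(b)) == b for b in bits)
dss_ok = sum(dss_decide(dcsk_frame(b)) == b for b in bits)
print(sds_ok, dss_ok)
```

At this (arbitrary) SNR both receivers recover essentially all bits; the papers' analysis concerns how the two orderings behave under multipath fading, which this sketch does not model.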
Abstract:
The measurement of fast changing temperature fluctuations is a challenging problem due to the inherent limited bandwidth of temperature sensors. This results in a measured signal that is a lagged and attenuated version of the input. Compensation can be performed provided an accurate, parameterised sensor model is available. However, to account for the influence of the measurement environment and changing conditions such as gas velocity, the model must be estimated in-situ. The cross-relation method of blind deconvolution is one approach for in-situ characterisation of sensors. However, a drawback with the method is that it becomes positively biased and unstable at high noise levels. In this paper, the cross-relation method is cast in the discrete-time domain and a bias compensation approach is developed. It is shown that the proposed compensation scheme is robust and yields unbiased estimates with lower estimation variance than the uncompensated version. All results are verified using Monte-Carlo simulations.
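The cross-relation idea can be sketched for two first-order sensor models: because LTI filters commute, passing sensor 1's output through sensor 2's model must equal passing sensor 2's output through sensor 1's model (both equal the input filtered by both sensors). This noise-free sketch uses a hypothetical first-order model and a coarse grid search, not the paper's estimator or its bias compensation:

```python
import numpy as np

rng = np.random.default_rng(2)

def first_order(u, a):
    """Discrete first-order sensor model: y[k] = a*y[k-1] + (1-a)*u[k]."""
    y = np.zeros_like(u)
    for k in range(1, u.size):
        y[k] = a * y[k - 1] + (1 - a) * u[k]
    return y

# Two sensors with different (unknown) time constants observe the SAME input.
u = rng.standard_normal(500).cumsum() * 0.05    # slowly varying "temperature"
a1_true, a2_true = 0.90, 0.70
y1, y2 = first_order(u, a1_true), first_order(u, a2_true)

def cross_relation_cost(a1, a2):
    """Cross-relation residual: filter y1 with sensor 2's candidate model
    and y2 with sensor 1's candidate model; the mismatch vanishes only at
    the true parameter pair (in the noise-free case)."""
    e = first_order(y1, a2) - first_order(y2, a1)
    return float(np.mean(e ** 2))

grid = np.arange(0.0, 1.0, 0.02)
_, a1_hat, a2_hat = min((cross_relation_cost(a1, a2), a1, a2)
                        for a1 in grid for a2 in grid)
print(round(a1_hat, 2), round(a2_hat, 2))
```

The paper's contribution addresses what this sketch ignores: with measurement noise, the raw cross-relation cost becomes positively biased, which motivates the compensated estimator.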
Abstract:
Impactive contact between a vibrating string and a barrier is a strongly nonlinear phenomenon that presents several challenges in the design of numerical models for simulation and sound synthesis of musical string instruments. These are addressed here by applying Hamiltonian methods to incorporate distributed contact forces into a modal framework for discrete-time simulation of the dynamics of a stiff, damped string. The resulting algorithms have spectral accuracy, are unconditionally stable, and require solving a multivariate nonlinear equation that is guaranteed to have a unique solution. Exemplifying results are presented and discussed in terms of accuracy, convergence, and spurious high-frequency oscillations.
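The appeal of a modal discrete-time scheme can be sketched for the contact-free part of the problem: each string mode follows an exactly discretised damped oscillator whose two-step recursion is stable for any time step. The sketch below omits the paper's distributed contact forces and Hamiltonian machinery entirely; the partial frequencies, damping, and mode weights are hypothetical:

```python
import math
import numpy as np

def mode_coeffs(omega, sigma, dt):
    """Exact discrete-time recursion for one damped mode,
        y'' + 2*sigma*y' + omega^2*y = 0  ->  y[n+1] = b1*y[n] + b2*y[n-1].
    The recursion's poles have magnitude exp(-sigma*dt) < 1, so the
    update is stable for ANY time step - one appeal of modal schemes."""
    wd = math.sqrt(omega ** 2 - sigma ** 2)   # damped natural frequency
    r = math.exp(-sigma * dt)
    return 2.0 * r * math.cos(wd * dt), -(r * r)

def stiff_string_partials(f0=110.0, n_modes=20, b=1e-4):
    """Textbook stiff-string partials: f_m = m*f0*sqrt(1 + b*m^2)."""
    return [m * f0 * math.sqrt(1.0 + b * m * m) for m in range(1, n_modes + 1)]

fs = 44100.0
out = np.zeros(2000)
for m, f in enumerate(stiff_string_partials(), start=1):
    b1, b2 = mode_coeffs(2 * math.pi * f, sigma=3.0, dt=1.0 / fs)
    y_prev = y = 1.0 / m                      # crude pluck-like mode weights
    for n in range(out.size):
        out[n] += y
        y, y_prev = b1 * y + b2 * y_prev, y
print(round(float(np.max(np.abs(out))), 3))
```

The hard part the paper addresses is coupling such modal updates to the nonlinear barrier contact force while keeping unconditional stability and a uniquely solvable update equation.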
Abstract:
We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system consisting of a secondary base-station (BS) and a group of secondary users (SUs) is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming a zero-forcing method to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which helps reduce the complexity significantly compared to the conventional method. Simulation results are provided to demonstrate the fast convergence and effectiveness of the proposed algorithm.
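A discrete-time Sylvester (Stein-type) equation of the form A X B - X + Q = 0 can be solved naively by vectorisation, shown below purely to make concrete the object solved in each Newton step. Note that this Kronecker-product route is the expensive one; the paper's point is precisely that exploiting the Sylvester structure avoids this cost. The matrices here are random stand-ins:

```python
import numpy as np

def solve_discrete_sylvester(A, B, Q):
    """Solve the discrete-time Sylvester (Stein-type) equation
        A @ X @ B - X + Q = 0
    by vectorisation: (B^T kron A - I) vec(X) = -vec(Q), using the
    identity vec(A X B) = (B^T kron A) vec(X) with column-major vec."""
    n, m = A.shape[0], B.shape[0]
    K = np.kron(B.T, A) - np.eye(n * m)
    x = np.linalg.solve(K, -Q.flatten(order="F"))
    return x.reshape(n, m, order="F")

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) * 0.3   # scaled so the system is well posed
B = rng.standard_normal((4, 4)) * 0.3
Q = rng.standard_normal((4, 4))
X = solve_discrete_sylvester(A, B, Q)
print(np.linalg.norm(A @ X @ B - X + Q))  # residual of the solved equation
```

For an n x n problem the Kronecker system is n^2 x n^2, so dedicated Sylvester solvers (as exploited in the paper) are markedly cheaper than this illustration.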
Abstract:
We describe an ab initio nonperturbative time-dependent R-matrix theory for ultrafast atomic processes. This theory enables investigations of the interaction of few-femtosecond and -attosecond pulse lasers with complex multielectron atoms and atomic ions. A derivation and analysis of the basic equations are given, which propagate the atomic wave function in the presence of the laser field forward in time in the internal and external R-matrix regions. To verify the accuracy of the approach, we investigate two-photon ionization of Ne irradiated by an intense laser pulse and compare current results with those obtained using the R-matrix Floquet method and an alternative time-dependent method. We also verify the capability of the current approach by applying it to the study of two-dimensional momentum distributions of electrons ejected from Ne due to irradiation by a sequence of 2 as light pulses in the presence of a 780 nm laser field.
Abstract:
In dielectronic recombination of hydrogenlike ions an intermediate doubly excited heliumlike ion is formed. Since the K shell is empty, both excited electrons can decay sequentially to the ground state. In this paper we analyze the x-ray radiation emitted from doubly and singly excited heliumlike titanium ions produced inside the Tokyo electron beam ion trap. Theoretical population densities of the singly excited states after the first transition and the transition probabilities of these states into the ground state were also calculated. This allowed theoretical branching ratios to be determined for each manifold. These branching ratios are compared to the experimentally obtained x-ray distribution by fitting across the relevant peak using a convolution of the theoretically obtained resonance strengths and energies. By taking into account 2E1 transitions which are not observed in the experiment, the measured and calculated ratios agree well. This method provides a valuable insight into the transition dynamics of excited highly charged ions.
Abstract:
Discrete Conditional Phase-type (DC-Ph) models are a family of models which represent skewed survival data conditioned on specific inter-related discrete variables. The survival data is modeled using a Coxian phase-type distribution, which is associated with the inter-related variables using a range of possible data mining approaches such as Bayesian networks (BNs), the Naïve Bayes classification method, and classification regression trees. This paper utilizes the DC-Ph model to explore the modeling of patient waiting times in the Accident and Emergency Department of a UK hospital. The resulting DC-Ph model takes on the form of a Coxian phase-type distribution conditioned on the outcome of a logistic regression model.
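A Coxian phase-type distribution can be sampled directly from its phase description: the process passes through ordered transient phases and may be absorbed after each one, which is what lets it capture heavily skewed durations such as waiting times. A minimal sketch with hypothetical rates (not fitted to any hospital data):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_coxian(lam, p, size=10000):
    """Draw waiting times from a Coxian phase-type distribution.

    lam[i] is the exit rate of phase i; with probability p[i] the process
    moves on to phase i+1, otherwise it is absorbed (the stay ends).
    The last phase always absorbs."""
    times = np.zeros(size)
    for j in range(size):
        t, phase = 0.0, 0
        while True:
            t += rng.exponential(1.0 / lam[phase])   # time spent in this phase
            if phase == len(lam) - 1 or rng.random() > p[phase]:
                break                                 # absorbed: stay is over
            phase += 1
        times[j] = t
    return times

# Hypothetical 3-phase model of waiting times (rates per hour).
t = sample_coxian(lam=[2.0, 1.0, 0.5], p=[0.6, 0.4], size=20000)
print(round(float(t.mean()), 2))
```

For these rates the mean is 1/2 + 0.6*(1 + 0.4*2) = 1.58 hours; in a DC-Ph model the phase parameters would additionally be conditioned on the discrete covariates (here, a logistic regression outcome).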
Abstract:
The aim of this paper is to develop a new generation of extruder control system for recycled materials, which has the ability to automatically maintain a constant polymer melt viscosity of mixed recycled polymers during extrusion, regardless of variations in the Melt Flow Index (MFI) of the recycled mixed-grade high density polyethylene (HDPE) feedstock. The variations in MFI are due to differences in the source of the recycled material used. The work describes how the melt viscosity for a specific extruder/die system is calculated in real time using the rheological properties of the materials, the pressure drop through the extruder die, and actual throughput measurements from a gravimetric loss-in-weight hopper feeder. A closed-loop controller is also developed to automatically regulate the screw speed and barrel temperature profile to achieve constant viscosity and enable consistent processing of variable-grade recycled HDPE materials. Such a system will improve the processability of mixed-MFI polymers, may reduce the risk of polymer melt degradation, reduce the production of large volumes of scrap/waste, and lead to improvements in product quality. The experimental results of real-time viscosity measurement and control using a 38 mm single screw extruder with recycled HDPEs of widely different MFIs are reported in this work.
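The real-time viscosity calculation can be illustrated with textbook capillary rheometry, where the pressure drop gives shear stress and the throughput gives shear rate, together with a minimal PI loop of the kind that could drive screw speed. The die geometry, gains, and setpoint below are hypothetical stand-ins, not the paper's system:

```python
import math

def apparent_viscosity(dp_pa, q_m3s, radius_m, length_m):
    """Apparent melt viscosity from the pressure drop across a capillary
    die and the volumetric throughput (classic capillary rheometry):
        shear stress  tau   = dP * R / (2 * L)
        shear rate    gamma = 4 * Q / (pi * R^3)
        viscosity     eta   = tau / gamma
    """
    tau = dp_pa * radius_m / (2.0 * length_m)
    gamma = 4.0 * q_m3s / (math.pi * radius_m ** 3)
    return tau / gamma

class PIController:
    """Minimal PI loop nudging an actuator (e.g. screw speed) so the
    measured viscosity tracks a constant setpoint."""
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0
    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Hypothetical operating point: 5 MPa drop, 2 cm^3/s through a 1 mm die.
eta = apparent_viscosity(dp_pa=5e6, q_m3s=2e-6, radius_m=1e-3, length_m=0.02)
print(round(eta, 1))
```

A real melt controller must also correct for non-Newtonian behaviour (e.g. a Rabinowitsch correction) and convert the gravimetric mass flow to volumetric flow via melt density, which this sketch omits.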
Abstract:
We review the development of the time-dependent close-coupling method to study atomic and molecular few body dynamics. Applications include electron and photon collisions with atoms, molecules, and their ions.
Abstract:
As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, which offers versatile, efficient, and high-performance security services. However, the vulnerabilities of these implementations against side-channel attacks (SCA) remain significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state-of-the-art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
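The core countermeasure, making a CDT sampler's running time independent of its output, can be sketched in software: instead of an early-exit search of the cumulative table, the whole table is scanned for every sample. This only illustrates the algorithmic idea; the paper's designs are constant-time FPGA hardware, and Python offers no real timing guarantees:

```python
import math
import secrets

def build_cdt(sigma, tail_cut=10):
    """Cumulative table for a half-sided discrete Gaussian (mean 0).
    P(0) is halved because x = 0 is shared between the two signs."""
    n = int(math.ceil(sigma * tail_cut))
    probs = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(n)]
    probs[0] /= 2.0
    total = sum(probs)
    cdf, acc = [], 0.0
    for p in probs:
        acc += p / total
        cdf.append(acc)
    return cdf

def sample_constant_time(cdf, rng=secrets.SystemRandom()):
    """Scan the WHOLE table regardless of the drawn value, so the number
    of comparisons does not depend on the sampled output."""
    u = rng.random()
    idx = 0
    for c in cdf:
        idx += int(u >= c)    # accumulate instead of branching/early exit
    sign = 1 if rng.random() < 0.5 else -1
    return sign * idx

cdf = build_cdt(sigma=3.2)
xs = [sample_constant_time(cdf) for _ in range(5000)]
print(round(sum(xs) / len(xs), 2))
```

An early-exit search leaks the sample value through its running time because small values terminate sooner; the full-scan variant trades average speed for a data-independent access pattern, the same trade the hardware designs manage far more efficiently.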
Abstract:
Estimating a time interval and temporally coordinating movements in space are fundamental skills, but the relationships between these different forms of timing, and the neural processes they incur, are not well understood. While different theories have been proposed to account for time perception, time estimation, and the temporal patterns of coordination, there are no general mechanisms which unify these various timing skills. This study considers whether a model of perceptuo-motor timing, the tau(GUIDE), can also describe how certain judgements of elapsed time are made. To evaluate this, an equation for determining interval estimates was derived from the tau(GUIDE) model and tested in a task where participants had to throw a ball and estimate when it would hit the floor. The results showed that, in accordance with the model, very accurate judgements could be made without vision (mean timing error -19.24 msec), and the model was a good predictor of skilled participants' timing estimates. It was concluded that since the tau(GUIDE) principle provides temporal information in a generic form, it could be a unitary process that links different forms of timing.
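The ball-drop task can be framed with the generic tau quantity (a gap divided by its closure rate), compared against exact constant-acceleration kinematics. This is only a stand-in to make the notion of time-to-contact concrete; it is not the tau(GUIDE) model or the paper's derived estimator, and the numbers are hypothetical:

```python
import math

def tau(gap, gap_rate):
    """First-order time-to-contact: tau = gap / closure rate. A crude
    stand-in for the richer tau information the tau(GUIDE) elaborates."""
    return gap / gap_rate

def fall_time(h, v, g=9.81):
    """Exact time for a ball h metres above the floor, moving down at
    v m/s, to land: solve h = v*t + g*t^2/2 for t."""
    return (-v + math.sqrt(v * v + 2.0 * g * h)) / g

# Hypothetical release: 1.2 m above the floor, moving down at 2 m/s.
h, v = 1.2, 2.0
print(round(tau(h, v), 3), round(fall_time(h, v), 3))
```

The first-order tau overestimates the landing time here because gravity accelerates the closure, which is why models of timing under acceleration need more than the instantaneous gap/rate ratio.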
Abstract:
The fundamental controls on the initiation and development of gravel-dominated deposits (beaches and barriers) on paraglacial coasts are particle size and shape, sediment supply, storm wave activity (primarily runup), relative sea-level (RSL) change, and terrestrial basement structure (primarily as it affects accommodation space). This paper examines the stochastic basis for barrier organisation as shown by variation in gravel barrier architecture. We recognise punctuated self-organisation of barrier development that is disrupted by short phases of barrier instability. The latter results from positive feedback causing barrier breakdown when sediment supply is exhausted. We examine published typologies for gravel barriers and advocate a consolidated perspective using rate of RSL change and sediment supply. We also consider the temporal variation in controls on barrier development. These are examined in terms of a simple behavioural model (BARCH) for prograding gravel barrier architecture and its sensitivity to such controls. The nature of macroscale (10^2-10^3 years) gravel barrier development, including inherited characteristics that influence barrier genesis, as well as forcing from changing RSL, sediment supply, headland control and barrier inertia, is examined in the context of long-surviving barriers along the southern England coastline.