830 results for Computation time delay
Abstract:
The research reported here is based on the standard laboratory experiments routinely performed to measure various geotechnical parameters. These experiments require consolidation of fine-grained samples in triaxial or stress-path apparatus. The time required for consolidation depends on the permeability of the soil and the length of the drainage path, and is often of the order of several weeks in large clay-dominated samples. Long testing periods can be problematic, as they can delay decisions on design and construction methods. Accelerating the consolidation process requires a reduction in the effective drainage length, which is usually achieved by placing filter drains around the sample. The purpose of the research reported in this paper is to assess whether these filter drains work effectively and, if not, to determine what modifications to the filter drains are needed. The findings show that the use of a double filter reduces the consolidation time severalfold.
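For context, the scaling behind this acceleration comes from Terzaghi's one-dimensional consolidation theory: the time to reach a given degree of consolidation grows with the square of the drainage path length, $t = T_v d^2 / c_v$, so halving the effective drainage length cuts the consolidation time roughly fourfold. A minimal sketch (parameter values are illustrative, not taken from the paper):

```python
# Terzaghi one-dimensional consolidation: t = T_v * d**2 / c_v
# T_v : dimensionless time factor for the target degree of consolidation
# d   : effective drainage path length (m)
# c_v : coefficient of consolidation (m^2/s)

def consolidation_time(T_v, d, c_v):
    """Time in seconds to reach the degree of consolidation tied to T_v."""
    return T_v * d ** 2 / c_v

T_v = 0.848              # time factor for ~90% consolidation
c_v = 1e-8               # m^2/s, a typical order of magnitude for clay
for d in (0.10, 0.05):   # halving the drainage path, e.g. via added drains
    t = consolidation_time(T_v, d, c_v)
    print(f"d = {d:.2f} m -> t = {t / 86400:.1f} days")
```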
Abstract:
In this letter, we propose a simple space-time code that simultaneously achieves both space and time diversity over time-dispersive channels by combining two-dimensional lattice constellations with Alamouti codes. The proposed scheme preserves full space diversity and admits joint maximum-likelihood decoding over pairs of real symbols, with computational complexity similar to that of the Alamouti code.
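For reference, the baseline the letter builds on is the classical Alamouti code, which sends two symbols over two antennas in two time slots and allows simple linear combining at the receiver. A minimal NumPy sketch of that baseline over a flat-fading channel (this is not the letter's lattice-based scheme, just the standard code it extends):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send.
syms = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
s1, s2 = syms

# Alamouti codeword: rows = time slots, columns = transmit antennas.
X = np.array([[s1, s2],
              [-np.conj(s2), np.conj(s1)]])

# Flat Rayleigh channel, constant over the two slots, plus noise.
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
r = X @ h + noise                      # r[0], r[1] = received samples

# Linear combining decouples the two symbols, which is why per-symbol
# ML decoding stays cheap.
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])) / g
s2_hat = (np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])) / g
print(s1_hat, s2_hat)                  # close to s1, s2
```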
Abstract:
As a potential alternative to CMOS technology, QCA provides an interesting paradigm for both communication and computation. However, QCA's unique four-phase clocking scheme and timing constraints present serious timing issues for interconnection and feedback. In this work, a cut-set retiming design procedure is proposed to resolve these QCA timing issues. The proposed design procedure accommodates QCA's unique characteristics by performing delay-transfer and time-scaling to reallocate the existing delays and thereby achieve efficient clocking-zone assignment. Cut-set retiming makes it possible to effectively design relatively complex QCA circuits that include feedback, exploiting the synchronization, deep pipelining, and local interconnection that QCA shares with systolic architectures. As a case study, a systolic Montgomery modular multiplier is designed to illustrate the procedure. Furthermore, a non-systolic architecture, the S27 benchmark circuit, is designed and compared with previous designs. The comparison shows that the cut-set retiming method achieves a more efficient design, with reductions of 22%, 44%, and 46% in cell count, area, and latency, respectively.
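The core move of cut-set retiming can be stated compactly: pick a cut of the dataflow graph and transfer delays across it, adding k delays to every edge crossing the cut in one direction and removing k from every edge crossing in the other, which preserves input/output behaviour as long as no edge delay goes negative. A small sketch of that transformation on an edge-weighted graph (the representation is hypothetical, for illustration only):

```python
# Cut-set retiming sketch: move delays across a cut of a dataflow graph.
# Edges are (u, v, delay_count); S is one side of the cut. Delays are
# added to edges leaving S and removed from edges entering S.

def cut_set_retime(edges, S, k):
    retimed = []
    for u, v, d in edges:
        if u in S and v not in S:      # edge crosses S -> T: add k delays
            d += k
        elif u not in S and v in S:    # edge crosses T -> S: remove k delays
            d -= k
        if d < 0:
            raise ValueError(f"edge ({u},{v}) would get negative delay")
        retimed.append((u, v, d))
    return retimed

# Toy feedback loop a -> b -> c -> a with delays on each edge.
edges = [("a", "b", 0), ("b", "c", 2), ("c", "a", 1)]
print(cut_set_retime(edges, S={"a"}, k=1))
# [('a', 'b', 1), ('b', 'c', 2), ('c', 'a', 0)]  -- loop delay preserved
```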
Abstract:
Androgen withdrawal induces hypoxia in androgen-sensitive tissue; this is important because in the tumour microenvironment hypoxia is known to drive malignant progression. This study examined the time-dependent effect of androgen deprivation therapy (ADT) on tumour oxygenation and investigated the role of ADT-induced hypoxia in malignant progression in prostate tumours. LNCaP xenografted tumours were treated with anti-androgens and tumour oxygenation was measured. Dorsal skin fold (DSF) chambers were used to image tumour vasculature in vivo. Quantitative PCR (QPCR) identified differential gene expression following treatment with bicalutamide. Bicalutamide- and vehicle-only-treated tumours were re-established in vitro, and invasion and sensitivity to docetaxel were measured. Tumour growth delay was calculated following treatment with bicalutamide combined with the bioreductive drug AQ4N. Tumour oxygenation measurements showed a precipitous decrease following initiation of ADT. A clinically relevant dose of bicalutamide (2 mg/kg/day) decreased tumour oxygenation by 45% within 24 h, reaching a nadir of 0.09% oxygen (0.67±0.06 mmHg) by day 7; this persisted until day 14, after which oxygenation increased up to day 28. Using DSF chambers, LNCaP tumours treated with bicalutamide showed loss of small vessels at days 7 and 14, with revascularization occurring by day 21. QPCR showed changes in gene expression consistent with the vascular changes and with malignant progression. Cells from bicalutamide-treated tumours were more malignant than vehicle-treated controls. Combining bicalutamide with AQ4N (50 mg/kg; single dose) caused greater tumour growth delay than bicalutamide alone. This study shows that bicalutamide-induced hypoxia selects for cells that show malignant progression; targeting hypoxic cells may provide greater clinical benefit.
Abstract:
In view of both the delay in obtaining identification by conventional methods following blood-culture positivity in patients with candidaemia and the close relationship between species and fluconazole (FLC) susceptibility, early speciation of positive blood cultures has the potential to influence therapeutic decisions. The aim was to develop a rapid test to differentiate FLC-resistant from FLC-sensitive Candida species. Three TaqMan-based real-time PCR assays were developed to identify up to six Candida species directly from BacT/Alert blood-culture bottles that showed yeast cells on Gram staining at the time of initial positivity. Target sequences in the rRNA gene complex were amplified, using a consensus two-step PCR protocol, to identify Candida albicans, Candida parapsilosis, Candida tropicalis, Candida dubliniensis, Candida glabrata and Candida krusei; these are the most commonly encountered Candida species in blood cultures. The first four of these (the characteristically FLC-sensitive group) were identified in a single reaction tube using one fluorescent TaqMan probe targeting 18S rRNA sequences conserved in the four species. The FLC-resistant species C. krusei and C. glabrata were detected in two further reactions, each with species-specific probes. This method was validated with clinical specimens (blood cultures) positive for yeast (n=33 sets) and the results were 100% concordant with those of phenotypic identification carried out concomitantly. The reported assay significantly reduces the time required to identify the presence of C. glabrata and C. krusei in comparison with a conventional phenotypic method, from ~72 to
Abstract:
We study the charge transfer between colliding ions, atoms, or molecules within time-dependent density functional theory. Two particular cases are presented: the collision between a proton and a helium atom, and between a gold atom and a butane molecule. In the first case, proton kinetic energies between 16 keV and 1.2 MeV are considered, with impact parameters between 0.31 and 1.9 Å. The partial transfer of charge is monitored over time, and the total cross-section is obtained as a function of the proton kinetic energy. In the second case, we analyze one trajectory and discuss spin-dependent charge transfer between the different fragments.
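For context, the total cross-section follows from weighting the computed transfer probability $P(b)$ by the impact parameter: $\sigma = 2\pi \int P(b)\, b\, db$. A minimal numerical sketch of that integral (the $P(b)$ values are placeholders, not results from the paper):

```python
import numpy as np

# Total charge-transfer cross-section from impact-parameter sampling:
#   sigma = 2*pi * integral of P(b) * b db
# b in angstrom; P(b) values are placeholders standing in for TDDFT
# transfer probabilities at one fixed proton kinetic energy.
b = np.array([0.31, 0.6, 1.0, 1.5, 1.9])      # impact parameters (angstrom)
P = np.array([0.85, 0.60, 0.30, 0.10, 0.02])  # transfer probabilities (illustrative)

integrand = P * b
sigma = 2 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(b))
print(f"sigma ~ {sigma:.2f} angstrom^2")       # trapezoidal estimate
```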
Abstract:
We present theoretical delay times and rates of the thermonuclear explosions thought to produce Type Ia supernovae (SNe Ia), including the double-detonation sub-Chandrasekhar-mass model, using the population synthesis binary evolution code StarTrack. If detonations of sub-Chandrasekhar-mass carbon-oxygen white dwarfs, triggered by a detonation in an accumulated layer of helium on the white dwarf's surface ('double-detonation' models), produce thermonuclear explosions characteristically similar to SNe Ia, then these sub-Chandrasekhar-mass explosions may account for a substantial fraction of the observed SN Ia rate. Regardless of whether all double-detonations look like 'normal' SNe Ia, the explosions are expected to be bright and thus potentially detectable. Additionally, we find that the delay time distribution of double-detonation sub-Chandrasekhar-mass SNe Ia divides into two distinct formation channels: the 'prompt' helium-star channel with delay times
Abstract:
Particle-in-cell (PIC) simulations of relativistic shocks are in principle capable of predicting the spectra of photons that are radiated incoherently by the accelerated particles. The most direct method evaluates the spectrum using the fields given by the Liénard-Wiechert potentials. However, for relativistic particles this procedure is computationally expensive. Here we present an alternative method that uses the concept of the photon formation length. The algorithm is suitable for evaluating spectra both from particles moving in a specific realization of a turbulent electromagnetic field and from trajectories given as a finite, discrete time series by a PIC simulation. The main advantage of the method is that it identifies the intrinsic spectral features and filters out those that are artifacts of the limited time resolution and finite duration of the input trajectories.
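For reference, the direct method mentioned above evaluates, for each particle, the standard radiation integral built from the Liénard-Wiechert fields (see, e.g., Jackson, Classical Electrodynamics):

$$
\frac{d^2 I}{d\omega\, d\Omega}
  = \frac{e^2}{4\pi^2 c}
    \left|
      \int_{-\infty}^{\infty}
      \frac{\mathbf{n} \times \left[ (\mathbf{n}-\boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}} \right]}
           {\left(1-\boldsymbol{\beta}\cdot\mathbf{n}\right)^{2}}
      \, e^{\,i\omega\left(t - \mathbf{n}\cdot\mathbf{r}(t)/c\right)}
      \, dt
    \right|^{2}
$$

For relativistic particles the integrand varies on a retarded-time scale compressed by the factor $(1-\boldsymbol{\beta}\cdot\mathbf{n})$, so evaluating it over many frequencies, directions, and particles requires very fine time resolution; that is what makes the direct approach expensive.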
Abstract:
This article introduces a resource allocation solution capable of handling mixed-media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance, and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces a network-linked swarm size, an increased diversity function, and a learning method based on the personal-best (Pbest) results of the swarm. These additions to the PSO improve convergence speed relative to Adaptive PSO while maintaining the QoS improvement of the NUM. Notably, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
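To make the setting concrete, here is a minimal global-best PSO for a time-allocation flavour of the NUM problem: each particle is a division of the channel time among devices, kept on the feasibility simplex and scored by a concave utility. This is a generic PSO baseline under illustrative parameters, not the CTA-PSO of the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy NUM objective: sum of weighted log-utilities of allocated time.
N, T_total = 4, 1.0
w = np.array([1.0, 2.0, 0.5, 1.5])             # per-device utility weights

def utility(t):                                 # t sums to T_total
    return np.sum(w * np.log(1e-9 + t))

def normalize(t):                               # project back onto simplex
    t = np.clip(t, 1e-6, None)
    return T_total * t / t.sum()

P = 30                                          # swarm size (illustrative)
pos = np.array([normalize(p) for p in rng.random((P, N))])
vel = np.zeros((P, N))
pbest, pbest_val = pos.copy(), np.array([utility(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, P, N))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.array([normalize(p) for p in pos + vel])
    vals = np.array([utility(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

# For this log-utility the optimum is t_i = w_i * T_total / sum(w).
print(gbest)
```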
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach rests on the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS), and run-time system, should identify the critical tasks and ensure their correct operation by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation for the overall system. The run-time system identifies critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly-accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate below the voltage needed for fully correct operation and thus consume less power, since such tasks, unlike the critical ones, need not always be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
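A toy sketch of the placement idea: tasks tagged as critical by programming-model directives go to cores held at nominal voltage/frequency, while the rest may be scheduled on scaled-down cores. The task and core model below is hypothetical, purely to illustrate the split:

```python
from dataclasses import dataclass

# Toy criticality-aware task placement: critical tasks run on cores at
# nominal voltage; the rest may run on scaled-down, error-prone cores.
# Fields are illustrative, not from the paper's run-time system.

@dataclass
class Core:
    name: str
    reliable: bool      # True: nominal V/f; False: scaled-down

@dataclass
class Task:
    name: str
    critical: bool      # set via a programming-model directive

def schedule(tasks, cores):
    reliable = [c for c in cores if c.reliable]
    approx = [c for c in cores if not c.reliable] or reliable
    placement = {}
    for i, t in enumerate(tasks):
        pool = reliable if t.critical else approx
        placement[t.name] = pool[i % len(pool)].name   # round-robin in pool
    return placement

cores = [Core("big0", True), Core("lil0", False), Core("lil1", False)]
tasks = [Task("control_loop", True), Task("pixel_filter", False),
         Task("checksum", True), Task("preview_decode", False)]
print(schedule(tasks, cores))
```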
Abstract:
We develop a continuous-time asset price model to capture the time-series momentum documented recently. The underlying stochastic delay differential system facilitates the analysis of the effects of the different time horizons used by momentum trading. By studying an optimal asset allocation problem, we find that the performance of the time series momentum strategy can be significantly improved by combining it with market fundamentals and with timing opportunity with respect to market trend and volatility. Furthermore, the results also hold for different time horizons, in out-of-sample tests, and with short-sale constraints. The outperformance of the optimal strategy is immune to market states, investor sentiment and market volatility.
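To illustrate what a stochastic delay differential system looks like numerically, here is an Euler-Maruyama sketch of a toy price process whose drift depends on the state one delay $\tau$ ago, standing in for a momentum signal over that horizon. The dynamics and parameters are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama for a toy stochastic delay differential equation:
#   dX(t) = kappa * (X(t - tau) - X(t)) dt + sigma dW(t)
# The delayed term plays the role of a momentum signal over horizon tau.
kappa, sigma, tau = 1.5, 0.2, 1.0    # illustrative parameters
dt, T = 0.01, 10.0
n, lag = int(T / dt), int(tau / dt)

X = np.empty(n)
X[: lag + 1] = 1.0                   # constant history on [-tau, 0]
for i in range(lag, n - 1):
    drift = kappa * (X[i - lag] - X[i])
    X[i + 1] = X[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print(X[-5:])                        # end of one simulated path
```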
Abstract:
A multiuser scheduling multiple-input multiple-output (MIMO) cognitive radio network (CRN) with space-time block coding (STBC) is considered in this paper, where one secondary base station (BS) communicates with one secondary user (SU) selected from K candidates. We first investigate the joint impact on the outage performance of imperfect channel state information (CSI) in the BS → SUs and BS → PU links caused by channel estimation errors and feedback delay. We obtain exact outage probability expressions for the considered network under the peak interference power $I_P$ at the primary user (PU) and the maximum transmit power $P_m$ at the BS, covering both perfect and imperfect CSI in BS → SUs and BS → PU. In addition, asymptotic expressions for the outage probability in the high-SNR region are derived, from which we obtain several important insights into the system design; for example, multiuser diversity can be exploited only with perfect CSI in BS → SUs, i.e., without channel estimation errors or feedback delay. Finally, simulation results confirm the correctness of our analysis.
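As an illustration of how outage under imperfect CSI is commonly quantified, the sketch below uses a Gauss-Markov error model, $h = \sqrt{1-\sigma_e^2}\,\hat h + \sigma_e\, e$, schedules the user on the channel estimates, and counts outages against the true channels. The model and parameters are generic, not the paper's exact system:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo outage estimate under a Gauss-Markov imperfect-CSI model.
# Selection among K users is done on the *estimates*; outage is judged
# on the true channels. Parameters are generic placeholders.
K, trials = 4, 200_000
snr, rate_th = 10.0, 1.0             # average SNR (linear), target (bit/s/Hz)
e2 = 0.1                             # CSI error variance

shape = (trials, K)
h_hat = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
err = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
h = np.sqrt(1 - e2) * h_hat + np.sqrt(e2) * err

best = np.abs(h_hat).argmax(axis=1)          # scheduler sees estimates only
g = np.abs(h[np.arange(trials), best]) ** 2  # true gain of the chosen user
outage = np.mean(np.log2(1 + snr * g) < rate_th)
print(f"outage probability ~ {outage:.4f}")
```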
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
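The metrics in question reduce to simple ratios, which can be made concrete with placeholder numbers (none of these are the paper's measurements): throughput per watt for energy efficiency, and energy per unit of work at a matched QoS level for the iso-QoS comparison:

```python
# Sketch of workload-specific, platform-independent comparison metrics:
# throughput per watt, and energy per option at an iso-QoS point (same
# fraction of options priced on time). All numbers are placeholders.

platforms = {
    # name: (options_priced_per_sec, avg_power_watts, qos_fraction)
    "arm_microserver_cluster": (9_000.0, 180.0, 1.00),
    "sandy_bridge_2s":         (10_000.0, 400.0, 1.00),
}

for name, (tput, power, qos) in platforms.items():
    print(f"{name}: {tput / power:.1f} options/s/W at QoS {qos:.0%}")

# Iso-QoS energy ratio: joules per option at the same 100% QoS target.
e = {n: p / t for n, (t, p, _) in platforms.items()}
ratio = e["arm_microserver_cluster"] / e["sandy_bridge_2s"]
print(f"ARM cluster uses {ratio:.0%} of the Sandy Bridge energy per option")
```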
Abstract:
Motivated by the need to design efficient and robust fully-distributed computation in highly dynamic networks such as peer-to-peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\mathrm{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\mathrm{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient for the design of efficient fully-distributed algorithms for fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
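To make the churn setting concrete, the toy simulation below applies random joins and departures to a graph in which each newcomer attaches to a constant number of random existing nodes. Note that this naive rule keeps the graph sparse on average but does not enforce a constant degree bound or guarantee expansion, which is precisely why a protocol like the one described is nontrivial. Everything here is illustrative, not the paper's protocol:

```python
import random

random.seed(4)

# Toy churn process (illustrative; NOT the paper's protocol): each new
# node attaches to d random existing nodes, and `churn` random nodes
# leave per round. We then check max degree and connectivity.
d, n0, rounds, churn = 3, 100, 50, 5

adj = {v: set() for v in range(n0)}
for v in range(1, n0):                         # bootstrap attachments
    for u in random.sample(range(v), min(d, v)):
        adj[v].add(u); adj[u].add(v)
next_id = n0

def connected(adj):
    """BFS from an arbitrary node; True if every node is reached."""
    start = next(iter(adj)); seen = {start}; stack = [start]
    while stack:
        for u in adj[stack.pop()]:
            if u not in seen:
                seen.add(u); stack.append(u)
    return len(seen) == len(adj)

for _ in range(rounds):
    leavers = set(random.sample(sorted(adj), churn))   # adversarial leaves
    for v in leavers:
        for u in adj.pop(v):
            if u in adj:
                adj[u].discard(v)
    for _ in range(churn):                             # matching joins
        nbrs = random.sample(sorted(adj), d)
        adj[next_id] = set(nbrs)
        for u in nbrs:
            adj[u].add(next_id)
        next_id += 1

print("max degree:", max(len(s) for s in adj.values()),
      "connected:", connected(adj))
```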