973 results for Short Loadlength, Fast Algorithms


Relevance: 30.00%

Publisher:

Abstract:

3-D full-wave method of moments (MoM) based electromagnetic analysis is a popular means of accurately solving Maxwell's equations. The time and memory bottlenecks associated with such a solution have been addressed over the last two decades by linear-complexity fast solver algorithms. However, solving the 3-D full-wave MoM system on an arbitrary mesh of a package-board structure does not guarantee an accurate result, since the discretization may not be fine enough to capture spatial changes in the solution variable. At the same time, uniform over-meshing of the entire structure generates a large number of solution variables and therefore requires an unnecessarily large matrix solution. In this paper, different refinement criteria are studied in an adaptive mesh refinement platform. Consequently, the most suitable conductor mesh refinement criterion for MoM-based electromagnetic package-board extraction is identified, and the advantages of this adaptive strategy are demonstrated from both accuracy and speed perspectives. The results are also compared with those of the recently reported integral-equation-based h-refinement strategy. Finally, a new methodology to expedite each adaptive refinement pass is proposed.
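The adaptive h-refinement loop described above can be sketched in a simplified 1-D analogue. Everything here is an illustrative stand-in, not the paper's conductor-mesh criteria: the mesh, the solution function, and the jump-based refinement criterion are all toy choices.

```python
import numpy as np

def refine_mesh(nodes, solution_fn, tol):
    """One adaptive refinement pass: flag elements where the solution
    jump exceeds `tol` and bisect them (1-D stand-in for surface meshes)."""
    u = solution_fn(nodes)
    new_nodes = [nodes[0]]
    for a, b, ua, ub in zip(nodes[:-1], nodes[1:], u[:-1], u[1:]):
        if abs(ub - ua) > tol:                 # refinement criterion
            new_nodes.append(0.5 * (a + b))    # bisect the element
        new_nodes.append(b)
    return np.array(new_nodes)

# A field varying sharply near x = 0 triggers refinement only there,
# avoiding the cost of uniform over-meshing.
mesh = np.linspace(-1.0, 1.0, 9)
refined = refine_mesh(mesh, lambda x: np.tanh(10 * x), tol=0.2)
```

Iterating this pass until no element is flagged concentrates unknowns where the solution actually varies, which is the speed/accuracy trade-off the paper exploits.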

Relevance: 30.00%

Publisher:

Abstract:

Time division multiple access (TDMA) based channel access mechanisms outperform contention-based mechanisms in terms of channel utilization, reliability, and power consumption, especially for high-data-rate applications in wireless sensor networks (WSNs). Most existing distributed TDMA scheduling techniques can be classified as either static or dynamic. The primary purpose of static TDMA scheduling algorithms is to improve channel utilization by generating a schedule of smaller length, but they usually take a long time to compute the schedule and hence are not suitable for WSNs whose network topology changes dynamically. Dynamic TDMA scheduling algorithms, on the other hand, generate a schedule quickly but are not efficient in terms of the generated schedule length. In this paper, we propose a novel scheme for TDMA scheduling in WSNs that can generate a compact schedule, comparable to static scheduling algorithms, while its runtime performance matches that of dynamic scheduling algorithms. Furthermore, the proposed distributed TDMA scheduling algorithm can trade off schedule length against the time required to generate the schedule. This allows WSN developers to tune performance to the requirements of the prevalent WSN application and the need to perform re-scheduling. Finally, the proposed TDMA scheduling is tolerant to packet loss caused by erroneous wireless channels. The algorithm has been simulated in the Castalia simulator to compare its performance with that of existing approaches in terms of generated schedule length and the time required to generate the TDMA schedule. Simulation results show that the proposed algorithm generates a compact schedule in much less time.
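For orientation, the constraint every TDMA schedule must satisfy is that no node shares a slot with any 1- or 2-hop neighbour. The sketch below is a generic greedy colouring under that constraint, written centrally for clarity; it is not the distributed trade-off algorithm the paper proposes.

```python
def tdma_schedule(adj):
    """Greedy TDMA slot assignment on an adjacency map: each node takes
    the smallest slot unused by its 1- and 2-hop neighbours."""
    slots = {}
    for node in sorted(adj):                    # deterministic visit order
        forbidden = set()
        for nb in adj[node]:                    # 1-hop neighbours
            forbidden.add(slots.get(nb))
            for nb2 in adj[nb]:                 # 2-hop neighbours
                if nb2 != node:
                    forbidden.add(slots.get(nb2))
        s = 0
        while s in forbidden:
            s += 1
        slots[node] = s
    return slots

# 4-node line network 0-1-2-3: a 3-slot schedule suffices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
schedule = tdma_schedule(adj)
```

The schedule length here is the number of distinct slots used; static algorithms spend more time shrinking this number, which is exactly the axis the paper's scheme trades against scheduling time.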

Relevance: 30.00%

Publisher:

Abstract:

Three-dimensional (3-D) full-wave electromagnetic simulation using the method of moments (MoM) under the framework of fast solver algorithms like the fast multipole method (FMM) is often bottlenecked by the speed of convergence of the Krylov-subspace-based iterative process. This is primarily because the electric field integral equation (EFIE) matrix, even with cutting-edge preconditioning techniques, often exhibits bad spectral properties arising from frequency- or geometry-based ill-conditioning, which renders iterative solvers slow to converge or occasionally stagnant. In this communication, a novel technique to expedite the convergence of the MoM matrix solution at a specific frequency is proposed, by extracting eigenvectors from a previously solved neighboring frequency and applying them in an augmented generalized minimum residual (AGMRES) iterative framework. This technique can be applied in unison with any preconditioner. Numerical results demonstrate up to 40% speed-up in convergence using the proposed Eigen-AGMRES method.
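The payoff of recycling eigenvectors can be seen on a toy symmetric system: deflating a few known small eigenpairs maps the troublesome part of the spectrum to 1 and shrinks the condition number that governs Krylov convergence. This numpy sketch only illustrates that spectral mechanism; the paper's Eigen-AGMRES extracts the eigenvectors from a previously solved neighbouring frequency and applies them inside augmented GMRES, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Symmetric toy matrix with three tiny eigenvalues, standing in for an
# ill-conditioned EFIE system matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-4, 5e-4, 1e-3], rng.uniform(1.0, 2.0, n - 3)])
A = (Q * eigs) @ Q.T

# "Recycled" eigenpairs of the troublesome small eigenvalues; the
# deflation operator maps each of them to eigenvalue 1, leaving the rest.
W, lam = Q[:, :3], eigs[:3]
M = np.eye(n) + W @ np.diag(1.0 / lam - 1.0) @ W.T

cond_A = np.linalg.cond(A)
cond_MA = np.linalg.cond(M @ A)   # spectrum now clustered in [1, 2]
```

With the deflated spectrum clustered in [1, 2], a Krylov solver needs far fewer iterations, regardless of which additional preconditioner is applied.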

Relevance: 30.00%

Publisher:

Abstract:

A heterostructure of graphene and zinc oxide (ZnO) nanowires (NWs) is fabricated by sandwiching an array of ZnO NWs between two graphene layers for an ultraviolet (UV) photodetector. This unique structure allows the NWs to be in direct contact with the graphene layers, minimizing the effect of the substrate or metal electrodes. In this device, the graphene layers act as highly conducting electrodes with high mobility of the generated charge carriers. Excellent sensitivity to UV illumination is demonstrated, with a reversible photoresponse even for short periods of UV illumination. Response and recovery times of a few milliseconds demonstrate a much faster photoresponse than most conventional ZnO nanostructure-based photodetectors. It is shown that a built-in electric field generated at the interface between graphene and the ZnO NWs effectively contributes to the separation of photogenerated electron-hole pairs for photocurrent generation without any external bias. Upon application of an external bias voltage, the electric field further increases the drift velocity of photogenerated electrons by reducing charge recombination rates, resulting in an enhancement of the photocurrent. The graphene-based heterostructure (G/ZnO NW/G) therefore opens avenues for constructing novel heterostructures combining two functionally dissimilar materials.


Relevance: 30.00%

Publisher:

Abstract:

Survival from out-of-hospital cardiac arrest depends largely on two factors: early cardiopulmonary resuscitation (CPR) and early defibrillation. CPR must be interrupted for a reliable automated rhythm analysis because chest compressions induce artifacts in the ECG. Unfortunately, interrupting CPR adversely affects survival. In the last twenty years, research has focused on designing methods for analysis of the ECG during chest compressions. Most approaches are based either on adaptive filters that remove the CPR artifact or on robust algorithms that directly diagnose the corrupted ECG. In general, all methods report low specificity when tested on short ECG segments, and the real impact of continuous rhythm analysis during CPR on CPR delivery remains unknown. Recently, researchers have proposed a new methodology to measure this impact. Moreover, new strategies for fast rhythm analysis during ventilation pauses, as well as high-specificity algorithms, have been reported. Our objective is to present a thorough review of the field as the starting point for these latest developments and to underline the open questions and future lines of research to be explored in the coming years.
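A minimal example of the adaptive-filtering family mentioned above: an LMS filter driven by a reference signal correlated with the compressions (e.g. compression depth) subtracts its artifact estimate from the corrupted ECG. All signals here are synthetic toys and the parameters are illustrative; clinical CPR-artifact filters are considerably more elaborate.

```python
import numpy as np

def lms_filter(ref, noisy, mu=0.05, order=8):
    """LMS adaptive artifact canceller: the error e = noisy - estimate
    is the cleaned ECG once the filter weights converge."""
    w = np.zeros(order)
    cleaned = np.zeros_like(noisy)
    for n in range(order, len(noisy)):
        x = ref[n - order:n][::-1]       # most recent reference taps
        e = noisy[n] - w @ x             # artifact-free estimate
        w += 2 * mu * e * x              # LMS weight update
        cleaned[n] = e
    return cleaned

fs = 250.0                                # Hz, typical defibrillator ECG
t = np.arange(0, 8, 1 / fs)
ecg = 0.3 * np.sin(2 * np.pi * 1.2 * t)   # toy "rhythm" component
ref = np.sin(2 * np.pi * 1.8 * t)         # ~108 compressions per minute
noisy = ecg + 1.5 * ref                   # CPR artifact dominates the ECG
cleaned = lms_filter(ref, noisy)
```

Because the filter only cancels content correlated with the reference, the underlying rhythm passes through; the residual artifact after convergence is what drives the low specificity figures discussed above.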

Relevance: 30.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion can generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
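The logarithmic-time growth argument can be captured in a few lines: if every insertion event exposes two new insertion sites, the number of open sites doubles each synchronous round and the polymer reaches length N in about log₂ N rounds. This is an idealized counting model of the chain reaction, not a simulation of the DNA kinetics.

```python
import math

def rounds_to_length(target, sites=1, length=1):
    """Count synchronous rounds until the polymer reaches `target`
    monomers, assuming every open site accepts one monomer per round
    and each insertion exposes two new sites."""
    rounds = 0
    while length < target:
        length += sites    # one monomer inserted at every open site
        sites *= 2         # each insertion creates two new sites
        rounds += 1
    return rounds

# A million-monomer polymer needs only ~log2(1e6) = 20 rounds, versus
# ~1e6 end-addition steps for passive, one-end growth.
rounds = rounds_to_length(10 ** 6)
assert rounds == math.ceil(math.log2(10 ** 6))
```

The contrast with passive self-assembly is the point: end-addition gives linear-time growth, while insertion-site doubling gives the exponential rate observed in the experiments.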

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random-walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance: 30.00%

Publisher:

Abstract:

The propagation of a fast muon population, due mainly to collisional effects, in a dense deuterium-tritium (DT) mixture is investigated and analysed within the framework of the relativistic Fokker-Planck equation. Without the approximation that the muons propagate in straight lines through the DT mixture, the muon penetration length, the straggling length, and the mean transverse dispersion radius are calculated for different initial energies, and especially for different densities of the densely compressed DT mixture in our proposed muon-driven fast ignition (FI). Unlike laser-driven FI, which requires super-high temperatures, muons can catalyze DT fusion at lower temperatures and may generate an ignition spark before self-heating fusion follows. Our calculation is important for the feasibility and experimental study of muon-driven FI.

Relevance: 30.00%

Publisher:

Abstract:

The propagation of fast electrons in an inverse cone target is investigated computationally and experimentally. Two-dimensional particle-in-cell simulation shows that substantial numbers of fast electrons are generated at the outer tip of an inverse cone target irradiated by a short intense laser pulse. These electrons are guided and confined to propagate along the inverse cone wall, forming a large surface current. The propagation induces strong transient electric and magnetic fields, which in turn guide and confine the surface electron current. The experiment qualitatively verifies the guiding and confinement of the strong electron current in the wall surface. The large surface current and the induced strong fields are of importance for fast-ignition-related research.

Relevance: 30.00%

Publisher:

Abstract:

Earthquake early warning (EEW) systems have been rapidly developing over the past decade. The Japan Meteorological Agency (JMA) operated an EEW system during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention in activating mitigation actions and must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, which led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm based on an existing deterministic model to extend the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake.

To overcome the challenge of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and the decision process is used. This approach is called Performance-Based Earthquake Early Warning, which is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
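At its core, the cost-benefit layer of such a framework reduces to comparing the expected loss avoided against the cost of acting. The sketch below is a deliberately minimal caricature of that rule with hypothetical numbers; it omits lead time, value-of-information, and performance-based loss modelling entirely.

```python
def should_activate(p_strong, cost_action, loss_if_strong):
    """Activate a mitigation action when the expected loss avoided
    exceeds its cost (single-action, single-update caricature)."""
    return p_strong * loss_if_strong > cost_action

# Elevator control: stopping at the nearest floor is cheap, so even a
# modest probability of strong shaking justifies acting.
act_20 = should_activate(p_strong=0.20, cost_action=1.0, loss_if_strong=50.0)
act_01 = should_activate(p_strong=0.01, cost_action=1.0, loss_if_strong=50.0)
```

Automating this comparison is what removes the need for human intervention within the few-second warning window; the Bayesian machinery above supplies the probability `p_strong` and its updates.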

Relevance: 30.00%

Publisher:

Abstract:

The effects of electron temperature on the radiation fields and the resistance of a short dipole antenna embedded in a uniaxial plasma have been studied. It is found that for ω < ω_p the antenna excites two waves, a slow wave and a fast wave. These waves propagate only within a cone whose axis is parallel to the biasing magnetostatic field B_o and whose semi-cone angle is slightly less than sin⁻¹(ω/ω_p). In the case of ω > ω_p the antenna excites two separate modes of radiation. One of the modes is the electromagnetic mode, while the other mode is of hot plasma origin. A characteristic interference structure is noted in the angular distribution of the field. The far fields are evaluated by asymptotic methods, while the near fields are calculated numerically. The effects of antenna length ℓ, electron thermal speed, and collisional and Landau damping on the near-field patterns have been studied.

The input and the radiation resistances are calculated and are shown to remain finite for nonzero electron thermal velocities. The effect of Landau damping and the antenna length on the input and radiation resistances has been considered.

The radiation condition for solving Maxwell's equations is discussed, and the phase and group velocities for propagation are given. It is found that for ω < ω_p the power flow in the radial direction (cylindrical coordinates) is opposite to the direction of phase propagation. For ω > ω_p the hot plasma mode has similar characteristics.
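For ω < ω_p, the confinement cone described above is easy to evaluate numerically; the helper below simply encodes the stated semi-cone angle sin⁻¹(ω/ω_p), with frequencies in any consistent units (the function name and example values are illustrative).

```python
import math

def resonance_cone_angle(omega, omega_p):
    """Semi-cone angle (radians) bounding slow- and fast-wave propagation
    for a short dipole in a uniaxial plasma, valid for omega < omega_p."""
    if omega >= omega_p:
        raise ValueError("cone confinement applies only for omega < omega_p")
    return math.asin(omega / omega_p)

# Driving the dipole at half the plasma frequency confines the radiated
# waves within a cone of half-angle 30 degrees about B_o.
theta_deg = math.degrees(resonance_cone_angle(0.5, 1.0))
```

As ω approaches ω_p from below the cone opens toward 90 degrees, consistent with the transition to the two-mode radiation regime described for ω > ω_p.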

Relevance: 30.00%

Publisher:

Abstract:

This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules that give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.

Relevance: 30.00%

Publisher:

Abstract:

High-throughput DNA sequencing (HTS) instruments today can generate millions of sequencing reads in a short period of time, which poses a serious challenge to current bioinformatics pipelines in processing such an enormous amount of data quickly and economically. Modern graphics cards are powerful processing units consisting of hundreds of scalar processors working in parallel to render high-definition graphics in real time. It is this computational capability that we propose to harness to accelerate some of the time-consuming steps in analyzing data generated by HTS instruments. We have developed BarraCUDA, a novel sequence-mapping software package that utilizes the parallelism of NVIDIA CUDA graphics cards to map sequencing reads to locations on a reference genome. While delivering similar mapping fidelity to other mainstream programs, BarraCUDA is an order of magnitude faster in mapping throughput than its CPU counterparts. The software can also use multiple CUDA devices in parallel to further accelerate mapping throughput. BarraCUDA is designed to take advantage of GPU parallelism to accelerate the mapping of millions of sequencing reads generated by HTS instruments. By doing this, we could, at least in part, streamline current bioinformatics pipelines so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available at http://seqbarracuda.sf.net
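For readers unfamiliar with read mapping, the task BarraCUDA parallelizes looks like this in miniature: find where each short read occurs in a reference. The k-mer index below is a toy stand-in for the BWT/FM-index machinery real mappers use, and all names and sequences are illustrative.

```python
from collections import defaultdict

def build_index(genome, k):
    """Index every k-mer of the reference by its start positions."""
    idx = defaultdict(list)
    for i in range(len(genome) - k + 1):
        idx[genome[i:i + k]].append(i)
    return idx

def map_read(read, idx, k):
    """Look up the read's first k-mer as a seed; real mappers extend
    seeds and tolerate mismatches, and a GPU batches thousands of
    reads through this lookup-and-extend step in parallel."""
    return idx.get(read[:k], [])

genome = "ACGTACGTTAGC"
idx = build_index(genome, 4)
hits = map_read("TACGTT", idx, 4)   # candidate start positions
```

Because each read is mapped independently, the workload is embarrassingly parallel, which is exactly why hundreds of scalar GPU cores yield an order-of-magnitude throughput gain.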

Relevance: 30.00%

Publisher:

Abstract:

This paper presents new methods for computing the step sizes of the subband-adaptive iterative shrinkage-thresholding algorithms proposed by Bayram & Selesnick and Vonesch & Unser. The method yields tighter wavelet-domain bounds of the system matrix, thus leading to improved convergence speeds. It is directly applicable to non-redundant wavelet bases, and we also adapt it for cases of redundant frames. It turns out that the simplest and most intuitive setting for the step sizes that ignores subband aliasing is often satisfactory in practice. We show that our methods can be used to advantage with reweighted least squares penalty functions as well as L1 penalties. We emphasize that the algorithms presented here are suitable for performing inverse filtering on very large datasets, including 3D data, since inversions are applied only to diagonal matrices and fast transforms are used to achieve all matrix-vector products.
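For orientation, the baseline these step-size refinements improve on is plain iterative shrinkage-thresholding with the single global step 1/‖A‖², shown below on a small synthetic sparse-recovery problem. Sizes, seeds, and the penalty weight are illustrative; the paper's tighter subband-wise bounds are not reproduced here.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter):
    """Plain ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 using the
    classical global step 1/||A||_2^2, i.e. one spectral bound for the
    whole system rather than per-subband bounds."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:4] = [3.0, -2.0, 1.5, 2.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.1, n_iter=500)
```

Since the iteration cost is dominated by the two matrix-vector products, any step-size tightening translates directly into fewer iterations at the same per-iteration cost, which is the source of the reported speed-ups on large 3D datasets.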

Relevance: 30.00%

Publisher:

Abstract:

An established Stochastic Reactor Model (SRM) is used to simulate the transition from Spark Ignition (SI) to Homogeneous Charge Compression Ignition (HCCI) combustion mode in a four-cylinder, in-line, four-stroke, naturally aspirated, direct-injection SI engine with cam profile switching. The SRM is coupled with GT-Power, a one-dimensional engine simulation tool used for modelling engine breathing during the open-valve portion of the engine cycle, enabling multi-cycle simulations. The mode change is achieved by switching the cam profiles and phasing, resulting in a Negative Valve Overlap (NVO), opening the throttle, advancing the spark timing, and reducing the fuel mass, as well as using a pilot injection. A proven technique for tabulating the model is used to create look-up tables in both SI and HCCI modes. In HCCI mode several tables are required, including tables for the first NVO, transient valve timing NVO, transient valve timing HCCI, and steady valve timing HCCI and NVO. This makes it possible to simulate the transition with detailed chemistry in very short computation times. The tables are then used to optimise the transition with the goal of reducing NOx emissions and fluctuations in IMEP. Copyright © 2010 SAE International.