377 results for dynamic decomposition
Abstract:
Dendritic microenvironments defined by the dynamic internal cavities of a dendrimer were probed through the geometric isomerization of stilbene and azobenzene. A third-generation poly(alkyl aryl ether) dendrimer with a hydrophilic exterior and a hydrophobic interior was used as a reaction cavity in aqueous medium. The dynamic inner cavity sizes were varied by changing the alkyl linkers that connect the branch junctures from ethyl to n-pentyl moieties (C(2)G(3)-C(5)G(3)). Dendrimers constructed with the n-pentyl linker afforded higher solubilities of stilbene and azobenzene. Direct irradiation of trans-stilbene showed that the C(5)G(3) and C(4)G(3) dendrimers afforded considerable phenanthrene formation in addition to cis-stilbene, whereas C(3)G(3) and C(2)G(3) gave only cis-stilbene. Electron-transfer-sensitized trans-cis isomerization, using cresyl violet perchlorate as the sensitizer, also led to similar results. Thermal isomerization of cis-azobenzene to trans-azobenzene within the dendritic microenvironments revealed that the activation energy for cis-to-trans isomerization increased in the order C(5)G(3) < C(4)G(3) < C(3)G(3).
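The activation energies referred to above are typically extracted from the temperature dependence of the measured thermal isomerization rate constants via an Arrhenius analysis; a generic statement (standard notation, not specific to this paper) is:

```latex
k(T) = A\,\exp\!\left(-\frac{E_a}{RT}\right)
\qquad\Longleftrightarrow\qquad
\ln k(T) = \ln A - \frac{E_a}{R}\,\frac{1}{T},
```

so that E_a follows from the slope of ln k versus 1/T measured inside each dendrimer.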
Abstract:
A generalized power tracking algorithm that minimizes the power consumption of digital circuits by dynamic control of the supply voltage and body bias is proposed. A direct power monitoring scheme is also proposed that does not need any replica circuit and hence can sense the total power consumed by the load circuit across process, voltage, and temperature corners. Design details and the performance of the power monitor and tracking algorithm are examined using a simulation framework developed in the UMC 90-nm CMOS triple-well process. The proposed algorithm with the direct power monitor achieves power savings of 42.2% for an activity of 0.02 and 22.4% for an activity of 0.04. Experimental results from a test chip fabricated in the AMS 350-nm process show power savings of 46.3% and 65% for a load circuit operating in the super-threshold and near sub-threshold regions, respectively. The measured resolution of the power monitor is around 0.25 mV, and it has a power overhead of 2.2% of the die power. Issues with loop convergence and design tradeoffs for the power monitor are also discussed.
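The tracking idea can be illustrated with a small sketch; this is an assumed, generic greedy loop driven by a direct power reading, not the paper's algorithm, and all names, ranges and step sizes are placeholders:

```python
# Illustrative sketch only (not the paper's algorithm): a greedy tracking loop
# that perturbs supply voltage (Vdd) and body bias (Vbb) and keeps any move that
# the direct power monitor reports as lowering total power.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def track_minimum_power(read_power, apply_settings,
                        vdd=1.0, vbb=0.0, step=0.025, iters=50,
                        vdd_range=(0.5, 1.0), vbb_range=(-0.5, 0.5)):
    apply_settings(vdd, vbb)
    best = read_power()                      # monitor senses total load power
    for _ in range(iters):
        improved = False
        for dvdd, dvbb in ((-step, 0), (step, 0), (0, -step), (0, step)):
            cand_vdd = clamp(vdd + dvdd, *vdd_range)
            cand_vbb = clamp(vbb + dvbb, *vbb_range)
            apply_settings(cand_vdd, cand_vbb)
            p = read_power()
            if p < best:                     # keep the move only if power drops
                vdd, vbb, best = cand_vdd, cand_vbb, p
                improved = True
        if not improved:                     # converged to a local minimum
            break
    apply_settings(vdd, vbb)
    return vdd, vbb, best
```

A real controller would also have to respect a performance constraint before accepting a lower supply voltage and handle the convergence issues mentioned above.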
Abstract:
In this paper, the well-known Adomian Decomposition Method (ADM) is modified to solve fracture problems of laminated multi-directional composites. The results are compared with existing analytical/exact or experimental methods. The existing ADM is modified to improve accuracy and convergence; the modified method is therefore named the Modified Adomian Decomposition Method (MADM). The results from MADM are found to converge very quickly, are simple to apply to fracture (singularity) problems, and are more accurate compared with experimental and analytical methods. MADM is quite efficient and is practically well suited for use in these problems. Several examples are given to check the reliability of the present method. The principle of the decomposition method is described, along with its advantages for the analysis of fracture in laminated uni-directional composites.
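Since the paper builds on the classical ADM, the standard recursion is worth recalling (generic operator notation; the specific modification introduced in MADM is not reproduced here). For an equation written as Lu + Ru + Nu = g, with L an easily invertible linear operator, R the remaining linear operator and N the nonlinear term:

```latex
u = \sum_{n=0}^{\infty} u_n, \qquad
N(u) = \sum_{n=0}^{\infty} A_n, \qquad
A_n = \frac{1}{n!}\,\frac{\mathrm{d}^n}{\mathrm{d}\lambda^n}
      \Big[\, N\Big(\textstyle\sum_{k=0}^{\infty} \lambda^{k} u_k\Big) \Big]_{\lambda=0},
\qquad
u_0 = \Phi + L^{-1} g, \qquad
u_{n+1} = -\,L^{-1}\big(R\,u_n + A_n\big), \quad n \ge 0,
```

where Φ collects the initial/boundary terms. The partial sum over the u_n serves as the approximate solution, and the modifications in MADM aim at improving the accuracy and convergence of this series for the fracture (singularity) problems described above.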
Abstract:
This work focuses on the design of torsional microelectromechanical systems (MEMS) varactors to achieve a high dynamic range of capacitances. MEMS varactors fabricated through the polyMUMPS process are characterized at low and high frequencies for their capacitance-voltage characteristics and electrical parasitics. The effect of parasitic capacitances on the tuning ratio is studied and an equivalent circuit is developed. Two variants of torsional varactors that help to improve the dynamic range despite the parasitics are proposed and characterized. A tuning ratio of 1:8, which is the highest reported in the literature, has been obtained. We also demonstrate through simulations that much higher tuning ratios can be obtained with the proposed designs. The designs and experimental results presented are relevant to CMOS fabrication processes that use low-resistivity substrates. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). DOI: 10.1117/1.JMM.11.1.013006
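The way a fixed parasitic capacitance limits the tuning ratio, which motivates the two proposed variants, follows from a standard relation (generic symbols, not the paper's notation):

```latex
T_{\mathrm{eff}} \;=\; \frac{C_{\max} + C_p}{C_{\min} + C_p} \;<\; \frac{C_{\max}}{C_{\min}}
\qquad \text{for any } C_p > 0,
```

so a parasitic C_p appearing in parallel with the varactor compresses the achievable ratio, and designs that either reduce the effective C_p or enlarge the mechanical C_max/C_min recover dynamic range.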
Abstract:
The importance of air bearing design is growing in engineering. As the trend towards precision and ultra-precision manufacture gains pace and the drive towards higher quality and more reliable products continues, the advantages that can be gained from applying aerostatic bearings to machine tools, instrumentation and test rigs are becoming more apparent. The inlet restrictor design is significant for air bearings because it affects the static and dynamic performance of the bearing. For instance, pocketed orifice bearings give higher load capacity than inherently compensated orifice bearings; however, inherently compensated orifices, also known as laminar flow restrictors, are known to give highly stable air bearing systems (less prone to pneumatic hammer) compared with pocketed orifice systems. They are nevertheless not commonly used because of the difficulties encountered in manufacturing and assembling the orifices. This paper aims to analyse the static and dynamic characteristics of an inherently compensated orifice-based flat-pad air bearing system. Based on the Reynolds equation and the mass conservation equation for incompressible flow, the steady-state characteristics are studied, while the dynamic characteristics are obtained in a similar manner using the corresponding equations for compressible flow. Steady-state experiments were also performed for a single-orifice air bearing and the results are compared with those obtained from the theoretical studies. A technique to ease the assembly of the orifices with the air bearing plate is also discussed, so as to make the manufacture of inherently compensated bearings more commercially viable. (c) 2012 Elsevier Inc. All rights reserved.
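For reference, a commonly used isothermal form of the compressible Reynolds equation underlying such dynamic analyses (generic notation, not necessarily the paper's) is:

```latex
\frac{\partial}{\partial x}\!\left( p\,h^{3}\,\frac{\partial p}{\partial x} \right)
+ \frac{\partial}{\partial y}\!\left( p\,h^{3}\,\frac{\partial p}{\partial y} \right)
= 6\,\mu\,U\,\frac{\partial (p h)}{\partial x} + 12\,\mu\,\frac{\partial (p h)}{\partial t},
```

where p is the film pressure, h the local film thickness, μ the gas viscosity and U the relative sliding speed (zero for a purely static flat pad); dropping the time derivative and treating the density as constant gives the incompressible steady-state form, with the orifice flow entering through the mass-conservation balance at the inlet.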
Abstract:
Many problems of state estimation in structural dynamics permit a partitioning of the system states into nonlinear and conditionally linear substructures. This enables a part of the problem to be solved exactly, using the Kalman filter, and the remainder using Monte Carlo simulations. The present study develops an algorithm that combines sequential importance sampling based particle filtering with Kalman filtering for a fairly general form of process equations, and demonstrates the application of the substructuring scheme to problems of hidden state estimation in structures with local nonlinearities, response sensitivity model updating in nonlinear systems, and characterization of residual displacements in instrumented inelastic structures. The paper also demonstrates theoretically that the sampling variance associated with the substructuring scheme does not exceed the sampling variance of Monte Carlo filtering without substructuring. (C) 2012 Elsevier Ltd. All rights reserved.
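The substructuring idea can be stated generically as a marginalized (Rao-Blackwellized) particle filter; partitioning the state as x_t = (x_t^n, x_t^l) into nonlinear and conditionally linear parts, the posterior factorizes as (our notation):

```latex
p\big(x^{l}_{t},\, x^{n}_{1:t} \mid y_{1:t}\big)
= \underbrace{p\big(x^{l}_{t} \mid x^{n}_{1:t},\, y_{1:t}\big)}_{\text{Gaussian, handled by a Kalman filter}}
\; \underbrace{p\big(x^{n}_{1:t} \mid y_{1:t}\big)}_{\text{approximated by weighted particles}},
```

so each particle carries a nonlinear-state trajectory together with a Kalman mean and covariance for the conditionally linear substate; because part of the state is integrated out analytically, the sampling variance cannot exceed that of a plain Monte Carlo filter, which is the property established theoretically in the paper.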
Abstract:
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ l(2)-norm-based regularization, which is known to remove high-frequency components from the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, combined with l(1)-norm-based regularization, can provide better robustness to noise and better contrast recovery compared with conventional l(2)-based techniques. Moreover, it is shown that the proposed l(1)-based technique is computationally efficient compared with its l(2)-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
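A minimal statement of the resulting frame-to-frame linear inverse problem with l(1) regularization, in generic notation (J is the sensitivity/Jacobian matrix and Δy_k the change in the measurements between frames), would be:

```latex
\Delta x_{k} = \arg\min_{\Delta x}\;
\tfrac{1}{2}\,\big\lVert J\,\Delta x - \Delta y_{k} \big\rVert_{2}^{2}
+ \lambda\,\big\lVert \Delta x \big\rVert_{1},
\qquad
x_{k} = x_{k-1} + \Delta x_{k},
```

which also makes clear why a good estimate of x_0 for the initial frame is required: every subsequent frame is reconstructed as a sparse update on top of it.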
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) offers a huge potential for designing trade-offs involving energy, power, temperature and performance of computing systems. In this paper, we evaluate three different DVFS schemes - our enhancement, for stream programs, of a Petri net performance model based DVFS method originally developed for sequential programs; a simple profile based Linear Scaling method; and an existing hardware based DVFS method for multithreaded applications - using multithreaded stream applications in a full-system Chip Multiprocessor (CMP) simulator. From our evaluation, we find that the software based methods achieve significant Energy/Throughput² (ET⁻²) improvements. The hardware based scheme degrades performance heavily and suffers an ET⁻² loss. Our results indicate that the simple profile based scheme achieves the benefits of the complex Petri net based scheme for stream programs, and present a strong case for independent voltage/frequency control for the different cores of a CMP, which is lacking in most state-of-the-art CMPs. This is in contrast to the conclusions of a recent evaluation of per-core DVFS schemes for multithreaded applications on CMPs.
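For clarity, the metric used above can be written explicitly (E is energy, T throughput and D the delay for a fixed amount of work, so T is proportional to 1/D):

```latex
ET^{-2} = \frac{E}{T^{2}} \;\propto\; E\,D^{2},
```

i.e. the familiar energy-delay-squared product, which credits an energy saving only when it does not come with a disproportionate loss of throughput.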
Abstract:
The assignment of tasks to multiple resources becomes an interesting game theoretic problem, when both the task owner and the resources are strategic. In the classical, nonstrategic setting, where the states of the tasks and resources are observable by the controller, this problem is that of finding an optimal policy for a Markov decision process (MDP). When the states are held by strategic agents, the problem of an efficient task allocation extends beyond that of solving an MDP and becomes that of designing a mechanism. Motivated by this fact, we propose a general mechanism which decides on an allocation rule for the tasks and resources and a payment rule to incentivize agents' participation and truthful reports. In contrast to related dynamic strategic control problems studied in recent literature, the problem studied here has interdependent values: the benefit of an allocation to the task owner is not simply a function of the characteristics of the task itself and the allocation, but also of the state of the resources. We introduce a dynamic extension of Mezzetti's two phase mechanism for interdependent valuations. In this changed setting, the proposed dynamic mechanism is efficient, within period ex-post incentive compatible, and within period ex-post individually rational.
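The static two-phase mechanism for interdependent valuations that the dynamic extension builds on can be sketched generically (our notation; the within-period and dynamic refinements of the paper are not reproduced here):

```latex
\text{Phase 1:}\quad
a^{*}(\hat{\theta}) \in \arg\max_{a}\; \sum_{i}\mathbb{E}\big[\,v_{i}(a,\theta)\mid\hat{\theta}\,\big]
\qquad \text{(allocation from reported signals)}
\qquad
\text{Phase 2:}\quad
t_{i} = \sum_{j\neq i} \hat{v}_{j}
\qquad \text{(Groves-style transfers from reported realized values)}
```

Because an agent's transfer depends only on the others' reported realized values, truthfully reporting one's realized value in the second phase is a (weak) best response, which is what makes efficiency attainable despite the interdependence.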
Abstract:
We address the classical problem of delta feature computation and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained from short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, computed as temporal derivatives of the original features, are used. Typically, the delta features are computed in a smooth fashion using local least-squares (LS) polynomial fitting on each feature vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can also be viewed as modulation filters, as proposed by Hermansky.
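The equivalence can be made concrete with a small sketch; this is a generic implementation of the standard delta formula (the window parameter N and edge handling are our choices), written as the fixed FIR filter the abstract refers to:

```python
# Minimal sketch (generic implementation): the standard delta-feature formula
# written as a fixed FIR filter, which is exactly a Savitzky-Golay
# first-derivative (linear-fit) filter over a window of 2N+1 frames.
import numpy as np

def delta_features(feats, N=2):
    """feats: (T, D) array of per-frame features (e.g. MFCCs); returns (T, D) deltas."""
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    # Convolution kernel [N, N-1, ..., 1, 0, -1, ..., -N] / denom
    kernel = np.arange(N, -N - 1, -1) / denom
    padded = np.pad(feats, ((N, N), (0, 0)), mode="edge")   # replicate edge frames
    out = np.empty_like(feats, dtype=float)
    for d in range(feats.shape[1]):
        out[:, d] = np.convolve(padded[:, d], kernel, mode="valid")
    return out

# Equivalently (same output up to edge handling):
#   scipy.signal.savgol_filter(feats, 2*N + 1, polyorder=1, deriv=1, axis=0)
```

The FIR view is what yields the lower latency: the deltas are obtained by a short fixed-length filter rather than by solving a least-squares fit per frame.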
Abstract:
Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integration by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of the non-wetting droplets. In computations of the wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value for the slip number has to be used in computations of the wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement. Further, the equilibrium model also incorporates the contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.
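For reference, a generic statement of the Navier-slip with friction condition on the solid wall (our notation; the precise non-dimensionalization behind the paper's slip number is not reproduced here) is:

```latex
u \cdot n_{w} = 0, \qquad
\beta\,\big(u - u_{w}\big)\cdot \tau_{i}
= -\,\tau_{i}\cdot\big(\sigma(u,p)\,n_{w}\big), \quad i = 1,2,
```

where n_w and τ_i are the wall normal and tangent directions, σ the stress tensor, u_w the wall velocity and β the friction (slip) coefficient: β → ∞ recovers no-slip, while β → 0 lets the fluid, and hence the contact line, slide freely along the wall.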
Abstract:
With the development of large-scale wireless networks, shortcomings and limitations have emerged in traditional network topology management systems. In this paper, an adaptive algorithm is proposed to maintain the topology of a hybrid wireless superstore network by considering the transactions and the individual network load. The adaptations include choosing the best network connection for a response and switching network connections when the network situation changes. In the design of the topology management system, aiming at intelligence and real-time operation, the study develops and argues for the overall topology management scheme step by step. An architecture for adaptive topology management of hybrid wireless networking resources is made available to the user's mobile device. Simulation results show that the new scheme outperforms the original topology management and is simpler than the original rate-borrowing scheme.
A dynamic bandwidth allocation scheme for interactive multimedia applications over cellular networks
Abstract:
Cellular networks have played a key role in providing high bandwidth to users by employing traditional methods such as guaranteed QoS, based on application category, at the radio access stratum level for the various QoS classes. In addition, newer multimode phones (e.g., phones that support LTE (Long Term Evolution), UMTS, GSM and WiFi all at once) are capable of using multiple access methods simultaneously and can perform seamless handover among the supported technologies to remain connected. With various types of applications (including interactive ones) running on these devices, each with different QoS requirements, this work discusses how QoS (measured in terms of user-level response time, delay, jitter and transmission rate) can be achieved for interactive applications using dynamic bandwidth allocation schemes over cellular networks. We propose a dynamic bandwidth allocation scheme for interactive multimedia applications, with and without background load, in cellular networks. The system has been simulated for many application types running in parallel, and it has been observed that if interactive applications are to be provided with decent response times, a periodic overhaul of the admission control policy has to be performed, taking into account the history and criticality of the applications. The results demonstrate that interactive applications can be provided with good service if the policy database at admission control is reviewed dynamically.
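As a purely illustrative sketch (not the paper's scheme; the flow attributes, weights and review interval are assumptions), a periodic policy review that re-weights bandwidth toward interactive flows whose measured response times have degraded might look like:

```python
# Illustrative sketch only: a periodic policy-review step that re-weights
# bandwidth between interactive and background flows. Class names, weights and
# the review interval are assumptions, not the paper's scheme.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    interactive: bool
    criticality: float      # higher = more important
    demand_kbps: float
    avg_response_ms: float  # measured history

def allocate(flows, capacity_kbps, target_response_ms=200.0):
    # Weight each flow by criticality; boost interactive flows whose measured
    # response time has drifted above the target (the "history" input).
    weights = []
    for f in flows:
        w = f.criticality
        if f.interactive and f.avg_response_ms > target_response_ms:
            w *= f.avg_response_ms / target_response_ms
        weights.append(w)
    total = sum(weights) or 1.0
    return {f.name: min(f.demand_kbps, capacity_kbps * w / total)
            for f, w in zip(flows, weights)}
```

The periodic "review" in the abstract corresponds to calling such a step at regular intervals with the updated response-time history, so that the policy tracks changing load.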
Abstract:
We propose an iterative algorithm to detect transient segments in audio signals. The short-time Fourier transform (STFT) is used to detect rapid local changes in the audio signal. The algorithm iterates two steps: (a) calculating a function of the STFT and (b) building a transient signal. A dynamic thresholding scheme is used to locate the potential positions of transients in the signal. The iterative procedure ensures that genuine transients are built up while localised spectral noise is suppressed using an energy criterion. The extracted transient signal is then compared with a ground-truth dataset. The algorithm performed well on two databases: on the EBU-SQAM database of monophonic sounds it achieved an F-measure of 90%, while on our database of polyphonic audio an F-measure of 91% was achieved. This technique is being used as a preprocessing step for a tempo analysis algorithm and a TSR (Transients + Sines + Residue) decomposition scheme.
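The general flavour of such a detector can be illustrated with a short sketch; this uses a spectral-flux style STFT function with a median-based dynamic threshold and is only an assumed, simplified stand-in for the algorithm described above:

```python
# Minimal sketch: STFT-based transient candidate detection using a spectral-flux
# style function and a dynamic median threshold. The paper's iterative build-up
# of the transient signal and its energy criterion are not reproduced here.
import numpy as np
from scipy.signal import stft

def transient_candidates(x, fs, nperseg=1024, hop=256, k=1.5, win=16):
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)
    mag = np.abs(Z)
    # Spectral flux: summed positive magnitude increase between frames.
    flux = np.maximum(mag[:, 1:] - mag[:, :-1], 0.0).sum(axis=0)
    # Dynamic threshold: local median over `win` frames, scaled by k.
    pad = np.pad(flux, (win // 2, win // 2), mode="edge")
    thresh = k * np.array([np.median(pad[i:i + win]) for i in range(flux.size)])
    hits = np.where(flux > thresh)[0] + 1          # frame indices above threshold
    return t[hits]                                 # candidate transient times (s)
```

An iterative scheme, as in the paper, would refine these candidates across passes, keeping only those that pass an energy criterion while suppressing localised spectral noise.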