981 results for High-fidelity


Relevance:

60.00%

Publisher:

Abstract:

Inflatable aerodynamic decelerators have potential advantages for planetary re-entry in robotic and human exploration missions. Their volume-to-mass characteristics are theorized to be superior to those of common supersonic/subsonic parachutes, and after deployment they may suffer no instabilities at high Mach numbers. A high-fidelity computational fluid-structure interaction model is employed to investigate the behavior of tension cone inflatable aeroshells at supersonic speeds up to Mach 2.0. The computational framework targets the large-displacement regime encountered during inflation of the decelerator, using fast level set techniques to incorporate the boundary conditions of the moving structure. The preliminary results indicate large but steady aeroshell displacements with rich dynamics, including buckling of the inflatable torus that holds the decelerator open under normal operating conditions, owing to interactions with the turbulent wake. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.
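As a rough illustration of the level-set idea mentioned above (a minimal sketch only, with a hypothetical inflating-ring geometry; not the authors' coupled fluid-structure solver), the moving structure can be represented implicitly as the zero contour of a signed-distance field, so that the fluid cells receiving wall boundary conditions are simply re-identified at each time step:

```python
# Minimal sketch (illustrative only, hypothetical inflating-ring geometry;
# not the authors' coupled fluid-structure solver): the moving structure is
# represented implicitly as the zero level set of a signed-distance field,
# and the fluid cells that receive wall boundary conditions are re-identified
# at every time step as the structure inflates.
import numpy as np

nx = ny = 128
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
h = np.hypot(x[1] - x[0], y[1] - y[0])          # grid spacing (diagonal)

def phi_inflating_ring(t, r0=0.4, growth=2.0, half_thickness=0.05):
    """Signed distance to a ring whose radius grows in time.
    phi < 0 inside the structure, phi > 0 in the fluid."""
    r = r0 + growth * t                          # hypothetical inflation law
    return np.abs(np.hypot(X, Y) - r) - half_thickness

for step in range(5):
    t = 0.01 * step
    phi = phi_inflating_ring(t)
    interface = np.abs(phi) < h                  # cells cut by the zero level set
    print(f"t = {t:.2f}: {int(interface.sum())} cells receive wall conditions")
```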

Relevance:

60.00%

Publisher:

Abstract:

Accurate simulation of rolling-tyre vibrations, and the associated noise, requires knowledge of road-surface topology. Full scans of the surface types in common use are, however, not widely available, and are likely to remain so. Ways of producing simulated surfaces from incomplete starting information are thus needed. In this paper, a simulation methodology based solely on line measurements is developed, and validated against a full two-dimensional height map of a real asphalt surface. First the tribological characteristics of the real surface (asperity height, curvature and nearest-neighbour distributions) are analysed. It is then shown that a standard simulation technique, which matches the (isotropic) spectrum and the probability distribution of the height measurements, is unable to reproduce these characteristics satisfactorily. A modification, whereby the inherent granularity of the surface is enforced at the initialisation stage, is introduced, and found to produce simulations whose tribological characteristics are in excellent agreement with the measurements. This method will thus make high-fidelity tyre-vibration calculations feasible for researchers with access to line-scan data only. In addition, the approach to surface tribological characterisation set out here provides a template for efficient cataloguing of road textures, as long as the resulting information can subsequently be used to produce sample realisations. A third simulation algorithm, which successfully addresses this requirement, is therefore also presented. © 2011 Elsevier B.V.
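For context, the "standard simulation technique" referred to above (matching the isotropic spectrum and the height probability distribution) can be sketched in one-dimensional form as below; this is a generic iterative illustration with synthetic input, not the paper's modified, granularity-enforcing algorithm:

```python
# Generic sketch of the baseline technique described above: iteratively impose
# the measured amplitude spectrum and the measured height distribution on a
# 1-D profile (IAAFT-style).  The input below is synthetic, and the paper's
# granularity-enforcing initialisation is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

def simulate_profile(measured, n_iter=50):
    """Synthetic line profile matching the spectrum and height PDF of `measured`."""
    target_amp = np.abs(np.fft.rfft(measured))       # target amplitude spectrum
    target_sorted = np.sort(measured)                # target height distribution
    z = rng.permutation(measured)                    # random starting arrangement
    for _ in range(n_iter):
        # impose the target spectrum while keeping the current phases
        Z = np.fft.rfft(z)
        z = np.fft.irfft(target_amp * np.exp(1j * np.angle(Z)), n=len(measured))
        # impose the target height distribution by rank mapping
        ranks = np.argsort(np.argsort(z))
        z = target_sorted[ranks]
    return z

measured = np.cumsum(rng.standard_normal(1024)) * 1e-3   # stand-in for a real scan
synthetic = simulate_profile(measured)
```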

Relevance:

60.00%

Publisher:

Abstract:

The introduction of new materials and processes to microfabrication has, in large part, enabled many important advances in microsystems, lab-on-a-chip devices, and their applications. In particular, capabilities for cost-effective fabrication of polymer microstructures were transformed by the advent of soft lithography and other micromolding techniques [1,2], and this led to a revolution in applications of microfabrication to biomedical engineering and biology. Nevertheless, it remains challenging to fabricate microstructures with well-defined nanoscale surface textures, and to fabricate arbitrary 3D shapes at the micro-scale. Robustness of master molds and maintenance of shape integrity are especially important for achieving high-fidelity replication of complex structures and preserving their nanoscale surface texture. The combination of hierarchical textures and heterogeneous shapes is a profound challenge to existing microfabrication methods, which largely rely upon top-down etching using fixed mask templates. On the other hand, the bottom-up synthesis of nanostructures such as nanotubes and nanowires can offer new capabilities to microfabrication, in particular by taking advantage of the collective self-organization of nanostructures and local control of their growth behavior with respect to microfabricated patterns. Our goal is to introduce vertically aligned carbon nanotubes (CNTs), which we refer to as CNT "forests", as a new microfabrication material. We present details of a suite of related methods recently developed by our group: fabrication of CNT forest microstructures by thermal CVD from lithographically patterned catalyst thin films; self-directed elastocapillary densification of CNT microstructures; and replica molding of polymer microstructures using CNT composite master molds. In particular, our work shows that self-directed capillary densification ("capillary forming"), which is performed by condensation of a solvent onto the substrate with CNT microstructures, significantly increases the packing density of CNTs. This process enables directed transformation of vertical CNT microstructures into straight, inclined, and twisted shapes, which have robust mechanical properties exceeding those of typical microfabrication polymers. This in turn enables formation of nanocomposite CNT master molds by capillary-driven infiltration of polymers. The replica structures exhibit the anisotropic nanoscale texture of the aligned CNTs, and can have walls with sub-micron thickness and aspect ratios exceeding 50:1. Integration of CNT microstructures in fabrication offers further opportunity to exploit the electrical and thermal properties of CNTs, and diverse capabilities for chemical and biochemical functionalization [3]. © 2012 Journal of Visualized Experiments.

Relevance:

60.00%

Publisher:

Abstract:

IGBTs enable high-performance power converters. Unfortunately, with fast switching of the IGBT/free-wheel-diode chopper cell, such circuits are intrinsic sources of high-level EMI. Therefore, costly EMI filters or shielding are normally needed on the load and supply sides. In order to design these EMI suppression components, designers need to predict the EMI level with reasonable accuracy for a given structure and operating mode. Simplifying the transient IGBT switching current and voltage into a multiple-slope switching waveform approximation offers a feasible way to estimate conducted EMI with some accuracy. This method depends on the availability of high-fidelity measurements. Moreover, the multiple-slope approximation requires a careful and time-consuming optimisation of IGBT parameters to approach the real switching waveform. In this paper, an Active Voltage Control gate drive (AVC GD) is employed to shape the IGBT switching into several defined slopes. As a result, conducted-EMI prediction by multiple-slope switching approximation becomes more accurate, less costly, and easier to implement. © 2013 IEEE.
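A minimal, hypothetical illustration of the multiple-slope approach (break-point times and voltage levels are invented for the example, and this is not the paper's EMI model): the switching waveform is approximated by a piecewise-linear curve and its spectrum, which drives the conducted-EMI estimate, is obtained directly:

```python
# Illustrative sketch only (break-point times and voltage levels are invented,
# and this is not the paper's EMI model): approximate one IGBT switching period
# by a piecewise-linear, multiple-slope waveform and take its spectrum, which
# is the quantity a conducted-EMI estimate is built from.
import numpy as np

fs = 1e9                                      # 1 ns resolution
t = np.arange(0.0, 20e-6, 1.0 / fs)           # one 20 us switching period

# (time, voltage) break points of the multiple-slope approximation
bp_t = [0.0, 9.95e-6, 10.00e-6, 10.05e-6, 19.95e-6, 20e-6]
bp_v = [600.0, 600.0, 300.0, 0.0, 0.0, 600.0]   # two-slope turn-on, one-slope turn-off
v = np.interp(t, bp_t, bp_v)

V = np.abs(np.fft.rfft(v)) / len(v)           # spectral amplitudes of the waveform
f = np.fft.rfftfreq(len(v), 1.0 / fs)
k = np.argmin(np.abs(f - 1e6))
print(f"amplitude near 1 MHz: {V[k]:.3f} V")  # harmonics like this feed the EMI estimate
```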

Relevance:

60.00%

Publisher:

Abstract:

Access to robust and information-rich human cardiac tissue models would accelerate drug-based strategies for treating heart disease. Despite significant effort, the generation of high-fidelity adult-like human cardiac tissue analogs remains challenging. We used computational modeling of tissue contraction and assembly mechanics in conjunction with microfabricated constraints to guide the design of aligned and functional 3D human pluripotent stem cell (hPSC)-derived cardiac microtissues that we term cardiac microwires (CMWs). Miniaturization of the platform circumvented the need for tissue vascularization and enabled higher-throughput image-based analysis of CMW drug responsiveness. CMW tissue properties could be tuned using electromechanical stimuli and cell composition. Specifically, controlling self-assembly of 3D tissues in aligned collagen, and pacing with point stimulation electrodes, were found to promote cardiac maturation-associated gene expression and in vivo-like electrical signal propagation. Furthermore, screening a range of hPSC-derived cardiac cell ratios identified that 75% NK2 Homeobox 5 (NKX2-5)+ cardiomyocytes and 25% Cluster of Differentiation 90 (CD90)+ nonmyocytes optimized tissue remodeling dynamics and yielded enhanced structural and functional properties. Finally, we demonstrate the utility of the optimized platform in a tachycardic model of arrhythmogenesis, an aspect of cardiac electrophysiology not previously recapitulated in 3D in vitro hPSC-derived cardiac microtissue models. The design criteria identified with our CMW platform should accelerate the development of predictive in vitro assays of human heart tissue function.

Relevance:

60.00%

Publisher:

Abstract:

Screech is a high-frequency oscillation that is usually characterized by instabilities caused by large-scale coherent flow structures in the wake of bluff-body flameholders and shear layers. Such oscillations can lead to changes in flame surface area, which can cause the flame to burn unsteadily and also couple with the acoustic modes and inherent fluid-mechanical instabilities that are present in the system. In this study, the flame response to hydrodynamic oscillations is analyzed in a controlled manner using high-fidelity Computational Fluid Dynamics (CFD) with an unsteady Reynolds-averaged Navier-Stokes approach. The response of a premixed flame with and without transverse velocity forcing is analyzed. When unforced, the flame is shown to exhibit a self-excitation that is attributed to the anti-symmetric shedding of vortices in the wake of the flameholder. The flame is also forced using two different kinds of low-amplitude out-of-phase inlet velocity forcing signals. The first forcing method is harmonic forcing with a single characteristic frequency, while the second involves a broadband forcing signal with frequencies in the range of 500 to 1000 Hz. For the harmonic forcing method, the flame is perturbed only lightly about its mean position and exhibits a limit cycle oscillation that is characteristic of the forcing frequency. For the broadband forcing method, larger changes in the flame surface area and detachment of the flame sheet can be seen, and a transition to a complicated trajectory in phase space is observed. When analyzed systematically with system identification methods, the CFD results, expressed in the form of the Flame Transfer Function (FTF), are capable of elucidating the flame response to the imposed perturbation. The FTF also serves to identify, both spatially and temporally, regions where the flame responds linearly and nonlinearly. Lock-in between the flame's natural self-excited frequency and the subharmonic frequencies of the broadband forcing signal is found to alter the dynamical behaviour of the flame. Copyright © 2013 by ASME.
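As a sketch of the system-identification step described above (synthetic placeholder signals and a standard H1 cross-spectral estimator; not the paper's actual CFD post-processing), the FTF can be estimated from the forcing-velocity and heat-release time series:

```python
# Sketch of the system-identification step (synthetic placeholder signals and a
# standard H1 cross-spectral estimator; not the paper's actual CFD post-processing):
# estimate FTF(f) = (q'/q_mean) / (u'/u_mean) from inlet-velocity and heat-release
# time series.
import numpy as np
from scipy.signal import csd, welch

fs = 10e3                                        # sampling rate of the monitors, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)

u = 10.0 + 0.5 * np.sin(2*np.pi*750*t) + 0.1 * rng.standard_normal(t.size)          # velocity
q = 1.0 + 0.04 * np.sin(2*np.pi*750*t - 0.6) + 0.01 * rng.standard_normal(t.size)   # heat release

u_f, q_f = u - u.mean(), q - q.mean()
f, S_uq = csd(u_f, q_f, fs=fs, nperseg=2048)     # cross-spectrum
_, S_uu = welch(u_f, fs=fs, nperseg=2048)        # input auto-spectrum

ftf = (S_uq / S_uu) * (u.mean() / q.mean())      # normalised gain and phase
k = np.argmin(np.abs(f - 750))
print(f"FTF at 750 Hz: gain = {np.abs(ftf[k]):.2f}, phase = {np.angle(ftf[k]):.2f} rad")
```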

Relevance:

60.00%

Publisher:

Abstract:

The rainbow trout histone H3 (RH3) promoter was cloned via high-fidelity PCR. The cloned RH3 promoter was inserted into the promoterless vector pEGFP-1, resulting in the expression vector pRH3EGFP-1. The linearized pRH3EGFP-1 was microinjected into fertilized eggs of rare minnows, and the subsequent embryogenesis was monitored under a fluorescence microscope. Strong green fluorescence was observed ubiquitously as early as the gastrula stage and then in various tissues at the fry stage. The results indicate that the RH3 promoter, as a piscine promoter, could serve for producing transgenic cyprinids such as the rare minnow. The promoter activities of RH3, CMV and common carp beta-actin (CA) were compared in the rare minnow through the expression of the respective recombinant EGFP vectors. During embryogenesis of the transgenics, expression of pCMVEGFP appeared first, followed by pRH3EGFP-1 and then pCAEGFP. Their expression activities demonstrated that the CMV promoter is the strongest, followed by CA and then RH3.

Relevance:

60.00%

Publisher:

Abstract:

This paper applies a data-coding approach, based on the virtual information source modelling previously put forward by the author, to propose an image coding (compression) scheme based on a neural network and an SVM. The scheme embeds "the image coding (compression) scheme based on SVM" within "the lossless data compression scheme based on a neural network". Experiments show that the scheme achieves a high compression ratio under slightly lossy conditions, partly resolving the conflict whereby "high fidelity" and "high compression ratio" cannot be achieved simultaneously in an image coding system.
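A minimal sketch of the general SVM-based image coding idea (illustrative only; the paper's combined neural-network/SVM scheme and its virtual-information-source model are not reproduced, and the block data and hyperparameters below are arbitrary):

```python
# Minimal sketch of the general SVM-based image coding idea (illustrative only;
# the combined neural-network/SVM scheme of the paper is not reproduced, and the
# block data and hyperparameters below are arbitrary): fit an epsilon-SVR to pixel
# intensity as a function of position and keep only the support vectors.
import numpy as np
from sklearn.svm import SVR

block = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))   # smooth stand-in 8x8 block
xy = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
z = block.ravel()

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.1).fit(xy, z)
reconstructed = svr.predict(xy).reshape(8, 8)

# the block is now represented only by its support vectors, dual coefficients and bias
stored = svr.support_vectors_.shape[0] * 3 + 1
print(f"support vectors: {svr.support_vectors_.shape[0]}, "
      f"rough compression ratio: {block.size / stored:.2f}, "
      f"max abs error: {np.abs(reconstructed - block).max():.3f}")
```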

Relevance:

60.00%

Publisher:

Abstract:

In this article we present a mechanical pattern transfer process in which a thermosetting polymer mold, rather than a metal, dielectric, ceramic, or semiconductor master made by conventional lithography, was used as the master to pattern thermoplastic polymers in hot embossing lithography. The thermosetting polymer mold was fabricated by a soft lithography strategy, microtransfer molding. For comparison, the thermosetting polymer mold and a silicon wafer master were both used to imprint the thermoplastic polymer poly(methyl methacrylate). Replication from the thermosetting polymer mold and from the silicon wafer master was of the same quality. This indicates that the thermosetting polymer mold could be used for thermoplastic polymer patterning in hot embossing lithography with high fidelity.

Relevance:

60.00%

Publisher:

Abstract:

Based on fractal theory, contractive mapping principles, and the fixed point theorem, and by means of affine transforms, this dissertation develops a novel Explicit Fractal Interpolation Function (EFIF) that can be used to reconstruct seismic data with high fidelity and precision. Spatial trace interpolation is one of the important issues in seismic data processing. Under ideal circumstances, seismic data should be sampled with uniform spatial coverage. However, practical constraints such as complex surface conditions mean that the sampling density may be sparse, or that some traces may be lost for other reasons. Wide spacing between receivers can result in sparse sampling along traverse lines and thus in spatial aliasing of short-wavelength features. Hence, the interpolation method is of great importance: it needs to preserve not only the amplitude information but also the phase information, especially at points where the phase changes sharply. Several interpolation methods have been proposed; this dissertation focuses on a special class of fractal interpolation function, referred to as the explicit fractal interpolation function, to improve the accuracy of interpolation-based reconstruction and to bring out local information. The traditional fractal interpolation method is mainly based on the random fractional Brownian motion (fBm) model; furthermore, the vertical scaling factor, which plays a critical role in the implementation of fractal interpolation, is assigned the same value throughout the interpolation process, so local information cannot be resolved. In addition, the greatest defect of the traditional fractal interpolation method is that it cannot provide the function values at the interpolation nodes, so the node error cannot be analyzed quantitatively and the feasibility of the method cannot be evaluated. Detailed discussions of the applications of fractal interpolation in seismology have not been given by previous authors, let alone the interpolation of single-trace seismograms. On the basis of previous work and fractal theory, this dissertation discusses fractal interpolation thoroughly, examines the stability of this special kind of interpolating function, and proposes an explicit expression for the vertical scaling factor, which controls the precision of the interpolation. This novel method extends the traditional fractal interpolation method and converts fractal interpolation with random algorithms into interpolation with deterministic algorithms. A binary tree data structure is applied during the interpolation, which avoids the iteration that is inevitable in traditional fractal interpolation and improves computational efficiency. To illustrate the validity of the novel method, this dissertation develops several theoretical models, synthesizes common shot gathers and seismograms, and reconstructs the traces that were erased from the initial section using the explicit fractal interpolation method. In order to compare quantitatively the differences in waveform and amplitude between the theoretical traces that were erased from the initial section and the resulting traces after reconstruction, each missing trace is reconstructed and the residuals are analyzed.
The numerical experiments demonstrate that the novel fractal interpolation method is applicable to reconstructing seismograms not only at small offsets but also at large offsets. The seismograms reconstructed by the explicit fractal interpolation method resemble the original ones well: the waveforms of the missing traces are estimated accurately and the amplitudes of the interpolated traces are a good approximation of the originals. The high precision and computational efficiency of the explicit fractal interpolation make it a useful tool for reconstructing seismic data; it brings out local information while preserving the overall characteristics of the object investigated. To illustrate the influence of the explicit fractal interpolation method on the accuracy of imaging structures in the Earth's interior, this dissertation applies the method to reverse-time migration. The imaging sections obtained using the fractally interpolated reflection data resemble the original ones very well. The numerical experiments demonstrate that even with sparse sampling we can still obtain accurate imaging of the structure of the Earth's interior by means of the explicit fractal interpolation method, so imaging results of good quality can be obtained with a relatively small number of seismic stations. With the fractal interpolation method, the efficiency and accuracy of reverse-time migration can be improved under economic constraints. To verify the effectiveness of the method on real data, we tested it on data provided by the Broadband Seismic Array Laboratory, IGGCAS. The results demonstrate that the accuracy of explicit fractal interpolation remains high even for real data with large epicentral distances and large offsets; the amplitudes and phases of the reconstructed station data closely resemble the original ones that were erased from the initial section. Altogether, the novel fractal interpolation function provides a new and useful tool for reconstructing seismic data with high precision and efficiency, and presents an alternative means of accurately imaging the deep structure of the Earth.
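For orientation, a generic affine fractal interpolation function with a constant vertical scaling factor can be sketched as below; this is the conventional construction whose limitations the dissertation's explicit formulation addresses, not the EFIF itself, and the node values are invented:

```python
# Generic affine fractal interpolation function (FIF) evaluated by deterministic
# iteration of its IFS maps.  The constant vertical scaling factor d used here is
# precisely the limitation the dissertation's explicit formulation removes; node
# values are invented for the example.
import numpy as np

def fif(nodes_x, nodes_y, d=0.3, n_iter=6):
    """Points on the graph of the FIF passing through (nodes_x, nodes_y)."""
    x0, xN, y0, yN = nodes_x[0], nodes_x[-1], nodes_y[0], nodes_y[-1]
    maps = []
    for i in range(1, len(nodes_x)):
        a = (nodes_x[i] - nodes_x[i-1]) / (xN - x0)
        e = (xN*nodes_x[i-1] - x0*nodes_x[i]) / (xN - x0)
        c = (nodes_y[i] - nodes_y[i-1] - d*(yN - y0)) / (xN - x0)
        f = (xN*nodes_y[i-1] - x0*nodes_y[i] - d*(xN*y0 - x0*yN)) / (xN - x0)
        maps.append((a, e, c, f))
    pts = np.column_stack([nodes_x, nodes_y]).astype(float)
    for _ in range(n_iter):                       # the IFS attractor is the graph of the FIF
        pts = np.vstack([np.column_stack([a*pts[:, 0] + e,
                                          c*pts[:, 0] + d*pts[:, 1] + f])
                         for a, e, c, f in maps])
    return pts[np.argsort(pts[:, 0])]

# e.g. samples interpolated between recorded values of a (toy) seismic trace
nodes_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
nodes_y = np.array([0.0, 0.8, -0.4, 0.6, 0.1])
graph = fif(nodes_x, nodes_y)
```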

Relevance:

60.00%

Publisher:

Abstract:

With the development of oil and gas exploration in China, continental exploration has shifted from structural oil and gas reservoirs to subtle oil and gas reservoirs. The reserves of the subtle reservoirs discovered so far account for more than 60 percent of the discovered oil and gas reserves. Exploration of subtle reservoirs is therefore becoming more and more important and can be regarded as the main direction for increasing oil and gas reserves. The characteristics of continental sedimentary facies determine the complexity of lithological exploration. Most of the continental rift basins in East China have entered exploration stages of medium to high maturity. Although the quality of the seismic data is relatively good, these areas are characterized by thin sands, small faults, and strata of limited extent, which demands seismic data of high resolution; improving the signal-to-noise ratio of the high-frequency components of the seismic data is therefore an important task. In West China there are complex surface conditions, deeply buried exploration targets, complex geological structures, numerous faults, traps of small extent, poor rock properties, many high-pressure strata, and drilling difficulties. These translate into a low signal-to-noise ratio and complex types of noise in the seismic records, which calls for the development of noise attenuation methods and techniques in data acquisition and processing. Thus oil and gas exploration needs high-resolution geophysical techniques in order to implement the oil resources strategy of keeping oil production and reserves stable in East China and increasing crude production and reserves in West China. A high signal-to-noise ratio of the seismic data is the basis: high resolution and high fidelity are impossible without it. We place emphasis on research based on structural analysis for improving the signal-to-noise ratio in complex areas. Several noise attenuation methods are put forward that truly reflect the geological features: they preserve the geological structures, keep the edges of geological features, and improve the identification of oil and gas traps. The ideas of emphasizing fundamentals, highlighting innovation, and focusing on application run through this dissertation. The conventional dip-scanning method, centred on the scanned point, inevitably blurs the edges of geological features such as faults and fractures. To solve this problem, we develop a new dip-scanning method that scans from the end point toward both sides. Using this new dip-scanning method, we propose signal estimation with coherence, seismic-wave-characteristic coherence, and most-homogeneous dip-scanning for noise attenuation. These keep the geological character, suppress random noise, and improve the signal-to-noise ratio and resolution. Routine dip-scanning operates in the time-space domain; a new dip-scanning method in the frequency-wavenumber domain is put forward for noise attenuation. It exploits the ability to distinguish between reflection events of different dips in the f-k domain, so it can reduce noise and recover dip information. We also describe a methodology for studying and developing filtering methods based on differential equations.
It transforms the filtering equations from the frequency domain or the f-k domain into the time or time-space domain, and uses a finite-difference algorithm to solve these equations. The method does not require the seismic data to be stationary, so the filter parameters can vary at every temporal and spatial point; this enhances the adaptability of the filter, and the method is computationally efficient. We put forward a matching pursuit method for noise suppression. This method decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions; the waveforms are chosen to best match the signal structures, so the effective signal can be extracted from the noisy record and the noise reduced. We also introduce a beamforming filtering method for noise elimination; real seismic data processing shows that it is effective in attenuating multiples, including internal multiples, and the signal-to-noise ratio and resolution are improved while the effective signals retain high fidelity. Calculations on theoretical models and application to real seismic data processing prove that the methods in this dissertation can effectively suppress random noise, eliminate coherent noise, and improve the resolution of the seismic data; they are highly practical and their effect is evident.
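The matching pursuit step mentioned above can be sketched generically as follows (a complete orthonormal DCT dictionary and a synthetic trace are used for brevity; the dissertation's dictionary and stopping criterion are not specified here):

```python
# Generic matching pursuit sketch: greedily pick the atom best correlated with
# the residual, subtract it, and repeat, so the coherent signal is collected in
# the approximation while incoherent noise remains in the residual.  A complete
# orthonormal DCT basis is used for brevity (matching pursuit normally uses a
# redundant dictionary).
import numpy as np
from scipy.fft import idct

def mp_denoise(trace, n_atoms=20):
    n = len(trace)
    dictionary = idct(np.eye(n), norm="ortho", axis=0)   # unit-norm DCT atoms
    residual = trace.astype(float).copy()
    approx = np.zeros(n)
    for _ in range(n_atoms):
        corr = dictionary.T @ residual            # inner products with all atoms
        k = np.argmax(np.abs(corr))               # best-matching atom
        approx += corr[k] * dictionary[:, k]
        residual -= corr[k] * dictionary[:, k]
    return approx, residual

t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2*np.pi*25*t) * np.exp(-4*t)       # stand-in "effective signal"
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(t.size)
signal_est, noise_est = mp_denoise(noisy)
```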

Relevance:

60.00%

Publisher:

Abstract:

Eight experiments tested how object-array structure and learning location influenced the establishment and use of self-to-object and object-to-object spatial representations in locomotion and reorientation. In Experiments 1 to 4, participants learned either at the periphery of, or amidst, a regular or irregular object array, and then pointed to objects while blindfolded in three conditions: before turning (baseline), after rotating 240 degrees (updating), and after disorientation (disorientation). In Experiments 5 to 8, participants were instructed to keep track of self-to-object or object-to-object spatial representations before rotation. In each condition, the configuration error, defined as the standard deviation across target objects of the mean signed pointing error per object, was calculated as an index of the fidelity of the representation used in that condition. Results indicate that participants form both self-to-object and object-to-object spatial representations after learning an object array. Object-array structure influences the selection of representation during updating. By default, the object-to-object spatial representation is updated when people learn a regular object-array structure, and the self-to-object spatial representation is updated when people learn an irregular object array. But people can also update the other representation when required to do so. The fidelity of the representations constrains this kind of “switch”: people can only “switch” from a low-fidelity representation to a high-fidelity representation, or between two representations of similar fidelity; they cannot “switch” from a high-fidelity representation to a low-fidelity representation. Learning location may influence the fidelity of the representations. When people learned at the periphery of the object array, they acquired both self-to-object and object-to-object spatial representations of high fidelity; but when people learned amidst the object array, they acquired only a self-to-object spatial representation of high fidelity, and the fidelity of the object-to-object spatial representation was low.
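Based on the definition given in the abstract, the configuration error can be computed as sketched below (the pointing data are hypothetical):

```python
# Configuration error as defined in the abstract (hypothetical pointing data):
# the standard deviation, across target objects, of each object's mean signed
# pointing error.
import numpy as np

# signed pointing errors in degrees: rows = pointing trials, columns = target objects
errors = np.array([
    [ 5.0, -12.0,  8.0,  3.0, -6.0],
    [ 7.0, -10.0,  6.0,  1.0, -9.0],
    [ 4.0, -14.0,  9.0,  2.0, -7.0],
])

mean_per_object = errors.mean(axis=0)              # mean signed error per target
configuration_error = mean_per_object.std(ddof=1)  # spread across targets
print(f"configuration error = {configuration_error:.1f} deg")
```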

Relevance:

60.00%

Publisher:

Abstract:

A fundamental understanding of the information carrying capacity of optical channels requires the signal and physical channel to be modeled quantum mechanically. This thesis considers the problems of distributing multi-party quantum entanglement to distant users in a quantum communication system and determining the ability of quantum optical channels to reliably transmit information. A recent proposal for a quantum communication architecture that realizes long-distance, high-fidelity qubit teleportation is reviewed. Previous work on this communication architecture is extended in two primary ways. First, models are developed for assessing the effects of amplitude, phase, and frequency errors in the entanglement source of polarization-entangled photons, as well as fiber loss and imperfect polarization restoration, on the throughput and fidelity of the system. Second, an error model is derived for an extension of this communication architecture that allows for the production and storage of three-party entangled Greenberger-Horne-Zeilinger states. A performance analysis of the quantum communication architecture in qubit teleportation and quantum secret sharing communication protocols is presented. Recent work on determining the channel capacity of optical channels is extended in several ways. Classical capacity is derived for a class of Gaussian Bosonic channels representing the quantum version of classical colored Gaussian-noise channels. The proof is strongly motivated by the standard technique of whitening Gaussian noise used in classical information theory. Minimum output entropy problems related to these channel capacity derivations are also studied. These single-user Bosonic capacity results are extended to a multi-user scenario by deriving capacity regions for single-mode and wideband coherent-state multiple access channels. An even larger capacity region is obtained when the transmitters use nonclassical Gaussian states, and an outer bound on the ultimate capacity region is presented.
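For orientation, a standard single-user result from this literature (stated here only for context, not quoted from the thesis) is the classical capacity of the single-mode lossy bosonic channel with transmissivity η and mean transmitted photon number N̄:

```latex
C = g(\eta \bar{N}), \qquad g(x) = (x+1)\log_2(x+1) - x\log_2 x ,
```

where g(x) is the von Neumann entropy of a thermal state with mean photon number x; the colored-noise and multiple-access capacity results summarized above generalize expressions of this form.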

Relevance:

60.00%

Publisher:

Abstract:

Wireless sensor networks are characterized by limited energy resources. To conserve energy, application-specific aggregation (fusion) of data reports from multiple sensors can be beneficial in reducing the amount of data flowing over the network. Furthermore, controlling the topology by scheduling the activity of nodes between active and sleep modes has often been used to uniformly distribute the energy consumption among all nodes by de-synchronizing their activities. We present an integrated analytical model to study the joint performance of in-network aggregation and topology control. We define performance metrics that capture the tradeoffs among delay, energy, and fidelity of the aggregation. Our results indicate that to achieve high fidelity levels under medium to high event reporting load, shorter and fatter aggregation/routing trees (toward the sink) offer the best delay-energy tradeoff as long as topology control is well coordinated with routing.
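A toy calculation (not the paper's analytical model, which also accounts for fidelity and event-reporting load) illustrates why tree shape matters: with perfect aggregation each node transmits once per event, so energy is roughly independent of tree shape, while delay grows with tree depth.

```python
# Toy calculation (not the paper's analytical model, which also accounts for
# fidelity and event-reporting load): with perfect in-network aggregation each
# node forwards one combined report per event, so transmission energy is roughly
# independent of tree shape, while worst-case aggregation delay grows with tree
# depth.  Shorter, fatter trees toward the sink therefore help the tradeoff.
def tree_cost(n_nodes, fanout, per_hop_delay=1.0, tx_energy=1.0):
    """Depth, delay and energy for a complete aggregation tree of given fanout."""
    depth, count = 0, 1
    while count < n_nodes:
        depth += 1
        count += fanout ** depth
    delay = depth * per_hop_delay        # reports must climb the whole tree
    energy = n_nodes * tx_energy         # one aggregated transmission per node
    return depth, delay, energy

for fanout in (2, 4, 8):
    print(f"fanout {fanout}: depth/delay/energy = {tree_cost(100, fanout)}")
```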

Relevance:

60.00%

Publisher:

Abstract:

In this thesis I theoretically study quantum states of ultracold atoms. The majority of the chapters focus on engineering specific quantum states of single atoms with high fidelity in experimentally realistic systems; in Chapter six, I investigate the stability and dynamics of new multidimensional solitonic states that can be created in inhomogeneous atomic Bose-Einstein condensates. In Chapter three I present two papers in which I demonstrate how the coherent tunnelling by adiabatic passage (CTAP) process can be implemented in an experimentally realistic atom chip system to coherently transfer the centre of mass of a single atom between two spatially distinct magnetic waveguides. In these works I also utilise GPU (Graphics Processing Unit) computing, which offers a significant performance increase in the numerical simulation of the Schrödinger equation. In Chapter four I investigate the CTAP process for a linear arrangement of radio frequency traps, where the centre of mass of both single atoms and clouds of interacting atoms can be coherently controlled. In Chapter five I present a theoretical study of adiabatic radio frequency potentials, in which I use Floquet theory to model more accurately situations where frequencies are close and/or field amplitudes are large. I also show how one can create highly versatile 2D adiabatic radio frequency potentials using multiple radio frequency fields with arbitrary field orientation, and demonstrate their utility by simulating the creation of ring vortex solitons. In Chapter six I discuss the stability and dynamics of a family of multidimensional solitonic states created in harmonically confined Bose-Einstein condensates. I demonstrate that these solitonic states have interesting dynamical instabilities, in which a continuous collapse and revival of the initial state occurs. Through Bogoliubov analysis, I determine the modes responsible for the observed instabilities of each solitonic state and also extract information related to the time at which instability can be observed.
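A minimal three-mode sketch of the CTAP process discussed in Chapters three and four (ħ = 1, Gaussian tunnelling pulses with invented parameters; not the thesis' atom-chip or GPU simulations) shows the counterintuitive pulse ordering transferring population from trap 1 to trap 3:

```python
# Minimal three-mode CTAP sketch (hbar = 1, Gaussian tunnelling pulses with
# invented parameters; not the thesis' atom-chip or GPU simulations): the
# counterintuitive pulse ordering transfers the atom from trap 1 to trap 3
# while the middle trap stays almost unpopulated.
import numpy as np
from scipy.integrate import solve_ivp

T, sigma, omega0 = 100.0, 15.0, 1.0              # total time, pulse width, peak coupling

def couplings(t):
    # counterintuitive order: the 2-3 coupling peaks *before* the 1-2 coupling
    o12 = omega0 * np.exp(-((t - 0.6 * T) / sigma) ** 2)
    o23 = omega0 * np.exp(-((t - 0.4 * T) / sigma) ** 2)
    return o12, o23

def schrodinger(t, c):
    o12, o23 = couplings(t)
    H = np.array([[0.0, o12, 0.0], [o12, 0.0, o23], [0.0, o23, 0.0]])
    psi = c[:3] + 1j * c[3:]
    dpsi = -1j * H @ psi
    return np.concatenate([dpsi.real, dpsi.imag])

c0 = np.array([1.0, 0, 0, 0, 0, 0])              # atom starts in trap 1
sol = solve_ivp(schrodinger, (0.0, T), c0, max_step=0.1)
psi_final = sol.y[:3, -1] + 1j * sol.y[3:, -1]
print("final trap populations:", np.round(np.abs(psi_final) ** 2, 3))
```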