957 results for "High fidelity"
Abstract:
Access to robust and information-rich human cardiac tissue models would accelerate drug-based strategies for treating heart disease. Despite significant effort, the generation of high-fidelity adult-like human cardiac tissue analogs remains challenging. We used computational modeling of tissue contraction and assembly mechanics in conjunction with microfabricated constraints to guide the design of aligned and functional 3D human pluripotent stem cell (hPSC)-derived cardiac microtissues that we term cardiac microwires (CMWs). Miniaturization of the platform circumvented the need for tissue vascularization and enabled higher-throughput image-based analysis of CMW drug responsiveness. CMW tissue properties could be tuned using electromechanical stimuli and cell composition. Specifically, controlling self-assembly of 3D tissues in aligned collagen, and pacing with point stimulation electrodes, were found to promote cardiac maturation-associated gene expression and in vivo-like electrical signal propagation. Furthermore, screening a range of hPSC-derived cardiac cell ratios identified that 75% NK2 homeobox 5 (NKX2-5)+ cardiomyocytes and 25% cluster of differentiation 90 (CD90)+ nonmyocytes optimized tissue remodeling dynamics and yielded enhanced structural and functional properties. Finally, we demonstrate the utility of the optimized platform in a tachycardic model of arrhythmogenesis, an aspect of cardiac electrophysiology not previously recapitulated in 3D in vitro hPSC-derived cardiac microtissue models. The design criteria identified with our CMW platform should accelerate the development of predictive in vitro assays of human heart tissue function.
Abstract:
Screech is a high-frequency oscillation that is usually characterized by instabilities caused by large-scale coherent flow structures in the wake of bluff-body flameholders and shear layers. Such oscillations can lead to changes in flame surface area that cause the flame to burn unsteadily, and can also couple with the acoustic modes and inherent fluid-mechanical instabilities present in the system. In this study, the flame response to hydrodynamic oscillations is analyzed in a controlled manner using high-fidelity Computational Fluid Dynamics (CFD) with an unsteady Reynolds-averaged Navier-Stokes approach. The response of a premixed flame with and without transverse velocity forcing is analyzed. When unforced, the flame is shown to exhibit a self-excitation that is attributed to the anti-symmetric shedding of vortices in the wake of the flameholder. The flame is also forced using two different kinds of low-amplitude out-of-phase inlet velocity forcing signals. The first forcing method is harmonic forcing with a single characteristic frequency, while the second involves a broadband forcing signal with frequencies in the range of 500-1000 Hz. For the harmonic forcing method, the flame is perturbed only lightly about its mean position and exhibits a limit-cycle oscillation characteristic of the forcing frequency. For the broadband forcing method, larger changes in the flame surface area and detachment of the flame sheet can be seen, and a transition to a complicated trajectory in phase space is observed. When analyzed systematically with system identification methods, the CFD results, expressed in the form of the Flame Transfer Function (FTF), are capable of elucidating the flame response to the imposed perturbation. The FTF also serves to identify, both spatially and temporally, regions where the flame responds linearly and nonlinearly. Lock-in between the flame's natural self-excited frequency and the subharmonic frequencies of the broadband forcing signal is found to alter the dynamical behaviour of the flame.
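The abstract does not reproduce the definition it uses for the FTF; for orientation, the conventional frequency-domain form, with Q' the heat-release-rate fluctuation and u' the velocity perturbation at a reference location (symbols assumed here, not taken from the paper), is:

```latex
% Conventional flame transfer function: normalized heat-release response
% to a normalized velocity perturbation at a reference location.
\[
  \mathrm{FTF}(\omega)
  = \frac{\hat{Q}'(\omega)/\bar{Q}}{\hat{u}'(\omega)/\bar{u}}
\]
```

A gain above unity flags frequency bands where the flame amplifies the imposed perturbation, which is where nonlinear response and lock-in of the kind described above are typically sought.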
Abstract:
The rainbow trout histone H3 (RH3) promoter was cloned via high-fidelity PCR. The cloned RH3 promoter was inserted into the promoterless vector pEGFP-1, resulting in the expression vector pRH3EGFP-1. The linearized pRH3EGFP-1 was microinjected into fertilized eggs of rare minnows, and the ensuing embryogenetic processes were monitored under a fluorescence microscope. Strong green fluorescence was observed ubiquitously as early as the gastrula stage and then in various tissues at the fry stage. The results indicate that the RH3 promoter, as a piscine promoter, could serve in producing transgenic cyprinids such as the rare minnow. The promoter activities of RH3, CMV, and common carp beta-actin (CA) were compared in rare minnow through the expression of the respective recombinant EGFP vectors. Expression of pCMVEGFP occurred earliest during embryogenesis of the transgenics, followed by pRH3EGFP-1 and then pCAEGFP. Their expression activities demonstrated that the CMV promoter is the strongest, followed by CA and then RH3.
Abstract:
This paper applies a data-coding idea, based on the virtual information source modeling previously put forward by the author, to propose an image coding (compression) scheme based on neural networks and SVMs. The scheme embeds "the lossless data compression scheme based on neural network" inside "the image coding (compression) scheme based on SVM". Experiments show that the scheme achieves a high compression ratio under slightly lossy conditions, partly resolving the contradiction that 'high fidelity' and 'high compression ratio' cannot be unified in an image coding system.
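The abstract gives no implementation details; as a rough, hypothetical illustration of the SVM-regression idea behind lossy compression (the epsilon-insensitive tube discards samples, so only the support vectors need to be stored), here is a minimal scikit-learn sketch on a 1-D stand-in signal:

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D "scanline" signal standing in for image data.
x = np.linspace(0, 1, 256).reshape(-1, 1)
y = np.sin(6 * np.pi * x).ravel() + 0.02 * np.random.randn(256)

# epsilon controls the fidelity/compression trade-off: a wider
# tube keeps fewer support vectors (more compression, more loss).
svr = SVR(kernel="rbf", C=100.0, epsilon=0.05, gamma=30.0)
svr.fit(x, y)

ratio = len(x) / len(svr.support_)        # samples kept vs. original
mse = np.mean((svr.predict(x) - y) ** 2)  # reconstruction error
print(f"compression ratio ~{ratio:.1f}:1, MSE {mse:.4f}")
```

Widening epsilon trades fidelity for a higher compression ratio, mirroring the 'high fidelity' versus 'high compression ratio' tension the abstract describes.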
Abstract:
In this article we present a mechanical pattern transfer process in which a thermosetting polymer mold, rather than a metal, dielectric, ceramic, or semiconductor master made by conventional lithography, was used as the master to pattern thermoplastic polymers in hot embossing lithography. The thermosetting polymer mold was fabricated by a soft lithography strategy, microtransfer molding. For comparison, the thermosetting polymer mold and a silicon wafer master were both used to imprint the thermoplastic polymer polymethylmethacrylate. Replication from the thermosetting polymer mold and from the silicon wafer master was of the same quality. This indicates that the thermosetting polymer mold can be used for thermoplastic polymer patterning in hot embossing lithography with high fidelity.
Abstract:
Based on fractal theory, contractive mapping principles, and fixed-point theory, and by means of affine transforms, this dissertation develops a novel Explicit Fractal Interpolation Function (EFIF) that can be used to reconstruct seismic data with high fidelity and precision. Spatial trace interpolation is one of the important issues in seismic data processing. Under ideal circumstances, seismic data should be sampled with uniform spatial coverage. However, practical constraints such as complex surface conditions mean that the sampling density may be sparse, or that some traces may be lost for other reasons. Wide spacing between receivers can result in sparse sampling along traverse lines and thus in spatial aliasing of short-wavelength features. Hence, the interpolation method is of great importance: it must preserve not only the amplitude information but also the phase information, especially at points where the phase changes abruptly. Many interpolation methods have been put forward; this dissertation focuses on a special class of fractal interpolation function, referred to as the explicit fractal interpolation function, to improve the accuracy of interpolation-based reconstruction and to bring out local detail. The traditional fractal interpolation method is mainly based on the random fractional Brownian motion (FBM) model; furthermore, the vertical scaling factor, which plays a critical role in the implementation of fractal interpolation, is assigned the same value throughout the interpolation process, so local detail is not brought out. In addition, the greatest defect of the traditional fractal interpolation method is that it cannot produce function values at the interpolating nodes, so node errors cannot be analyzed quantitatively and the feasibility of the method cannot be evaluated. Detailed discussions of the applications of fractal interpolation in seismology have not been given by previous workers, let alone interpolation of single-trace seismograms. On the basis of previous work and fractal theory, this dissertation discusses fractal interpolation thoroughly, examines the stability of this special kind of interpolating function, and proposes an explicit expression for the vertical scaling factor, which controls the precision of the interpolation. This novel method extends the traditional fractal interpolation method and converts fractal interpolation with random algorithms into interpolation with deterministic algorithms. A binary-tree data structure is applied during the interpolation, which avoids the iteration that is inevitable in traditional fractal interpolation and improves computational efficiency. To illustrate the validity of the novel method, this dissertation develops several theoretical models, synthesizes common-shot gathers and seismograms, and reconstructs traces that were erased from the initial section using the explicit fractal interpolation method. To quantitatively compare the waveforms and amplitudes of the theoretical traces erased from the initial section with the traces reconstructed in their place, each missing trace is reconstructed and the residuals are analyzed.
The numerical experiments demonstrate that the novel fractal interpolation method is applicable not only to seismograms with small offset but also to those with large offset. The seismograms reconstructed by the explicit fractal interpolation method resemble the original ones well: the waveforms of the missing traces are estimated very well, and the amplitudes of the interpolated traces are a good approximation of the originals. The high precision and computational efficiency of explicit fractal interpolation make it a useful tool for reconstructing seismic data; it not only brings out local detail but also preserves the overall characteristics of the object investigated. To illustrate the influence of the explicit fractal interpolation method on the accuracy of imaging structure in the earth's interior, this dissertation applies the method to reverse-time migration. The imaging sections obtained using the fractal-interpolated reflection data resemble the original ones very well. The numerical experiments demonstrate that, even with sparse sampling, highly accurate imaging of the structure of the earth's interior can still be obtained by means of explicit fractal interpolation, so imaging results of fine quality can be obtained with a relatively small number of seismic stations. With the fractal interpolation method, the efficiency and accuracy of reverse-time migration can be improved under economical conditions. To verify the method's effect on real data, we tested it using data provided by the Broadband Seismic Array Laboratory, IGGCAS. The results demonstrate that the accuracy of explicit fractal interpolation remains very high even for real data with large epicentral distance and large offset; the amplitudes and phases of the reconstructed station data closely resemble the original ones erased from the initial section. Altogether, the novel fractal interpolation function provides a new and useful tool for reconstructing seismic data with high precision and efficiency, and presents an alternative for accurately imaging the deep structure of the earth.
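The dissertation's explicit formulation is not reproduced in the abstract; as a minimal sketch of the classical affine fractal-interpolation construction it builds on (a Barnsley-style iterated function system, with d[i] playing the role of the vertical scaling factor discussed above), a deterministic Python implementation might look like this:

```python
import numpy as np

def fractal_interpolate(x, y, d, n_iter=6):
    """Deterministic fractal interpolation through points (x[i], y[i]).
    d[i] in (-1, 1) is the vertical scaling factor of the i-th affine map;
    |d[i]| controls the roughness of the interpolant on that interval."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    pts = np.column_stack([x, y])
    for _ in range(n_iter):
        new = []
        for i in range(1, len(x)):
            # Affine map w_i sending the whole graph onto interval i.
            a = (x[i] - x[i-1]) / (xN - x0)
            e = (xN * x[i-1] - x0 * x[i]) / (xN - x0)
            c = (y[i] - y[i-1] - d[i-1] * (yN - y0)) / (xN - x0)
            f = (xN * y[i-1] - x0 * y[i] - d[i-1] * (xN * y0 - x0 * yN)) / (xN - x0)
            new.append(np.column_stack([a * pts[:, 0] + e,
                                        c * pts[:, 0] + d[i-1] * pts[:, 1] + f]))
        pts = np.vstack(new)
        pts = pts[np.argsort(pts[:, 0])]
    return pts

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.8, 0.2, 0.6])
curve = fractal_interpolate(x, y, d=[0.3, 0.3, 0.3])
```

Letting d[i] vary per interval, rather than fixing one value for the whole process, is precisely the freedom the abstract says the traditional method lacks.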
Abstract:
With the development of oil and gas exploration, continental exploration in China is shifting from structural oil and gas reservoirs toward subtle oil and gas reservoirs. Reserves found in subtle reservoirs account for more than 60 percent of the discovered oil and gas reserves, so exploration for subtle reservoirs is becoming ever more important and can be taken as the main route to reserve growth. The characteristics of continental sedimentary facies determine the complexities of lithological exploration. Most of the continental rift basins in East China have entered exploration stages of medium to high maturity. Although the quality of the seismic data is relatively good, these areas are characterized by thin sand bodies, small faults, and strata of limited extent, which demands seismic data of high resolution; improving the signal-to-noise ratio of the high-frequency components of seismic data is therefore an important task. In West China there are complex landforms, deeply buried prospecting targets, complex geological structures, many faults, traps of small extent, poor rock properties, many high-pressure strata, and difficulties in well drilling. These conditions produce seismic records with low signal-to-noise ratio and complex noise, which calls for methods and techniques of noise attenuation in data acquisition and processing. Oil and gas exploration therefore needs high-resolution geophysical techniques in order to implement the oil-resources strategy of keeping oil production and reserves stable in East China while developing crude production and reserves in West China. A high signal-to-noise ratio of seismic data is the foundation: high resolution and high fidelity are impossible without it. We place emphasis on research based on structure analysis for improving the signal-to-noise ratio in complex areas, and several noise-attenuation methods are put forward that truly reflect geological features: they preserve geological structures, keep the edges of geological bodies, and improve the identification of oil and gas traps. The ideas of emphasizing fundamentals, giving prominence to innovation, and paying attention to application run through this thesis. Dip scanning centered on the scanned point inevitably blurs the edges of geological features, such as faults and fractures. To solve this problem we develop a new dip-scanning method that places the scanned point at the end of the operator and scans to the two sides. Using this new dip-scanning method, we bring forward methods of signal estimation with coherence, seismic-wave-characteristic coherence, and most-homogeneous dip scanning for noise attenuation. They preserve geological character, suppress random noise, and improve the signal-to-noise ratio and resolution. Routine dip scanning works in the time-space domain; a new dip-scanning method in the frequency-wavenumber (f-k) domain is put forward for noise attenuation. It exploits the fact that reflections with different dips are separable in the f-k domain, so it can reduce noise while recovering dip information. We also describe a methodology for studying and developing filtering methods based on differential equations. It transforms the filtering equations from the frequency or f-k domain into the time or time-space domain and uses a finite-difference algorithm to solve them. This method does not require the seismic data to be stationary, so the filter parameters can vary at every temporal and spatial point, which enhances the adaptability of the filter, and it is computationally efficient. We put forward a matching-pursuit method for noise suppression. It decomposes a signal into a linear expansion of waveforms selected from a redundant dictionary of functions, chosen to best match the signal structures; it can extract the effective signal from the noisy record and reduce the noise. We also introduce a beamforming filtering method for noise elimination; real seismic data processing shows that it is effective in attenuating multiples and internal multiples, improving the signal-to-noise ratio and resolution while keeping the effective signals at high fidelity. Tests on theoretical models and application to real seismic data prove that the methods in this thesis can effectively suppress random noise, eliminate coherent noise, and improve the resolution of seismic data; they are highly practical and their effect is obvious.
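As a toy illustration of f-k-domain dip filtering (not the thesis's actual algorithm; the function and parameters below are assumptions), the sketch zeroes out energy whose apparent velocity falls below a threshold, which is how steeply dipping linear noise is separated from gently dipping reflections:

```python
import numpy as np

def fk_dip_filter(gather, dt, dx, vmin):
    """Minimal f-k fan filter: zero out energy whose apparent
    velocity |f/k| is below vmin (steeply dipping linear noise)."""
    nt, nx = gather.shape
    spec = np.fft.fft2(gather)
    f = np.fft.fftfreq(nt, d=dt)[:, None]  # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]  # spatial wavenumbers (1/m)
    keep = np.abs(f) >= vmin * np.abs(k)   # pass only gentle dips
    return np.real(np.fft.ifft2(spec * keep))
```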
Abstract:
Eight experiments tested how object-array structure and learning location influenced the establishment and utilization of self-to-object and object-to-object spatial representations in locomotion and reorientation. In Experiments 1 to 4, participants learned either at the periphery of or amidst a regular or irregular object array, and then pointed to objects while blindfolded in three conditions: before turning (baseline), after rotating 240 degrees (updating), and after disorientation (disorientation). In Experiments 5 to 8, participants were instructed to keep track of self-to-object or object-to-object spatial representations before rotation. In each condition, the configuration error, defined as the standard deviation of the per-target-object means of the signed pointing errors, was calculated as the index of the fidelity of the representation used in that condition. Results indicate that participants form both self-to-object and object-to-object spatial representations after learning an object array. Object-array structure influences the selection of representation during updating: by default, the object-to-object spatial representation is updated when people learn a regular object-array structure, and the self-to-object spatial representation is updated when people learn an irregular object array. But people can also update the other representation when required to do so. The fidelity of the representations constrains this kind of “switch”: people can only “switch” from a low-fidelity representation to a high-fidelity representation, or between two representations of similar fidelity; they cannot “switch” from a high-fidelity representation to a low-fidelity representation. Learning location may influence the fidelity of the representations. When people learned at the periphery of the object array, they acquired both self-to-object and object-to-object spatial representations of high fidelity; but when people learned amidst the object array, they acquired only a self-to-object spatial representation of high fidelity, and the fidelity of the object-to-object spatial representation was low.
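For concreteness, a minimal sketch of how the configuration error defined above could be computed, assuming a trials-by-targets matrix of signed pointing errors in degrees (variable names are hypothetical):

```python
import numpy as np

def configuration_error(signed_errors):
    """signed_errors[i, j]: signed pointing error on trial i for target j.
    Configuration error = SD, across targets, of the per-target mean error."""
    per_target_mean = signed_errors.mean(axis=0)
    return per_target_mean.std(ddof=1)

errors = np.array([[ 5., -12.,  8., -3.],
                   [ 7., -10.,  6., -5.],
                   [ 4., -14., 10., -2.]])
print(configuration_error(errors))  # higher = less coherent configuration
```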
Abstract:
A fundamental understanding of the information carrying capacity of optical channels requires the signal and physical channel to be modeled quantum mechanically. This thesis considers the problems of distributing multi-party quantum entanglement to distant users in a quantum communication system and determining the ability of quantum optical channels to reliably transmit information. A recent proposal for a quantum communication architecture that realizes long-distance, high-fidelity qubit teleportation is reviewed. Previous work on this communication architecture is extended in two primary ways. First, models are developed for assessing the effects of amplitude, phase, and frequency errors in the entanglement source of polarization-entangled photons, as well as fiber loss and imperfect polarization restoration, on the throughput and fidelity of the system. Second, an error model is derived for an extension of this communication architecture that allows for the production and storage of three-party entangled Greenberger-Horne-Zeilinger states. A performance analysis of the quantum communication architecture in qubit teleportation and quantum secret sharing communication protocols is presented. Recent work on determining the channel capacity of optical channels is extended in several ways. Classical capacity is derived for a class of Gaussian bosonic channels representing the quantum version of classical colored Gaussian-noise channels. The proof is strongly motivated by the standard technique of whitening Gaussian noise used in classical information theory. Minimum output entropy problems related to these channel capacity derivations are also studied. These single-user bosonic capacity results are extended to a multi-user scenario by deriving capacity regions for single-mode and wideband coherent-state multiple access channels. An even larger capacity region is obtained when the transmitters use nonclassical Gaussian states, and an outer bound on the ultimate capacity region is presented.
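The abstract does not reproduce its capacity formulas; as a point of reference, the known classical capacity of the simplest Gaussian bosonic channel in this family, the pure-loss channel with transmissivity η and mean input photon number N̄, is:

```latex
% Classical capacity of the pure-loss bosonic channel
% (Giovannetti et al., 2004), in bits per channel use.
\[
  C = g(\eta \bar{N}), \qquad
  g(x) = (x+1)\log_2(x+1) - x\log_2 x
\]
```

Here g(x) is the von Neumann entropy of a thermal state with mean photon number x; the colored-noise and multiple-access results described above build on expressions of this type.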
Abstract:
Wireless sensor networks are characterized by limited energy resources. To conserve energy, application-specific aggregation (fusion) of data reports from multiple sensors can be beneficial in reducing the amount of data flowing over the network. Furthermore, controlling the topology by scheduling the activity of nodes between active and sleep modes has often been used to uniformly distribute the energy consumption among all nodes by de-synchronizing their activities. We present an integrated analytical model to study the joint performance of in-network aggregation and topology control. We define performance metrics that capture the tradeoffs among delay, energy, and fidelity of the aggregation. Our results indicate that to achieve high fidelity levels under medium to high event reporting load, shorter and fatter aggregation/routing trees (toward the sink) offer the best delay-energy tradeoff as long as topology control is well coordinated with routing.
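As a back-of-envelope illustration of the delay-energy tension in aggregation trees (a toy model with assumed parameters, not the paper's analytical model), consider a complete k-ary tree toward the sink:

```python
import math

# Toy model: n sensors in a complete k-ary aggregation tree.
# Delay grows with depth (one aggregation round per level), while
# hotspot energy grows with fan-in (k receptions per round).
def tree_tradeoff(n, k, e_rx=1.0, e_tx=1.0):
    depth = math.ceil(math.log(n, k))   # aggregation rounds to reach sink
    energy = k * e_rx + e_tx            # busiest node: k receives + 1 send
    return depth, energy

for k in (2, 4, 8, 16):
    print(k, tree_tradeoff(1000, k))
```

Larger k gives a shallower ("shorter and fatter") tree with fewer aggregation rounds, at the price of more receptions per aggregating node, which is the trade-off the paper's integrated model quantifies properly.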
Abstract:
In this thesis I theoretically study quantum states of ultracold atoms. The majority of the chapters focus on engineering specific quantum states of single atoms with high fidelity in experimentally realistic systems. In Chapter six, I investigate the stability and dynamics of new multidimensional solitonic states that can be created in inhomogeneous atomic Bose-Einstein condensates. In Chapter three I present two papers in which I demonstrate how the coherent tunnelling by adiabatic passage (CTAP) process can be implemented in an experimentally realistic atom chip system, to coherently transfer the centre of mass of a single atom between two spatially distinct magnetic waveguides. In these works I also utilise GPU (Graphics Processing Unit) computing, which offers a significant performance increase in the numerical simulation of the Schrödinger equation. In Chapter four I investigate the CTAP process for a linear arrangement of radio frequency traps, where the centre of mass of both single atoms and clouds of interacting atoms can be coherently controlled. In Chapter five I present a theoretical study of adiabatic radio frequency potentials, where I use Floquet theory to more accurately model situations where frequencies are close and/or field amplitudes are large. I also show how one can create highly versatile 2D adiabatic radio frequency potentials using multiple radio frequency fields with arbitrary field orientation, and demonstrate their utility by simulating the creation of ring vortex solitons. In Chapter six I discuss the stability and dynamics of a family of multidimensional solitonic states created in harmonically confined Bose-Einstein condensates. I demonstrate that these solitonic states have interesting dynamical instabilities, where a continuous collapse and revival of the initial state occurs. Through Bogoliubov analysis, I determine the modes responsible for the observed instabilities of each solitonic state and also extract information related to the time at which instability can be observed.
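As a self-contained sketch of the CTAP idea from Chapter three: three coupled modes with tunneling pulses applied in the counterintuitive order (the 2-3 coupling peaking before the 1-2 coupling) transfer population from site 1 to site 3 while leaving the middle site nearly empty. Pulse shapes and parameters below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gaussian tunneling pulses in the counterintuitive order:
# the 2-3 coupling peaks *before* the 1-2 coupling.
T, tau, sigma, omega0 = 100.0, 15.0, 20.0, 1.0
O12 = lambda t: omega0 * np.exp(-((t - (T/2 + tau/2)) / sigma) ** 2)
O23 = lambda t: omega0 * np.exp(-((t - (T/2 - tau/2)) / sigma) ** 2)

def schrodinger(t, psi):
    # Three-mode Hamiltonian in the site basis (hbar = 1).
    H = np.array([[0.0,    O12(t), 0.0   ],
                  [O12(t), 0.0,    O23(t)],
                  [0.0,    O23(t), 0.0   ]], dtype=complex)
    return -1j * (H @ psi)

psi0 = np.array([1, 0, 0], dtype=complex)  # start in the left waveguide
sol = solve_ivp(schrodinger, (0.0, T), psi0, rtol=1e-9, atol=1e-11)
print(np.abs(sol.y[:, -1]) ** 2)  # population ends almost entirely in site 3
```

The transfer rides the adiabatic "dark" superposition of sites 1 and 3, which is why the intermediate site remains essentially unpopulated when the pulses vary slowly enough.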
Abstract:
BACKGROUND: Shared decision-making has become the standard of care for most medical treatments. However, little is known about physician communication practices in the decision making for unstable critically ill patients with known end-stage disease. OBJECTIVE: To describe communication practices of physicians making treatment decisions for unstable critically ill patients with end-stage cancer, using the framework of shared decision-making. DESIGN: Analysis of audiotaped encounters between physicians and a standardized patient, in a high-fidelity simulation scenario, to identify best-practice communication behaviors. The simulation depicted a 78-year-old man with metastatic gastric cancer, life-threatening hypoxia, and stable preferences to avoid intensive care unit (ICU) admission and intubation. Blinded coders assessed the encounters for verbal communication behaviors associated with handling emotions and discussion of end-of-life goals. We calculated a score for skill at handling emotions (0-6) and at discussing end-of-life goals (0-16). SUBJECTS: Twenty-seven hospital-based physicians. RESULTS: Independent variables included physician demographics and communication behaviors. We used treatment decisions (ICU admission and initiation of palliation) as a proxy for accurate identification of patient preferences. Eight physicians admitted the patient to the ICU, and 16 initiated palliation. Physicians varied, but on average demonstrated low skill at handling emotions (mean, 0.7) and moderate skill at discussing end-of-life goals (mean, 7.4). We found that skill at discussing end-of-life goals was associated with initiation of palliation (p = 0.04). CONCLUSIONS: It is possible to analyze the decision making of physicians managing unstable critically ill patients with end-stage cancer using the framework of shared decision-making.
Abstract:
To assess the effect of targeted myocardial beta-adrenergic receptor (AR) stimulation on relaxation and phospholamban regulation, we studied the physiological and biochemical alterations associated with overexpression of the human beta2-AR gene in transgenic mice. These mice have an approximately 200-fold increase in beta-AR density and a 2-fold increase in basal adenylyl cyclase activity relative to negative littermate controls. Mice were catheterized with a high-fidelity micromanometer and hemodynamic recordings were obtained in vivo. Overexpression of the beta2-AR altered parameters of relaxation. At baseline, LV dP/dt(min) and the time constant of LV isovolumic pressure decay (Tau) in the transgenic mice were significantly shorter compared with controls, indicating markedly enhanced myocardial relaxation. Isoproterenol stimulation resulted in shortening of relaxation velocity in control mice but not in the transgenic mice, indicating maximal relaxation in these animals. Immunoblotting analysis revealed a selective decrease in the amount of phospholamban protein, without a significant change in the content of either sarcoplasmic reticulum Ca2+ ATPase or calsequestrin, in the transgenic hearts compared with controls. This study indicates that myocardial relaxation is both markedly enhanced and maximal in these mice and that conditions associated with chronic beta-AR stimulation can result in a selective reduction of phospholamban protein.
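The abstract does not state the model behind Tau; the standard monoexponential (Weiss-method) fit to isovolumic pressure decay, given here for reference, is:

```latex
% Monoexponential model of isovolumic LV pressure decay;
% a shorter tau means faster relaxation.
\[
  P(t) = P_0\, e^{-t/\tau}
  \quad\Longrightarrow\quad
  \tau = -\,\frac{P(t)}{dP(t)/dt}
\]
```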
Abstract:
Single-molecule sequencing instruments can generate multikilobase sequences with the potential to greatly improve genome and transcriptome assembly. However, the error rates of single-molecule reads are high, which has limited their use thus far to resequencing bacteria. To address this limitation, we introduce a correction algorithm and assembly strategy that uses short, high-fidelity sequences to correct the error in single-molecule sequences. We demonstrate the utility of this approach on reads generated by a PacBio RS instrument from phage, prokaryotic and eukaryotic whole genomes, including the previously unsequenced genome of the parrot Melopsittacus undulatus, as well as for RNA-Seq reads of the corn (Zea mays) transcriptome. Our long-read correction achieves >99.9% base-call accuracy, leading to substantially better assemblies than current sequencing strategies: in the best example, the median contig size was quintupled relative to high-coverage, second-generation assemblies. Greater gains are predicted if read lengths continue to increase, including the prospect of single-contig bacterial chromosome assembly.
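As a toy sketch of the hybrid-correction idea (a majority vote of accurate short reads aligned over a noisy long read), not the published algorithm itself; the sequences and offsets below are fabricated for illustration:

```python
from collections import Counter

def correct_long_read(long_read, alignments):
    """Toy hybrid correction: majority vote over short reads
    aligned to each position of a noisy long read.
    `alignments` is a list of (offset, short_read) pairs."""
    pileup = [[] for _ in long_read]
    for offset, read in alignments:
        for i, base in enumerate(read):
            pos = offset + i
            if 0 <= pos < len(long_read):
                pileup[pos].append(base)
    corrected = []
    for base, column in zip(long_read, pileup):
        # Keep the original base where no short-read evidence exists.
        corrected.append(Counter(column).most_common(1)[0][0] if column else base)
    return "".join(corrected)

noisy = "ACGTTAGCAA"  # long read with errors at positions 4 and 8
short_reads = [(0, "ACGTC"), (2, "GTCAG"), (4, "CAGCG"), (5, "AGCGA")]
print(correct_long_read(noisy, short_reads))  # -> ACGTCAGCGA
```

The real pipeline must of course handle insertions and deletions, which dominate single-molecule error profiles; this sketch only conveys the consensus principle.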
Abstract:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D+dual energy+time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. CONCLUSIONS: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
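The paper's rank-sparse kernel regression is specific to this work, but the underlying low-rank plus sparse split can be illustrated with a generic robust-PCA iteration (inexact augmented Lagrangian); this sketch is a standard textbook routine, not the authors' reconstruction code:

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft thresholding (promotes sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (promotes low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    """Inexact augmented-Lagrangian decomposition M ~ L + S,
    with L low rank and S sparse."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S

# Synthetic test: rank-2 "background" plus sparse "contrast" spikes,
# loosely analogous to the averaged image plus sparse contrast terms above.
rng = np.random.default_rng(0)
base = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
spikes = (rng.random((60, 40)) < 0.05) * 5.0
L, S = rpca(base + spikes)
print(np.linalg.matrix_rank(np.round(L, 6)), (np.abs(S) > 1e-6).mean())
```

In the paper's setting the low-rank part plays the role of the shared time- and energy-averaged structure, while the sparse part absorbs the temporal and spectral contrast; the actual reconstruction couples this split with projection-domain data fidelity via split Bregman.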