10 results for bio-inspired computing

in CaltechTHESIS


Relevance:

100.00%

Abstract:

Biological machines are active devices composed of cells and other biological components. These functional devices are best suited to physiological environments that support cellular function and survival. Biological machines have the potential to revolutionize the engineering of biomedical devices intended for implantation, where the human body can provide the required physiological environment. For engineering such cell-based machines, bio-inspired design can serve as a guiding platform, as it provides functionally proven designs that are attainable by living cells. In the present work, a systematic approach was used to tissue-engineer one such machine by exclusively using biological building blocks and by employing a bio-inspired design. Valveless impedance pumps were constructed based on the working principles of the embryonic vertebrate heart and by using cells and tissue derived from rats. The function of these tissue-engineered muscular pumps was characterized by exploring their spatiotemporal and flow behavior in order to better understand the capabilities and limitations of cells when used as the engines of biological machines.

Relevance:

80.00%

Abstract:

Cardiovascular diseases (CVDs) have reached epidemic proportions in the US and worldwide, with serious consequences in terms of human suffering and economic impact. More than one third of American adults suffer from CVDs. The total direct and indirect costs of CVDs exceed $500 billion per year. There is therefore an urgent need to develop noninvasive diagnostic methods, to design minimally invasive assist devices, and to develop economical and easy-to-use monitoring systems for cardiovascular disease. To achieve these goals, it is necessary to gain a better understanding of the subsystems that constitute the cardiovascular system. The aorta is one such subsystem, and its role in cardiovascular functioning has been underestimated. Traditionally, the aorta and its branches have been viewed as resistive conduits connected to an active pump (the left ventricle of the heart). However, this perception fails to explain many observed physiological results. My goal in this thesis is to demonstrate the subtle but important role of the aorta as a system, with a focus on the wave dynamics in the aorta.

The operation of a healthy heart is based on an optimized balance between its pumping characteristics and the hemodynamics of the aorta and vascular branches. The delicate balance between the aorta and heart can be impaired by aging, smoking, or disease. The heart generates pulsatile flow that produces pressure and flow waves as it enters the compliant aorta. These aortic waves propagate and reflect from reflection sites (bifurcations and tapering). They can act constructively and assist the blood circulation. However, they may act destructively, promoting disease or initiating sudden cardiac death. These waves also carry information about diseases of the heart, vascular disease, and the coupling of heart and aorta. In order to elucidate the role of the aorta as a dynamic system, the interplay between the dominant wave dynamic parameters is investigated in this study. These parameters are heart rate, aortic compliance (wave speed), and the locations of reflection sites. Both computational and experimental approaches have been used in this research. In some cases, the results are further explained using theoretical models.

The main findings of this study are as follows: (i) developing a physiologically realistic outflow boundary condition for blood flow modeling in a compliant vasculature; (ii) demonstrating that pulse pressure as a single index cannot predict the true level of pulsatile workload on the left ventricle; (iii) proving that there is an optimum heart rate at which the pulsatile workload of the heart is minimized, and that this optimum heart rate shifts to a higher value as aortic rigidity increases; (iv) introducing a simple bio-inspired device for the correction and optimization of aortic wave reflection that reduces the workload on the heart; (v) deriving a non-dimensional number that can predict the optimum wave dynamic state in a mammalian cardiovascular system; (vi) demonstrating that waves can create a pumping effect in the aorta; (vii) introducing a system parameter and a new medical index, the Intrinsic Frequency, that can be used for noninvasive diagnosis of heart and vascular diseases; and (viii) proposing a new medical hypothesis for sudden cardiac death in young athletes.

Relevance:

20.00%

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether it is used to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly scaled CMOS: an emphasis on highly digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link but also link conditions that change over time. These concepts are demonstrated through two design examples, at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
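To make the ping-pong phase selection concrete, here is a minimal behavioral sketch in Python. The delay-line length (64 taps across 2 UI), the eye half-width, and the simplification that scan/data sample agreement directly traces the eye opening are illustrative assumptions, not parameters from the thesis:

```python
N_TAPS = 64                 # assumed delay-line resolution over 2 UI
UI = N_TAPS // 2            # taps per unit interval

def in_eye(phase, eye_center, half_width=UI // 4):
    """True if a clock at this tap lands inside the eye opening
    (the eye repeats every UI across the 2 UI delay line)."""
    dist = min((phase - eye_center) % UI, (eye_center - phase) % UI)
    return dist <= half_width

def sweep(eye_center):
    """The scan clock walks the whole delay line; comparing its samples
    with the data clock's yields 1 wherever both recover the same bit
    (modeled here as simply being inside the eye)."""
    return [1 if in_eye(p, eye_center) else 0 for p in range(N_TAPS)]

def update_phase(eye_map):
    """Center of the widest run of agreeing taps = best sampling phase."""
    best_start, best_len, start = 0, 0, None
    for p, ok in enumerate(eye_map + [0]):     # sentinel closes final run
        if ok and start is None:
            start = p
        elif not ok and start is not None:
            if p - start > best_len:
                best_start, best_len = start, p - start
            start = None
    return best_start + best_len // 2

data_phase = update_phase(sweep(eye_center=21))
print("data clock placed at tap", data_phase)   # mid-eye, near tap 21
# After each update the two clocks swap roles (ping-pong), so the pair
# can track unbounded phase drift without a PLL or DLL.
```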

On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
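A behavioral sketch of that alignment-sensing path may help; the overlap model, the bias-to-delay relationship, and the TDC resolution below are invented for illustration and are not taken from the thesis:

```python
# Plate overlap -> rectified bias -> delay -> TDC code (toy model).

def coupling(misalign_um, plate_um=100.0):
    """Fractional plate overlap for a lateral misalignment."""
    return max(0.0, 1.0 - misalign_um / plate_um)

def delay_ns(bias):
    """Rectified stimulus biases a delay cell: stronger coupling gives
    more bias and hence a shorter delay (assumed relationship)."""
    return 1.0 / max(bias, 1e-3)

def tdc_code(delay, lsb_ns=0.05, bits=6):
    """Time-to-digital conversion: quantize the delay into a word."""
    return min(int(delay / lsb_ns), 2**bits - 1)

for misalign in (0, 20, 50, 80):                 # lateral offset in um
    code = tdc_code(delay_ns(coupling(misalign)))
    print(f"misalignment {misalign:2d} um -> TDC code {code}")
```

Reading the TDC word thus gives an alignment estimate from the same plates that carry data, which is the point of merging the two functions.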

Relevance:

20.00%

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next we discuss board-to-board links, which demand a longer communication range. Finally, we discuss on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which results in high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45 nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21 dB of loss while consuming about 7.5 mW from a 1.2 V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through the implementation of a prototype in 65 nm CMOS. The design achieves up to 20 Gb/s data rate while consuming less than 10 mW.
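The DFE principle itself is compact enough to show in a few lines. The following sketch models a 2-tap DFE arithmetically (in the receiver above the summation is performed in the charge domain by redistributing charge on capacitors, which this toy model does not attempt to capture); the channel response and tap weights are illustrative, not measured values:

```python
import random

CHANNEL = [1.0, 0.7, 0.4]      # main cursor + two post-cursors (ISI)
WEIGHTS = [0.7, 0.4]           # DFE taps matched to the post-cursors

def transmit(bits):
    """Convolve the +/-1 bit stream with the lossy channel."""
    return [sum(CHANNEL[k] * bits[n - k]
                for k in range(len(CHANNEL)) if n - k >= 0)
            for n in range(len(bits))]

def slicer_only(samples):
    return [1 if x >= 0 else -1 for x in samples]

def dfe(samples):
    out = []
    for x in samples:
        # Summer: subtract weighted past decisions from the new sample.
        y = x - sum(w * d for w, d in zip(WEIGHTS, reversed(out[-2:])))
        out.append(1 if y >= 0 else -1)
    return out

bits = [random.choice((-1, 1)) for _ in range(2000)]
rx = transmit(bits)
print("errors without DFE:", sum(a != b for a, b in zip(bits, slicer_only(rx))))
print("errors with DFE:   ", sum(a != b for a, b in zip(bits, dfe(rx))))
```

On this noiseless channel the plain slicer fails whenever accumulated ISI overwhelms the current bit, while the feedback loop cancels the post-cursors exactly.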

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65 nm CMOS and achieved up to 24 Gb/s with less than 0.4 pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.
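A toy model of the double-sampling idea is sketched below: the photocurrent integrates on an RC node whose time constant is much longer than a bit period, and each bit is recovered from the difference between consecutive samples. The droop-corrected comparison stands in for dynamic offset modulation, which keeps long runs of identical bits decodable; all constants are illustrative assumptions:

```python
import random

PHOTO = {1: 1.0, 0: 0.2}         # integrated photocurrent per bit (a.u.)
LEAK = 0.02                      # slow droop of the integrating node

def front_end(bits):
    v, samples = 0.0, [0.0]
    for b in bits:
        v = (1 - LEAK) * v + PHOTO[b]     # integrate over one bit period
        samples.append(v)
    return samples

def receive(samples, threshold=0.6):
    bits = []
    for n in range(1, len(samples)):
        # Double sampling with the expected droop removed (DOM stand-in).
        dv = samples[n] - (1 - LEAK) * samples[n - 1]
        bits.append(1 if dv > threshold else 0)
    return bits

tx = [random.randint(0, 1) for _ in range(500)]
print("bit errors:", sum(a != b for a, b in zip(tx, receive(front_end(tx)))))
```

Because only differences matter, no fast-settling (short time constant) front end or data-rate gain stage is needed, which is the power advantage claimed above.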

As the technology scales, the number of transistors on a chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28 nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20 Gb/s of data rate (12.5 Gb/s/µm) with better than 136 fJ/b of power efficiency.

Relevance:

20.00%

Abstract:

Lipid bilayer membranes are models for cell membranes, the structures that help regulate cell function. Cell membranes are heterogeneous, and the coupling between composition and shape gives rise to complex behaviors that are important to regulation. This thesis seeks to systematically build and analyze complete models to understand the behavior of multi-component membranes.

We propose a model and use it to derive the equilibrium and stability conditions for a general class of closed multi-component biological membranes. Our analysis shows that the critical modes of these membranes have high frequencies, unlike those of single-component vesicles, and that their stability depends on system size, unlike systems undergoing spinodal decomposition in flat space. An important implication is that small perturbations may nucleate localized but very large deformations. We compare these results with experimental observations.
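The abstract does not reproduce the governing functional, but equilibrium and stability analyses of closed multi-component membranes conventionally start from a Canham-Helfrich-type energy augmented with a composition field; a representative (assumed, not thesis-specific) form is

```latex
E[\phi] = \int_{S} \left[ \frac{\kappa(\phi)}{2}\bigl(2H - c_0(\phi)\bigr)^2
        + \frac{\epsilon}{2}\,\lvert\nabla_{S}\,\phi\rvert^{2} + W(\phi) \right] \mathrm{d}A
        + \sigma \int_{S} \mathrm{d}A + p \int_{V} \mathrm{d}V,
```

where H is the mean curvature, φ the local composition, c₀(φ) a composition-dependent spontaneous curvature, W a double-well mixing potential, and σ and p Lagrange multipliers enforcing the area and volume constraints. Linearizing the Euler-Lagrange equations of such a functional about a uniform sphere is the standard route to the mode-dependent stability conditions described above.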

We also study open membranes to gain insight into long tubular membranes that arise, for example, in nerve cells. We derive a complete system of equations for open membranes by using the principle of virtual work. Our linear stability analysis predicts that tubular membranes tend to adopt coiling shapes if the tension is small, cylindrical shapes if the tension is moderate, and beading shapes if the tension is large. This is consistent with experimental observations of nerve fibers reported in the literature. Further, we provide numerical solutions to the fully nonlinear equilibrium equations in some problems, and show that the observed mode shapes are consistent with those suggested by linear stability. Our work also proves that beading of nerve fibers can appear purely as a mechanical response of the membrane.

Relevance:

20.00%

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, looking at the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several other factors.
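A structural sketch of that staggered loop, in Python, is given below. The real framework uses an augmented-Lagrangian FFT iteration for a polycrystal with four variants and four slip systems; this toy keeps a single variant and slip system, a single Fourier solve per step, and invented material constants throughout:

```python
import numpy as np

N, MU = 64, 1.0                    # grid resolution, shear modulus
ETA_T, ETA_P = 0.05, 0.08          # transformation / slip eigenstrains
F_T, F_P = 0.01, 0.02              # stick-slip activation thresholds
MOB_DT = 0.5                       # mobility x time step

lam = np.zeros((N, N))             # martensite volume fraction
gam = np.zeros((N, N))             # plastic slip
k = 2 * np.pi * np.fft.fftfreq(N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                     # k = 0 mode carries no fluctuation

def equilibrium_strain(eps_star, applied):
    """Implicit step: solve mu*lap(u) = mu*d(eps*)/dx in Fourier space
    and return the local shear strain du/dx plus the applied strain."""
    eps_hat = np.fft.fft2(eps_star)
    u_hat = -1j * KX * eps_hat / K2
    return np.real(np.fft.ifft2(1j * KX * u_hat)) + applied

for step in range(50):             # quasi-static ramp of applied shear
    eps_star = ETA_T * lam + ETA_P * gam
    strain = equilibrium_strain(eps_star, applied=0.002 * step)
    stress = MU * (strain - eps_star)
    # Explicit step: stick-slip kinetics, evolving each internal
    # variable only where its driving force exceeds the threshold.
    for field, eta, f_crit in ((lam, ETA_T, F_T), (gam, ETA_P, F_P)):
        force = stress * eta
        field += MOB_DT * np.where(np.abs(force) > f_crit,
                                   np.sign(force) * (np.abs(force) - f_crit),
                                   0.0)
    np.clip(lam, 0.0, 1.0, out=lam)   # volume fraction stays in [0, 1]

print("mean martensite fraction:", float(lam.mean()))
```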

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete-time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the α-γ and α-ε transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large-scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the α-ε model.

Relevance:

20.00%

Abstract:

This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described, and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed with the framework are (1) the CSN (Community Seismic Network), which uses relatively low-cost sensors deployed by members of the community, and (2) SAF (the Situation Awareness Framework), which integrates data from multiple sources, including the CSN; the CISN (California Integrated Seismic Network), a network of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California; and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust, and radiation sensors.

Relevance:

20.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps, but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and can be carried out with the aid of the Reduce computer algebra program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at the threshold for annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method for decomposing single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them, primarily dispersion techniques, are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first-order correction to the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance:

20.00%

Abstract:

Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as intuitive as it appears to be. This is because, in the images acquired by optical microscopy, information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional (volumetric) information about an object into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.

The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.

Among existing methods of obtaining volumetric information, the practicability of optical sectioning has made it the most commonly used and most powerful approach in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause a certain degree of photodamage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instrumentation for volumetric imaging of living biological systems to date.

To develop wide-field optical-sectioning techniques with optical performance equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our proposed techniques are instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.

Relevance:

20.00%

Abstract:

Bio-orthogonal non-canonical amino acid tagging (BONCAT) is an analytical method that allows selective analysis of the subset of newly synthesized cellular proteins produced in response to a biological stimulus. In BONCAT, cells are treated with the non-canonical amino acid L-azidohomoalanine (Aha), which is incorporated during protein synthesis in place of methionine by the wild-type translational machinery. Nascent, Aha-labeled proteins are selectively ligated to affinity tags for enrichment and subsequently identified via mass spectrometry. The work presented in this thesis describes advancements in and applications of the BONCAT technology that establish it as an effective tool for analyzing proteome dynamics with time-resolved precision.

Chapter 1 introduces the BONCAT method and serves as an outline for the thesis as a whole. I discuss the motivations behind the methodological advancements in Chapter 2 and the biological applications in Chapters 3 and 4.

Chapter 2 presents methodological developments that make BONCAT a proteomic tool capable of, in addition to identifying newly synthesized proteins, accurately quantifying rates of protein synthesis. I demonstrate that this quantitative BONCAT approach can measure proteome-wide patterns of protein synthesis at time scales inaccessible to alternative techniques.
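As a purely illustrative sketch of what such a quantitative readout could look like (the abstract does not specify the rate model; the protein names, signal values, and linear-regression approach here are hypothetical): pulse-labeling data can be turned into per-protein synthesis rates by regressing the Aha-labeled signal against pulse duration.

```python
import numpy as np

pulse_min = np.array([5.0, 10.0, 20.0, 40.0])        # Aha pulse lengths
labeled = {                                          # hypothetical MS signal
    "proteinA": np.array([0.8, 1.7, 3.3, 6.5]),
    "proteinB": np.array([0.1, 0.2, 0.45, 0.9]),
}
for name, y in labeled.items():
    rate, _ = np.polyfit(pulse_min, y, 1)            # slope = synthesis rate
    print(f"{name}: {rate:.3f} signal units / min")
```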

In Chapter 3, I use BONCAT to study the biological function of the small RNA regulator CyaR in Escherichia coli. I correctly identify previously known CyaR targets and validate several new ones, expanding the functional roles of this sRNA regulator.

In Chapter 4, I use BONCAT to measure the proteomic profile of the quorum-sensing bacterium Vibrio harveyi during the time-dependent transition from individual to group behaviors. My analysis reveals new quorum-sensing-regulated proteins with diverse functions, including transcription factors, chemotaxis proteins, transport proteins, and proteins involved in iron homeostasis.

Overall, this work describes how to use BONCAT to perform quantitative, time-resolved proteomic analysis and demonstrates that these measurements can be used to study a broad range of biological processes.