8 results for Psychic suffer

in CaltechTHESIS


Relevance:

10.00%

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them can feasibly be placed in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as the outer product rule, learning algorithms, and energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement of full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since it is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, both neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look to three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
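To make the Winner-Take-All behavior concrete, here is a minimal Python sketch of a standard mutual-inhibition formulation; it is an illustrative stand-in, not the thesis's specific Hopfield-model circuits, and all parameter values are assumptions.

```python
import numpy as np

# Winner-Take-All via global mutual inhibition (textbook formulation):
# each unit receives its external input plus self-excitation, minus
# inhibition proportional to the total network activity. The dynamics
# converge so that only the unit with the largest input stays active.

def winner_take_all(inputs, alpha=1.2, beta=1.0, dt=0.05, steps=400):
    I = np.asarray(inputs, dtype=float)
    x = I.copy()                                  # start from the raw inputs
    for _ in range(steps):
        drive = I + alpha * x - beta * x.sum()    # excitation minus inhibition
        x += dt * (-x + np.maximum(drive, 0.0))   # leaky rectified dynamics
    return x

print(winner_take_all([0.3, 0.9, 0.5]))
# -> roughly [0, 1.125, 0]: only the largest input survives the competition
```

The feedback inhibition plays the same constraint-enforcing role described above: the circuit settles only on states in which a single unit is active.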

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.

Relevance:

10.00%

Abstract:

We simulate incompressible, MHD turbulence using a pseudo-spectral code. Our major conclusions are as follows.

1) MHD turbulence is most conveniently described in terms of counter-propagating shear Alfvén and slow waves. Shear Alfvén waves control the cascade dynamics. Slow waves play a passive role and adopt the spectrum set by the shear Alfvén waves. Cascades composed entirely of shear Alfvén waves do not generate a significant measure of slow waves.

2) MHD turbulence is anisotropic, with energy cascading more rapidly along k⊥ than along k∥, where k⊥ and k∥ refer to the wavevector components perpendicular and parallel to the local magnetic field. Anisotropy increases with increasing k⊥ such that excited modes are confined inside a cone bounded by k∥ ∝ k⊥^γ with γ < 1. The opening angle of the cone, θ(k⊥) ∝ k⊥^-(1-γ), defines the scale-dependent anisotropy.

3) MHD turbulence is generically strong in the sense that the waves which comprise it suffer order-unity distortions on timescales comparable to their periods. Nevertheless, turbulent fluctuations are small deep inside the inertial range. Their energy density is less than that of the background field by a factor θ²(k⊥) ≪ 1.

4) MHD cascades are best understood geometrically. Wave packets suffer distortions as they move along magnetic field lines perturbed by counter-propagating waves. Field lines perturbed by unidirectional waves map planes perpendicular to the local field into each other. Shear Alfvén waves are responsible for the mapping's shear, and slow waves for its dilatation. The amplitude of the former exceeds that of the latter by 1/θ(k⊥), which accounts for the dominance of the shear Alfvén waves in controlling the cascade dynamics.

5) Passive scalars mixed by MHD turbulence adopt the same power spectrum as the velocity and magnetic field perturbations.

6) Decaying MHD turbulence is unstable to an increase in the imbalance between the flux of waves propagating in opposite directions along the magnetic field. Forced MHD turbulence displays order unity fluctuations with respect to the balanced state if excited at low k by δ(t)-correlated forcing. It appears to be statistically stable to the unlimited growth of imbalance.

7) Gradients of the dynamic variables are focused into sheets aligned with the magnetic field whose thickness is comparable to the dissipation scale. Sheets formed by oppositely directed waves are uncorrelated. We suspect that these are vortex sheets which the mean magnetic field prevents from rolling up.

8) Items (1)-(5) lend support to the model of strong MHD turbulence put forth by Goldreich and Sridhar (1995, 1997). Results from our simulations are also consistent with the GS prediction γ = 2/3. The sole notable discrepancy is that the 1D power-law spectra, E(k⊥) ∝ k⊥^-α, determined from our simulations exhibit α ≈ 3/2, whereas the GS model predicts α = 5/3.
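For context, the GS predictions quoted above follow from the critical-balance argument; the following is a compact sketch of that standard derivation (textbook material from Goldreich and Sridhar (1995), not a result of these simulations):

```latex
% Critical balance: the linear Alfven crossing time matches the nonlinear
% cascade time,  k_\parallel v_A \sim k_\perp v_{k_\perp}.
% A constant-flux (Kolmogorov-style) perpendicular cascade gives
% v_{k_\perp} \propto k_\perp^{-1/3}, and therefore
k_\parallel \propto k_\perp^{2/3} \quad (\gamma = 2/3),
\qquad
E(k_\perp) \propto k_\perp^{-5/3} \quad (\alpha = 5/3).
```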

Relevance:

10.00%

Abstract:

Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. The characteristic dimensions of foams therefore span multiple length scales, making their mechanical properties difficult to model. Continuum-mechanics-based models capture some salient experimental features, like the linear elastic regime followed by a nonlinear plateau-stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which describes the phenomenon at a continuum level but retains some physical insight, requires developing new theoretical approaches.

A fundamental question that motivates the modeling of foams is: how can the intrinsic material response be extracted from simple mechanical test data, such as the stress vs. strain response? A 3D model was developed to simulate the mechanical response of foam-type materials. Its novel features include a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function (a sketch of such a function follows the list below). Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, "Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227–2237; Erratum 60, 1753–1756 (2012)], the property-space exploration was advanced to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, such as
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
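As promised above, here is a minimal Python sketch of a flow-stress input with the hardening-softening-hardening shape; the functional form and every constant are illustrative assumptions, not the thesis's calibrated material input.

```python
import numpy as np

# Illustrative hardening-softening-hardening flow stress: hardens to a
# local peak, softens to a plateau, then hardens steeply again at large
# strain (densification). All parameters are hypothetical.

def flow_stress(eps_p, s0=1.0, h1=8.0, eps1=0.05, drop=0.4, h2=40.0, eps_d=0.6):
    eps_p = np.asarray(eps_p, dtype=float)
    sigma = s0 + h1 * np.minimum(eps_p, eps1)                  # initial hardening
    sigma -= drop * np.clip((eps_p - eps1) / eps1, 0.0, 1.0)   # softening past the peak
    sigma += h2 * np.maximum(eps_p - eps_d, 0.0) ** 2          # densification hardening
    return sigma

print(flow_stress([0.0, 0.05, 0.3, 0.9]))   # [1.0, 1.4, 1.0, 4.6]: peak, plateau, re-hardening
```

Feeding a material input of this shape into a continuum model is one way to reproduce the instability-then-hardening sequence listed above.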

The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved superior, since it is the most sensitive in capturing the material properties.

Relevance:

10.00%

Abstract:

Isotopic fractionation due to sputtering has been investigated via a collector-type experiment in which targets of known isotopic composition were bombarded with several-keV Ar⁺ and Xe⁺ ions at fluences down to 3.0×10¹⁴ ions/cm², believed to be the lowest fluences for which such detailed measurements have ever been made. The isotopes were sputtered onto carbon collectors and analyzed with Secondary Ion Mass Spectrometry (SIMS). There is a clear indication of preferential effects several times larger than predicted by the dominant analytical theory. Results also show a fairly strong angular variation in the fractionation. The maximum effect is usually seen in the near-normal direction, measured from the target surface, falling continuously, by a few percent in some cases, to a minimum in the oblique direction. Measurements have been made using the Mo isotopes ¹⁰⁰Mo and ⁹²Mo and a liquid-metal system of In:Ga eutectic. The light isotope of Mo is found to suffer a 53 ± 5‰ (note: 1.0‰ ≡ 0.1%) enrichment in the sputtered flux in the near-normal direction, compared to the steady-state near-normal sputtered composition, under 5.0 keV Xe⁺ bombardment at 3.0×10¹⁴ ions/cm². In the liquid-metal study only the angular dependence of the fractionation could be measured, due to the lack of a well-defined reference and the nature of the liquid surface, which is able to 'repair' itself during the course of a bombardment. The results show that ¹¹³In is preferentially sputtered over ¹¹⁵In in the near-normal direction by about 8.7 ± 2.7‰ compared to the oblique direction. ⁶⁹Ga, on the other hand, is sputtered preferentially over ⁷¹Ga in the oblique direction by about 13 ± 4.4‰ with respect to the near-normal direction.
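As a quick gloss on the ‰ (per mil) notation used above, here is a minimal Python sketch of the standard delta calculation; the numeric ratio in the example is illustrative only.

```python
def delta_permil(r_sample, r_reference):
    """Per-mil deviation of an isotope ratio from a reference (1.0 permil = 0.1%)."""
    return (r_sample / r_reference - 1.0) * 1000.0

# Illustrative: a light/heavy isotope ratio 5.3% above its steady-state
# reference corresponds to the ~53 permil Mo enrichment quoted above.
print(delta_permil(1.053, 1.000))   # -> 53.0
```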

Relevance:

10.00%

Abstract:

Surface plasma waves arise from the collective oscillations of billions of electrons at the surface of a metal in unison. The simplest way to quantize these waves is by direct analogy to electromagnetic fields in free space, with the surface plasmon, the quantum of the surface plasma wave, playing the same role as the photon. It follows that surface plasmons should exhibit all of the same quantum phenomena that photons do, including quantum interference and entanglement.

Unlike photons, however, surface plasmons suffer strong losses that arise from the scattering of free electrons from other electrons, phonons, and surfaces. Under some circumstances, these interactions might also cause “pure dephasing,” which entails a loss of coherence without absorption. Quantum descriptions of plasmons usually do not account for these effects explicitly, and sometimes ignore them altogether. In light of this extra microscopic complexity, it is necessary for experiments to test quantum models of surface plasmons.

In this thesis, I describe two such tests that my collaborators and I performed. The first was a plasmonic version of the Hong-Ou-Mandel experiment, in which we observed two-particle quantum interference between plasmons with a visibility of 93 ± 1%. This measurement confirms that surface plasmons faithfully reproduce this effect with the same visibility and mutual coherence time, to within measurement error, as in the photonic case.
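For reference, the two-particle interference being tested is the textbook Hong-Ou-Mandel effect (standard quantum optics, sketched here; not specific to this thesis): a 50/50 beamsplitter sends one boson entering each input port into a superposition in which both exit the same port, so coincidence counts between the two outputs vanish for perfectly indistinguishable particles.

```latex
% 50/50 beamsplitter: a^\dagger \to (c^\dagger + d^\dagger)/\sqrt{2},
%                     b^\dagger \to (c^\dagger - d^\dagger)/\sqrt{2}
a^\dagger b^\dagger \lvert 0 \rangle
\;\longrightarrow\;
\tfrac{1}{2}\,\bigl(c^\dagger + d^\dagger\bigr)\bigl(c^\dagger - d^\dagger\bigr)\lvert 0 \rangle
= \tfrac{1}{2}\,\bigl[(c^\dagger)^2 - (d^\dagger)^2\bigr]\lvert 0 \rangle
= \tfrac{1}{\sqrt{2}}\,\bigl(\lvert 2,0 \rangle - \lvert 0,2 \rangle\bigr)
```

The measured visibility quantifies how closely the plasmon pair approaches this ideal, in which the coincidence rate drops to zero.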

The second experiment demonstrated path entanglement between surface plasmons with a visibility of 95 ± 2%, confirming that a path-entangled state can indeed survive without measurable decoherence. This measurement suggests that elastic scattering mechanisms of the type that might cause pure dephasing must have been weak enough not to significantly perturb the state of the metal under the experimental conditions we investigated.

These two experiments add quantum interference and path entanglement to a growing list of quantum phenomena that surface plasmons appear to exhibit just as clearly as photons, confirming the predictions of the simplest quantum models.

Relevance:

10.00%

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis.

The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at Reynolds number one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a time long enough for single vortices to travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth of the domain length) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated by a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
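To illustrate the direction-splitting idea at the heart of the ADI approach, here is a minimal Python sketch of the classical Peaceman-Rachford ADI scheme applied to the 2D heat equation. It is a structural analog only, not the thesis's quasi-unconditionally stable BDF-ADI Navier-Stokes solver, and all parameters are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

# Peaceman-Rachford ADI for u_t = nu*(u_xx + u_yy) on the unit square with
# homogeneous Dirichlet boundaries. Each time step reduces to tridiagonal
# (linear-cost) solves, one implicit sweep per spatial direction.

n = 64                          # interior grid points per direction
h = 1.0 / (n + 1)
nu, dt, steps = 1.0, 1e-3, 100
r = nu * dt / (2.0 * h**2)      # half-step diffusion number

# Tridiagonal operator (I - r*D2) in banded storage for solve_banded.
ab = np.zeros((3, n))
ab[0, 1:] = -r                  # superdiagonal
ab[1, :] = 1.0 + 2.0 * r        # main diagonal
ab[2, :-1] = -r                 # subdiagonal

def explicit_sweep(u, axis):
    """Apply (I + r*D2) along `axis`, with zero Dirichlet values outside."""
    v = (1.0 - 2.0 * r) * u
    if axis == 0:
        v[1:, :] += r * u[:-1, :]
        v[:-1, :] += r * u[1:, :]
    else:
        v[:, 1:] += r * u[:, :-1]
        v[:, :-1] += r * u[:, 1:]
    return v

# Initial condition: a single Fourier mode, which decays as exp(-2*pi^2*nu*t).
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)

for _ in range(steps):
    u_star = solve_banded((1, 1), ab, explicit_sweep(u, axis=1))      # implicit in x
    u = solve_banded((1, 1), ab, explicit_sweep(u_star, axis=0).T).T  # implicit in y

t = steps * dt
print(u.max(), np.exp(-2.0 * np.pi**2 * nu * t))   # numerical vs. analytic decay
```

Splitting the implicit operator by direction is what keeps the cost linear in the number of unknowns while retaining stability an explicit scheme lacks; the BDF-based variant described above extends this idea to high temporal order.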

Relevance:

10.00%

Abstract:

Systems-level studies of biological systems rely on observations taken at a resolution lower than the essential unit of biology, the cell. Recent technical advances in DNA sequencing have enabled measurements of the transcriptomes in single cells excised from their environment, but it remains a daunting technical problem to reconstruct in situ gene expression patterns from sequencing data. In this thesis I develop methods for the routine, quantitative in situ measurement of gene expression using fluorescence microscopy.

The number of molecular species that can be measured simultaneously by fluorescence microscopy is limited by the palette of spectrally distinct fluorophores. Thus, fluorescence microscopy is traditionally limited to the simultaneous measurement of only five labeled biomolecules at a time. The two methods described in this thesis, super-resolution barcoding and sequential barcoding, represent strategies for overcoming this limitation to monitor expression of many genes in a single cell. Super-resolution barcoding employs optical super-resolution microscopy (SRM) and combinatorial labeling via smFISH (single-molecule fluorescence in situ hybridization) to uniquely label individual mRNA species with distinct barcodes resolvable at nanometer resolution. This method dramatically increases the usable optical space in a cell, allowing a large number of barcodes to be visualized simultaneously. As a proof of principle, this technology was used to study the S. cerevisiae calcium stress response. The second method, sequential barcoding, reads out a temporal barcode through multiple rounds of oligonucleotide hybridization to the same mRNA. The multiplexing capacity of sequential barcoding increases exponentially with the number of rounds of hybridization, allowing over a hundred genes to be profiled in only a few rounds of hybridization.
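A back-of-the-envelope illustration of that exponential scaling in Python (the specific numbers of colors and rounds are hypothetical):

```python
import math

# With F spectrally distinct fluorophores and N rounds of hybridization,
# sequential barcoding can in principle address F**N distinct mRNA species.
F, N = 4, 4
print(F ** N)                                  # 256 addressable genes

# Rounds needed to cover G genes: the smallest N with F**N >= G.
G = 100
print(math.ceil(math.log(G) / math.log(F)))    # 4 rounds suffice for 100 genes
```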

The utility of sequential barcoding was further demonstrated by adapting this method to study gene expression in mammalian tissues. Mammalian tissues suffer from both substantial auto-fluorescence and light scattering, making detection of smFISH probes on mRNA difficult. An amplified single-molecule detection technology, smHCR (single-molecule hybridization chain reaction), was developed to allow for the quantification of mRNA in tissue. This technology is demonstrated in combination with light-sheet microscopy and background-reducing tissue-clearing technology, enabling whole-organ sequential barcoding to monitor in situ gene expression directly in intact mammalian tissue.

The methods presented in this thesis, specifically sequential barcoding and smHCR, enable multiplexed transcriptional observations in any tissue of interest. These technologies will serve as a general platform for future transcriptomic studies of complex tissues.

Relevance:

10.00%

Abstract:

Constitutive modeling in granular materials has historically been based on macroscopic experimental observations that, while usually effective at predicting the bulk behavior of these types of materials, suffer important limitations when it comes to understanding the physics of the grain-to-grain interactions that cause the material to behave macroscopically as it does under given boundary conditions.

The advent of the discrete element method (DEM) in the late 1970s helped scientists and engineers gain deeper insight into some of the most fundamental mechanisms operating at the grain scale. However, one of the most critical limitations of classical DEM schemes has been their inability to account for complex grain morphologies; simplified geometries such as discs, spheres, and polyhedra have typically been used instead. Fortunately, in the last fifteen years there has been increasing development of new computational and experimental techniques, such as non-uniform rational basis splines (NURBS) and 3D X-ray computed tomography (3DXRCT), which are contributing to new tools that enable the inclusion of complex grain morphologies in DEM schemes.

Yet, as the scientific community is still developing these new tools, a gap remains both in thoroughly understanding the physical relations connecting the grain and continuum scales and in developing discrete techniques that can predict the emergent behavior of granular materials without resorting to phenomenology, instead directly unraveling the micro-mechanical origin of macroscopic behavior.

In order to contribute towards closing the aforementioned gap, we have developed a micro-mechanical analysis of macroscopic peak strength, critical state, and residual strength in two-dimensional non-cohesive granular media, where typical continuum constitutive quantities such as frictional strength and dilation angle are explicitly related to their corresponding grain-scale counterparts (e.g., inter-particle contact forces, fabric, particle displacements, and velocities), providing an across-the-scale basis for better understanding and modeling granular media.

In the same way, we utilize a new DEM scheme (LS-DEM) that takes advantage of a mathematical technique called the level set (LS) method to enable the inclusion of real grain shapes in a classical discrete element method. After calibrating LS-DEM against real experimental results, we exploit part of its potential to study the dependence of critical state (CS) parameters such as the critical state line (CSL) slope, CSL intercept, and CS friction angle on grain morphology, i.e., sphericity, roundness, and regularity.
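To illustrate the core idea that distinguishes LS-DEM, here is a minimal 2D Python sketch of a level-set contact test under assumed conventions (phi is a signed-distance grid, negative inside the grain; the penalty stiffness and geometry are hypothetical):

```python
import numpy as np

def interp_phi(phi, h, p):
    """Bilinearly interpolate the signed-distance grid phi (spacing h) at point p.
    Assumes p lies strictly inside the grid."""
    i, j = int(p[0] // h), int(p[1] // h)
    fx, fy = p[0] / h - i, p[1] / h - j
    return ((1 - fx) * (1 - fy) * phi[i, j] + fx * (1 - fy) * phi[i + 1, j]
            + (1 - fx) * fy * phi[i, j + 1] + fx * fy * phi[i + 1, j + 1])

def contact_force(nodes_A, phi_B, h, k=1e4):
    """Penalty contact force on grain A: each of its boundary nodes is tested
    against grain B's level set; penetrating nodes (phi < 0) are pushed out
    along the level-set gradient, proportionally to penetration depth."""
    f = np.zeros(2)
    for p in nodes_A:
        d = interp_phi(phi_B, h, p)
        if d < 0.0:                                   # node is inside grain B
            grad = np.array([
                interp_phi(phi_B, h, p + np.array([h, 0.0]))
                - interp_phi(phi_B, h, p - np.array([h, 0.0])),
                interp_phi(phi_B, h, p + np.array([0.0, h]))
                - interp_phi(phi_B, h, p - np.array([0.0, h])),
            ]) / (2.0 * h)
            n_hat = grad / (np.linalg.norm(grad) + 1e-12)  # outward normal of B
            f += -k * d * n_hat                       # -d > 0: repulsive force
    return f

# Example: grain B is a disc of radius 0.3 centered at (0.5, 0.5).
h = 0.01
xs = np.arange(0.0, 1.0 + h, h)
Xg, Yg = np.meshgrid(xs, xs, indexing="ij")
phi = np.hypot(Xg - 0.5, Yg - 0.5) - 0.3
print(contact_force([np.array([0.5, 0.25])], phi, h))  # ~(0, -500): node pushed out of B
```

Because any shape with a well-defined signed distance can serve as phi, the same test handles arbitrarily irregular real grain geometries, which is precisely what discs and spheres cannot do.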

Finally, we introduce a first computational algorithm to "clone" the grain morphologies of a sample of real digital grains. This cloning algorithm allows us to generate an arbitrary number of cloned grains that display the same morphological features (e.g., roundness and aspect ratio) as their real parents and can be included in a DEM simulation of a given mechanical phenomenon. In turn, this will help with the development of discrete techniques that can directly predict the engineering-scale behavior of granular media without resorting to phenomenology.