17 results for Real example
in CaltechTHESIS
Abstract:
Proton transfer reactions at the interface of water with hydrophobic media, such as air or lipids, are ubiquitous on our planet. These reactions orchestrate a host of vital phenomena in the environment including, for example, acidification of clouds, enzymatic catalysis, chemistries of aerosol and atmospheric gases, and bioenergetic transduction. Despite their importance, however, quantitative details underlying these interactions have remained unclear. Deeper insight into these interfacial reactions is also required in addressing challenges in green chemistry, improved water quality, self-assembly of materials, the next generation of micro-nanofluidics, adhesives, coatings, catalysts, and electrodes. This thesis describes experimental and theoretical investigation of proton transfer reactions at the air-water interface as a function of hydration gradients, electrochemical potential, and electrostatics. Since emerging insights hold at the lipid-water interface as well, this work is also expected to aid understanding of complex biological phenomena associated with proton migration across membranes.
Based on our current understanding, it is known that the physicochemical properties of gas-phase water are drastically different from those of bulk water. For example, the gas-phase hydronium ion, H3O+(g), can protonate most (non-alkane) organic species, whereas H3O+(aq) can neutralize only relatively strong bases. Thus, to be able to understand and engineer water-hydrophobe interfaces, it is imperative to investigate this fluctuating region of molecular thickness wherein the ‘function’ of chemical species transitions from one phase to another via steep gradients in hydration, dielectric constant, and density. Aqueous interfaces are difficult to approach by current experimental techniques because designing experiments to specifically sample interfacial layers (< 1 nm thick) is an arduous task. While recent advances in surface-specific spectroscopies have provided valuable information regarding the structure of aqueous interfaces, structure alone is inadequate to decipher the function. By analogy, theoretical predictions based on classical molecular dynamics have remained limited in their scope.
Recently, we have adapted an analytical electrospray ionization mass spectrometer (ESIMS) for probing reactions at the gas-liquid interface in real time. This technique is direct, surface-specific, and provides unambiguous mass-to-charge ratios of interfacial species. With this innovation, we have been able to investigate the following:
1. How do anions mediate proton transfers at the air-water interface?
2. What is the basis for the negative surface potential at the air-water interface?
3. What is the mechanism for catalysis ‘on-water’?
In addition to our experiments with the ESIMS, we applied quantum mechanics and molecular dynamics to simulate our experiments toward gaining insight at the molecular scale. Our results unambiguously demonstrated the role of electrostatic reorganization of interfacial water during proton transfer events. With our experimental and theoretical results on the ‘superacidity’ of the surface of mildly acidic water, we also explored implications for atmospheric chemistry and green chemistry. Our most recent results explained the basis for the negative charge of the air-water interface and showed that the water-hydrophobe interface could serve as a site for enhanced autodissociation of water compared to the condensed phase.
Abstract:
Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment. The investigation of hypervelocity impact phenomena described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylinder projectiles on variable-thickness aluminum targets. The SPHIR facility's two-stage light-gas gun is capable of routinely launching 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operation procedures and the investigation of first-stage pressure have improved the velocity performance of the facility, resulting in an increase in average impact velocity of at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses represents multiple regimes describing the non-monotonic scaling of target perforation with decreasing target thickness. The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system is located behind the target during every experiment to provide complementary information regarding the trajectory distribution and penetration depth of individual debris particles. The utilization of a coherent, collimated illumination source in the LSL system facilitates the simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems.
Comparison of LSL images to concurrent IR results indicates two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed in experiments to expand at velocities much higher than the debris and ejecta phenomena observed using the LSL system. In double-plate target configurations, this phenomenon is observed to interact with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomenon. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be used with a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.
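The dimensional analysis invoked above lends itself to a short numerical sketch. The snippet below uses the classic point-blast scaling R(t) = C (E t²/ρ)^(1/5) as a hedged stand-in for the Whitham-type analysis in the thesis; the constant C and all input values are hypothetical.

```python
def blast_radius(E, rho, t, C=1.0):
    """Point-blast radial expansion from dimensional analysis:
    R(t) = C * (E * t**2 / rho) ** (1/5), with C an O(1) constant."""
    return C * (E * t ** 2 / rho) ** 0.2

# The scaling, not the absolute numbers, is the point: doubling the
# elapsed time grows the radius by a factor of 2**0.4 (about 1.32).
r1 = blast_radius(1.0, 1.0, 1.0)
r2 = blast_radius(1.0, 1.0, 2.0)
```

Fitting measured cloud radii against such a power law is one way dimensional analysis can be checked against the IR image sequences.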
Abstract:
This work concerns itself with the possibility of solutions, both cooperative and market based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.
Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.
However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems but in part exacerbated them. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transaction costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.
It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
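The portfolio (package) bidding idea behind a combined-value call market can be illustrated with a toy winner-determination step. Everything below — the permit types, supplies, and bid values — is invented for illustration and is not the actual ACE mechanism; real combined-value markets use far more scalable optimization than this brute-force search.

```python
from itertools import combinations

# Hypothetical supply of two permit types (e.g., two permit cycles).
supply = {"cycle1": 2, "cycle2": 1}

# Each bid is an all-or-nothing package: (quantities wanted, bid price).
bids = [
    ({"cycle1": 1, "cycle2": 1}, 10.0),  # portfolio bid across both cycles
    ({"cycle1": 1}, 6.0),
    ({"cycle1": 2}, 9.0),
    ({"cycle2": 1}, 3.0),
]

def best_allocation(bids, supply):
    """Brute-force winner determination: pick the feasible set of package
    bids with the highest total value."""
    best, best_val = (), 0.0
    for r in range(1, len(bids) + 1):
        for combo in combinations(range(len(bids)), r):
            demand = {}
            for i in combo:
                for k, q in bids[i][0].items():
                    demand[k] = demand.get(k, 0) + q
            if all(demand.get(k, 0) <= supply[k] for k in supply):
                value = sum(bids[i][1] for i in combo)
                if value > best_val:
                    best, best_val = combo, value
    return best, best_val

winners, total = best_allocation(bids, supply)
```

Because the portfolio bid wins or loses as a unit, a bidder rebalancing holdings across permit cycles never ends up with half a portfolio — the liquidity problem the text describes.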
With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits falls to the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.
Abstract:
The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.
The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers' smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques to design, build and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.
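The two-tier architecture described above — cheap anomaly tests on each device, aggregation on a server — can be sketched in a few lines. This is a hedged illustration, not CSN's actual algorithms: the z-score test, the 3-sigma threshold, and the 10% fraction are all invented here.

```python
import statistics

def client_anomaly(readings, threshold=3.0):
    """Client-side: flag the latest reading if its z-score against this
    device's own recent history exceeds a threshold (each phone has its
    own noise level, so the statistics are kept per-device)."""
    history = readings[:-1]
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history) or 1e-9  # guard against zero spread
    return abs(readings[-1] - mu) / sigma > threshold

def server_event(reporting_clients, n_clients, fraction=0.1):
    """Server-side: declare an event when too many clients report an
    anomaly in the same time window for it to be chance."""
    return len(reporting_clients) / n_clients > fraction
```

The division of labor is the point: clients send a few bits per window instead of raw accelerometer streams, and the server's decision improves with network size even though each individual sensor is noisy.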
Abstract:
The laminar to turbulent transition process in boundary layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements. Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2, by mass, transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, which is the expected frequency of the Mack second-mode instability at these conditions, and also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. NTr values for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities. Time- and spatially-resolved heat transfer traces are used to track the propagation of turbulent spots, and convection rates at 90%, 76%, and 63% of the boundary layer edge velocity, respectively, are observed for the leading edge, centroid, and trailing edge of the spots. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset to completion distance.
Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates, and compared with both no injection and chemically inert argon injection cases. While CO2 injection seems to delay transition, and argon injection seems to promote it, the experimental results are inconclusive and matching computations do not predict a reduction in N factor from any CO2 injection condition computed.
Abstract:
Understanding friction and adhesion in static and sliding contact of surfaces is important in numerous physical phenomena and technological applications. Most surfaces are rough at the microscale, and thus the real area of contact is only a fraction of the nominal area. The macroscopic frictional and adhesive response is determined by the collective behavior of the population of evolving and interacting microscopic contacts. This collective behavior can be very different from the behavior of individual contacts. It is thus important to understand how the macroscopic response emerges from the microscopic one. In this thesis, we develop a theoretical and computational framework to study the collective behavior. Our philosophy is to assume a simple behavior of a single asperity and study the collective response of an ensemble. Our work bridges the existing well-developed studies of single asperities with phenomenological laws that describe macroscopic rate-and-state behavior of frictional interfaces. We find that many aspects of the macroscopic behavior are robust with respect to the microscopic response. This explains why qualitatively similar frictional features are seen for a diverse range of materials. We first show that the collective response of an ensemble of one-dimensional independent viscoelastic elements interacting through a mean field reproduces many qualitative features of static and sliding friction evolution. The resulting macroscopic behavior is different from the microscopic one: for example, even if each contact is velocity-strengthening, the macroscopic behavior can be velocity-weakening. The framework is then extended to incorporate three-dimensional rough surfaces, long-range elastic interactions between contacts, and time-dependent material behaviors such as viscoelasticity and viscoplasticity.
Interestingly, the mean field behavior dominates and the elastic interactions, though important from a quantitative perspective, do not change the qualitative macroscopic response. Finally, we examine the effect of adhesion on the frictional response as well as develop a force threshold model for adhesion and mode I interfacial cracks.
Abstract:
In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution. This optimal training distribution depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. Benefits of using this distribution are exemplified in both synthetic and real data sets.
In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the use of weights regarding its effect on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm that determines if, for a given set of weights, the out-of-sample performance will improve or not in a practical setting. This is necessary as the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance in a large real dataset such as the Netflix dataset. Their computational complexity is the main reason for their advantage over previous algorithms proposed in the covariate shift literature.
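The weighting step described above — making a fixed training sample mimic draws from another distribution — can be sketched with importance weights w(x) = p_target(x)/p_train(x), the standard covariate-shift device. The Gaussians and sample size below are hypothetical; the thesis's dual distribution is simply one possible choice of target.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Fixed training sample from N(0, 1); target (e.g., test or dual) is N(1, 1).
x_train = rng.normal(0.0, 1.0, 2000)

# Importance weights w(x) = p_target(x) / p_train(x).
w = gaussian_pdf(x_train, 1.0, 1.0) / gaussian_pdf(x_train, 0.0, 1.0)

# Weighted statistics of the training sample now estimate target-distribution
# quantities: the weighted mean should land near the target mean of 1.0.
weighted_mean = np.average(x_train, weights=w)
```

The cost of this trick, which the text's decomposition addresses, is variance: where the two densities differ sharply, a few points carry enormous weight and the effective sample size collapses.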
In the second part of the thesis we apply Machine Learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows analyzing behavior in videos of animals with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example, according to their genetic line.
Abstract:
As borne out by everyday social experience, social cognition is highly dependent on context, modulated by a host of factors that arise from the social environment in which we live. While streamlined laboratory research provides excellent experimental control, it can be limited to telling us about the capabilities of the brain under artificial conditions, rather than elucidating the processes that come into play in the real world. Consideration of the impact of ecologically valid contextual cues on social cognition will improve the generalizability of social neuroscience findings to pathology, e.g., psychiatric illnesses. To help bridge between laboratory research and social cognition as we experience it in the real world, this thesis investigates three themes: (1) increasing the naturalness of stimuli with richer contextual cues, (2) the potentially special contextual case of social cognition when two people interact directly, and (3) experimental believability, a theme that runs in parallel to the first two. Focusing on the first two themes, in work with two patient populations, we explore neural contributions to two topics in social cognition. First, we document a basic approach bias in rare patients with bilateral lesions of the amygdala. This finding is then related to the contextual factor of ambiguity, and further investigated together with other contextual cues in a sample of healthy individuals tested over the internet, finally yielding a hierarchical decision tree for social threat evaluation. Second, we demonstrate that neural processing of eye gaze in brain structures related to face, gaze, and social processing is differently modulated by the direct presence of another live person. This question is investigated using fMRI in people with autism and controls.
Across a range of topics, we demonstrate that two themes of ecological validity — integration of naturalistic contextual cues, and social interaction — influence social cognition, that particular brain structures mediate this processing, and that it will be crucial to study interaction in order to understand disorders of social interaction such as autism.
Abstract:
The microscopic properties of a two-dimensional model dense fluid of Lennard-Jones disks have been studied using the so-called "molecular dynamics" method. Analyses of the computer-generated simulation data in terms of "conventional" thermodynamic and distribution functions verify the physical validity of the model and the simulation technique.
The radial distribution functions g(r) computed from the simulation data exhibit several subsidiary features rather similar to those appearing in some of the g(r) functions obtained by X-ray and thermal neutron diffraction measurements on real simple liquids. In the case of the model fluid, these "anomalous" features are thought to reflect the existence of two or more alternative configurations for local ordering.
Graphical display techniques have been used extensively to provide some intuitive insight into the various microscopic phenomena occurring in the model. For example, "snapshots" of the instantaneous system configurations for different times show that the "excess" area allotted to the fluid is collected into relatively large, irregular, and surprisingly persistent "holes". Plots of the particle trajectories over intervals of 2.0 to 6.0 × 10⁻¹² sec indicate that the mechanism for diffusion in the dense model fluid is "cooperative" in nature, and that extensive diffusive migration is generally restricted to groups of particles in the vicinity of a hole.
A quantitative analysis of diffusion in the model fluid shows that the cooperative mechanism is not inconsistent with the statistical predictions of existing theories of singlet, or self-diffusion in liquids. The relative diffusion of proximate particles is, however, found to be retarded by short-range dynamic correlations associated with the cooperative mechanism--a result of some importance from the standpoint of bimolecular reaction kinetics in solution.
A new, semi-empirical treatment for relative diffusion in liquids is developed, and is shown to reproduce the relative diffusion phenomena observed in the model fluid quite accurately. When incorporated into the standard Smoluchowski theory of diffusion-controlled reaction kinetics, the more exact treatment of relative diffusion is found to lower the predicted rate of reaction appreciably.
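The kinetic consequence noted above can be made concrete with the standard Smoluchowski rate constant for diffusion-controlled encounters, k = 4πD_rel·R. The numerical values and the 20% retardation factor below are hypothetical stand-ins for the thesis's correlated-diffusion correction.

```python
import math

def smoluchowski_rate(D_rel, R):
    """Diffusion-controlled bimolecular rate constant k = 4*pi*D_rel*R
    (consistent units; per-pair form without concentration factors)."""
    return 4.0 * math.pi * D_rel * R

D1 = D2 = 1.0e-5   # hypothetical self-diffusion coefficients, cm^2/s
R = 5.0e-8         # hypothetical encounter radius, cm

# Independent-particle assumption: relative diffusion D_rel = D1 + D2.
k_ideal = smoluchowski_rate(D1 + D2, R)

# Short-range dynamic correlations retard relative diffusion, lowering k.
k_corrected = smoluchowski_rate(0.8 * (D1 + D2), R)  # assumed 20% retardation
```

Any retardation of relative diffusion enters the predicted rate linearly, which is why correlations between proximate particles matter for bimolecular kinetics even when self-diffusion is described well.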
Finally, an entirely new approach to an understanding of the liquid state is suggested. Our experience in dealing with the simulation data--and especially, graphical displays of the simulation data--has led us to conclude that many of the more frustrating scientific problems involving the liquid state would be simplified considerably, were it possible to describe the microscopic structures characteristic of liquids in a concise and precise manner. To this end, we propose that the development of a formal language of partially-ordered structures be investigated.
Abstract:
I. Crossing transformations constitute a group of permutations under which the scattering amplitude is invariant. Using Mandelstam's analyticity, we decompose the amplitude into irreducible representations of this group. The usual quantum numbers, such as isospin or SU(3), are "crossing-invariant". Thus no higher symmetry is generated by crossing itself. However, elimination of certain quantum numbers in intermediate states is not crossing-invariant, and higher symmetries have to be introduced to make it possible. The current literature on exchange degeneracy is a manifestation of this statement. To exemplify application of our analysis, we show how, starting with SU(3) invariance, one can use crossing and the absence of exotic channels to derive the quark-model picture of the tensor nonet. No detailed dynamical input is used.
II. A dispersion relation calculation of the real parts of forward π±p and K±p scattering amplitudes is carried out under the assumption of constant total cross sections in the Serpukhov energy range. Comparison with existing experimental results as well as predictions for future high energy experiments are presented and discussed. Electromagnetic effects are found to be too small to account for the expected difference between the π-p and π+p total cross sections at higher energies.
Abstract:
A Riesz space with a Hausdorff, locally convex topology determined by Riesz seminorms is called a locally convex Riesz space. A sequence {xn} in a locally convex Riesz space L is said to converge locally to x ∈ L if for some topologically bounded set B and every real r > 0 there exists N(r) such that n ≥ N(r) implies x − xn ∈ rB. Local Cauchy sequences are defined analogously, and L is said to be locally complete if every local Cauchy sequence converges locally. Then L is locally complete if and only if every monotone local Cauchy sequence has a least upper bound. This is a somewhat more general form of the completeness criterion for Riesz-normed Riesz spaces given by Luxemburg and Zaanen. Locally complete, bound, locally convex Riesz spaces are barrelled. If the space is metrizable, local completeness and topological completeness are equivalent.
Two measures of the non-archimedean character of a non-archimedean Riesz space L are the smallest ideal A0(L) such that the quotient space is Archimedean and the ideal I(L) = {x ∈ L : for some 0 ≤ v ∈ L, n|x| ≤ v for n = 1, 2, …}. In general A0(L) ⊇ I(L). If L is itself a quotient space, a necessary and sufficient condition that A0(L) = I(L) is given. There is an example where A0(L) ≠ I(L).
A necessary and sufficient condition that a Riesz space L have every quotient space Archimedean is that for every 0 ≤ u, v ∈ L there exist u1 = sup(inf(nv, u) : n = 1, 2, …) and v1 = sup(inf(nu, v) : n = 1, 2, …), and real numbers m1 and m2 such that m1u1 ≥ v1 and m2v1 ≥ u1. If, in addition, L is Dedekind σ-complete, then L may be represented as the space of all functions which vanish off finite subsets of some non-empty set.
Abstract:
An experimental method combined with boundary layer theory is given for evaluating the added mass of a sphere moving along the axis of a circular cylinder filled with water or oil. The real fluid effects are separated from ideal fluid effects.
The experimental method consists essentially of a magnetic steel sphere propelled from rest by an electromagnetic coil in which the current is accurately controlled so that it only supplies force for a short time interval which is within the laminar flow regime of the fluid. The motion of the sphere as a function of time is recorded on single frame photographs using a short-arc multiple flash lamp with accurately controlled time intervals between flashes.
A concept of the effect of boundary layer displacement on the fluid flow around a sphere is introduced to evaluate the real fluid effects on the added mass. Surprisingly accurate agreement between experiment and theory is achieved.
Abstract:
The experimental consequences of Regge cuts in the angular momentum plane are investigated. The principal tool in the study is the set of diagrams originally proposed by Amati, Fubini, and Stanghellini. Mandelstam has shown that the AFS cuts are actually cancelled on the physical sheet, but they may provide a useful guide to the properties of the real cuts. Inclusion of cuts modifies the simple Regge pole predictions for high-energy scattering data. As an example, an attempt is made to fit high-energy elastic scattering data for pp, p̄p, π±p, and K±p by replacing the Igi pole by terms representing the effect of a Regge cut. The data seem to be compatible with either a cut or the Igi pole.
Abstract:
Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.
This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
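The contact-space formulation can be made concrete with a small sketch: parametrize the polygon boundary by arc length, so a two-finger placement is a point (s1, s2) in a square contact space, and evaluate the inter-finger distance function there. This is an illustrative fragment only — the thesis's algorithm additionally builds the rectangular decomposition and caging graph on top of this function.

```python
import math

def polygon_point(vertices, s):
    """Point at arc-length fraction s in [0, 1) along the polygon boundary."""
    n = len(vertices)
    segments, perimeter = [], 0.0
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        d = math.dist(a, b)
        segments.append((a, b, d))
        perimeter += d
    t = (s % 1.0) * perimeter
    for a, b, d in segments:
        if t <= d:
            f = t / d
            return (a[0] + f * (b[0] - a[0]), a[1] + f * (b[1] - a[1]))
        t -= d
    return vertices[0]

def interfinger_distance(vertices, s1, s2):
    """Inter-finger distance function on the 2-D contact space (s1, s2);
    its critical points are where two-finger cages appear and vanish."""
    return math.dist(polygon_point(vertices, s1), polygon_point(vertices, s2))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
d_diag = interfinger_distance(square, 0.0, 0.5)  # fingers at opposite corners
```

Working in (s1, s2) rather than the hand's full configuration space is what makes the critical points amenable to the graphical characterization on the object boundary described above.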
A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.
This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.
Abstract:
If E and F are real Banach spaces let Cp,q(E, F), 0 ≤ q ≤ p ≤ ∞, denote those maps from E to F which have p continuous Fréchet derivatives of which the first q derivatives are bounded. A Banach space E is defined to be Cp,q smooth if Cp,q(E, R) contains a nonzero function with bounded support. This generalizes the standard Cp smoothness classification.
If an Lp space, p ≥ 1, is Cq smooth then it is also Cq,q smooth, so that in particular Lp for p an even integer is C∞,∞ smooth and Lp for p an odd integer is Cp-1,p-1 smooth. In general, however, a Cp smooth B-space need not be Cp,p smooth. c0 is shown to be a non-C2,2 smooth B-space although it is known to be C∞ smooth. It is proved that if E is Cp,1 smooth then c0(E) is Cp,1 smooth, and if E has an equivalent Cp norm then c0(E) has an equivalent Cp norm.
Various consequences of Cp,q smoothness are studied. If f ∈ Cp,q(E, F), if F is Cp,q smooth and if E is non-Cp,q smooth, then the image under f of the boundary of any bounded open subset U of E is dense in the image of U. If E is separable then E is Cp,q smooth if and only if E admits Cp,q partitions of unity; E is Cp,p smooth, p < ∞, if and only if every closed subset of E is the zero set of some Cp function.
f ∈ Cq(E, F), 0 ≤ q ≤ p ≤ ∞, is said to be Cp,q approximable on a subset U of E if for any ε > 0 there exists a g ∈ Cp(E, F) satisfying
sup { ‖Dk f(x) − Dk g(x)‖ : x ∈ U, 0 ≤ k ≤ q } ≤ ε.
It is shown that if E is separable and Cp,q smooth and if f ∈ Cq(E, F) is Cp,q approximable on some neighborhood of every point of E, then f is Cp,q approximable on all of E.
In general it is unknown whether an arbitrary function in C1(l2, R) is C2,1 approximable, and an example of a function in C1(l2, R) which may not be C2,1 approximable is given. A weak form of C∞,q approximability, q ≥ 1, for functions in Cq(l2, R) is proved: Let {Uα} be a locally finite cover of l2 and let {Tα} be a corresponding collection of Hilbert-Schmidt operators on l2. Then for any f ∈ Cq(l2, F) there exists a g ∈ C∞(l2, F) such that for all α
sup { ‖Dk(f(x) − g(x))[Tαh]‖ : x ∈ Uα, ‖h‖ ≤ 1, 0 ≤ k ≤ q } ≤ 1.