13 results for Modular functions in CaltechTHESIS
Abstract:
The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, namely those that can be expressed as a sum of concave functions composed with modular functions. The basic algorithm applies an accelerated first-order method to a smoothed version of the convex extension. The smoothing step is particularly novel in that it handles general concave potentials without the piecewise-linear approximation that graph-based techniques require.
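As an illustration of this convex route (a minimal numpy sketch with an invented objective, using plain projected subgradient descent on the Lovász extension rather than the smoothed, accelerated method developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.uniform(0.2, 1.0, size=(3, n))   # nonnegative modular weight vectors
a = 1.2 * np.ones(n)                     # modular penalty; makes the minimum nontrivial

def F(ind):
    """Submodular: sum of concave (sqrt) of modular terms, minus a modular term."""
    return float(np.sum(np.sqrt(W @ ind)) - a @ ind)

def lovasz_subgrad(x):
    """A subgradient of the Lovász extension at x: marginal gains of F
    taken along the permutation that sorts x in decreasing order."""
    order = np.argsort(-x)
    g = np.zeros_like(x)
    ind = np.zeros(len(x))
    prev = F(ind)
    for i in order:
        ind[i] = 1.0
        cur = F(ind)
        g[i] = cur - prev
        prev = cur
    return g

x = np.full(n, 0.5)
for t in range(1, 1001):
    x = np.clip(x - (0.5 / np.sqrt(t)) * lovasz_subgrad(x), 0.0, 1.0)

S = (x > 0.5).astype(float)              # heuristic rounding of the fractional point
print("candidate minimizer:", np.where(S > 0)[0].tolist(), " F(S) =", F(S))
```

Thresholding a level set of the fractional minimizer recovers a set; the contribution described above is precisely to smooth this non-differentiable extension so that accelerated first-order methods apply.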
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes under which uniform reconstruction is possible.
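A small sketch of the observation driving this connection (toy function and sizes assumed): a set function on n elements expands over the characters χ_T(S) = (−1)^|S∩T|, and a function built from a few interactions is sparse in this basis.

```python
import numpy as np

n = 4                      # ground-set size; subsets are the bitmasks 0..2**n-1
N = 1 << n

# Toy set function (assumed for illustration): a weight on element 0 plus an
# interaction between elements 1 and 2.
def f(s):
    return 2.0 * ((s >> 0) & 1) + 1.5 * (((s >> 1) & 1) ^ ((s >> 2) & 1))

vals = np.array([f(s) for s in range(N)], dtype=float)

def fwht(v):
    """Fast Walsh-Hadamard transform: the Fourier coefficients of a set
    function over the characters chi_T(S) = (-1)**|S intersect T|."""
    v = v.copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v / len(v)

coeffs = fwht(vals)
support = np.nonzero(np.abs(coeffs) > 1e-12)[0]
print({format(T, "04b"): round(float(coeffs[T]), 3) for T in support})
# Only 3 of 16 coefficients are nonzero -- the sparsity that recovery
# from a few random evaluations exploits.
```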
Abstract:
Data were taken in 1979-80 by the CCFRR high-energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged-current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about σ_L/σ_T: R = 0, R = 0.1, and R given by a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.
We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences, which are calculable from cross-sections.
The result for the GLS sum rule is found to be 2.83 ± 0.15 ± 0.09 ± 0.10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been done. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and fits yield
Λ_LO = 88 (+163/−78) (+113/−70) MeV.
The systematic errors are smaller than the statistical errors. Second-order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.
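For orientation, the standard expressions behind these fits (textbook forms, stated here rather than quoted from the thesis) are the GLS integral and the leading-order running coupling whose scale parameter is the fitted Λ_LO:

```latex
% Gross--Llewellyn Smith sum rule (the quantity measured as 2.83 above):
\int_0^1 F_3(x, Q^2)\, dx \;=\; 3 \left[ 1 - \frac{\alpha_s(Q^2)}{\pi} + O(\alpha_s^2) \right]
% Leading-order running coupling, which defines \Lambda_{LO}:
\alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\, \ln\!\left(Q^2 / \Lambda_{LO}^2\right)}
```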
A fit using the better-determined F_2 in place of xF_3 for x > 0.4, i.e., assuming q = 0 in that region, gives
Λ_LO = 266 (+114/−104) (+85/−79) MeV.
Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made and the measurements are described. Utilizing the inequality q(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
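A toy software model of the eye-sweep decision (the interface and error counts are hypothetical; the hardware operates on live samples):

```python
def best_phase(disagree, threshold=0):
    """Pick the next data-recovery phase from one sweep of the monitor clock.

    disagree[p] counts sample mismatches between the sweeping clock and the
    data clock at delay-line phase p; the line spans 2 UI, so the profile is
    treated as circular.  Returns the center of the widest run of clean
    phases (the open eye), or None if every phase shows errors."""
    n = len(disagree)
    ok = [d <= threshold for d in disagree] * 2   # unroll the circular profile
    best_len, best_center, run = 0, None, 0
    for i, good in enumerate(ok):
        run = min(run + 1, n) if good else 0      # a run never exceeds one lap
        if run > best_len:
            best_len, best_center = run, (i - run // 2) % n
    return best_center

# One ping-pong step: phases 2..5 sample cleanly, so the data clock moves to
# the eye center; the two clocks then swap roles for the next sweep.
print(best_phase([9, 3, 0, 0, 0, 0, 4, 7]))      # -> 3
```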
On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
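A deliberately crude numeric model of this sensing chain, with every constant invented for illustration:

```python
def alignment_word(overlap_frac,
                   c_max=10e-15,    # F: plate coupling at perfect overlap (assumed)
                   v_stim=1.0,      # V: stimulus amplitude (assumed)
                   g_rect=2e9,      # A/(V*F): rectifier-to-bias gain (assumed)
                   c_load=50e-15,   # F: delay-cell load (assumed)
                   v_swing=0.5,     # V: delay-cell switching swing (assumed)
                   t_lsb=20e-12):   # s: TDC resolution (assumed)
    """Map plate overlap (0..1) to a TDC output word.  Less overlap means less
    coupling, less rectified bias current, a longer delay, and a larger code,
    so the word doubles as an alignment metric."""
    i_bias = g_rect * v_stim * (overlap_frac * c_max)   # rectified stimulus -> bias
    delay = c_load * v_swing / max(i_bias, 1e-12)       # t = C * V / I
    return int(delay / t_lsb)

for frac in (1.0, 0.5, 0.1):
    print(f"overlap {frac:.0%}: code {alignment_word(frac)}")
```

The monotone mapping from overlap to code is what lets the same plates serve both communication and alignment sensing.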
Abstract:
Iterative in situ click chemistry (IISCC) is a robust general technology for the development of high-throughput, inexpensive protein detection agents. In IISCC, the target protein acts as a template and catalyst, and assembles its own ligand from modular blocks of peptides. This process of ligand discovery is iterated to add peptide arms, developing a multivalent ligand with increased affinity and selectivity. The peptide-based protein capture agents (PCCs) should ideally have the same degree of selectivity and specificity as a monoclonal antibody, along with improved chemical stability. We had previously reported developing a PCC agent against bovine carbonic anhydrase II (bCAII) that could replace a polyclonal antibody. To further enhance the affinity or specificity of the PCC agent, I explore branching the peptide arms to develop branched PCC agents against bCAII. The developed branched capture agents have two- to threefold higher affinities for the target protein. In the second part of my thesis, I describe the epitope targeting strategy, a strategy for directing the development of a peptide ligand against a specific region or fragment of a protein. The strategy is successfully demonstrated by developing PCC agents with low-nanomolar binding affinities that target the C-terminal hydrophobic motif of Akt2 kinase. One of the developed triligands inhibits the kinase activity of Akt. This suggests that, if targeted against the right epitope, PCC agents can also influence the functional properties of the protein. The exquisite control afforded by the epitope targeting strategy is further demonstrated by developing a cyclic ligand against Akt2. The cyclic ligand acts as an inhibitor by itself, without any iteration of the ligand discovery process. The epitope targeting strategy is a cornerstone of the IISCC technology and opens up new opportunities, leading to the development of protein detection agents and of modulators of protein functions.
Abstract:
The aim of this paper is to investigate to what extent the known theory of subdifferentiability and generic differentiability of convex functions defined on open sets can be carried out in the context of convex functions defined on not necessarily open sets. Among the main results obtained I would like to mention a Kenderov-type theorem (the subdifferential at a generic point is contained in a sphere), a generic Gâteaux differentiability result in Banach spaces of class S, and a generic Fréchet differentiability result in Asplund spaces. At least two methods can be used to prove these results: first, a direct one, and second, a more general one, based on the theory of monotone operators. Since this last theory was previously developed essentially for monotone operators defined on open sets, it was necessary to extend it to the context of monotone operators defined on a larger class of sets, our "quasi open" sets. This is done in Chapter III. As a matter of fact, most of these results have an even more general nature and have roots in the theory of minimal usco maps, as shown in Chapter II.
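For orientation, the standard objects involved, stated for a convex f on a subset C of a Banach space X (standard definitions, not the thesis's formulations):

```latex
% Subdifferential of a convex f at x, relative to its domain C in X:
\partial f(x) \;=\; \bigl\{\, x^* \in X^* : f(y) \ge f(x) + \langle x^*, y - x \rangle
\ \text{for all } y \in C \,\bigr\}
% Gateaux differentiability at x: \partial f(x) = \{x^*\} and, for every h,
%   \lim_{t \to 0^+} \tfrac{1}{t}\bigl[f(x + t h) - f(x)\bigr] = \langle x^*, h \rangle;
% Frechet differentiability: the same limit holds uniformly over \|h\| \le 1.
```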
Abstract:
The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems, and nonlinear dynamic transfer functions are derived that describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response, which constitutes the visually derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.
The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation, based on cross-correlation techniques, is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariance of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system, and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made, taking into account various sources of error such as noise at the input and output, the bandwidth of the white-noise input, and the truncation of the Gaussian by the apparatus.
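A minimal discrete-time sketch of the cross-correlation formulation (toy system and constants assumed; the actual experiments also estimate second-order kernels and contend with band-limited noise):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 200_000, 64                 # record length and kernel memory, in samples

x = rng.normal(0.0, 1.0, N)        # unit-variance white-noise input

# Toy system standing in for a neuronal pathway: linear filter followed by a
# mild static nonlinearity (both invented for the demonstration).
h_true = np.exp(-np.arange(M) / 8.0) * np.sin(np.arange(M) / 4.0)
lin = np.convolve(x, h_true)[:N]
y = lin + 0.3 * lin**2

# Cross-correlation estimate of the first-order kernel:
#   h1[k] = E[ y[n] * x[n-k] ] / Var(x)
h1 = np.array([np.mean(y[M:] * x[M - k : N - k]) for k in range(M)])
print("max kernel error:", np.max(np.abs(h1 - h_true)))
```

For a Gaussian input the quadratic term contributes nothing to this first-order cross-correlation (odd moments vanish), which is why the estimate recovers the linear kernel.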
Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.
Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.
An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits experimental data very well. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
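A compact sketch of the automata-based recipe on an invented toy instance: take the product of a weighted transition system with an automaton for a simple co-safe formula and search it with Dijkstra's algorithm, so the synthesized run is cost-optimal rather than merely feasible, which is the sense in which optimality costs little extra.

```python
import heapq

# Toy weighted transition system (invented): nodes, edge costs, and labels.
trans = {"s0": [("s1", 1.0), ("s2", 4.0)],
         "s1": [("goal", 5.0)],
         "s2": [("goal", 1.0)],
         "goal": []}
label = {"s0": set(), "s1": set(), "s2": {"hazard"}, "goal": {"goal"}}

def dfa_step(q, props):
    """DFA for the co-safe formula (!hazard) U goal; None is the rejecting sink."""
    if q == "acc" or "goal" in props:
        return "acc"
    return None if "hazard" in props else "q0"

def optimal_run(start):
    """Dijkstra over the product automaton: the cheapest accepted run."""
    q = dfa_step("q0", label[start])
    if q is None:
        return None
    heap, seen = [(0.0, start, q, [start])], set()
    while heap:
        cost, s, q, path = heapq.heappop(heap)
        if (s, q) in seen:
            continue
        seen.add((s, q))
        if q == "acc":
            return cost, path
        for s2, w in trans[s]:
            q2 = dfa_step(q, label[s2])
            if q2 is not None:
                heapq.heappush(heap, (cost + w, s2, q2, path + [s2]))
    return None

# The cheap path through s2 violates the formula, so the optimal accepted run
# detours through s1 at total cost 6.
print(optimal_run("s0"))
```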
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach is that the automaton might be doubly exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve its quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid the potentially computationally expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
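The flavor of the second method, shown for the single atom "eventually x ∈ P" with P = {x : Hx ≤ h} over a finite horizon N (a standard big-M encoding, not necessarily the thesis's exact formulation):

```latex
% Binary z_t marks a step at which the atom is satisfied:
z_t \in \{0,1\}, \qquad \sum_{t=0}^{N} z_t \ \ge\ 1
% Big-M constraints: z_t = 1 forces H x_t <= h; a large M deactivates the rest:
H x_t \ \le\ h + M (1 - z_t) \mathbf{1}, \qquad t = 0, \dots, N
```

Conjunctions, disjunctions, and temporal nesting compose by introducing further binaries, which is how a full finite-horizon formula becomes a set of mixed-integer linear constraints on the trajectory.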
Abstract:
Several patients of P. J. Vogel who had undergone cerebral commissurotomy for the control of intractable epilepsy were tested on a variety of tasks to measure aspects of cerebral organization concerned with lateralization in hemispheric function. From tests involving identification of shapes it was inferred that in the absence of the neocortical commissures, the left hemisphere still has access to certain types of information from the ipsilateral field. The major hemisphere can still make crude differentiations between various left-field stimuli, but is unable to specify exact stimulus properties. Most of the time the major hemisphere, having access to some ipsilateral stimuli, dominated the minor hemisphere in control of the body.
Competition for control of the body between the hemispheres is seen most clearly in tests of minor hemisphere language competency, in which it was determined that though the minor hemisphere does possess some minimal ability to express language, the major hemisphere prevented its expression much of the time. The right hemisphere was superior to the left in tests of perceptual visualization, and the two hemispheres appeared to use different strategies in attempting to solve the problems, namely, analysis for the left hemisphere and synthesis for the right hemisphere.
Analysis of the patients' verbal and performance I.Q.'s, as well as observations made throughout testing, suggests that the corpus callosum plays a critical role in activities that involve functions in which the minor hemisphere normally excels, and that the motor expression of these functions may normally come through the major hemisphere by way of the corpus callosum.
Lateral specialization is thought to be an evolutionary adaptation that overcame problems of a functional antagonism between the abilities normally associated with the two hemispheres. The tests of perception suggested that this function lateralized into the mute hemisphere because of an active counteraction by language. This latter idea was confirmed by the finding that left-handers, in whom there are likely to be bilateral language centers, are greatly deficient on tests of perception.
Abstract:
Biomolecular circuit engineering is critical for implementing complex functions in vivo, and is a baseline method in the synthetic biology space. However, current methods for conducting biomolecular circuit engineering are time-consuming and tedious. A complete design-build-test cycle typically takes weeks to months due to the lack of an intermediary between design ex vivo and testing in vivo. In this work, we explore the development and application of a "biomolecular breadboard" composed of an in vitro transcription-translation (TX-TL) lysate to speed up the engineering design-build-test cycle. We first developed protocols for creating and using lysates for conducting biological circuit design. By doing so we simplified the existing technology to an affordable ($0.03/µL) and easy-to-use three-tube reagent system. We then developed tools to accelerate circuit design by allowing linear DNA to be used in lieu of plasmid DNA, and by utilizing principles of modular assembly. This allowed the design-build-test cycle to be reduced to under a business day. We then characterized protein degradation dynamics in the breadboard to aid in implementing complex circuits. Finally, we demonstrated that the breadboard could be applied to engineer complex synthetic circuits in vitro and in vivo. Specifically, we utilized our understanding of linear DNA prototyping, modular assembly, and protein degradation dynamics to characterize the repressilator oscillator and to prototype novel three- and five-node negative-feedback oscillators both in vitro and in vivo. We therefore believe the biomolecular breadboard has wide application as an intermediary for biological circuit engineering.
Abstract:
A locally integrable function is said to be of vanishing mean oscillation (VMO) if its mean oscillation over cubes in R^d converges to zero with the volume of the cubes. We establish necessary and sufficient conditions for a locally integrable function defined on a bounded measurable set of positive measure to be the restriction to that set of a VMO function.
We consider the similar extension problem pertaining to BMO(ρ) functions; that is, those VMO functions whose mean oscillation over any cube Q is O(ρ(l(Q))), where l(Q) is the side length of Q and ρ is a positive, non-decreasing function with ρ(0+) = 0.
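In symbols, the notions used above are (standard definitions):

```latex
% Mean oscillation of f over a cube Q, with f_Q the average of f on Q:
\mathrm{MO}(f, Q) \;=\; \frac{1}{|Q|} \int_Q |f - f_Q| \, dx, \qquad
f_Q \;=\; \frac{1}{|Q|} \int_Q f \, dx
% f \in \mathrm{VMO}:       \mathrm{MO}(f, Q) \to 0 as |Q| \to 0
% f \in \mathrm{BMO}(\rho): \mathrm{MO}(f, Q) = O(\rho(l(Q))) over all cubes Q
```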
We apply these results to obtain sufficient conditions for a Blaschke sequence to be the zeros of an analytic BMO(ρ) function on the unit disc.
Abstract:
Let E be a compact subset of the n-dimensional unit cube, I^n, and let C be a collection of convex bodies, all of positive n-dimensional Lebesgue measure, such that C contains bodies with arbitrarily small measure. The dimension of E with respect to the covering class C is defined to be the number
d_C(E) = sup{β : H^β_C(E) > 0},
where H^β_C is the outer measure
inf{Σ m(C_i)^β : ∪C_i ⊃ E, C_i ∈ C}.
Only the one- and two-dimensional cases are studied. Moreover, the covering classes considered are those consisting of intervals and rectangles, parallel to the coordinate axes, and those closed under translations. A covering class is identified with a set of points in the left-open portion, I′^n, of I^n, whose closure intersects I^n − I′^n. For n = 2, the outer measure H^β_C is adopted in place of the usual:
inf{Σ (diam C_i)^β : ∪C_i ⊃ E, C_i ∈ C},
for the purpose of studying the influence of the shape of the covering sets on the dimension d_C(E).
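As a concrete one-dimensional instance (a toy computation, taking C to be the class of triadic intervals), a covering count for the middle-thirds Cantor set recovers the familiar value d_C(E) = log 2 / log 3:

```python
import math

def cantor_cover(depth):
    """Left endpoints of the 2**depth triadic intervals, each of length
    3**-depth, that cover the middle-thirds Cantor set."""
    ends = [0.0]
    for d in range(depth):
        step = 3.0 ** -(d + 1)
        ends = [e for x in ends for e in (x, x + 2.0 * step)]
    return ends

for depth in (4, 8, 12):
    count = len(cantor_cover(depth))                  # N intervals of size 3**-depth
    est = math.log(count) / (depth * math.log(3.0))   # log N / log(1/eps)
    print(depth, count, round(est, 4))                # -> log 2 / log 3 ≈ 0.6309
```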
If E is a closed set in I^1, let M(E) be the class of all non-decreasing functions μ(x) supported on E, with μ(x) = 0 for x ≤ 0 and μ(x) = 1 for x ≥ 1. Define, for each μ ∈ M(E),
d_C(μ) = lim inf_{c→0, c∈C} log Δμ(c) / log c,
where Δμ(c) = sup_x (μ(x+c) − μ(x)). It is shown that
d_C(E) = sup{d_C(μ) : μ ∈ M(E)}.
This notion of dimension is extended to a certain class ℱ of sub-additive functions, and the problem of studying the behavior of d_C(E) as a function of the covering class C is reduced to the study of d_C(f), where f ∈ ℱ. Specifically, the set of points in I^2,
(*) {(d_B(f), d_C(f)) : f ∈ ℱ},
is characterized by a comparison of the relative positions of the points of B and C. A region of the form (*) is always closed and doubly-starred with respect to the points (0, 0) and (1, 1). Conversely, given any closed region in I^2, doubly-starred with respect to (0, 0) and (1, 1), there are covering classes B and C such that (*) is exactly that region. All of the results are shown to apply to the dimension of closed sets E. Similar results can be obtained when a finite number of covering classes are considered.
In two dimensions, the notion of dimension is extended to the class M of functions f(x, y), non-decreasing in x and y, supported on I^2 with f(x, y) = 0 for x·y = 0 and f(1, 1) = 1, by the formula
d_C(f) = lim inf_{s·t→0, (s,t)∈C} log Δf(s, t) / log(s·t),
where
Δf(s, t) = sup_{x,y} (f(x+s, y+t) − f(x+s, y) − f(x, y+t) + f(x, y)).
A characterization of the equivalence d_{C_1}(f) = d_{C_2}(f) for all f ∈ M is given by a comparison of the gaps in the sets of products s·t and quotients s/t, (s, t) ∈ C_i (i = 1, 2).
Abstract:
This investigation is concerned with the notion of concentrated loads in classical elastostatics and related issues. Following a limit treatment of problems involving concentrated internal and surface loads, the orders of the ensuing displacement and stress singularities, as well as the stress resultants of the latter, are determined. These conclusions are taken as a basis for an alternative direct formulation of concentrated-load problems, the completeness of which is established through an appropriate uniqueness theorem. In addition, the present work supplies a reciprocal theorem and an integral representation theorem applicable to singular problems of the type under consideration. Finally, in the course of the analysis presented here, the theory of Green's functions in elastostatics is extended.
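For orientation, the prototype of the Green's functions extended here is Kelvin's solution for a concentrated force at the origin of an unbounded isotropic solid (a standard result, not quoted from the thesis):

```latex
% Displacement at x due to a point force P_k at the origin, with shear
% modulus mu and Poisson ratio nu:
u_i(\mathbf{x}) \;=\; \frac{P_k}{16 \pi \mu (1 - \nu)}
\left[ \frac{(3 - 4\nu)\, \delta_{ik}}{r} + \frac{x_i x_k}{r^3} \right],
\qquad r = |\mathbf{x}|
```

For this interior point load the displacements are O(1/r) and the stresses O(1/r^2) at the load point, the kind of singularity orders the limit treatment above quantifies.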
Abstract:
A variety (equational class) of lattices is said to be finitely based if there exists a finite set of identities defining the variety. Let M∞n denote the lattice variety generated by all modular lattices of width not exceeding n. M∞1 and M∞2 are both the class of all distributive lattices and consequently finitely based. B. Jónsson has shown that M∞3 is also finitely based. On the other hand, K. Baker has shown that M∞n is not finitely based for 5 ≤ n < ω. This thesis settles the finite basis problem for M∞4. M∞4 is shown to be finitely based by proving the stronger result that there exist ten varieties which properly contain M∞4 and such that any variety which properly contains M∞4 contains one of these ten varieties.
The methods developed also yield a characterization of subdirectly irreducible width-four modular lattices. From this characterization further results are derived. It is shown that the free M∞4 lattice with n generators is finite. A variety with exactly k covers is exhibited for all k ≥ 15. It is further shown that there are 2^ℵ0 subvarieties of M∞4.