997 results for Library Spaces


Relevance:

20.00%

Publisher:

Abstract:

We propose a highly efficient content-lossless compression scheme for Chinese document images. The scheme combines morphologic analysis with pattern matching to cluster patterns. To obtain error maps with as few errors as possible, morphologic analysis is applied to decompose and recompose the Chinese character patterns. In the pattern matching, the criteria are adapted to the characteristics of Chinese characters. Since small components can sometimes be inserted into the blank spaces of large components, the pattern library images remain small. Arithmetic coding is applied for the final compression. Our method achieves much better compression performance than most alternative methods, and assures content-lossless reconstruction. (c) 2006 Society of Photo-Optical Instrumentation Engineers.
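
The clustering step can be sketched minimally (hypothetical function names, not the authors' implementation): two binary character patterns are XORed to form an error map, and a pattern joins a library cluster only if the number of mismatched pixels stays below a threshold.

```python
def error_map(a, b):
    """XOR two equal-sized binary patterns; 1s mark mismatched pixels."""
    return [[pa ^ pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def error_count(a, b):
    return sum(sum(row) for row in error_map(a, b))

def cluster_patterns(patterns, threshold):
    """Greedy clustering sketch: each pattern joins the first library entry
    it matches within `threshold` mismatched pixels, else starts a new entry."""
    library = []          # representative patterns
    assignments = []      # index into `library` for each input pattern
    for p in patterns:
        for i, rep in enumerate(library):
            if error_count(p, rep) <= threshold:
                assignments.append(i)
                break
        else:
            library.append(p)
            assignments.append(len(library) - 1)
    return library, assignments
```

With a tight threshold only near-identical glyphs share a library entry, which is what keeps reconstruction content-lossless.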

Relevance:

20.00%

Publisher:

Abstract:

This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce---for the first time---general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.

The main contribution of this thesis is analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.

From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.
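
The Dirac equation at the heart of this construction is commonly written as follows in the spin-transformation literature (a sketch in standard notation, which may differ from the thesis's own conventions): a quaternion-valued function λ on the surface solves an equation with curvature potential ρ, and the solution deforms the immersion f through its differential:

```latex
(D - \rho)\,\lambda = 0, \qquad d\tilde f = \bar\lambda \, df \, \lambda .
```

Since the quaternionic norm is multiplicative, |df̃| = |λ|²|df|, so the induced metric is a pointwise rescaling of the original one, i.e., the deformation is conformal.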

Relevance:

20.00%

Publisher:

Abstract:

A standard question in the study of geometric quantization is whether symplectic reduction interacts nicely with the quantized theory, and in particular whether “quantization commutes with reduction.” Guillemin and Sternberg first proposed this question, and answered it in the affirmative for the case of a free action of a compact Lie group on a compact Kähler manifold. Subsequent work has focused mainly on extending their proof to non-free actions and non-Kähler manifolds. For realistic physical examples, however, it is desirable to have a proof which also applies to non-compact symplectic manifolds.

In this thesis we give a proof of the quantization-reduction problem for general symplectic manifolds. This is accomplished by working in a particular wavefunction representation, associated with a polarization that is in some sense compatible with reduction. While the polarized sections described by Guillemin and Sternberg are nonzero on a dense subset of the Kähler manifold, the ones considered here are distributional, having support only on regions of the phase space associated with certain quantized, or “admissible”, values of momentum.

We first propose a reduction procedure for the prequantum geometric structures that “covers” symplectic reduction, and demonstrate how both symplectic and prequantum reduction can be viewed as examples of foliation reduction. Consistency of prequantum reduction imposes the above-mentioned admissibility conditions on the quantized momenta, which can be seen as analogues of the Bohr-Wilson-Sommerfeld conditions for completely integrable systems.

We then describe our reduction-compatible polarization, and demonstrate a one-to-one correspondence between polarized sections on the unreduced and reduced spaces.

Finally, we describe a factorization of the reduced prequantum bundle, suggested by the structure of the underlying reduced symplectic manifold. This in turn induces a factorization of the space of polarized sections that agrees with its usual decomposition by irreducible representations, and so proves that quantization and reduction do indeed commute in this context.

A significant omission from the proof is the construction of an inner product on the space of polarized sections, and a discussion of its behavior under reduction. In the concluding chapter of the thesis, we suggest some ideas for future work in this direction.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to investigate to what extent the known theory of subdifferentiability and generic differentiability of convex functions defined on open sets can be carried out in the context of convex functions defined on not necessarily open sets. Among the main results obtained I would like to mention a Kenderov type theorem (the subdifferential at a generic point is contained in a sphere), a generic Gâteaux differentiability result in Banach spaces of class S and a generic Fréchet differentiability result in Asplund spaces. At least two methods can be used to prove these results: first, a direct one, and second, a more general one, based on the theory of monotone operators. Since this last theory was previously developed essentially for monotone operators defined on open sets, it was necessary to extend it to the context of monotone operators defined on a larger class of sets, our "quasi open" sets. This is done in Chapter III. As a matter of fact, most of these results have an even more general nature and have roots in the theory of minimal usco maps, as shown in Chapter II.

Relevance:

20.00%

Publisher:

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function PM which carries every element into the closest element of a given subspace M) is set forth and examined.

If dim M = dim H - 1, then PM is linear. If PN is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then PM is linear.

The projective bound Q, defined to be the supremum of the operator norm of PM for all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, PM is always linear, and a characterization of those norms is given.

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when PM is linear its adjoint PMH is the projection on (kernel PM) by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F+)+ = F for every F. This condition is also sufficient to prove that we have (F+)H = (FH)+, where the latter pseudo-inverse is taken using dual norms.
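
In the Euclidean case (Q = 1, c = 1) the distance statement reduces to the Eckart–Young fact that the distance from F to the nearest lower-rank transformation is the smallest singular value, which equals 1/∥F⁺∥. A minimal sketch, restricted to positive diagonal matrices so the singular values can be read off directly (illustrative only, not the thesis's general non-Euclidean argument):

```python
def diag_pinv_distance(diag):
    """For F = diag(d1, ..., dn) with all di > 0, under the Euclidean norm:
    the singular values are the di; the pseudo-inverse F+ = diag(1/di) has
    operator norm max(1/di) = 1/min(di); and the distance from F to the
    nearest lower-rank G (Eckart-Young) is min(di) = 1 / ||F+||."""
    sigma_min = min(diag)
    pinv_norm = max(1.0 / d for d in diag)   # operator norm of F+
    dist_to_lower_rank = sigma_min           # Eckart-Young theorem
    return pinv_norm, dist_to_lower_rank
```

The thesis's contribution is that this relationship persists, up to the factor c with 1 ≤ c < Q, for arbitrary norms.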

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance:

20.00%

Publisher:

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power needed to search high-dimensional spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
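
The hGA and vGA operators are specific to the thesis, but the underlying loop can be sketched generically (hypothetical parameters, not the authors' algorithms): a population of candidate designs is evolved by selection, crossover, and mutation so as to maximize the overall evaluation measure.

```python
import random

def genetic_maximize(fitness, dim, bounds, pop_size=40, generations=60,
                     mutation_rate=0.2, seed=0):
    """Minimal real-coded genetic algorithm (a sketch, not hGA/vGA).
    Maximizes `fitness` over [bounds[0], bounds[1]]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if rng.random() < mutation_rate:              # Gaussian mutation
                i = rng.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy overall evaluation measure, peaked at the (hypothetical) best design (0.3, 0.7).
best = genetic_maximize(lambda x: -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2),
                        dim=2, bounds=(0.0, 1.0))
```

In the thesis's setting, the fitness function would be the overall evaluation measure built from the preference functions, and the vector x the design parameters.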

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance:

20.00%

Publisher:

Abstract:

We examine voting situations in which individuals have incomplete information over each others' true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.

Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result which states that almost all such electoral systems (with respect to the distributions of voter behavior) exhibit different incentives for candidates who seek to maximize expected vote share and those who seek to maximize probability of victory.

In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters are playing a Quantal Response Equilibrium (McKelvey and Palfrey (1995), (1998)). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point which maximizes the sum of voters' utility functions). In two candidate contests we find that this equilibrium is unique.
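
The action-specific-shock structure can be illustrated with the familiar logit special case (a sketch of McKelvey–Palfrey's logit quantal response, not the chapter's general framework): under extreme-value shocks, each voter chooses an action with probability proportional to the exponential of its expected utility.

```python
import math

def logit_response(utilities, lam=1.0):
    """Logit quantal response: P(action i) is proportional to exp(lam * u_i).
    lam = 0 gives uniform randomness; lam -> infinity approaches best response."""
    # Subtract the max utility for numerical stability before exponentiating.
    m = max(utilities)
    weights = [math.exp(lam * (u - m)) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]
```

The probabilities sum to one and shift toward the higher-utility action as lam grows, which is the sense in which preferences are "subject to shocks" yet still responsive.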

Finally, in Chapter 4, we attempt the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.

Relevance:

20.00%

Publisher:

Abstract:

A noncommutative 2-torus is one of the main toy models of noncommutative geometry, and a noncommutative n-torus is a straightforward generalization of it. In 1980, Pimsner and Voiculescu [17] described a 6-term exact sequence, which allows for the computation of the K-theory of noncommutative tori. It follows that both even and odd K-groups of n-dimensional noncommutative tori are free abelian groups on 2^(n-1) generators. In 1981, the Powers-Rieffel projector was described [19]; together with the class of the identity, it generates the even K-theory of noncommutative 2-tori. In 1984, Elliott [10] computed the trace and Chern character on these K-groups. According to Rieffel [20], the odd K-theory of a noncommutative n-torus coincides with the group of connected components of the group of invertible elements of the algebra. In particular, generators of K-theory can be chosen to be invertible elements of the algebra. In Chapter 1, we derive an explicit formula for the first nontrivial generator of the odd K-theory of noncommutative tori. This gives the full set of generators for the odd K-theory of noncommutative 3-tori and 4-tori.

In Chapter 2, we apply the graded-commutative framework of differential geometry to the polynomial subalgebra of the noncommutative torus algebra. We use the framework of differential geometry described in [27], [14], [25], [26]. In order to apply this framework to the noncommutative torus, the notion of graded-commutative algebra has to be generalized: the "signs" should be allowed to take values in U(1), rather than just {-1,1}. Such a generalization is well-known (see, e.g., [8] in the context of linear algebra). We reformulate the relevant results of [27], [14], [25], [26] using this extended notion of sign. We show how this framework can be used to construct differential operators, differential forms, and jet spaces on noncommutative tori. Then, we compare the constructed differential forms to the ones obtained from the spectral triple of the noncommutative torus. Sections 2.1-2.3 recall the basic notions from [27], [14], [25], [26], with the required change of the notion of "sign". In Section 2.4, we apply these notions to the polynomial subalgebra of the noncommutative torus algebra. This polynomial subalgebra is similar to a free graded-commutative algebra. We show that, when restricted to the polynomial subalgebra, Connes' construction of differential forms gives the same answer as the one obtained from graded-commutative differential geometry. One may try to extend these notions to the smooth noncommutative torus algebra, but this was not done in this work.

A reconstruction of the Beilinson-Bloch regulator (for curves) via Fredholm modules was given by Eugene Ha in [12]. However, the proof in [12] contains a critical gap; in Chapter 3, we close this gap. More specifically, we do this by obtaining some technical results, and by proving Property 4 of Section 3.7 (see Theorem 3.9.4), which implies that such reformulation is, indeed, possible. The main motivation for this reformulation is the longer-term goal of finding possible analogs of the second K-group (in the context of algebraic geometry and K-theory of rings) and of the regulators for noncommutative spaces. This work should be seen as a necessary preliminary step for that purpose.

For the convenience of the reader, we also give a short description of the results from [12], as well as some background material on central extensions and Connes-Karoubi character.

Relevance:

20.00%

Publisher:

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance is obtained by training with the dual distribution, which depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. Benefits of using this distribution are exemplified in both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario, where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether out-of-sample performance will improve in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
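
The basic weighting device can be sketched as follows (a minimal illustration of importance weighting, not the Targeted Weighting algorithm itself): reweighting points drawn from a distribution p by the density ratio w(x) = q(x)/p(x) makes weighted averages behave as if the sample came from q.

```python
import random

def weighted_mean(xs, weights):
    """Self-normalized importance-weighted average."""
    total = sum(weights)
    return sum(x * w for x, w in zip(xs, weights)) / total

# Sample from p = Uniform(0, 1); pretend the target is q = Uniform(0, 0.5).
rng = random.Random(0)
xs = [rng.random() for _ in range(100_000)]
# Density ratio w(x) = q(x)/p(x): 2 on [0, 0.5], 0 elsewhere.
ws = [2.0 if x < 0.5 else 0.0 for x in xs]
est = weighted_mean(xs, ws)   # estimates E_q[x] = 0.25
```

The thesis's point is that such weights also inflate variance, which is why a test like Targeted Weighting is needed to decide whether they help.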

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset, the Netflix dataset. Their low computational complexity is the main advantage over previous algorithms proposed in the covariate-shift literature.

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories in time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing a means to discriminate between groups of animals, for example according to their genetic line.

Relevance:

20.00%

Publisher:

Abstract:

Fast radio bursts (FRBs) are a novel type of radio pulse whose physics is not yet understood. Only a handful of FRBs had been detected when we started this project. Taking into account the scant observations, we put physical constraints on FRBs. We excluded proposals of a galactic origin for their extraordinarily high dispersion measures (DM), in particular stellar coronas and HII regions. Our work therefore supports an extragalactic origin for FRBs. We show that the resolved scattering tail of FRB 110220 is unlikely to be due to propagation through the intergalactic plasma. Instead the scattering is probably caused by the interstellar medium in the FRB's host galaxy, and indicates that this burst sits in the central region of that galaxy. Pulse durations of order a millisecond constrain the source sizes of FRBs, implying enormous brightness temperatures and thus coherent emission. Electric fields near FRBs at cosmological distances would be so strong that they could accelerate free electrons from rest to relativistic energies in a single wave period. When we worked on FRBs, it was unclear whether they were genuine astronomical signals as distinct from 'perytons', clearly terrestrial radio bursts sharing some common properties with FRBs. In April 2015, astronomers discovered that perytons were emitted by microwave ovens: radio chirps similar to FRBs were produced when their doors were opened while they were still heating. Evidence for the astronomical nature of FRBs has strengthened since our paper was published; some bursts have been found to show linear and circular polarization, and Faraday rotation of the linear polarization has also been detected. After we completed our FRB paper, I decided to pause this project because of the lack of observational constraints, but I hope to resume working on FRBs in the near future.
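
The dispersion measures that rule out a galactic origin correspond to a frequency-dependent arrival delay; a sketch of the standard cold-plasma relation (textbook material, not a result of this thesis):

```python
def dispersion_delay_ms(dm, freq_ghz):
    """Arrival-time delay (ms) of a radio pulse relative to infinite
    frequency, for dispersion measure `dm` in pc cm^-3 and observing
    frequency in GHz: t is approximately 4.15 ms * DM * freq^-2."""
    return 4.15 * dm * freq_ghz ** -2
```

A DM of ~1000 pc cm^-3, typical of FRBs, gives a delay of about two seconds at 1.4 GHz, far beyond what galactic electron columns can supply along high-latitude sightlines.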

The pulsar triple system J0337+1715 has its orbital parameters fitted to high accuracy owing to the precise timing of the central millisecond pulsar. The two orbits are highly hierarchical, namely P_orb,1 << P_orb,2, where 1 and 2 label the inner and outer white dwarf (WD) companions respectively. Moreover, their orbital planes almost coincide, providing a unique opportunity to study secular interaction associated purely with eccentricity beyond the solar system. Secular interaction involves only effects averaged over many orbits; thus each companion can be represented by an elliptical wire with its mass distributed in inverse proportion to its local orbital speed. Generally there exists a mutual torque, which vanishes only when the apsidal lines are parallel or anti-parallel. To maintain either mode, the eccentricity ratio e_1/e_2 must take the proper value, so that both apsidal lines precess together. For J0337+1715, e_1 << e_2 for the parallel mode, while e_1 >> e_2 for the anti-parallel one. We show that the former precesses ~10 times more slowly than the latter. Currently the system is dominated by the parallel mode. Although only a little of the anti-parallel mode survives, both eccentricities, especially e_1, oscillate on a ~10^3 yr timescale, and detectable changes would occur within ~1 yr. We demonstrate that the anti-parallel mode is damped ~10^4 times faster than its parallel counterpart by any dissipative process diminishing e_1. If this process is tidal damping in the inner WD, we can estimate its tidal quality parameter Q to be ~10^6, a quantity that was poorly constrained by observations. However, tidal damping may also have occurred during the preceding low-mass X-ray binary (LMXB) phase or during hydrogen thermonuclear flashes. In both cases, the inner companion fills its Roche lobe and probably suffers mass/angular-momentum loss, which might cause e_1 to grow rather than decay.

Several pairs of solar system satellites occupy mean motion resonances (MMRs). We divide these into two groups according to their proximity to exact resonance. Proximity is measured by the existence of a separatrix in phase space. MMRs between Io-Europa, Europa-Ganymede and Enceladus-Dione are too distant from exact resonance for a separatrix to appear. A separatrix is present only in the phase spaces of the Mimas-Tethys and Titan-Hyperion MMRs and their resonant arguments are the only ones to exhibit substantial librations. When a separatrix is present, tidal damping of eccentricity or inclination excites overstable librations that can lead to passage through resonance on the damping timescale. However, after investigation, we conclude that the librations in the Mimas-Tethys and Titan-Hyperion MMRs are fossils and do not result from overstability.

Rubble piles are common in the solar system. Monolithic elements touch their neighbors in small localized areas, and voids occupy a significant fraction of the volume. In a fluid-free environment, heat cannot conduct through voids; only radiation can transfer energy across them. We model the effective thermal conductivity of a rubble pile and show that it is proportional to the square root of the pressure P for P <= eps_y^3 mu, where eps_y is the material's yield strain and mu its shear modulus. Our model provides an excellent fit to the depth dependence of the thermal conductivity in the top 140 cm of the lunar regolith. It also offers an explanation for the low thermal inertias of rocky asteroids and icy satellites. Lastly, we discuss how rubble piles slow the cooling of small bodies such as asteroids.
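
A minimal numerical sketch of the claimed scaling (with a hypothetical prefactor k0; the model's actual coefficient involves the contact geometry and radiative transfer across voids): effective conductivity grows as the square root of pressure up to the yield limit.

```python
def effective_conductivity(pressure, yield_strain, shear_modulus, k0=1.0):
    """Rubble-pile conductivity sketch: k_eff = k0 * sqrt(P / (eps_y^3 * mu)),
    valid for P <= eps_y^3 * mu (beyond that, contacts yield and the
    square-root scaling no longer applies)."""
    p_yield = yield_strain ** 3 * shear_modulus
    if pressure > p_yield:
        raise ValueError("model valid only for P <= eps_y^3 * mu")
    return k0 * (pressure / p_yield) ** 0.5
```

Quadrupling the lithostatic pressure, roughly what happens when depth quadruples, doubles k_eff, which is the depth dependence matched against the lunar-regolith data.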

Electromagnetic (EM) follow-up observations of gravitational wave (GW) events will help shed light on the nature of the sources, and more can be learned if the EM follow-up can start as soon as the GW event becomes observable. In this paper, we propose a computationally efficient time-domain algorithm capable of detecting gravitational waves from coalescing binaries of compact objects with nearly zero time delay. When the signal is strong enough, our algorithm also has the flexibility to trigger EM observation before the merger. The key to the efficiency of our algorithm is the use of chains of so-called Infinite Impulse Response (IIR) filters, which filter time-series data recursively. Computational cost is further reduced by a template interpolation technique that requires filtering only over a much coarser template bank than would otherwise be needed to recover optimal signal-to-noise ratio. For future detectors with sensitivity extending to lower frequencies, our algorithm's computational cost is shown to increase only insignificantly compared to the conventional time-domain correlation method. Moreover, at latencies of less than hundreds to thousands of seconds, this method is expected to be computationally more efficient than the straightforward frequency-domain method.
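
The recursive filtering at the heart of the method can be sketched with a single first-order IIR stage (a toy illustration, not the paper's filter chains): each output sample is computed from the previous output and the current input, so the cost per sample is constant regardless of the effective template length.

```python
def iir_filter(x, a, b):
    """First-order recursive (IIR) filter: y[n] = a * y[n-1] + b * x[n].
    For |a| < 1 the impulse response decays exponentially but never
    truncates, which is what "infinite impulse response" means."""
    y = []
    prev = 0.0
    for sample in x:
        prev = a * prev + b * sample
        y.append(prev)
    return y
```

In the low-latency search, many such stages (with complex coefficients) are chained so that their summed responses approximate a chirping inspiral template while still being updated sample by sample.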

Relevance:

20.00%

Publisher:

Abstract:

The re-ignition characteristics (variation of re-ignition voltage with time after current zero) of short alternating-current arcs between plane brass electrodes in air were studied by observing the average re-ignition voltages on the screen of a cathode-ray oscilloscope. The rates of rise of voltage were controlled by varying the shunting capacitance, and hence the natural period of oscillation, of the reactors used to limit the current. The shape of these characteristics and the effects on them of varying the electrode separation, air pressure, and current strength were determined.

The results show that short arc spaces recover dielectric strength in two distinct stages. The first stage agrees in shape and magnitude with a previously developed theory that all voltage is concentrated across a partially deionized space-charge layer, which increases its breakdown voltage with diminishing density of ionization in the field-free space. The second stage appears to follow complete deionization by the electric field, due to displacement of the field-free region by the space-charge layer; its magnitude and shape appear to be due simply to the increase in gas density as the gas cools. Temperatures calculated from this second stage, and ion densities determined from the first stage by means of the space-charge equation and an extrapolation of the temperature curve, are consistent with recent measurements of arcs by other methods. Analysis of the decrease of the apparent ion density with time shows that diffusion alone is adequate to explain the results and that volume recombination is not. The effects on the characteristics of variations in the parameters investigated are found to be in accord with previous results, and with the theory, if deionization mainly by diffusion is assumed.
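
The discrimination between diffusion and volume recombination rests on their different decay laws (a sketch of the standard forms, not the paper's fitted curves): diffusion-dominated deionization decays exponentially, while volume recombination follows a hyperbolic law, so plotting 1/n against time distinguishes the two.

```python
import math

def diffusion_decay(n0, t, tau):
    """Diffusion-dominated deionization: n(t) = n0 * exp(-t / tau)."""
    return n0 * math.exp(-t / tau)

def recombination_decay(n0, t, alpha):
    """Volume recombination, dn/dt = -alpha * n^2, integrates to
    1/n(t) = 1/n0 + alpha * t (1/n grows linearly in time)."""
    return 1.0 / (1.0 / n0 + alpha * t)
```

Under recombination the reciprocal density grows linearly with time; the measured decay of the apparent ion density instead follows the exponential form, which is why diffusion alone suffices.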

Relevance:

20.00%

Publisher:

Abstract:

Planning the management of data at proposal time and throughout its lifecycle is becoming increasingly important to funding agencies and is essential to ensure data's current usability and its long-term preservation and access. This presentation will describe the work being done at the Woods Hole Oceanographic Institution (WHOI) to assist PIs with the preparation of data management plans, and the role the Library has in this process. Data management does not mean simply storing information; the emphasis is now on sharing data and making research accessible. Topics to be covered include educating staff about the NSF data policy implementation, a data management survey, resources for proposal preparation, collaborating with other librarians, and next steps.

Relevance:

20.00%

Publisher:

Abstract:

I have been asked by administration how much of our collection could go into storage; they are optimistically hoping to free a room or two for faculty/staff offices, as some buildings need renovation or must be closed due to safety issues. Clearly, much of the population believes that all or most library materials are available online for free. I will present the results of our surveys of materials held and available online, and of the space "freed" thanks to archiving, and show how little space is actually freed.