932 results for Algebraic Geometry


Relevance: 20.00%

Abstract:

Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA’s behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
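The "recognizes a partial" condition described above is easy to make concrete. The sketch below is illustrative only (it is not the ibas implementation); the 2-state unary NFA and the specified strings are invented for the example.

```python
def nfa_accepts(delta, start, accept, word):
    """Simulate an NFA on one word by tracking the set of reachable states."""
    states = set(start)
    for sym in word:
        states = {q for s in states for q in delta.get((s, sym), set())}
    return bool(states & accept)

def recognizes_partial(delta, start, accept, positives, negatives):
    """An NFA recognizes a partial if it accepts every specified-in string and
    rejects every specified-out string; all other strings are unconstrained."""
    return (all(nfa_accepts(delta, start, accept, w) for w in positives)
            and not any(nfa_accepts(delta, start, accept, w) for w in negatives))

# Illustrative 2-state unary NFA for even-length strings over {'a'}
delta = {(0, 'a'): {1}, (1, 'a'): {0}}
print(recognizes_partial(delta, {0}, {0},
                         positives=['', 'aa', 'aaaa'],
                         negatives=['a', 'aaa']))   # True
```

Because the NFA's behavior is free on unspecified strings, any NFA passing this check for a partial of a language L is a candidate minimal NFA for L itself.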

Abstract:

The 9/11 Act mandates the inspection of 100% of cargo shipments entering the U.S. by 2012 and 100% inspection of air cargo by March 2010. So far, only 5% of inbound shipping containers are inspected thoroughly while air cargo inspections have fared better at 50%. Government officials have admitted that these milestones cannot be met since the appropriate technology does not exist. This research presents a novel planar solid phase microextraction (PSPME) device with enhanced surface area and capacity for collection of the volatile chemical signatures in air that are emitted from illicit compounds for direct introduction into ion mobility spectrometers (IMS) for detection. These IMS detectors are widely used to detect particles of illicit substances and do not have to be adapted specifically to this technology. For static extractions, PDMS and sol-gel PDMS PSPME devices provide significant increases in sensitivity over conventional fiber SPME. Results show a 50–400 times increase in mass detected of piperonal and a 2–4 times increase for TNT. In a blind study of 6 cases suspected to contain varying amounts of MDMA, PSPME-IMS correctly detected 5 positive cases with no false positives or negatives. One of these cases had minimal amounts of MDMA resulting in a false negative response for fiber SPME-IMS. A La (dihed) phase chemistry has shown an increase in the extraction efficiency of TNT and 2,4-DNT and enhanced retention over time. An alternative PSPME device was also developed for the rapid (seconds) dynamic sampling and preconcentration of large volumes of air for direct thermal desorption into an IMS. This device affords high extraction efficiencies due to strong retention properties under ambient conditions resulting in ppt detection limits when 3.5 L of air are sampled over the course of 10 seconds. 
Dynamic PSPME was used to sample the headspace over the following: MDMA tablets (12–40 ng detected of piperonal), high explosives (Pentolite) (0.6 ng detected of TNT), and several smokeless powders (26–35 ng of 2,4-DNT and 11–74 ng DPA detected). PSPME-IMS technology is flexible to end-user needs, is low-cost, rapid, sensitive, easy to use, easy to implement, and effective.
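The quoted ppt-level detection limit can be sanity-checked with back-of-the-envelope arithmetic. The 0.6 ng TNT and 3.5 L figures come from the abstract; the nominal air density is an assumption of this sketch, not a value from the source.

```python
def mass_ppt_in_air(analyte_ng, air_volume_l, air_density_g_per_l=1.2):
    """Mass-based parts-per-trillion of an analyte in a sampled air volume.
    The air density is an assumed nominal value (~1.2 g/L at room conditions)."""
    analyte_g = analyte_ng * 1e-9
    air_mass_g = air_volume_l * air_density_g_per_l
    return analyte_g / air_mass_g * 1e12

# 0.6 ng TNT collected from 3.5 L of sampled air
print(round(mass_ppt_in_air(0.6, 3.5)))   # 143, i.e. on the order of 100 ppt
```

This is consistent with the abstract's claim of ppt detection limits for the dynamic sampling device.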

Abstract:

Structural Health Monitoring (SHM) systems were developed to evaluate the integrity of a system during operation and to quickly identify maintenance problems. They will be used in future aerospace vehicles to improve safety, reduce cost, and minimize the maintenance time of a system. Many SHM systems have already been developed to evaluate the integrity of plates and are used in marine structures; their implementation in manufacturing processes is still awaited. The application of SHM methods to complex geometries and welds poses two important challenges in this area of research. This research work started by studying the characteristics of piezoelectric actuators, and a small energy harvester was designed. The output voltages at different vibration frequencies were acquired to determine the nonlinear characteristics of the piezoelectric stripe actuators. The frequency response was evaluated experimentally. AA-battery-sized energy-harvesting devices were developed using these actuators. When the round and square cross-section devices were excited at 50 Hz, they generated 16 V and 25 V, respectively. The Surface Response to Excitation (SuRE) and Lamb wave methods were used to estimate the condition of parts with complex geometries. Cutting tools and welded plates were considered. Both approaches used piezoelectric elements attached to the surfaces of the considered parts. When the SuRE method was used, the variation of the magnitude of the frequency response was evaluated and the sum of the squares of the differences was calculated. The envelope of the received signal was used for the analysis of wave propagation. Bi-orthogonal wavelet (Binlet) analysis was also used to evaluate the data obtained with the Lamb wave technique. Both the Lamb wave and SuRE approaches, along with the three methods for data analysis, worked effectively to detect increasing tool wear.
Similarly, they detected defects on the plate, on the weld, and on a separate plate without any sensor as long as it was welded to the test plate.
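The SuRE damage metric described above, the sum of squared differences between baseline and current frequency-response magnitudes, can be sketched in a few lines. The spectra below are synthetic stand-ins, not measured data from this work.

```python
import numpy as np

def sure_damage_index(baseline_spectrum, current_spectrum):
    """Sum of squared differences between frequency-response magnitudes,
    the change statistic used by the SuRE method to flag e.g. tool wear."""
    b = np.abs(np.asarray(baseline_spectrum, dtype=float))
    c = np.abs(np.asarray(current_spectrum, dtype=float))
    return float(np.sum((c - b) ** 2))

# Synthetic magnitudes: "wear" slightly scales and offsets the response
rng = np.random.default_rng(0)
baseline = np.abs(np.fft.rfft(rng.standard_normal(1024)))
worn = baseline * 1.05 + 0.1          # hypothetical drifted response
print(sure_damage_index(baseline, baseline))      # 0.0
print(sure_damage_index(baseline, worn) > 0.0)    # True
```

In practice the index is tracked over successive measurements; a monotone increase signals progressing damage.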


Abstract:

This thesis presents a hybrid technique for the design of frequency selective surfaces (FSS) on an isotropic dielectric layer, considering various geometries for the elements of the unit cell. Specifically, the hybrid technique combines the equivalent circuit method with a genetic algorithm, aiming at the synthesis of structures with single-band and dual-band responses. The equivalent circuit method models the structure as an equivalent circuit, with different circuits obtained for different geometries. From the parameters of these circuits, the transmission and reflection characteristics of the patterned structures can be obtained. For the optimization of the patterned structures according to the desired frequency response, the Matlab™ optimization tool optimtool proved easy to use and allowed important results of the optimization analysis to be explored. In this thesis, numerical and experimental results are presented for the different characteristics of the analyzed geometries. To this end, a technique based on genetic algorithms and differential geometry was devised for obtaining the parameter N, yielding rational algebraic models that determine more accurate values of N and facilitating new FSS designs with these geometries. The optimal results for N are grouped according to the occupancy factor of the cell and the thickness of the dielectric, for modeling the structures by means of rational algebraic equations. Furthermore, for the proposed hybrid model, a fitness function was developed to quantify the error in the definitions of FSS bandwidths for single-band and dual-band transmission responses. This thesis also covers the construction of FSS prototypes with frequency settings and bandwidths obtained using this function.
The FSS were initially evaluated through simulations performed with the commercial software Ansoft Designer™, followed by simulation with the equivalent circuit method to obtain a value of N that makes the resonance frequency and bandwidth of the analyzed FSS converge; the results were then compared. The applied methodology is validated by the construction and measurement of prototypes with different cell geometries of the FSS arrays.
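The role of the fitness function can be sketched as follows: a candidate value of N is scored by the relative error between the simulated and target resonance frequency and bandwidth, and the search keeps the best-scoring candidate. The "circuit model" and all numbers below are invented for illustration; they are not the thesis's equivalent-circuit model.

```python
def fss_fitness(f_res_ghz, bw_ghz, f_target_ghz, bw_target_ghz):
    """Hypothetical fitness: sum of relative errors between the simulated
    resonance frequency / bandwidth and the design targets (lower is better)."""
    return (abs(f_res_ghz - f_target_ghz) / f_target_ghz
            + abs(bw_ghz - bw_target_ghz) / bw_target_ghz)

def toy_circuit_model(n):
    """Stand-in for the equivalent-circuit response as a function of the
    correction parameter N (purely illustrative)."""
    return 10.0 / n, 1.5 * n / (n + 1.0)   # (resonance, bandwidth) in GHz

# Exhaustive search over candidate N, as a GA would do stochastically
candidates = [x / 10 for x in range(10, 100)]        # N in [1.0, 9.9]
best_fit, best_n = min((fss_fitness(*toy_circuit_model(n), 2.0, 1.2), n)
                       for n in candidates)
print(best_n)   # 5.0 for this toy model and these targets
```

A genetic algorithm replaces the exhaustive loop with selection, crossover, and mutation over a population of N values, but the fitness evaluation is the same.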


Abstract:

We study the algebraic and topological genericity of certain subsets of locally recurrent functions, obtaining (among other results) algebrability and spaceability within these classes.


Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
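The product-of-sines quantity above can be computed directly from an SVD: the singular values of Q_A^T Q_B are the cosines of the principal angles. A minimal sketch with two illustrative planes in R^3 (not data from the dissertation):

```python
import numpy as np

def principal_angle_sines(A, B):
    """Sines of the principal angles between the column spans of A and B.
    The singular values of Q_A^T Q_B are the cosines of those angles."""
    qa, _ = np.linalg.qr(A)
    qb, _ = np.linalg.qr(B)
    cosines = np.clip(np.linalg.svd(qa.T @ qb, compute_uv=False), 0.0, 1.0)
    return np.sqrt(1.0 - cosines ** 2)

# Two planes in R^3 sharing the x-axis, orthogonal in the other direction
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span{e1, e2}
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # span{e1, e3}
sines = principal_angle_sines(A, B)
print(np.round(sines, 6))        # [0. 1.]: one shared direction, one orthogonal
print(float(np.prod(sines)))     # 0.0: intersecting subspaces are hard to separate
```

A zero product of sines signals a shared direction, the worst case for classification under small model mismatch; this is why the TRAIT transform seeks to enlarge principal angles.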

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error, thereby mitigating overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
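The first approach's "vary smoothly over local neighborhoods" idea is commonly expressed as a graph-smoothness penalty; the sketch below shows the standard unnormalized-Laplacian form with a toy chain graph. The exact regularizer used in the dissertation may differ.

```python
import numpy as np

def laplacian_smoothness(W, F):
    """Graph-smoothness penalty sum_{ij} W_ij ||f_i - f_j||^2, which equals
    2 tr(F^T L F) for the unnormalized graph Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return 2.0 * float(np.trace(F.T @ L @ F))

W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node chain graph
smooth = np.array([[0.25], [0.25], [0.25]])   # constant feature: no penalty
wiggly = np.array([[0.0], [1.0], [0.0]])      # varies across every edge
print(laplacian_smoothness(W, smooth))   # 0.0
print(laplacian_smoothness(W, wiggly))   # 4.0
```

Adding such a term to the training loss penalizes decisions that change abruptly between manifold neighbors, shrinking the effective capacity of the learner.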

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be tracked efficiently as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure; splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. Deviation (of each datum) from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
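The deviation statistic at the heart of the anomaly detector reduces to a residual norm against the current subspace estimate. The sketch below uses a fixed subspace and a synthetic stream; the tracking and tree machinery of the actual framework is omitted.

```python
import numpy as np

def subspace_residual(U, x):
    """Deviation of a datum from a learned linear subspace: the norm of the
    component of x outside span(U), for U with orthonormal columns."""
    return float(np.linalg.norm(x - U @ (U.T @ x)))

# Residuals of streaming data against a 1-D subspace in R^3; a sudden
# jump in the residual series flags an abrupt change in the data model.
U = np.array([[1.0], [0.0], [0.0]])                      # span of e1
stream = [np.array([t, 0.0, 0.0]) for t in range(5)]     # on-subspace data
stream += [np.array([5.0, 3.0, 4.0])]                    # abrupt change
residuals = [subspace_residual(U, x) for x in stream]
print(residuals)   # [0.0, 0.0, 0.0, 0.0, 0.0, 5.0]
```

Thresholding this series (or feeding it to a changepoint test) yields the anomaly detector; the multiscale tree version simply computes the residual against the nearest local subspace.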

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
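At its core, applying the inverse perspective map to feature locations rather than whole images amounts to pushing coordinates through a homography, which is far cheaper than warping pixels. The matrix below is an illustrative scaling, not a calibrated ground-plane map.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates.
    Mapping feature-cell centers instead of resampling the image is the
    cheap operation that perspective-adapted detectors can exploit."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # dehomogenize

H = np.diag([2.0, 2.0, 1.0])                  # toy rectifying homography
pts = np.array([[10.0, 20.0], [30.0, 40.0]])  # e.g. HOG cell centers
print(apply_homography(H, pts))   # [[20. 40.] [60. 80.]]
```

For HOG-like features, the gradient-orientation histograms must additionally be re-binned under the local linearization of the map; the coordinate transform above is only the geometric half of the adaptation.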

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Abstract:

Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional fields into a global data set. We use the S-2004 global 1-minute bathymetry as the backbone and add an improved version of the BEDMAP topography for an area that roughly coincides with the Antarctic continental shelf. Locations of the merging line have been carefully adjusted in order to get the best out of each data set. High-resolution gridded data for upper and lower ice surface topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier have been carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of Alfred Wegener Institute (AWI), British Antarctic Survey (BAS) and Lamont-Doherty Earth Observatory (LDEO), gridded, and again carefully merged into the existing bathymetry map. The global 1-minute dataset (RTopo-1 Version 1.0.5) has been split into two netCDF files. The first contains digital maps for global bedrock topography, ice bottom topography, and surface elevation. The second contains the auxiliary maps for data sources and the surface type mask. A regional subset that covers all variables for the region south of 50 deg S is also available in netCDF format. Datasets for the locations of grounding and coast lines are provided in ASCII format.

Abstract:

Coccolithophores are a key phytoplankton group that exhibit remarkable diversity in their biology, ecology, and calcitic exoskeletons (coccospheres). An understanding of the physiological processes that underpin coccosphere architecture is essential for maximizing the information that can be retrieved from their extensive fossil record. Using culturing experiments on four modern species from three long-lived families, we investigate how coccosphere architecture responds to population shifts from rapid (exponential) to slowed (stationary) growth phases as nutrients become depleted. These experiments reveal statistical differences in cell size and the number of coccoliths per cell between these two growth phases, specifically that cells in exponential-phase growth are typically smaller with fewer coccoliths, whereas cells experiencing growth-limiting nutrient depletion have larger coccosphere sizes and greater numbers of coccoliths per cell. Although the exact numbers are species-specific, these growth-phase shifts in coccosphere geometry are common to four different coccolithophore families (Calcidiscaceae, Coccolithaceae, Isochrysidaceae, Helicosphaeraceae), demonstrating that this is a core physiological response to nutrient depletion across a representative diversity of this phytoplankton group. Polarised light microscopy was used for all coccosphere geometry measurements.
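The statistical comparison between growth phases is, in essence, a two-sample test on cell-size measurements. The sketch below computes Welch's t statistic on synthetic diameters that merely mimic the reported pattern (smaller exponential-phase cells, larger stationary-phase cells); none of the numbers come from the study.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for an unequal-variance comparison of two samples,
    e.g. coccosphere diameters in stationary vs. exponential growth phase."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se2 = a.var(ddof=1) / a.size + b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(se2)

# Synthetic diameters (µm), illustrative only
rng = np.random.default_rng(1)
exponential = rng.normal(8.0, 1.0, 50)    # smaller cells, rapid growth
stationary = rng.normal(10.0, 1.0, 50)    # larger cells, nutrient-depleted
t = welch_t(stationary, exponential)
print(t > 2.0)   # True: clearly separated means at these sample sizes
```

A large positive t supports the growth-phase shift in coccosphere geometry; the study's actual species-specific numbers would replace the synthetic samples.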

Abstract:

Let $M$ be a compact, oriented, even-dimensional Riemannian manifold and let $S$ be a Clifford bundle over $M$ with Dirac operator $D$. Then \[ \textsc{Atiyah-Singer:} \quad \operatorname{Ind} D = \int_M \hat{\mathcal{A}}(TM)\wedge \operatorname{ch}(\mathcal{V}), \] where $\mathcal{V} = \operatorname{Hom}_{\mathbb{C}l(TM)}(\slashed{S}, S)$. We prove the above statement by means of the heat kernel of the heat semigroup $e^{-tD^2}$. The first outstanding result is the McKean-Singer theorem, which describes the index in terms of the supertrace of the heat kernel. The trace of the heat kernel is obtained from local geometric information. Moreover, using the asymptotic expansion of the kernel, we will see that only one term matters in the computation of the index. The Berezin formula tells us that the supertrace is nothing but the coefficient of the Clifford top part, and in the end, Getzler calculus enables us to express the integral of these top parts in terms of characteristic classes.
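The McKean-Singer theorem invoked above can be written explicitly; here $k_t$ denotes the heat kernel of $e^{-tD^2}$ and $\operatorname{Str}$/$\operatorname{str}$ the supertrace:

```latex
% McKean--Singer: the supertrace of the heat semigroup is independent of t
% and computes the index; its small-t asymptotics then localize it on M.
\[
  \operatorname{Ind} D
  \;=\; \operatorname{Str}\!\left(e^{-tD^2}\right)
  \;=\; \int_M \operatorname{str} k_t(x,x)\,\mathrm{dvol}(x),
  \qquad t > 0.
\]
```

Since the left-hand side is $t$-independent, one may take $t \to 0$ and read off the index from the asymptotic expansion of $k_t(x,x)$, which is where the Berezin formula and Getzler calculus enter.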

Abstract:

The category of rational SO(2)-equivariant spectra admits an algebraic model. That is, there is an abelian category A(SO(2)) whose derived category is equivalent to the homotopy category of rational SO(2)-equivariant spectra. An important question is: does this algebraic model capture the smash product of spectra? The category A(SO(2)) is known as Greenlees' standard model; it is an abelian category that has no projective objects and is constructed from modules over a non-Noetherian ring. As a consequence, the standard techniques for constructing a monoidal model structure cannot be applied. In this paper a monoidal model structure on A(SO(2)) is constructed, and the derived tensor product on the homotopy category is shown to be compatible with the smash product of spectra. The method used is related to techniques developed by the author in earlier joint work with Roitzheim, which constructed a monoidal model structure on Franke's exotic model for the K_{(p)}-local stable homotopy category. A monoidal Quillen equivalence to a simpler monoidal model category that has explicit generating sets is also given. Having monoidal model structures on the two categories removes a serious obstruction to constructing a series of monoidal Quillen equivalences between the algebraic model and rational SO(2)-equivariant spectra.