911 results for TOROIDAL GEOMETRY
Abstract:
We consider square matrices whose first-row elements are members of one arithmetic progression and whose second-row elements are members of another arithmetic progression. We prove that the set of these matrices is a group. We then give a parameterization of this group and investigate some invariants of the corresponding geometry. We find an invariant of any two points and an invariant of any six points. All calculations are performed in Maple.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014.
Abstract:
2000 Mathematics Subject Classification: 53C42, 53C15.
Abstract:
In SNAP (surface nanoscale axial photonics) resonators, propagation of a slow whispering gallery mode along an optical fiber is controlled by nanoscale variation of the effective radius of the fiber [1]. Similar behavior can be realized in so-called nanobump microresonators, in which the introduced variation of the effective radius is asymmetric, i.e., depends on the axial coordinate [2]. The possibility of realizing such structures "on the fly" in an optical fiber by applying external electrostatic fields is discussed in this work. It is shown that local variations in the effective radius of the fiber and in its refractive index caused by external electric fields can be large enough to observe SNAP-structure-like behavior in an originally flat optical fiber. Theoretical estimates of the introduced refractive index and effective radius changes, together with results of finite element calculations, are presented. Various effects are taken into account: electromechanical (piezoelectricity and electrostriction), electro-optical (Pockels and Kerr effects), and elasto-optical. Different initial fiber cross-sections are studied. The use of linear isotropic fiber materials (such as silica) and nonlinear anisotropic materials (such as lithium niobate) is discussed. REFERENCES: [1] M. Sumetsky, J. M. Fini, Opt. Exp. 19, 26470 (2011). [2] L. A. Kochkurov, M. Sumetsky, Opt. Lett. 40, 1430 (2015).
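As a rough sense of scale for the Pockels contribution mentioned above, the following Python sketch evaluates the standard linear electro-optic estimate |delta_n| = (1/2) n^3 r33 E with textbook lithium niobate constants; the field strength is an assumed illustrative value, not a figure from the paper.

# Order-of-magnitude Pockels-effect index change: |delta_n| = 0.5 * n^3 * r33 * E
n_e = 2.14        # extraordinary refractive index of LiNbO3 near 1.55 um (textbook value)
r33 = 30.8e-12    # electro-optic coefficient r33, m/V (textbook value)
E = 1.0e6         # applied electrostatic field, V/m (assumed for illustration)
delta_n = 0.5 * n_e**3 * r33 * E
print(f"|delta_n| ~ {delta_n:.1e}")   # ~1.5e-4, comparable to typical SNAP-scale perturbations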
Abstract:
The 9/11 Act mandates the inspection of 100% of cargo shipments entering the U.S. by 2012 and 100% inspection of air cargo by March 2010. So far, only 5% of inbound shipping containers are inspected thoroughly, while air cargo inspections have fared better at 50%. Government officials have admitted that these milestones cannot be met, since the appropriate technology does not exist. This research presents a novel planar solid phase microextraction (PSPME) device with enhanced surface area and capacity for collection of the volatile chemical signatures in air that are emitted from illicit compounds, for direct introduction into ion mobility spectrometers (IMS) for detection. These IMS detectors are widely used to detect particles of illicit substances and do not have to be adapted specifically to this technology. For static extractions, PDMS and sol-gel PDMS PSPME devices provide significant increases in sensitivity over conventional fiber SPME. Results show a 50–400-fold increase in the detected mass of piperonal and a 2–4-fold increase for TNT. In a blind study of 6 cases suspected to contain varying amounts of MDMA, PSPME-IMS correctly detected 5 positive cases with no false positives or negatives. One of these cases had minimal amounts of MDMA, resulting in a false negative response for fiber SPME-IMS. A La(dihed) phase chemistry has shown an increase in the extraction efficiency of TNT and 2,4-DNT and enhanced retention over time. An alternative PSPME device was also developed for the rapid (seconds) dynamic sampling and preconcentration of large volumes of air for direct thermal desorption into an IMS. This device affords high extraction efficiencies, due to strong retention properties under ambient conditions, resulting in ppt detection limits when 3.5 L of air are sampled over the course of 10 seconds. Dynamic PSPME was used to sample the headspace over the following: MDMA tablets (12–40 ng of piperonal detected), high explosives (Pentolite) (0.6 ng of TNT detected), and several smokeless powders (26–35 ng of 2,4-DNT and 11–74 ng of DPA detected). PSPME-IMS technology is flexible to end-user needs, low-cost, rapid, sensitive, easy to use, easy to implement, and effective.
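For context on what a ppt-level detection limit implies at the stated 3.5 L / 10 s sampling rate, here is a hedged back-of-the-envelope Python calculation; the molar volume and the choice of TNT as the analyte are our own illustrative assumptions, not parameters from the work.

V_air = 3.5       # sampled air volume, L (from the abstract)
t_s = 10.0        # sampling time, s (from the abstract)
V_m = 24.45       # molar volume of air at 25 C, 1 atm, L/mol (assumed)
MW_TNT = 227.13   # molar mass of TNT, g/mol (assumed analyte)
c = 1e-12         # 1 ppt (v/v)
flow_L_min = V_air / t_s * 60                 # effective sampling rate, ~21 L/min
mass_pg = V_air / V_m * c * MW_TNT * 1e12     # ~33 pg of TNT present at 1 ppt
print(f"flow ~ {flow_L_min:.0f} L/min, mass at 1 ppt ~ {mass_pg:.0f} pg")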
Abstract:
Structural Health Monitoring (SHM) systems were developed to evaluate the integrity of a system during operation and to quickly identify maintenance problems. They will be used in future aerospace vehicles to improve safety, reduce cost, and minimize the maintenance time of a system. Many SHM systems have already been developed to evaluate the integrity of plates and are used in marine structures. Their implementation in manufacturing processes is still awaited. The application of SHM methods to complex geometries and to welds are two important challenges in this area of research. This research work started by studying the characteristics of piezoelectric actuators, and a small energy harvester was designed. The output voltages at different frequencies of vibration were acquired to determine the nonlinear characteristics of the piezoelectric stripe actuators. The frequency response was evaluated experimentally. AA-battery-sized energy harvesting devices were developed using these actuators. When the round and square cross-section devices were excited at 50 Hz, they generated 16 V and 25 V, respectively. The Surface Response to Excitation (SuRE) and Lamb wave methods were used to estimate the condition of parts with complex geometries; cutting tools and welded plates were considered. Both approaches used piezoelectric elements attached to the surfaces of the considered parts. When the SuRE method was used, the variation of the magnitude of the frequency response was evaluated and the sum of the squared differences was calculated. The envelope of the received signal was used for the analysis of wave propagation. Bi-orthogonal wavelet (Binlet) analysis was also used for the evaluation of the data obtained with the Lamb wave technique. Both the Lamb wave and SuRE approaches, along with the three methods for data analysis, worked effectively to detect increasing tool wear. Similarly, they detected defects on the plate, on the weld, and on a separate plate without any sensor, as long as it was welded to the test plate.
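A minimal Python sketch of the sum-of-squared-differences comparison described above, under our own simplifying assumptions about the signals (names and shapes are illustrative):

import numpy as np

def ssd_damage_metric(baseline, current):
    # Compare spectral magnitudes of a healthy baseline and a later measurement;
    # a larger value indicates a larger change in the surface response,
    # e.g. growing tool wear or a developing weld defect.
    H0 = np.abs(np.fft.rfft(baseline))
    H1 = np.abs(np.fft.rfft(current))
    return float(np.sum((H1 - H0) ** 2))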
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
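As an illustration of the quantities driving these error characterizations, the sketch below computes principal angles between two subspaces and the two proxies named above (product of sines, sum of squared sines); the random subspaces are placeholders, not data from the dissertation.

import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))           # basis of a 3-dim subspace of R^50
B = rng.standard_normal((50, 3))           # basis of another 3-dim subspace
theta = subspace_angles(A, B)              # principal angles, in radians
prod_sines = np.prod(np.sin(theta))        # governs error when mismatch is vanishingly small
sum_sq_sines = np.sum(np.sin(theta) ** 2)  # governs error when mismatch is more significant
print(prod_sines, sum_sq_sines)            # larger angles -> smaller classification error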
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different approaches to preventing overfitting are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
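One plausible reading of the second approach, sketched in Python under our own assumptions (a k-nearest-neighbor adjacency and a residual-energy penalty; neither detail is specified in the abstract):

import numpy as np

def knn_adjacency(X, k=5):
    # Symmetric k-nearest-neighbor adjacency matrix for the rows of X.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest, excluding self
    W = np.zeros_like(d)
    W[np.arange(len(X))[:, None], idx] = 1.0
    return np.maximum(W, W.T)

def alignment_penalty(F, W, m=10):
    # Energy of the learned features F (n_samples x n_features) lying outside
    # the span of the top-m eigenvectors of W; minimizing this encourages
    # alignment with the principal components of the adjacency matrix.
    _, vecs = np.linalg.eigh(W)
    U = vecs[:, -m:]
    residual = F - U @ (U.T @ F)
    return float(np.sum(residual ** 2))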
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
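A minimal sketch of such a deviation statistic, assuming the tracked model is locally an orthonormal subspace basis U (our own simplification of the multiscale scheme):

import numpy as np

def anomaly_score(x, U):
    # Norm of the residual after projecting the datum x onto the subspace
    # spanned by the orthonormal columns of U; a sustained jump in these
    # scores signals an abrupt change in the underlying manifold.
    residual = x - U @ (U.T @ x)
    return float(np.linalg.norm(residual))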
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
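For reference, the image-level inverse perspective mapping that the feature-level methods are contrasted with can be written in a few lines of Python with OpenCV; the four point correspondences below are illustrative placeholders, not values from the thesis.

import cv2
import numpy as np

# Map a trapezoidal ground-plane region in the image to a rectangular
# bird's-eye view via a 3x3 homography.
src_pts = np.float32([[420, 300], [860, 300], [1280, 720], [0, 720]])  # image corners (assumed)
dst_pts = np.float32([[0, 0], [640, 0], [640, 720], [0, 720]])         # ground-plane target
H = cv2.getPerspectiveTransform(src_pts, dst_pts)
image = cv2.imread("frame.png")                # hypothetical input frame
birds_eye = cv2.warpPerspective(image, H, (640, 720))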
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
While a great amount of attention is being given to the development of nanodevices, both through academic research and private industry, the field is still in its early stages. Progress hinges upon the development of tools and components that can precisely control the interaction between light and matter, and that can be efficiently integrated into nanodevices. Nanofibers are one of the most promising candidates for such purposes. However, in order to fully exploit their potential, a more intimate knowledge of how nanofibers interact with single neutral atoms must be gained. As we learn more about the properties of nanofiber modes and the way they interface with atoms, and as the technology develops that allows them to be prepared with more precisely known properties, they become more and more adaptable and effective. The work presented in this thesis touches on many topics, which is testament to the broad range of applications and high degree of promise that nanofibers hold. For immediate use, we need to fully grasp how they can best be implemented as sensors, filters, detectors, and switches in existing nanotechnologies. Areas of interest also include how they might best be exploited for probing atom-surface interactions, single-atom detection, and single-photon generation. Nanofiber research is also motivated by their potential integration into fundamental cold atom quantum experiments, and the role they can play there. Combining nanofibers with existing optical and quantum technologies is a powerful strategy for advancing areas like quantum computation, quantum information processing, and quantum communication. In this thesis I present a variety of theoretical work, which explores a range of the applications listed above. The first work presented concerns the use of the evanescent fields around a nanofiber to manipulate an existing trapping geometry and thereby influence the centre-of-mass dynamics of the atom. The second work presented explores interesting trapping geometries that can be achieved in the vicinity of a fiber in which just four modes are allowed to propagate. In a third study I explore the use of a nanofiber as a detector of small numbers of photons by calculating the rate of emission into the fiber modes when the fiber is moved along next to a regularly spaced array of atoms. Also included are some results from a work in progress, where I consider the scattered field that appears along the nanofiber axis when a small number of atoms trapped along that axis are illuminated orthogonally; some interesting preliminary results are outlined. Finally, in contrast with the rest of the thesis, I consider some interesting physics that can be done in one of the trapping geometries that can be created around the fiber: here I explore the ground states of a phase-separated two-component superfluid Bose-Einstein condensate trapped in a toroidal potential.
Abstract:
Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional fields into a global data set. We use the S-2004 global 1-minute bathymetry as the backbone and add an improved version of the BEDMAP topography for an area that roughly coincides with the Antarctic continental shelf. Locations of the merging line have been carefully adjusted in order to get the best out of each data set. High-resolution gridded data for upper and lower ice surface topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier have been carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of Alfred Wegener Institute (AWI), British Antarctic Survey (BAS) and Lamont-Doherty Earth Observatory (LDEO), gridded, and again carefully merged into the existing bathymetry map. The global 1-minute dataset (RTopo-1 Version 1.0.5) has been split into two netCDF files. The first contains digital maps for global bedrock topography, ice bottom topography, and surface elevation. The second contains the auxiliary maps for data sources and the surface type mask. A regional subset that covers all variables for the region south of 50 deg S is also available in netCDF format. Datasets for the locations of grounding and coast lines are provided in ASCII format.
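A minimal Python sketch of reading the global maps; the file and variable names below are assumptions for illustration and should be checked against the actual dataset (e.g., with ncdump -h):

import netCDF4

with netCDF4.Dataset("RTopo-1.0.5_global.nc") as ds:   # hypothetical file name
    print(ds.variables.keys())                          # list the available fields
    bedrock = ds.variables["bedrock_topography"][:]     # assumed variable name
    ice_bottom = ds.variables["ice_bottom_topography"][:]  # assumed variable name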
Abstract:
Coccolithophores are a key phytoplankton group that exhibit remarkable diversity in their biology, ecology, and calcitic exoskeletons (coccospheres). An understanding of the physiological processes that underpin coccosphere architecture is essential for maximizing the information that can be retrieved from their extensive fossil record. Using culturing experiments on four modern species from three long-lived families, we investigate how coccosphere architecture responds to population shifts from rapid (exponential) to slowed (stationary) growth phases as nutrients become depleted. These experiments reveal statistical differences in cell size and the number of coccoliths per cell between these two growth phases, specifically that cells in exponential-phase growth are typically smaller with fewer coccoliths, whereas cells experiencing growth-limiting nutrient depletion have larger coccosphere sizes and greater numbers of coccoliths per cell. Although the exact numbers are species-specific, these growth-phase shifts in coccosphere geometry are common to four different coccolithophore families (Calcidiscaceae, Coccolithaceae, Isochrysidaceae, Helicosphaeraceae), demonstrating that this is a core physiological response to nutrient depletion across a representative diversity of this phytoplankton group. Polarised light microscopy was used for all coccosphere geometry measurements.
Abstract:
Let $M$ be a compact, oriented, even-dimensional Riemannian manifold and let $S$ be a Clifford bundle over $M$ with Dirac operator $D$. Then \[ \textsc{Atiyah--Singer: } \quad \operatorname{Ind} D = \int_M \hat{\mathcal{A}}(TM)\wedge \operatorname{ch}(\mathcal{V}), \] where $\mathcal{V} = \operatorname{Hom}_{\mathbb{C}l(TM)}(\slashed{S}, S)$. We prove the above statement by means of the heat kernel of the heat semigroup $e^{-tD^2}$. The first outstanding result is the McKean-Singer theorem, which describes the index in terms of the supertrace of the heat kernel. The trace of the heat kernel is obtained from local geometric information. Moreover, if we use the asymptotic expansion of the kernel, we see that only one term matters in the computation of the index. The Berezin formula tells us that the supertrace is nothing but the coefficient of the Clifford top part, and finally, the Getzler calculus enables us to express the integral of these top parts in terms of characteristic classes.
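For reference, the McKean-Singer identity invoked above can be stated as follows (a standard formulation, not quoted verbatim from the text): \[ \operatorname{Ind} D = \operatorname{Str}\left(e^{-tD^2}\right) = \int_M \operatorname{str}\, k_t(x,x)\, \mathrm{dvol}(x) \quad \text{for all } t>0, \] where $k_t$ is the heat kernel of $e^{-tD^2}$ and $\operatorname{str}$ denotes the pointwise supertrace. Independence of $t$ is what permits passing to the small-$t$ asymptotic expansion, in which only one term contributes to the index.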
Abstract:
Electrospun nanofibers are a promising material for ligamentous tissue engineering; however, the weak mechanical properties of fibers to date have limited their clinical usage. The goal of this work was to modify electrospun nanofibers to create a robust structure that mimics the complex hierarchy of native tendons and ligaments. The scaffolds fabricated in this study consisted of either random or aligned nanofibers in flat sheets, or rolled nanofiber bundles that mimic the size scale of the fascicle units in primarily tensile load-bearing soft musculoskeletal tissues. Altering nanofiber orientation and geometry significantly affected mechanical properties; most notably, aligned nanofiber sheets had the greatest modulus: 125% higher than that of random nanofiber sheets and 45% higher than that of aligned nanofiber bundles. Modifying aligned nanofiber sheets to form aligned nanofiber bundles also resulted in approximately 107% higher yield stresses and 140% higher yield strains. The mechanical properties of the aligned nanofiber bundles were in the range of those of the native ACL: modulus = 158 ± 32 MPa, yield stress = 57 ± 23 MPa, and yield strain = 0.38 ± 0.08. Adipose-derived stem cells cultured on all surfaces remained viable and proliferated extensively over a 7-day culture period, and cells elongated on nanofiber bundles. The results of the study suggest that aligned nanofiber bundles may be useful for ligament and tendon tissue engineering based on their mechanical properties and ability to support cell adhesion, proliferation, and elongation.
Abstract:
Recent proxy measurements reveal that subglacial lakes beneath modern ice sheets periodically store and release large volumes of water, providing an important but poorly understood influence on contemporary ice dynamics and mass balance. This is because direct observations of how lake drainage initiates and proceeds are lacking. Here we present physical evidence of the mechanism and geometry of lake drainage from the discovery of relict subglacial lakes formed during the last glaciation in Canada. These palaeo-subglacial lakes comprised shallow (<10 m) lenses of water perched behind ridges orientated transverse to ice flow. We show that lakes periodically drained through channels incised into bed substrate (canals). Canals sometimes trend into eskers that represent the depositional imprint of the last high-magnitude lake outburst. The subglacial lakes and channels are preserved on top of glacial lineations, indicating long-term re-organization of the subglacial drainage system and coupling to ice flow.