926 results for compositional geometry
Abstract:
This work presents a program for interactive computer-based training of students on the subject “Mutual intersection of pyramids in axonometry”. Our software is a set of three modules, which we call “student”, “teacher”, and “autopilot”. It provides the final solution of the problem, traceability of the significant moments in its solution, and a 3D image of the finished composition of the two intersecting polyhedra, stripped of the construction lines and subject to rotation and translation.
Abstract:
The paper presents the discovery of two remarkable points of the triangle by means of “THE GEOMETER’S SKETCHPAD” software. Some properties of these points are considered as well.
Abstract:
We consider square matrices whose first-row elements are members of one arithmetic progression and whose second-row elements are members of another. We prove that the set of these matrices is a group. We then give a parameterization of this group and investigate some invariants of the corresponding geometry. We find an invariant of any two points and an invariant of any six points. All calculations are performed in Maple.
Abstract:
Boyko Bl. Banchev - A rationale for and description of a compositional-style programming language for experimental and educational purposes is presented. By “compositional” we mean a functional style of programming in which computation is a hierarchy of compositions and applications of functions. One of the language's data types is that of geometric figures, which can be obtained through simple relating rules and thus also form hierarchical compositions. The language is strongly influenced by GeomLab but differs from it significantly in a number of respects. The article reviews the main features of the language; its detailed description and its figure-construction capabilities will be presented in an accompanying publication.
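As a rough illustration of the style the abstract describes (not the language itself; the Figure type and the beside/above combinators below are invented for this sketch), a Python analogue in which figures are values combined by simple relating rules into hierarchical compositions might look like this:

```python
# Illustrative Python analogue of a compositional, figure-building style.
# The Figure type and the beside/above combinators are invented for this
# sketch; they are not part of the language described in the abstract.
from dataclasses import dataclass

@dataclass(frozen=True)
class Figure:
    name: str
    children: tuple = ()

def beside(left: Figure, right: Figure) -> Figure:
    """Relate two figures horizontally, yielding a new composite figure."""
    return Figure("beside", (left, right))

def above(top: Figure, bottom: Figure) -> Figure:
    """Relate two figures vertically, yielding a new composite figure."""
    return Figure("above", (top, bottom))

# Computation as a hierarchy of compositions and applications:
square, triangle = Figure("square"), Figure("triangle")
house = above(triangle, square)               # a simple relating rule
street = beside(house, beside(house, house))  # a hierarchical composition
print(street)
```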
Abstract:
2000 Mathematics Subject Classification: 53C42, 53C15.
Abstract:
In SNAP (surface nanoscale axial photonics) resonators, propagation of a slow whispering-gallery mode along an optical fiber is controlled by nanoscale variation of the effective radius of the fiber [1]. Similar behavior can be realized in so-called nanobump microresonators, in which the introduced variation of the effective radius is asymmetric, i.e. depends on the azimuthal coordinate [2]. The possibility of realizing such structures “on the fly” in an optical fiber by applying external electrostatic fields to it is discussed in this work. It is shown that local variations in the effective radius of the fiber and in its refractive index caused by external electric fields can be large enough to observe SNAP-structure-like behavior in an originally flat optical fiber. Theoretical estimates of the introduced refractive-index and effective-radius changes and results of finite-element calculations are presented. Various effects are taken into account: electromechanical (piezoelectricity and electrostriction), electro-optical (Pockels and Kerr effects), and elasto-optical. Different initial fiber cross-sections are studied. The use of linear isotropic (such as silica) and nonlinear anisotropic (such as lithium niobate) fiber materials is discussed. REFERENCES: [1] M. Sumetsky, J. M. Fini, Opt. Exp. 19, 26470 (2011). [2] L. A. Kochkurov, M. Sumetsky, Opt. Lett. 40, 1430 (2015).
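For a rough sense of the magnitudes involved, the linear electro-optic (Pockels) contribution alone can be estimated from the standard relation Δn ≈ ½·n³·r·E. A minimal order-of-magnitude sketch, assuming textbook values for lithium niobate and an arbitrarily chosen field strength (the paper's actual finite-element results also account for the other effects listed above):

```python
# Order-of-magnitude estimate of the Pockels-effect index change,
# delta_n = 0.5 * n**3 * r33 * E. Material constants are textbook values
# for lithium niobate; the applied field strength is an assumed input.
n_e = 2.2          # extraordinary refractive index of LiNbO3
r33 = 30.8e-12     # electro-optic coefficient r33, m/V
E = 1.0e6          # assumed applied field, V/m

delta_n = 0.5 * n_e**3 * r33 * E
print(f"delta_n ~ {delta_n:.2e}")  # ~1.6e-4 for these inputs
```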
Abstract:
The 9/11 Act mandates the inspection of 100% of cargo shipments entering the U.S. by 2012 and 100% inspection of air cargo by March 2010. So far, only 5% of inbound shipping containers are inspected thoroughly, while air cargo inspections have fared better at 50%. Government officials have admitted that these milestones cannot be met, since the appropriate technology does not exist. This research presents a novel planar solid-phase microextraction (PSPME) device with enhanced surface area and capacity for collection of the volatile chemical signatures in air that are emitted from illicit compounds, for direct introduction into ion mobility spectrometers (IMS) for detection. These IMS detectors are widely used to detect particles of illicit substances and do not have to be adapted specifically to this technology. For static extractions, PDMS and sol-gel PDMS PSPME devices provide significant increases in sensitivity over conventional fiber SPME. Results show a 50–400 times increase in mass detected of piperonal and a 2–4 times increase for TNT. In a blind study of 6 cases suspected to contain varying amounts of MDMA, PSPME-IMS correctly detected 5 positive cases with no false positives or negatives. One of these cases had minimal amounts of MDMA, resulting in a false negative response for fiber SPME-IMS. A La (dihed) phase chemistry has shown an increase in the extraction efficiency of TNT and 2,4-DNT and enhanced retention over time. An alternative PSPME device was also developed for the rapid (seconds) dynamic sampling and preconcentration of large volumes of air for direct thermal desorption into an IMS. This device affords high extraction efficiencies due to strong retention properties under ambient conditions, resulting in ppt detection limits when 3.5 L of air are sampled over the course of 10 seconds. Dynamic PSPME was used to sample the headspace over the following: MDMA tablets (12–40 ng of piperonal detected), high explosives (Pentolite) (0.6 ng of TNT detected), and several smokeless powders (26–35 ng of 2,4-DNT and 11–74 ng of DPA detected). PSPME-IMS technology is flexible to end-user needs, low-cost, rapid, sensitive, easy to use, easy to implement, and effective.
Abstract:
Structural Health Monitoring (SHM) systems were developed to evaluate the integrity of a system during operation and to quickly identify maintenance problems. They will be used in future aerospace vehicles to improve safety, reduce cost, and minimize the maintenance time of a system. Many SHM systems have already been developed to evaluate the integrity of plates and are used in marine structures; their implementation in manufacturing processes is still awaited. The application of SHM methods to complex geometries and to welds presents two important challenges in this area of research. This research work started by studying the characteristics of piezoelectric actuators, and a small energy harvester was designed. The output voltages at different frequencies of vibration were acquired to determine the nonlinear characteristics of the piezoelectric stripe actuators. The frequency response was evaluated experimentally. AA-battery-size energy harvesting devices were developed using these actuators. When the round and square cross-section devices were excited at a frequency of 50 Hz, they generated 16 V and 25 V, respectively. The Surface Response to Excitation (SuRE) and Lamb wave methods were used to estimate the condition of parts with complex geometries; cutting tools and welded plates were considered. Both approaches used piezoelectric elements attached to the surfaces of the considered parts. When the SuRE method was used, the variation of the magnitude of the frequency response was evaluated and the sum of the squares of the differences was calculated. The envelope of the received signal was used for the analysis of wave propagation. Bi-orthogonal wavelet (Binlet) analysis was also used to evaluate the data obtained with the Lamb wave technique. Both the Lamb wave and SuRE approaches, along with the three methods for data analysis, worked effectively to detect increasing tool wear. Similarly, they detected defects on the plate, on the weld, and on a separate plate without any sensor as long as it was welded to the test plate.
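The two scalar analyses described above are straightforward to state concretely. A minimal sketch on synthetic placeholder data (not the study's measurements): the SuRE-style metric is the sum of squared differences between a baseline and a current frequency-response magnitude, and the wave-propagation analysis uses the Hilbert envelope of the received signal:

```python
# Sketch of the two data-analysis steps described above, on placeholder data.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
baseline = np.abs(rng.normal(size=1024))            # placeholder baseline spectrum
current = baseline + 0.05 * rng.normal(size=1024)   # placeholder later spectrum

# SuRE-style damage metric: sum of squared differences of the magnitudes.
ssd = np.sum((current - baseline) ** 2)

# Lamb-wave analysis: envelope of a received burst via the Hilbert transform.
t = np.linspace(0.0, 1e-3, 2048)
received = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 5e-4) ** 2) / 1e-8)
envelope = np.abs(hilbert(received))

print(f"SSD metric: {ssd:.3f}, peak envelope: {envelope.max():.3f}")
```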
Abstract:
We examined periphyton along transects in five Everglades marshes and related compositional and functional aspects to phosphorus (P) gradients caused by enriched inflows. Results were compared to those of a P-addition experiment in a pristine Everglades marsh. While the water total P (TP) concentration was not related to P load in the marshes or in the experiment, the concentration of TP in periphyton was strongly correlated with the distance from the P source. Increased P concentration in periphyton was associated with a loss of biomass, particularly of the calcifying mat-forming matrix, regardless of the growth form of the periphyton (epiphytic, floating, or epilithic). Diatom species composition was also strongly related to P availability, but the TP optima of many species varied among marshes. Enriched periphyton communities were found 14 km downstream of P inputs to one marsh that has been receiving enhanced P loads for decades, where other studies using different biotic indicators show negligible change in the same marsh. Although recovery trajectories are unknown, periphyton indicators should serve as excellent metrics for the progression or amelioration of P-related effects in the Everglades.
Abstract:
The landscape structure of emergent wetlands in undeveloped portions of the southeastern coastal Everglades is composed of two distinct components: scattered forest fragments, or tree islands, surrounded by a low matrix of marsh or shrub-dominated vegetation. Changes in the matrix, including the inland transgression of salt-tolerant mangroves and the recession of sawgrass marshes, have been attributed to the combination of sea level rise and reductions in fresh water supply. In this study we examined concurrent changes in the composition of the region’s tree islands over a period of almost three decades. No trend in species composition toward more salt-tolerant trees was observed anywhere, but species characteristic of freshwater swamps increased in forests in which fresh water supply was augmented. Tree islands in the coastal Everglades appear to be buffered from some of the short-term effects of salt water intrusion, owing to their ability to build soils above the surface of the surrounding wetlands and thus maintain mesophytic conditions. However, the apparent resistance of tree islands to changes associated with sea level rise is likely to be a temporary stage, as continued salt water intrusion will eventually overwhelm the forests’ capacity to maintain fresh water in the rooting zone.
Abstract:
Subspaces and manifolds are two powerful models for high-dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
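The quantities governing these two regimes are directly computable. A minimal sketch, with illustrative dimensions and random bases: SciPy returns the principal angles between two subspaces, from which the product of sines (small-mismatch regime) and sum of squared sines (larger-mismatch regime) follow:

```python
# Principal angles between two subspaces and the two error-governing
# quantities mentioned above. Dimensions and bases are illustrative.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))   # basis for a 3-dimensional subspace of R^20
B = rng.normal(size=(20, 3))   # basis for another 3-dimensional subspace

theta = subspace_angles(A, B)             # principal angles, in radians
prod_sines = np.prod(np.sin(theta))       # governs the small-mismatch regime
sum_sq_sines = np.sum(np.sin(theta)**2)   # governs the larger-mismatch regime
print(prod_sines, sum_sq_sines)
```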
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
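A minimal sketch of the second approach's ingredients, on synthetic placeholder data: build a k-nearest-neighbor adjacency matrix for points sampled from a manifold and extract its leading eigenvectors, the principal components with which learned features would be encouraged to align (k and the data are assumptions of the sketch):

```python
# k-NN graph adjacency for manifold-sampled data and its leading
# eigenvectors (principal components). Data and k are placeholders.
import numpy as np

rng = np.random.default_rng(2)
t = rng.uniform(0.0, 2.0 * np.pi, size=200)
X = np.stack([np.cos(t), np.sin(t)], axis=1)   # samples from a 1-D manifold

k = 5
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
np.fill_diagonal(D, np.inf)                    # exclude self-neighbors
A = np.zeros_like(D)
rows = np.arange(len(X))[:, None]
A[rows, np.argsort(D, axis=1)[:, :k]] = 1.0    # connect k nearest neighbors
A = np.maximum(A, A.T)                         # symmetrize the adjacency

eigvals, eigvecs = np.linalg.eigh(A)           # eigenvalues in ascending order
leading = eigvecs[:, -10:]                     # top-10 principal components
print(leading.shape)
```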
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
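The per-datum deviation statistic at the core of the anomaly detector is the residual after projecting each incoming datum onto the currently tracked subspace. A minimal sketch with a fixed, already-learned subspace standing in for the paper's full multiscale tree (all data here are synthetic placeholders):

```python
# Residual of streaming data with respect to a tracked subspace: the
# per-datum deviation statistic used for anomaly detection. The subspace
# is fixed here, standing in for the adaptively tracked model.
import numpy as np

rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.normal(size=(50, 3)))  # orthonormal basis, 3-dim subspace

def deviation(x: np.ndarray, U: np.ndarray) -> float:
    """Norm of the component of x orthogonal to the subspace spanned by U."""
    return float(np.linalg.norm(x - U @ (U.T @ x)))

stream = [U @ rng.normal(size=3) + 0.01 * rng.normal(size=50) for _ in range(100)]
stream.append(rng.normal(size=50))             # an abrupt out-of-model datum
stats = [deviation(x, U) for x in stream]
print(f"typical residual: {np.median(stats):.3f}, last: {stats[-1]:.3f}")
```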
Abstract:
This dissertation consists of three distinct components: (1) “Double Rainbow,” a notated composition for an acoustic ensemble of 10 instruments, ca. 36 minutes; (2) “Appalachiana,” a fixed-media composition for electro-acoustic music and video, ca. 30 minutes; and (3) “‘The Invisible Mass’: Exploring Compositional Technique in Alfred Schnittke’s Second Symphony,” an analytical article.
(1) Double Rainbow is a ca. 36-minute composition in four movements scored for 10 instruments: flute, Bb clarinet (doubling on bass clarinet), tenor saxophone (doubling on alto saxophone), French horn, percussion (glockenspiel, vibraphone, wood block, 3 toms, snare drum, bass drum, suspended cymbal), piano, violin, viola, cello, and double bass. Each of the four movements of the piece explores its own distinct character and set of compositional goals. The piece is presented as a musical score and as a recording, which was extensively treated in post-production.
(2) Appalachiana is a ca. 30-minute fixed-media composition for music and video. The musical component was created as a vehicle to showcase several approaches to electro-acoustic music composition: FFT re-synthesis for time-manipulation effects, a custom-built software instrument that implements generative approaches to creating rhythm and pitch patterns, a recording of rain used to create rhythmic triggers for software instruments, and additional components recorded with acoustic instruments. The video component transforms footage of natural landscapes filmed at several locations in North Carolina, Virginia, and West Virginia into a surreal narrative using a variety of color, lighting, distortion, and time-manipulation video effects.
(3) “‘The Invisible Mass’: Exploring Compositional Technique in Alfred Schnittke’s Second Symphony” is an analytical article that focuses on Alfred Schnittke’s compositional technique as evidenced in the construction of his Second Symphony and discussed by the composer in a number of previously untranslated articles and interviews. Though this symphony is pivotal in the composer’s oeuvre, there are currently no scholarly articles that offer in-depth analyses of the piece. The article combines analyses of the harmony, form, and orchestration in the Second Symphony with relevant quotations from the composer, some from published and translated sources and others newly translated by the author from research at the Russian State Library in St. Petersburg. These offer a perspective on how Schnittke’s compositional technique combines systematic geometric design with keen musical intuition.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
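For orientation, the classical ingredient generalized here is easy to write down: the Bingham density on the unit hypersphere is p(x) ∝ exp(xᵀAx), which satisfies p(x) = p(−x) and is therefore suited to unit quaternions, where q and −q encode the same rotation. A minimal sketch of the unnormalized density with a placeholder concentration matrix (the mirrored normal-Bingham construction itself is the dissertation's contribution and is not reproduced here):

```python
# Unnormalized Bingham density p(x) ∝ exp(x^T A x) on the unit sphere.
# A is a placeholder concentration matrix; note p(x) = p(-x), the
# antipodal symmetry that matches unit-quaternion rotations.
import numpy as np

A = np.diag([-10.0, -5.0, -1.0, 0.0])   # placeholder concentration matrix

def bingham_unnorm(x: np.ndarray) -> float:
    x = x / np.linalg.norm(x)            # project onto the unit sphere
    return float(np.exp(x @ A @ x))

q = np.array([0.2, 0.1, 0.3, 0.9])
print(bingham_unnorm(q), bingham_unnorm(-q))   # equal, by symmetry
```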
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
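For contrast with the feature-level approach, a minimal sketch of the classical image-level inverse perspective mapping using OpenCV; the four point correspondences below are placeholders standing in for a ground-plane calibration that would normally be derived from the estimated camera pose:

```python
# Classical inverse perspective mapping of an image with OpenCV. The
# correspondences are placeholders for a real pose-derived calibration.
import cv2
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder camera frame

# Four image points of a ground-plane region and their bird's-eye targets.
src = np.float32([[200, 300], [440, 300], [620, 470], [20, 470]])
dst = np.float32([[0, 0], [300, 0], [300, 400], [0, 400]])

H = cv2.getPerspectiveTransform(src, dst)         # 3x3 homography
birdseye = cv2.warpPerspective(image, H, (300, 400))
print(birdseye.shape)                             # (400, 300, 3)
```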
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional fields into a global data set. We use the S-2004 global 1-minute bathymetry as the backbone and add an improved version of the BEDMAP topography for an area that roughly coincides with the Antarctic continental shelf. Locations of the merging line have been carefully adjusted in order to get the best out of each data set. High-resolution gridded data for upper and lower ice surface topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier have been carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of Alfred Wegener Institute (AWI), British Antarctic Survey (BAS) and Lamont-Doherty Earth Observatory (LDEO), gridded, and again carefully merged into the existing bathymetry map. The global 1-minute dataset (RTopo-1 Version 1.0.5) has been split into two netCDF files. The first contains digital maps for global bedrock topography, ice bottom topography, and surface elevation. The second contains the auxiliary maps for data sources and the surface type mask. A regional subset that covers all variables for the region south of 50 deg S is also available in netCDF format. Datasets for the locations of grounding and coast lines are provided in ASCII format.
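Reading such a dataset is routine with the netCDF4 package. A minimal sketch; the file name and the variable names below are assumptions for illustration and should be checked against the file's own metadata:

```python
# Sketch of reading the RTopo-1 netCDF maps. The file and variable names
# are assumptions; print ds.variables to see what the file really contains.
from netCDF4 import Dataset

with Dataset("RTopo-1.0.5_global.nc") as ds:      # assumed local file name
    print(ds.variables.keys())                    # inspect the actual names
    # Hypothetical variable names for the three digital maps:
    bedrock = ds.variables["bedrock_topography"][:]
    ice_bottom = ds.variables["ice_bottom_topography"][:]
    surface = ds.variables["surface_elevation"][:]
```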