890 results for Zero sets of bivariate polynomials
Abstract:
Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variations of the object class and limited training data. In addition, the computational complexity in the recognition process is also a concern in practice. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variations, and a second approach to speed up the classification processes. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where explicit parameterization of the within-class states is unavailable, a nonparametric formulation of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that can enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of foreground state, they can also be incorporated in a tracking-by-detection framework to recover foreground state in image sequences. To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches. In the second part of the thesis, we formulate a filter-and-refine scheme to speed up recognition processes. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the Face Recognition Grand Challenge version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and estimation of the view angle on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground state hypotheses, with virtually no loss in classification accuracy.
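A minimal sketch of the multiplicative-kernel idea described in this abstract, not the thesis code: the Gram matrix of a product between a foreground/background kernel and a kernel over the latent within-class state is fed to a standard SVM, and a test sample is then scored under a hypothesized state. The RBF components, their widths and the toy data are illustrative assumptions; the sketch uses scikit-learn's precomputed-kernel interface.

```python
# Illustrative sketch (assumed kernels and toy data, not the thesis implementation).
import numpy as np
from sklearn.svm import SVC

def rbf(a, b, gamma):
    """Gaussian kernel between two row-stacked sample arrays."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multiplicative_kernel(X, Theta, X2, Theta2, gamma_x=0.5, gamma_theta=2.0):
    """K((x, theta), (x', theta')) = K_fg(x, x') * K_state(theta, theta')."""
    return rbf(X, X2, gamma_x) * rbf(Theta, Theta2, gamma_theta)

# Toy data: 2-D appearance features with a 1-D pose parameter per sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
Theta = rng.uniform(0, 1, size=(40, 1))
y = (X[:, 0] + 0.5 * Theta[:, 0] > 0).astype(int)   # foreground vs. background labels

G = multiplicative_kernel(X, Theta, X, Theta)        # precomputed Gram matrix
clf = SVC(kernel="precomputed").fit(G, y)

# Scoring a test sample under a hypothesized pose theta*: K_state downweights
# training samples whose state is far from theta*, so the detector is "tuned" to it.
x_test = rng.normal(size=(1, 2))
theta_star = np.array([[0.3]])
score = clf.decision_function(multiplicative_kernel(x_test, theta_star, X, Theta))
print(score)
```

Evaluating the decision function at different values of the hypothesized state corresponds to the family of detectors "tuned to specific variations" mentioned in the abstract.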
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship, as the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows the dynamics to be learnt efficiently. Solving the two tasks simultaneously allows important information to be exchanged between them. If nonlinear models are required to capture the rich complexity of time series, the learning problem becomes harder because the nonlinearities in the two tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection and hence addresses the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework against competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
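A generative sketch, under invented parameters, of the piecewise linear structure the abstract describes: a small set of local linear models approximates the latent dynamics, and a local linear map lifts the low-dimensional state to the high-dimensional observations. It omits the graphical model and the variational Bayesian learning entirely and only illustrates how piecewise linear pieces can stand in for a nonlinear manifold and nonlinear dynamics.

```python
# Generative sketch only; all parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
K, latent_dim, obs_dim, T = 3, 2, 10, 200

# One local linear model per region of the latent space: damped rotations with
# different speeds and offsets, plus a local linear map up to the observations.
A = [0.95 * np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) for a in (0.1, 0.3, 0.6)]
b = [rng.normal(scale=0.1, size=latent_dim) for _ in range(K)]
C = [rng.normal(size=(obs_dim, latent_dim)) for _ in range(K)]
centers = rng.normal(size=(K, latent_dim))   # region centers that select the active model

def active_model(x):
    """Pick the local linear model whose region center is nearest the current state."""
    return int(np.argmin(((centers - x) ** 2).sum(axis=1)))

x = np.zeros(latent_dim)
Y = np.empty((T, obs_dim))
for t in range(T):
    s = active_model(x)
    x = A[s] @ x + b[s] + 0.01 * rng.normal(size=latent_dim)   # piecewise linear dynamics
    Y[t] = C[s] @ x + 0.05 * rng.normal(size=obs_dim)          # piecewise linear manifold map

print(Y.shape)   # (200, 10): high-dimensional series driven by 2-D piecewise linear dynamics
```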
Abstract:
Standard structure from motion algorithms recover the 3D structure of points. If a surface representation is desired, for example a piecewise planar representation, then a two-step procedure typically follows: in the first step the plane-membership of points is determined manually, and in a subsequent step planes are fitted to the sets of points thus determined and their parameters are recovered. This paper presents an approach for automatically segmenting planar structures from a sequence of images and simultaneously estimating their parameters. In the proposed approach the plane-membership of points is determined automatically, and the planar structure parameters are recovered directly in the algorithm rather than indirectly in a post-processing stage. Simulated and real experimental results show the efficacy of this approach.
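For contrast with the joint formulation proposed in the paper, here is a hedged sketch of the conventional alternative it improves on: determine plane membership and fit plane parameters from already-reconstructed 3D points, here with a standard sequential RANSAC step. The tolerance, iteration count and toy point cloud are assumptions.

```python
# Baseline illustration (sequential RANSAC plane extraction), not the paper's method.
import numpy as np

def fit_plane(pts):
    """Least-squares plane through points: unit normal n and offset d with n.x + d = 0."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n @ centroid

def ransac_plane(pts, n_iter=200, tol=0.02, seed=0):
    """Assign plane membership (inliers) and estimate plane parameters."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(pts @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    n, d = fit_plane(pts[best_inliers])          # refit on all inliers
    return n, d, best_inliers

# Toy cloud: one noisy plane (z ~ 0) plus background clutter.
rng = np.random.default_rng(0)
plane_pts = np.c_[rng.uniform(-1, 1, (300, 2)), 0.005 * rng.normal(size=(300, 1))]
clutter = rng.uniform(-1, 1, (100, 3))
normal, offset, members = ransac_plane(np.vstack([plane_pts, clutter]))
print(normal, offset, members.sum())
```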
Abstract:
We introduce a method for recovering the spatial and temporal alignment between two or more views of objects moving over a ground plane. Existing approaches either assume that the streams are globally synchronized, so that only the spatial alignment needs to be solved, or that the temporal misalignment is small enough that exhaustive search can be performed. In contrast, our approach can recover both the spatial and temporal alignment. We compute for each trajectory a number of interesting segments, and we use their description to form putative matches between trajectories. Each pair of corresponding interesting segments induces a temporal alignment, and defines an interval of common support across two views of an object that is used to recover the spatial alignment. Interesting segments and their descriptors are defined using algebraic projective invariants measured along the trajectories. Similarity between interesting segments is computed by taking into account the statistics of such invariants. Candidate alignment parameters are verified by checking the consistency, in terms of the symmetric transfer error, of all the putative pairs of corresponding interesting segments. Experiments are conducted with two different sets of data, one with two views of an outdoor scene featuring moving people and cars, and one with four views of a laboratory sequence featuring moving radio-controlled cars.
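A hedged illustration of the kind of quantity the abstract relies on, not necessarily the authors' exact descriptor: a classical projective invariant of five coplanar points, evaluated in sliding windows along a trajectory. Because the ground plane maps into each camera by a homography, such invariants agree across views up to noise, which is what makes segment descriptors comparable between cameras. The synthetic trajectory, the homography and the window spacing are assumptions.

```python
# Illustrative five-point projective invariant along a planar trajectory.
import numpy as np

def det3(p, q, r):
    """Determinant of three 2-D points in homogeneous coordinates."""
    return np.linalg.det(np.stack([np.append(p, 1.0), np.append(q, 1.0), np.append(r, 1.0)]))

def five_point_invariant(p1, p2, p3, p4, p5):
    """Ratio of point-triple determinants that is invariant under plane homographies."""
    return (det3(p1, p2, p3) * det3(p1, p4, p5)) / (det3(p1, p2, p4) * det3(p1, p3, p5))

def invariant_profile(traj, step=5):
    """Evaluate the invariant over sliding windows of five trajectory samples."""
    vals = []
    for i in range(0, len(traj) - 4 * step, step):
        pts = [traj[i + k * step] for k in range(5)]
        vals.append(five_point_invariant(*pts))
    return np.array(vals)

# A synthetic ground-plane trajectory and its image under an arbitrary homography.
t = np.linspace(0, 2 * np.pi, 200)
world = np.c_[np.cos(t) + 0.3 * t, np.sin(2 * t)]
H = np.array([[1.0, 0.2, 0.1], [0.0, 0.9, -0.3], [0.05, 0.01, 1.0]])
homog = np.c_[world, np.ones(len(world))] @ H.T
image = homog[:, :2] / homog[:, 2:3]

pw, pi = invariant_profile(world), invariant_profile(image)
print(np.max(np.abs(pw - pi)))   # agreement between views, up to numerical round-off
```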
Abstract:
We wish to construct a realization theory of stable neural networks and use this theory to model the variety of stable dynamics apparent in natural data. Such a theory should have numerous applications to constructing specific artificial neural networks with desired dynamical behavior. The networks used in this theory should have well-understood dynamics yet be as diverse as possible to capture natural diversity. In this article, I describe a parameterized family of higher-order, gradient-like neural networks which have known arbitrary equilibria with unstable manifolds of specified dimension. Moreover, any system with hyperbolic dynamics is conjugate to one of these systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is either non-constructive, lacks generality, or has unspecified attracting equilibria. More specifically, we construct a parameterized family of gradient-like neural networks with a simple feedback rule which will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.
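An illustrative toy example, not the article's construction: a gradient-like system x' = -grad V(x) whose equilibrium sits at a chosen point and whose unstable manifold has a chosen dimension, obtained by giving the quadratic potential the desired number of negative directions. The equilibrium location, the sign pattern and the integration settings below are assumptions.

```python
# Toy gradient-like system with a prescribed equilibrium and unstable-manifold dimension.
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([1.0, -0.5, 0.25])       # desired equilibrium point
signs = np.array([1.0, 1.0, -1.0])     # one negative direction -> 1-D unstable manifold
D = np.diag(signs)

def V(x):
    """Quadratic potential; a strict Lyapunov function along the stable directions."""
    return 0.5 * (x - a) @ D @ (x - a)

def rhs(t, x):
    """Gradient flow x' = -grad V(x) = -D (x - a)."""
    return -D @ (x - a)

# Perturb only along the stable directions: the flow returns to the equilibrium.
sol = solve_ivp(rhs, (0.0, 20.0), a + np.array([0.1, -0.1, 0.0]), rtol=1e-8, atol=1e-10)
print(np.allclose(sol.y[:, -1], a, atol=1e-3))   # True: converges back to `a`

# A small component along the third axis grows: that axis spans the unstable manifold.
sol = solve_ivp(rhs, (0.0, 10.0), a + np.array([0.0, 0.0, 1e-3]), rtol=1e-8, atol=1e-10)
print(abs(sol.y[2, -1] - a[2]) > 1.0)            # True: escapes along the unstable direction
```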
Abstract:
The International Energy Agency has repeatedly identified increased end-use energy efficiency as the quickest, least costly method of greenhouse gas mitigation, most recently in the 2012 World Energy Outlook, and urges all governing bodies to increase efforts to promote energy efficiency policies and technologies. The residential sector is recognised as a major potential source of cost-effective energy efficiency gains. Within the EU, this relative importance can be seen from a review of the National Energy Efficiency Action Plans (NEEAP) submitted by member states, which in all cases place a large emphasis on the residential sector. This is particularly true for Ireland, whose residential sector has historically had higher energy consumption and CO2 emissions than the EU average and whose first NEEAP targeted 44% of the energy savings to be achieved in 2020 from this sector. This thesis develops a bottom-up engineering archetype modelling approach to analyse the Irish residential sector and to estimate the technical energy savings potential of a number of policy measures. First, a model of space and water heating energy demand for new dwellings is built and used to estimate the technical energy savings potential due to the introduction of the 2008 and 2010 changes to Part L of the building regulations governing energy efficiency in new dwellings. Next, the author makes use of a valuable new dataset of Building Energy Rating (BER) survey results, first to characterise the highly heterogeneous stock of existing dwellings, and then to estimate the technical energy savings potential of an ambitious national retrofit programme targeting up to 1 million residential dwellings. This thesis also presents work carried out by the author as part of a collaboration to produce a bottom-up, multi-sector LEAP model for Ireland. Overall, this work highlights the challenges faced in successfully implementing both sets of policy measures. It points to the wide potential range of final savings possible from particular policy measures and the resulting high degree of uncertainty as to whether particular targets will be met, and identifies the key factors on which the success of these policies will depend. It makes recommendations on further modelling work and on the improvements necessary in the data available to researchers and policy makers alike in order to develop increasingly sophisticated residential energy demand models and better inform policy.
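A minimal sketch of the bottom-up archetype accounting such a model builds on, with invented archetypes and numbers rather than the thesis data: stock demand is the sum over archetypes of dwelling counts times unit energy intensities, and a retrofit scenario is evaluated by moving part of the stock to lower-intensity archetypes.

```python
# Illustrative bottom-up archetype accounting; every figure below is made up.
ARCHETYPES = {
    # name: (dwellings in stock, space + water heat demand per dwelling, kWh/yr)
    "pre-1980 detached":   (300_000, 22_000),
    "pre-1980 apartment":  (200_000, 12_000),
    "post-2000 detached":  (150_000, 14_000),
    "post-2000 apartment": (100_000,  7_000),
}

def stock_demand(archetypes):
    """Total residential heat demand in GWh/yr."""
    return sum(n * kwh for n, kwh in archetypes.values()) / 1e6

def retrofit_scenario(archetypes, uptake=0.5, saving_fraction=0.35):
    """Retrofit a share of the pre-1980 stock, reducing its unit demand."""
    out = {}
    for name, (n, kwh) in archetypes.items():
        if name.startswith("pre-1980"):
            retro = int(n * uptake)
            out[name] = (n - retro, kwh)
            out[name + " (retrofitted)"] = (retro, kwh * (1 - saving_fraction))
        else:
            out[name] = (n, kwh)
    return out

baseline = stock_demand(ARCHETYPES)
scenario = stock_demand(retrofit_scenario(ARCHETYPES))
print(f"baseline {baseline:.0f} GWh/yr, scenario {scenario:.0f} GWh/yr, "
      f"technical saving {baseline - scenario:.0f} GWh/yr")
```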
Abstract:
The influence of communication technology on group decision-making has been examined in many studies, but the findings are inconsistent. Some studies showed a positive effect on decision quality, while other studies have shown that communication technology makes decisions even worse. One possible explanation for these divergent findings could be the use of different Group Decision Support Systems (GDSS) in these studies, with some GDSS fitting the given task better than others and offering different sets of functions. This paper outlines an approach based on an information system designed solely to examine the effect of (1) anonymity, (2) voting and (3) blind picking on decision quality, discussion quality and perceived quality of information.
Abstract:
BACKGROUND: There is considerable interest in the development of methods to efficiently identify all coding variants present in large sample sets of humans. Three approaches are possible: whole-genome sequencing, whole-exome sequencing using exon capture methods, and RNA-Seq. While whole-genome sequencing is the most complete, it remains sufficiently expensive that cost-effective alternatives are important. RESULTS: Here we provide a systematic exploration of how well RNA-Seq can identify human coding variants by comparing variants identified through high-coverage whole-genome sequencing to those identified by high-coverage RNA-Seq in the same individual. This comparison allowed us to directly evaluate the sensitivity and specificity of RNA-Seq in identifying coding variants, and to evaluate how key parameters such as the degree of coverage and the expression levels of genes interact to influence performance. We find that only 40% of exonic variants identified by whole-genome sequencing were captured using RNA-Seq; this number rose to 81% when concentrating on genes known to be well expressed in the source tissue. We also find that a high false positive rate can be problematic when working with RNA-Seq data, especially at higher levels of coverage. CONCLUSIONS: We conclude that as long as a tissue relevant to the trait under study is available and suitable quality control screens are implemented, RNA-Seq is a fast and inexpensive alternative approach for finding coding variants in genes with sufficiently high expression levels.
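A toy sketch of the comparison the paper performs: treat the whole-genome calls as truth and measure what fraction the RNA-Seq calls recover, overall and restricted to well-expressed genes, plus the candidate false positives. The variant identifiers, gene assignments and expression values are invented placeholders.

```python
# Toy comparison of variant call sets; all identifiers and values are placeholders.
wgs_exonic = {"chr1:1000A>G", "chr1:2000C>T", "chr2:500G>A", "chr3:750T>C", "chr7:90del"}
rnaseq_calls = {"chr1:1000A>G", "chr2:500G>A", "chr9:100C>G"}   # includes a false positive
gene_of = {"chr1:1000A>G": "GENE1", "chr1:2000C>T": "GENE1", "chr2:500G>A": "GENE2",
           "chr3:750T>C": "GENE3", "chr7:90del": "GENE4", "chr9:100C>G": "GENE5"}
expression = {"GENE1": 120.0, "GENE2": 45.0, "GENE3": 0.4, "GENE4": 0.0, "GENE5": 2.0}

def sensitivity(truth, calls):
    """Fraction of truth-set variants recovered by the call set."""
    return len(truth & calls) / len(truth)

def well_expressed(truth, min_expr=10.0):
    """Restrict the truth set to variants in genes above an expression cutoff."""
    return {v for v in truth if expression[gene_of[v]] >= min_expr}

print(f"overall sensitivity: {sensitivity(wgs_exonic, rnaseq_calls):.0%}")
print(f"sensitivity in well-expressed genes: "
      f"{sensitivity(well_expressed(wgs_exonic), rnaseq_calls):.0%}")
print(f"candidate false positives: {rnaseq_calls - wgs_exonic}")
```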
Abstract:
The neurodegenerative disease Friedreich's ataxia (FRDA) is the most common autosomal-recessively inherited ataxia and is caused by a GAA triplet repeat expansion in the first intron of the frataxin gene. In this disease, transcription of frataxin, a mitochondrial protein involved in iron homeostasis, is impaired, resulting in a significant reduction in mRNA and protein levels. Global gene expression analysis was performed in peripheral blood samples from FRDA patients as compared to controls, which suggested altered expression patterns pertaining to genotoxic stress. We then confirmed the presence of genotoxic DNA damage by using a gene-specific quantitative PCR assay and discovered an increase in both mitochondrial and nuclear DNA damage in the blood of these patients (p < 0.0001 for each). Additionally, frataxin mRNA levels correlated with the age of disease onset and displayed unique sets of gene alterations involved in immune response, oxidative phosphorylation, and protein synthesis. Many of the key pathways observed by transcription profiling were downregulated, and we believe these data suggest that patients with prolonged frataxin deficiency undergo a systemic survival response to chronic genotoxic stress and consequent DNA damage detectable in blood. In conclusion, our results yield insight into the nature and progression of FRDA, as well as possible therapeutic approaches. Furthermore, the identification of potential biomarkers, including the DNA damage found in peripheral blood, may have predictive value in future clinical trials.
Abstract:
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and uncorrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with three different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus uncorrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
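A simplified sketch of the projection-domain correction and the image-quality metrics described above, not the actual system pipeline: the measured 2D scatter estimate is subtracted from each projection before reconstruction, and contrast and SNR are computed from lesion and background regions of interest. Array shapes, the scatter estimate and the ROI definitions are illustrative assumptions.

```python
# Illustrative scatter subtraction and contrast/SNR metrics on synthetic data.
import numpy as np

def scatter_correct(projections, scatter):
    """Subtract a measured 2-D scatter estimate from every projection angle,
    clipping at zero so corrected intensities stay physical."""
    return np.clip(projections - scatter, 0.0, None)

def contrast_and_snr(image, lesion_mask, background_mask):
    """Contrast = (mean_lesion - mean_bg) / mean_bg; SNR = |difference| / bg noise."""
    lesion = image[lesion_mask].mean()
    bg_mean, bg_std = image[background_mask].mean(), image[background_mask].std()
    return (lesion - bg_mean) / bg_mean, abs(lesion - bg_mean) / bg_std

rng = np.random.default_rng(0)
projections = rng.uniform(0.5, 1.0, size=(360, 64, 64))   # (angles, rows, cols)
scatter_estimate = np.full((64, 64), 0.1)                  # measured per-pixel scatter
corrected = scatter_correct(projections, scatter_estimate)

# Toy reconstructed slice: a brighter "acrylic sphere" embedded in noisy background.
slice_img = rng.normal(1.0, 0.05, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
lesion = (yy - 64) ** 2 + (xx - 64) ** 2 < 6 ** 2
slice_img[lesion] += 0.08
background = (yy - 64) ** 2 + (xx - 32) ** 2 < 6 ** 2
print(contrast_and_snr(slice_img, lesion, background))
```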
Abstract:
We present a theory of hypoellipticity and unique ergodicity for semilinear parabolic stochastic PDEs with "polynomial" nonlinearities and additive noise, considered as abstract evolution equations in some Hilbert space. It is shown that if Hörmander's bracket condition holds at every point of this Hilbert space, then a lower bound on the Malliavin covariance operator μ_t can be obtained. Informally, this bound can be read as "Fix any finite-dimensional projection Π on a subspace of sufficiently regular functions. Then the eigenfunctions of μ_t with small eigenvalues have only a very small component in the image of Π." We also show how to use a priori bounds on the solutions to the equation to obtain good control on the dependency of the bounds on the Malliavin matrix on the initial condition. These bounds are sufficient in many cases to obtain the asymptotic strong Feller property introduced in [HM06]. One of the main novel technical tools is an almost sure bound from below on the size of "Wiener polynomials," where the coefficients are possibly non-adapted stochastic processes satisfying a Lipschitz condition. By exploiting the polynomial structure of the equations, this result can be used to replace Norris' lemma, which is unavailable in the present context. We conclude by showing that the two-dimensional stochastic Navier-Stokes equations and a large class of reaction-diffusion equations fit the framework of our theory.
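For readers who want the objects behind the informal statement, the following is the standard quadratic-form expression of the Malliavin covariance operator for an equation with additive noise, together with an equivalent restatement of the bound; this is textbook notation rather than a quotation from the paper, and the symbols J_{s,t} (Jacobian of the solution map) and Q e_k (forced directions) are assumptions of this sketch.

```latex
% Standard form of the Malliavin covariance quadratic form (additive noise), plus
% an informal restatement of the lower bound described in the abstract.
\[
  \langle \varphi,\, \mu_t\, \varphi \rangle
    \;=\; \sum_{k} \int_0^t \big\langle J_{s,t}\, Q e_k,\; \varphi \big\rangle^{2}\, ds,
  \qquad
  \langle \varphi,\, \mu_t\, \varphi \rangle \;\ge\; \varepsilon
  \quad \text{for all } \varphi \in \operatorname{Range}\,\Pi,\ \|\varphi\| = 1,
\]
% with the second inequality holding with high probability, so that any eigenfunction
% of $\mu_t$ with a small eigenvalue is nearly orthogonal to the range of $\Pi$.
```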
Abstract:
Remembering past events - or episodic retrieval - consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-aspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed this issue by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute both to imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex and angular gyrus, showed a pattern common to imagery/retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery, but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.
Abstract:
Robert Schumann (1810-1856) and Johannes Brahms (1833-1897), in some ways Schumann's artistic descendant, are the most important and representative German piano composers of the Romantic period. Schumann was already a mature and established musician in 1853 when he first met the young Brahms and recognized his talents, an encounter that had a long-lasting effect on the lives and careers of both men. After Schumann’s mental breakdown and death, Brahms maintained his admiration of Schumann’s music and preserved an intimate relationship with Clara Schumann. In spite of the personal and musical closeness of the two men, Schumann’s music is stylistically distinct from that of Brahms. Brahms followed traditions from Baroque and Classical music, and avoided using images and expressive titles in his music. He intermingled earlier musical forms with the multicolored tones of German Romanticism in an extraordinary way. In contrast, Schumann saw himself as a radical composer devoted to personal emotionalism and spontaneity. He favored programmatic titles for his character pieces and extra-musical references in his music. While developing their own musical styles as German Romantic composers, Schumann and Brahms both utilized the piano as a resourceful tool for self-realization and compositional development. To investigate and compare the main characteristics of Schumann and Brahms’s piano music, I looked at three genres. First, in the category of the piano concerto, I chose two major Romantic works, Schumann’s A minor concerto and Brahms’s B-flat major concerto. Second, for the category of piano variations I included two sets by Brahms because the variation framework was such an important vehicle for him to express his musical thoughts. Schumann’s unique motivic approach to variation is displayed vividly in his character-piece cycle Carnaval. Third, the category of the character piece, perhaps the favorite medium of Romantic expression at the piano, is shown by Schumann’s Papillons and Brahms’s sets of pieces Op. 118 and Op. 119. This performance dissertation consists of three recitals performed in the Gildenhorn Recital Hall at the University of Maryland, College Park. These recitals are documented on compact disc recordings that are housed within the University of Maryland Library System.
Abstract:
The variation and fugue originated in the 15th and 16th centuries and blossomed during the Baroque and Classical periods. In a set of variations, a theme with a particular structure precedes a series of pieces that usually share the same or a very similar structure. A fugue is a work written in imitative counterpoint in which the theme is stated successively in all voices of a polyphonic texture. Beethoven’s use of variation and fugue in large-scale works greatly influenced his contemporaries. After the Classical period, variations continued to be popular, and numerous composers employed the technique in various musical genres. Fugues had pedagogical associations, and by the middle of the 19th century became a requirement in conservatory instruction, modeled after Bach’s Well-Tempered Clavier. In the 20th century, the fugue was revived in the spirit of neoclassicism; it was incorporated in sonatas, and sets of preludes and fugues were composed. Schubert's Wanderer Fantasy presents his song Der Wanderer through thematic transformations, including a fugue and a set of variations. Liszt was highly influenced by this, as shown by his use of thematic transformation, with a fugue as one of the transformations, in his Sonata in B minor. In Schumann’s Symphonic Études, Rachmaninoff's Rhapsody on a Theme of Paganini and Copland’s Piano Variations, the variation serves as the basis for the entire work. Prokofiev and Schubert take a different approach in the Piano Concerto No. 3 and the Wanderer Fantasy, employing the variation in a single movement. Unlike Schubert's and Liszt's use of the fugue as part of a larger piece or movement, Franck’s Prélude, Choral et Fugue and Shchedrin’s Polyphonic Notebook use it in its independent form. Since the Classical period, the variation and fugue have evolved under the stylistic and technical influences of earlier composers. It is interesting and remarkable to observe the unique effects each had on a particular work. As true and dependable classic forms, they remain popular by offering the composer an organizational framework for musical imagination.
Abstract:
BACKGROUND: Previous mathematical models for hepatic and tissue one-carbon metabolism have been combined and extended to include a blood plasma compartment. We use this model to study how the concentrations of metabolites that can be measured in the plasma are related to their respective intracellular concentrations. METHODS: The model consists of a set of ordinary differential equations, one for each metabolite in each compartment, and kinetic equations for metabolism and for transport between compartments. The model was validated by comparison to a variety of experimental data such as the methionine load test and variation in folate intake. We further extended this model by introducing random and systematic variation in enzyme activity. OUTCOMES AND CONCLUSIONS: A database of 10,000 virtual individuals was generated, each with a quantitatively different one-carbon metabolism. Our population has distributions of folate and homocysteine in the plasma and tissues that are similar to those found in the NHANES data. The model reproduces many other sets of clinical data. We show that tissue and plasma folate are highly correlated, but liver and plasma folate much less so. Oxidative stress increases the plasma S-adenosylmethionine/S-adenosylhomocysteine (SAM/SAH) ratio. We show that many relationships among variables are nonlinear, and in many cases we provide explanations. Sampling of subpopulations produces dramatically different apparent associations among variables. The model can be used to simulate populations with polymorphisms in genes for folate metabolism and variations in dietary input.
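A minimal sketch of the compartmental ODE structure described in METHODS, with one metabolite tracked in a liver and a plasma compartment, saturable (Michaelis-Menten) metabolism and first-order transport between compartments. The metabolite choice, rate constants and kinetic parameters are invented for illustration and are not the published model.

```python
# Illustrative two-compartment metabolite model; all parameter values are assumptions.
from scipy.integrate import solve_ivp

V_MAX, K_M = 50.0, 20.0      # Michaelis-Menten clearance in the liver (umol/h, umol/L)
K_OUT, K_IN = 2.0, 0.5       # first-order transport liver <-> plasma (1/h)
PRODUCTION = 10.0            # constant hepatic production (umol/h)

def rhs(t, y):
    liver, plasma = y
    metabolism = V_MAX * liver / (K_M + liver)            # saturable enzymatic clearance
    d_liver = PRODUCTION - metabolism - K_OUT * liver + K_IN * plasma
    d_plasma = K_OUT * liver - K_IN * plasma               # exchange with the blood compartment
    return [d_liver, d_plasma]

# One ODE per metabolite per compartment; here a single metabolite in two compartments.
sol = solve_ivp(rhs, (0.0, 48.0), [5.0, 5.0])
liver_ss, plasma_ss = sol.y[:, -1]
print(f"steady-state liver ~ {liver_ss:.1f} umol/L, plasma ~ {plasma_ss:.1f} umol/L")
```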