958 results for Dynamic texture recognition


Relevance:

30.00%

Publisher:

Abstract:

The work presented in this thesis is concerned with the dynamic behaviour of structural joints which are both loaded and excited normal to the joint interface. Since the forces on joints are transmitted through their interface, the surface texture of joints was carefully examined. A computerised surface measuring system was developed and computer programs were written. Surface flatness was functionally defined, measured and quantised into a form suitable for the theoretical calculation of the joint stiffness. Dynamic stiffness and damping were measured at various preloads for a range of joints with different surface textures. Dry clean and lubricated joints were tested and the results indicated an increase in damping for the lubricated joints of between 30 and 100 times. A theoretical model for the computation of the stiffness of dry clean joints was built. The model is based on the theory that the elastic recovery of joints is due to the recovery of the material behind the loaded asperities. It takes into account, in a quantitative manner, the flatness deviations present on the surfaces of the joint. The theoretical results were found to be in good agreement with those measured experimentally. It was also found that theoretical assessment of the joint stiffness could be carried out using a different model based on the recovery of loaded asperities into a spherical form. Stepwise procedures are given for designing a joint with a particular stiffness. A theoretical model for the loss factor of dry clean joints was built. The theoretical results are in reasonable agreement with those experimentally measured. The theoretical models for the stiffness and loss factor were employed to evaluate the second natural frequency of the test rig. The results are in good agreement with the experimentally measured natural frequencies.
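The stiffness and loss-factor models themselves are not reproduced in this abstract; purely as an illustration, the sketch below (with a hypothetical mass, stiffness and loss factor) shows how such quantities feed into a natural-frequency and damping estimate for a one-degree-of-freedom system.

```python
# One-degree-of-freedom illustration only; the thesis's actual rig model and
# numbers are not reproduced here, and the values below are hypothetical.
import math

def natural_frequency_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    """Undamped natural frequency f_n = sqrt(k / m) / (2 * pi) of a 1-DOF system."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

def equivalent_viscous_damping_ratio(loss_factor: float) -> float:
    """Near resonance, a hysteretic loss factor eta behaves roughly like a
    viscous damping ratio of eta / 2."""
    return loss_factor / 2.0

# Hypothetical joint: 5e8 N/m stiffness carrying a 20 kg mass, loss factor 0.04.
k, m, eta = 5.0e8, 20.0, 0.04
print(f"f_n ~ {natural_frequency_hz(k, m):.0f} Hz, "
      f"zeta ~ {equivalent_viscous_damping_ratio(eta):.3f}")
```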

Relevance:

30.00%

Publisher:

Abstract:

The emergence of pen-based mobile devices such as PDAs and tablet PCs provides a new way to input mathematical expressions to a computer: handwriting, which is much more natural and efficient for entering mathematics. This paper proposes a web-based handwriting mathematics system, called WebMath, for supporting mathematical problem solving. The proposed WebMath system is based on a client-server architecture. It comprises four major components: a standard web server, a handwriting mathematical expression editor, a computation engine and a web browser with an Ajax-based communicator. The handwriting mathematical expression editor adopts a progressive recognition approach for dynamic recognition of handwritten mathematical expressions. The computation engine supports mathematical functions such as algebraic simplification and factorization, and integration and differentiation. The web browser provides a user-friendly interface for accessing the system using advanced Ajax-based communication. In this paper, we describe the different components of the WebMath system and present its performance analysis.
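The abstract gives no implementation details for the computation engine; the following minimal sketch assumes a SymPy-backed engine behind a JSON endpoint. The Flask framework, the /compute route and the field names are illustrative assumptions, not WebMath's actual interface.

```python
# Minimal sketch of a computation-engine endpoint; framework, route and JSON
# field names are assumptions for illustration, not WebMath's real interface.
from flask import Flask, jsonify, request
import sympy as sp

app = Flask(__name__)

OPERATIONS = {
    "simplify": lambda e, x: sp.simplify(e),
    "factor": lambda e, x: sp.factor(e),
    "integrate": lambda e, x: sp.integrate(e, x),
    "differentiate": lambda e, x: sp.diff(e, x),
}

@app.route("/compute", methods=["POST"])
def compute():
    # The editor would POST the recognised expression as text, e.g.
    # {"expression": "x**2 - 1", "operation": "factor"}.
    data = request.get_json()
    x = sp.Symbol("x")
    expr = sp.sympify(data["expression"])
    result = OPERATIONS[data["operation"]](expr, x)
    return jsonify({"result": str(result)})

if __name__ == "__main__":
    app.run()
```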

Relevance:

30.00%

Publisher:

Abstract:

How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 − F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints.
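The exact signal processing behind these manipulations is not spelled out in the abstract; one plausible reading, sketched below, scales and inverts the contour in the log-frequency domain about its geometric mean and builds a triangle-wave competitor with a matched mean and span.

```python
# Illustrative reading of the contour manipulations; the study's exact
# processing chain is not specified here.
import numpy as np

def geometric_mean(f):
    return np.exp(np.mean(np.log(f)))

def scale_depth(f, depth):
    """Scale the excursions of a formant-frequency contour about its geometric
    mean; depth=1.0 leaves it unchanged, depth=0.0 yields a constant contour."""
    gm = geometric_mean(f)
    return gm * (f / gm) ** depth

def invert_contour(f):
    """Invert the contour about its geometric mean (the speech-like F2C)."""
    gm = geometric_mean(f)
    return gm ** 2 / f

def triangle_competitor(f, n_periods=6):
    """Regular, arbitrary triangle-wave contour with a matched mean and span."""
    gm = geometric_mean(f)
    span = np.log(f).max() - np.log(f).min()
    phase = np.linspace(0.0, n_periods, f.size) % 1.0
    tri = 2.0 * np.abs(phase - 0.5) - 0.5          # triangle wave in [-0.5, 0.5]
    return gm * np.exp(tri * span)

# Example: a synthetic F2 contour sampled at 100 frames.
f2 = 1500.0 + 300.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 100))
f2_half_depth = scale_depth(f2, 0.5)       # 50%-depth contour
f2c_speechlike = invert_contour(f2)        # inverted competitor
f2c_triangle = triangle_competitor(f2)     # triangle-wave competitor
```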

Relevance:

30.00%

Publisher:

Abstract:

The objective of this research is to develop nanoscale ultrasensitive transducers for the detection of biological species at the molecular level using carbon nanotubes as nanoelectrodes. Rapid detection of ultra-low concentrations, or even single DNA molecules, is essential for medical diagnosis and treatment, pharmaceutical applications, gene sequencing and forensic analysis. Here the use of functionalized single-walled carbon nanotubes (SWNTs) as a nanoscale detection platform for rapid detection of single DNA molecules is demonstrated. The detection principle is based on obtaining an electrical signal from a single amine-terminated DNA molecule which is covalently bridged between the two ends of an SWNT separated by a nanoscale gap. The synthesis, fabrication and chemical functionalization of the nanoelectrodes and the DNA attachment were optimized to perform reliable electrical characterization of these molecules. Using this detection system, a fundamental study of charge transport in DNA molecules of both genomic and non-genomic sequences was performed. We measured an electrical signal of about 30 pA through a hybridized DNA molecule of 80 base pairs in length which encodes a portion of the sequence of the H5N1 gene of avian Influenza A virus. Due to the dynamic nature of DNA molecules, the local environment, such as ion concentration, pH and temperature, significantly influences their physical properties. We observed a decrease in DNA conductance of about 33% under high-vacuum conditions. The counterion variation was analyzed by changing the buffer from sodium acetate to tris(hydroxymethyl)aminomethane, which resulted in a two-orders-of-magnitude increase in the conductivity of the DNA. The fabrication of large arrays of identical SWNT nanoelectrodes was achieved by using ultralong SWNTs. Using these nanoelectrode arrays, we investigated the sequence-dependent charge transport in DNA. A systematic study performed on a PolyG-PolyC sequence with a varying number of intervening PolyA-PolyT pairs showed a decrease in electrical signal from 180 pA (PolyG-PolyC) to 30 pA with an increasing number of PolyA-PolyT pairs. This work also led to the development of ultrasensitive nanoelectrodes based on enzyme-functionalized, vertically aligned, high-density multiwalled CNTs for the electrochemical detection of cholesterol. The nanoelectrodes exhibited selective detection of cholesterol in the presence of common interferents found in human blood.

Relevance:

30.00%

Publisher:

Abstract:

Perception and recognition of faces are fundamental cognitive abilities that form a basis for our social interactions. Research has investigated face perception using a variety of methodologies across the lifespan. Habituation, novelty preference, and visual paired comparison paradigms are typically used to investigate face perception in young infants. Storybook recognition tasks and eyewitness lineup paradigms are generally used to investigate face perception in young children. These methodologies have introduced systematic differences including the use of linguistic information for children but not infants, greater memory load for children than infants, and longer exposure times to faces for infants than for older children, making comparisons across age difficult. Thus, research investigating infant and child perception of faces using common methods, measures, and stimuli is needed to better understand how face perception develops. According to predictions of the Intersensory Redundancy Hypothesis (IRH; Bahrick & Lickliter, 2000, 2002), in early development, perception of faces is enhanced in unimodal visual (i.e., silent dynamic face) rather than bimodal audiovisual (i.e., dynamic face with synchronous speech) stimulation. The current study investigated the development of face recognition across children of three ages: 5 – 6 months, 18 – 24 months, and 3.5 – 4 years, using the novelty preference paradigm and the same stimuli for all age groups. It also assessed the role of modality (unimodal visual versus bimodal audiovisual) and memory load (low versus high) on face recognition. It was hypothesized that face recognition would improve across age and would be enhanced in unimodal visual stimulation with a low memory load. Results demonstrated a developmental trend (F(2, 90) = 5.00, p = 0.009) with older children showing significantly better recognition of faces than younger children. In contrast to predictions, no differences were found as a function of modality of presentation (bimodal audiovisual versus unimodal visual) or memory load (low versus high). This study was the first to demonstrate a developmental improvement in face recognition from infancy through childhood using common methods, measures and stimuli consistent across age.
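Novelty-preference data are conventionally summarized as the proportion of total looking time spent on the novel face, tested against the 0.5 chance level; the study's own scoring is not reported here, so the sketch below uses hypothetical looking times purely to illustrate the measure.

```python
# Generic novelty-preference summary; looking times below are hypothetical.
import numpy as np
from scipy import stats

def novelty_preference(novel_look_s, familiar_look_s):
    """Proportion of total looking time directed at the novel face."""
    return novel_look_s / (novel_look_s + familiar_look_s)

rng = np.random.default_rng(2)
novel = rng.uniform(3.0, 8.0, size=16)       # seconds looking at the novel face
familiar = rng.uniform(2.0, 7.0, size=16)    # seconds looking at the familiar face
scores = novelty_preference(novel, familiar)
t, p = stats.ttest_1samp(scores, popmean=0.5)   # above-chance preference indicates recognition
print(f"mean preference = {scores.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```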

Relevance:

30.00%

Publisher:

Abstract:

Race in Argentina played a significant role as a highly durable construct by identifying and advancing subjects (1776–1810) and citizens (1811–1853). My dissertation explores the intricacies of power relations by focusing on the ways in which race informed the legal process during the transition from a colonial to a national State. It argues that the State’s development in both the colonial and national periods depended upon defining and classifying African descendants. In response, people of African descent used the State’s assigned definitions and classifications to advance their legal identities. It employs race and culture as operative concepts, and law as a representation of the sometimes tense relationship between social practices and the State’s concern for social peace. This dissertation examines the dynamic nature of the court. It utilizes the theoretical concepts of multicentric legal orders, analyzed through weak and strong legal pluralism, and of jurisdictional politics, from the late eighteenth to the early nineteenth centuries. This dissertation juxtaposes various levels of jurisdiction (canon/state law and colonial/national law) to illuminate how people of color used the legal system to ameliorate their social condition. In each chapter the primary source materials are state-generated documents, which include criminal, ecclesiastical, civil, and marriage dissent court cases along with notarial and census records. Though it would appear that these documents provide only a superficial understanding of people of color, my analysis offers both a top-down and a bottom-up approach that reflects a continuous negotiation toward African descendants’ goal of State recognition. These approaches allow for the implicit or explicit negotiation of a legal identity that transformed slaves and free African descendants into active agents of their own destinies.

Relevance:

30.00%

Publisher:

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated from the resized aberration, with the real-time pupil diameter monitored. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method by showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
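The dissertation's precompensation algorithm is not reproduced in the abstract; a common way to realize the idea is regularized inverse (Wiener-style) filtering of the target image by the eye's optical transfer function, sketched below with a hypothetical Gaussian PSF standing in for one derived from the measured wavefront and the monitored pupil diameter.

```python
# Wiener-style precompensation sketch; the dissertation's exact algorithm is
# not reproduced, and the Gaussian PSF is a hypothetical stand-in.
import numpy as np

def precompensate(image, psf, reg=1e-2):
    """Divide the image spectrum by the optical transfer function, with a
    regularization term to limit amplification of suppressed frequencies."""
    pad = np.zeros_like(image)
    r, c = psf.shape
    r0, c0 = (image.shape[0] - r) // 2, (image.shape[1] - c) // 2
    pad[r0:r0 + r, c0:c0 + c] = psf                 # embed the centred PSF
    otf = np.fft.fft2(np.fft.ifftshift(pad))        # optical transfer function
    spectrum = np.fft.fft2(image)
    pre = spectrum * np.conj(otf) / (np.abs(otf) ** 2 + reg)
    return np.clip(np.real(np.fft.ifft2(pre)), 0.0, 1.0)   # keep it displayable

# Hypothetical Gaussian PSF (64 x 64) and a placeholder graphical target.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * 3.0 ** 2))
psf /= psf.sum()
target = np.random.rand(256, 256)
compensated = precompensate(target, psf)
```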

Relevance:

30.00%

Publisher:

Abstract:

We address the problem of 3D-assisted 2D face recognition in scenarios where the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learning a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map of the 3D-aligned input and reference images. A training set of these texture maps then defines a perturbation space which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on the Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
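A minimal sketch of the perturbation-subspace idea follows: PCA bases are learned from texture-map differences, and the perturbation component of an unseen texture map is projected out under the stated orthogonality assumption. The array sizes, component count and use of plain SVD are illustrative choices, not the paper's implementation.

```python
# Illustrative perturbation-subspace sketch; shapes and component count are
# hypothetical, and plain SVD stands in for whatever PCA routine was used.
import numpy as np

def learn_perturbation_basis(differences, n_components):
    """differences: (n_samples, n_pixels) vectorized texture-map differences."""
    centered = differences - differences.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]                        # (n_components, n_pixels)

def remove_perturbation(texture, basis):
    """Subtract the component of an unseen texture map that lies in the
    perturbation subspace."""
    return texture - basis.T @ (basis @ texture)

# Hypothetical training set: 200 difference maps of 32 x 32 = 1024 pixels.
rng = np.random.default_rng(0)
diffs = rng.normal(size=(200, 1024))
basis = learn_perturbation_basis(diffs, n_components=20)
cleaned = remove_perturbation(rng.normal(size=1024), basis)
```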

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, research in Computer Vision has developed several algorithms to help botanists and non-experts classify plants based on images of their leaves. LeafSnap is a mobile application that uses a multiscale curvature model of the leaf margin to classify leaf images into species. It has achieved high levels of accuracy on 184 tree species from the northeastern US. We extend the research that led to the development of LeafSnap along two lines. First, LeafSnap’s underlying algorithms are applied to a set of 66 tree species from Costa Rica. Then, texture is used as an additional criterion to measure the level of improvement achieved in the automatic identification of Costa Rican tree species. A 25.6% improvement was achieved for a Costa Rican clean image dataset and 42.5% for a Costa Rican noisy image dataset; in both cases, our results show this increment to be statistically significant. Further statistical analysis of the impact of visual noise, the best algorithm combinations per species, and the best value of k, the minimal cardinality of the set of candidate species that the tested algorithms return as best matches, is also presented in this research.
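The abstract frames identification as returning a small set of k best-matching candidate species; the sketch below illustrates that retrieval step by combining hypothetical curvature-based and texture-based distances with an assumed weighting, which is not the paper's exact fusion method.

```python
# Top-k candidate retrieval sketch; distances and weighting are assumptions.
import numpy as np

def candidate_species(curvature_dist, texture_dist, species, k=5, texture_weight=0.5):
    """Return the k species with the smallest combined distance."""
    combined = (1.0 - texture_weight) * curvature_dist + texture_weight * texture_dist
    order = np.argsort(combined)[:k]
    return [species[i] for i in order]

# Hypothetical distances for a 66-species Costa Rican dataset.
rng = np.random.default_rng(1)
names = [f"species_{i:02d}" for i in range(66)]
top5 = candidate_species(rng.random(66), rng.random(66), names, k=5)
print(top5)
```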

Relevance:

30.00%

Publisher:

Abstract:

Beach sands from the Rosa Marina locality (Adriatic coast, southern Italy) were analysed mainly microscopically in order to trace the source areas of their lithoclastic and bioclastic components. The main cropping-out sedimentary units were also studied with the objective of identifying the potential source areas of lithoclasts. This made it possible to establish how the various rock units contribute to the formation of the beach sands. The analysis of the bioclastic components allows an estimate of the actual role of organisms in supplying this material to the beach. Identification of the taxa present in the beach sands as shell fragments or other remains was carried out at the genus or family level. Ecological investigation of the same beach and the recognition of sub-environments (mainly distinguished on the basis of the nature of the substrate and of the water depth) was the key topic that allowed the actual source areas of bioclasts in the Rosa Marina beach sands to be established. The sedimentological analysis (including a physical study of the beach and the calculation of some statistical parameters concerning the grain-size curves) shows that the Rosa Marina beach is nowadays subject to erosion.
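The statistical parameters computed from the grain-size curves are not named in the abstract; the Folk and Ward graphic measures are a common choice in beach-sand studies and are sketched below purely as an illustrative assumption, using a hypothetical cumulative curve.

```python
# Folk & Ward (1957) graphic measures, shown only as an illustrative assumption;
# the hypothetical cumulative curve below is not the study's data.
import numpy as np

def folk_ward_stats(phi, cum_percent):
    """Graphic mean, sorting and skewness from a cumulative grain-size curve in
    phi units (phi = -log2 of grain diameter in mm)."""
    p = lambda q: np.interp(q, cum_percent, phi)    # phi value at a given percentile
    p5, p16, p50, p84, p95 = (p(q) for q in (5, 16, 50, 84, 95))
    mean = (p16 + p50 + p84) / 3.0
    sorting = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
    skewness = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    return {"mean_phi": mean, "sorting_phi": sorting, "skewness": skewness}

# Hypothetical cumulative curve for a fine beach sand.
phi = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
cum = np.array([2.0, 10.0, 30.0, 55.0, 80.0, 95.0, 99.0])
print(folk_ward_stats(phi, cum))
```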

Relevance:

30.00%

Publisher:

Abstract:

The experimental projects discussed in this thesis all relate to the field of artificial molecular machines, specifically to systems based on pseudorotaxane and rotaxane architectures. The characterization of the peculiar properties of these mechano-molecules is frequently associated with the analysis and elucidation of complex reaction networks; this latter aspect represents the main focus and the central thread tying my thesis work together. Each chapter describes a specific project, as summarized below: the first chapter focuses on the realization and characterization of a prototype model of a photoactivated molecular transporter based on a pseudorotaxane architecture; the second chapter reports the design, synthesis, and characterization of a [2]rotaxane endowed with a dibenzylammonium station and a novel photochromic unit that acts as a recognition site for a DB24C8 crown ether macrocycle; the last chapter describes the synthesis and characterization of a [3]rotaxane in which the relative number of rings and stations can be changed on command.

Relevance:

20.00%

Publisher:

Abstract:

In the Amazon Region, there is a virtual absence of severe malaria and few fatal cases of naturally occurring Plasmodium falciparum infections; this presents an intriguing and underexplored area of research. In addition to the rapid access of infected persons to effective treatment, one cause of this phenomenon might be the recognition of cytoadherent variant proteins on the infected red blood cell (IRBC) surface, including the var gene-encoded P. falciparum erythrocyte membrane protein 1. In order to establish a link between cytoadherence, IRBC surface antibody recognition and the presence or absence of malaria symptoms, we phenotype-selected four Amazonian P. falciparum isolates and the laboratory strain 3D7 for their cytoadherence to CD36 and ICAM1 expressed on CHO cells. We then mapped the dominantly expressed var transcripts and tested whether antibodies from symptomatic or asymptomatic infections showed a differential recognition of the IRBC surface. As controls, the 3D7 lineages expressing severe disease-associated phenotypes were used. We showed that there was no profound difference between the frequency and intensity of antibody recognition of the IRBC-exposed P. falciparum proteins in symptomatic vs. asymptomatic infections. The 3D7 lineages, which expressed severe malaria-associated phenotypes, were strongly recognised by most, but not all, plasmas, meaning that the recognition of these phenotypes is frequent in asymptomatic carriers but is not necessarily a prerequisite for staying free of symptoms.