965 results for Recognition algorithms
Abstract:
In the present review, we describe a systematic study of the sulfated polysaccharides from marine invertebrates, which led to the discovery of a carbohydrate-based mechanism of sperm-egg recognition during sea urchin fertilization. We have characterized unique polymers present in these organisms, especially sulfated fucose-rich compounds found in the egg jelly coat of sea urchins. The polysaccharides have simple, linear structures consisting of repeating oligosaccharide units. They differ among the various species of sea urchins in specific patterns of sulfation and/or the position of the glycosidic linkage within their repeating units. These polysaccharides show species specificity in inducing the acrosome reaction in sea urchin sperm, providing a clear-cut example of a signal transduction event regulated by sulfated polysaccharides. This distinct carbohydrate-mediated mechanism of sperm-egg recognition coexists with the bindin-protein system. Possibly, the genes involved in the biosynthesis of these sulfated fucans did not evolve in concordance with evolutionary distance but underwent a dramatic change near the tip of the Strongylocentrotid tree. Overall, we established a direct causal link between the molecular structure of a sulfated polysaccharide and a cellular physiological event - the induction of the sperm acrosome reaction in sea urchins. Small structural changes modulate an entire system of sperm-egg recognition and species-specific fertilization in sea urchins. We demonstrated that sulfated polysaccharides - in addition to their known functions in cell proliferation, development, coagulation, and viral infection - mediate fertilization and respond to evolutionary mechanisms that lead to species diversity.
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional one, to determine which is most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cutoff (s/co) ratio of anti-HCV ELISA, reporting as positive those samples whose s/co ratio shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, with IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were resolved by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, 43.5% and 14.0% lower than that of C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
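As an illustration of the decision flows compared above, here is a minimal sketch in Python; the s/co threshold and the test functions are hypothetical placeholders, not values or assays from the study.

```python
# Hedged sketch of the three HCV testing algorithms' decision flows.
# The s/co threshold and the test callables are illustrative only.

HIGH_SCO = 8.0  # hypothetical s/co cutoff with >=95% immunoblot concordance

def algorithm_a(sample, immunoblot):
    """s/co-based: high-ratio samples are reported positive without IB."""
    if sample["sco"] >= HIGH_SCO:
        return "positive"
    return immunoblot(sample)          # IB only for low-ratio samples

def algorithm_b(sample, pcr, immunoblot):
    """Reflex PCR first; IB only resolves PCR-negative samples."""
    if pcr(sample) == "detected":
        return "positive"
    return immunoblot(sample)

def algorithm_c(sample, immunoblot):
    """Conventional: every ELISA-positive/inconclusive sample goes to IB."""
    return immunoblot(sample)
```

The sketch makes the cost difference visible: algorithm A sends only low-ratio samples to a supplemental test, whereas algorithm C sends every sample to IB.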
Abstract:
Facial expressions of basic emotions have been widely used to investigate the neural substrates of emotion processing, but little is known about the exact meaning of subjective changes provoked by perceiving facial expressions. Our assumption was that fearful faces would be related to the processing of potential threats, whereas angry faces would be related to the processing of proximal threats. Experimental studies have suggested that serotonin modulates the brain processes underlying defensive responses to environmental threats, facilitating risk assessment behavior elicited by potential threats and inhibiting fight or flight responses to proximal threats. In order to test these predictions about the relationship between fearful and angry faces and defensive behaviors, we carried out a review of the literature about the effects of pharmacological probes that affect 5-HT-mediated neurotransmission on the perception of emotional faces. The hypothesis that angry faces would be processed as a proximal threat and that, as a consequence, their recognition would be impaired by an increase in 5-HT function was not supported by the results reviewed. In contrast, most of the studies that evaluated the behavioral effects of serotonin challenges showed that increased 5-HT neurotransmission facilitates the recognition of fearful faces, whereas its decrease impairs the same performance. These results agree with the hypothesis that fearful faces are processed as potential threats and that 5-HT enhances this brain processing.
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, human contrast sensitivity was higher for radially than for angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
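To make the filtering pipeline concrete, here is a minimal sketch of band-pass filtering in a Fourier-Bessel domain, reduced to a 1D radial profile with order-0 Bessel terms only; the toy profile, number of terms, and pass band are illustrative assumptions, whereas the study worked with full 2D face images and angular components as well.

```python
# Band-pass filtering of a radial profile in a Fourier-Bessel basis:
# forward projection onto J0 terms, zeroing out-of-band coefficients,
# inverse reconstruction.
import numpy as np
from scipy.special import j0, j1, jn_zeros

R = 1.0                                      # radius of the analysis window
r = np.linspace(0.0, R, 512)                 # radial sample points
profile = np.exp(-((r - 0.4) / 0.1) ** 2)    # toy radial "image" profile

n_terms = 40
alpha = jn_zeros(0, n_terms)                 # first zeros of J0

# Forward transform: project onto the orthogonal J0 basis.
coeffs = np.empty(n_terms)
for k in range(n_terms):
    basis = j0(alpha[k] * r / R)
    norm = (R ** 2 / 2.0) * j1(alpha[k]) ** 2
    coeffs[k] = np.trapz(profile * basis * r, r) / norm

# Band-pass: keep only a contiguous band of FB frequencies.
lo, hi = 5, 15                               # hypothetical pass band (indices)
band = (np.arange(n_terms) >= lo) & (np.arange(n_terms) < hi)
coeffs_bp = np.where(band, coeffs, 0.0)

# Inverse transform: rebuild the filtered profile.
filtered = sum(c * j0(a * r / R) for c, a in zip(coeffs_bp, alpha))
```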
Abstract:
A modified version of the intruder-resident paradigm was used to investigate whether social recognition memory lasts at least 24 h. One hundred and forty-six adult male Wistar rats were used. Independent groups of rats were exposed to an intruder for 0.083, 0.5, 2, 24, or 168 h and tested 24 h after the first encounter with the familiar or a different conspecific. Factor analysis was employed to identify associations between behaviors and treatments. Resident rats exhibited a 24-h social recognition memory, as indicated by a 3- to 5-fold decrease in social behaviors in the second encounter with the same conspecific compared to those observed for a different conspecific, when the duration of the first encounter was 2 h or longer. It was possible to distinguish between two different categories of social behaviors, whose expression depended on the duration of the first encounter. Sniffing the anogenital area (49.9% of the social behaviors), sniffing the body (17.9%), sniffing the head (3%), and following the conspecific (3.1%), exhibited mostly by resident rats, characterized social investigation and revealed long-term social recognition memory. However, dominance (23.8%) and mild aggression (2.3%), exhibited by both residents and intruders, characterized social agonistic behaviors and were not affected by memory. In contrast, sniffing the environment (76.8% of the non-social behaviors) and rearing (14.3%), both exhibited mostly by adult intruder rats, characterized non-social behaviors. Together, these results show that social recognition memory in rats may last at least 24 h after a 2-h or longer exposure to the conspecific.
Abstract:
The visualization of tools and manipulable objects activates motor-related areas in the cortex, facilitating possible actions toward them. This pattern of activity may underlie the phenomenon of object affordance. Some cortical motor neurons are also covertly activated during the recognition of body parts such as hands. One hypothesis is that different subpopulations of motor neurons in the frontal cortex are activated in each motor program; for example, canonical neurons in the premotor cortex are responsible for the affordance of visual objects, while mirror neurons support motor imagery triggered during handedness recognition. However, the question remains whether these subpopulations work independently. This hypothesis can be tested with a manual reaction time (MRT) task with a priming paradigm to evaluate whether the view of a manipulable object interferes with the motor imagery of the subject's hand. The MRT provides a measure of the course of information processing in the brain and allows indirect evaluation of cognitive processes. Our results suggest that canonical and mirror neurons work together to create a motor plan involving hand movements to facilitate successful object manipulation.
Abstract:
Our objective was to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) the V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%), with a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%); here the V2 transition ratio had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs, with the V2 transition ratio and the V2 R wave duration and R/S wave amplitude indices being the most sensitive and the most specific, respectively. Among all patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, whereas in patients with LV rotation the V2 transition ratio algorithm did.
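For concreteness, here is a minimal sketch of one of the three indices, the V2 transition ratio, under its published definition: the R-wave fraction R/(R+S) in lead V2 during the arrhythmia divided by the same fraction during sinus rhythm. The amplitude values are illustrative, not patient data, and the 0.6 cutoff is the commonly cited one rather than a value from this study.

```python
# Hedged sketch of the V2 transition ratio. Amplitudes in mV are
# illustrative placeholders.

def r_wave_fraction(r_amp_mv: float, s_amp_mv: float) -> float:
    """R/(R+S) using absolute amplitudes measured in lead V2."""
    return r_amp_mv / (r_amp_mv + abs(s_amp_mv))

def v2_transition_ratio(vt_r, vt_s, sinus_r, sinus_s) -> float:
    return r_wave_fraction(vt_r, vt_s) / r_wave_fraction(sinus_r, sinus_s)

ratio = v2_transition_ratio(vt_r=0.4, vt_s=1.2, sinus_r=0.3, sinus_s=1.5)
# A ratio >= 0.6 is the commonly cited cutoff favoring a left-sided
# (LVOT) origin; below it, a right-sided (RVOT) origin is favored.
origin = "LVOT" if ratio >= 0.6 else "RVOT"
print(f"V2 transition ratio = {ratio:.2f} -> suggested origin: {origin}")
```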
Abstract:
Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
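As a taste of the algorithm family surveyed, here is a minimal sketch of vertex-clustering simplification, one of the simplest decimation schemes: vertices are snapped to a coarse grid, vertices sharing a cell are merged, and degenerate triangles are dropped. The arrays and cell size are illustrative assumptions; a real pipeline would load the mesh from a CAD export.

```python
# Vertex-clustering mesh simplification: coarser cell_size -> fewer
# vertices and triangles, at the cost of geometric fidelity.
import numpy as np

def cluster_simplify(vertices, faces, cell_size):
    """vertices: (n, 3) float array; faces: (m, 3) int array."""
    # Assign every vertex to a grid cell.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # Map each occupied cell to one representative vertex index.
    _, first, remap = np.unique(cells, axis=0,
                                return_index=True, return_inverse=True)
    new_vertices = vertices[first]          # one vertex per occupied cell
    new_faces = remap[faces]                # re-index triangle corners
    # Drop triangles collapsed to a line or a point.
    keep = ((new_faces[:, 0] != new_faces[:, 1]) &
            (new_faces[:, 1] != new_faces[:, 2]) &
            (new_faces[:, 0] != new_faces[:, 2]))
    return new_vertices, new_faces[keep]
```

Production-quality simplifiers (e.g., quadric edge collapse) preserve shape far better; vertex clustering is shown here only because it exposes the core merge-and-cull idea in a few lines.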
Abstract:
This thesis introduces an extension of Chomsky's context-free grammars equipped with operators for referring to the left and right contexts of strings. The new model is called grammars with contexts. The semantics of these grammars are given in two equivalent ways: by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it allows only one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time, like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed into an equivalent grammar without any context operators. This allows one to represent the syntax of languages more succinctly by utilizing context specifications. Linear grammars with contexts turn out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
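For reference, the cubic-time tabular algorithm mentioned above generalizes the classical CYK recognizer for context-free grammars in Chomsky normal form. A minimal sketch of that classical prototype follows, with a toy grammar for { aⁿbⁿ : n ≥ 1 }; the grammar encoding is my own illustrative choice, not notation from the thesis.

```python
# Classical cubic-time tabular (CYK) recognizer for CFGs in Chomsky
# normal form; grammars with contexts extend this style of algorithm.

def cyk_recognize(grammar, start, word):
    """grammar: list of rules (lhs, rhs) with rhs a 1- or 2-tuple."""
    n = len(word)
    # table[i][j] = set of nonterminals deriving word[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, ch in enumerate(word):                     # terminal rules
        for lhs, rhs in grammar:
            if rhs == (ch,):
                table[i][i + 1].add(lhs)
    for length in range(2, n + 1):                    # longer substrings
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):                 # split point
                for lhs, rhs in grammar:
                    if (len(rhs) == 2 and rhs[0] in table[i][k]
                            and rhs[1] in table[k][j]):
                        table[i][j].add(lhs)
    return start in table[0][n]

# Toy CNF grammar for the language { a^n b^n : n >= 1 }.
g = [("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B")),
     ("A", ("a",)), ("B", ("b",))]
print(cyk_recognize(g, "S", "aabb"))   # True
print(cyk_recognize(g, "S", "abb"))    # False
```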
Abstract:
Metal-ion-mediated base pairing of nucleic acids has attracted considerable attention during the past decade, since it offers a means to expand the genetic code by artificial base pairs, to create predesigned molecular architectures by metal-ion-mediated inter- or intra-strand cross-links, or to convert double-stranded DNA into a nano-scale wire. Such applications largely depend on the presence of a modified nucleobase in both strands engaged in the duplex formation. Hybridization of metal-ion-binding oligonucleotide analogs with natural nucleic acid sequences has received much less attention in spite of obvious applications. While natural oligonucleotides hybridize with high selectivity, their affinity for complementary sequences is inadequate for a number of applications. In the case of DNA, for example, more than 10 consecutive Watson-Crick base pairs are required for a stable duplex at room temperature, making the targeting of shorter sequences challenging. For example, many types of cancer exhibit distinctive profiles of oncogenic miRNA, the diagnostics of which is, however, difficult owing to the presence of only short single-stranded loop structures. Metallo-oligonucleotides, with their superior affinity towards their natural complements, would offer a way to overcome the low stability of short duplexes. In this study, a number of metal-ion-binding surrogate nucleosides were prepared and their interaction with nucleoside 5´-monophosphates (NMPs) was investigated by ¹H NMR spectroscopy. To find metal ion complexes that could discriminate between natural nucleobases upon double helix formation, glycol nucleic acid (GNA) sequences carrying a Pd(II) ion with vacant coordination sites at a predetermined position were synthesized, and their affinity for complementary as well as mismatched counterparts was quantified by UV melting measurements.
Abstract:
The increasing performance of computers has made it possible to algorithmically solve problems for which manual, and possibly inaccurate, methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem, the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
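To illustrate the geometric core of the fetch length problem, here is a minimal sketch that casts a ray from a study point in one direction and returns the distance to the nearest shoreline segment it hits; the shoreline polyline and coordinates are toy assumptions, not data from the thesis, and the efficient algorithms it develops avoid exactly this brute-force scan over all segments.

```python
# Fetch length by ray casting: distance from a study point to the
# nearest shoreline segment in a fixed direction.
import math

def ray_segment_distance(px, py, angle, ax, ay, bx, by):
    """Distance along the ray (px,py)+t*(cos,sin) to segment AB, or None."""
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                  # ray parallel to segment
        return None
    t = ((ax - px) * ey - (ay - py) * ex) / denom   # along the ray
    u = ((ax - px) * dy - (ay - py) * dx) / denom   # along the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return t
    return None

def fetch_length(px, py, angle, shoreline_segments):
    hits = (ray_segment_distance(px, py, angle, *seg)
            for seg in shoreline_segments)
    hits = [h for h in hits if h is not None]
    return min(hits) if hits else math.inf  # inf: open water this way

segments = [(5.0, -10.0, 5.0, 10.0)]          # a vertical shoreline at x = 5
print(fetch_length(0.0, 0.0, 0.0, segments))  # 5.0
```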
Abstract:
One group of 12 non-learning-disabled students and two groups of 12 learning-disabled students between the ages of 10 and 12 were measured on implicit and explicit knowledge acquisition. Students in each group implicitly acquired knowledge about 1 of 2 vocabulary rules. The vocabulary rules governed the pronunciation of 2 types of pseudowords. After completing the implicit acquisition phase, all groups were administered a test of implicit knowledge. The non-learning-disabled group and 1 learning-disabled group were then asked to verbalize the knowledge acquired during the initial phase. This was a test of explicit knowledge. All 3 groups were then given a posttest of implicit knowledge. This test was a measure of the effectiveness of the employment of the verbalization technique. Results indicate that implicit knowledge capabilities for both the learning-disabled and non-learning-disabled groups were intact. However, there were significant differences between groups on explicit knowledge capabilities. This led to the conclusion that implicit functions show few individual differences, whereas explicit functions are affected by ability differences. Furthermore, the employment of the verbalization technique significantly increased posttest scores for learning-disabled students. This suggested that the use of metacognitive techniques was a beneficial learning tool for learning-disabled students.
Abstract:
This research attempted to address the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow-chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings for the procedures employed in the teaching of mathematics to students with learning problems are discussed in detail.
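For readers unfamiliar with the procedure being taught, here is a minimal sketch of the explicit regrouping ("borrowing") algorithm written out digit by digit, in the spirit of the flow-chart representation described above; the code is an illustrative reconstruction, assuming a non-negative minuend no smaller than the subtrahend.

```python
# Explicit regrouping algorithm for multi-digit subtraction:
# work right to left, borrowing from the next column when needed.

def subtract_with_regrouping(minuend: int, subtrahend: int) -> int:
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    for i in range(len(top) - 1, -1, -1):     # rightmost column first
        if top[i] < bottom[i]:                # regroup: borrow 10 from
            top[i] += 10                      # the column to the left
            top[i - 1] -= 1
        result.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(result)))

print(subtract_with_regrouping(503, 178))     # 325
```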
Abstract:
Adults' expert face recognition is limited to the kinds of faces they encounter on a daily basis (typically upright human faces of the same race). Adults process own-race faces holistically (i.e., as a gestalt) and are exquisitely sensitive to small differences among faces in the spacing of features, the shape of individual features, and the outline or contour of the face (Maurer, Le Grand, & Mondloch, 2002); however, this expertise does not seem to extend to faces from other races. The goal of the current study was to investigate the extent to which the mechanisms that underlie expert face processing of own-race faces extend to other-race faces. Participants from rural Pennsylvania who had minimal exposure to other-race faces were tested on a battery of tasks: a memory task, two measures of holistic processing (the composite task and the part/whole task), two measures of spatial and featural processing (the Jane/Ling task and the scrambled/blurred faces task), and a test of contour processing (Jane/Ling task), for both own- and other-race faces. No study to date has tested the same participants on all of these tasks. Participants had minimal experience with other-race faces; they had no Chinese family members or friends and had never traveled to an Asian country. Results from the memory task did not reveal an other-race effect. In the present study, participants also demonstrated holistic processing of both own- and other-race faces on both the composite task and the part/whole task. These findings contradict previous findings that Caucasian adults process own-race faces more holistically than other-race faces. However, participants did demonstrate an own-race advantage for processing the spacing among features, consistent with two recent studies that used different manipulations of spacing cues (Hayward et al., 2007; Rhodes et al., 2006). They also demonstrated an other-race effect for the processing of individual features on the Jane/Ling task (a direct measure of featural processing), consistent with previous findings (Rhodes, Hayward, & Winkler, 2006), but not on the scrambled faces task (an indirect measure of featural processing). There was no own-race advantage for contour processing. Thus, these results lead to the conclusion that individuals may show less sensitivity to the appearance of individual features and the spacing among them in other-race faces, despite processing other-race faces holistically.