995 results for "minimal processing"
Abstract:
A method for reducing the template size of an iris-recognition system is reported. To achieve this, the biological characteristics of the human iris were studied. Image processing techniques were used to isolate the iris and enhance the area of study, after which multiresolution analysis was performed. The resulting pattern was then reduced via a statistical study.
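As a toy illustration only (the paper's actual decomposition and statistics are not given), one level of a multiresolution analysis can be sketched as a Haar split of a texture row into coarse and detail coefficients, which a statistical criterion could then prune:

```python
# Toy sketch (not the paper's method): one level of a Haar multiresolution
# decomposition of a sample row, the kind of transform whose coefficients a
# statistical study could later prune to shrink the template.
def haar_level(signal):
    """Split an even-length signal into (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

row = [8, 6, 7, 3, 2, 2, 9, 1]   # hypothetical unwrapped iris-texture row
approx, detail = haar_level(row)
print(approx)  # [7.0, 5.0, 2.0, 5.0]
print(detail)  # [1.0, 2.0, 0.0, 4.0]
```

Each level halves the data while keeping a coarse copy plus the detail needed to reconstruct it exactly (`signal[2i] = approx[i] + detail[i]`).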
Abstract:
The epitopes recognized by CD8+ cytotoxic T lymphocytes (CTL) are generated from cytosolic proteins by proteolytic processing. The nature of the influences exerted by the sequences flanking CTL epitopes on these processing events remains controversial. Here we show that each epitope within an artificial polyepitope protein containing nine minimal CD8+ CTL epitopes in sequence was processed and presented to appropriate CTL clones. Natural flanking sequences were thus not required to direct class I proteolytic processing. In addition, unnatural flanking sequences containing other CTL epitopes did not interfere with processing. The ability of every CTL epitope to be effectively processed from a protein containing only CTL epitopes is likely to find application in the construction of recombinant polyepitope CTL vaccines.
Abstract:
This paper presents a multilayered architecture that enhances the capabilities of current QA systems and allows different types of complex questions or queries to be processed. The answers to these questions need to be gathered from factual information scattered throughout different documents. Specifically, we designed a specialized layer to process the different types of temporal questions. Complex temporal questions are first decomposed into simple questions, according to the temporal relations expressed in the original question. In the same way, the answers to the resulting simple questions are recomposed, fulfilling the temporal restrictions of the original complex question. A novel aspect of this approach resides in the decomposition, which uses a minimal quantity of resources, with the final aim of obtaining a portable platform that is easily extensible to other languages. In this paper we also present a methodology for evaluating the decomposition of the questions, as well as the ability of the implemented temporal layer to perform at a multilingual level. The temporal layer was first implemented for English, then evaluated and compared with: a) a general-purpose QA system (F-measure 65.47% for QA plus English temporal layer vs. 38.01% for the general QA system), and b) a well-known QA system. Much better results were obtained for temporal questions with the multilayered system. The system was therefore extended to Spanish, and very good results were again obtained in the evaluation (F-measure 40.36% for QA plus Spanish temporal layer vs. 22.94% for the general QA system).
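The decomposition step can be sketched in miniature (this is an illustrative toy, not the paper's actual rules, which are far richer): split a complex question at its temporal signal word into simple sub-questions, keeping the signal so the answers can later be recomposed under that temporal relation.

```python
import re

# Hypothetical sketch of temporal-question decomposition: split a complex
# question at a temporal signal word into simple sub-questions. The real
# system's decomposition rules and resources are richer than this pattern.
TEMPORAL_SIGNALS = r"\b(before|after|during|while)\b"

def decompose(question):
    parts = re.split(TEMPORAL_SIGNALS, question, maxsplit=1, flags=re.IGNORECASE)
    if len(parts) == 3:                 # [head, signal, tail]
        head, signal, tail = parts
        return signal.lower(), [head.strip().rstrip("?") + "?",
                                tail.strip().rstrip("?") + "?"]
    return None, [question]            # simple question: nothing to split

relation, subs = decompose("Where did Bob live before he moved to Paris?")
print(relation, subs)  # before ['Where did Bob live?', 'he moved to Paris?']
```

The retained relation (`before`) is what constrains how the two sub-answers are recomposed into a single answer.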
Abstract:
Paper presented at the Purdue Centennial Year Symposium on Information Processing, April 28, 1969.
Abstract:
Cyclotides are a family of plant proteins that have the unusual combination of head-to-tail backbone cyclization and a cystine knot motif. They are exceptionally stable and show resistance to most chemical, physical, and enzymatic treatments. The structure of tricyclon A, a previously unreported cyclotide, is described here. In this structure, a loop that is disordered in other cyclotides forms a beta sheet that protrudes from the globular core. This study indicates that the cyclotide fold is amenable to the introduction of a range of structural elements without affecting the cystine knot core of the protein, which is essential for the stability of the cyclotides. Tricyclon A does not possess a hydrophobic patch, typical of other cyclotides, and has minimal hemolytic activity, making it suitable for pharmaceutical applications. The 22 kDa precursor protein of tricyclon A was identified and provides clues to the processing of these fascinating miniproteins.
Abstract:
Firms invest huge advertising budgets to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental to ensure not only that the appropriate consumers will be reached, but also that they will be in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on individual and contextual factors that could mediate or moderate the ad's impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, one factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Yet fatigue can affect key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body signals the need to stop an activity (or inactivity) and rest, allowing the individual to compensate for its effects. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue leads to problems in information reception, perception, coordination, attention, concentration, and thinking. In addition, for Markle (1984), fatigue decreases memory and communication ability, while increasing reaction time and the number of errors. Thus, fatigue may have large effects on advertising processing. We suggest that fatigue determines the level of available resources. Research on consumer responses to advertising holds that complexity is a fundamental element to take into consideration: complexity determines the cognitive effort the consumer must expend to understand the message (Putrevu et al. 2004).
Thus, we suggest that complexity determines the level of required resources. To study this question of the need for, and provision of, cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the Resource Matching principle: an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not entirely processed by the consumer, and when available resources greatly exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low, and is mostly determined by the ad's layout (Peracchio and Meyers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). On the other hand, the resource availability of an ad viewer is determined by many contextual and individual variables; we manipulate the level of available resources using two levels of fatigue (low, high). Tired viewers want to limit processing effort to minimal resource requirements by relying on heuristics, forming an overall impression at first glance. It will be easier for them to decode the message when ads are very simple. In contrast, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources; such viewers use more analytical strategies, looking at the details of the ad. However, if ads are too complex, they will be too difficult to understand: the viewer will be discouraged from processing the information and will overlook the ad. The objective of our research is to study fatigue as a moderating variable of advertising information processing.
We ran two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion, and memorization. In Study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between subjects) x 3 (ad complexity level: within subjects). Participants were randomly assigned a schedule time (morning: 8-10 am or evening: 10-12 pm) to perform the experiment. We chose to test subjects at different times of day to obtain maximum variance in their fatigue level, using participants' morningness/eveningness tendency (Horne & Ostberg, 1976) as a control variable. We assess fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep aspects other than complexity equal. We pretested the ads using the Resource Demands scale (Keller and Bloch 1997) and by rating them on complexity, as in Morrison and Dainoff (1972), to check our complexity manipulation; we found three significantly different levels. After completing the fatigue scales, participants view the ads on a screen while their eye movements are recorded by an eye-tracker. Eye tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), from which we can infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test. We collect measures of attitude change for persuasion, and measures of recall and recognition at various points in time for memorization. Once the effect of fatigue has been determined across the student population, it is interesting to account for individual differences in fatigue severity and perception.
We therefore ran Study 2, which is similar to the first except for the design: time of day is now within subjects and complexity becomes between subjects.
Abstract:
*This research was supported by the National Science Foundation Grant DMS 0200187 and by ONR Grant N00014-96-1-1003
Abstract:
A poly(L-lactide-co-caprolactone) copolymer, P(LL-co-CL), of composition 75:25 mol% was synthesized via the bulk ring-opening copolymerization of L-lactide and ε-caprolactone using a novel bis[tin(II) monooctoate] diethylene glycol coordination-insertion initiator, OctSn-OCH2CH2OCH2CH2O-SnOct. The P(LL-co-CL) copolymer obtained was characterized by a combination of analytical techniques, namely nuclear magnetic resonance spectroscopy, gel permeation chromatography, dilute-solution viscometry, differential scanning calorimetry, and thermogravimetric analysis. For processing into a monofilament fiber, the copolymer was melt spun with minimal draw to give a largely amorphous and unoriented as-spun fiber. The fiber's oriented semicrystalline morphology, necessary to give the required balance of mechanical properties, was then developed via a sequence of controlled offline hot-drawing and annealing steps. Depending on the final draw ratio, the fibers obtained had tensile strengths in the region of 200–400 MPa.
Abstract:
The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated following DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of these deleterious compounds, some challenges remain. Inhibitors can be components of the samples, of the substrate where the samples were deposited, or of the chemical(s) associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation to the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts.
The extraction methods tested were able to remove >90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggest that most inhibitors have minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.
Abstract:
This dissertation investigates the acquisition of oblique relative clauses in L2 Spanish by English and Moroccan Arabic speakers, in order to understand the role of previous linguistic knowledge and its interaction with Universal Grammar, on the one hand, and the relationship between grammatical knowledge and its use in real time, on the other. Three types of tasks were employed: an oral production task, an on-line self-paced grammaticality judgment task, and an on-line self-paced reading comprehension task. Results indicated that the acquisition of oblique relative clauses in Spanish is a problematic area for second language learners of intermediate proficiency, regardless of their native language. In particular, this study has shown that, even when the learners' native language shares the main properties of the L2, i.e., fronting of the obligatory preposition (pied-piping), there is still room for divergence, especially in production and timed grammatical intuitions. On the other hand, reaction time data have shown that L2 learners can and do converge at the level of sentence processing, showing exactly the same real-time effects for oblique relative clauses as native speakers. Processing results demonstrated that native and non-native speakers alike are able to apply universal processing principles such as the Minimal Chain Principle (De Vincenzi, 1991) even when the L2 learners still have incomplete grammatical representations, a result that contradicts some of the predictions of the Shallow Structure Hypothesis (Clahsen & Felser, 2006). Results further suggest that the L2 processing and comprehension domains may be able to access some type of information that is not yet available to other grammatical modules, probably because transfer of certain L1 properties occurs asymmetrically across linguistic domains.
In addition, this study explored the Null-Prep phenomenon in L2 Spanish and proposed that Null-Prep is an interlanguage stage, fully available and accounted for within UG, which intermediate L2 learners as well as first language learners go through in the development of pied-piping oblique relative clauses. It is hypothesized that this intermediate stage is the result of optionality of the obligatory preposition in the derivation, when the preposition is not crucial for the meaning of the sentence and when the DP is going to be in an A-bar position, so it can get default case. This optionality can be predicted by the Bottleneck Hypothesis (Slabakova, 2009c) if we consider these prepositions to be a sort of functional morphology. This study contributes to the field of SLA and L2 processing in various ways. First, it demonstrates that grammatical representations may be dissociated from grammatical processing, in the sense that L2 learners, unlike native speakers, can present unexpected asymmetries such as convergent processing but divergent grammatical intuitions or production. This conclusion is only possible under the assumption of a modular language system. Finally, it contributes to the general debate in generative SLA, since it argues for a fully UG-constrained interlanguage grammar.
Abstract:
Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and for phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography, and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value.
The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
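The difference-set property that nested sampling exploits can be checked with a small numerical sketch (a toy illustration, not the thesis's GNS or PNFS constructions): a two-level nested set of sample positions whose pairwise differences cover every integer lag up to N2*(N1+1)-1, which is what allows the autocorrelation sequence to be reconstructed as if sampled at the full rate.

```python
# Toy illustration (not the thesis's GNS/PNFS): a two-level nested sampling
# pattern whose difference set covers all integer lags 0..N2*(N1+1)-1, so
# autocorrelation values at every lag are observable from few samples.
N1, N2 = 3, 3
inner = list(range(1, N1 + 1))                     # dense level: 1, 2, 3
outer = [(N1 + 1) * k for k in range(1, N2 + 1)]   # sparse level: 4, 8, 12
positions = inner + outer                          # 6 physical samples

lags = {abs(a - b) for a in positions for b in positions}
max_lag = N2 * (N1 + 1) - 1                        # = 11 here
print(sorted(lags))   # contiguous 0..11: every lag up to 11 is present
assert sorted(lags) == list(range(max_lag + 1))
```

Six samples thus expose twelve distinct autocorrelation lags; a uniform grid would need twelve samples for the same lag coverage.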
Abstract:
One of the prominent questions in modern psycholinguistics is the relationship between the grammar and the parser. Within the approach of Generative Grammar, this issue has been investigated in terms of the role that Principles of Universal Grammar may play in language processing. The aim of this research experiment is to investigate this topic. Specifically, this experiment aims to test whether the Minimal Structure Principle (MSP) plays a role in the processing of Preposition-Stranding versus Pied-Piped Constructions. This investigation is made with a self-paced reading task, an on-line processing test that measures participants’ unconscious reaction to language stimuli. Monolingual English speakers’ reading times of sentences with Preposition-Stranding and Pied-Piped Constructions are compared. Results indicate that neither construction has greater processing costs, suggesting that factors other than the MSP are active during language processing.
Abstract:
To evaluate patients with transverse fractures of the humeral shaft treated with indirect reduction and internal fixation with plate and screws through a minimally invasive technique. Inclusion criteria were adult patients with closed transverse diaphyseal fractures of the humerus, isolated or not, occurring within 15 days of the initial trauma. The exclusion criterion was patients with compound fractures. In two patients proximal screw loosening occurred; however, the fractures consolidated in the same mean time as the rest of the series. Consolidation with up to 5 degrees of varus occurred in five cases, and an extension deficit was observed in the patient with an olecranon fracture treated with a tension band, which was not considered a complication. There was no recurrence of infection or iatrogenic radial nerve injury. It can be concluded that minimally invasive osteosynthesis with a bridge plate is a safe and effective option for the treatment of transverse fractures of the humeral shaft. Level of Evidence III, Therapeutic Study.
Abstract:
The aim of this study was to evaluate fat substitutes in the processing of sausages prepared with surimi from piramutaba filleting waste. The formulation ingredients were mixed with the fat substitutes according to a 2^(4-1) fractional factorial design, in which the independent variables manioc starch (Ms), hydrogenated soy fat (F), texturized soybean protein (Tsp), and carrageenan (Cg) were evaluated against the responses pH, texture (Tx), raw batter stability (RBS), and water holding capacity (WHC) of the sausage. Fat substitutes were evaluated in 11 formulations, and the results showed that the greatest effects on the responses were found for Ms, F, and Cg, so Tsp was eliminated from the formulation. To find the best formulation for processing piramutaba sausage, a complete 2^3 factorial design was run to evaluate the concentrations of the fat substitutes over an enlarged range. The optimum condition found for the fat substitutes in the sausage formulation was carrageenan (0.51%), manioc starch (1.45%), and fat (1.2%).
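For illustration (the abstract does not state the design generator, so the standard choice Cg = Ms*F*Tsp is assumed here), a 2^(4-1) fractional factorial screening design for the four factors can be generated as:

```python
from itertools import product

# Hypothetical sketch of a 2^(4-1) fractional factorial design for the four
# factors Ms, F, Tsp, Cg, using the assumed generator Cg = Ms*F*Tsp; the
# study's actual generator, run order, and extra formulations are not given.
levels = (-1, 1)                                  # coded low / high settings
runs = [(ms, f, tsp, ms * f * tsp)                # 4th column from generator
        for ms, f, tsp in product(levels, repeat=3)]

print(len(runs))  # 8 runs instead of the 16 of a full 2^4 design
for col in zip(*runs):                            # every factor column is
    assert sum(col) == 0                          # balanced: 4 low, 4 high
```

Half the runs of the full design suffice to screen main effects, at the cost of confounding each main effect with a three-factor interaction.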