965 results for Response times


Relevance: 60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 60.00%

Publisher:

Abstract:

We have prepared heavy metal oxide glasses containing metallic copper nanoparticles with promising nonlinear optical properties, which were determined by Z-scan and pump-probe measurements using femtosecond laser pulses. For wavelengths within the plasmon band, we observed saturable absorption and response times of 2.3 ps. For the other regions of the spectrum, reverse saturable absorption and lifetimes shorter than 200 fs were verified. The nonlinear refractive index is about 2.0 × 10⁻¹⁹ m²/W from the visible to the telecom region, with an enhancement effect at wavelengths near the plasmon and Cu²⁺ d-d bands. © 2013 Springer Science+Business Media New York.

Relevance: 60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Publisher:

Abstract:

AIM: The purpose of this study was to examine the effect of intensive table-tennis practice on perceptual, decision-making and motor systems. Groups of elite (HL=11), intermediate (LL=6) and control (CC=11) subjects performed tasks at different levels. METHODS: All subjects underwent a reaction-time test and a response-time test consisting of a pointing task to targets placed at distinct distances (15 and 25 cm) on the right and left sides. The ball speed test, in forehand and backhand conditions, was performed only by the HL and LL groups. RESULTS: Reaction time in the CC group was higher than in the HL group (P<0.05). In the response-time test, there were significant main effects of distance (P<0.0001) and table-tennis expertise (P=0.011). In the ball speed test, the HL group was consistently faster than the LL group in both the forehand stroke (P<0.0001) and the backhand stroke (P<0.0001). Overall, the forehand stroke was significantly faster than the backhand stroke. CONCLUSION: We conclude that table-tennis players have shorter response times than non-athletes, and that the reaction-time and response-time tasks are unable to distinguish the performance of well-trained table-tennis players from that of intermediate players, whereas the ball speed test seems able to do so.

Relevance: 60.00%

Publisher:

Abstract:

The application of pesticides is one of the most important steps in the agricultural production process. The spray volume can directly affect application success, and this parameter depends directly on the displacement speed of the sprayer. In conventional systems, the operator has to maintain a constant speed to ensure uniform application along the tracks. In order to improve application quality and preserve the precision of applied doses, electronic flow control systems allow automatic adjustment of the volume applied over the area when velocity changes during application. The objective of this research was to study the response times of a flow controller with DGPS for aerial application subjected to variations in velocity under laboratory-simulated flight conditions. For this purpose, a test bench was developed, including software for simulating DGPS signals, which was used to simulate different flight speeds and conditions. The results showed the average response time of the flow controller to a change in velocity to be between 6 and 20 seconds. Variations in total flow and in the controller setting had a significant influence on response time, with interaction between the evaluated factors in some situations. There was a tendency toward better response times when using a constant setting for the control algorithm other than that specified by the manufacturer. The flow controller presented average error rates below 2% in all evaluated operating conditions, providing satisfactory accuracy in metering the output of product in the different test situations.
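The bench-test measurement can be sketched as follows, with a first-order lag standing in for the controller; all names, constants and the settling tolerance are illustrative assumptions, not the authors' software.

```python
# Hypothetical sketch of the bench-test idea: a velocity step changes the
# target flow, a first-order lag models the flow controller, and the
# response time is the time needed to settle within a tolerance band.

def response_time(step_time, times, flows, target, tol=0.02):
    """Time from the velocity step until flow first comes within tol of target."""
    for t, q in zip(times, flows):
        if t >= step_time and abs(q - target) <= tol * target:
            return t - step_time
    return None

def simulate(dt=0.1, t_end=60.0, tau=4.0, step_time=10.0):
    """First-order controller response to a doubling of simulated flight speed."""
    times, flows = [], []
    q = 10.0                                       # current flow (L/min)
    t = 0.0
    while t <= t_end:
        target = 10.0 if t < step_time else 20.0   # speed step doubles demand
        q += (target - q) * dt / tau               # first-order lag update
        times.append(t)
        flows.append(q)
        t += dt
    return times, flows

times, flows = simulate()
rt = response_time(10.0, times, flows, target=20.0)
print(f"simulated response time: {rt:.1f} s")
```

With these illustrative constants the settling time falls inside the 6-20 s range reported in the abstract.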

Relevance: 60.00%

Publisher:

Abstract:

Our understanding of the climate of northern Sweden during the late Holocene is largely dependent on proxy-data series. These datasets remain spatially and temporally sparse, and instrumental series are rare prior to the mid-19th century. Nevertheless, the glaciology and paleo-glaciology of the region have strong potential significance for the exploration of climate-change scenarios, past and future. The aim of this thesis is to investigate the 19th- and 20th-century climate of the northern Swedish mountain range. This provides a good opportunity to analyse the natural variability of the climate before the onset of the industrial epoch. Developing a temporal understanding of fluctuations in glacier front positions and glacier mass balance, linked to a better understanding of their interaction and relative significance to climate, is fundamental to the assessment of past climate. I have chosen to investigate previously unexplored temperature data from northern Sweden from between 1802 and 1860 and combined them with a temperature series from a synoptic station in Haparanda, which began operation in 1859, in order to create a reliable long temperature series for the period 1802 to 2002. I have also investigated two glaciers, Pårteglaciären and Salajekna, which are located in different climatic environments. These glaciers have, from a Swedish perspective, long observational records. Furthermore, I have investigated a recurring jökulhlaup at the glacier Sälkaglaciären in order to analyse glacier-climate relationships with respect to jökulhlaups. A number of datasets are presented, including: glacier frontal changes, in situ and photogrammetric mass balance data, in situ and satellite radar interferometry measurements of surface velocity, radar measurements, ice volume data and a temperature series.
All these datasets are analysed in order to investigate the response of the glaciers to climatic stimuli, to attribute specific behaviour to particular climates and to analyse the 19th- and 20th-century glacier/climate relationships in northern Sweden. The 19th century was characterized by cold conditions in northern Sweden, particularly in winter. Significant changes in the amplitude of the annual temperature cycle are evident. Through the 19th century there is a marked decreasing trend in the amplitude of the data, suggesting a change towards a prevalence of maritime (westerly) air masses, something which has characterised the 20th century. The investigations on Salajekna support the conclusion that the major part of the 19th century was cold and dry. The 19th-century advance of Salajekna was probably caused by a colder climate in the late 18th and early 19th centuries, coupled with a weakening of the westerly airflow. The investigations on Pårteglaciären show that the glacier has a response time of ~200 years. They also suggest that there was a relatively high frequency of easterly winds providing the glacier with winter precipitation during the 19th century. Glaciers have very different response times and are sensitive to different climatic parameters. Glaciers in rather continental areas of the Subarctic and Arctic can have very long response times owing to mass-balance considerations rather than primarily to glacier dynamics. This is of vital importance for analyzing Arctic and Subarctic glacier behaviour from a global-change perspective. It is far from evident that the behaviour of glacier fronts today reflects the present climate.
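The ~200-year figure can be put in context with a standard order-of-magnitude estimate of glacier response time (the volume timescale of Jóhannesson et al., 1989): a characteristic ice thickness divided by the ablation rate at the terminus. The numbers below are illustrative, not measurements from Pårteglaciären.

```python
# Volume-timescale estimate of glacier response time: tau ~ H / |b_t|,
# where H is a characteristic ice thickness and b_t the (negative)
# mass-balance rate at the terminus. Input values are illustrative.

def response_time(thickness_m, terminus_ablation_m_per_yr):
    """Glacier response time in years from the volume timescale."""
    return thickness_m / abs(terminus_ablation_m_per_yr)

# a thick glacier in a continental setting with weak terminus ablation
print(response_time(200.0, 1.0))   # 200-year timescale
```

This simple ratio shows how continental glaciers with low ablation rates can have centennial response times, as argued in the thesis.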

Relevance: 60.00%

Publisher:

Abstract:

Bioinformatics is a recent and emerging discipline which aims at studying biological problems through computational approaches. Most branches of bioinformatics, such as Genomics, Proteomics and Molecular Dynamics, are particularly computationally intensive, requiring huge amounts of computational resources for running algorithms of ever-increasing complexity over data of ever-increasing size. In the search for computational power, the EGEE Grid platform, the world's largest community of interconnected clusters load-balanced as a whole, seems particularly promising and is considered the new hope for satisfying the ever-increasing computational requirements of bioinformatics, as well as of physics and other computational sciences. The EGEE platform, however, is rather new and not yet free of problems. In addition, the specific requirements of bioinformatics need to be addressed in order to use this new platform effectively for bioinformatics tasks. In my three years of Ph.D. work I addressed numerous aspects of this Grid platform, with particular attention to those needed by the bioinformatics domain. I created three major frameworks, Vnas, GridDBManager and SETest, plus an additional smaller standalone solution, to enhance the support for bioinformatics applications in the Grid environment and to reduce the effort needed to create new applications, additionally addressing numerous existing Grid issues and performing a series of optimizations. The Vnas framework is an advanced system for the submission and monitoring of Grid jobs that provides a reliable abstraction over the Grid platform. In addition, Vnas greatly simplifies the development of new Grid applications by providing a callback system that simplifies the creation of arbitrarily complex multistage computational pipelines, and provides an abstracted virtual sandbox which bypasses Grid limitations.
Vnas also reduces the usage of Grid bandwidth and storage resources by transparently detecting the equality of virtual sandbox files based on content, across different submissions, even when performed by different users. BGBlast, an evolution of the earlier GridBlast project, now provides a Grid Database Manager (GridDBManager) component for managing and automatically updating biological flat-file databases in the Grid environment. GridDBManager offers novel features such as an adaptive replication algorithm that constantly optimizes the number of replicas of the managed databases in the Grid environment, balancing between response times (performance) and storage costs according to a programmed cost formula. GridDBManager also provides highly optimized automated management of older versions of the databases based on reverse delta files, which reduces the storage cost required to keep such older versions available in the Grid environment by two orders of magnitude. The SETest framework provides a way for the user to test and regression-test Python applications riddled with side effects (a common case with Grid computational pipelines), which could not easily be tested using the more standard methods of unit testing or test cases. The technique is based on a new concept of datasets containing invocations and results of filtered calls. The framework thus significantly accelerates the development of new applications and computational pipelines for the Grid environment, and reduces the effort required for maintenance. An analysis of the impact of these solutions is provided in this thesis. This Ph.D. work originated various publications in journals and conference proceedings, as reported in the Appendix. I also orally presented my work at numerous international conferences related to Grid computing and bioinformatics.
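The trade-off that such a programmed cost formula encodes can be sketched as follows; the cost model, weights and numbers are illustrative assumptions, not GridDBManager's actual formula.

```python
# Hypothetical sketch of an adaptive replication trade-off: more replicas
# lower the expected response time but raise storage cost; a weighted cost
# formula picks the replica count that balances the two. All parameters
# are illustrative, not GridDBManager's real cost model.

def total_cost(replicas, base_latency_s, storage_gb, w_time=1.0, w_storage=0.05):
    """Weighted sum of expected response time and total storage cost."""
    expected_response = base_latency_s / replicas     # crude parallelism model
    storage_cost = storage_gb * replicas              # one copy per replica
    return w_time * expected_response + w_storage * storage_cost

def best_replica_count(base_latency_s, storage_gb, max_replicas=20, **weights):
    """Replica count minimizing the programmed cost formula."""
    return min(range(1, max_replicas + 1),
               key=lambda r: total_cost(r, base_latency_s, storage_gb, **weights))

print(best_replica_count(60.0, 10.0))   # prints 11 for these sample weights
```

An adaptive algorithm of this kind would re-evaluate the formula as observed latencies and database sizes change.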

Relevance: 60.00%

Publisher:

Abstract:

Crowding is defined as the negative effect obtained by adding visual distractors around a central target that has to be identified. Some studies have suggested the presence of a marked crowding effect in developmental dyslexia (e.g. Atkinson, 1991; Spinelli et al., 2002). Inspired by Spinelli's (2002) experimental design, we explored the hypothesis that the crowding effect may affect dyslexics' response times (RTs) and accuracy in identification tasks dealing with words, pseudowords, illegal non-words and symbol strings. Moreover, our study aimed to clarify the relationship between the crowding phenomenon and the word-reading process from an inter-language comparison perspective. For this purpose we studied twenty-two French and twenty-two Italian dyslexics (forty-four in total), compared with forty-four subjects matched for reading level (22 French and 22 Italian) and forty-four chronological-age-matched subjects (22 French and 22 Italian). All children were tested on reading and cognitive abilities. Results showed no differences between French and Italian participants, suggesting that performances were homogeneous. Dyslexic children were all significantly impaired in word and pseudoword reading compared to their normal-reading controls. Regarding the identification task with which we assessed the crowding effect, both accuracy and RTs showed a lexicality effect: recognition was more accurate and faster for words than for pseudowords, non-words and symbol strings. Moreover, compared to normal readers, dyslexics' RTs and accuracy were impaired only for verbal material, not for non-verbal material; these results are in line with the phonological hypothesis (Griffiths & Snowling, 2002; Snowling, 2000; 2006). RTs revealed a general crowding effect (RTs in the crowding condition were slower than those recorded in the isolated condition) affecting all subjects' performances.
This effect, however, turned out not to be specific to dyslexics. The data did not reveal a significant effect of language, allowing the generalization of the obtained results. We also analyzed the performance of two subgroups of dyslexics, categorized according to their reading abilities. The two subgroups produced different results regarding the crowding effect and the type of material, suggesting that it is meaningful to also take into account the heterogeneity of the dyslexia disorder. Finally, we analyzed the relationship of the identification task with both reading and cognitive abilities. In conclusion, this study points out the importance of comparing the visual-task performances of dyslexic participants with those of their reading-level-matched controls. This approach may improve our comprehension of the potential causal link between crowding and reading (Goswami, 2003).

Relevance: 60.00%

Publisher:

Abstract:

In this work I address the study of language comprehension within an "embodied" framework. First, I present behavioral evidence supporting the idea that language modulates the motor system in a specific way, both at a proximal level (sensitivity to the effectors) and at a distal level (sensitivity to the goal of the action in which the single motor acts are embedded). I present two studies in which the method is basically the same: we manipulated the linguistic stimuli (the kind of sentence: hand action vs. foot action vs. mouth action) and the effector with which participants had to respond (hand vs. foot vs. mouth; dominant hand vs. non-dominant hand). Response-time analyses showed a specific modulation depending on the kind of sentence: participants were facilitated in the task (a sentence sensibility judgment) when the effector they had to use to respond was the same as the one to which the sentences referred. That is, during language comprehension a pre-activation of the motor system seems to take place. This activation is analogous (even if less intense) to the one detectable when we actually execute the action described by the sentence. Beyond this effector-specific modulation, we also found an effect of the goal suggested by the sentence. That is, the hand effector was pre-activated not only by hand-action-related sentences, but also by sentences describing mouth actions, consistent with the fact that to execute an action on an object with the mouth we first have to bring it to the mouth with the hand. After reviewing the evidence on simulation specificity directly referring to the body (for instance, the kind of effector activated by the language), I focus on specific properties of the object to which the words refer, particularly its weight. In this case the hypothesis to test was whether both lifting-movement perception and lifting-movement execution are modulated by language comprehension.
We used behavioral and kinematic methods, and we manipulated the linguistic stimuli (the kind of sentence: lifting heavy objects vs. lifting light objects). To study movement perception we measured the correlations between the weight of the objects lifted by an actor (heavy vs. light) and the estimates provided by the participants. To study movement execution we measured the variance of kinematic parameters (velocity, acceleration, time to the first peak of velocity) during the actual lifting of objects (heavy vs. light). Both kinds of measures revealed that language had a specific effect on the motor system, at both the perceptual and the motor level. Finally, I address the issue of abstract words. Various studies in the "embodied" framework have tried to explain the meaning of abstract words. The limit of these works is that they account only for subsets of phenomena, so the results are difficult to generalize. We tried to circumvent this problem by contrasting transitive verbs (abstract and concrete) and nouns (abstract and concrete) in different combinations. The behavioral study was conducted with both German and Italian participants, as the two languages are syntactically different. We found that response times were faster for the compatible pairs (concrete verb + concrete noun; abstract verb + abstract noun) than for the mixed ones. Interestingly, for the mixed combinations the analyses showed a modulation due to the specific language (German vs. Italian): when the concrete word preceded the abstract one, responses were faster, regardless of the word's grammatical class. Results are discussed in the framework of current views on abstract words. They highlight the important role of developmental and social aspects of language use, and confirm theories assigning a crucial role to both sensorimotor and linguistic experience for abstract words.

Relevance: 60.00%

Publisher:

Abstract:

Summary: The present work deals with the synthesis and characterization of multifunctional, arylamine-containing polymers suitable as photorefractive (PR) materials. The glass-transition temperatures (Tg) of the target materials lie well above room temperature, so as to favor the Pockels mechanism for building up the PR effect. To this end, two synthetic concepts were developed, based on maleimide-methyl vinyl isocyanate reactive polymers and on triphenylamine-containing polymers. Within the reactive-polymer concept, PR materials were prepared with the largest diffraction efficiencies and the fastest response times reported so far for multifunctional high-Tg polymers. For this purpose, maleimide-methyl vinyl isocyanate reactive polymers were synthesized that are functionalized at the imide position, via spacer groups, with carbazole units. The Tg values of the polymers could be adjusted between 60 °C and 194 °C. The isocyanate groups were then reacted in polymer-analogous fashion with hydroxyalkyl-functionalized chromophores. The kinetics of the PR effect in these materials is governed by the charge-carrier mobilities in the samples. Increasing the dye concentration raises the PR performance of the materials but hampers their kinetics. The triphenylamine-polymer concept uses triphenylamines as hole conductors. To this end, the radical polymerization behavior of the monomers p-diphenylaminostyrene (DPAS) and, for the first time, p-ditoluylaminostyrene (DTAS) was investigated. The monomers were polymerized by spontaneous, free and controlled radical methods. Block copolymers were prepared using a TEMPO derivative. Poly-DPAS, in contrast to poly-DTAS, could be tricyanovinylated in a polymer-analogous reaction. This allows PDPAS-block-PTPAS copolymers to be tricyanovinylated selectively in the PDPAS block. These materials show a tendency toward microphase separation. The patterning of PDPAS by photopolymerization was demonstrated with a resolution of a few mm. Carbazole- and triphenylamine-containing materials were investigated by cyclic voltammetry.

Relevance: 60.00%

Publisher:

Abstract:

In this thesis, the main Executive Control theories are presented. Methods typical of Cognitive and Computational Neuroscience are introduced, and the role of behavioural tasks involving conflict resolution in the elaboration of a response, after the presentation of a stimulus to the subject, is highlighted. In particular, the Eriksen Flanker Task and its variants are discussed. Behavioural data from the scientific literature are illustrated in terms of response times and error rates. During experimental behavioural tasks, EEG is recorded simultaneously, so that event-related potentials associated with the current task can be studied. Different theories regarding the event-related potentials relevant in this field, such as the N2, the fERN (feedback Error-Related Negativity) and the ERN (Error-Related Negativity), are introduced. The aim of this thesis is to understand and simulate the processes underlying Executive Control, including performance improvement, error-detection mechanisms, post-error adjustments and the role of selective attention, with the help of an original neural network model. The network described here was built to simulate the behavioural results of a four-choice Eriksen Flanker Task. Model results show that the neural network can simulate response times, error rates and event-related potentials quite well. Finally, the results are compared with behavioural data and discussed in light of the aforementioned Executive Control theories. Future perspectives for this new model are outlined.
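The core conflict mechanism behind flanker response-time costs can be illustrated with a much simpler evidence-accumulation sketch (this is not the thesis's network; all parameters are illustrative): incongruent flankers weaken the early drift toward the correct response, lengthening simulated response times.

```python
# Minimal random-walk accumulator for a flanker trial: evidence drifts
# toward a decision threshold; incongruent flankers reduce the drift
# during an early interference window. Illustrative parameters only.
import random

def simulate_trial(congruent, threshold=30.0):
    """Accumulate noisy evidence to a threshold; returns (RT in ms, correct)."""
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        # weaker drift for the first 100 ms of an incongruent trial
        drift = 0.5 if (congruent or t >= 100) else 0.1
        evidence += drift + random.gauss(0.0, 1.0)
        t += 1                                    # 1 ms per step
    return t, evidence > 0

random.seed(0)

def mean_rt(congruent, n=500):
    return sum(simulate_trial(congruent)[0] for _ in range(n)) / n

rt_con, rt_inc = mean_rt(True), mean_rt(False)
print(f"congruent ~{rt_con:.0f} ms, incongruent ~{rt_inc:.0f} ms")
```

The simulated congruency cost (slower incongruent trials) is the behavioural signature that the thesis's full network reproduces alongside error rates and event-related potentials.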

Relevance: 60.00%

Publisher:

Abstract:

In the domain of safety-relevant embedded systems, the design process of applications is very complex. Given a hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing the parallel execution is of utmost importance. Existing approaches can quickly compute design alternatives, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to the problem which satisfy all timing constraints. Our algorithm uses linear programming with column generation, embedded in a branching tree, to provide lower and upper bounds during the optimization process. The complex side constraints guaranteeing periodic execution are shifted, via a decomposition of the master problem, into independent subproblems formulated as integer linear programs. Both the process-scheduling analyses and the signal-transmission methods are examined, and linearized representations are given. Furthermore, we present a new formulation for fixed-priority scheduling that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analyzing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly to prove the optimality of heuristic solutions. When we deliver optimal solutions with response times, our new formulation compares favorably with other approaches in terms of runtime. The best results are obtained with a hybrid approach combining heuristic start solutions, preprocessing, and a heuristic phase followed by a short exact computation phase.
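The worst-case response-time analysis referred to above is classically obtained from the fixed-priority recurrence R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i/T_j⌉·C_j, iterated to a fixed point. The sketch below uses an illustrative task set, not an instance from the thesis.

```python
# Classical fixed-priority worst-case response-time analysis: iterate the
# recurrence R = C_i + sum over higher-priority tasks of ceil(R/T_j)*C_j
# until it reaches a fixed point or exceeds the deadline.
from math import ceil

def worst_case_response_time(c_i, higher_prio, deadline):
    """Fixed point of the response-time recurrence; None if deadline exceeded.

    higher_prio is a list of (C_j, T_j) pairs for higher-priority tasks.
    """
    r = c_i
    while True:
        r_next = c_i + sum(ceil(r / t_j) * c_j for c_j, t_j in higher_prio)
        if r_next == r:
            return r
        if r_next > deadline:
            return None          # unschedulable within the deadline
        r = r_next

# illustrative task set as (C, T) pairs, highest priority first; deadline = period
tasks = [(1, 4), (2, 6), (3, 12)]
for i, (c, t) in enumerate(tasks):
    print(i, worst_case_response_time(c, tasks[:i], deadline=t))
# prints worst-case response times 1, 3 and 10
```

A linearized version of exactly this kind of analysis is what the decomposition embeds into its integer linear subproblems.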

Relevance: 60.00%

Publisher:

Abstract:

Keyboards, mice, and touch screens are a potential source of infection or contamination in operating rooms, intensive care units, and autopsy suites. The authors present a low-cost prototype of a system which allows touch-free control of a medical image viewer. This touch-free navigation system consists of a computer (iMac, OS X 10.6; Apple, USA) running a medical image viewer (OsiriX; OsiriX Foundation, Switzerland) and a depth camera (Kinect; Microsoft, USA). They implemented software that translates the data delivered by the camera, together with voice recognition software, into keyboard and mouse commands, which are then passed to OsiriX. In this feasibility study, the authors introduced 10 medical professionals to the system and asked them to re-create 12 images from a CT data set. They evaluated response times and the usability of the system compared with standard mouse/keyboard control. Users felt comfortable with the system after approximately 10 minutes. Response time was 120 ms. Users required 1.4 times more time to re-create an image with gesture control. Users with OsiriX experience were significantly faster using the mouse/keyboard, and faster than users without prior experience. They rated the system 3.4 out of 5 for ease of use in comparison to the mouse/keyboard. The touch-free, gesture-controlled system performs favorably and removes a potential vector for infection, protecting both patients and staff. Because the camera can be quickly and easily integrated into existing systems, requires no calibration, and is low cost, the barriers to using this technology are low.
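A minimal sketch of such a translation layer, assuming hypothetical event names and key bindings (the authors' actual mappings are not given in the abstract):

```python
# Hypothetical dispatch table: recognized gesture/voice events are mapped
# to keyboard and mouse commands for the image viewer. Event names and
# bindings below are illustrative assumptions, not the authors' software.

BINDINGS = {
    "swipe_left":  ("key", "left"),     # previous image
    "swipe_right": ("key", "right"),    # next image
    "push":        ("mouse", "click"),  # select
    "voice:zoom":  ("key", "plus"),     # zoom in
}

def translate(event):
    """Return the (device, command) pair for a recognized event, or None."""
    return BINDINGS.get(event)

print(translate("swipe_right"))   # prints ('key', 'right')
```

In the described prototype, commands produced by such a layer are injected as ordinary keyboard/mouse input, so the viewer itself needs no modification.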

Relevance: 60.00%

Publisher:

Abstract:

The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four-dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and the multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity-modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV to provide planned dose distributions. Respiratory motion data were obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between the predicted and actual positions at each diaphragm position. Distributions of the geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with the distributions of the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose-difference and distance-to-agreement analyses were employed to quantify the results.
Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiration motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
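The convolution step described above can be sketched in one dimension; the flat field and the roughly Gaussian error distribution below are illustrative stand-ins for the study's planned dose distributions and measured prediction errors.

```python
# 1D sketch of blurring a planned dose profile with the distribution of
# geometric prediction errors. Profile and error distribution are
# illustrative, not the study's data.
import math

def convolve_same(signal, kernel):
    """Same-size 1D convolution with the kernel normalized to unit sum."""
    total = sum(kernel)
    k = [v / total for v in kernel]           # normalize to a probability
    half = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * kv
        out.append(acc)
    return out

# idealized flat 20 mm field on a 0.5 mm grid, ~Gaussian 1 mm error kernel
xs = [-30 + 0.5 * i for i in range(121)]
dose = [1.0 if -10 < x < 10 else 0.0 for x in xs]
err = [math.exp(-0.5 * (-3 + 0.5 * j) ** 2) for j in range(13)]
blurred = convolve_same(dose, err)

# the penumbra broadens while the integral dose is preserved
print(round(sum(dose), 3), round(sum(blurred), 3))
```

The blurred profile shows the characteristic effect reported in the abstract: edges soften with increasing error width while the total delivered dose stays the same.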