16 results for Single-item

in Helda - Digital Repository of University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

In visual search one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, together with the contributions of different stimulus properties, such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e., the number of items that can be processed during a single fixation, were measured. It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e., within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within object type, especially at very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance for different stimulus layouts in word search. The perceptual span was scale invariant over a 16-fold range of stimulus sizes, i.e., the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance. It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within the perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility, and the resolution requirements for identifying different object types, are proposed to constrain the size of the perceptual span and thus limit visual search performance. Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data on the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception could be taken into account in the design of user interfaces.
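
The adaptive staircase procedure that controlled presentation duration is not spelled out in the abstract; as a minimal sketch, a standard one-up/two-down rule (which converges near the 71% correct level) adjusting duration could look like the Python below. All parameter values and the helper run_trial are illustrative assumptions, not the study's settings.

def staircase_threshold(run_trial, start_ms=2000.0, step_factor=1.25, n_reversals=12):
    """Estimate threshold search time with a one-up/two-down staircase.
    run_trial(duration_ms) -> True if the target was found, else False.
    The step factor and the number of reversals are assumed values."""
    duration = start_ms
    correct_streak = 0
    last_direction = None            # +1 = made harder (shorter), -1 = made easier
    reversal_durations = []
    while len(reversal_durations) < n_reversals:
        if run_trial(duration):
            correct_streak += 1
            if correct_streak < 2:
                continue             # need two correct in a row before stepping down
            correct_streak = 0
            direction = +1
            duration /= step_factor
        else:
            correct_streak = 0
            direction = -1
            duration *= step_factor
        if last_direction is not None and direction != last_direction:
            reversal_durations.append(duration)
        last_direction = direction
    # Threshold estimate: mean presentation duration at the reversal points.
    return sum(reversal_durations) / len(reversal_durations)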

Relevance:

20.00%

Publisher:

Abstract:

It has been suggested that semantic information processing is modularized according to the input form (e.g., visual, verbal, non-verbal sound). A great deal of research has concentrated on detecting a separate verbal module. It has also traditionally been assumed in linguistics that the meaning of a single clause is computed before integration into a wider context. Recent research has called these views into question. The present study explored whether it is reasonable to assume separate verbal and nonverbal semantic systems in the light of evidence from event-related potentials (ERPs). The study also provided information on whether context influences the processing of a single clause before its local meaning is computed. The focus was on an ERP called the N400. Its amplitude is assumed to reflect the effort required to integrate an item into the preceding context. For instance, if a word is anomalous in its context, it will elicit a larger N400. The N400 has been observed in experiments using both verbal and nonverbal stimuli. The contents of a single sentence alone were not hypothesized to influence the N400 amplitude; only the combined contents of the sentence and the picture were. The subjects (n = 17) viewed pictures on a computer screen while hearing sentences through headphones. Their task was to judge the congruency of the picture and the sentence. There were four conditions: 1) the picture and the sentence were congruent and sensible, 2) the sentence and the picture were congruent, but the sentence ended anomalously, 3) the picture and the sentence were incongruent but sensible, 4) the picture and the sentence were incongruent and anomalous. Stimuli from the four conditions were presented in a semi-randomized sequence while the subjects' electroencephalogram (EEG) was recorded. ERPs were computed for the four conditions. The amplitude of the N400 effect was largest for the incongruent sentence-picture pairs. The anomalously ending sentences did not elicit a larger N400 than the sensible sentences. The results suggest that there is no separate verbal semantic system, and that the meaning of a single clause is not processed independently of the context.
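
As a rough illustration of how condition-wise ERPs and an N400 measure are obtained from the recorded EEG, the sketch below averages baseline-corrected epochs per condition and takes the mean amplitude in a window around 400 ms. The sampling rate, epoch timing, and window are assumptions for illustration, not the study's parameters.

import numpy as np

FS = 500          # sampling rate in Hz (assumed)
PRESTIM = 0.2     # epoch assumed to start 200 ms before stimulus onset

def erps_per_condition(epochs, labels):
    """epochs: array (n_trials, n_samples) from one channel, baseline-corrected;
    labels: condition code per trial. Returns {condition: averaged ERP}."""
    labels = np.asarray(labels)
    return {c: epochs[labels == c].mean(axis=0) for c in np.unique(labels)}

def n400_mean_amplitude(erp, t_start=0.3, t_end=0.5):
    """Mean amplitude in an assumed 300-500 ms post-onset window."""
    i0 = int((PRESTIM + t_start) * FS)
    i1 = int((PRESTIM + t_end) * FS)
    return erp[i0:i1].mean()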

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of two parts. In the first part we performed a single-molecule force-extension measurement with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers had a contour length within ±15% of the expected value (3.38 µm). Only 25% of the found DNA tethers had a persistence length between 30 and 60 nm, whereas the correct value should lie within 40 to 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drifting of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
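
The worm-like chain interpolation formula used for the fits is commonly written in the Marko-Siggia form; a sketch of fitting it to force-extension data to recover the contour length and persistence length of a tether might look like this. The initial guesses are assumptions chosen near the expected values quoted above.

import numpy as np
from scipy.optimize import curve_fit

KBT = 4.11e-21    # thermal energy at ~298 K, in joules

def wlc_force(x, Lc, Lp):
    """Marko-Siggia interpolation: force (N) at extension x (m) for a chain
    with contour length Lc (m) and persistence length Lp (m)."""
    r = x / Lc
    return (KBT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

def fit_wlc(extension_m, force_N):
    """Fit Lc and Lp to measured force-extension data; returns (Lc, Lp).
    Initial guesses (3.4 um, 50 nm) are illustrative assumptions."""
    (Lc, Lp), _ = curve_fit(wlc_force, extension_m, force_N, p0=(3.4e-6, 50e-9))
    return Lc, Lp

Fitted values can then be compared against the expected contour length of 3.38 µm and the 40-60 nm range expected for the persistence length.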

Relevance:

20.00%

Publisher:

Abstract:

The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with a future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and study V also with respect to MI location. In study VI the depolarization and repolarization variables were examined in 79 patients in the face of evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared indifferent to the Q-wave status, the time elapsed from MI, and the MI or ischemia location. In the face of evolving ischemic injury the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced into the QT integral variable recorded at a single site on the left flank. In conclusion, the QT integral variable, detectable in a single lead with an optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. The QT integral, in a single lead or a small number of leads, offers potential for automated screening of ischemic heart disease, acute ischemia monitoring, therapeutic decision-guiding, and risk stratification.
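
The abstract does not define the QT integral in detail; read as the time integral of the single-lead ECG signal from QRS onset to the end of the T wave, a minimal sketch of its computation could be the following. The fiducial points are assumed to come from a separate delineation step that is not shown.

import numpy as np

def qt_integral(ecg_mv, fs, qrs_onset_idx, t_end_idx):
    """Time integral of a single-lead ECG over the QT interval, in mV*s.
    ecg_mv: 1-D signal in millivolts; fs: sampling rate in Hz;
    qrs_onset_idx, t_end_idx: sample indices of QRS onset and T-wave end
    (assumed to be provided by upstream ECG delineation)."""
    segment = ecg_mv[qrs_onset_idx:t_end_idx]
    return np.trapz(segment, dx=1.0 / fs)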

Relevance:

20.00%

Publisher:

Abstract:

We report a measurement of the single top quark production cross section in 2.2 fb-1 of p-pbar collision data collected by the Collider Detector at Fermilab at sqrt{s}=1.96 TeV. Candidate events are classified as signal-like by three parallel analyses which use likelihood, matrix element, and neural network discriminants. These results are combined in order to improve the sensitivity. We observe a signal consistent with the standard model prediction, but inconsistent with the background-only model by 3.7 standard deviations, with a median expected sensitivity of 4.9 standard deviations. We measure a cross section of 2.2 +0.7 -0.6 (stat+sys) pb, extract the CKM matrix element value |V_{tb}| = 0.88 +0.13 -0.12 (stat+sys) +- 0.07 (theory), and set the limit |V_{tb}| > 0.66 at the 95% C.L.
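
The |V_{tb}| value follows from the fact that the single top quark production rate scales with |V_{tb}|^2, so the measured cross section can be compared with the standard model prediction; the short sketch below illustrates the arithmetic, assuming an approximate SM cross section of about 2.9 pb for m_t = 175 GeV/c^2 (an assumed round value, not taken from the abstract).

# Schematic |V_tb| extraction: sigma_t is proportional to |V_tb|^2.
sigma_meas = 2.2     # measured cross section, pb
sigma_sm = 2.9       # assumed approximate SM prediction, pb
v_tb = (sigma_meas / sigma_sm) ** 0.5
print(f"|V_tb| ~ {v_tb:.2f}")   # ~0.87, in line with the quoted 0.88 before uncertainties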

Relevance:

20.00%

Publisher:

Abstract:

We report a search for single top quark production with the CDF II detector using 2.1 fb-1 of integrated luminosity of pbar p collisions at sqrt{s}=1.96 TeV. The selected data consist of events characterized by a large energy imbalance in the transverse plane and hadronic jets, with no identified electrons or muons, so the sample is enriched in W -> tau nu decays. In order to suppress backgrounds, additional kinematic and topological requirements are imposed through a neural network, and at least one of the jets must be identified as a b-quark jet. We measure an excess of signal-like events in agreement with the standard model prediction, but inconsistent with a model without single top quark production by 2.1 standard deviations (sigma), with a median expected sensitivity of 1.4 sigma. Assuming a top quark mass of 175 GeV/c^2 and ascribing the excess to single top quark production, the cross section is measured to be 4.9 +2.5 -2.2 (stat+syst) pb, consistent with measurements performed in independent datasets and with the standard model prediction.
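
The "large energy imbalance in the transverse plane" used in the selection is the missing transverse energy, i.e. the magnitude of the negative vector sum of the measured transverse momenta; a rough illustration (not the CDF reconstruction code) is sketched below.

import math

def missing_et(objects):
    """Missing transverse energy from a list of (pt, phi) pairs, with pt in GeV
    and phi in radians; a rough illustration, not the CDF reconstruction."""
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py)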

Relevance:

20.00%

Publisher:

Abstract:

We report the observation of electroweak single top quark production in 3.2 fb-1 of pp̅ collision data collected by the Collider Detector at Fermilab at √s=1.96 TeV. Candidate events in the W+jets topology with a leptonically decaying W boson are classified as signal-like by four parallel analyses based on likelihood functions, matrix elements, neural networks, and boosted decision trees. These results are combined using a super discriminant analysis based on genetically evolved neural networks in order to improve the sensitivity. This combined result is further combined with that of a search for a single top quark signal in an orthogonal sample of events with missing transverse energy plus jets and no charged lepton. We observe a signal consistent with the standard model prediction but inconsistent with the background-only model by 5.0 standard deviations, with a median expected sensitivity in excess of 5.9 standard deviations. We measure a production cross section of 2.3 +0.6 -0.5 (stat+sys) pb, extract the value of the Cabibbo-Kobayashi-Maskawa matrix element |Vtb| = 0.91 ± 0.11 (stat+sys) ± 0.07 (theory), and set a lower limit |Vtb| > 0.71 at the 95% C.L., assuming mt = 175 GeV/c².

Relevance:

20.00%

Publisher:

Abstract:

We report the observation of electroweak single top quark production in 3.2 fb-1 of ppbar collision data collected by the Collider Detector at Fermilab at sqrt{s}=1.96 TeV. Candidate events in the W+jets topology with a leptonically decaying W boson are classified as signal-like by four parallel analyses based on likelihood functions, matrix elements, neural networks, and boosted decision trees. These results are combined using a super discriminant analysis based on genetically evolved neural networks in order to improve the sensitivity. This combined result is further combined with that of a search for a single top quark signal in an orthogonal sample of events with missing transverse energy plus jets and no charged lepton. We observe a signal consistent with the standard model prediction but inconsistent with the background-only model by 5.0 standard deviations, with a median expected sensitivity in excess of 5.9 standard deviations. We measure a production cross section of 2.3 +0.6 -0.5 (stat+sys) pb, extract the CKM matrix element value |Vtb| = 0.91 +- 0.11 (stat+sys) +- 0.07 (theory), and set a lower limit |Vtb| > 0.71 at the 95% confidence level, assuming m_t = 175 GeV/c^2.

Relevance:

20.00%

Publisher:

Abstract:

We report a measurement of the single top quark production cross section in 2.2 fb-1 of p-pbar collision data collected by the Collider Detector at Fermilab at sqrt{s}=1.96 TeV. Candidate events are classified as signal-like by three parallel analyses which use likelihood, matrix element, and neural network discriminants. These results are combined in order to improve the sensitivity. We observe a signal consistent with the standard model prediction, but inconsistent with the background-only model by 3.7 standard deviations, with a median expected sensitivity of 4.9 standard deviations. We measure a cross section of 2.2 +0.7 -0.6 (stat+sys) pb, extract the CKM matrix element value |V_{tb}| = 0.88 +0.13 -0.12 (stat+sys) +- 0.07 (theory), and set the limit |V_{tb}| > 0.66 at the 95% C.L.

Relevance:

20.00%

Publisher:

Abstract:

We report the first observation of single top quark production using 3.2 fb^-1 of pbar p collision data with sqrt{s}=1.96 TeV collected by the Collider Detector at Fermilab. The significance of the observed data is 5.0 standard deviations, and the expected sensitivity for standard model production and decay is in excess of 5.9 standard deviations. Assuming m_t=175 GeV/c^2, we measure a cross section of 2.3 +0.6 -0.5 (stat+syst) pb, extract the CKM matrix element value |V_{tb}| = 0.91 +- 0.11 (stat+syst) +- 0.07 (theory), and set the limit |V_{tb}| > 0.71 at the 95% C.L.

Relevance:

20.00%

Publisher:

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers and on some of his own computers, but also at some online services which he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
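
The requirement that references to data items be cryptographically verifiable is commonly met by content addressing, where an item's identifier is a hash of its bytes; the sketch below illustrates the idea, not Peerscape's actual scheme, which the abstract does not detail.

import hashlib

def make_reference(data: bytes) -> str:
    """Reference a data item by the SHA-256 hash of its content."""
    return hashlib.sha256(data).hexdigest()

def verify(reference: str, data: bytes) -> bool:
    """True if bytes retrieved from any replica match the reference used to fetch them."""
    return hashlib.sha256(data).hexdigest() == reference

photo = b"...family photo bytes..."
ref = make_reference(photo)
assert verify(ref, photo)    # a copy served by any computer or service checks out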

Relevance:

20.00%

Publisher:

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers and on some of his own computers, but also at some online services which he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.

Relevance:

20.00%

Publisher:

Abstract:

Molecular machines on the micro-scale, believed to be the fundamental building blocks of life, involve forces of 1-100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and the mechanisms of protein/nucleic-acid interactions at the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results of the study indicate that the presented models describing the signal-to-noise ratio in constant-force experiments and feedback-control experiments in optical tweezers agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results from a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved. The developed instrument and the presented results on feedback control serve as a stepping stone for future contributions to the growing field of single-molecule biology.
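
As an illustration of the proportional position clamp and the resulting increase in effective trap stiffness, the sketch below simulates an overdamped bead in a harmonic trap whose centre is steered against the measured bead displacement; all parameter values are assumptions for illustration, not the instrument's settings.

import numpy as np

rng = np.random.default_rng(0)
kBT = 4.11e-21      # J, thermal energy at ~298 K
k = 5e-5            # N/m, trap stiffness (assumed, ~0.05 pN/nm)
gamma = 9.4e-9      # N*s/m, drag of a ~1 um bead in water (assumed)
dt = 1e-5           # s, controller update interval (assumed)
gain = 9.0          # proportional gain; with ideal feedback k_eff ~ k * (1 + gain)

x, x_trap = 0.0, 0.0
positions = []
for _ in range(200_000):
    thermal_kick = np.sqrt(2 * kBT * dt / gamma) * rng.standard_normal()
    x += (dt / gamma) * (-k * (x - x_trap)) + thermal_kick   # overdamped Langevin step
    x_trap = -gain * x                                       # proportional position clamp
    positions.append(x)

k_eff = kBT / np.var(positions)   # equipartition estimate of the effective stiffness
print(f"effective stiffness: {k_eff / k:.1f} x the bare trap stiffness")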