912 results for Optics in computing


Relevance:

30.00%

Publisher:

Abstract:

Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are wireless warning lights that flash in a sequence to clearly delineate the taper at work zones. The effectiveness of sequential lights was investigated using controlled field studies. Traffic parameters were collected at the same field site with and without the deployment of sequential lights. Three surrogate performance measures were used to determine the impact of sequential lights on safety: the speeds of approaching vehicles, the number of late taper merges, and the locations where vehicles merged from the closed lane into the open lane. In addition, an economic analysis was conducted to monetize the benefits and costs of deploying sequential lights at nighttime work zones. The results of this study indicate that sequential warning lights had a net positive effect in reducing the speeds of approaching vehicles, enhancing driver compliance, and reducing late taper merges by passenger cars and trucks at rural work zones. Statistically significant decreases of 2.21 mph in mean speed and 1 mph in 85th percentile speed resulted with sequential lights. The shift of the cumulative speed distributions to the left (i.e., a speed decrease) was also found to be statistically significant using the Mann-Whitney and Kolmogorov-Smirnov tests. However, a statistically significant increase of 0.91 mph in the speed standard deviation also resulted with sequential lights. With sequential lights, the percentage of vehicles that merged early increased from 53.49% to 65.36%. A benefit-cost ratio of roughly 5 or 10 resulted from this analysis of Missouri nighttime work zones and historical crash data; the two different benefit-cost ratios reflect two different ways of computing labor costs.
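The abstract reports the speed shift as significant under both the Mann-Whitney and Kolmogorov-Smirnov tests. As a rough illustration of that comparison (simulated speed samples with a ~2.2 mph mean difference, not the study's data or analysis script):

```python
import numpy as np
from scipy.stats import mannwhitneyu, ks_2samp

rng = np.random.default_rng(1)
# Hypothetical approach speeds (mph) without and with sequential lights.
speeds_without = rng.normal(55.0, 5.0, 300)
speeds_with = rng.normal(52.8, 5.9, 300)   # ~2.2 mph lower mean, slightly larger spread

print("mean difference:", speeds_without.mean() - speeds_with.mean())
print("Mann-Whitney U:", mannwhitneyu(speeds_without, speeds_with, alternative="greater"))
print("Kolmogorov-Smirnov:", ks_2samp(speeds_without, speeds_with))
```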

Relevance:

30.00%

Publisher:

Abstract:

Photopolymerization is commonly used in a broad range of bioapplications, such as drug delivery, tissue engineering, and surgical implants, where liquid materials are injected and then hardened by means of illumination to create a solid polymer network. However, photopolymerization using a probe, e.g., a needle guiding both the liquid and the curing illumination, has not been thoroughly investigated. We present a Monte Carlo model that takes into account the dynamic absorption and scattering parameters as well as solid-liquid boundaries of the photopolymer to yield the shape and volume of minimally invasively injected, photopolymerized hydrogels. In the first part of the article, our model is validated using a set of well-known poly(ethylene glycol) dimethacrylate hydrogels, showing excellent agreement between simulated and experimental volume growth rates. In the second part, in situ experimental results and simulations for photopolymerization in tissue cavities are presented. It was found that a cavity with a volume of 152 mm³ can be photopolymerized from the output of a 0.28-mm² fiber by adding scattering lipid particles, while only a volume of 38 mm³ (25%) was achieved without particles. The proposed model provides a simple and robust method to solve complex photopolymerization problems, where the dimension of the light source is much smaller than the volume of the photopolymerizable hydrogel.
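The paper's model tracks photons through a medium whose absorption and scattering evolve during curing. As a much-reduced sketch of the underlying Monte Carlo idea (static optical properties, isotropic scattering, no solid-liquid boundaries; all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_walk(n_photons=2000, mu_a=0.1, mu_s=1.0):
    """Toy photon random walk: absorption handled by weight reduction,
    isotropic re-scattering at each step; mu_a and mu_s in 1/mm."""
    mu_t = mu_a + mu_s
    radii = np.empty(n_photons)
    for i in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])
        weight = 1.0
        while weight > 1e-3:
            step = -np.log(1.0 - rng.random()) / mu_t      # sample free path length
            pos = pos + step * direction
            weight *= mu_s / mu_t                          # fraction surviving absorption
            cos_t = 2.0 * rng.random() - 1.0               # isotropic new direction
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        radii[i] = np.linalg.norm(pos)                     # distance at which the photon dies
    return radii

print("median termination radius:", np.median(photon_walk()), "mm")
```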

Relevance:

30.00%

Publisher:

Abstract:

Fluorescence imaging for detection of non-muscle-invasive bladder cancer is based on the selective production and accumulation of fluorescing porphyrins, mainly protoporphyrin IX, in cancerous tissues after the instillation of Hexvix®. Although the sensitivity of this procedure is very good, its specificity is somewhat limited due to fluorescence false-positive sites. Consequently, magnification cystoscopy has been investigated in order to discriminate false from true positive fluorescence findings. Both white-light and fluorescence modes are possible with the magnification cystoscope, allowing observation of the bladder wall at magnifications ranging between 30× for standard observation and 650×. The optical zooming setup allows the magnification to be adjusted continuously in situ. In the high-magnification (HM) regime, the smallest diameter of the field of view is 600 microns and the resolution is 2.5 microns when in contact with the bladder wall. With this cystoscope, we characterized the superficial vascularization of the fluorescing sites in order to discriminate cancerous from noncancerous tissues. This procedure allowed us to establish a classification based on the observed vascular patterns. Seventy-two patients undergoing Hexvix® fluorescence cystoscopy were included in the study. Comparison of the HM cystoscopy classification with histopathology results confirmed 32/33 (97%) cancerous biopsies and rejected 17/20 (85%) noncancerous lesions.
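The 97% and 85% figures correspond to the sensitivity and specificity of the HM vascular-pattern classification against histopathology. The short calculation below just makes that arithmetic explicit; the per-lesion counts come from the abstract, while the derived positive predictive value is not stated there:

```python
# Counts reported in the abstract: 32 of 33 cancerous biopsies confirmed,
# 17 of 20 noncancerous lesions correctly rejected.
tp, fn = 32, 1
tn, fp = 17, 3

sensitivity = tp / (tp + fn)   # 32/33 ~ 0.97
specificity = tn / (tn + fp)   # 17/20 = 0.85
ppv = tp / (tp + fp)           # 32/35 ~ 0.91 (not quoted in the abstract)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, PPV={ppv:.2f}")
```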

Relevance:

30.00%

Publisher:

Abstract:

This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find an optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable with different workloads while achieving significantly higher throughput and energy efficiency than with a traditional fixed number of jobs or a fixed memory threshold approach.
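The abstract does not spell out the controller's rule base. Purely as a loose, hypothetical illustration of adapting a memory threshold to the overall load, a piecewise-linear stand-in for the fuzzy membership functions might look like this:

```python
def adjust_threshold(threshold_gb, mem_load, low=0.6, high=0.85, step_gb=0.5):
    """Nudge the memory threshold depending on overall memory load.
    'under'/'over' are crude piecewise-linear memberships, a placeholder for
    whatever fuzzy sets the paper's controller actually uses."""
    under = max(0.0, min(1.0, (low - mem_load) / low))          # 1 when load is far below 'low'
    over = max(0.0, min(1.0, (mem_load - high) / (1 - high)))   # 1 when load is far above 'high'
    return threshold_gb + step_gb * (under - over)

# Usage: raise the threshold (admit more jobs) while memory is underused,
# lower it as memory fills up.
threshold = 2.0
for load in (0.40, 0.55, 0.70, 0.92, 0.97):
    threshold = adjust_threshold(threshold, load)
    print(f"load={load:.2f} -> threshold={threshold:.2f} GB")
```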

Relevance:

30.00%

Publisher:

Abstract:

Motivation: Genome-wide association studies have become widely used tools to study the effects of genetic variants on complex diseases. While it is of great interest to extend existing analysis methods by considering interaction effects between pairs of loci, the large number of possible tests presents a significant computational challenge. The number of computations is further multiplied in the study of gene expression quantitative trait mapping, in which tests are performed for thousands of gene phenotypes simultaneously. Results: We present FastEpistasis, an efficient parallel solution extending the PLINK epistasis module, designed to test for epistasis effects when analyzing continuous phenotypes. Our results show that the algorithm scales with the number of processors and offers a reduction in computation time when several phenotypes are analyzed simultaneously. FastEpistasis is capable of testing the association of a continuous trait with all single nucleotide polymorphism (SNP) pairs from 500,000 SNPs, totaling 125 billion tests, in a population of 5,000 individuals in 29, 4 or 0.5 days using 8, 64 or 512 processors.
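The 125 billion figure is simply the number of unordered SNP pairs; the sketch below reproduces that count and converts the reported runtimes into approximate test throughput (nothing here comes from the FastEpistasis code itself):

```python
# Number of pairwise epistasis tests for 500,000 SNPs.
n_snps = 500_000
n_tests = n_snps * (n_snps - 1) // 2
print(f"{n_tests:,} pairwise tests")   # 124,999,750,000 ~ 125 billion

# Reported wall-clock times (days) vs. processor counts from the abstract.
for procs, days in [(8, 29), (64, 4), (512, 0.5)]:
    rate = n_tests / (days * 86400)    # tests per second
    print(f"{procs:4d} processors -> {rate:.2e} tests/s")
```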

Relevance:

30.00%

Publisher:

Abstract:

The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because of their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC continued to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet market was formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base, thanks to strong technology-enabled customer lock-in and the customers' high risk exposure, since their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes along which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.

Relevance:

30.00%

Publisher:

Abstract:

Using combined emotional stimuli, pairing photographs of faces with recordings of voices, we investigated the neural dynamics of emotional judgment using scalp EEG recordings. Stimuli could be combined in either a congruent or a non-congruent way. As much evidence points to the major role of alpha in emotional processing, the alpha band was selected for analysis. The analysis was performed by computing the synchronization of the EEGs, and the congruent and non-congruent conditions were compared using statistical tools. The obtained results demonstrate that scalp EEG can be used as a tool to investigate the neural dynamics of emotional valence and to discriminate various emotions (angry, happy and neutral stimuli).
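The abstract does not name the specific synchronization measure. One common choice for alpha-band synchronization between electrode pairs is the phase-locking value (PLV); the sketch below uses made-up signals, and the function name and parameters are illustrative rather than taken from the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_plv(x, y, fs, band=(8.0, 12.0)):
    """Phase-locking value between two EEG channels in the alpha band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy usage: two noisy channels sharing a 10 Hz rhythm give a PLV close to 1.
fs = 256
t = np.arange(0, 2, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)
x = common + 0.5 * np.random.randn(t.size)
y = common + 0.5 * np.random.randn(t.size)
print(alpha_plv(x, y, fs))
```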

Relevance:

30.00%

Publisher:

Abstract:

We propose an in-depth study of tissue modelization and classification techniques on T1-weighted MR images. Three approaches have been taken into account to perform this validation study. Two of them are based on the Finite Gaussian Mixture (FGM) model. The first one consists only of pure Gaussian distributions (FGM-EM). The second one uses a different model for partial volume (PV) (FGM-GA). The third one is based on a Hidden Markov Random Field (HMRF) model. All methods have been tested on a digital brain phantom image considered as the ground truth. Noise and intensity non-uniformities have been added to simulate real image conditions. The effect of an anisotropic filter is also considered. Results demonstrate that methods relying on both intensity and spatial information are in general more robust to noise and inhomogeneities. However, in some cases there are no significant differences between the presented methods.
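A minimal sketch of the intensity-only FGM variant (an EM fit of a three-class Gaussian mixture to voxel intensities) is shown below; the class means and sample sizes are invented, and the study's FGM-GA, HMRF and partial-volume handling are not reproduced:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated T1-weighted intensities for three tissue classes (CSF, GM, WM).
rng = np.random.default_rng(0)
intensities = np.concatenate([
    rng.normal(60, 8, 2000),    # CSF
    rng.normal(110, 10, 3000),  # grey matter
    rng.normal(150, 9, 3000),   # white matter
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)     # EM fit followed by hard classification
print(np.sort(gmm.means_.ravel()))        # recovered class means
print(np.bincount(labels))                # voxels assigned to each class
```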

Relevance:

30.00%

Publisher:

Abstract:

The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single model. This allows the user to run all the methods of choice simultaneously without having to arbitrarily choose one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a freeware open source package distributed under a GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.

Relevance:

30.00%

Publisher:

Abstract:

Problems involving multiphase flow in porous media are of great interest in many scientific and engineering applications, including carbon capture and storage, oil recovery and groundwater remediation. The intrinsic complexity of multiphase systems and the multi-scale heterogeneity of geological formations represent the major challenges to understanding and modelling immiscible displacement in porous media. Upscaled descriptions based on generalizations of Darcy's law are widely used, but they are subject to several limitations for flows that exhibit hysteretic and history-dependent behaviors. Recent advances in high performance computing and the development of accurate methods to characterize pore space and phase distribution have fostered the use of models that allow sub-pore resolution. These models provide insight into flow characteristics that cannot easily be obtained from laboratory experiments and can be used to explain the gap between physical processes and existing macro-scale models. We focus on direct numerical simulation: we solve the Navier-Stokes equations for mass and momentum conservation in the pore space and employ the Volume of Fluid (VOF) method to track the evolution of the interface. In VOF, the distribution of the phases is described by a fluid function (whole-domain formulation), and special boundary conditions account for the wetting properties of the porous medium. In the first part of this thesis we simulate drainage in a 2-D Hele-Shaw cell filled with cylindrical obstacles. We show that the proposed approach can handle very large density and viscosity ratios and is able to model the transition from stable displacement to viscous fingering. We then focus on the interpretation of the macroscopic capillary pressure, showing that pressure-averaging techniques are subject to several limitations and are not accurate in the presence of viscous effects and trapping. On the contrary, an energy-based definition allows the viscous and capillary contributions to be separated. In the second part of the thesis we investigate inertia effects associated with abrupt and irreversible reconfigurations of the menisci caused by interface instabilities. As a prototype of these phenomena we first consider the dynamics of a meniscus in an angular pore. We show that in a network of cubic pores, jumps and reconfigurations are so frequent that inertia effects lead to different fluid configurations. Due to the non-linearity of the problem, the distribution of the fluids influences the work done by pressure forces, which is in turn related to the pressure drop in Darcy's law. This suggests that these phenomena should be taken into account when upscaling multiphase flow in porous media. The last part of the thesis is devoted to proving the accuracy of the numerical approach by validation against experiments of unstable primary drainage in a quasi-2D porous medium (i.e., a Hele-Shaw cell filled with cylindrical obstacles).
We perform simulations under different boundary conditions and using different models (2-D integrated and full 3-D), and we compare several macroscopic quantities with the corresponding experimental observations. Despite the intrinsic challenges of modeling unstable displacement, where by definition small perturbations can grow without bounds, the numerical method gives satisfactory results for all the cases studied.
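The transition from stable displacement to viscous fingering mentioned above is conventionally organized by two dimensionless groups, the capillary number and the viscosity ratio. The toy calculation below only shows how they are formed; the fluid properties are illustrative, not values from the thesis:

```python
# Dimensionless groups commonly used to characterize drainage regimes
# (stable displacement vs. viscous fingering); illustrative values only.
def capillary_number(mu_invading, velocity, surface_tension):
    """Ca = mu * v / sigma for the invading phase (SI units)."""
    return mu_invading * velocity / surface_tension

def viscosity_ratio(mu_invading, mu_defending):
    """M = mu_invading / mu_defending; M << 1 favours fingering."""
    return mu_invading / mu_defending

# Example: air displacing water (hypothetical numbers).
print(capillary_number(1.8e-5, 1e-3, 0.072))   # ~2.5e-7
print(viscosity_ratio(1.8e-5, 1.0e-3))         # ~0.018
```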

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Neuronal oscillations have been the focus of increasing interest in the neuroscientific community, in part because they have been considered as a possible integrating mechanism through which internal states can influence stimulus processing in a top-down way (Engel et al., 2001). Moreover, increasing evidence indicates that oscillations in different frequency bands interact with one another through coupling mechanisms (Jensen and Colgin, 2007). The existence and the importance of these cross-frequency couplings during various tasks have been verified by recent studies (Canolty et al., 2006; Lakatos et al., 2007). In this study, we measure the strength and directionality of two types of couplings - phase-amplitude couplings and phase-phase couplings - between various bands in EEG data recorded during an illusory contour experiment, identified using a recently proposed adaptive frequency tracking algorithm (Van Zaen et al., 2010).

Methods: The data used in this study were taken from a previously published study examining the spatiotemporal mechanisms of illusory contour processing (Murray et al., 2002). The EEG data in the present study were from a subset of nine subjects. Each stimulus was composed of 'pac-man' inducers presented in two orientations: IC, when an illusory contour was present, and NC, when no contour could be detected. The signals recorded by the electrodes P2, P4, P6, PO4 and PO6 were averaged and filtered into the following bands: 4-8Hz, 8-12Hz, 15-25Hz, 35-45Hz, 45-55Hz, 55-65Hz and 65-75Hz. An adaptive frequency tracking algorithm (Van Zaen et al., 2010) was then applied in each band in order to extract the main oscillation and estimate its frequency. This additional step ensures that clean phase information is obtained when taking the Hilbert transform. The frequency estimated by the tracker was averaged over sliding windows and then used to compare the two conditions. Two types of cross-frequency couplings were considered: phase-amplitude couplings and phase-phase couplings. Both types were measured with the phase-locking value (PLV, Lachaux et al., 1999) over sliding windows. The phase-amplitude couplings were computed between the phase of the low-frequency oscillation and the phase of the amplitude of the high-frequency one. Different coupling coefficients were used when measuring phase-phase couplings in order to estimate different m:n synchronizations (4:3, 3:2, 2:1, 3:1, 4:1, 5:1, 6:1, 7:1, 8:1 and 9:1) and to take into account the frequency differences across bands. Moreover, the direction of coupling was estimated with a directionality index (Bahraminasab et al., 2008). Finally, the two conditions IC and NC were compared with ANOVAs with 'subject' as a random effect and 'condition' as a fixed effect. Before computing the statistical tests, the PLV values were transformed into approximately normal variables (Penny et al., 2008).

Results: When comparing the mean estimated frequency across conditions, a significant difference was found only in the 4-8Hz band, such that the frequency within this band was significantly higher for IC than NC stimuli starting at ~250ms post-stimulus onset (Fig. 1; solid line shows IC and dashed line NC). Significant differences in phase-amplitude couplings were obtained only when the 4-8Hz band was taken as the low-frequency band. Moreover, in all significant cases the coupling strength was higher for the NC than the IC condition. An example of a significant difference between conditions is shown in Fig. 2 for the phase-amplitude coupling between the 4-8Hz and 55-65Hz bands (p-value in the top panel and mean PLV values in the bottom panel). A decrease in coupling strength was observed shortly after stimulus onset for both conditions and was greater for the IC condition. This phenomenon was observed with all other frequency bands. The results obtained for the phase-phase couplings were more complex. As for the phase-amplitude couplings, all significant differences were obtained when the 4-8Hz band was considered as the low-frequency band. The stimulus condition exhibiting the higher coupling strength depended on the ratio of the coupling coefficients. When this ratio was small, the IC condition exhibited the higher phase-phase coupling strength. When this ratio was large, the NC condition exhibited the higher coupling strength. Fig. 3 shows the phase-phase couplings between the 4-8Hz and 35-45Hz bands for the coupling coefficient 6:1, where the coupling strength was significantly higher for the IC than the NC condition. By contrast, for the coupling coefficient 9:1 the NC condition gave the higher coupling strength (Fig. 4). Control analyses verified that this was not a consequence of the frequency difference between the two conditions in the 4-8Hz band. The directionality measures indicated a transfer of information from the low-frequency components towards the high-frequency ones.

Conclusions: Adaptive tracking is a feasible method for EEG analyses, revealing information both about stimulus-related differences and about coupling patterns across frequencies. Theta oscillations play a central role in illusory shape processing and more generally in visual processing. The presence vs. absence of illusory shapes was paralleled by faster theta oscillations. Phase-amplitude couplings decreased more for IC than NC and might be due to a resetting mechanism. The complex patterns in phase-phase coupling between theta and beta/gamma suggest that the contribution of these oscillations to visual binding and stimulus processing is not as straightforward as conventionally held. Causality analyses further suggest that theta oscillations drive beta/gamma oscillations (see also Schroeder and Lakatos, 2009). The present findings highlight the need for applying more sophisticated signal analyses in order to establish a fuller understanding of the functional role of neural oscillations.
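The phase-amplitude coupling described in the Methods (a PLV between the low-band phase and the phase of the high-band amplitude envelope) can be sketched in a few lines. The bands below match those in the abstract, but the plain Butterworth filtering and the toy signal are placeholders; the study instead uses adaptive frequency tracking before the Hilbert transform:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_amplitude_plv(x, fs, low=(4, 8), high=(55, 65)):
    """PLV between the low-band phase and the phase of the high-band amplitude envelope."""
    phase_low = np.angle(hilbert(bandpass(x, fs, *low)))
    amp_high = np.abs(hilbert(bandpass(x, fs, *high)))            # high-band envelope
    phase_amp = np.angle(hilbert(bandpass(amp_high, fs, *low)))   # its phase in the low band
    return np.abs(np.mean(np.exp(1j * (phase_low - phase_amp))))

# Toy signal: a 6 Hz theta rhythm whose phase modulates a 60 Hz burst amplitude.
fs = 512
t = np.arange(0, 4, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
signal = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(t.size)
print(phase_amplitude_plv(signal, fs))   # close to 1 for strong coupling
```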

Relevance:

30.00%

Publisher:

Abstract:

In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The Subthalamic Nucleus (STN) is the usual target involved when applying DBS. Unfortunately, the STN is in general not visible in common medical imaging modalities. Therefore, atlas-based segmentation is commonly considered to locate it in the images. In this paper, we propose a scheme that allows both comparing different registration algorithms and evaluating their ability to locate the STN automatically. Using this scheme we can evaluate the expert variability against the error of the algorithms, and we demonstrate that automatic STN localization is possible and as accurate as the methods currently used.
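One simple way to make the comparison in the abstract concrete is to measure each algorithm's STN-centre error against the spread between experts. The snippet below does this with invented coordinates, since the paper's data and registration outputs are not given here:

```python
import numpy as np

# Hypothetical STN centre estimates in millimetres (x, y, z), one row per case.
expert_a = np.array([[12.1, -1.9, -3.8], [11.8, -2.3, -4.1]])
expert_b = np.array([[12.4, -2.2, -4.0], [11.5, -2.0, -4.3]])
automatic = np.array([[12.6, -2.4, -4.2], [11.9, -2.5, -3.9]])

def mean_distance(p, q):
    """Mean Euclidean distance between paired point sets."""
    return np.linalg.norm(p - q, axis=1).mean()

inter_expert = mean_distance(expert_a, expert_b)                   # expert variability
algo_error = mean_distance(automatic, (expert_a + expert_b) / 2)   # error vs. expert consensus
print(f"inter-expert: {inter_expert:.2f} mm, algorithm: {algo_error:.2f} mm")
```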

Relevance:

30.00%

Publisher:

Abstract:

We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, the trust region method is studied here for the first time for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of non-linearity and non-convexity in the model.
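As a generic illustration of the two step-control strategies being compared (not the paper's variational energies or solver), SciPy's optimizers can be run side by side on a small non-convex test function:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])

# Line-search quasi-Newton (BFGS) vs. a trust-region Newton method on the
# Rosenbrock function; this only contrasts the step-control strategies, the
# optical flow energies in the paper are far richer.
ls = minimize(rosen, x0, jac=rosen_der, method="BFGS")
tr = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method="trust-ncg")

print("line search :", ls.nfev, "function evaluations, f* =", ls.fun)
print("trust region:", tr.nfev, "function evaluations, f* =", tr.fun)
```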

Relevance:

30.00%

Publisher:

Abstract:

Time-resolved measurements of tissue autofluorescence (AF) excited at 405 nm were carried out with an optical-fiber-based spectrometer in the bronchi of 11 patients. The objectives were to assess the lifetime as a new tumor/normal (T/N) tissue contrast parameter and to explain the origin of the contrasts observed when using AF-based cancer detection imaging systems. No significant change in the AF lifetimes was found. AF bronchoscopy performed in parallel with an imaging device revealed both intensity and spectral contrasts. Our results suggest that the spectral contrast might be due to an enhanced blood concentration just below the epithelial layers of the lesion. The intensity contrast probably results from the thickening of the epithelium in the lesions. The absence of T/N lifetime contrast indicates that quenching is not at the origin of the fluorescence intensity and spectral contrasts. The measured lifetimes (6.9 ns, 2.0 ns, and 0.2 ns) were consistent for all the examined sites. The fact that these lifetimes are the same for different emission domains ranging between 430 and 680 nm indicates that there is probably only one dominant fluorophore involved. The measured lifetimes suggest that this fluorophore is elastin.
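Three lifetimes per decay typically come from a multi-exponential fit. A minimal sketch of such a fit (a synthetic decay built around the lifetimes quoted in the abstract, with no instrument-response deconvolution) could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_exp(t, a1, tau1, a2, tau2, a3, tau3):
    """Sum of three exponential decays."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + a3 * np.exp(-t / tau3)

# Synthetic decay using the lifetimes reported in the abstract (6.9, 2.0, 0.2 ns);
# amplitudes and noise level are invented.
t = np.linspace(0, 25, 500)                      # time in ns
truth = (0.2, 6.9, 0.5, 2.0, 1.0, 0.2)
decay = triple_exp(t, *truth) + 0.01 * np.random.randn(t.size)

p0 = (0.1, 5.0, 0.5, 1.5, 1.0, 0.3)              # rough initial guess
params, _ = curve_fit(triple_exp, t, decay, p0=p0, maxfev=20000)
print(params.reshape(3, 2))                      # (amplitude, lifetime) per component
```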

Relevance:

30.00%

Publisher:

Abstract:

A reinforcement learning (RL) method was used to train a virtual character to move participants to a specified location. The virtual environment depicted an alleyway displayed through a wide field-of-view, head-tracked stereo head-mounted display. Based on proxemics theory, we predicted that when the character approached within a personal or intimate distance of the participants, they would be inclined to move backwards out of the way. We carried out a between-groups experiment with 30 female participants, with 10 assigned arbitrarily to each of three groups. In the Intimate condition the character could approach to within 0.38 m, and in the Social condition no nearer than 1.2 m. In the Random condition the actions of the virtual character were chosen randomly from the same set as in the RL method, and the virtual character could approach to within 0.38 m. The experiment continued in each case until the participant either reached the target or 7 minutes had elapsed. The distributions of the times taken to reach the target showed significant differences between the three groups: 9 out of 10 participants in the Intimate condition reached the target, significantly faster than the 6 out of 10 who reached it in the Social condition, and only 1 out of 10 in the Random condition reached the target. The experiment is an example of applied presence theory: we rely on the many findings that people tend to respond realistically in immersive virtual environments, and use this to get people to achieve a task of which they had been unaware. This method opens the door to many applications in which the virtual environment adapts to the responses of the human participants with the aim of achieving particular goals.
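The abstract does not state which RL algorithm was used. Purely as an illustration of the kind of learning loop involved, a tabular Q-learning toy in which "approach" pushes a simulated participant one step closer to the target might look like this:

```python
import numpy as np

# Toy 1-D version of the task: the state is the participant's distance to the
# target; approaching (action 1) pushes the participant back one cell.
rng = np.random.default_rng(0)
n_states, n_actions = 10, 2          # actions: 0 = keep distance, 1 = approach
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(500):
    s = n_states - 1                 # participant starts far from the target
    for _ in range(200):             # cap the number of steps per episode
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = s - 1 if a == 1 else s
        r = 1.0 if s_next == 0 else -0.01        # reward for reaching target, small step cost
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == 0:
            break

print(Q.argmax(axis=1))              # learned action per state (all 1s: always approach)
```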