11 results for bandwidth

at Université de Lausanne, Switzerland


Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: Nonvisual light-dependent functions in humans are conveyed mainly by intrinsically photosensitive retinal ganglion cells, which express melanopsin as photopigment. We aimed to identify the effects of circadian phase and sleepiness across 24 hours on various aspects of the pupil response to light stimulation. METHODS: We tested 10 healthy adults hourly in two 12-hour sessions covering a 24-hour period. Pupil responses to narrow-bandwidth red (635 ± 18 nm) and blue (463 ± 24 nm) light (durations of 1 and 30 seconds) at equal photon fluxes were recorded and correlated with salivary melatonin concentrations at the same circadian phases and with subjective sleepiness ratings. The magnitude of pupil constriction was determined from minimal pupil size. The post-stimulus pupil response was assessed from the pupil size 6 seconds after light offset, the area within the redilation curve, and the exponential rate of redilation. RESULTS: Among the measured parameters, the pupil size 6 seconds after light offset correlated with melatonin concentrations (P < 0.05) and showed a significant modulation over 24 hours, with maximal values after the nocturnal peak of melatonin secretion. In contrast, the post-stimulus pupil response following red light stimulation correlated with subjective sleepiness (P < 0.05) without significant changes over 24 hours. CONCLUSIONS: The post-stimulus pupil response to blue light, as a marker of intrinsic melanopsin activity, demonstrated a circadian modulation. In contrast, the effect of sleepiness was more apparent in the cone contribution to the pupil response. Thus, pupillary responsiveness to light is under the influence of the endogenous circadian clock and of subjective sleepiness.
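The three post-stimulus measures named in the abstract are straightforward to compute from a sampled pupil trace. The sketch below shows one plausible implementation, assuming a uniformly sampled diameter signal and a pre-computed baseline; function and variable names are illustrative, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def post_stimulus_metrics(t, pupil, t_offset, baseline, window=6.0):
    """Post-stimulus pupil metrics from a sampled trace (illustrative)."""
    mask = (t >= t_offset) & (t <= t_offset + window)
    tp, dp = t[mask] - t_offset, pupil[mask]

    # 1) Pupil size 6 s after light offset (closest sample to the 6 s mark)
    size_6s = dp[np.argmin(np.abs(tp - window))]

    # 2) Area within the redilation curve: gap between baseline and the
    #    still-constricted pupil, integrated over the post-stimulus window
    area = np.trapz(baseline - dp, tp)

    # 3) Exponential rate of redilation: fit d(t) = baseline - a * exp(-r t)
    redilation = lambda x, a, r: baseline - a * np.exp(-r * x)
    (a, r), _ = curve_fit(redilation, tp, dp, p0=(baseline - dp[0], 0.5))
    return size_6s, area, r
```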

Relevance:

20.00%

Publisher:

Abstract:

Multisensory interactions have been documented within low-level, even primary, cortices and at early post-stimulus latencies. These effects are in turn linked to behavioral and perceptual modulations. In humans, visual cortex excitability, as measured by phosphenes induced with transcranial magnetic stimulation (TMS), can be reliably enhanced by the co-presentation of sounds. This enhancement occurs at pre-perceptual stages and is selective for different types of complex sounds. However, the source(s) of the auditory inputs effectuating these excitability changes in primary visual cortex remain disputed. The present study sought to determine whether direct connections between low-level auditory cortices and primary visual cortex mediate these effects, by varying the pitch and bandwidth of the sounds co-presented with single-pulse TMS over the occipital pole. Our results from 10 healthy young adults indicate that both the central frequency and the bandwidth of a sound independently affect the excitability of visual cortex during processing stages as early as 30 msec post-sound onset. Such findings are consistent with direct connections mediating early-latency, low-level multisensory interactions within visual cortices.
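Varying the central frequency and the bandwidth of a sound independently, as described above, is typically done with band-limited noise. The following sketch shows one generic way to synthesize such stimuli by zeroing frequency components outside the desired band; it illustrates the manipulation, not the authors' actual stimulus-generation procedure.

```python
import numpy as np

def bandlimited_noise(center_hz, bandwidth_hz, dur_s=0.1, fs=44100, seed=0):
    """White noise restricted to a band around center_hz (illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(dur_s * fs)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Keep only components within +/- bandwidth/2 of the center frequency
    spectrum[np.abs(freqs - center_hz) > bandwidth_hz / 2] = 0.0
    sound = np.fft.irfft(spectrum, n)
    return sound / np.max(np.abs(sound))  # peak-normalize
```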

Relevance:

10.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has been growing steadily since it was introduced in the early 1970s. In the clinical environment, this imaging system has emerged as a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over their lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a radiation-induced cancer, whose latency period can exceed 20 years, is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image-reconstruction techniques, known as statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce the dose delivered during CT examinations of children and young adults as much as possible, while preserving an image quality sufficient for diagnosis, and on that basis to propose optimized protocols.

The optimization step requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first approach, called the "physical approach", computes physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this technique has limitations, because it does not take the radiologists' perception into account, it enables a simple and rapid characterization of certain image properties. The second approach, called the "clinical approach", was based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step were asked to score the diagnostic quality of these structures using a simple rating scale. This approach is relatively complicated to implement and time-consuming; nevertheless, it has the advantage of being very close to the practice of radiologists and can be considered a reference method.

Primarily, this work revealed that the statistical iterative reconstruction algorithms studied in clinical practice (ASIR and Veo) have a strong potential to reduce CT dose (by up to 90%). However, by their very mechanism, they modify the appearance of the image, producing a change in texture that may in turn affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a shift in the bandwidth of the noise power spectrum; NPS analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also demonstrates that integrating these new statistical iterative reconstruction techniques into clinical practice is not straightforward and cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work, as well as the image-quality tools developed, can also guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
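Among the physical metrics mentioned above, the noise power spectrum (NPS) is the one that captures the texture changes introduced by iterative reconstruction. As a point of reference, here is a minimal sketch of the standard 2D NPS estimate from repeated regions of interest (ROIs) of a uniform phantom; it follows the generic textbook recipe rather than the thesis's own tools.

```python
import numpy as np

def noise_power_spectrum_2d(rois, pixel_mm):
    """2D NPS from a stack of uniform-phantom ROIs (generic recipe).

    rois: array of shape (n_rois, n, n); pixel_mm: pixel size in mm.
    """
    n_rois, n, _ = rois.shape
    nps = np.zeros((n, n))
    for roi in rois:
        detrended = roi - roi.mean()              # remove the mean HU level
        nps += np.abs(np.fft.fft2(detrended)) ** 2
    nps *= pixel_mm ** 2 / (n_rois * n * n)       # normalize to mm^2 * HU^2
    return np.fft.fftshift(nps)                   # zero frequency at center
```

A shift of this spectrum toward lower frequencies is the frequency-domain signature of the "blotchy" texture typical of strongly iterative reconstructions.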

Relevance:

10.00%

Publisher:

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment-approximation algorithm that gives us an approximate view of the system or of part of it; this view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays chosen to maximize the broadcast reliability, where the broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as quotas of messages reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the memory available at processes into account by limiting the view they have to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that tends toward the global tree overlay while adapting to constraints of the underlying system.
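The idea of a tree overlay that maximizes broadcast reliability as a function of path reliabilities admits a compact classical formulation: if every link is annotated with an independent success probability, maximizing the product of link probabilities over a spanning tree is equivalent to computing a minimum spanning tree on -log-transformed weights. The sketch below illustrates that reduction; it is a simplification under independence assumptions, not the thesis's algorithm (which also accounts for message quotas and partial views).

```python
import math
import networkx as nx

def max_reliability_tree(links):
    """Spanning tree maximizing the product of link reliabilities.

    links: iterable of (u, v, p) with p the link's success probability.
    Maximizing prod(p) == minimizing sum(-log p), hence a plain MST.
    """
    g = nx.Graph()
    for u, v, p in links:
        g.add_edge(u, v, weight=-math.log(p))
    return nx.minimum_spanning_tree(g, weight="weight")
```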
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.

Relevance:

10.00%

Publisher:

Abstract:

Objective: Although 24-hour arterial blood pressure can be monitored in a free-moving animal using pressure telemetric transmitters, mostly from Data Sciences International (DSI), accurate monitoring of 24-hour mouse left ventricular pressure (LVP) is not available because of the insufficient frequency response of such transmitters to high-frequency signals such as the maximum derivative of mouse LVP (LVdP/dtmax and LVdP/dtmin). The aim of the study was to develop a tiny implantable flow-through LVP telemetric transmitter for small rodents, which could potentially be adapted for accurate 24-hour monitoring of human BP and LVP. Design and Method: The mouse LVP telemetric transmitter (diameter ~12 mm, ~0.4 g) was assembled from a pressure sensor, a passive RF telemetry chip, and a 1.2F polyurethane (PU) catheter tip. The device was developed in two configurations and compared with the existing DSI system: (a) prototype-I, a new flow-through pressure sensor with a wire link, and (b) prototype-II, prototype-I plus a telemetry chip and its receiver. All devices were tested in C57BL/6J mice. Data are mean ± SEM. Results: A high-frequency-response (>100 Hz) PU heparinized-saline-filled catheter was inserted into the mouse left ventricle via the right carotid artery and implanted; LV systolic pressure (LVSP), LVdP/dtmax, and LVdP/dtmin were recorded on days 2, 3, 4, 5, and 7 in conscious mice. The hemodynamic values were consistent and comparable (139 ± 4 mmHg, 16634 ± 319 and -12283 ± 184 mmHg/s, n = 5) to those recorded by a validated Pebax03 catheter (138 ± 2 mmHg, 16045 ± 443 and -12112 ± 357 mmHg/s, n = 9). Similar LV hemodynamic values were obtained with prototype-I. The same LVP waveforms were recorded synchronously by the Notocord wired and Senimed wireless software through prototype-II in anesthetized mice. Conclusion: An implantable flow-through LVP transmitter (prototype-I) was developed for accurate LVP assessment in conscious mice. Prototype-II needs further improvement of its data transmission bandwidth and signal coupling distance to the receiver for accurate monitoring of LVP in a free-moving mouse.
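The quantities at stake, LVdP/dtmax and LVdP/dtmin, are simply the extrema of the first time derivative of the LVP trace, which is why the transmitter's frequency response matters so much. Below is a minimal sketch of that computation, assuming a digitized trace sampled well above the >100 Hz requirement quoted above; it is illustrative, not the authors' acquisition software.

```python
import numpy as np

def lv_dpdt_extrema(lvp_mmhg, fs_hz):
    """LVdP/dtmax and LVdP/dtmin from a sampled LVP trace (illustrative)."""
    dpdt = np.gradient(lvp_mmhg, 1.0 / fs_hz)  # first derivative in mmHg/s
    return dpdt.max(), dpdt.min()
```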

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE. The purpose of this study was to improve the blood-pool signal-to-noise ratio (SNR) and blood-myocardium contrast-to-noise ratio (CNR) of slow-infusion 3-T whole-heart coronary MR angiography (MRA). SUBJECTS AND METHODS. In 2D sensitivity encoding (SENSE), the number of acquired k-space lines is reduced, allowing fewer radiofrequency excitations per cardiac cycle and a longer TR. The former can be exploited for signal enhancement with a higher radiofrequency excitation angle, and the latter leads to noise reduction due to a lower data-sampling bandwidth. Both effects contribute to an SNR gain in coronary MRA when spatial and temporal resolution and acquisition time remain identical. Numeric simulation was performed to select the optimal 2D SENSE pulse sequence parameters and predict the SNR gain. Eleven patients underwent conventional unenhanced and the proposed 2D SENSE contrast-enhanced coronary MRA acquisitions. Blood-pool SNR, blood-myocardium CNR, visible vessel length, vessel sharpness, and number of side branches were evaluated. RESULTS. Consistent with the numeric simulation, using 2D SENSE in contrast-enhanced coronary MRA resulted in significant improvements in aortic blood-pool SNR (unenhanced vs contrast-enhanced, 37.5 +/- 14.7 vs 121.3 +/- 44.0; p < 0.05) and CNR (14.4 +/- 6.9 vs 101.5 +/- 40.8; p < 0.05) in the patient sample. A longer length of the left anterior descending coronary artery was visualized, but vessel sharpness, coronary artery coverage, and image quality score were not improved with the proposed approach. CONCLUSION. In combination with contrast administration, 2D SENSE was found effective in improving SNR and CNR in 3-T whole-heart coronary MRA. Further investigation of cardiac motion compensation is necessary to exploit the SNR and CNR advantages and to achieve submillimeter spatial resolution.
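The two effects described above can be summarized in a single back-of-the-envelope scaling, shown here as a simplified reading of the argument (ignoring T1 saturation effects on the spoiled gradient-echo signal) rather than the paper's actual numeric simulation:

```latex
\frac{\mathrm{SNR}_{\mathrm{2D\,SENSE}}}{\mathrm{SNR}_{\mathrm{conv}}}
\;\approx\;
\underbrace{\frac{\sin\alpha_{\mathrm{new}}}{\sin\alpha_{\mathrm{old}}}}_{\text{higher excitation angle}}
\;\cdot\;
\underbrace{\sqrt{\frac{\mathrm{BW}_{\mathrm{old}}}{\mathrm{BW}_{\mathrm{new}}}}}_{\text{lower readout bandwidth}}
\;\cdot\;
\underbrace{\frac{1}{g\sqrt{R}}}_{\text{parallel-imaging penalty}}
```

where BW is the data-sampling bandwidth, R the SENSE reduction factor and g the geometry factor. The net gain materializes when the first two factors outweigh the usual g-factor and square-root-of-R noise penalty of parallel imaging.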

Relevance:

10.00%

Publisher:

Abstract:

This PhD thesis addresses the issue of scalable media streaming in large-scale networking environments. Multimedia streaming is one of the largest sinks of network resources, and this trend is still growing, as evidenced by the success of services like Skype, Netflix, Spotify and Popcorn Time (BitTorrent-based). In traditional client-server solutions, when the number of consumers increases, the server becomes the bottleneck. The Content Delivery Network (CDN) model was invented to overcome this problem: the server copies the media content to CDN servers located at strategic points in the network. However, CDNs require heavy infrastructure investment around the world, which is expensive. Peer-to-peer (P2P) solutions are another way to achieve the same result. These solutions are naturally scalable, since each peer can act as both a receiver and a forwarder. Most of the streaming solutions proposed for P2P networks focus on routing scenarios to achieve scalability. However, these solutions cannot work properly for video-on-demand (VoD) streaming when the resources of the media server are insufficient. Replication is a solution that can be used in these situations. This thesis provides a family of replication-based media streaming protocols that are scalable, efficient and reliable in P2P networks. First, it provides SCALESTREAM, a replication-based streaming protocol that adaptively replicates media content on different peers to increase the number of consumers that can be served in parallel. The adaptiveness of this solution relies on the fact that it takes into account constraints such as the bandwidth capacity of peers to decide when to add or remove replicas. SCALESTREAM routes media blocks to consumers over a tree topology, assuming a reliable network composed of peers that are homogeneous in terms of bandwidth. Second, this thesis proposes RESTREAM, an extended version of SCALESTREAM that addresses the issues raised by unreliable networks composed of heterogeneous peers. Third, this thesis proposes EAGLEMACAW, a multiple-tree replication streaming protocol in which two distinct trees, named EAGLETREE and MACAWTREE, are built in a decentralized manner on top of an underlying mesh network. These two trees collaborate to serve consumers in an efficient and reliable manner: the EAGLETREE is in charge of improving efficiency, while the MACAWTREE guarantees reliability. Finally, this thesis provides TURBOSTREAM, a hybrid replication-based streaming protocol in which a tree overlay is built on top of a mesh overlay network. Both overlays cover all peers of the system and collaborate to improve efficiency and lower the latency of streaming media to consumers. This protocol was implemented and tested in a real networking environment using the PlanetLab Europe testbed, with peers distributed across Europe.
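The add/remove decision driven by peer bandwidth that the abstract attributes to SCALESTREAM can be pictured with a simple capacity test: each replica can feed only as many consumers as its uplink allows. The sketch below is a deliberately naive illustration of that idea, with hypothetical names, not the protocol's actual logic.

```python
def replica_decision(replicas, n_consumers, uplink_kbps, stream_kbps):
    """Naive add/remove/keep decision based on serving capacity.

    replicas: list of peer ids; uplink_kbps: dict peer id -> bandwidth.
    """
    per_replica = [uplink_kbps[r] // stream_kbps for r in replicas]
    capacity = sum(per_replica)
    if n_consumers > capacity:
        return "add"                       # demand exceeds serving capacity
    if per_replica and n_consumers <= capacity - max(per_replica):
        return "remove"                    # a whole replica's capacity is idle
    return "keep"
```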

Relevance:

10.00%

Publisher:

Abstract:

We have explored the possibility of obtaining first-order permeability estimates for saturated alluvial sediments based on the poro-elastic interpretation of the P-wave velocity dispersion inferred from sonic logs. Modern sonic logging tools designed for environmental and engineering applications allow P-wave velocity measurements at multiple emitter frequencies over a bandwidth covering 5 to 10 octaves. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt-to-sand range and typical emitter frequencies ranging from approximately 1 to 30 kHz, the observable velocity dispersion should be sufficiently pronounced to allow reliable first-order estimation of the permeability structure. The corresponding predictions have been tested on, and verified for, a borehole penetrating a typical surficial alluvial aquifer. In addition to multifrequency sonic logs, a comprehensive suite of nuclear and electrical logs, an S-wave log, a litholog, and a limited number of laboratory measurements of permeability on retrieved core material were also available. This complementary information proved essential for parameterizing the poro-elastic inversion procedure and for assessing the uncertainty and internal consistency of the corresponding permeability estimates. Our results indicate that the permeability estimates thus obtained are largely consistent with those expected from the corresponding granulometric characteristics, as well as with the available evidence from laboratory measurements. These findings also agree with evidence from ocean acoustics, which indicates that, over a frequency range spanning several orders of magnitude, the classical theory of poro-elasticity is generally capable of explaining the observed P-wave velocity dispersion in medium- to fine-grained seabed sediments.
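The physical link between velocity dispersion and permeability runs through Biot's characteristic frequency, which separates the low- and high-frequency poro-elastic regimes. As background, the standard textbook relation is quoted here as context for the inversion described above, not necessarily in the exact form the study used:

```latex
f_c \;=\; \frac{\eta\,\phi}{2\pi\,\rho_f\,\kappa}
```

where \eta is the pore-fluid viscosity, \phi the porosity, \rho_f the fluid density and \kappa the permeability. Because dispersion is strongest around f_c, observing where dispersion sets in across the 1 to 30 kHz sonic bandwidth constrains \kappa to first order.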

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentration (IRC) in Switzerland, taking into account architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m(3). Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data, and all variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of the kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns obtained in earlier studies. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps: maps corresponding to detached houses with concrete foundations indicate systematically lower IRC than maps corresponding to farmhouses with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and the spatial relationships between IRC measurements.
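A per-variable bandwidth of the kind the model estimates arises naturally in multivariate kernel regression with a product kernel. The sketch below shows a generic Nadaraya-Watson estimator of this type; the variable handling, kernel choice and bandwidth selection in the actual study may differ.

```python
import numpy as np

def kernel_regression(X, y, x_query, bandwidths):
    """Nadaraya-Watson estimate with one bandwidth per predictor.

    X: (n_samples, n_vars) predictors; y: (n_samples,) responses;
    bandwidths: (n_vars,) per-variable kernel widths (generic sketch).
    """
    z = (X - x_query) / bandwidths                 # scale each variable by its bandwidth
    weights = np.exp(-0.5 * np.sum(z**2, axis=1))  # product Gaussian kernel
    return np.sum(weights * y) / np.sum(weights)
```

A small fitted bandwidth means the estimate reacts sharply to that variable, while a large one means the variable carries little local information; this is how the per-variable bandwidths characterize each predictor's influence.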

Relevance:

10.00%

Publisher:

Abstract:

Contemporary coronary magnetic resonance angiography techniques suffer from signal-to-noise ratio (SNR) constraints. We propose a method to enhance SNR in gradient-echo coronary magnetic resonance angiography by using sensitivity encoding (SENSE). While the use of sensitivity encoding to improve SNR seems counterintuitive, it can be exploited by reducing the number of radiofrequency excitations during the acquisition window while lowering the signal readout bandwidth, thereby improving the radiofrequency-receive to radiofrequency-transmit duty cycle. Under certain conditions, this leads to improved SNR. The use of sensitivity encoding for improved SNR in three-dimensional coronary magnetic resonance angiography is investigated using numerical simulations, an in vitro study, and an in vivo study. A maximum SNR enhancement of 55% for coronary magnetic resonance angiography was found both in vitro and in vivo, which is consistent with the numerical simulations. This method is most suitable for spoiled gradient-echo coronary magnetic resonance angiography in which high temporal and spatial resolution is required.

Relevance:

10.00%

Publisher:

Abstract:

Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population receptive field (pRF) model developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within PAC than in surrounding auditory-responsive regions (non-PAC).
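For readers used to the retinotopic pRF literature, the auditory adaptation amounts to replacing a 2D Gaussian over visual space with a 1D Gaussian over log frequency, whose center gives the voxel's preferred frequency and whose width gives its bandwidth. Below is a minimal sketch of such a fit, with illustrative names and parametrization, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_tonotopic_prf(stim_freqs, design, bold, hrf):
    """Fit a 1D Gaussian pRF in log-frequency space (illustrative).

    stim_freqs: (n_freqs,) tone frequencies in Hz;
    design: (n_trs, n_freqs) stimulus matrix; bold: (n_trs,) time course.
    """
    log_f = np.log2(stim_freqs)

    def loss(params):
        center, sigma = params
        tuning = np.exp(-0.5 * ((log_f - center) / sigma) ** 2)
        drive = design @ tuning                      # per-TR neural drive
        pred = np.convolve(drive, hrf)[: len(bold)]  # convolve with HRF
        beta = (pred @ bold) / (pred @ pred)         # optimal scaling
        return np.sum((bold - beta * pred) ** 2)

    res = minimize(loss, x0=(log_f.mean(), 1.0), method="Nelder-Mead")
    center, sigma = res.x
    return 2.0 ** center, sigma   # preferred frequency (Hz), width (octaves)
```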