990 results for Communication Protocols


Relevance:

20.00%

Publisher:

Abstract:

We study experimentally how the ability to communicate affects the frequency and effectiveness of flexible and inflexible contracts in a bilateral trade context where sellers can adjust trade quality after observing a post-contractual cost shock and a discretionary buyer transfer. In the absence of communication, we find that rigid contracts are more frequent and lead to higher earnings for both buyer and seller. By contrast, in the presence of communication, flexible contracts are much more frequent and considerably more productive, for both buyers and sellers. Moreover, both buyer and seller earn considerably more from flexible contracts with communication than from rigid contracts without communication. Our results show quite strongly that communication, a normal feature of contracting, can remove the potential cost of flexibility (disagreements caused by conflicting perceptions). We offer an explanation based on social norms.

Relevance:

20.00%

Publisher:

Abstract:

This is an exploratory study that aims, on the one hand, to examine in more detail how children between 12 and 16 years of age use different audiovisual technologies, what they feel and think when using them, and whom they like to talk to about such experiences. On the other hand, we look more deeply into the interactions between adults and children, particularly between parents and their children, in relation to these technologies when children use them at home or elsewhere. We analysed responses to questionnaires with several common items, administered separately to parents and children. Children's responses reflect a substantial level of dissatisfaction when talking with different adults about media activities. Our findings support the thesis that more and more children socialise through new information and communication technologies with little or no recourse to adult criteria, giving rise to the emergence of specific children's cultures. Cross-analysis of parents' responses with those of their own children shows which aspects of media reality adults overestimate or underestimate in comparison with children, and to what degree certain judgements coincide or differ between generations. The results can be applied to improving relations between adults and adolescents, taking advantage of adolescents' strong motivation to engage in activities using audiovisual media.

Relevance:

20.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown steadily since its introduction in the early 1970s. In the clinical environment, this imaging modality has become the gold standard because of its high sensitivity in producing accurate diagnostic images. However, even though a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favour of the patient, image quality and dose must be balanced so as to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases that require several follow-up examinations over their lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a cancer whose latency period can exceed 20 years is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce the dose delivered in CT examinations of children and young adults as much as possible while preserving an image quality sufficient for diagnosis, and thereby to propose optimized protocols.

The optimization step requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this work is the use of two radically different approaches to evaluate image quality. The first, the "physical" approach, computes physical metrics (SD, MTF, NPS, etc.) measured under well-defined conditions, most often on phantoms. Although this approach has limitations because it does not incorporate the radiologist's perception, it characterizes certain image properties in a simple and rapid way. The second, the "clinical" approach, is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step score the diagnostic quality of these structures using a simple rating scale. This approach is laborious to implement and time-consuming, but it has the advantage of being very close to the radiologist's practice and can be considered the reference method.

Among the main results of this work, the statistical iterative reconstruction algorithms studied in clinical use (ASIR, VEO) were shown to have a strong potential to reduce CT dose (by up to 90%). However, by their very mechanism they modify the appearance of the image, producing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also shows that these new reconstruction techniques cannot be integrated into clinical practice simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work, and the image quality tools developed, can also guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
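
The "physical" approach above rests on metrics such as the noise power spectrum (NPS). As a minimal sketch, assuming uniform-phantom ROIs and illustrative function and parameter names (this is not the thesis's actual tooling), a 2-D NPS can be estimated as follows:

```python
import numpy as np

def noise_power_spectrum(rois, pixel_mm):
    """Estimate a 2-D noise power spectrum from same-sized uniform-phantom ROIs."""
    rois = [np.asarray(r, dtype=float) for r in rois]
    ny, nx = rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()          # detrend: keep only the noise component
        acc += np.abs(np.fft.fft2(noise)) ** 2
    # NPS(fx, fy) = (dx * dy / (Nx * Ny)) * mean |DFT{noise}|^2
    return np.fft.fftshift(acc / len(rois) * pixel_mm ** 2 / (nx * ny))

# Example with synthetic noise standing in for phantom ROIs; after fftshift
# the frequency axes run from -0.5/pixel_mm to +0.5/pixel_mm.
rng = np.random.default_rng(0)
nps = noise_power_spectrum([rng.normal(0, 10, (64, 64)) for _ in range(32)], 0.5)
print(nps.shape)
```

Comparing the shape of such spectra across reconstruction algorithms is one way the texture change described above can be quantified.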

Relevance:

20.00%

Publisher:

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While attractive because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network usually means defining an application in which peers contribute resources in exchange for the ability to use the application. For example, in a P2P file-sharing application, while a user is downloading a file, the P2P application is in parallel serving that file to other users. Peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some of the underlying constraints arising from the nature of P2P networks. The proposed services comprise a set of adaptive broadcast solutions and an adaptive data replication solution that can serve as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically offer reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions lies in the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment-approximation algorithm that provides an approximated view of the system or of part of it, including the topology and the reliability of components expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays chosen to maximize broadcast reliability, where reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we account for the memory available at each process by limiting the view it has to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms based on a propagation overlay that tends toward the global tree overlay while adapting to constraints of the underlying system.

At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, the solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
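
The broadcast layer above routes messages over tree overlays selected to maximize path reliability. As a minimal sketch of one standard way to build such an overlay (the names and the reduction to Dijkstra are illustrative assumptions, not the thesis's exact protocol): maximizing a product of link-delivery probabilities is equivalent to minimizing a sum of negative log-probabilities, so a shortest-path tree rooted at the source yields the most reliable path to every peer.

```python
import heapq
import math

def max_reliability_tree(n, links, root=0):
    """Build a broadcast tree maximizing source-to-node delivery probability.

    links: dict mapping (u, v) -> probability that a message sent on the
    (undirected) link is delivered.  Running Dijkstra on -log probabilities
    turns the product-maximization into a shortest-path problem.
    """
    adj = {u: [] for u in range(n)}
    for (u, v), p in links.items():
        w = -math.log(p)             # product of probabilities -> sum of weights
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = {root: 0.0}
    parent = {root: None}            # tree edges: child -> parent
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                 # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return parent                    # shortest-path tree = max-reliability tree

# Example: 4 peers, root 0, per-link delivery probabilities.
tree = max_reliability_tree(4, {(0, 1): 0.9, (0, 2): 0.5,
                                (1, 2): 0.8, (2, 3): 0.95})
print(tree)   # {0: None, 1: 0, 2: 1, 3: 2}
```

The resulting parent map is the kind of tree overlay over which a broadcast would then be propagated; the thesis additionally folds node message quotas into the path cost.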

Relevance:

20.00%

Publisher:

Abstract:

Objectives Medical futility at the end of life is a growing challenge to medicine. The goals of the authors were to elucidate how clinicians define futility, when they perceive life-sustaining treatment (LST) to be futile, how they communicate this situation and why LST is sometimes continued despite being recognised as futile. Methods The authors reviewed ethics case consultation protocols and conducted semi-structured interviews with 18 physicians and 11 nurses from adult intensive and palliative care units at a tertiary hospital in Germany. The transcripts were subjected to qualitative content analysis. Results Futility was identified in the majority of case consultations. Interviewees associated futility with the failure to achieve goals of care that offer a benefit to the patient's quality of life and are proportionate to the risks, harms and costs. Prototypic examples mentioned are situations of irreversible dependence on LST, advanced metastatic malignancies and extensive brain injury. Participants agreed that futility should be assessed by physicians after consultation with the care team. Intensivists favoured an indirect and stepwise disclosure of the prognosis. Palliative care clinicians focused on a candid and empathetic information strategy. The reasons for continuing futile LST are primarily emotional, such as guilt, grief, fear of legal consequences and concerns about the family's reaction. Other obstacles are organisational routines, insufficient legal and palliative knowledge and treatment requests by patients or families. Conclusion Managing futility could be improved by communication training, knowledge transfer, organisational improvements and emotional and ethical support systems. The authors propose an algorithm for end-of-life decision making focusing on goals of treatment.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Communication in cancer care has become a major topic of interest. Since there is evidence that ineffective communication affects both patients and oncology clinicians (physicians and nurses), so-called communication skills trainings (CSTs) have been developed over the last decade. While these trainings have been demonstrated to be effective, there is considerable heterogeneity with regard to their implementation and to the evidence on different aspects of CST. METHODS: In order to review and discuss the scientific literature on CST in oncology and to formulate recommendations, the Swiss Cancer League organised a consensus meeting with European opinion leaders and experts in the field of CST, as well as oncology clinicians, representatives of oncology societies and patient organisations. On the basis of a systematic review and a meta-analysis, recommendations were developed and agreed upon. RESULTS: The recommendations address (i) the setting, objectives and participants of CST, (ii) its content and pedagogic tools, (iii) organisational aspects, (iv) outcomes and (v) future directions and research. CONCLUSION: This consensus meeting, on the basis of European expert opinions and a systematic review and meta-analysis, defines key elements for the current provision and the future development and evaluation of CST in oncology.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVES: Renal tubular sodium handling was measured in healthy subjects who underwent acute and chronic salt-repletion/salt-depletion protocols. The goal was to compare the changes in proximal and distal sodium handling induced by the two procedures using the lithium clearance technique. METHODS: In nine subjects, acute salt loading was obtained with a 2 h infusion of isotonic saline, and salt depletion was induced with a low-salt diet and furosemide. In the chronic protocol, 15 subjects randomly received a low-, a regular- and a high-sodium diet for 1 week. In both protocols, renal and systemic haemodynamics and urinary electrolyte excretion were measured after an acute water load. In the chronic study, sodium handling was also determined from 12 h daytime and night-time urine collections. RESULTS: The acute and chronic protocols induced comparable changes in sodium excretion, renal haemodynamics and hormonal responses. Yet the relative contributions of the proximal and distal nephron to sodium excretion in response to salt loading and depletion differed between the two protocols. Acutely, subjects appeared to regulate sodium balance mainly via the distal nephron, with little contribution from the proximal tubule. In contrast, in the chronic protocol, changes in sodium reabsorption were measurable in both the proximal and distal nephron. Acute water loading was an important confounding factor that increased sodium excretion by reducing proximal sodium reabsorption; this interference of water was particularly marked in salt-depleted subjects. CONCLUSION: Acute and chronic salt loading/salt depletion protocols probe different renal mechanisms of sodium balance control. The endogenous lithium clearance technique is a reliable method to assess proximal sodium reabsorption in humans. However, to investigate sodium handling in diseases such as hypertension, lithium should preferably be measured in 24 h or overnight urine collections to avoid the confounding influence of water.
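
For reference, the endogenous lithium clearance technique relies on lithium being reabsorbed in the proximal tubule in parallel with sodium and water and essentially not reabsorbed distally, so lithium clearance approximates the delivery of tubular fluid out of the proximal tubule. In the usual notation (U_Li and P_Li the urinary and plasma lithium concentrations, V the urine flow rate), the standard relations are:

```latex
C_{\mathrm{Li}} = \frac{U_{\mathrm{Li}} \, \dot{V}}{P_{\mathrm{Li}}},
\qquad
\mathrm{FE}_{\mathrm{Li}} = \frac{C_{\mathrm{Li}}}{\mathrm{GFR}},
\qquad
\text{fractional proximal reabsorption} \approx 1 - \mathrm{FE}_{\mathrm{Li}}.
```

For example, with an illustrative C_Li of 30 ml/min and a GFR of 120 ml/min, FE_Li = 0.25, implying that roughly 75% of the filtered load is reabsorbed proximally.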

Relevance:

20.00%

Publisher:

Abstract:

Expressions relating spectral efficiency, power, and Doppler spectrum are derived for Rayleigh-faded wireless channels with Gaussian signal transmission. No side information on the state of the channel is assumed at the receiver. Rather, periodic reference signals are postulated, in accordance with the functioning of most wireless systems. The analysis relies on a well-established lower bound, generally tight and asymptotically exact at low SNR. In contrast with most previous studies, which relied on block-fading channel models, a continuous-fading model is adopted. This embeds the Doppler spectrum directly in the derived expressions, imbuing them with practical significance. Closed-form relationships are obtained for the popular Clarke-Jakes spectrum, and informative expansions, valid for arbitrary spectra, are found for the low- and high-power regimes. While the paper focuses on scalar channels, the extension to multiantenna settings is also discussed.
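
For reference, the Clarke-Jakes Doppler spectrum for which the closed-form relationships are obtained has the standard form, with f_D the maximum Doppler frequency:

```latex
S(\nu) = \frac{1}{\pi f_{\mathrm{D}} \sqrt{1 - (\nu / f_{\mathrm{D}})^{2}}},
\qquad |\nu| < f_{\mathrm{D}}.
```

Its characteristic bathtub shape, with energy concentrated near the band edges at plus or minus f_D, is what the continuous-fading model embeds directly in the derived expressions.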

Relevance:

20.00%

Publisher:

Abstract:

The analysis of multiantenna capacity in the high-SNR regime has hitherto focused on the high-SNR slope (or maximum multiplexing gain), which quantifies the multiplicative increase as a function of the number of antennas. This traditional characterization is unable to assess the impact of prominent channel features since, for the majority of channels, the slope equals the minimum of the numbers of transmit and receive antennas. Furthermore, a characterization based solely on the slope captures only the scaling; it carries no notion of the power required for a certain capacity. This paper advocates a more refined characterization whereby, as a function of SNR|dB, the high-SNR capacity is expanded as an affine function in which the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term, or power offset. The power offset, for which we find insightful closed-form expressions, is shown to play a chief role at SNR levels of practical interest.
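
In the notation standard in this literature (an assumption of this note, consistent with the abstract), the affine expansion reads, with S_inf the high-SNR slope and L_inf the power offset in 3-dB units:

```latex
C(\mathrm{SNR}) = S_{\infty} \left( \frac{\mathrm{SNR}|_{\mathrm{dB}}}{3\,\mathrm{dB}} - L_{\infty} \right) + o(1),
\quad
S_{\infty} = \lim_{\mathrm{SNR} \to \infty} \frac{C(\mathrm{SNR})}{\log_{2} \mathrm{SNR}},
\quad
L_{\infty} = \lim_{\mathrm{SNR} \to \infty} \left( \log_{2} \mathrm{SNR} - \frac{C(\mathrm{SNR})}{S_{\infty}} \right).
```

Two channels with the same slope but different power offsets thus reach the same capacity at SNRs differing by 3 L_inf dB, which is why the zero-order term matters at practical SNR levels.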

Relevance:

20.00%

Publisher:

Abstract:

Supported by IEEE 802.15.4 standardization activities, embedded networks have been gaining popularity in recent years. The focus of this paper is to quantify the behavior of key networking metrics of IEEE 802.15.4 beacon-enabled nodes under typical operating conditions, with the inclusion of packet retransmissions. We corrected and extended previous analyses by scrutinizing the assumptions on which the prevalent Markovian modeling is generally based. By means of a comparative study, we singled out which of the assumptions impact each of the performance metrics (throughput, delay, power consumption, collision probability, and packet-discard probability). In particular, we showed that, unlike what is usually assumed, the probability that a node senses the channel busy is not constant across the stages of the backoff procedure, and that these differences have a noticeable impact on backoff delay, packet-discard probability, and power consumption. Similarly, we showed that, again contrary to common assumption, the probability of obtaining transmission access to the channel depends on the number of nodes that are simultaneously sensing it. We showed that ignoring this dependence has a significant impact on the calculated values of throughput and collision probability. Circumventing these and other assumptions, we rigorously characterized, through a semianalytical approach, the key metrics of a beacon-enabled IEEE 802.15.4 system with retransmissions.
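
To illustrate why these assumptions matter, the following toy slot-level simulation (a deliberately simplified sketch, not the paper's semianalytical model: a single CCA instead of two, saturated nodes, no acknowledgements, assumed parameters) measures the channel-busy probability separately for each backoff stage instead of assuming one constant value:

```python
import random
from collections import defaultdict

MIN_BE, MAX_BE, MAX_STAGES = 3, 5, 5    # IEEE 802.15.4 default BE/NB limits
TX_SLOTS = 6                            # packet airtime in backoff slots (assumed)
N_NODES, N_SLOTS = 10, 200_000

def backoff(stage):
    """Return [stage, random backoff delay in slots] for a backoff stage."""
    be = min(MIN_BE + stage, MAX_BE)
    return [stage, random.randint(0, 2 ** be - 1)]

def simulate():
    busy_until = -1                      # last slot occupied by a transmission
    cca = defaultdict(lambda: [0, 0])    # stage -> [busy CCAs, total CCAs]
    nodes = [backoff(0) for _ in range(N_NODES)]
    for t in range(N_SLOTS):
        channel_busy = t <= busy_until   # state seen by every CCA in slot t
        for node in nodes:
            if node[1] > 0:              # still counting down the backoff
                node[1] -= 1
                continue
            stage = node[0]
            cca[stage][1] += 1
            cca[stage][0] += channel_busy
            if channel_busy:
                # Busy: move to the next stage; past the last stage the packet
                # is discarded and a fresh one starts at stage 0 (saturation).
                node[:] = backoff(stage + 1 if stage + 1 < MAX_STAGES else 0)
            else:
                # Idle: transmit (same-slot CCAs may collide; not tracked) and
                # start the next packet's backoff after the node's own airtime.
                busy_until = max(busy_until, t + TX_SLOTS)
                node[:] = [0, TX_SLOTS + 1 + random.randint(0, 2 ** MIN_BE - 1)]
    for stage in sorted(cca):
        b, n = cca[stage]
        print(f"stage {stage}: P(busy at CCA) = {b / n:.3f} over {n} CCAs")

simulate()
```

Whether and how far the per-stage probabilities diverge depends on the load and the parameters, which is precisely why the paper argues against folding them into a single constant.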