936 results for Cryptographic Protocols, Provable Security, ID-Based Cryptography


Relevance:

40.00%

Publisher:

Abstract:

Digitalization gives the Internet the power to host several virtual representations of reality, including that of identity. We leave an increasingly digital footprint in cyberspace, and this situation puts our identity at high risk. Privacy is a right and a fundamental social value that can play a key role as a medium to secure digital identities. Identity functionality is increasingly delivered as sets of services rather than as monolithic applications, so an identity layer in which identity and privacy management services are loosely coupled, publicly hosted and available to on-demand calls is a more realistic and acceptable situation. Identity and privacy should be interoperable and distributed through the adoption of service orientation and implementation based on open standards (technical interoperability). The objective of this project is to provide a way to implement interoperable, user-centric, digital identity-related privacy that responds to the distributed nature of federated identity systems. It is recognized that technical initiatives, emerging standards and protocols are not enough to resolve the concerns surrounding the multi-faceted and complex issue of identity and privacy. For this reason, they should be approached from a global perspective through an integrated and multidisciplinary approach. This approach dictates that privacy law, policies, regulations and technologies be crafted together from the start, rather than attached to digital identity after the fact. Thus, we draw Digital Identity-Related Privacy (DigIdeRP) requirements from global, domestic and business-specific privacy policies. These requirements take the shape of business interoperability. We suggest a layered implementation framework (the DigIdeRP framework), in accordance with the model-driven architecture (MDA) approach, that helps an organization's security team turn business interoperability into technical interoperability in the form of a set of services that can accommodate a Service-Oriented Architecture (SOA): a Privacy-as-a-set-of-services (PaaSS) system. The DigIdeRP framework will serve as a basis for the vital understanding between business management and technical managers on digital identity-related privacy initiatives. The layered DigIdeRP framework presents five practical layers as an ordered sequence forming the basis of the DigIdeRP project roadmap; in practice, however, an iterative process ensures that each layer effectively supports and enforces the requirements of the adjacent ones. Each layer is composed of a set of blocks, which define a roadmap that a security team can follow to successfully implement PaaSS. Several block descriptions are based on the OMG SoaML modeling language and BPMN process descriptions. We identified, designed and implemented seven services that form PaaSS and described their consumption. The PaaSS Java (JEE) project, WSDL, and XSD code are given and explained.
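
As a purely illustrative sketch of what one loosely coupled PaaSS-style service could look like (the service name, fields and rules below are invented for this example and are not taken from the thesis, which delivers Java/JEE, WSDL and XSD artifacts), a minimal consent-check service in Python might be:

    # Hypothetical consent-check service, standing in for one of the seven PaaSS
    # services mentioned in the abstract; names and logic are invented.
    from dataclasses import dataclass

    @dataclass
    class ConsentRequest:
        subject_id: str   # the digital identity whose attribute is requested
        attribute: str    # e.g. "email", "date_of_birth"
        purpose: str      # declared processing purpose

    class ConsentService:
        def __init__(self):
            # (subject_id, attribute) -> set of purposes the subject consented to
            self._policies = {}

        def grant(self, subject_id, attribute, purpose):
            self._policies.setdefault((subject_id, attribute), set()).add(purpose)

        def is_allowed(self, req: ConsentRequest) -> bool:
            return req.purpose in self._policies.get((req.subject_id, req.attribute), set())

    svc = ConsentService()
    svc.grant("alice", "email", "billing")
    print(svc.is_allowed(ConsentRequest("alice", "email", "billing")))    # True
    print(svc.is_allowed(ConsentRequest("alice", "email", "marketing")))  # False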

Relevance:

40.00%

Publisher:

Abstract:

This project aims to propose a cryptographic scheme that makes it possible to carry out a survey electronically. The solution is based on public-key cryptography, which is nowadays routinely used both in electronic commerce and in other cryptographic applications.
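
As a minimal sketch of the public-key building block such a scheme relies on (this is not the scheme proposed in the project, which would also need properties such as respondent anonymity; the third-party Python cryptography package is assumed), a respondent could encrypt an answer under the survey authority's public key:

    # Respondent encrypts an answer with the authority's RSA public key (OAEP);
    # only the authority, holding the private key, can read it after the survey closes.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # authority
    public_key = private_key.public_key()                                         # published

    ciphertext = public_key.encrypt(b"answer: option B", oaep)   # respondent side
    print(private_key.decrypt(ciphertext, oaep))                 # authority side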

Relevance:

40.00%

Publisher:

Abstract:

A group of European experts was commissioned to establish guidelines on the therapeutic use of repetitive transcranial magnetic stimulation (rTMS) from evidence published up until March 2014, regarding pain, movement disorders, stroke, amyotrophic lateral sclerosis, multiple sclerosis, epilepsy, consciousness disorders, tinnitus, depression, anxiety disorders, obsessive-compulsive disorder, schizophrenia, craving/addiction, and conversion. Despite unavoidable inhomogeneities, there is a sufficient body of evidence to accept with level A (definite efficacy) the analgesic effect of high-frequency (HF) rTMS of the primary motor cortex (M1) contralateral to the pain and the antidepressant effect of HF-rTMS of the left dorsolateral prefrontal cortex (DLPFC). A Level B recommendation (probable efficacy) is proposed for the antidepressant effect of low-frequency (LF) rTMS of the right DLPFC, HF-rTMS of the left DLPFC for the negative symptoms of schizophrenia, and LF-rTMS of contralesional M1 in chronic motor stroke. The effects of rTMS in a number of indications reach level C (possible efficacy), including LF-rTMS of the left temporoparietal cortex in tinnitus and auditory hallucinations. It remains to determine how to optimize rTMS protocols and techniques to give them relevance in routine clinical practice. In addition, professionals carrying out rTMS protocols should undergo rigorous training to ensure the quality of the technical realization, guarantee the proper care of patients, and maximize the chances of success. Under these conditions, the therapeutic use of rTMS should be able to develop in the coming years.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, we define a new scheme to develop and evaluate protection strategies for building reliable GMPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure in terms of packet loss and recovery time. Having mathematically formulated these components, we present experimental results that demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms so that they offer a certain degree of protection.
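
The paper gives the mathematical formulation of the FSD and FID; as a toy illustration only (the weighting and normalisation below are invented, not the paper's formulas), the idea of combining an a priori failure probability with an a posteriori impact measure per path can be sketched as:

    # Toy NPD-style evaluation of a path: FSD as the probability that at least one
    # link fails, FID as a weighted, normalised impact of a failure.
    def fsd(link_failure_probs):
        p_ok = 1.0
        for p in link_failure_probs:
            p_ok *= (1.0 - p)
        return 1.0 - p_ok

    def fid(packet_loss, recovery_time_ms, w_loss=0.5, w_time=0.5):
        # packet_loss in [0, 1]; recovery time normalised against a 1 s budget
        return w_loss * packet_loss + w_time * min(recovery_time_ms / 1000.0, 1.0)

    path = [0.01, 0.02, 0.005]                  # per-link failure probabilities
    print(fsd(path), fid(packet_loss=0.1, recovery_time_ms=50))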

Relevance:

40.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique whose importance has grown steadily since its appearance in the early 1970s. In the medical field its use has become indispensable, to the point that this imaging system could become a victim of its own success if its impact on population exposure is not given particular attention. The increase in the number of CT examinations has, of course, improved patient management and made some procedures less invasive. However, to ensure that the risk-benefit trade-off always remains in the patient's favour, doses that are of no diagnostic value must not be delivered.

While this is important in adults, it must be a priority when examinations are performed in children, in particular when following pathologies that require several CT examinations over the patient's lifetime. Children and young adults are more radiosensitive; moreover, since their life expectancy exceeds that of adults, they are at increased risk of developing a radiation-induced cancer, whose latency period can exceed twenty years. Assuming that each radiological examination is justified, it therefore becomes necessary to optimize acquisition protocols to ensure that the patient is not irradiated unnecessarily. Technological progress in CT is very rapid and, since 2009, new so-called iterative image reconstruction techniques have been introduced to reduce dose and improve image quality.

The objective of the present work is to determine the potential of statistical iterative reconstructions to minimize the doses delivered during CT examinations of children and young adults while maintaining diagnostic image quality, in order to propose optimized protocols.

Optimizing a CT protocol requires evaluating both the delivered dose and the image quality needed for diagnosis. While the dose is estimated by means of CT indices (CTDIvol and DLP), a particularity of this work is the use of two radically different approaches to evaluate image quality. The first, "physical" approach is based on physical metrics (SD, MTF, NPS, etc.) measured under well-defined conditions, most often on phantoms. Although this approach is limited because it does not incorporate the radiologists' perception, it allows certain image properties to be characterized quickly and simply. The second, "clinical" approach is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the evaluation step rate the diagnostic quality of these structures using a simple scoring scale. This approach is laborious to set up but has the advantage of being close to the radiologist's work and can be considered the reference method.

Among the main results of this work, it was shown that the statistical iterative algorithms studied in the clinic (ASIR, VEO) have an important potential to reduce CT dose (by up to 90%). However, by the way they operate, they modify the appearance of the image, introducing a change in texture that could affect diagnostic quality. By comparing the results provided by the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a diagnostic loss. This work also shows that these new reconstruction techniques cannot be introduced into clinical practice simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work and the tools developed can also guide future studies in the field of image quality, for example texture analysis or model observers for CT.

Computed tomography (CT) is an imaging technique in which interest has been growing since it first began to be used in the early 1970s. In the clinical environment, this imaging system has emerged as the gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favour of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over their lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have an extended life span in comparison to adults. For this population, the risk of developing cancer, whose latency period exceeds 20 years, is significantly higher than for adults. Assuming that each patient examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. Over the past few years, CT advances have been developing at a rapid pace. Since 2009, new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce dose as much as possible in examinations of children and young adults without compromising image quality and diagnosis.

The optimization step requires the evaluation of the delivered dose and of the image quality useful to perform diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first approach, called the "physical approach", computed physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-known conditions. Although this technique has some limitations because it does not take the radiologist's perspective into account, it enables the physical characterization of image properties in a simple and timely way. The second approach, called the "clinical approach", was based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step were asked to score the image quality of structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and time-consuming. Nevertheless, it has the advantage of being very close to the practice of radiologists and is considered a reference method.

Primarily, this work revealed that the statistical iterative reconstructions studied in the clinic (ASIR and VEO) have a strong potential to reduce CT dose (by up to 90%). However, by their mechanisms, they lead to a modification of the image appearance, with a change in image texture which may then affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture is related to a modification of the noise spectrum bandwidth; the NPS analysis makes it possible to anticipate or avoid a decrease in image quality. This project demonstrated that integrating these new statistical iterative reconstruction techniques can be complex and cannot be done on the basis of protocols using conventional reconstructions. The conclusions of this work and the image quality tools developed will be able to guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
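
As a minimal illustration of the "physical" approach mentioned above (simplifying assumptions: noise-only ROIs from a uniform phantom, square pixels; this is not the thesis code), a 2-D noise power spectrum can be estimated as follows; iterative reconstructions typically reshape this spectrum, which is one way the texture change discussed above shows up:

    import numpy as np

    def noise_power_spectrum(rois, pixel_size_mm):
        # Ensemble-averaged 2-D NPS from ROIs taken in a uniform region.
        spectra = []
        for roi in rois:
            detrended = roi - roi.mean()                    # remove the DC level
            f = np.fft.fftshift(np.fft.fft2(detrended))
            spectra.append(np.abs(f) ** 2)
        ny, nx = rois[0].shape
        # scale to physical units (e.g. HU^2 mm^2), assuming square pixels
        return (pixel_size_mm ** 2 / (nx * ny)) * np.mean(spectra, axis=0)

    # Sanity check on synthetic white noise: the NPS should be roughly flat.
    rng = np.random.default_rng(0)
    rois = [rng.normal(0.0, 10.0, (64, 64)) for _ in range(32)]
    nps = noise_power_spectrum(rois, pixel_size_mm=0.5)
    print(nps.shape, float(nps.mean()))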

Relevance:

40.00%

Publisher:

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading some file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution makes it possible to increase availability and to reduce the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment; each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximated view of the system or of part of it. This approximated view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take into account the memory available at processes by limiting the view they have to maintain about the system. Using this partial view, we propose three scalable broadcast algorithms, based on a propagation overlay that tends towards the global tree overlay and adapts to some constraints of the underlying system.

At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
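
As a toy illustration of the reliability/quota trade-off described above (not the thesis protocols; the selection rule and numbers are invented), a node could pick the parent offering the most reliable path to the root among parents that still have sending quota left:

    # Path reliability as the product of per-hop delivery probabilities,
    # constrained by per-node message quotas modelling limited capacity.
    def path_reliability(hop_reliabilities):
        r = 1.0
        for p in hop_reliabilities:
            r *= p
        return r

    def pick_parent(candidates, quota_left):
        # candidates: {parent: [hop reliabilities along its path to the root]}
        usable = {p: hops for p, hops in candidates.items() if quota_left.get(p, 0) > 0}
        if not usable:
            return None
        return max(usable, key=lambda p: path_reliability(usable[p]))

    candidates = {"A": [0.99, 0.95], "B": [0.9], "C": [0.99, 0.99, 0.97]}
    quota_left = {"A": 2, "B": 0, "C": 5}        # B has exhausted its sending quota
    print(pick_parent(candidates, quota_left))   # -> "C", the most reliable usable path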

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVES: Renal tubular sodium handling was measured in healthy subjects submitted to acute and chronic salt-repletion/salt-depletion protocols. The goal was to compare the changes in proximal and distal sodium handling induced by the two procedures using the lithium clearance technique. METHODS: In nine subjects, acute salt loading was obtained with a 2 h infusion of isotonic saline, and salt depletion was induced with a low-salt diet and furosemide. In the chronic protocol, 15 subjects randomly received a low-, a regular- and a high-sodium diet for 1 week. In both protocols, renal and systemic haemodynamics and urinary electrolyte excretion were measured after an acute water load. In the chronic study, sodium handling was also determined, based on 12 h day- and night-time urine collections. RESULTS: The acute and chronic protocols induced comparable changes in sodium excretion, renal haemodynamics and hormonal responses. Yet, the relative contribution of the proximal and distal nephrons to sodium excretion in response to salt loading and depletion differed in the two protocols. Acutely, subjects appeared to regulate sodium balance mainly by the distal nephron, with little contribution of the proximal tubule. In contrast, in the chronic protocol, changes in sodium reabsorption could be measured both in the proximal and distal nephrons. Acute water loading was an important confounding factor which increased sodium excretion by reducing proximal sodium reabsorption. This interference of water was particularly marked in salt-depleted subjects. CONCLUSION: Acute and chronic salt loading/salt depletion protocols investigate different renal mechanisms of control of sodium balance. The endogenous lithium clearance technique is a reliable method to assess proximal sodium reabsorption in humans. However, to investigate sodium handling in diseases such as hypertension, lithium should be measured preferably on 24 h or overnight urine collections to avoid the confounding influence of water.
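
For readers unfamiliar with the lithium clearance technique, here is a worked example of the standard clearance arithmetic it relies on (the numerical values are illustrative, not data from this study):

    # C_x = (U_x * V) / P_x. Lithium is freely filtered and reabsorbed almost only
    # in the proximal tubule, so C_Li approximates fluid delivery out of the
    # proximal nephron; creatinine clearance is used here as a GFR surrogate.
    def clearance(urine_conc, urine_flow_ml_min, plasma_conc):
        return urine_conc * urine_flow_ml_min / plasma_conc

    c_li = clearance(urine_conc=0.3, urine_flow_ml_min=2.0, plasma_conc=0.02)   # lithium
    gfr  = clearance(urine_conc=6.0, urine_flow_ml_min=2.0, plasma_conc=0.1)    # creatinine
    fe_li = c_li / gfr          # fraction of the filtered load escaping the proximal tubule
    print(f"C_Li = {c_li:.0f} mL/min, FE_Li = {fe_li:.2f}, "
          f"fractional proximal reabsorption = {1 - fe_li:.2f}")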

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE: Comparison of prospectively treated patients receiving neoadjuvant cisplatin-based chemotherapy vs radiochemotherapy followed by resection for mediastinoscopically proven stage III N2 non-small cell lung cancer, with respect to postoperative morbidity, pathological nodal downstaging, overall and disease-free survival, and site of recurrence. METHODS: Eighty-two patients were enrolled between January 1994 and June 2003; 36 had cisplatin- and docetaxel-based chemotherapy (group I) and 46 had cisplatin-based radiochemotherapy up to 44 Gy (group II), either as sequential (25 patients) or concomitant (21 patients) treatment. All patients were evaluated for absence of distant metastases by bone scintigraphy, thoracoabdominal CT or PET scan, and brain MRI, and all underwent pre-induction mediastinoscopy, resection and mediastinal lymph node dissection by the same surgeon. RESULTS: Groups I and II comprised T1/2 tumors in 47 and 28%, T3 tumors in 45 and 41%, and T4 tumors in 8 and 31% of the patients, respectively (P=0.03). There was a similar distribution of the extent of resection (lobectomy, sleeve lobectomy, left and right pneumonectomy) in both groups (P=0.9). Groups I and II showed a postoperative 90-day mortality of 3 and 4% (P=0.6), an R0-resection rate of 92 and 94% (P=0.9), and pathological mediastinal downstaging in 61 and 78% of the patients (P<0.01), respectively. Five-year overall survival and disease-free survival of all patients were 40 and 36%, respectively, without a significant difference between T1-3 and T4 tumors. There was no significant difference in overall survival between the two induction regimens; however, radiochemotherapy was associated with longer disease-free survival than chemotherapy (P=0.04). There was no significant difference between concurrent and sequential radiochemotherapy with respect to postoperative morbidity, resectability, pathological nodal downstaging, survival and disease-free survival. CONCLUSIONS: Neoadjuvant cisplatin-based radiochemotherapy was associated with similar postoperative mortality, increased pathological nodal downstaging and better disease-free survival compared with cisplatin-docetaxel-based chemotherapy in patients with stage III (N2) NSCLC, although a higher number of T4 tumors was included in the radiochemotherapy group.

Relevance:

40.00%

Publisher:

Abstract:

This study investigated the influence of two warm-up protocols on neural and contractile parameters of the knee extensors. A series of neuromuscular tests including voluntary and electrically evoked contractions were performed before and after running-based (RWU; slow running, athletic drills, and sprints) and strength-based (SWU; bilateral 90° back squats, Olympic lifting movements and reactivity exercises) warm-ups (duration ~40 min) in ten trained subjects. The estimated overall mechanical work was comparable between protocols. Maximal voluntary contraction torque (+15.6%; P < 0.01 and +10.9%; P < 0.05) and muscle activation (+10.9 and +12.9%; P < 0.05) increased to the same extent after RWU and SWU, respectively. Both protocols caused a significant shortening of the time to contract (-12.8 and -11.8% after RWU and SWU; P < 0.05), while the other twitch parameters did not change significantly. Running- and strength-based warm-ups induce a similar increase in the force-generating capacity of the knee extensors by improving muscle activation. Both protocols have similar effects on M-wave and isometric twitch characteristics.

Relevance:

40.00%

Publisher:

Abstract:

A remarkable feature of the carcinogenicity of inorganic arsenic is that while human exposures to high concentrations of inorganic arsenic in drinking water are associated with increases in skin, lung, and bladder cancer, inorganic arsenic has not typically caused tumors in standard laboratory animal test protocols. Inorganic arsenic administered for periods of up to 2 yr to various strains of laboratory mice, including the Swiss CD-1, Swiss CR:NIH(S), C57Bl/6p53(+/-), and C57Bl/6p53(+/+), has not resulted in significant increases in tumor incidence. However, Ng et al. (1999) have reported a 40% tumor incidence in C57Bl/6J mice exposed to arsenic in their drinking water throughout their lifetime, with no tumors reported in controls. In order to investigate the potential role of tissue dosimetry in differential susceptibility to arsenic carcinogenicity, a physiologically based pharmacokinetic (PBPK) model for inorganic arsenic in the rat, hamster, monkey, and human (Mann et al., 1996a, 1996b) was extended to describe the kinetics in the mouse. The PBPK model was parameterized in the mouse using published data from acute exposures of B6C3F1 mice to arsenate, arsenite, monomethylarsonic acid (MMA), and dimethylarsinic acid (DMA) and validated using data from acute exposures of C57Black mice. Predictions of the acute model were then compared with data from chronic exposures. There was no evidence of changes in the apparent volume of distribution or in the tissue-plasma concentration ratios between acute and chronic exposure that might support the possibility of inducible arsenite efflux. The PBPK model was also used to project tissue dosimetry in the C57Bl/6J study, in comparison with tissue levels in studies having shorter duration but higher arsenic treatment concentrations. The model evaluation indicates that pharmacokinetic factors do not provide an explanation for the difference in outcomes across the various mouse bioassays. Other possible explanations may relate to strain-specific differences, or to the different durations of dosing in each of the mouse studies, given the evidence that inorganic arsenic is likely to be active in the later stages of the carcinogenic process. [Authors]
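
As a highly reduced, generic sketch of the flow-limited PBPK idea (this is not the Mann et al. arsenic model; the compartments, parameter values and Euler step below are invented for illustration), tissue dosimetry predictions come from integrating mass balances of this form:

    # One plasma + one flow-limited tissue compartment, explicit Euler integration.
    def simulate(dose_mg, hours, dt=0.01,
                 v_plasma=3.0, v_tissue=30.0,   # volumes (L)
                 q=50.0,                        # tissue blood flow (L/h)
                 p=5.0,                         # tissue:plasma partition coefficient
                 cl=10.0):                      # plasma clearance (L/h)
        a_plasma, a_tissue = dose_mg, 0.0       # amounts (mg), i.v. bolus into plasma
        t = 0.0
        while t < hours:
            c_p, c_t = a_plasma / v_plasma, a_tissue / v_tissue
            flux = q * (c_p - c_t / p)          # flow-limited exchange
            a_plasma += (-flux - cl * c_p) * dt
            a_tissue += flux * dt
            t += dt
        c_p, c_t = a_plasma / v_plasma, a_tissue / v_tissue
        return c_p, c_t, c_t / c_p              # tissue:plasma concentration ratio

    print(simulate(dose_mg=1.0, hours=24.0))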

Relevance:

40.00%

Publisher:

Abstract:

This PhD thesis addresses the issue of scalable media streaming in large-scale networking environments. Multimedia streaming is one of the largest sinks of network resources, and this trend is still growing, as attested by the success of services like Skype, Netflix, Spotify and Popcorn Time (BitTorrent-based). In traditional client-server solutions, when the number of consumers increases, the server becomes the bottleneck. To overcome this problem, the Content Delivery Network (CDN) model was invented: the server copies the media content to CDN servers located at different strategic points in the network. However, CDNs require heavy infrastructure investment around the world, which is very expensive. Peer-to-peer (P2P) solutions are another way to achieve the same result. These solutions are naturally scalable, since each peer can act as both a receiver and a forwarder. Most of the proposed streaming solutions in P2P networks focus on routing scenarios to achieve scalability. However, these solutions cannot work properly for video-on-demand (VoD) streaming when the resources of the media server are not sufficient. Replication is a solution that can be used in these situations. This thesis specifically provides a family of replication-based media streaming protocols that are scalable, efficient and reliable in P2P networks. First, it provides SCALESTREAM, a replication-based streaming protocol that adaptively replicates media content on different peers to increase the number of consumers that can be served in parallel. The adaptiveness of this solution relies on the fact that it takes into account constraints such as the bandwidth capacity of peers to decide when to add or remove replicas. SCALESTREAM routes media blocks to consumers over a tree topology, assuming a reliable network composed of peers that are homogeneous in terms of bandwidth. Second, this thesis proposes RESTREAM, an extended version of SCALESTREAM that addresses the issues raised by unreliable networks composed of heterogeneous peers. Third, this thesis proposes EAGLEMACAW, a multiple-tree replication streaming protocol in which two distinct trees, named EAGLETREE and MACAWTREE, are built in a decentralized manner on top of an underlying mesh network. These two trees collaborate to serve consumers in an efficient and reliable manner: the EAGLETREE is in charge of improving efficiency, while the MACAWTREE guarantees reliability. Finally, this thesis provides TURBOSTREAM, a hybrid replication-based streaming protocol in which a tree overlay is built on top of a mesh overlay network. Both overlays cover all peers of the system and collaborate to improve efficiency and reduce latency when streaming media to consumers. This protocol is implemented and tested in a real networking environment using the PlanetLab Europe testbed, composed of peers distributed across different places in Europe.
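
As a toy sketch of the adaptive replication idea behind SCALESTREAM (not the actual protocol; the threshold rule, peer names and numbers are invented), replicas could be added or removed by comparing aggregate serving capacity with current demand:

    # Add a replica when current replicas cannot cover consumer demand,
    # remove one when there is ample slack (and more than one replica remains).
    def adapt_replicas(replicas, demand_kbps, per_replica_kbps, slack=1.5):
        capacity = len(replicas) * per_replica_kbps
        if capacity < demand_kbps:
            replicas.append(f"peer-{len(replicas) + 1}")     # hypothetical new host
        elif capacity > slack * demand_kbps and len(replicas) > 1:
            replicas.pop()
        return replicas

    replicas = ["peer-1"]
    for demand in (800, 2500, 4000, 1200):                   # kbps, varying over time
        replicas = adapt_replicas(replicas, demand, per_replica_kbps=1000)
        print(demand, replicas)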

Relevance:

40.00%

Publisher:

Abstract:

There is no doubt about the necessity of protecting digital communication: citizens are entrusting their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures we seriously depend on in our daily lives. Transportation services, financial services, energy grids, food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. But most digital communication is not secure today. Nevertheless, some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution (QKD). QKD is difficult to understand; it is complex, technically challenging, and costly. Yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found. Our vision is that, despite technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
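
As a minimal sketch of how a QKD-derived key can feed a subsequent cryptographic task (the key here is simulated with a local random generator; with genuine QKD key material and no key reuse, the one-time-pad step below is information-theoretically secure):

    import secrets

    def otp_xor(key: bytes, data: bytes) -> bytes:
        # One-time pad: XOR each message byte with a fresh key byte; never reuse key bits.
        assert len(key) >= len(data), "OTP needs at least as much key as data"
        return bytes(k ^ d for k, d in zip(key, data))

    qkd_key = secrets.token_bytes(32)       # stand-in for key bits delivered over a QKD link
    message = b"meter reading: 42 kWh"
    ciphertext = otp_xor(qkd_key, message)
    print(otp_xor(qkd_key, ciphertext))     # decrypts back to the original message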

Relevance:

40.00%

Publisher:

Abstract:

Combined positron emission tomography and computed tomography (PET/CT) scanners play a major role in medicine for in vivo imaging in an increasing number of diseases in oncology, cardiology, neurology, and psychiatry. With the advent of short-lived radioisotopes other than 18F and of newer scanners, there is a need to optimize radioisotope activity and acquisition protocols, as well as to compare scanner performance on an objective basis. The Discovery-LS (D-LS) was among the first clinical PET/CT scanners to be developed and has been extensively characterized with the older National Electrical Manufacturers Association (NEMA) NU 2-1994 standards. At the time of publication of the latest version of the standards (NU 2-2001), which have been adapted for whole-body imaging under clinical conditions, more recent models from the same manufacturer, i.e., the Discovery-ST (D-ST) and Discovery-STE (D-STE), were commercially available. We report on the full characterization, in both the two- and three-dimensional acquisition modes, of the D-LS according to the latest NEMA NU 2-2001 standards (spatial resolution, sensitivity, count-rate performance, accuracy of count-loss and random-coincidence correction, and image quality), as well as a detailed comparison with the newer, widely used D-ST, whose characteristics have already been published.
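
One of the NU 2-2001 count-rate figures of merit mentioned above, the noise-equivalent count rate (NECR), follows a standard formula; a small worked example (the count rates are illustrative, not Discovery-LS measurements):

    def necr(trues_cps, scatter_cps, randoms_cps, k=1):
        # NECR = T^2 / (T + S + k*R); k = 1 with a noiseless randoms estimate, k = 2 otherwise.
        return trues_cps ** 2 / (trues_cps + scatter_cps + k * randoms_cps)

    print(f"NECR = {necr(trues_cps=120e3, scatter_cps=60e3, randoms_cps=90e3, k=1):.0f} cps")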

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the clinical performance of glass-ceramic/zirconia crowns fabricated using intraoral digital impressions in a retrospective study with a three-year follow-up. METHODS: 70 consecutive patients with a total of 86 glass-ceramic/zirconia crowns were treated by a single clinician using standardized clinical and laboratory protocols. A complete digital workflow was adopted, except for the veneering procedure for the glass-ceramic crowns. Occlusal adjustments were made before the ceramic glazing procedure. Before cementation, all abutments were carefully cleaned with a 70% alcohol solution and air dried. Cementation was performed using a dual-curing, self-adhesive resin cement. Patients were re-examined after 12, 24 and 36 months to assess crown chipping/fractures. RESULTS: After the three-year follow-up, none of the zirconia-based restorations had been lost ("apparent" survival rate 100%); however, the chipping rate of the veneering material increased from 9.3% after 12 months to 14% after 24 months and 30.2% after 36 months. As a consequence, the "real" success rate after 3 years was 69.8%. CONCLUSIONS: After 3 years the success rate of zirconia-based crowns was 69.8%, while the incidence of chipping was 30.2%. Assuming an exponential increase in chipping rate between 12 and 36 months, it can be argued that, among other factors, a fatigue mechanism could be the main cause of failure of glass-ceramic veneered zirconia, especially after 24 months.

Relevance:

40.00%

Publisher:

Abstract:

Multi-centre data repositories like the Alzheimer's Disease Neuroimaging Initiative (ADNI) offer a unique research platform, but pose questions concerning the comparability of results when a range of imaging protocols and data processing algorithms is used. The variability is mainly due to the non-quantitative character of the widely used structural T1-weighted magnetic resonance (MR) images. Although the stability of the main effect of Alzheimer's disease (AD) on brain structure across platforms and field strengths has been addressed in previous studies using multi-site MR images, there are only sparse empirically based recommendations for the processing and analysis of pooled multi-centre structural MR data acquired at different magnetic field strengths (MFS). Aiming to minimise potential systematic bias when using ADNI data, we investigate the specific contributions of spatial registration strategies and the impact of MFS on voxel-based morphometry in AD. We perform a whole-brain analysis within the framework of Statistical Parametric Mapping, testing for main effects of various diffeomorphic spatial registration strategies and of MFS, and for their interaction with disease status. Beyond confirming medial temporal lobe volume loss in AD, we detect a significant impact of the spatial registration strategy on the estimation of AD-related atrophy. Additionally, we report a significant effect of MFS on the assessment of brain anatomy (i) in the cerebellum, (ii) in the precentral gyrus and (iii) in the thalamus bilaterally, showing no interaction with disease status. We provide empirical evidence in support of pooling data in multi-centre VBM studies irrespective of disease status or MFS.