223 results for step-up converter
Abstract:
The success of therapies for a number of pediatric disorders has posed new challenges for the long-term follow-up of adolescents with chronic endocrinopathies. Unfortunately, too many patients are lost during the transfer from pediatric to adult clinics. The transition process should be well organized and include the young person and their family. Recognizing the special needs of these adolescents is an important step in developing patient-centered approaches to care that enable patients to develop autonomy and self-care skills. Key elements in this process include structured policies and guidelines, communication and close collaboration between pediatric and adult clinics, and the integration of nurse clinicians into the transition process to help close the gaps in care.
Abstract:
A new radiolarian order - Archaeospicularia - is proposed for some Lower Paleozoic radiolarians previously considered to belong to Spumellaria and to Collodaria. It is characterized by a globular shell made of several spicules which can be free, interlocked, or fused to form a latticed wall. The present paper gives the definition of this order and proposes a first classification. It is supposed that the Archaeospicularia represent the oldest radiolarian group and that in the Lower Paleozoic they gave rise to the orders Entactinaria, Albaillellaria, and probably Spumellaria by the reduction of the number of initial spicules. The origin of this order and its relationships with other groups of organisms with siliceous skeletons are also briefly discussed. (C) 2000 Academie des sciences / Editions scientifiques et medicales Elsevier SAS.
Abstract:
Heart transplantation (HTx) started in 1987 at two university hospitals (CHUV, HUG) in the western part of Switzerland, with 223 HTx performed at the CHUV up to December 2010. Between 1987 and 2003, 106 HTx were performed at the HUG, resulting in a total of 329 HTx in western Switzerland. After the relocation of organ transplantation activity in western Switzerland in 2003, the surgical part and the early postoperative care of HTx remained limited to the CHUV. However, all other HTx activities are pursued at both university hospitals (CHUV, HUG). This article summarizes the current protocols for the selection and pre-transplant follow-up of HTx candidates, permitting a uniform structure of pre-transplant follow-up in western Switzerland.
Abstract:
Nearly full-length circumsporozoite protein (CSP) from Plasmodium falciparum, the C-terminal fragments of both P. falciparum and P. yoelii CSP, and a fragment comprising 351 amino acids of P. vivax MSP1 were expressed in the slime mold Dictyostelium discoideum. Discoidin-tag expression vectors allowed both high yields of these proteins and their purification by a nearly single-step procedure. We exploited the galactose-binding activity of Discoidin Ia to separate the fusion proteins by affinity chromatography on Sepharose-4B columns. Inclusion of a thrombin recognition site allowed cleavage of the Discoidin-tag from the fusion protein. Partial secretion of the protein was obtained via an ER-independent pathway, whereas routing the recombinant proteins to the ER resulted in glycosylation and retention. Yields of proteins ranged from 0.08 to 3 mg l(-1) depending on the protein sequence and the purification conditions. The recognition of purified MSP1 by sera from P. vivax malaria patients was used to confirm the native conformation of the protein expressed in Dictyostelium. The simple purification procedure described here, based on Sepharose-4B, should facilitate the expression and large-scale purification of various Plasmodium polypeptides.
Abstract:
OBJECTIVE: A single course of antenatal corticosteroids (ACS) is associated with a reduction in respiratory distress syndrome and neonatal death. The Multiple Courses of Antenatal Corticosteroids Study (MACS), involving 1858 women, was a multicentre randomized placebo-controlled trial of multiple courses of ACS given every 14 days until 33+6 weeks or birth, whichever came first. The primary outcome of the study, a composite of neonatal mortality and morbidity, was similar for the multiple ACS and placebo groups (12.9% vs. 12.5%), but infants exposed to multiple courses of ACS weighed less, were shorter, and had smaller head circumferences. Thus, for women who remain at increased risk of preterm birth, multiple courses of ACS (every 14 days) are not recommended. Chronic use of corticosteroids is associated with numerous side effects including weight gain and depression. The aim of this postpartum assessment was to ascertain whether multiple courses of ACS were associated with maternal side effects. METHODS: Three months postpartum, women who participated in MACS were asked to complete a structured questionnaire that asked about maternal side effects of corticosteroid use during MACS and included the Edinburgh Postnatal Depression Scale. Women were also asked to evaluate their study participation. RESULTS: Of the 1858 women randomized, 1712 (92.1%) completed the postpartum questionnaire. There were no significant differences in the risk of maternal side effects between the two groups. A substantial proportion of women met the criteria for postpartum depression (14.1% in the ACS vs. 16.0% in the placebo group). Most women (94.1%) responded that they would participate in the trial again. CONCLUSION: In pregnancy, corticosteroids are given to women for fetal lung maturation and for the treatment of various maternal diseases. In this international multicentre randomized controlled trial, multiple courses of ACS (every 14 days) were not associated with maternal side effects, and the majority of women responded that they would participate in such a study again.
Abstract:
We present dual-wavelength Digital Holographic Microscopy (DHM) measurements on a certified 8.9-nm-high chromium thin step sample and demonstrate sub-nanometer axial accuracy. We introduce a modified DHM Reference Calibrated Hologram (RCH) reconstruction algorithm taking into account amplitude contributions. By combining this with a temporal averaging procedure and a specific dual-wavelength DHM arrangement, it is shown that specimen topography can be measured with an accuracy, defined as the axial standard deviation, as low as 0.9 nm. Indeed, it is reported that averaging each of the two wavefronts recorded with real-time dual-wavelength DHM can provide up to 30% spatial noise reduction for the given configuration, thanks to their non-correlated nature. © 2008 SPIE
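The benefit of temporal averaging follows from elementary statistics: averaging N acquisitions whose noise contributions are uncorrelated lowers the noise standard deviation by roughly 1/sqrt(N). The short Python sketch below illustrates this scaling on purely synthetic height maps (an 8.9 nm step plus Gaussian noise); it does not reproduce the paper's RCH reconstruction, dual-wavelength processing, or actual data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic single-shot height maps: a flat 8.9 nm step plus uncorrelated
# spatial noise (illustrative values only, not the measured data).
true_step_nm = 8.9
noise_nm = 1.3          # assumed single-shot axial noise level
shape = (256, 256)

def one_acquisition():
    height = np.zeros(shape)
    height[:, shape[1] // 2:] = true_step_nm               # the step
    return height + rng.normal(0.0, noise_nm, size=shape)  # uncorrelated noise

single = one_acquisition()
averaged = np.mean([one_acquisition() for _ in range(16)], axis=0)

# Take the axial standard deviation over a flat region as the accuracy figure.
flat = np.s_[:, : shape[1] // 2]
print(f"single shot sigma : {single[flat].std():.2f} nm")
print(f"16-frame average  : {averaged[flat].std():.2f} nm (about 1/sqrt(16) of it)")
```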
Abstract:
General Introduction
This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements (whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to an extent that is statistically significant and quantitatively large.
Part I
In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Celine Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful to summarize how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.
The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence to making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, pretty much like the "tariffication" of NTBs.
The methodology for this exercise is as follows: In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines (a toy numerical version of these three steps is sketched after this abstract). This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.
The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be a potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "Single List" RoOs (used since 1997). Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.
Part II
The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count data analysis and survival analysis.
First, using Poisson and Negative Binomial regressions, the count of AD measures' revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
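As a minimal illustration of the chapter 4 count-data check just described, the sketch below regresses yearly revocation counts on initiations lagged five years with a Poisson model. The data are synthetic and the real analysis adds further controls (inter alia) and a Negative Binomial specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic yearly counts for one hypothetical user of anti-dumping measures
# (illustrative numbers only, not the chapter's worldwide AD database).
years = np.arange(1980, 2006)
initiations = rng.poisson(25, size=years.size)
init_lag5 = pd.Series(initiations, index=years).shift(5)

# Revocations built to depend loosely on initiations five years earlier.
revocations = rng.poisson(3 + 0.4 * init_lag5.fillna(0.0).to_numpy())

df = pd.DataFrame({"revocations": revocations,
                   "init_lag5": init_lag5.to_numpy()},
                  index=years).dropna()

# Poisson regression of revocations on initiations(t-5); a strict five-year
# sunset cycle would show up as a strong, precisely estimated coefficient.
X = sm.add_constant(df[["init_lag5"]])
fit = sm.Poisson(df["revocations"], X).fit(disp=0)
print(fit.summary())
```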
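Separately, to make the three-step MFC simulation of chapter 2 (referenced above) more concrete, here is a toy numerical version under the assumption that step 1 delivered a logistic relationship between utilization, the preference margin and the MFC. All coefficients and tariff-line data below are invented for illustration; they are not the thesis estimates.

```python
import numpy as np

def logit(u):
    return np.log(u / (1.0 - u))

# Step 1 (assumed already done): coefficients of an estimated relationship
#   utilization = sigmoid(a + b * preference_margin + c * MFC)
# The numbers below are made up for illustration only.
a, b, c = -2.0, 0.15, 0.08

# Observed tariff lines: utilization rate, preference margin (percentage
# points) and import value used as trade weight (all synthetic).
util  = np.array([0.55, 0.80, 0.35, 0.90, 0.60])
pref  = np.array([4.0, 8.0, 2.0, 10.0, 5.0])
trade = np.array([120.0, 40.0, 300.0, 15.0, 90.0])

# Step 2: invert the relationship line by line to get the MFC that would
# reproduce the observed utilization rate under the estimated model.
mfc = (logit(util) - a - b * pref) / c

# Step 3: trade-weighted average MFC as a single summary of the regime.
mfc_bar = np.average(mfc, weights=trade)
print("line-level simulated MFC (% of good value):", np.round(mfc, 1))
print(f"trade-weighted average MFC: {mfc_bar:.1f}% of good value")
```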
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition (a minimal simulation in this spirit is sketched after this abstract). Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could give explanations as to why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures.
Furthermore, I extract and describe its community structure taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models along with the studied coauthorship network in order to highlight which specific network properties foster cooperation and shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree-heterogeneity of social networks and their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and the individual payoff calculations.
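As an illustration of the Nowak and May (1992) result referenced in this abstract, the sketch below runs a minimal spatial prisoner's dilemma on a lattice with synchronous imitate-the-best updating. The grid size, temptation payoff b and initial cooperator fraction are arbitrary choices, and the surviving cooperator fraction will vary with them.

```python
import numpy as np

def spatial_pd(n=50, b=1.65, steps=40, seed=1):
    """Minimal Nowak-May-style spatial prisoner's dilemma on an n x n torus.

    Strategies: 1 = cooperate, 0 = defect. A defector earns b against each
    cooperating neighbour, mutual cooperation earns 1, everything else 0.
    Each generation, every site copies the strategy of the highest-scoring
    site in its Moore neighbourhood (itself included).
    """
    rng = np.random.default_rng(seed)
    strat = (rng.random((n, n)) < 0.6).astype(int)   # ~60% initial cooperators

    def shift(a, di, dj):
        return np.roll(np.roll(a, di, axis=0), dj, axis=1)

    neigh = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]

    for _ in range(steps):
        coop_nb = sum(shift(strat, di, dj) for di, dj in neigh)
        payoff = np.where(strat == 1, 1.0 * coop_nb, b * coop_nb)

        best_s, best_p = strat.copy(), payoff.copy()
        for di, dj in neigh:
            nb_p, nb_s = shift(payoff, di, dj), shift(strat, di, dj)
            better = nb_p > best_p
            best_p = np.where(better, nb_p, best_p)
            best_s = np.where(better, nb_s, best_s)
        strat = best_s

    return strat.mean()   # final fraction of cooperators

if __name__ == "__main__":
    print(f"surviving cooperator fraction: {spatial_pd():.2f}")
```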
Abstract:
BACKGROUND: A device to perform sutureless end-to-side coronary artery anastomosis has been developed by means of stent technology (GraftConnector). The present study assesses the long-term quality of the GraftConnector anastomosis in a sheep model. METHODS: In 8 adult sheep, 40-55 kg in weight, the right internal mammary artery (RIMA) was prepared through a left anterior thoracotomy and connected to the left anterior descending artery (LAD) by means of the GraftConnector, on the beating heart, without using any stabilizer. Ticlopidine 250 mg/day for anticoagulation for 4 weeks and aspirin 100 mg/day for 6 months were given. The animals were sacrificed after 6 months and histological examination of the anastomoses was carried out after slicing with the connector in situ for morphological analysis. RESULTS: All animals survived to 6 months. All anastomoses were patent; mean luminal width at histology was 1.8 +/- 0.2 mm and mean myointimal hyperplasia thickness was 0.21 +/- 0.1 mm. CONCLUSIONS: Long-term results demonstrate that OPCABGs performed with the GraftConnector had a 100% patency rate. The mean anastomotic luminal width corresponds to the mean LAD diameter of adult sheep. We may speculate that myointimal hyperplasia occurred as a result of local device oversizing.
Abstract:
Introduction: An impaired ability to oxidize fat may be a factor in the aetiology of obesity (3). Moreover, the exercise intensity (Fatmax) eliciting the maximal fat oxidation rate (MFO) was found to be lower in obese (O) compared with lean (L) individuals (4). However, differences in fat oxidation rates (FOR) during exercise between O and L remain equivocal, and little is known about FORs at high intensities (>60%) in O compared with L. This study aimed to characterize fat oxidation kinetics over a large range of intensities in L and O. Methods: 12 healthy L [body mass index (BMI): 22.8±0.4] and 16 healthy O men (BMI: 38.9±1.4) performed a submaximal incremental test (Incr) to determine whole-body fat oxidation kinetics using indirect calorimetry. After a 15-min resting period (Rest) and a 10-min warm-up at 20% of maximal power output (MPO, determined by a maximal incremental test), the power output was increased by 7.5% MPO every 6 min until the respiratory exchange ratio reached 1.0. Venous lactate and glucose and plasma concentrations of epinephrine (E), norepinephrine (NE), insulin and non-esterified fatty acids (NEFA) were assessed at each step. A mathematical model (SIN) (1), including three variables (dilatation, symmetry, translation), was used to characterize fat oxidation kinetics (normalized by fat-free mass) and to determine Fatmax and MFO. Results: FOR at Rest and MFO were not significantly different between groups (p≥0.1). FORs were similar from 20-60% (p≥0.1) and significantly lower from 65-85% in O than in L (p≤0.04). Fatmax was significantly lower in O than in L (46.5±2.5 vs 56.7±1.9%, respectively; p=0.005). Fat oxidation kinetics were characterized by a similar translation (p=0.2), a significantly lower dilatation (p=0.001) and a tendency toward a left-shifted symmetry in O compared with L (p=0.09). Plasma E, insulin and NEFA were significantly higher in L compared with O (p≤0.04). There were no significant differences in glucose, lactate and plasma NE between groups (p≥0.2). Conclusion: The study showed that O presented a lower Fatmax and a lower reliance on fat oxidation at high, but not at moderate, intensities. This may be linked to: i) higher levels of insulin and lower E concentrations in O, which may induce blunted lipolysis; ii) a higher percentage of type II and a lower percentage of type I fibres (5); and iii) a decreased mitochondrial content (2), which may reduce FORs at high intensities and Fatmax. These findings may have implications for prescribing an appropriate exercise intensity to optimize fat oxidation in O. References: 1. Cheneviere et al. Med Sci Sports Exerc. 2009 2. Holloway et al. Am J Clin Nutr. 2009 3. Kelley et al. Am J Physiol. 1999 4. Perez-Martin et al. Diabetes Metab. 2001 5. Tanner et al. Am J Physiol Endocrinol Metab. 2002
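For readers unfamiliar with this kind of modelling, the sketch below shows the general idea of fitting a smooth sine-shaped curve to fat oxidation rates measured at each step of an incremental test and reading off MFO and Fatmax. The functional form and all numbers are simplified placeholders; the actual SIN model and its dilatation, symmetry and translation parameters are defined in Cheneviere et al. (2009).

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) fat oxidation rates measured at each exercise step.
intensity = np.array([20, 27.5, 35, 42.5, 50, 57.5, 65, 72.5, 80])    # % MPO
fat_ox    = np.array([2.6, 3.3, 3.9, 4.3, 4.3, 4.0, 3.4, 2.5, 1.4])   # mg/min/kg FFM

def sine_curve(x, peak, width, shift):
    """Simplified sine-shaped curve, a stand-in for the published SIN model:
    peak height ~ MFO, width ~ dilatation, shift ~ translation."""
    z = np.clip((x - shift) / width, 0.0, 1.0)
    return peak * np.sin(np.pi * z)

params, _ = curve_fit(sine_curve, intensity, fat_ox, p0=[4.0, 90.0, 0.0])

grid = np.linspace(intensity.min(), intensity.max(), 500)
fitted = sine_curve(grid, *params)
print(f"MFO    ~ {fitted.max():.2f} mg/min/kg FFM")
print(f"Fatmax ~ {grid[np.argmax(fitted)]:.1f} % MPO")
```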
Abstract:
All-trans retinoic acid (ATRA) combined with anthracycline-based chemotherapy is the reference treatment of acute promyelocytic leukemia (APL). Whereas in high-risk patients cytarabine (AraC) is often considered useful in combination with anthracycline to prevent relapse, its usefulness in standard-risk APL is uncertain. In the APL 2000 trial, patients with standard-risk APL [i.e., with baseline white blood cell (WBC) count <10,000/mm(3)] were randomized between ATRA with daunorubicin (DNR) and AraC (AraC group) and ATRA with DNR but without AraC (no-AraC group). All patients subsequently received combined maintenance treatment. The trial had been prematurely terminated due to significantly more relapses in the no-AraC group (J Clin Oncol 2006;24:5703-10), but follow-up was still relatively short. With long-term follow-up (median 103 months), the 7-year cumulative incidence of relapse was 28.6% in the no-AraC group, compared with 12.9% in the AraC group (P = 0.0065). In standard-risk APL, at least when the anthracycline used is DNR, avoiding AraC may lead to an increased risk of relapse, suggesting that the need for AraC is regimen-dependent.
Abstract:
Neuroblastoma (NBL) is the commonest extra-cranial solid tumor in children and the leading cause of cancer-related death in children aged 1 to 4 years. NBL may behave in very different ways, from the less aggressive stage 4S or congenital forms, which may resolve without treatment in up to 90% of children, to high-risk disseminated stage 4 disease in older children, with a cure rate of 35 to 40%. Initial staging is crucial for effective management, and radiolabeled metaiodobenzylguanidine (MIBG) with iodine-123 is a powerful tool, with a sensitivity of around 90% and a specificity close to 100% for the diagnosis of NBL. MIBG scintigraphy is used routinely and is mandatory in most investigational clinical trials for the initial staging of the disease, the evaluation of the response to treatment, and the detection of recurrence during follow-up. With respect to the outcome of children presenting with disseminated stage 4 NBL, the role of the post-therapeutic [(123)I]MIBG scan has been investigated by several groups, but so far there is no consensus on whether a complete or very good partial response as assessed by MIBG is of prognostic value. NBL requires a multimodality approach at diagnosis and during follow-up, and MIBG scintigraphy retains its pivotal role, in particular with respect to bone marrow involvement and/or cortical bone metastases.