47 results for GDP Interpolation
Abstract:
While childcare policy has become topical in most OECD countries over the last ten years or so, actual developments display huge cross-national variations. Countries like Sweden and Denmark spend around 2 per cent of GDP on this service and provide affordable childcare places to most children below school age. At the other extreme, in Southern Europe, only around 10 per cent of this age group has access to formal daycare. Against this background, this article aims to account for cross-national variations in childcare services. It distinguishes two dependent variables: the coverage rate and the proportion of GDP spent subsidising childcare services. Using a mix of cross-sectional and pooled time-series methods, it tests a series of hypotheses concerning the determinants of the development of this policy. Its main conclusion for the coverage rate is that the key factors are public spending and wage dispersion (both positive). For spending, the key factors are the proportion of women in parliaments (positive) and spending on age-related policies (negative).
Abstract:
This paper deals with the problem of spatial data mapping. A new method based on wavelet interpolation and geostatistical prediction (kriging) is proposed. The method - wavelet analysis residual kriging (WARK) - is developed in order to address the problems arising with highly variable data in the presence of spatial trends. In these cases stationary prediction models have very limited application. Wavelet analysis is used to model large-scale structures, and kriging of the remaining residuals focuses on small-scale peculiarities. WARK is able to model spatial patterns which feature multiscale structure. In the present work WARK is applied to rainfall data and the results of validation are compared with those obtained from neural network residual kriging (NNRK). NNRK is also a residual-based method, which uses an artificial neural network to model large-scale non-linear trends. The comparison of the results demonstrates the high-quality performance of WARK in predicting hot spots and in reproducing the global statistical characteristics of the distribution and the spatial correlation structure.
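As an illustration of the residual-kriging idea behind WARK, the following is a minimal sketch assuming gridded observations: the large-scale trend is modeled by keeping only the coarse wavelet approximation (PyWavelets), and a toy simple-kriging routine with an exponential covariance handles the residuals. The function names, covariance model and parameters are illustrative, not the paper's implementation.

```python
import numpy as np
import pywt

def wavelet_trend(field, wavelet="db4", level=3):
    """Large-scale trend: keep the coarse approximation, zero all detail coefficients."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in lvl) for lvl in coeffs[1:]]
    rec = pywt.waverec2(coeffs, wavelet)
    return rec[: field.shape[0], : field.shape[1]]  # crop any reconstruction padding

def simple_krige(xy_obs, resid_obs, xy_new, sill=1.0, corr_range=5.0, nugget=1e-3):
    """Simple kriging of the (zero-mean) residuals, exponential covariance model."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / corr_range)
    K = cov(xy_obs, xy_obs) + nugget * np.eye(len(xy_obs))
    k = cov(xy_obs, xy_new)
    return np.linalg.solve(K, k).T @ resid_obs

# WARK-style prediction at a new location = wavelet trend there + kriged residual.
```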
Abstract:
The significance of the insurance industry in the functioning of the world economy is often underestimated, with premiums reaching 7.5% of world gross domestic product (GDP), three times as much as worldwide military expenses. Insurance services mutualise risks in such a way that they provide a form of private governance that complements or makes up for guarantees otherwise supplied by the State. This case study of international standards developed for the insurance market provides evidence that deviates from conventional accounts, which consider service standards to be heavily dependent on national environments and industry specificities. The chapter examines the relationship between tertiarisation, internationalisation and standardisation of contemporary economies by highlighting the complementarity between institutionalist approaches of the French regulation school and international political economy scholarship, shedding light on the polarisation in the possible use of standards, notwithstanding the sectoral and institutional specificities of the activities concerned.
Abstract:
Financial markets play an important role in an economy, performing various functions such as mobilizing and pooling savings, producing information about investment opportunities, screening and monitoring investments, implementing corporate governance, and diversifying and managing risk. These functions influence saving rates, investment decisions and technological innovation, and therefore have important implications for welfare. In my PhD dissertation I examine the interplay of financial and product markets by looking at different channels through which financial markets may influence an economy. My dissertation consists of four chapters. The first chapter is a co-authored work with Martin Strieborny, a PhD student from the University of Lausanne. The second chapter is a co-authored work with Melise Jaud, a PhD student from the Paris School of Economics. The third chapter is co-authored with both Melise Jaud and Martin Strieborny. The last chapter of my PhD dissertation is a single-author paper.

Chapter 1 of my PhD thesis analyzes the effect of financial development on the growth of contract-intensive industries. These industries intensively use intermediate inputs that can neither be sold on an organized exchange nor are reference-priced (Levchenko, 2007; Nunn, 2007). A typical example of a contract-intensive industry would be one where an upstream supplier has to make investments in order to customize a product for the needs of a downstream buyer. After the investment is made and the product is adjusted, the buyer may refuse to meet a commitment and trigger ex post renegotiation. Since the product is customized to the buyer's needs, the supplier cannot sell the product to a different buyer at the original price. This is referred to in the literature as the hold-up problem. As a consequence, individually rational suppliers will underinvest in relationship-specific assets, hurting the downstream firms, with negative consequences for aggregate growth. The standard way to mitigate the hold-up problem is to write a binding contract and to rely on legal enforcement by the state. However, even the most effective contract enforcement might fail to protect the supplier in tough times when the buyer lacks a reliable source of external financing. This suggests a potential role for financial intermediaries, banks in particular, in mitigating the incomplete-contract problem. First, financial products like letters of credit and letters of guarantee can substantially decrease the risk and transaction costs of the parties. Second, a bank loan can serve as a signal about a buyer's true financial situation: an upstream firm will be more willing to undertake relationship-specific investment knowing that the business partner is creditworthy and will abstain from myopic behavior (Fama, 1985; von Thadden, 1995). Therefore, a well-developed financial (especially banking) system should disproportionately benefit contract-intensive industries. The empirical test confirms this hypothesis. Indeed, contract-intensive industries seem to grow faster in countries with a well-developed financial system. Furthermore, this effect comes from a more developed banking sector rather than from a deeper stock market. These results are reaffirmed by examining the effect of US bank deregulation on the growth of contract-intensive industries in different states. Beyond an overall pro-growth effect, bank deregulation seems to disproportionately benefit the industries requiring relationship-specific investments from their suppliers.

Chapter 2 of my PhD focuses on the role of the financial sector in promoting exports of developing countries. In particular, it investigates how credit constraints affect the ability of firms operating in the agri-food sectors of developing countries to keep exporting to foreign markets. Trade in high-value agri-food products from developing countries has expanded enormously over the last two decades, offering opportunities for development. However, trade in agri-food is governed by a growing array of standards. Sanitary and Phytosanitary standards (SPS) and technical regulations impose additional sunk, fixed and operating costs along the firms' export life. Such costs may be detrimental to firms' survival, "pricing out" producers that cannot comply. The existence of these costs suggests a potential role of credit constraints in shaping the duration of trade relationships in foreign markets. A well-developed financial system provides exporters with the funds necessary to adjust production processes in order to meet quality and quantity requirements in foreign markets and to maintain long-standing trade relationships. Products with higher needs for financing should benefit the most from a well-functioning financial system. This differential effect calls for the difference-in-difference approach initially proposed by Rajan and Zingales (1998). As a proxy for the demand for financing of agri-food products, the sanitary risk index developed by Jaud et al. (2009) is used. The empirical literature on standards and norms shows high costs of compliance, both variable and fixed, for high-value food products (Garcia-Martinez and Poole, 2004; Maskus et al., 2005). The sanitary risk index reflects the propensity of products to fail health and safety controls on the European Union (EU) market. Given the high costs of compliance, the sanitary risk index captures the demand for external financing to comply with such regulations. The prediction is empirically tested by examining the export survival of different agri-food products from firms operating in Ghana, Mali, Malawi, Senegal and Tanzania. The results suggest that agri-food products that require more financing to keep up with the food safety regulation of the destination market indeed survive longer in foreign markets when they are exported from countries with better developed financial markets.

Chapter 3 analyzes the link between financial markets and the efficiency of resource allocation in an economy. Producing and exporting products inconsistent with a country's factor endowments constitutes a serious misallocation of funds, which undermines the competitiveness of the economy and inhibits its long-term growth. In this chapter, inefficient exporting patterns are analyzed through the lens of the agency theories from the corporate finance literature. Managers may pursue projects with negative net present values because their perquisites or even their job might depend on them. Exporting activities are particularly prone to this problem. Business related to foreign markets involves both high levels of additional spending and strong incentives for managers to overinvest. Rational managers might have incentives to push for exports that use the country's scarce factors, which is suboptimal from a social point of view. Export subsidies might further skew the incentives towards inefficient exporting: management can divert the export subsidies into investments promoting inefficient exporting. The corporate finance literature stresses the disciplining role of outside debt in counteracting the internal pressures to divert such "free cash flow" into unprofitable investments. Managers can lose both their reputation and the control of "their" firm if unpaid external debt triggers a bankruptcy procedure. The threat of possible failure to satisfy debt service payments pushes managers toward an efficient use of available resources (Jensen, 1986; Stulz, 1990; Hart and Moore, 1995). The main source of debt financing in most countries is banks. The disciplining role of banks might be especially important in countries suffering from insufficient judicial quality. Banks, in pursuing their rights, rely on comparatively simple legal interventions that can be implemented even by mediocre courts. In addition to their disciplining role, banks can promote efficient exporting patterns in a more direct way: by relaxing the credit constraints of producers and by screening, identifying and investing in the most profitable investment projects. Therefore, a well-developed domestic financial system, and the banking system in particular, would help push a country's exports towards products congruent with its comparative advantage. This prediction is tested by looking at the survival of different product categories exported to the US market. Products are identified according to the Euclidean distance between their revealed factor intensity and the country's factor endowments. The results suggest that products suffering from a comparative disadvantage (labour-intensive products from capital-abundant countries) survive for a shorter time on the competitive US market. This pattern is stronger if the exporting country has a well-developed banking system. Thus, a strong banking sector promotes exports consistent with a country's comparative advantage.

Chapter 4 of my PhD thesis further examines the role of financial markets in fostering efficient resource allocation in an economy. In particular, the allocative efficiency hypothesis is investigated in the context of equity market liberalization. Many empirical studies document a positive and significant effect of financial liberalization on growth (Levchenko et al., 2009; Quinn and Toyoda, 2009; Bekaert et al., 2005). However, the decrease in the cost of capital and the associated growth in investment appear rather modest in comparison to the large GDP growth effect (Bekaert and Harvey, 2005; Henry, 2000, 2003). Therefore, financial liberalization may have a positive impact on growth through its effect on the allocation of funds across firms and sectors. Free access to international capital markets allows the largest and most profitable domestic firms to borrow funds in foreign markets (Rajan and Zingales, 2003). As domestic banks lose some of their best clients, they reoptimize their lending practices, seeking new clients among smaller and younger industrial firms. These firms are likely to be more risky than large and established companies. Screening of customers becomes prevalent as the return to screening rises. Banks, ceteris paribus, tend to focus on firms operating in comparative-advantage sectors because they are better risks. Firms in comparative-disadvantage sectors, finding it harder to finance their entry into or survival in export markets, either exit or refrain from entering export markets. On aggregate, one should therefore expect to see less entry, more exit, and shorter survival on export markets in those sectors after financial liberalization. The paper investigates the effect of financial liberalization on a country's export pattern by comparing the dynamics of entry and exit of different products in a country's export portfolio before and after financial liberalization. The results suggest that products that lie far from the country's comparative advantage set tend to disappear relatively faster from the country's export portfolio following the liberalization of financial markets. In other words, financial liberalization tends to rebalance the composition of a country's export portfolio towards products that intensively use the economy's abundant factors.
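A hedged sketch of the Rajan-Zingales (1998) difference-in-difference specification invoked in Chapters 1 and 2: growth is regressed on industry and country fixed effects plus the interaction of an industry-level need for finance (contract intensity, or the sanitary risk index) with country-level financial development. The synthetic data frame and all column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "industry": rng.integers(0, 10, n),   # hypothetical industry codes
    "country": rng.integers(0, 30, n),    # hypothetical country codes
    "intensity": rng.random(n),           # industry-level need for finance
    "fin_dev": rng.random(n),             # country-level financial development
})
df["growth"] = 0.5 * df["intensity"] * df["fin_dev"] + rng.normal(0, 0.1, n)

# Fixed effects absorb industry- and country-wide shocks; the interaction
# coefficient is the difference-in-difference effect of interest.
fit = smf.ols("growth ~ C(industry) + C(country) + intensity:fin_dev", data=df).fit()
print(fit.params["intensity:fin_dev"])
```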
Abstract:
BACKGROUND: RalA and RalB are multifunctional GTPases involved in a variety of cellular processes including proliferation, oncogenic transformation and membrane trafficking. Here we investigated the mechanisms leading to activation of Ral proteins in pancreatic beta-cells and analyzed the impact on different steps of the insulin-secretory process. METHODOLOGY/PRINCIPAL FINDINGS: We found that RalA is the predominant isoform expressed in pancreatic islets and insulin-secreting cell lines. Silencing of this GTPase in INS-1E cells by RNA interference led to a decrease in secretagogue-induced insulin release. Real-time measurements by fluorescence resonance energy transfer revealed that RalA activation in response to secretagogues occurs within 3-5 min and reaches a plateau after 10-15 min. The activation of the GTPase is triggered by increases in intracellular Ca2+ and cAMP and is prevented by the L-type voltage-gated Ca2+ channel blocker Nifedipine and by the protein kinase A inhibitor H89. Defective insulin release in cells lacking RalA is associated with a decrease in the number of secretory granules docked at the plasma membrane, detected by Total Internal Reflection Fluorescence microscopy, and with a strong impairment in Phospholipase D1 activation in response to secretagogues. RalA was found to be activated by RalGDS, and its activation was severely hampered upon silencing of this GDP/GTP exchange factor. Accordingly, INS-1E cells lacking RalGDS displayed a reduction in hormone secretion induced by secretagogues and in the number of insulin-containing granules docked at the plasma membrane. CONCLUSIONS/SIGNIFICANCE: Taken together, our data indicate that RalA activation elicited by the exchange factor RalGDS in response to a rise in intracellular Ca2+ and cAMP controls hormone release from pancreatic beta-cells by coordinating the execution of different events in the secretory pathway.
Abstract:
PURPOSE: This study investigated the isolated and combined effects of heat [temperate (22 °C/30 % rH) vs. hot (35 °C/40 % rH)] and hypoxia [sea level (FiO2 0.21) vs. moderate altitude (FiO2 0.15)] on exercise capacity and neuromuscular fatigue characteristics. METHODS: Eleven physically active subjects cycled to exhaustion at constant workload (66 % of the power output associated with their maximal oxygen uptake in temperate conditions) in four different environmental conditions [temperate/sea level (control), hot/sea level (hot), temperate/moderate altitude (hypoxia) and hot/moderate altitude (hot + hypoxia)]. Torque and electromyography (EMG) responses following electrical stimulation of the tibial nerve (plantar-flexion; soleus) were recorded before and 5 min after exercise. RESULTS: Time to exhaustion was reduced (P < 0.05) in hot (-35 ± 15 %) or hypoxia (-36 ± 14 %) compared to control (61 ± 28 min), while hot + hypoxia (-51 ± 20 %) further compromised exercise capacity (P < 0.05). However, the effect of temperature or altitude on end-exercise core temperature (P = 0.089 and P = 0.070, respectively) and rating of perceived exertion (P > 0.05) did not reach significance. Maximal voluntary contraction torque, voluntary activation (twitch interpolation) and peak twitch torque decreased from pre- to post-exercise (-9 ± 1, -4 ± 1 and -6 ± 1 %, all trials combined, respectively; P < 0.05), with no effect of temperature or altitude. M-wave amplitude and root mean square activity were reduced (P < 0.05) in hot compared to temperate conditions, while normalized maximal EMG activity did not change. Altitude had no effect on any measured parameters. CONCLUSION: Moderate hypoxia in combination with heat stress reduces cycling time to exhaustion without modifying neuromuscular fatigue characteristics. Impaired oxygen delivery or increased cardiovascular strain, increasing relative exercise intensity, may have also contributed to earlier exercise cessation.
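For reference, voluntary activation assessed by twitch interpolation is conventionally computed from the twitch evoked by stimulation during the maximal voluntary contraction and the potentiated resting twitch; this is the standard formulation, not necessarily the exact variant used in this study:

\[ \mathrm{VA}\,(\%) = \left(1 - \frac{\text{superimposed twitch torque}}{\text{potentiated resting twitch torque}}\right) \times 100 \]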
Abstract:
The Haemophilia Registry of the Swiss Haemophilia Society was created in the year 2000. The latest records, from October 31st 2011, are presented here. Included are all patients with haemophilia A or B and other inherited coagulation disorders (including VWD patients with R-Co activity below 10%) known to and followed by the 11 paediatric and 12 adult haemophilia treatment or reference centers. Currently there are 950 patients registered, the majority of whom (585) have haemophilia A. Disease severity is graded according to ISTH criteria, and its distribution between mild, moderate and severe haemophilia is similar to data from other European and American registries. The majority (about two thirds) of Swiss patients with haemophilia A or B are treated on-demand, with only about 20% of patients being on prophylaxis. The picture is different in paediatric patients and young adults (1st and 2nd decades), where 80 to 90% of patients with haemophilia A are under regular prophylaxis. Interestingly enough, the use of factor concentrates, although readily available, is rather low in Switzerland, especially when taking the country's GDP into account: the total amount of factor VIII and IX used was 4.94 U per capita, comparable to other European countries with distinctly lower incomes (Poland, Slovakia, Hungary). This finding is mainly due to the aforementioned low rate of prophylactic treatment of haemophilia in our country. Our registry remains an important instrument of quality control of haemophilia therapy in Switzerland.
Abstract:
We present an open-source ITK implementation of a direct Fourier method for tomographic reconstruction, applicable to parallel-beam x-ray images. Direct Fourier reconstruction makes use of the central-slice theorem to build a polar 2D Fourier space from the 1D transformed projections of the scanned object, which is then resampled onto a Cartesian grid. An inverse 2D Fourier transform finally yields the reconstructed image. Additionally, we provide a complex wrapper to the BSplineInterpolateImageFunction to overcome ITK's current lack of image interpolators dealing with complex data types. A sample application is presented and extensively illustrated on the Shepp-Logan head phantom. We show that appropriate input zero-padding and 2D-DFT oversampling rates, together with radial cubic B-spline interpolation, improve 2D-DFT interpolation quality and are efficient remedies to reduce reconstruction artifacts.
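A minimal NumPy/SciPy sketch of the direct Fourier pipeline described above, for a parallel-beam sinogram of shape (n_angles, n_detectors). SciPy's cubic griddata stands in for the paper's radial cubic B-spline interpolator, and all names and parameters are illustrative rather than the ITK implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def direct_fourier_reconstruct(sinogram, angles_rad, oversample=2):
    n_det = sinogram.shape[1]
    n_pad = oversample * n_det  # zero-padding / oversampling reduces artifacts
    # 1D FFT of each projection = one radial line of the object's 2D spectrum
    proj_f = np.fft.fftshift(
        np.fft.fft(np.fft.ifftshift(sinogram, axes=1), n=n_pad, axis=1), axes=1)
    freqs = np.fft.fftshift(np.fft.fftfreq(n_pad))
    # polar coordinates of the Fourier samples (central-slice theorem)
    kx = (freqs[None, :] * np.cos(angles_rad[:, None])).ravel()
    ky = (freqs[None, :] * np.sin(angles_rad[:, None])).ravel()
    # resample the polar samples onto a Cartesian frequency grid
    gx, gy = np.meshgrid(freqs, freqs)
    spectrum = griddata((kx, ky), proj_f.ravel(), (gx, gy),
                        method="cubic", fill_value=0.0)
    # inverse 2D FFT yields the reconstructed slice (crop to n_det if desired)
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return np.real(img)
```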
Abstract:
Introduction: Low brain tissue oxygen pressure (PbtO2) is associated with worse outcome in patients with severe traumatic brain injury (TBI). However, it is unclear whether brain tissue hypoxia is merely a marker of injury severity or a predictor of prognosis, independent of intracranial pressure (ICP) and injury severity. Hypothesis: We hypothesized that brain tissue hypoxia was an independent predictor of outcome in patients with severe TBI, irrespective of elevated ICP and of the severity of cerebral and systemic injury. Methods: This observational study was conducted at the Neurological ICU, Hospital of the University of Pennsylvania, an academic level I trauma center. Patients admitted with severe TBI who had PbtO2 and ICP monitoring were included in the study. PbtO2, ICP, mean arterial pressure (MAP) and cerebral perfusion pressure (CPP = MAP - ICP) were monitored continuously and recorded prospectively every 30 min. Using linear interpolation, the duration and cumulative dose (area under the curve, AUC) of brain tissue hypoxia (PbtO2 < 15 mm Hg), elevated ICP (>20 mm Hg) and low CPP (<60 mm Hg) were calculated, and the association with outcome at hospital discharge, dichotomized as good (Glasgow Outcome Score [GOS] 4-5) vs. poor (GOS 1-3), was analyzed. Results: A total of 103 consecutive patients, monitored for an average of 5 days, were studied. Brain tissue hypoxia was observed in 66 (64%) patients even while ICP was < 20 mm Hg and CPP > 60 mm Hg (72 +/- 39% and 49 +/- 41% of brain hypoxic time, respectively). Compared with patients with good outcome, those with poor outcome had a longer duration of brain hypoxia (1.7 +/- 3.7 vs. 8.3 +/- 15.9 hrs, P<0.01), as well as a longer duration (11.5 +/- 16.5 vs. 21.6 +/- 29.6 hrs, P=0.03) and a greater cumulative dose (56 +/- 93 vs. 143 +/- 218 mm Hg*hrs, P<0.01) of elevated ICP. By multivariable logistic regression, admission Glasgow Coma Scale (OR 0.83, 95% CI: 0.70-0.99, P=0.04), Marshall CT score (OR 2.42, 95% CI: 1.42-4.11, P<0.01), APACHE II (OR 1.20, 95% CI: 1.03-1.43, P=0.03), and the duration of brain tissue hypoxia (OR 1.13, 95% CI: 1.01-1.27, P=0.04) were all significantly associated with poor outcome. No independent association was found between the AUC for elevated ICP and outcome (OR 1.01, 95% CI 0.97-1.02, P=0.11) in our prospective cohort. Conclusions: In patients with severe TBI, brain tissue hypoxia is frequent, despite normal ICP and CPP, and is associated with poor outcome, independent of intracranial hypertension and the severity of cerebral and systemic injury. Our findings indicate that PbtO2 is a strong physiologic prognostic marker after TBI. Further study is warranted to examine whether PbtO2-directed therapy improves outcome in severely head-injured patients.
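A hedged sketch of the duration and cumulative-dose (AUC) computation described in the Methods: the 30-minute PbtO2 recordings are linearly interpolated onto a fine time grid, and the time and area below the 15 mm Hg threshold are accumulated. Variable names and the resampling step are illustrative.

```python
import numpy as np

def hypoxia_burden(t_h, pbto2, thresh=15.0, dt_min=1.0):
    """t_h: sample times in hours (every 30 min); pbto2: PbtO2 in mm Hg."""
    dt = dt_min / 60.0                       # fine grid step, in hours
    t = np.arange(t_h[0], t_h[-1], dt)
    p = np.interp(t, t_h, pbto2)             # linear interpolation between samples
    depth = np.clip(thresh - p, 0.0, None)   # mm Hg below the hypoxia threshold
    duration_h = (depth > 0).sum() * dt      # hours spent below threshold
    dose = np.trapz(depth, dx=dt)            # cumulative dose (AUC), mm Hg * hrs
    return duration_h, dose
```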
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered in estimating the uncertainty. We propose to also use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between approximate and exact medoid solutions, but they differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the MsFV results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
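A toy sketch of the two error models, using KMeans plus closest-to-center realizations as a stand-in for the paper's distance-kernel clustering and medoids. `approx` holds approximate responses (one row per realization, e.g. a breakthrough curve), `exact_fn(i)` runs the exact model for realization i, and the inverse-distance weighting is an illustrative substitute for the Global Error Model's linear interpolation.

```python
import numpy as np
from sklearn.cluster import KMeans

def corrected_responses(approx, exact_fn, n_clusters=5, mode="local"):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(approx)
    # "medoid" = realization closest to each cluster center
    medoid_idx = np.array([
        np.flatnonzero(km.labels_ == c)[
            np.argmin(np.linalg.norm(approx[km.labels_ == c]
                                     - km.cluster_centers_[c], axis=1))]
        for c in range(n_clusters)])
    # exact model is run only for the medoids; their errors drive the correction
    err = np.array([exact_fn(i) - approx[i] for i in medoid_idx])
    if mode == "local":
        # Local Error Model: each realization inherits its cluster's medoid error
        return approx + err[km.labels_]
    # "global": weight all medoid errors by distance in response space
    d = np.linalg.norm(approx[:, None, :] - approx[medoid_idx][None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12)
    w /= w.sum(axis=1, keepdims=True)
    return approx + w @ err
```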
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
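As a concrete reference point, GRNN prediction amounts to Nadaraya-Watson kernel regression with a Gaussian kernel whose width sigma is the single parameter tuned by cross-validation in the isotropic case; this sketch uses illustrative names and parameters.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """GRNN = kernel-weighted average of training targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# Anisotropic variant: rescale each coordinate by its own sigma before the
# distance computation and tune the sigmas jointly by cross-validation.
```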
Abstract:
Objective: To implement a carotid sparing protocol using helical Tomotherapy (HT) in T1N0 squamous-cell laryngeal carcinoma. Materials/Methods: Between July and August 2010, 7 men with stage T1N0 laryngeal carcinoma were included in this study. Ages ranged from 47 to 74 years. Staging included endoscopic examination, CT scan and MRI when indicated. The planned irradiation dose was 70 Gy in 35 fractions over 7 weeks. A simple treatment planning algorithm for carotid sparing was used: maximum point dose of 35 Gy to the carotids, 30 Gy to the spinal cord, and 100% of the PTV volume to be covered with 95% of the prescribed dose. The carotid volume of interest extended to 1 cm above and below the PTV. Doses to the carotid arteries, critical organs, and planned target volume (PTV) were compared with our standard laryngeal irradiation protocol. Daily megavoltage scans were obtained before each fraction. When necessary, the Planned Adaptive software (TomoTherapy Inc., Madison, WI) was used to evaluate the need for re-planning, which was never indicated. Dose data were extracted using the VelocityAI software (Atlanta, GA), and data normalization and dose-volume histogram (DVH) interpolation were performed using the Igor Pro software (Portland, OR). Results: A significant (p < 0.05) carotid dose sparing compared to our standard protocol was achieved, with an average maximum point dose of 38.3 Gy (standard deviation [SD] 4.05 Gy) and an average mean dose of 18.59 Gy (SD 0.83 Gy). In all patients, 95% of the carotid volume received less than 28.4 Gy (SD 0.98 Gy). The average maximum point dose to the spinal cord was 25.8 Gy (SD 3.24 Gy). The PTV was fully covered with more than 95% of the prescribed dose for all patients, with an average maximum point dose of 74.1 Gy and an absolute maximum dose in a single patient of 75.2 Gy. To date, the clinical outcomes have been excellent. Three patients (42%) developed stage 1 mucositis that was conservatively managed, and all the patients presented mild to moderate dysphonia. All adverse effects resolved spontaneously in the month following the end of treatment. The early local control rate is 100% at 4-5 months of post-treatment follow-up. Conclusions: HT allows a clinically significant decrease of the carotid irradiation dose compared to standard irradiation protocols, with an acceptable spinal cord dose tradeoff. Moreover, this technique allows the PTV to be homogeneously covered with a curative irradiation dose. Daily control imaging adds a security margin, especially when working with high dose gradients. Further investigations and follow-up are underway to better evaluate the late clinical outcomes, especially the local control rate, late laryngeal and vascular toxicity, and the expected potential impact on cerebrovascular events.
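A hedged sketch of the cumulative dose-volume histogram (DVH) evaluated above, e.g. to read off the dose received by 95% of the carotid volume; `dose` is a hypothetical flat array of voxel doses for one structure, not the study's data.

```python
import numpy as np

def cumulative_dvh(dose, n_bins=200):
    """Return dose levels and the % of the structure volume receiving >= each level."""
    levels = np.linspace(0.0, dose.max(), n_bins)
    volume_pct = np.array([(dose >= d).mean() * 100.0 for d in levels])
    return levels, volume_pct
```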
Abstract:
Introduction. Development of the fetal brain surface with concomitant gyrification is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, the quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today, extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain. Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Inhomogeneity intensity correction and linear intensity normalization are then performed. A high-spatial-resolution image with an isotropic voxel size of 1.09 mm is created for each fetus as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a Level Set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009). The resulting white matter image is then binarized and further given as an input to the Freesurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain. Results. High-resolution images of the fetal brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects of age ranging from 29 to 32 gestational weeks. An example is depicted in Figure 1. Accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered to be very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2. Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, with the ultimate goal of providing a "movie" of normal gyral development. In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle and early but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism…). References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', IEEE Transactions on Medical Imaging, vol. 13, no. 9, pp. 1072-1081. Jiang, S. (2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered data interpolation with multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain tissue segmentation of fetal MR images', MICCAI.
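A minimal sketch of the Dice similarity index (DSI) used above to compare the automated and manual basal ganglia segmentations; the masks are assumed to be binary NumPy arrays of the same shape.

```python
import numpy as np

def dice(mask_a, mask_b):
    """DSI = 2|A ∩ B| / (|A| + |B|); values above 0.7 are commonly read as very good agreement."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```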
Abstract:
AIMS: To determine the economic burden pertaining to alcohol dependence in Europe. METHODS: Database searching was combined with grey literature searching to identify costs and resource use in Europe relating to alcohol dependence as defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) or the World Health Organisation's International Classification of Diseases (ICD-10). Searches combined MeSH headings for both economic terms and terms pertaining to alcohol dependence. Relevant outcomes included direct healthcare costs and indirect societal costs. Main resource use outcomes included hospitalization and drug costs. RESULTS: Compared with the number of studies of the burden of alcohol use disorders in general, relatively few focussed specifically on alcohol dependence. Twenty-two studies of variable quality were eligible for inclusion. The direct costs of alcohol dependence in Europe were substantial, the treatment costs for a single alcohol-dependent patient lying within the range €1591-€7702 per hospitalization and the annual total direct costs accounting for 0.04-0.31% of an individual country's gross domestic product (GDP). These costs were driven primarily by hospitalization; in contrast, the annual drug costs for alcohol dependence were low. The indirect costs were more substantial than the direct costs, accounting for up to 0.64% of GDP per country annually. Alcohol dependence may be more costly in terms of health costs per patient than alcohol abuse. CONCLUSIONS: This review confirms that alcohol dependence represents a significant burden for European healthcare systems and society. Difficulties in comparing across cost-of-illness studies in this disease area, however, prevent specific estimation of the economic burden.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data, but geology did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving windows methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing methods of declustering were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for the modeling of high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
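As one concrete example of the exploratory tools named above, the following sketch tunes the neighborhood size of a K-Nearest-Neighbors interpolator by cross-validation; scikit-learn stands in for the thesis code, and `coords` / `radon` are placeholder arrays rather than the Swiss data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

coords = np.random.rand(500, 2)           # placeholder (x, y) coordinates
radon = np.random.lognormal(size=500)     # placeholder indoor-radon measurements

# distance-weighted KNN; the neighborhood size is chosen by 10-fold CV
search = GridSearchCV(KNeighborsRegressor(weights="distance"),
                      {"n_neighbors": range(1, 40)},
                      cv=10, scoring="neg_mean_squared_error")
search.fit(coords, radon)
print(search.best_params_)                # CV-selected neighborhood size
```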