41 results for Cointegration analysis with structural breaks


Relevance:

100.00%

Publisher:

Abstract:

MAP5, a microtubule-associated protein characteristic of differentiating neurons, was studied in the developing visual cortex and corpus callosum of the cat. In juvenile cortical tissue, during the first month after birth, MAP5 is present as a protein doublet with molecular weights of 320 and 300 kDa, defined as MAP5a and MAP5b, respectively; MAP5a is the phosphorylated form. MAP5a begins to decrease two weeks after birth and is no longer detectable at the beginning of the second postnatal month; MAP5b also decreases after the second postnatal week, but more slowly, and it is still present in the adult. In the corpus callosum only MAP5a is present between birth and the end of the first postnatal month. Afterwards only MAP5b is present, but its concentration decreases more than 3-fold towards adulthood. Our immunocytochemical studies show MAP5 in somata, dendrites, and axonal processes of cortical neurons. In adult tissue it is very prominent in pyramidal cells of layer V. In the corpus callosum MAP5 is present in axons at all ages. There is strong evidence that MAP5a is located in axons, while MAP5b seems restricted to somata and dendrites until P28 but is found in callosal axons from P39 onwards. Biochemical experiments indicate that the phosphorylation state of MAP5 influences its association with structural components: after high-speed centrifugation of early postnatal brain tissue, MAP5a remains in the pellet fractions while most MAP5b is soluble. In conclusion, phosphorylation of MAP5 may regulate (1) its intracellular distribution within axons and dendrites, and (2) its ability to interact with other subcellular components.

Abstract:

IMPORTANCE: Owing to a considerable shift toward bioprosthesis implantation rather than mechanical valves, it is expected that patients will increasingly present with degenerated bioprostheses in the next few years. Transcatheter aortic valve-in-valve implantation is a less invasive approach for patients with structural valve deterioration; however, a comprehensive evaluation of survival after the procedure has not yet been performed. OBJECTIVE: To determine the survival of patients after transcatheter valve-in-valve implantation inside failed surgical bioprosthetic valves. DESIGN, SETTING, AND PARTICIPANTS: Correlates for survival were evaluated using a multinational valve-in-valve registry that included 459 patients with degenerated bioprosthetic valves undergoing valve-in-valve implantation between 2007 and May 2013 in 55 centers (mean age, 77.6 [SD, 9.8] years; 56% men; median Society of Thoracic Surgeons mortality prediction score, 9.8% [interquartile range, 7.7%-16%]). Surgical valves were classified as small (≤21 mm; 29.7%), intermediate (>21 and <25 mm; 39.3%), and large (≥25 mm; 31%). Implanted devices included both balloon- and self-expandable valves. MAIN OUTCOMES AND MEASURES: Survival, stroke, and New York Heart Association functional class. RESULTS: Modes of bioprosthesis failure were stenosis (n = 181 [39.4%]), regurgitation (n = 139 [30.3%]), and combined (n = 139 [30.3%]). The stenosis group had a higher percentage of small valves (37% vs 20.9% and 26.6% in the regurgitation and combined groups, respectively; P = .005). Within 1 month following valve-in-valve implantation, 35 (7.6%) patients died, 8 (1.7%) had major stroke, and 313 (92.6%) of surviving patients had good functional status (New York Heart Association class I/II). The overall 1-year Kaplan-Meier survival rate was 83.2% (95% CI, 80.8%-84.7%; 62 death events; 228 survivors). 
Patients in the stenosis group had worse 1-year survival (76.6%; 95% CI, 68.9%-83.1%; 34 deaths; 86 survivors) than the regurgitation group (91.2%; 95% CI, 85.7%-96.7%; 10 deaths; 76 survivors) and the combined group (83.9%; 95% CI, 76.8%-91%; 18 deaths; 66 survivors) (P = .01). Similarly, patients with small valves had worse 1-year survival (74.8%; 95% CI, 66.2%-83.4%; 27 deaths; 57 survivors) than those with intermediate-sized valves (81.8%; 95% CI, 75.3%-88.3%; 26 deaths; 92 survivors) or large valves (93.3%; 95% CI, 85.7%-96.7%; 7 deaths; 73 survivors) (P = .001). Factors associated with mortality within 1 year included having a small surgical bioprosthesis (≤21 mm; hazard ratio, 2.04; 95% CI, 1.14-3.67; P = .02) and baseline stenosis (vs regurgitation; hazard ratio, 3.07; 95% CI, 1.33-7.08; P = .008). CONCLUSIONS AND RELEVANCE: In this registry of patients who underwent transcatheter valve-in-valve implantation for degenerated bioprosthetic aortic valves, overall 1-year survival was 83.2%. Survival was lower among patients with small bioprostheses and those with predominant surgical valve stenosis.
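The survival figures above are Kaplan-Meier estimates. The estimator itself is simple; the sketch below computes it on a small invented cohort. The follow-up times and event flags are illustrative only, not registry data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    events[i] is 1 for a death, 0 for censoring.
    Returns (time, survival) pairs at each death time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        # group all subjects with the same event time
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve

# Toy cohort: follow-up in months, 1 = death, 0 = censored alive.
times  = [1, 2, 3, 5, 6, 8, 10, 12, 12, 12]
events = [1, 0, 1, 0, 1, 0, 0,  0,  0,  0]
for t, s in kaplan_meier(times, events):
    print(f"month {t}: S(t) = {s:.3f}")
```

Each factor 1 − d/n multiplies in the conditional probability of surviving past a death time, which is how censored patients contribute to the denominator without counting as events.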

Abstract:

Hypertension is one of the most common complex genetic disorders. We previously described 38 single-nucleotide polymorphisms (SNPs) with suggestive association with hypertension in Japanese individuals. In this study we extend those findings by analyzing a large sample of Japanese individuals (n=14 105) for the most strongly associated SNPs. We also conducted replication analyses in Japanese individuals of susceptibility loci for hypertension recently identified in genome-wide association studies of populations of European ancestry. Association analysis revealed significant association of the ATP2B1 rs2070759 polymorphism with hypertension (P=5.3×10⁻⁵; allelic odds ratio: 1.17 [95% CI: 1.09 to 1.26]). Additional SNPs in ATP2B1 were subsequently genotyped, and the most significant association was with rs11105378 (odds ratio: 1.31 [95% CI: 1.21 to 1.42]; P=4.1×10⁻¹¹). The association of rs11105378 with hypertension was cross-validated by replication analysis with the Global Blood Pressure Genetics consortium data set (odds ratio: 1.13 [95% CI: 1.05 to 1.21]; P=5.9×10⁻⁴). Mean adjusted systolic blood pressure was highly significantly associated with the same SNP in a meta-analysis with individuals of European descent (P=1.4×10⁻¹⁸). ATP2B1 mRNA expression levels in umbilical artery smooth muscle cells differed significantly among rs11105378 genotypes. Seven SNPs discovered in published genome-wide association studies were also genotyped in the Japanese population. In the combined analysis of the 3 replicated loci, FGF5 rs1458038, CYP17A1 rs1004467, and CSK rs1378942, the odds ratio of the highest-risk group was 2.27 (95% CI: 1.65 to 3.12; P=4.6×10⁻⁷) compared with the lowest-risk group. In summary, this study confirmed that common genetic variation in ATP2B1, as well as in FGF5, CYP17A1, and CSK, is associated with blood pressure levels and risk of hypertension.
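For readers unfamiliar with the reported statistic, an allelic odds ratio and its Wald 95% CI can be computed from a 2×2 table of allele counts in cases and controls. The counts below are invented for illustration, not the study's genotype data.

```python
import math

def allelic_odds_ratio(a_case, b_case, a_ctrl, b_ctrl):
    """Odds ratio for risk allele A vs allele B, with a Wald 95% CI
    computed on the log scale from the four allele counts."""
    or_ = (a_case * b_ctrl) / (b_case * a_ctrl)
    se = math.sqrt(1/a_case + 1/b_case + 1/a_ctrl + 1/b_ctrl)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented allele counts (cases carry the risk allele more often):
or_, lo, hi = allelic_odds_ratio(a_case=1300, b_case=2700,
                                 a_ctrl=1000, b_ctrl=3000)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The standard error of log(OR) is the square root of the sum of reciprocal cell counts, which is why large samples such as n=14 105 yield the narrow intervals quoted above.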

Abstract:

Executive Summary. The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and from broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from fuzzy set theory to the optimal portfolio selection problem, while the third part is, to the best of our knowledge, the first empirical application of general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the proposed model from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the resulting realized returns have better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed aggregation of performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, that is, the sequence of expected shortfalls over a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were led to conclude that the proposed algorithm produces a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the euro. That counterintuitive result suggests an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory on the impact of integration on interest rates.
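The two dominance checks described in Chapter 2 can be sketched in a few lines: first-order dominance compares empirical CDFs pointwise, and second-order dominance compares absolute Lorenz curves (cumulative expected shortfalls). The return series, names, and quantile grid below are invented for illustration; the thesis's actual data and performance measures are not reproduced here.

```python
def ecdf(sample, x):
    """Empirical CDF of the sample evaluated at x."""
    return sum(v <= x for v in sample) / len(sample)

def first_order_dominates(a, b, grid):
    """A FSD-dominates B if F_A(x) <= F_B(x) at every grid point."""
    return all(ecdf(a, x) <= ecdf(b, x) for x in grid)

def absolute_lorenz(sample, quantiles):
    """Absolute Lorenz curve: for each level p, the sum of the worst
    ceil(p*n) returns divided by n (~ p times the expected shortfall)."""
    s = sorted(sample)
    out = []
    for p in quantiles:
        k = max(1, int(round(p * len(s))))
        out.append(sum(s[:k]) / len(s))
    return out

def second_order_dominates(a, b, quantiles):
    """A SSD-dominates B if A's absolute Lorenz curve lies above B's."""
    la, lb = absolute_lorenz(a, quantiles), absolute_lorenz(b, quantiles)
    return all(x >= y for x, y in zip(la, lb))

aggregated = [-0.01, 0.00, 0.01, 0.02, 0.03]   # invented monthly returns
single     = [-0.04, -0.02, 0.01, 0.02, 0.05]  # invented comparison series
qs = [0.2, 0.4, 0.6, 0.8, 1.0]
print(second_order_dominates(aggregated, single, qs))
```

Note that the `aggregated` series here dominates `single` at second order (its shortfalls are uniformly milder) without dominating at first order, which is exactly why both orderings are checked separately.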

Abstract:

The interaction of tunneling with groundwater is a problem from both an environmental and an engineering point of view. Tunnel drilling may cause a drawdown of piezometric levels, and water inflows into tunnels may cause problems during excavation. While the influence of tunneling on regional groundwater systems can be adequately predicted in porous media using analytical solutions, such an approach is difficult to apply in fractured rocks. Numerical solutions are preferable, and various conceptual approaches have been proposed to describe and model groundwater flow through fractured rock masses, ranging from equivalent continuum models to discrete fracture network simulation models. Their application, however, requires extensive preliminary investigation of the behavior of the groundwater system based on hydrochemical and structural data. To study large-scale flow systems in the fractured rocks of mountainous terrains, a comprehensive study was conducted in southern Switzerland, using as case studies two infrastructures currently under construction: (i) the Monte Ceneri base railway tunnel (Ticino) and (ii) the San Fedele highway tunnel (Roveredo, Graubünden). The approach chosen in this study combines the temporal and spatial variation of geochemical and geophysical measurements. About 60 localities, both at the surface and in the underlying tunnels, were monitored in time and space for more than one year. In a first phase, the project focused on the collection of hydrochemical and structural data. A number of springs selected in the area surrounding the infrastructures were monitored for discharge, electrical conductivity, pH, and temperature. Water samples (springs, tunnel inflows, and rain) were taken for isotopic analysis; in particular, the stable isotope composition (δ2H, δ18O values) can reflect the origin of the water, because of spatial (recharge altitude, topography, etc.)
and temporal (seasonal) effects on precipitation, which in turn strongly influence the isotopic composition of groundwater. Tunnel inflows in the accessible parts of the tunnels were also sampled and, where possible, monitored over time. Noble-gas concentrations and their isotope ratios were used at selected locations to better understand the origin and circulation of the groundwater. In addition, electrical resistivity and VLF-type electromagnetic surveys were performed to identify water-bearing fractures and/or weathered zones that could be intersected at depth during tunnel construction. The main goal of this work was to demonstrate that these hydrogeological data and geophysical methods, combined with structural and hydrogeological information, can be successfully used to develop hydrogeological conceptual models of groundwater flow in regions to be exploited for tunnels. The main results of the project are: (i) the successful application of electrical resistivity and VLF electromagnetic surveys to assess water-bearing zones during tunnel drilling; (ii) verification of the usefulness of noble-gas, major-ion, and stable-isotope compositions as proxies for the detection of faults and for understanding the origin of the groundwater and its flow regimes (direct rain-water infiltration versus groundwater of long residence time); and (iii) a convincing test of the combined geochemical and geophysical approach to assess and predict the vulnerability of springs to tunnel drilling.

"The NLFA (Nouvelle Ligne Ferroviaire à travers les Alpes, the new rail link through the Alps) on the Gotthard axis is the largest construction project in Switzerland. By building the new Gotthard line, Switzerland is realizing one of the largest environmental protection projects in Europe." This sentence, which introduces the AlpTransit project, is particularly eloquent in explaining the usefulness of the new trans-European rail lines for sustainable development. However, like all large infrastructures, the construction of new tunnels has unavoidable environmental impacts. In particular, the drainage of groundwater by a tunnel can cause a drawdown of piezometric levels. Moreover, water flowing into the tunnel often leads to engineering problems: large inflows can complicate the excavation phases, delaying progress and, in the worst case, endangering the safety of the workers. Finally, water infiltration can be a serious problem during tunnel operation. From a scientific point of view, access to underground infrastructure represents a unique opportunity to obtain geological information at depth and to sample waters that are otherwise inaccessible. In this work we used a multidisciplinary approach that integrates hydrogeochemical measurements on surface waters with indirect geophysical investigations, namely electrical resistivity tomography (ERT) and VLF-type electromagnetic measurements. The full study was carried out in Italian-speaking Switzerland, based on two large infrastructures currently under construction: the Monte Ceneri base railway tunnel, part of the above-mentioned AlpTransit project and located entirely in the canton of Ticino, and the San Fedele highway tunnel at Roveredo in the canton of Graubünden. The main objective was to show how the two approaches, geophysical and geochemical, can be integrated to answer the question of the possible effects of drainage caused by underground works. Access to the tunnels allowed an adequate validation of the surveys carried out, confirming the proposed hypotheses in each case. To this end, we made about 50 geophysical profiles (28 two-dimensional electrical-imaging and 23 electromagnetic) in the zones possibly influenced by the tunnels, in order to identify the fractures and discontinuities in which groundwater may circulate. In addition, waters were sampled at 60 localities at the surface and in the underlying tunnels; the monthly monitoring lasted more than one year. We measured the main physical and chemical parameters: discharge, electrical conductivity, pH, and temperature. Water samples were also taken for monthly analysis of the stable isotopes of hydrogen and oxygen (δ2H, δ18O). Together with the concentrations of noble gases dissolved in the waters and their isotope ratios, measured in selected cases, these analyses made it possible to explain the origin of the different groundwaters, their modes of recharge, the presence of possible mixing phenomena and, in general, the groundwater circulation in the subsurface. Although this work constitutes only a partial answer to a very complex question, it achieved several important objectives. First, we successfully tested the applicability of indirect geophysical methods (ERT and VLF electromagnetics) to predict the presence of groundwater in fractured rock masses. Furthermore, we demonstrated the usefulness of noble-gas, stable-isotope, and major-ion analyses for detecting faults and for understanding the origin of groundwater (rain water infiltrating from above or water rising from depth). In conclusion, this research showed that integrating this geophysical and geochemical information allows the development of appropriate conceptual models explaining how groundwater circulates. Such models make it possible to forecast water inflows into tunnels and to predict the vulnerability of springs and other water resources during tunnel construction.
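The altitude effect invoked above is commonly applied as a linear relation between δ18O and elevation, which can be inverted to estimate a spring's mean recharge altitude. The sketch below does exactly that; the gradient and sea-level intercept are assumed illustrative values (typical Alpine gradients are a few tenths of a permil per 100 m), not calibrations from this study.

```python
def recharge_altitude(d18o_spring, d18o_gradient=-0.25, d18o_sea_level=-6.0):
    """Invert a linear altitude-effect model
         d18O(z) = d18o_sea_level + d18o_gradient * (z / 100 m)
    to estimate mean recharge altitude z in meters.
    Both parameters are assumed values for illustration."""
    return 100.0 * (d18o_spring - d18o_sea_level) / d18o_gradient

# A hypothetical spring at -11.0 permil plots at 2000 m under these assumptions:
z = recharge_altitude(-11.0)
print(f"estimated mean recharge altitude: {z:.0f} m")
```

In practice the gradient and intercept would be fitted to local precipitation samples collected at known elevations, as in the monthly rain sampling described above.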

Abstract:

The article is concerned with the formal definition of a largely unnoticed factor in narrative structure. Based on the assumptions (1) that the semantics of a written text depend, among other factors, directly on its visual alignment in space, (2) that the formal structure of a text has to meet that of its spatial presentation, and (3) that these assumptions hold true also for narrative texts (which, however, in modern times typically conceal their spatial dimensions behind a low-key linear layout), it is argued that, however low-key, the expected material shape of a given narrative shapes the author's configuration of its plot. The 'implied book' thus denotes an author's historically assumable, not necessarily conscious, idea of how his text, still in the process of creation, will be dimensionally presented and, under those circumstances, visually absorbed. Assuming that an author's knowledge of this later (potentially) substantiated material form influences the composition, the implied book is to be understood as a text-genetically determined, structuring moment of the text. Historically reconstructed, it thus serves the methodical analysis of the structural characteristics of a completed text.

Abstract:

Mice with homologous disruption of the gene coding for the ligand-binding chain of the interferon (IFN) gamma receptor and derived from a strain genetically resistant to infection with Leishmania major have been used to study further the role of this cytokine in the differentiation of functional CD4+ T cell subsets in vivo and resistance to infection. Wild-type 129/Sv/Ev mice are resistant to infection with this parasite, developing only small lesions, which resolve spontaneously within 6 wk. In contrast, mice lacking the IFN-gamma receptor develop large, progressing lesions. After infection, lymph nodes (LN) and spleens from both wild-type and knockout mice showed an expansion of CD4+ cells producing IFN-gamma as revealed by measuring IFN-gamma in supernatants of specifically stimulated CD4+ T cells, by enumerating IFN-gamma-producing T cells, and by Northern blot analysis of IFN-gamma transcripts. No biologically active interleukin (IL) 4 was detected in supernatants of in vitro-stimulated LN or spleen cells from infected wild-type or deficient mice. Reverse transcription polymerase chain reaction analysis with primers specific for IL-4 showed similar IL-4 message levels in LN from both types of mice. The IL-4 message levels observed were comparable to those found in similarly infected C57BL/6 mice and significantly lower than the levels found in BALB/c mice. Anti-IFN-gamma treatment of both types of mice failed to alter the pattern of cytokines produced after infection. These data show that even in the absence of IFN-gamma receptors, T helper cell (Th) 1-type responses still develop in genetically resistant mice with no evidence for the expansion of Th2 cells.

Abstract:

Internet governance is a recent issue in global politics. However, over the years it has become a major economic and political issue, and it has gained particular prominence in recent months, appearing regularly in the news. Against this background, this research outlines the history of Internet governance from its emergence as a political issue in the 1980s to the end of the World Summit on the Information Society (WSIS) in 2005.
Rather than focusing on one or the other institution involved in Internet governance, this research analyses the emergence and historical evolution of a space of struggle affecting a growing number of different actors. This evolution is described through the analysis of the dialectical relation between elites and non-elites and through the struggle around the definition of Internet governance. The thesis explores the question of how the relations among the elites of Internet governance and between these elites and non-elites explain the emergence, the evolution, and the structuration of a relatively autonomous field of world politics centred around Internet governance. Against dominant realist and liberal perspectives, this research draws upon a cross-fertilisation of heterodox international political economy and international political sociology. This approach focuses on concepts such as field, elites and hegemony. The concept of field, as developed by Bourdieu, is increasingly used in International Relations to build a differentiated analysis of globalisation and to describe the emergence of transnational spaces of struggle and domination. Elite sociology allows for a pragmatic actor-centred analysis of the issue of power in the globalisation process. This research particularly draws on Wright Mills's concept of the power elite in order to explore the unification of different elites around shared projects. Finally, this thesis uses the Neo-Gramscian concept of hegemony in order to study both the consensual dimension of domination and the prospect of change contained in any international order. Through the analysis of the documents produced within the analysed period, and through the creation of databases of networks of actors, this research focuses on the debates that followed the commercialisation of the Internet throughout the 1990s and during the WSIS.
The first time period led to the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998. This creation resulted from consensus-building among the dominant discourses of the time. It also resulted from the coalition of interests among an emerging power elite. However, this institutionalisation of Internet governance around the ICANN excluded a number of actors and discourses that resisted this mode of governance. The WSIS became the institutional framework within which the governance system was questioned by some excluded states, scholars, NGOs and intergovernmental organisations. The confrontation between the power elite and counter-elites during the WSIS triggered a reconfiguration of the power elite as well as a re-definition of the boundaries of the field. A new hegemonic project emerged around discursive elements such as the idea of multistakeholderism and institutional elements such as the Internet Governance Forum. The relative success of the hegemonic project allowed for a certain stability within the field and an acceptance by most non-elites of the new order. It is only recently that this order began to be questioned by the emerging powers of Internet governance. This research provides three main contributions to the scientific debate. On the theoretical level, it contributes to the emergence of a dialogue between International Political Economy and International Political Sociology perspectives in order to analyse both the structural trends of the globalisation process and the located practices of actors in a given issue-area. It notably stresses the contribution of concepts such as field and power elite and their compatibility with a Neo-Gramscian framework to analyse hegemony. On the methodological level, this perspective relies on the use of mixed methods, combining qualitative content analysis with social network analysis of actors and statements.
Finally, on the empirical level, this research provides an original perspective on Internet governance. It stresses the historical dimension of current Internet governance arrangements. It also criticises the notion of multistakeholderism and focuses instead on power dynamics and the relation between Internet governance and globalisation.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In the cerebral cortex, the activity levels of neuronal populations are continuously fluctuating. When neuronal activity, as measured using functional MRI (fMRI), is temporally coherent across 2 populations, those populations are said to be functionally connected. Functional connectivity has previously been shown to correlate with structural (anatomical) connectivity patterns at an aggregate level. In the present study we investigate, with the aid of computational modeling, whether systems-level properties of functional networks (including their spatial statistics and their persistence across time) can be accounted for by properties of the underlying anatomical network. We measured resting state functional connectivity (using fMRI) and structural connectivity (using diffusion spectrum imaging tractography) in the same individuals at high resolution. Structural connectivity then provided the couplings for a model of macroscopic cortical dynamics. In both model and data, we observed (i) that strong functional connections commonly exist between regions with no direct structural connection, rendering the inference of structural connectivity from functional connectivity impractical; (ii) that indirect connections and interregional distance accounted for some of the variance in functional connectivity that was unexplained by direct structural connectivity; and (iii) that resting-state functional connectivity exhibits variability within and across both scanning sessions and model runs. These empirical and modeling results demonstrate that although resting state functional connectivity is variable and is frequently present between regions without direct structural linkage, its strength, persistence, and spatial statistics are nevertheless constrained by the large-scale anatomical structure of the human cerebral cortex.
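The functional-connectivity measure described above (temporal coherence between two regions' activity) is commonly quantified as the Pearson correlation between their time series. The following minimal sketch illustrates that computation; the region names and signal values are hypothetical, not data from the study.

```python
# Functional connectivity as pairwise Pearson correlation of regional
# time series. Region names and values below are illustrative only.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def functional_connectivity(series):
    """Region-by-region correlation matrix from {region: time series}."""
    names = sorted(series)
    return {(a, b): pearson(series[a], series[b]) for a in names for b in names}

# Hypothetical BOLD-like signals for three regions.
ts = {
    "V1": [0.1, 0.5, 0.3, 0.9, 0.2, 0.7],
    "V2": [0.2, 0.6, 0.2, 0.8, 0.3, 0.6],   # fluctuates coherently with V1
    "M1": [0.9, 0.1, 0.8, 0.2, 0.7, 0.1],   # anti-correlated pattern
}
fc = functional_connectivity(ts)
```

In this toy example V1 and V2 come out strongly functionally connected while V1 and M1 are anti-correlated; in real data, as the abstract notes, such strong correlations can also arise between regions with no direct structural connection.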

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This thesis focuses on the social-psychological factors that help coping with structural disadvantage, and specifically on the role of cohesive ingroups and the sense of connectedness and efficacy they entail in this process. It aims to complement existing group-based models of coping that are grounded in a categorization perspective to groups and consequently focus exclusively on the large-scale categories made salient in intergroup contexts of comparisons. The dissertation accomplishes this aim through a reconsideration of between-persons relational interdependence as a sufficient and independent antecedent of a sense of groupness, and the benefits that a sense of group connectedness in one's direct environment, regardless of the categorical or relational basis of groupness, might have in the everyday struggles of disadvantaged group members. The three empirical papers aim to validate this approach, outlined in the theoretical introduction, by testing derived hypotheses. They are based on data collected with youth populations (15-30) from three institutions in French-speaking Switzerland within the context of a larger project on youth transitions. Methods of data collection are paper-pencil questionnaires and in-depth interviews with a selected sub-sample of participants. The key argument of the first paper is that members of socially disadvantaged categories face higher barriers to their life project and that a general sense of connectedness, either based on categorical identities or other proximal groups and relations, mitigates the feeling of powerlessness associated with this experience. The second paper develops and tests a model that defines individual needs satisfaction as an antecedent of self-group bonds, and the efficacy beliefs derived from these intragroup bonds as the mechanism underlying the role of ingroups in coping.
The third paper highlights the complexities that might be associated with the construction of a sense of groupness directly from intergroup comparisons and categorization-based disadvantage, and points toward a more nuanced understanding of the processes underlying the emergence of groupness out of the situation of structural disadvantage. Overall, the findings confirm the central role of ingroups in coping with structural disadvantage and the importance of an understanding of groupness and its role that goes beyond the dominant focus on intergroup contexts and categorization processes.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Objectives: We present the retrospective analysis of a single-institution experience with radiosurgery (RS) for brain metastasis (BM) with Gamma Knife (GK) and Linac. Methods: From July 2010 to July 2012, 28 patients (with 83 lesions) had RS with GK and 35 patients (with 47 lesions) with Linac. The primary outcome was local progression-free survival (LPFS). The secondary outcome was overall survival (OS). Apart from the standard statistical analysis, we included a Cox regression model with shared frailty to model the within-patient correlation (preliminary evaluation showed a significant frailty effect, meaning that the correlation within patients could not be ignored). Results: The mean follow-up period was 11.7 months (median 7.9, 1.7-22.7) for GK and 18.1 months (median 17, 7.5-28.7) for Linac. The median number of lesions per patient was 2.5 (1-9) in GK compared with 1 (1-3) in Linac. There were more radioresistant lesions (melanoma) and more lesions located in functional areas in the GK group. The median dose was 24 Gy (GK) compared with 20 Gy (Linac). The actuarial LPFS rates were as follows: for GK at 3, 6, 9, 12, and 17 months, 96.96, 96.96, 96.96, 88.1, and 81.5%, respectively, remaining stable until 32 months; for Linac at 3, 6, 12, 17, 24, and 33 months, 91.5, 91.5, 91.5, 79.9, 55.5, and 17.1%, respectively (p = 0.03, chi-square test). After the Cox regression analysis with shared frailty, the p-value was not statistically significant between groups. The median overall survival was 9.7 months for the GK group and 23.6 months for the Linac group. Uni- and multivariate analyses showed that a lower GPA score and noncontrolled systemic status were associated with lower OS. Cox regression analysis adjusting for these two parameters showed comparable OS rates. Conclusions: In this comparative report between GK and Linac, preliminary analysis showed that more difficult cases are treated by GK, with patients harboring more lesions, radioresistant tumors, and lesions in highly functional locations.
The groups thus look very heterogeneous at baseline. After the Cox frailty model, the LPFS rates seemed very similar (p > 0.05). The OS was also similar after adjusting for systemic status and GPA score (p > 0.05). The technical reasons for choosing GK instead of Linac were anatomical locations in highly functional areas, histology, technical limitations of Linac movements (especially for lower posterior fossa locations), or closeness of multiple lesions to highly functional areas, precluding optimal dosimetry with Linac.
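The actuarial LPFS rates reported above are produced by the Kaplan-Meier product-limit estimator: the survival curve drops only at observed event (progression) times, while censored patients simply leave the risk set without affecting the curve. A minimal sketch of the estimator, using hypothetical follow-up times rather than the study's data:

```python
# Kaplan-Meier product-limit estimator for progression-free survival.
# Follow-up times (months) and event flags below are hypothetical.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time.

    times  : follow-up duration in months
    events : 1 = local progression observed, 0 = censored
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        # Group tied follow-up times, counting events (d) and total exits (n).
        while i < len(data) and data[i][0] == t:
            n += 1
            d += data[i][1]
            i += 1
        if d:                     # the curve steps down only at event times
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= n              # events and censorings both leave the risk set
    return curve

# Hypothetical cohort: 6 lesions, progression at 3 and 9 months,
# the rest censored at last follow-up.
curve = kaplan_meier([3, 5, 7, 9, 12, 17], [1, 0, 0, 1, 0, 0])
```

Here the estimated LPFS drops to 5/6 at 3 months and to 5/9 at 9 months; note how the censorings at 5 and 7 months shrink the risk set, making the second drop larger than a naive proportion would suggest.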