967 results for "Odd third order intensity parameters"


Relevance: 30.00%

Abstract:

In this paper, a method for enhancing current QoS routing methods by means of QoS protection is presented. In an MPLS network, the segments (links) to be protected are predefined, and an LSP request involves, apart from establishing a working path, creating a specific type of backup path (local, reverse or global). Different QoS parameters, such as network load balancing, resource optimization and minimization of LSP request rejection, should be considered. QoS protection is defined as a function of QoS parameters such as packet loss, restoration time and resource optimization. A framework to add QoS protection to many of the current QoS routing algorithms is introduced. A backup decision module to select the most suitable protection method is formulated, and different case studies are analyzed.
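
A minimal sketch of what such a backup decision module could look like, assuming a simple weighted scoring of the three protection types against the QoS criteria named above. The class, weights and metric values are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a backup-path decision rule: score each protection
# type against weighted QoS criteria and pick the best. Weights, metrics and
# example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProtectionMethod:
    name: str               # "local", "reverse" or "global"
    packet_loss: float      # expected packet loss during switchover (normalised 0..1)
    restoration_ms: float   # expected restoration time in milliseconds
    extra_bandwidth: float  # backup resources consumed (normalised 0..1)

def protection_score(m: ProtectionMethod,
                     w_loss: float = 0.4,
                     w_time: float = 0.4,
                     w_res: float = 0.2) -> float:
    """Lower is better: weighted combination of the three QoS criteria."""
    return (w_loss * m.packet_loss
            + w_time * m.restoration_ms / 100.0   # crude scaling of ms to 0..1
            + w_res * m.extra_bandwidth)

def choose_backup(methods: list[ProtectionMethod]) -> ProtectionMethod:
    return min(methods, key=protection_score)

if __name__ == "__main__":
    candidates = [
        ProtectionMethod("local",   packet_loss=0.05, restoration_ms=10, extra_bandwidth=0.8),
        ProtectionMethod("reverse", packet_loss=0.10, restoration_ms=30, extra_bandwidth=0.5),
        ProtectionMethod("global",  packet_loss=0.20, restoration_ms=80, extra_bandwidth=0.3),
    ]
    print("selected:", choose_backup(candidates).name)
```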

Relevance: 30.00%

Abstract:

Trypanosoma cruzi infection has a large public health impact in Latin American countries. Although the transmission rates via blood transfusions and insect vectors have declined sharply in the past 20 years due to policies of the Southern Cone countries, a large number of people are still at risk for infection. Currently, no accepted experimental model or description of the clinical signs that occur during the course of acute murine infection is available. The aim of this work was to use non-invasive methods to evaluate the clinical signs of BALB/c mice infected with the Y strain of T. cruzi. The infected mice displayed evident clinical changes beginning in the third week of infection. The mice were evaluated based on physical characteristics, spontaneous activity, exploratory behaviour and physiological alterations. We hope that the results presented in this report provide parameters that complement the effective monitoring of trypanocidal treatment and other interventions used to treat experimental Chagas disease.

Relevance: 30.00%

Abstract:

An aeropalynological study was carried out in the atmosphere of Estepona, a very popular tourist resort situated on the Costa del Sol (southern Spain), based on data obtained during a three-year air-monitoring programme (March 1995 to March 1998) using a volumetric pollen trap. The 34 taxa that reached a 10-day mean air pollen concentration equal to or greater than 1 grain of pollen/m(3) of air are reflected in the calendar. The first 10 taxa, in order of abundance, were Cupressaceae, Olea europaea, Quercus, Poaceae, Urticaceae, Plantago, Pinus, Chenopodiaceae-Amaranthaceae, Ericaceae and Castanea, the first 3 of which accounted for approximately 56% of the annual total pollen count. The greatest diversity of pollen types occurred during spring, while the highest pollen concentrations were reached from February to June, when more than 80% of the annual total pollen was registered. The lowest concentrations were obtained during January, August and September. The annual quantity of pollen collected, the intensity and the dates on which the maximum peaks were recorded differed for the 3 years studied, which can be explained by reference to various meteorological parameters, especially rainfall and temperature. The pollen spectrum of the calendar is typically Mediterranean and similar to those of nearby localities, with many pollen types represented and long tails indicating long flowering periods.
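
A hedged sketch of the kind of aggregation behind such a pollen calendar: 10-day mean concentrations per taxon, keeping taxa whose mean reaches 1 grain/m³ as the abstract describes. The daily values below are invented, not the Estepona data.

```python
# Illustrative only: compute 10-day mean pollen concentrations per taxon and
# apply the inclusion rule (>= 1 grain/m^3) mentioned in the abstract.
import numpy as np

def ten_day_means(daily_counts: np.ndarray) -> np.ndarray:
    """daily_counts: 1D array of daily pollen concentrations (grains/m^3)."""
    n = len(daily_counts) // 10 * 10          # drop an incomplete trailing window
    return daily_counts[:n].reshape(-1, 10).mean(axis=1)

rng = np.random.default_rng(0)
taxa = {"Cupressaceae": rng.gamma(2.0, 5.0, 60),   # 60 synthetic days
        "Castanea":     rng.gamma(1.0, 0.4, 60)}
for name, series in taxa.items():
    means = ten_day_means(series)
    if means.max() >= 1.0:                     # inclusion rule from the abstract
        print(name, means.round(1))
```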

Relevance: 30.00%

Abstract:

Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual interactions of each gene are believed to play a key role in the stability of the structure. With advances in biology, efforts have been made to develop update functions in Boolean models that incorporate recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell networks, as topological support for our system. On these structures, we replace the original random update functions with a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos through the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, which also allows regimes to be discriminated quantitatively. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful for guiding experimental research. The update function confers additional realism to the model, while reducing the complexity and solution space, thus making it easier to investigate.
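
A minimal sketch, assuming one plausible form of a threshold-based Boolean update in which each incoming interaction contributes +1 (activation) or -1 (repression). The tie-breaking convention and the threshold value are assumptions and may differ from the function actually used in the study.

```python
# Assumed form of a threshold-based Boolean update for a regulatory network.
import numpy as np

def threshold_update(state: np.ndarray, weights: np.ndarray, theta: float = 0.0) -> np.ndarray:
    """One synchronous update of a Boolean network.

    state   : 0/1 vector of current gene states (length n)
    weights : n x n matrix, weights[i, j] = +1 if gene j activates gene i,
              -1 if it represses it, 0 if there is no interaction
    theta   : activation threshold
    """
    drive = weights @ state                      # net regulatory input per gene
    new_state = (drive > theta).astype(int)
    unchanged = drive == theta                   # one common convention:
    new_state[unchanged] = state[unchanged]      # genes at threshold keep their state
    return new_state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    W = rng.choice([-1, 0, 1], size=(n, n))      # random signed interaction matrix
    s = rng.integers(0, 2, size=n)
    for _ in range(4):
        s = threshold_update(s, W)
        print(s)
```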

Relevance: 30.00%

Abstract:

Using optimized voxel-based morphometry, we performed grey matter density analyses on 59 age-, sex- and intelligence-matched young adults with three distinct, progressive levels of musical training intensity or expertise. Structural brain adaptations in musicians have been repeatedly demonstrated in areas involved in auditory perception and motor skills. However, musical activities are not confined to auditory perception and motor performance, but are entangled with higher-order cognitive processes. In consequence, neuronal systems involved in such higher-order processing may also be shaped by experience-driven plasticity. We modelled expertise as a three-level regressor to study possible linear relationships between expertise and grey matter density. The key finding of this study resides in a functional dissimilarity between areas exhibiting an increase versus a decrease of grey matter as a function of musical expertise. Grey matter density increased with expertise in areas known for their involvement in higher-order cognitive processing: right fusiform gyrus (visual pattern recognition), right mid orbital gyrus (tonal sensitivity), left inferior frontal gyrus (syntactic processing, executive function, working memory), left intraparietal sulcus (visuo-motor coordination) and bilateral posterior cerebellar Crus II (executive function, working memory), as well as in an auditory processing area, the left Heschl's gyrus. Conversely, grey matter density decreased with expertise in bilateral perirolandic and striatal areas that are related to sensorimotor function, possibly reflecting high automation of motor skills. Moreover, a multiple regression analysis showed that grey matter density in the right mid orbital area and the inferior frontal gyrus predicted accuracy in detecting fine-grained incongruities in tonal music.
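
An illustrative sketch of how an expertise level coded as a three-level regressor could be related to regional grey matter density by linear regression. The covariate, coding and data below are invented assumptions, not the study's actual design matrix or VBM pipeline.

```python
# Illustrative only: OLS of grey matter density at one region on a
# three-level expertise regressor (1 = non-musician, 2 = amateur, 3 = expert),
# with an assumed nuisance covariate, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 59                                            # matched participants, as in the study
expertise = rng.choice([1, 2, 3], size=n)         # three-level regressor
age = rng.normal(25, 3, size=n)                   # nuisance covariate (assumed)
gm_density = 0.02 * expertise + rng.normal(0, 0.05, size=n)  # synthetic outcome

X = sm.add_constant(np.column_stack([expertise, age]))
model = sm.OLS(gm_density, X).fit()
print(model.params)   # slope on the expertise column tests a linear trend
```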

Relevance: 30.00%

Abstract:

BACKGROUND: The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis, and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling and coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined. RESULTS: We present a new 2D and 3D image analysis system, implemented as a user-friendly software package, that permits versatile, fast and robust analysis of motile, fluorescently labeled structures in large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time lapse) multi-channel images. It enables tracking of two structures (for example SPBs) in two different image channels. For 2D or 3D static images, the locations of the structures are identified, and then intensity values are extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be established, one representing their migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called "RodCellJ", allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry. CONCLUSIONS: "RodCellJ" is freely available to the community as a package of several ImageJ plugins for simultaneously analyzing the behavior of a large number of rod-shaped cells in an extensive manner. The integration of different image-processing techniques in a single package, together with the development of novel algorithms, not only speeds up the analysis compared with existing tools but also improves accuracy. Its utility was demonstrated on both 2D and 3D static and dynamic images to study the Septation Initiation Network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescent-protein-labeled structures need to be analyzed in rod-shaped cells. AVAILABILITY: RodCellJ is freely available at http://bigwww.epfl.ch/algorithms.html (after acceptance of the publication).
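
A hedged sketch, not RodCellJ's actual implementation: given the tracked positions and background-corrected intensities of the two SPBs in one cell over time, compute their separation and an intensity-asymmetry ratio of the kind used to describe SIN signal asymmetry. All names and values are illustrative.

```python
# Illustrative downstream analysis on tracked SPB coordinates and intensities.
import numpy as np

def spb_metrics(pos_a, pos_b, int_a, int_b):
    """pos_a, pos_b: (T, 2) arrays of x, y positions per frame;
    int_a, int_b: (T,) arrays of background-corrected intensities."""
    pos_a, pos_b = np.asarray(pos_a, float), np.asarray(pos_b, float)
    int_a, int_b = np.asarray(int_a, float), np.asarray(int_b, float)
    separation = np.linalg.norm(pos_a - pos_b, axis=1)   # per-frame distance
    asymmetry = np.abs(int_a - int_b) / (int_a + int_b)  # 0 = symmetric, 1 = fully asymmetric
    return separation, asymmetry

if __name__ == "__main__":
    t = np.arange(10)
    a = np.column_stack([0.2 * t, np.zeros_like(t)])      # SPB A drifts right
    b = np.column_stack([-0.2 * t, np.zeros_like(t)])     # SPB B drifts left
    sep, asym = spb_metrics(a, b, 100 + 10 * t, 100 - 5 * t)
    print(sep.round(2))
    print(asym.round(2))
```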

Relevance: 30.00%

Abstract:

In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and AAPM recommend the use of in vivo dosimetry. It is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed over the past years. These methods are point-, line-, plane- or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not routinely applied to patient treatments yet. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect some set-up errors and interfraction geometry modifications. We have also discussed the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy built-in onboard MVCT detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article entitled 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
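
A schematic sketch only, not the thesis's actual algorithm: an iterative correction loop of the generic kind described, in which a dose estimate is repeatedly updated until the simulated transmitted signal matches the measured one. The forward model below is a purely illustrative stand-in for the convolution/superposition transport step.

```python
# Generic iterative reconstruction loop driven by transmission measurements.
import numpy as np

def forward_model(dose: np.ndarray) -> np.ndarray:
    """Stand-in for the transport step that maps a dose estimate to the
    signal expected at the transmission detector (illustrative surrogate)."""
    return 0.8 * dose + 0.05

def reconstruct(measured: np.ndarray, n_iter: int = 20, tol: float = 1e-4) -> np.ndarray:
    dose = np.ones_like(measured)                 # initial guess
    for _ in range(n_iter):
        simulated = forward_model(dose)
        ratio = measured / np.clip(simulated, 1e-6, None)
        dose *= ratio                             # multiplicative correction
        if np.max(np.abs(ratio - 1.0)) < tol:     # stop when simulation matches measurement
            break
    return dose

if __name__ == "__main__":
    true_dose = np.array([1.0, 2.0, 1.5])
    measured = forward_model(true_dose)
    print(reconstruct(measured).round(3))         # recovers ~[1.0, 2.0, 1.5]
```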

Relevance: 30.00%

Abstract:

Purpose: To compare the sexual behavior of adolescent males who do and do not watch pornographic websites. Methods: This study was conducted as a school survey. Data were drawn from the 2002 Swiss Multicenter Adolescent Survey on Health (SMASH02) database, a survey including 7,548 adolescents aged 16-20. The setting was post-mandatory schools in Switzerland. A total of 2,891 male students who had connected to the internet in the last 30 days were enrolled and divided into two groups: boys who deliberately watched pornographic websites in the last 30 days (n = 942; 33%) and boys who did not (n = 1,949; 67%). Variables included socio-demographic characteristics; frequency of connection to the internet; and sexual behavior parameters (having a girlfriend and, if so, for more or less than 6 months; having had sexual intercourse; age at first sexual intercourse; use of a condom at last sexual intercourse; number of sexual partners; having made a partner pregnant). Results: A logistic regression was performed using STATA 9.2. The only significant socio-demographic variable was having a low socioeconomic status (adjusted odds ratio [AOR] 1.66); no difference was found for age or academic track between the two groups. Boys who watched pornographic websites were also significantly more likely to connect frequently to the internet (one day a week: AOR 1.75; several days a week: AOR 2.36; every day: AOR 3.11), to have had sexual intercourse (AOR 2.06), and to have had their first sexual intercourse before age 15 (AOR 1.48). The stability of the relationship with their girlfriend did not appear to have any influence on the search for pornography on the internet. Conclusions: About one third of boys in our sample reported having accessed pornographic websites in the last 30 days, a proportion similar to other studies. Watching such websites increases with the frequency of connection to the internet and seems to be correlated with an earlier sexual debut among adolescent males. However, having had first sexual intercourse before age 15 is the only sexual risk behavior that seems to be increased among boys watching pornographic websites. Further studies should address the causality of this correlation and the factors influencing the search for pornography on the web among boys, in order to explore new approaches to the prevention of sexual risk behaviors. Sources of Support: The SMASH02 survey was carried out with the financial support of the Swiss Federal Office of Public Health and the participating cantons.
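
An illustrative sketch of a logistic regression of the kind reported, in which adjusted odds ratios are obtained by exponentiating the fitted coefficients. The variable names mirror the abstract, but the data are synthetic and the covariate set is an assumption; the original analysis was run in STATA 9.2.

```python
# Illustrative only: logistic regression with AORs = exp(coefficients),
# on invented binary data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2891
df = pd.DataFrame({
    "watched": rng.integers(0, 2, n),           # outcome: watched porn websites
    "low_ses": rng.integers(0, 2, n),           # low socioeconomic status
    "daily_internet": rng.integers(0, 2, n),    # connects every day
    "early_intercourse": rng.integers(0, 2, n), # first intercourse before 15
})
fit = smf.logit("watched ~ low_ses + daily_internet + early_intercourse", df).fit(disp=0)
print(np.exp(fit.params))   # exponentiated coefficients = adjusted odds ratios
```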

Relevance: 30.00%

Abstract:

The growing multilingual trend in movie production comes with a challenge for dubbing translators, since they are increasingly confronted with more than one source language. The main purpose of this master’s thesis is to provide a case study on how these third languages (see CORRIUS and ZABALBEASCOA 2011) are rendered. Another aim is to put a particular focus on their textual and narrative functions and to detect possible shifts that might occur in translations. By applying a theoretical model for translation analysis (CORRIUS and ZABALBEASCOA 2011), this study describes how third languages are rendered in the German, Spanish, and Italian dubbed versions of the 2009 Tarantino movie Inglourious Basterds. A broad range of solution types is thereby revealed and prevalent restrictions of the translation process identified. The target texts are placed in the context of some sociohistorical aspects of dubbing in order to detect prevalent norms of the respective cultures and to discuss the acceptability of translations (TOURY 1995). The translatability potential of even highly complex multilingual audiovisual texts is demonstrated in this study. Moreover, proposals for further studies in multilingual audiovisual translation are outlined and the potential for future investigations in this field thereby emphasised.

Relevance: 30.00%

Abstract:

Population viability analyses (PVA) are increasingly used in metapopulation conservation plans. Two major types of models are commonly used to assess vulnerability and to rank management options: population-based stochastic simulation models (PSM, such as RAMAS or VORTEX) and stochastic patch occupancy models (SPOM). While the former rely on explicit intrapatch dynamics and interpatch dispersal to predict population levels in space and time, the latter are based on spatially explicit metapopulation theory, in which the probability of patch occupancy is predicted from patch area and isolation (patch topology). We applied both approaches to a European tree frog (Hyla arborea) metapopulation in western Switzerland in order to evaluate the concordance of the two models and their application to conservation. Although some quantitative discrepancies appeared in terms of network occupancy and equilibrium population size, the two approaches were largely concordant regarding the ranking of patch values and sensitivities to parameters, which is encouraging given the differences in the underlying paradigms and input data.
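
A generic sketch of a stochastic patch occupancy model of the incidence-function type, in which colonisation depends on connectivity to occupied patches and extinction on patch area. Functional forms and parameter values are illustrative assumptions, not the parameterization fitted to the Hyla arborea network in the study.

```python
# Generic SPOM simulation: connectivity-driven colonisation, area-driven extinction.
import numpy as np

def simulate_spom(areas, dist, occ0, alpha=1.0, y=1.0, e=0.1, x=1.0,
                  n_steps=50, seed=0):
    """areas: patch areas (n,); dist: pairwise distances (n, n);
    occ0: initial 0/1 occupancy vector."""
    rng = np.random.default_rng(seed)
    kernel = np.exp(-alpha * dist)
    np.fill_diagonal(kernel, 0.0)               # a patch does not colonise itself
    occ = occ0.astype(bool).copy()
    for _ in range(n_steps):
        S = kernel @ (areas * occ)              # connectivity to occupied patches
        p_col = S**2 / (S**2 + y**2)            # colonisation probability
        p_ext = np.minimum(1.0, e / areas**x)   # extinction probability
        occ = np.where(occ,
                       rng.random(occ.size) > p_ext,   # occupied: survive extinction?
                       rng.random(occ.size) < p_col)   # empty: get colonised?
    return occ

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 6
    xy = rng.random((n, 2)) * 5
    dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    areas = rng.uniform(0.5, 2.0, n)
    print(simulate_spom(areas, dist, np.ones(n)))
```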

Relevance: 30.00%

Abstract:

Purpose: To evaluate the short- and mid-term evolution of the apparent diffusion coefficient (ADC) of lesions treated with radiofrequency ablation (RF), in order to determine whether the ADC can be used as a marker of tumour response. Methods and Materials: Twenty patients were treated for a liver malignancy with RF and were examined on a 1.5 T/3.0 T machine with T2, gadolinium-enhanced T1 and diffusion sequences: before treatment (< 1 month), just after treatment (< 1 month) and at mid-term (3-6 months). The ADC was measured in the whole lesion and in the area with the most restricted diffusion (MRDA). The ROI size was also measured on the diffusion map. Pearson/ANOVA tests were used. Results: All patients were successfully treated, with complete disappearance of contrast enhancement (CE). The lesional size on T2 decreased over time (p < 0.002). The ADC in the whole lesion showed a bell-shaped evolution (increasing just after RF, then decreasing; p = 0.02). The ROI size on the diffusion map followed a similar course (p = 0.01). For the MRDA, such evolutions were also found, but they were not significant. There was a negative correlation between CE and the ADC (p < 0.02) and between the lesional size on T2 and the ADC (p = 0.03) in the whole lesion. There were also positive correlations between the ROI size and the ADC (p = 0.0008) and between CE and the size on T2 (p = 0.0002). The ADC in the MRDA showed some non-significant correlations with other variables. Conclusion: Lesions successfully treated with RF have a clear and predictable evolution in terms of T2 size, CE and ADC.
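
A hedged sketch of how an apparent diffusion coefficient is obtained from two diffusion-weighted acquisitions under the standard monoexponential model; the b-values and signal intensities below are illustrative, not data from the study.

```python
# Monoexponential DWI model: S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S0/Sb)/b.
import numpy as np

def adc(s0: np.ndarray, sb: np.ndarray, b: float) -> np.ndarray:
    """s0: signal at b = 0; sb: signal at b-value b (s/mm^2); returns ADC in mm^2/s."""
    return np.log(s0 / sb) / b

s0 = np.array([900.0, 850.0])      # signal at b = 0 s/mm^2 (assumed values)
sb = np.array([400.0, 520.0])      # signal at b = 800 s/mm^2 (assumed values)
print(adc(s0, sb, b=800.0))        # ~1e-3 mm^2/s, a typical order of magnitude in liver
```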

Relevance: 30.00%

Abstract:

Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries substantial financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through a better knowledge of the internal structure of the landslide and the determination of its volume and its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material alter the physical parameters, in particular decreasing the seismic velocities and the material density. Seismic methods are therefore well suited to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement. Through it, shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion on long arrays, determining the dispersion curves and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations. In practice, this assumption is seldom correct, in particular for landslides, in which reshaped layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape so much that it sometimes prevents their determination. Our approach to using surface-wave dispersion analysis on sites that include lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve and interpolating the resulting velocity models. The choice of the location as well as the length of each geophone array is important. It takes into account the location of the heterogeneities revealed by the seismic refraction interpretation of the data, but also the location of signal amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient way to determine dispersion curves using short arrays. It consists in building a time-offset gather covering a wide offset range by assembling seismograms acquired with different source-to-receiver offsets. When assembling the different data, a phase correction is applied in order to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. At the Arnex site, 22 dispersion curves were determined using 10 m long geophone arrays. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the interpretation of the vertical seismic profile is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole: a clay layer with a shear-wave velocity of about 175 m/s overlies, at 9 m depth, clayey-sandy till deposits characterized by an S-wave velocity of about 300 m/s down to 14 m and of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments. In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The obtained curves are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into consideration in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles acquired with a source built as part of this work complete and corroborate the velocity model derived from the surface-wave analysis. In particular, a reflector at a depth of 5 to 10 m, associated with a stacking velocity of 180 m/s, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
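
A hedged sketch of the phase-correction idea described after Lin and Lin (2007): for two successive shots, the cross power spectral density of common-offset traces yields a frequency-dependent phase shift that can be removed before assembling the traces into one long time-offset gather. Windowing and averaging over offsets are simplified away; the signals are synthetic.

```python
# Cross-spectral phase between common-offset traces of two successive shots.
import numpy as np

def phase_correction(trace_shot1: np.ndarray, trace_shot2: np.ndarray, dt: float):
    """Both traces share the same source-receiver offset; returns the frequency
    axis and the cross-spectral phase between the two shots."""
    n = len(trace_shot1)
    f = np.fft.rfftfreq(n, dt)
    cross = np.fft.rfft(trace_shot1) * np.conj(np.fft.rfft(trace_shot2))
    return f, np.angle(cross)       # phase to subtract from shot 2's spectra

if __name__ == "__main__":
    dt, n = 0.001, 500
    t = np.arange(n) * dt
    sig = np.sin(2 * np.pi * 20 * t)                 # 20 Hz reference trace
    delayed = np.sin(2 * np.pi * 20 * (t - 0.005))   # same trace with a 5 ms static shift
    f, phi = phase_correction(sig, delayed, dt)
    idx = np.argmin(np.abs(f - 20.0))
    print(phi[idx] / (2 * np.pi * 20.0))             # ~0.005 s: recovered static delay
```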

Relevance: 30.00%

Abstract:

Financial markets play an important role in an economy, performing various functions such as mobilizing and pooling savings, producing information about investment opportunities, screening and monitoring investments, implementing corporate governance, and diversifying and managing risk. These functions influence saving rates, investment decisions and technological innovation and, therefore, have important implications for welfare. In my PhD dissertation I examine the interplay of financial and product markets by looking at different channels through which financial markets may influence an economy. My dissertation consists of four chapters. The first chapter is a co-authored work with Martin Strieborny, a PhD student from the University of Lausanne. The second chapter is a co-authored work with Melise Jaud, a PhD student from the Paris School of Economics. The third chapter is co-authored with both Melise Jaud and Martin Strieborny. The last chapter of my PhD dissertation is a single-author paper. Chapter 1 of my PhD thesis analyzes the effect of financial development on the growth of contract-intensive industries. These industries intensively use intermediate inputs that can neither be sold on organized exchanges nor are reference-priced (Levchenko, 2007; Nunn, 2007). A typical example of a contract-intensive industry would be one where an upstream supplier has to make investments in order to customize a product for the needs of a downstream buyer. After the investment is made and the product is adjusted, the buyer may refuse to meet a commitment and trigger ex post renegotiation. Since the product is customized to the buyer's needs, the supplier cannot sell the product to a different buyer at the original price. This is referred to in the literature as the holdup problem. As a consequence, individually rational suppliers will underinvest in relationship-specific assets, hurting the downstream firms, with negative consequences for aggregate growth. The standard way to mitigate the holdup problem is to write a binding contract and to rely on legal enforcement by the state. However, even the most effective contract enforcement might fail to protect the supplier in tough times when the buyer lacks a reliable source of external financing. This suggests a potential role of financial intermediaries, banks in particular, in mitigating the incomplete contract problem. First, financial products such as letters of credit and letters of guarantee can substantially decrease the risk and transaction costs of the parties. Second, a bank loan can serve as a signal about a buyer's true financial situation: an upstream firm will be more willing to undertake relationship-specific investment knowing that the business partner is creditworthy and will abstain from myopic behavior (Fama, 1985; von Thadden, 1995). Therefore, a well-developed financial (especially banking) system should disproportionately benefit contract-intensive industries. The empirical test confirms this hypothesis. Indeed, contract-intensive industries seem to grow faster in countries with a well-developed financial system. Furthermore, this effect comes from a more developed banking sector rather than from a deeper stock market. These results are reaffirmed by examining the effect of US bank deregulation on the growth of contract-intensive industries in different states.
Beyond an overall pro-growth effect, bank deregulation seems to disproportionately benefit industries requiring relationship-specific investments from their suppliers. Chapter 2 of my PhD focuses on the role of the financial sector in promoting exports of developing countries. In particular, it investigates how credit constraints affect the ability of firms operating in agri-food sectors of developing countries to keep exporting to foreign markets. Trade in high-value agri-food products from developing countries has expanded enormously over the last two decades, offering opportunities for development. However, trade in agri-food is governed by a growing array of standards. Sanitary and Phytosanitary standards (SPS) and technical regulations impose additional sunk, fixed and operating costs along the firms' export life. Such costs may be detrimental to firms' survival, "pricing out" producers that cannot comply. The existence of these costs suggests a potential role of credit constraints in shaping the duration of trade relationships in foreign markets. A well-developed financial system provides exporters with the funds necessary to adjust production processes in order to meet quality and quantity requirements in foreign markets and to maintain long-standing trade relationships. Products with higher needs for financing should benefit the most from a well-functioning financial system. This differential effect calls for a difference-in-differences approach initially proposed by Rajan and Zingales (1998). As a proxy for the demand for financing of agri-food products, the sanitary risk index developed by Jaud et al. (2009) is used. The empirical literature on standards and norms shows high costs of compliance, both variable and fixed, for high-value food products (Garcia-Martinez and Poole, 2004; Maskus et al., 2005). The sanitary risk index reflects the propensity of products to fail health and safety controls on the European Union (EU) market. Given the high costs of compliance, the sanitary risk index captures the demand for external financing to comply with such regulations. The prediction is empirically tested by examining the export survival of different agri-food products from firms operating in Ghana, Mali, Malawi, Senegal and Tanzania. The results suggest that agri-food products that require more financing to keep up with the food safety regulation of the destination market indeed survive longer in foreign markets when they are exported from countries with better-developed financial markets. Chapter 3 analyzes the link between financial markets and the efficiency of resource allocation in an economy. Producing and exporting products inconsistent with a country's factor endowments constitutes a serious misallocation of funds, which undermines the competitiveness of the economy and inhibits its long-term growth. In this chapter, inefficient exporting patterns are analyzed through the lens of agency theories from the corporate finance literature. Managers may pursue projects with negative net present values because their perquisites or even their jobs might depend on them. Exporting activities are particularly prone to this problem. Business related to foreign markets involves both high levels of additional spending and strong incentives for managers to overinvest. Rational managers might have incentives to push for exports that use the country's scarce factors, which is suboptimal from a social point of view. Export subsidies might further skew the incentives towards inefficient exporting.
Management can divert export subsidies into investments promoting inefficient exporting. The corporate finance literature stresses the disciplining role of outside debt in counteracting the internal pressures to divert such "free cash flow" into unprofitable investments. Managers can lose both their reputation and the control of "their" firm if unpaid external debt triggers a bankruptcy procedure. The threat of possible failure to satisfy debt service payments pushes managers toward an efficient use of available resources (Jensen, 1986; Stulz, 1990; Hart and Moore, 1995). The main source of debt financing in most countries is banks. The disciplining role of banks might be especially important in countries suffering from insufficient judicial quality. Banks, in pursuing their rights, rely on comparatively simple legal interventions that can be implemented even by mediocre courts. In addition to their disciplining role, banks can promote efficient exporting patterns in a more direct way by relaxing the credit constraints of producers: through screening, they identify and invest in the most profitable investment projects. Therefore, a well-developed domestic financial system, and in particular a well-developed banking system, should help push a country's exports towards products congruent with its comparative advantage. This prediction is tested by looking at the survival of different product categories exported to the US market. Products are identified according to the Euclidean distance between their revealed factor intensity and the country's factor endowments. The results suggest that products suffering from a comparative disadvantage (labour-intensive products from capital-abundant countries) survive less long on the competitive US market. This pattern is stronger if the exporting country has a well-developed banking system. Thus, a strong banking sector promotes exports consistent with a country's comparative advantage. Chapter 4 of my PhD thesis further examines the role of financial markets in fostering efficient resource allocation in an economy. In particular, the allocative efficiency hypothesis is investigated in the context of equity market liberalization. Many empirical studies document a positive and significant effect of financial liberalization on growth (Levchenko et al. 2009; Quinn and Toyoda 2009; Bekaert et al., 2005). However, the decrease in the cost of capital and the associated growth in investment appear rather modest in comparison to the large GDP growth effect (Bekaert and Harvey, 2005; Henry, 2000, 2003). Therefore, financial liberalization may have a positive impact on growth through its effect on the allocation of funds across firms and sectors. Free access to international capital markets allows the largest and most profitable domestic firms to borrow funds in foreign markets (Rajan and Zingales, 2003). As domestic banks lose some of their best clients, they reoptimize their lending practices, seeking new clients among smaller and younger industrial firms. These firms are likely to be riskier than large and established companies. Screening of customers becomes prevalent as the return to screening rises. Banks, ceteris paribus, tend to focus on firms operating in comparative-advantage sectors because they are better risks. Firms in comparative-disadvantage sectors, finding it harder to finance their entry into or survival in export markets, either exit or refrain from entering export markets.
On aggregate, one should therefore expect to see less entry, more exit, and shorter survival in export markets in those sectors after financial liberalization. The paper investigates the effect of financial liberalization on a country's export pattern by comparing the dynamics of entry and exit of different products in a country's export portfolio before and after financial liberalization. The results suggest that products that lie far from the country's comparative advantage set tend to disappear relatively faster from the country's export portfolio following the liberalization of financial markets. In other words, financial liberalization tends to rebalance the composition of a country's export portfolio towards products that intensively use the economy's abundant factors.
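
An illustrative sketch of the distance measure described in Chapter 3: classify export products by the Euclidean distance between their revealed factor intensity and the exporting country's factor endowments. The two-dimensional factor coordinates and the product names below are invented for illustration, not the thesis's data.

```python
# Euclidean distance between product factor intensity and country endowment.
import numpy as np

def distance_to_endowment(product_intensity: np.ndarray,
                          country_endowment: np.ndarray) -> float:
    return float(np.linalg.norm(product_intensity - country_endowment))

country = np.array([0.8, 0.6])            # e.g. capital- and skill-abundant (assumed coordinates)
products = {
    "machinery": np.array([0.7, 0.7]),    # close to the country's comparative advantage
    "apparel":   np.array([0.2, 0.1]),    # labour-intensive, far from it
}
for name, intensity in products.items():
    print(name, round(distance_to_endowment(intensity, country), 3))
```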

Relevance: 30.00%

Abstract:

The aims of this study were to investigate the usefulness of serum C-reactive protein, procalcitonin, tumor necrosis factor alpha, interleukin-6, and interleukin-8 as postmortem markers of sepsis and to compare C-reactive protein and procalcitonin values in serum, vitreous humor, and cerebrospinal fluid in a series of sepsis cases and control subjects, in order to determine whether these measurements may be employed for the postmortem diagnosis of sepsis. Two study groups were formed: a sepsis group (eight subjects coming from the intensive care units of two university hospitals, with a clinical diagnosis of sepsis in vivo) and a control group (ten autopsy cases admitted to two university medicolegal centers, who died of natural or unnatural causes, with no elements suggesting underlying sepsis as the cause of death). Serum C-reactive protein and procalcitonin concentrations were significantly different between sepsis cases and control cases, whereas serum tumor necrosis factor alpha, interleukin-6, and interleukin-8 values were not significantly different between the two groups, suggesting that measurement of interleukin-6, interleukin-8, and tumor necrosis factor alpha is non-optimal for postmortem discrimination of cases with sepsis. In the sepsis group, vitreous procalcitonin was detectable in seven out of eight cases. In the control group, vitreous procalcitonin was clearly detectable in only one case, which also showed an increase of all markers in serum and for which the cause of death was myocardial infarction associated with multi-organ failure. According to the results of this study, the determination of vitreous procalcitonin may be an alternative to serum procalcitonin for the postmortem diagnosis of sepsis.

Relevance: 30.00%

Abstract:

Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and it requires a minimal computational cost.
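
A hedged illustration of the calibration idea: fit parameters to short-maturity implied volatilities by least squares. The smile formula below is a crude first-order stand-in (ATM level plus a skew term), not the second-order approximation derived in the paper, and it only identifies the combination playing the role of rho*xi; the "market" quotes are synthetic.

```python
# Calibration skeleton: least-squares fit of a simple short-maturity smile proxy.
import numpy as np
from scipy.optimize import least_squares

def approx_smile(params, k):
    """Crude stand-in: sigma(k) ~ sqrt(v0) + skew * k / (4 * sqrt(v0)),
    where 'skew' plays the role of rho*xi in a Heston-style model."""
    v0, skew = params
    return np.sqrt(v0) + skew * k / (4.0 * np.sqrt(v0))

# synthetic "market" implied vols at log-moneyness k for a very short maturity
k = np.linspace(-0.1, 0.1, 9)
rng = np.random.default_rng(0)
market_iv = approx_smile([0.04, -0.3], k) + 0.0005 * rng.standard_normal(k.size)

def residuals(params):
    return approx_smile(params, k) - market_iv

fit = least_squares(residuals, x0=[0.02, -0.1],
                    bounds=([1e-4, -5.0], [1.0, 5.0]))
print(dict(zip(["v0", "skew"], fit.x.round(3))))   # recovers ~v0 = 0.04, skew = -0.3
```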