16 results for Maximum hardness and the minimum polarizability pr

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

We show that the dispersal routes reconstruction problem can be stated as an instance of a graph theoretical problem known as the minimum cost arborescence problem, for which there exist efficient algorithms. Furthermore, we derive some theoretical results, in a simplified setting, on the possible optimal values that can be obtained for this problem. With this, we place the dispersal routes reconstruction problem on solid theoretical grounds, establishing it as a tractable problem that also lends itself to formal mathematical and computational analysis. Finally, we present an insightful example of how this framework can be applied to real data. We propose that our computational method can be used to define the most parsimonious dispersal (or invasion) scenarios, which can then be tested using complementary methods such as genetic analysis.
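The minimum cost arborescence framing can be made concrete with a toy sketch. The brute-force search below is only illustrative (the efficient algorithms the abstract refers to are typically variants of Edmonds' algorithm); the regions, edges and dispersal costs are invented:

```python
from itertools import product

def reaches_root(v, parent, root):
    """Follow parent pointers from v; True iff we hit the root (no cycle)."""
    seen = set()
    while v != root:
        if v in seen or v not in parent:
            return False
        seen.add(v)
        v = parent[v]
    return True

def min_cost_arborescence(nodes, edges, root):
    """Brute-force minimum cost arborescence of a tiny directed graph.

    edges maps (u, v) -> cost of the directed edge u -> v.  Every non-root
    node must pick exactly one incoming edge; a choice is a valid
    arborescence iff every node can reach the root via parent pointers.
    Only usable for toy instances -- the search is exponential.
    """
    others = [v for v in nodes if v != root]
    choices = [[(u, v) for (u, v) in edges if v == n] for n in others]
    best_cost, best_tree = None, None
    for combo in product(*choices):
        parent = {v: u for (u, v) in combo}
        if all(reaches_root(v, parent, root) for v in others):
            cost = sum(edges[e] for e in combo)
            if best_cost is None or cost < best_cost:
                best_cost, best_tree = cost, combo
    return best_cost, best_tree

# hypothetical dispersal costs between four regions, rooted at origin 'A'
edges = {('A', 'B'): 1, ('A', 'C'): 5, ('B', 'C'): 2,
         ('C', 'D'): 1, ('B', 'D'): 4}
cost, tree = min_cost_arborescence('ABCD', edges, 'A')
print(cost, sorted(tree))  # cheapest set of routes reaching every region
```

Here the cheapest scenario chains A → B → C → D rather than colonizing C directly from A, which is exactly the kind of most parsimonious dispersal scenario the framework is meant to single out.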

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single and fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution for a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
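As an illustration of the probabilistic strategy, the sketch below computes a posterior over N at a single locus, assuming independent allele draws and invented allele frequencies. It is a deliberate simplification of, not a reconstruction of, the paper's model: the likelihood that 2N draws cover exactly the observed allele set is obtained by inclusion-exclusion over subsets of that set.

```python
from itertools import combinations

def likelihood_exact_set(freqs, observed, n):
    """P(the 2n alleles drawn cover exactly the observed set), computed by
    inclusion-exclusion over subsets of the observed alleles."""
    s = list(observed)
    total = 0.0
    for k in range(len(s) + 1):
        for subset in combinations(s, k):
            p = sum(freqs[a] for a in subset)
            total += (-1) ** (len(s) - k) * p ** (2 * n)
    return total

def posterior_n(freqs, observed, n_range, prior=None):
    """Posterior over the number of contributors N given the allele set."""
    prior = prior or {n: 1 / len(n_range) for n in n_range}
    joint = {n: prior[n] * likelihood_exact_set(freqs, observed, n)
             for n in n_range}
    z = sum(joint.values())
    return {n: joint[n] / z for n in joint}

# hypothetical allele frequencies at one locus, and a 3-allele mixed stain
freqs = {'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1}
post = posterior_n(freqs, {'a', 'b', 'c'}, range(2, 6))
print(post)
```

Rather than committing to the deterministic minimum (here N = 2 suffices to 'explain' three alleles), the posterior spreads its mass over several plausible values of N, which is the behaviour the paper's scoring-rule comparison rewards.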

A continuous carbon isotope curve from Middle-Upper Jurassic pelagic carbonate rocks was acquired from two sections in the southern part of the Umbria-Marche Apennines in central Italy. At the Colle Bertone section (Terni) and the Terminilletto section (Rieti), the Upper Toarcian to Bajocian Calcari e Marne a Posidonia Formation and the Aalenian to Kimmeridgian Calcari e Marne a Posidonia and Calcari Diasprigni formations were sampled, respectively. Biostratigraphy in both sections is based on rich assemblages of calcareous nannofossils and radiolarians, as well as some ammonites found in the upper Toarcian-Bajocian interval. Both sections revealed a relative minimum of δ13C(PDB) close to +2‰ in the Aalenian and a maximum around +3.5‰ in the early Bajocian, associated with an increase in visible chert. In basinal sections in Umbria-Marche, this interval includes the very cherty base of the Calcari Diasprigni Formation (e.g. at Valdorbia) or the chert-rich uppermost portion of the Calcari a Posidonia (e.g. at Bosso). In the Terminilletto section, the Bajocian-early Bathonian interval shows a gradual decrease in δ13C(PDB) values and a low around +2.3‰. This part of the section is characterised by more than 40 m of almost chert-free limestones and correlates with a recurrence of limestone-rich facies in basinal sections at Valdorbia. A double peak with δ13C(PDB) values around +3‰ was observed in the Callovian and Oxfordian, constrained by well-preserved radiolarian faunas. The maxima lie in the Callovian and the middle Oxfordian, and the minimum between the two peaks should be near the Callovian/Oxfordian boundary. In the Terminilletto section, visible chert increases together with δ13C(PDB) values from the middle Bathonian and reaches peak values in the Callovian-Oxfordian.
In basinal sections in Umbria-Marche, a sharp increase in visible chert is observed at this level within the Calcari Diasprigni. A drop of δ13C values towards +2‰ occurs in the Kimmeridgian and coincides with a decrease of visible chert in outcrop. The observed positive δ13C anomalies during the early Bajocian and the Callovian-Oxfordian may record changes in global climate towards warmer, more humid periods characterised by increased nutrient mobilisation and increased carbon burial. High biosiliceous (radiolarians, siliceous sponges) productivity and preservation appear to coincide with the positive δ13C anomalies, when the production of platform carbonates was subdued and ceased in many areas, with a drastic reduction of periplatform ooze input in many Tethyan basins. The carbon and silica cycles appear to be linked through global warming and increased continental weathering. Hydrothermal events related to extensive rifting and/or accelerated oceanic spreading may be the endogenic driving force that created a perturbation of the exogenic system (excess CO2 into the atmosphere and greenhouse conditions) reflected by the positive δ13C shifts and biosiliceous episodes.

General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and making use of the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and the fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster-formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effects are the main driving forces of emission changes.
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (under the omission of price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller (3.5%) in 2000 than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced further by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Going now to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it in order to include sectoral disaggregation and a temporal dimension.
This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
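The center-of-gravity computation in the last chapter can be sketched as a weighted center of mass on the sphere: each city becomes a unit 3-D vector, vectors are averaged with economic weights, and the mean is projected back to coordinates. The cities and weights below are invented placeholders, not the thesis data:

```python
from math import radians, degrees, cos, sin, atan2, sqrt

def center_of_gravity(points):
    """Weighted center of mass of (lat, lon, weight) points on a sphere.

    Each point is mapped to a unit vector in 3-D, the vectors are averaged
    with their weights, and the mean is projected back to lat/lon.
    """
    x = y = z = w_total = 0.0
    for lat, lon, w in points:
        la, lo = radians(lat), radians(lon)
        x += w * cos(la) * cos(lo)
        y += w * cos(la) * sin(lo)
        z += w * sin(la)
        w_total += w
    x, y, z = x / w_total, y / w_total, z / w_total
    lat = degrees(atan2(z, sqrt(x * x + y * y)))
    lon = degrees(atan2(y, x))
    return lat, lon

# purely illustrative weights for three cities (not the thesis data)
cities = [(40.7, -74.0, 1.0),   # New York
          (51.5, -0.1, 1.0),    # London
          (35.7, 139.7, 1.0)]   # Tokyo
print(center_of_gravity(cities))
```

Increasing the weight on Tokyo pulls the computed longitude eastward, which is the "shift towards Asia" the chapter measures; note that for widely spread points the mean vector sits deep inside the globe, so only its direction is meaningful.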

Laser desorption ionisation mass spectrometry (LDI-MS) has been demonstrated to be an excellent analytical method for the forensic analysis of inks on a questioned document. The ink can be analysed directly on its substrate (paper) and hence offers a fast method of analysis, as sample preparation is kept to a minimum and, more importantly, damage to the document is minimised. LDI-MS has also previously been reported to provide a high power of discrimination in the statistical comparison of ink samples and has the potential to be introduced as part of routine ink analysis. This paper looks into the methodology further and statistically evaluates the reproducibility and the influence of paper on black gel pen ink LDI-MS spectra, by comparing spectra of three different black gel pen inks on three different paper substrates. Although generally minimal, the influences of sample homogeneity and paper type were found to be sample dependent. This should be taken into account to avoid the risk of false differentiation of black gel pen ink samples. Other statistical approaches such as principal component analysis (PCA) proved to be a good alternative to correlation coefficients for the comparison of whole mass spectra.
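For intuition, the correlation-coefficient comparison of whole spectra (the baseline that PCA is contrasted with) can be sketched as follows; the peak intensities on a shared m/z grid are invented, not measured values:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two intensity vectors of equal length."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# invented peak intensities at six shared m/z positions
ink_a  = [120, 30, 0, 85, 10, 60]
ink_a2 = [118, 28, 2, 90, 12, 55]   # replicate of ink A on another paper
ink_b  = [5, 200, 90, 10, 150, 0]   # a different black gel pen ink

print(pearson(ink_a, ink_a2))  # replicates of the same ink: close to 1
print(pearson(ink_a, ink_b))   # different inks: much lower
```

The sample-dependent paper influence reported above would show up here as replicate correlations dipping for some ink/paper pairs, which is exactly the false-differentiation risk the paper warns about.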

The change over time in the fecundity and weight of queens was investigated in three monogynous, independent colony founding species, Lasius niger, Camponotus ligniperda and C. herculeanus, and two polygynous, dependent colony founding species, Plagiolepis pygmaea and Iridomyrmex humilis. Queens of the three species founding independently exhibited a similar pattern, with a significant loss of weight between mating and the emergence of the first workers. In contrast, the weights of queens of the species employing dependent colony founding remained more stable. Fecundity of queens founding independently increased slowly with time, whereas fecundity of queens founding dependently reached its maximum level some weeks after the beginning of the first reproductive season. These results are discussed in relation to some differences in life history (e.g., life-span) between queens utilizing independent and dependent colony founding.

Abstract Being an important source of energy, plants are constantly attacked by herbivores and pathogens. As sessile organisms, they have developed sophisticated defense responses to cope with attack. Among these responses, signalling pathways using endogenous elicitors, including jasmonates (JA), allow the plant to induce the production of defense proteins such as pathogenesis-related (PR) proteins. The genes encoding these proteins belong to multigenic families. The first goal of this thesis was to evaluate the number of PR genes in the genome of Arabidopsis thaliana and estimate how much of this plant defense system was dependent on the jasmonate signaling pathway in leaves. Surprisingly, a cluster of only 15 genes out of 266 PR genes was exclusively regulated by JA. Multiple members of the jacalin lectin and trypsin inhibitor gene families were shown to be regulated by JA. Present in all eukaryotic immune systems, defensins are an attractive PR family to study. In Arabidopsis thaliana, 317 defensin-related proteins have been found, but just 15 defensins (the PDF family) are well annotated. These defensins are split into two groups, one of which may have appeared and diversified recently. The second goal of this thesis was to study this defensin gene group combining bioinformatic, reporter gene and quantitative PCR techniques. We have shown that this group contains an interesting acidic defensin, PDF1.5, which seems to have undergone positive selection. No information was previously available on this protein. Contrary to our expectations, we have established that this protein may have a biological activity in plant defense.
This thesis allowed us to define the number of PR genes induced by the jasmonate pathway and gave initial leads to explain the redundancy of the PR genes in the genome of Arabidopsis. In conclusion, even if many defense gene families are already defined in the Arabidopsis genome, much work remains to be done on individual members.

Whether or not species participating in specialized and obligate interactions display similar and simultaneous demographic variations at the intraspecific level remains an open question in phylogeography. In the present study, we used the mutualistic nursery pollination occurring between the European globeflower Trollius europaeus and its specialized pollinators in the genus Chiastocheta as a case study. Specifically, we investigated whether the phylogeographies of the pollinating flies are significantly different from the expectation under a scenario of plant-insect congruence. Based on a large-scale sampling, we first used mitochondrial data to infer the phylogeographical histories of each fly species. Then, we defined phylogeographical scenarios of congruence with the plant history, and used maximum likelihood and Bayesian approaches to test for plant-insect phylogeographical congruence for the three Chiastocheta species. We show that the phylogeographical histories of the three fly species differ. Only Chiastocheta lophota and Chiastocheta dentifera display strong spatial genetic structures, which do not appear to be statistically different from those expected under scenarios of phylogeographical congruence with the plant. The results of the present study indicate that the fly species responded in independent and different ways to shared evolutionary forces, displaying varying levels of congruence with the plant genetic structure.

Through a rational design approach, we generated a panel of HLA-A*0201/NY-ESO-1(157-165)-specific T cell receptors (TCR) with affinities increased by up to 150-fold over the wild-type TCR. Using these TCR variants, which extend just beyond the natural affinity range, along with an extreme supraphysiologic variant with 1400-fold enhanced affinity and a low-binding one, we sought to determine the effect of TCR binding properties, along with cognate peptide concentration, on CD8(+) T cell responsiveness. Major histocompatibility complexes (MHC) expressed on the surface of various antigen presenting cells were peptide-pulsed and used to stimulate human CD8(+) T cells expressing the different TCR via lentiviral transduction. At intermediate peptide concentration we measured maximum cytokine/chemokine secretion, cytotoxicity, and Ca(2+) flux for CD8(+) T cells expressing TCR within a dissociation constant (K(D)) range of ∼1-5 μM. Under these same conditions there was a gradual attenuation in activity for supraphysiologic affinity TCR with K(D) < ∼1 μM, irrespective of CD8 co-engagement and of half-life (t(1/2) = ln 2/k(off)) values. With increased peptide concentration, however, the activity levels of CD8(+) T cells expressing supraphysiologic affinity TCR were gradually restored. Together, our data support the productive hit rate model of T cell activation, arguing that it is not the absolute number of TCR/pMHC complexes formed at equilibrium, but rather their productive turnover, that controls levels of biological activity. Our findings have important implications for various immunotherapies under development, such as adoptive cell transfer of TCR-engineered CD8(+) T cells, as well as for peptide vaccination strategies.
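The kinetic quantities quoted above are related by K_D = k_off/k_on and t(1/2) = ln 2/k_off. A minimal sketch with illustrative rate constants (not values measured in the study):

```python
from math import log

def dissociation_constant(k_on, k_off):
    """K_D = k_off / k_on, in molar units when k_on is in 1/(M*s)
    and k_off in 1/s."""
    return k_off / k_on

def half_life(k_off):
    """Half-life of the TCR/pMHC complex: t1/2 = ln 2 / k_off (seconds)."""
    return log(2) / k_off

# illustrative rate constants, chosen to land inside the ~1-5 uM optimum
k_on, k_off = 1e5, 0.2        # 1/(M*s) and 1/s
kd = dissociation_constant(k_on, k_off)
print(kd * 1e6, "uM")         # dissociation constant in micromolar
print(half_life(k_off), "s")  # complex half-life in seconds
```

Lower K_D (higher affinity) can come from a faster k_on or a slower k_off; the abstract's point is that below K_D ≈ 1 μM, activity attenuates regardless of which of the two drives the change.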

OBJECTIVE: Barbiturate-induced coma can be used in patients to treat intractable intracranial hypertension when other therapies, such as osmotic therapy and sedation, have failed. Despite control of intracranial pressure, cerebral infarction may still occur in some patients, and the effect of barbiturates on outcome remains uncertain. In this study, we examined the relationship between barbiturate infusion and brain tissue oxygen (PbtO2). METHODS: Ten volume-resuscitated brain-injured patients who were treated with pentobarbital infusion for intracranial hypertension and underwent PbtO2 monitoring were studied in a neurosurgical intensive care unit at a university-based Level I trauma center. PbtO2, intracranial pressure (ICP), mean arterial pressure, cerebral perfusion pressure (CPP), and brain temperature were continuously monitored and compared in settings in which barbiturates were or were not administered. RESULTS: Data were available from 1595 hours of PbtO2 monitoring. When pentobarbital administration began, the mean ICP, CPP, and PbtO2 were 18 +/- 10, 72 +/- 18, and 28 +/- 12 mm Hg, respectively. During the 3 hours before barbiturate infusion, the maximum ICP was 24 +/- 13 mm Hg and the minimum CPP was 65 +/- 20 mm Hg. In the majority of patients (70%), we observed an increase in PbtO2 associated with pentobarbital infusion. Within this group, logistic regression analysis demonstrated that a higher likelihood of compromised brain oxygen (PbtO2 < 20 mm Hg) was associated with a decrease in pentobarbital dose after controlling for ICP and other physiological parameters (P < 0.001). In the remaining 3 patients, pentobarbital was associated with lower PbtO2 levels. These patients had higher ICP, lower CPP, and later initiation of barbiturates compared with patients whose PbtO2 increased. 
CONCLUSION: Our preliminary findings suggest that pentobarbital administered for intractable intracranial hypertension is associated with a significant and independent increase in PbtO2 in the majority of patients. However, in some patients with more compromised brain physiology, pentobarbital may have a negative effect on PbtO2, particularly if administered late. Larger studies are needed to examine the relationship between barbiturates and cerebral oxygenation in brain-injured patients with refractory intracranial hypertension and to determine whether PbtO2 responses can help guide therapy.

This paper uses microdata to evaluate the impact on the steady-state unemployment rate of an increase in maximum benefit duration. We evaluate a policy change in Austria that extended maximum benefit duration and use this policy change to estimate the causal impact of benefit duration on labor market flows. We find that the policy change leads to a significant increase in the steady-state unemployment rate and, surprisingly, most of this increase is due to an increase in the inflow into rather than the outflow from unemployment.
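The steady-state logic behind this result can be sketched with the standard two-state flow model, in which the unemployment rate converges to u* = s/(s + f) for separation (inflow) rate s and job-finding (outflow) rate f; the monthly rates below are illustrative, not the Austrian estimates:

```python
def steady_state_unemployment(inflow_rate, outflow_rate):
    """Steady-state unemployment rate u* = s / (s + f) in the standard
    two-state flow model: s is the separation (inflow) rate, f the
    job-finding (outflow) rate."""
    return inflow_rate / (inflow_rate + outflow_rate)

# illustrative monthly rates
u_before  = steady_state_unemployment(0.010, 0.190)  # baseline: 5.0%
u_inflow  = steady_state_unemployment(0.012, 0.190)  # higher inflow s
u_outflow = steady_state_unemployment(0.010, 0.158)  # lower outflow f

print(round(u_before, 4), round(u_inflow, 4), round(u_outflow, 4))
```

Either a rise in s or a fall in f raises u*; the paper's surprising finding is that the Austrian benefit extension worked mainly through the first channel, the inflow into unemployment.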

Due to various contexts and processes, forensic science communities may have different approaches, largely influenced by their criminal justice systems. However, forensic science practices share some common characteristics. One is the assurance of a high (scientific) quality within processes and practices. For most crime laboratory directors and forensic science associations, this issue is conditioned by the triangle of quality, which represents the current paradigm of quality assurance in the field. It consists of the implementation of standardization, certification, accreditation, and an evaluation process. It constitutes a clear and sound way to exchange data between laboratories and enables databasing due to standardized methods ensuring reliable and valid results; but it is also a means of defining minimum requirements for practitioners' skills for specific forensic science activities. The control of each of these aspects offers non-forensic science partners the assurance that the entire process has been mastered and is trustworthy. Most of the standards focus on the analysis stage and do not consider pre- and post-laboratory stages, namely, the work achieved at the investigation scene and the evaluation and interpretation of the results, intended for intelligence beneficiaries or for court. Such localized consideration prevents forensic practitioners from identifying where the problems really lie with regard to criminal justice systems. According to a performance-management approach, scientific quality should not be restricted to standardized procedures and controls in forensic science practice. Ensuring high quality also strongly depends on the way a forensic science culture is assimilated (into specific education training and workplaces) and in the way practitioners understand forensic science as a whole.

In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
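The concern that many significant results are falsely significant can be illustrated with a small simulation: many one-sample z-tests, only a fraction of which examine a real effect. The prior probability of a true effect, the effect size and the sample size are all invented for the sketch:

```python
import random
from math import sqrt, erfc

def z_test_p(sample, sigma=1.0):
    """Two-sided p value of a one-sample z-test of mean 0, known sigma."""
    n = len(sample)
    z = (sum(sample) / n) / (sigma / sqrt(n))
    return erfc(abs(z) / sqrt(2))

random.seed(7)
alpha, n, n_experiments = 0.05, 25, 4000
prior_true = 0.1          # only 10% of tested hypotheses have a real effect
false_pos = true_pos = 0
for _ in range(n_experiments):
    effect = random.random() < prior_true
    mu = 0.8 if effect else 0.0                      # true mean of the data
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    if z_test_p(sample) < alpha:
        if effect:
            true_pos += 1
        else:
            false_pos += 1

# among significant results, the share that are false positives
print(false_pos / (false_pos + true_pos))
```

Even with alpha = 0.05 and high power, roughly a third of the significant results here are false positives, because true effects are rare: the p value is not the probability that the null is true.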

The epithelial Na(+) channel (ENaC) and the acid-sensing ion channels (ASICs) form subfamilies within the ENaC/degenerin family of Na(+) channels. ENaC mediates transepithelial Na(+) transport, thereby contributing to Na(+) homeostasis and the maintenance of blood pressure and the airway surface liquid level. ASICs are H(+)-activated channels found in central and peripheral neurons, where their activation induces neuronal depolarization. ASICs are involved in pain sensation, the expression of fear, and neurodegeneration after ischemia, making them potentially interesting drug targets. This review summarizes the biophysical properties, cellular functions, and physiologic and pathologic roles of the ASIC and ENaC subfamilies. The analysis of the homologies between ENaC and ASICs and the relation between functional and structural information shows many parallels between these channels, suggesting that some mechanisms that control channel activity are shared between ASICs and ENaC. The available crystal structures and the discovery of animal toxins acting on ASICs provide a unique opportunity to address the molecular mechanisms of ENaC and ASIC function to identify novel strategies for the modulation of these channels by pharmacologic ligands.