875 results for "probability of informed trading"


Relevance:

100.00%

Publisher:

Abstract:

A sample scanning confocal optical microscope (SCOM) was designed and constructed in order to perform local measurements of fluorescence, light scattering and Raman scattering. This instrument makes it possible to measure time-resolved fluorescence, Raman scattering and light scattering from the same diffraction-limited spot, so that fluorescence from single molecules and light scattering from metallic nanoparticles can be studied. First, the electric field distribution in the focus of the SCOM was modelled. This enables the design of illumination modes for different purposes, such as the determination of the three-dimensional orientation of single chromophores. Second, a method for the calculation of the de-excitation rates of a chromophore was presented. This permits the comparison of different detection schemes and experimental geometries in order to optimize the collection of fluorescence photons. Both methods were combined to calculate the SCOM fluorescence signal of a chromophore in a general layered system. The fluorescence excitation and emission of single molecules through a thin gold film was investigated experimentally and modelled. It was demonstrated that, owing to the mediation of surface plasmons, single-molecule fluorescence near a thin gold film can be excited and detected with an epi-illumination scheme through the film. Single-molecule fluorescence as close as 15 nm to the gold film was studied in this manner. The fluorescence dynamics (fluorescence blinking and excited-state lifetime) of single molecules was studied in the presence and in the absence of a nearby gold film in order to investigate the influence of the metal on the electronic transition rates. The trace-histogram and autocorrelation methods for the analysis of single-molecule fluorescence blinking were presented and compared via the analysis of Monte-Carlo simulated data. The nearby gold influences the total decay rate in agreement with theory. The presence of the gold had no influence on the intersystem crossing (ISC) rate from the excited state to the triplet, but it increased the transition rate from the triplet to the singlet ground state by a factor of 2. The photoluminescence blinking of Zn0.42Cd0.58Se quantum dots (QDs) on glass and ITO substrates was investigated experimentally as a function of the excitation power (P) and modelled via Monte-Carlo simulations. At low P, it was observed that the probability of a certain on- or off-time follows a negative power law with exponent near 1.6. As P increased, the on-time fraction decreased on both substrates, whereas the off-times did not change. A weak residual memory effect was observed between consecutive on-times and between consecutive off-times, but not between an on-time and the adjacent off-time. All of this suggests the presence of two independent mechanisms governing the lifetimes of the on- and off-states. The simulated data showed Poisson-distributed off- and on-intensities, demonstrating that the observed non-Poissonian on-intensity distribution of the QDs is not a product of the underlying power-law probability and that the blinking of QDs occurs between a non-emitting off-state and a distribution of emitting on-states with different intensities. All the experimentally observed photo-induced effects could be accounted for by introducing a characteristic lifetime tPI of the on-state in the simulations. The QDs on glass presented a tPI proportional to P^-1, suggesting a one-photon process. Light scattering images and spectra of colloidal and C-shaped gold nanoparticles were acquired. The minimum size of a metallic scatterer detectable with the SCOM lies around 20 nm.
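The power-law blinking statistics described above are straightforward to reproduce with a Monte-Carlo simulation of the kind used in the analysis. Below is a minimal sketch (not the thesis code; the exponent, minimum dwell time and total time are illustrative), drawing alternating on- and off-periods by inverse-transform sampling from P(t) proportional to t^(-alpha) with alpha = 1.6:

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_times(alpha, t_min, size, rng):
    """Inverse-transform sampling of P(t) ~ t**(-alpha) for t >= t_min."""
    u = rng.random(size)
    return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def simulate_blinking(total_time, alpha_on=1.6, alpha_off=1.6, t_min=1e-3):
    """Alternate power-law distributed on- and off-periods and return
    the (state, duration) segments of one simulated intensity trace."""
    t, on, trace = 0.0, True, []
    while t < total_time:
        alpha = alpha_on if on else alpha_off
        dt = powerlaw_times(alpha, t_min, 1, rng)[0]
        trace.append(("on" if on else "off", dt))
        t += dt
        on = not on
    return trace

segments = simulate_blinking(total_time=100.0)
```

Histogramming the simulated on- and off-durations recovers the input exponent, which is the consistency check underlying the trace-histogram method mentioned above.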

Relevance:

100.00%

Publisher:

Abstract:

The hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences that floods, and waters in general, may provoke in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what this thesis discusses and analyzes. In operational problems, the ultimate aim of forecasting systems is not to reproduce the river behavior: that is only a means of reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since the literature is often confused on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on the evaluation of the model prediction, that is, on its ability to represent reality, or on the evaluation of what will actually happen on the basis of the information given by the model forecast? Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
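To make the last requirement concrete, the flooding probability within a given time horizon can be estimated from a probabilistic (ensemble) forecast by counting the trajectories that exceed the flood threshold before the horizon, which also yields the distribution of the flooding time. A minimal sketch with synthetic trajectories and a hypothetical threshold, not the thesis model:

```python
import numpy as np

def flood_probability(ensemble, threshold, horizon):
    """Probability that the water level exceeds `threshold` within
    `horizon` steps, plus the first-exceedance times, estimated from
    an ensemble of forecast trajectories.

    ensemble : array of shape (n_members, n_steps) with forecast levels
    """
    window = ensemble[:, :horizon]
    exceeds = window >= threshold
    hit = exceeds.any(axis=1)                  # members that flood at all
    p_flood = hit.mean()
    first = np.where(hit, exceeds.argmax(axis=1), -1)
    times = first[first >= 0]                  # first-exceedance steps
    return p_flood, times

# toy usage: 500 synthetic random-walk trajectories, 48 time steps
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(0.02, 0.3, size=(500, 48)), axis=1) + 2.0
p, t = flood_probability(traj, threshold=4.0, horizon=24)
```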

Relevance:

100.00%

Publisher:

Abstract:

From the institutional point of view, the legal system of intellectual property rights (hereafter, IPR) is one of the incentive institutions of innovation, and it plays a very important role in the development of the economy. According to the law, the owner of an IPR enjoys a kind of exclusive right to use his intellectual property (hereafter, IP); in other words, he enjoys a kind of legal monopoly position in the market. How to protect IPR well and at the same time regulate the abuse of IPR is a topic of great interest in the knowledge-oriented market, and it is the basic research question of this dissertation. In this paper, by way of comparative study and law-and-economics analysis, and based on the theories of the Austrian School of Economics, the writer claims that there is no contradiction between IPR and competition law. However, in the new economy (high-technology industries), there is a real possibility that the owner of an IPR will abuse his dominant position. Given the characteristics of the new economy, such as high rates of innovation, "instant scalability", network externalities and lock-in effects, the IPR "will vest the dominant undertakings with the power not just to monopolize the market but to shift such power from one market to another, to create strong barriers to enter and, in so doing, granting the perpetuation of such dominance for quite a long time."[1] Therefore, in order to keep order in the market, to vitalize competition and innovation, and to benefit the customer, it is common in the EU and the US to apply competition law to regulate IPR abuse. From the Austrian School perspective, especially in Schumpeterian theory, innovation, competition, monopoly and entrepreneurship are inter-correlated; therefore, we should apply a dynamic antitrust model based on these theories to analyse the relationship between IPR and competition law. China is still a developing country with a relatively low capacity for innovation. Therefore, at present, protecting IPR and making good use of the incentive mechanism of the IPR legal system is the most important task for the Chinese government. However, according to investigation reports,[2] some multinational companies, building on their IPR and capital advantages, have indeed obtained dominant or monopoly market positions in some aspects of some industries, and some IPR abuses have been conducted by such companies. The Chinese government should therefore pay close attention to regulating any IPR abuse. However, as to how to effectively regulate IPR abuse by way of competition law in the Chinese situation, from the perspectives of law-and-economics theory, of legislation and of judicial practice, there is still a long way for China to go.

Relevance:

100.00%

Publisher:

Abstract:

One of the ways in which the legal system has responded to different sets of problems is the blurring of the traditional boundaries of criminal law, both procedural and substantive. This study aims to explore under what conditions this trend leads to the improvement of society's welfare, by focusing on two distinguishing sanctions in criminal law: incarceration and social stigma. In analyzing how incarceration affects an individual's incentive to violate a legal standard, we considered the crucial role of the time constraint. This aspect has not been fully explored in the law and economics literature, especially with respect to the relative benefits of imposing either a fine or a prison term. We observed that when individuals are heterogeneous with respect to wealth and wage income, and when the level of activity can be considered a normal good, only the middle-wage and middle-income groups can be adequately deterred by a regime of fixed fines alone. The existing literature only considers the case of the very poor, deemed judgment-proof. However, since imprisonment is a socially costly way to deprive individuals of their time, other alternatives may be sought, such as discriminatory monetary fines, partial incapacitation and other alternative sanctions. According to traditional legal theory, criminal law is obeyed not mainly because of the monetary sanctions but because of the stigma arising from the community's moral condemnation that accompanies conviction or mere suspicion. However, it is not sufficiently clear whether social stigma always accompanies a criminal conviction. We addressed this issue by identifying the circumstances in which a criminal conviction carries an additional social stigma. Our results show that social stigma accompanies a conviction under the following conditions: first, when the law coincides with society's social norms; and second, when the prohibited act provides information on an unobservable attribute or trait of an individual that is crucial in establishing or maintaining social relationships beyond mere economic relationships. Thus, even if the social planner does not impose the social sanction directly, the impact of social stigma can still be influenced by the probability of conviction and the level of the monetary fine imposed, as well as by the varying degree of correlation between the legal standard violated and the social traits or attributes of the individual. In this respect, criminal law serves as an institution that facilitates cognitive efficiency in the process of imposing the social sanction, to the extent that the rest of society is boundedly rational and uses judgment heuristics. Paradoxically, using criminal law to invoke stigma for the violation of a legal standard may also serve to undermine its strength. To sum up, the results of our analysis reveal that the scope of criminal law is narrow both for the purposes of deterrence and for cognitive efficiency. While there are certain conditions under which the enforcement of criminal law may lead to an increase in social welfare, particularly with respect to incarceration and stigma, we have also identified the channels through which they affect behavior. Since such mechanisms can be replicated in less costly ways, society should first seek to employ those institutions before turning to criminal law as a last resort.

Relevance:

100.00%

Publisher:

Abstract:

During the last few years, a great deal of interest has arisen concerning the application of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory and bet-hedging strategies in bacterial growth, among many others, cannot be described by continuous stochastic models due to their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyse their properties, including experimental data. In the first part of this work, the effect of stochastic stability is discussed on a toy model of the genetic switch that triggers cellular division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that adds a chemical compound, called a phosphate group, to a specific substrate and removes it again. I have investigated how adding noise to the enzyme (which is usually present in the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed the theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. An approximation method for the exact solution in the two-dimensional case is discussed, together with the relationship between this method and the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a population of mouse cells and the quantity observed via fluorescence microscopy is shown.
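Systems governed by the CME at these copy numbers are typically sampled exactly with Gillespie's stochastic simulation algorithm. The sketch below simulates a minimal futile cycle with fixed enzyme copy numbers and hypothetical rate constants (the thesis additionally lets the enzyme numbers fluctuate):

```python
import numpy as np

def gillespie_futile_cycle(S_total=100, E1=50, E2=50,
                           k1=0.01, k2=0.01, t_end=100.0, seed=0):
    """Gillespie SSA for a minimal futile cycle:
    S   --E1--> S_p   (phosphorylation,   propensity k1*E1*S)
    S_p --E2--> S     (dephosphorylation, propensity k2*E2*S_p)
    Returns sampled times and phosphorylated counts S_p."""
    rng = np.random.default_rng(seed)
    t, Sp = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a1 = k1 * E1 * (S_total - Sp)    # phosphorylation propensity
        a2 = k2 * E2 * Sp                # dephosphorylation propensity
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # waiting time to next reaction
        Sp += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        counts.append(Sp)
    return np.array(times), np.array(counts)

times, counts = gillespie_futile_cycle()
```

Histogramming `counts` approximates the stationary distribution of phosphorylated substrate, the quantity whose dependence on enzyme noise is studied above.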

Relevance:

100.00%

Publisher:

Abstract:

In the last three decades, remote sensing and GIS have become increasingly important in the geosciences as a means of improving conventional methods of data collection and map production. The present work deals with the application of remote sensing and geographic information systems (GIS) to geomorphological investigations. Above all, the combination of the two techniques makes it possible to record geomorphological forms both in overview and in detail. Topographic and geological maps, satellite images and climate data serve as the basis for this work. The thesis consists of six chapters. The first chapter gives a general overview of the study area, describing its morphological units, its climatic conditions (in particular the aridity indices of the coastal and mountain landscapes) and its settlement pattern. Chapter 2 deals with the regional geology and stratigraphy of the study area; the main formations are identified with the help of ETM satellite images, using colour band composites, image ratioing and supervised classification. Chapter 3 describes the structurally controlled surface forms in order to clarify the interaction between tectonics and geomorphological processes. A variety of methods, for example image processing, are used to reliably interpret the lineaments present in the mountain body, and special filtering methods are applied to map the most important lineaments. Chapter 4 presents an attempt at the automated extraction of the drainage network from processed SRTM elevation data. It is discussed in detail to what extent the quality of the small-scale SRTM data is comparable to that of large-scale topographic maps. Furthermore, hydrological parameters are derived from a qualitative and quantitative analysis of the discharge regime of individual wadis, and the origin of the drainage systems is interpreted on the basis of geomorphological and geological evidence. Chapter 5 deals with the assessment of the hazard posed by episodic wadi floods. The probability of their annual occurrence, and of the occurrence of strong floods at intervals of several years, is traced back historically to 1921. The significance of rain-bearing depressions that develop over the Red Sea and can produce runoff is investigated with the IDW (Inverse Distance Weighted) method, and further rain-bringing weather situations are analysed with the help of Meteosat infrared images. The period 1990-1997, in which heavy rainfall triggered wadi floods, is examined more closely. Flood events and flood levels are determined from hydrographic data (gauge measurements), and the land use and settlement structure in the catchment area of a wadi are also taken into account. Chapter 6 deals with the different coastal forms on the western side of the Red Sea, for example erosional forms, constructional forms and submerged forms. The concluding part addresses the stratigraphy and chronological classification of submarine terraces on coral reefs, as well as their comparison with similar terraces on the Egyptian Red Sea coast west and east of the Sinai Peninsula.
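The IDW method used in chapter 5 weights each observation by the inverse of its distance to the target point, raised to a power. A minimal sketch with hypothetical rain-gauge coordinates and rainfall values:

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation: each target point gets
    a weighted mean of station values, weights ~ 1/distance**power."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :],
                       axis=-1)                    # (n_targets, n_stations)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# hypothetical rain gauges: (x, y) in km, rainfall in mm
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([12.0, 30.0, 8.0, 25.0])
grid = np.array([[5.0, 5.0], [2.0, 8.0]])
print(idw(stations, rain, grid))
```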

Relevance:

100.00%

Publisher:

Abstract:

The dissertation is structured in three parts. The first part compares US and EU agricultural policies since the end of WWII. There is not enough evidence to claim that agricultural support has a negative impact on obesity trends. I discuss the possibility of an exchange of best practices to fight obesity. There are relevant economic, societal and legal differences between the US and the EU; however, partnerships against obesity are welcome. The second part presents a socio-ecological model of the determinants of obesity. I employ an interdisciplinary model because it captures the simultaneous influence of several variables. Obesity is an interaction of pre-birth, primary and secondary socialization factors. To test the significance of each factor, I use data from the National Longitudinal Survey of Adolescent Health and compare the average body mass index across different populations. Differences in means are statistically significant. In the last part I use the National Survey of Children's Health. I analyze the effect that family characteristics, the built environment, cultural norms and individual factors have on the body mass index (BMI), using Ordered Probit models and calculating the marginal effects, with State and ethnicity fixed effects to control for unobserved heterogeneity. I find that southern US States tend to have, on average, a higher probability of obesity. On the ethnicity side, White Americans have a lower BMI than Black Americans, Hispanics and American Indians/Native Islanders, and being Asian is associated with a lower probability of being obese. In neighborhoods where the trust level and safety perception are higher, children are less overweight and obese. Similar results hold for higher levels of parental income and education. Breastfeeding has a negative impact on BMI. Higher values of measures of behavioral disorders have a positive and significant impact on obesity, as predicted by the theory.
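An Ordered Probit of a categorical BMI outcome on such covariates can be sketched with statsmodels' OrderedModel; the data below are synthetic and the regressors are stand-ins for the survey variables, not the NSCH data:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# synthetic data: ordered BMI category (0=normal, 1=overweight, 2=obese)
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),       # parental income (stand-in)
    "trust": rng.normal(0, 1, n),          # neighbourhood trust score
    "breastfed": rng.integers(0, 2, n),
})
latent = (-0.02 * df["income"] - 0.3 * df["trust"]
          - 0.2 * df["breastfed"] + rng.normal(size=n))
df["bmi_cat"] = pd.cut(latent, [-np.inf, -1.5, -0.5, np.inf], labels=False)

model = OrderedModel(df["bmi_cat"],
                     df[["income", "trust", "breastfed"]],
                     distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
probs = res.predict()   # per-observation category probabilities
```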

Relevance:

100.00%

Publisher:

Abstract:

In the first chapter, we consider the joint estimation of objective and risk-neutral parameters for SV option pricing models. We propose a strategy which exploits the information contained in large heterogeneous panels of options, and we apply it to S&P 500 index and index call options data. Our approach breaks the stochastic singularity between contemporaneous option prices by assuming that every observation is affected by measurement error. We evaluate the likelihood function by using an MC-IS strategy combined with a particle filter algorithm. The second chapter examines the impact of different categories of traders on market transactions. We estimate a model which takes into account traders' identities at the transaction level, and we find that stock prices follow the direction of institutional trading. These results are obtained with data from an anonymous market. To explain our estimates, we examine the informativeness of a wide set of market variables and find that most of them are unambiguously significant for inferring the identity of traders. The third chapter investigates the relationship between the categories of market traders and three definitions of financial durations. We consider trade, price and volume durations, and we adopt a Log-ACD model in which we include information on traders at the transaction level. As to trade durations, we observe an increase in trading frequency when informed traders and the liquidity provider intensify their presence in the market. For price and volume durations, we find that the same effect depends on the state of market activity. The fourth chapter proposes a strategy to express order aggressiveness in quantitative terms. We consider a simultaneous equation model to examine price and volume aggressiveness at Euronext Paris, and we analyse the impact of a wide set of order book variables on the price-quantity decision.
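As an illustration of the particle-filter ingredient of the first chapter's likelihood evaluation, the following is a bootstrap particle filter for a basic discrete-time stochastic volatility model (a textbook specification with assumed parameters, not the chapter's exact model or its MC-IS refinement):

```python
import numpy as np

def sv_particle_loglik(returns, mu, phi, sigma_eta,
                       n_particles=2000, seed=0):
    """Bootstrap particle filter log-likelihood for a basic SV model:
        h_t = mu + phi*(h_{t-1} - mu) + sigma_eta * eta_t
        r_t = exp(h_t / 2) * eps_t,   eta_t, eps_t ~ N(0, 1)
    Requires |phi| < 1 (stationary initial distribution)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2), n_particles)
    loglik = 0.0
    for r in returns:
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        var = np.exp(h)
        w = np.exp(-0.5 * (np.log(2 * np.pi * var) + r**2 / var))  # N(0, e^h)
        loglik += np.log(w.mean())
        w /= w.sum()
        h = rng.choice(h, size=n_particles, p=w)  # multinomial resampling
    return loglik

# toy usage on synthetic returns
r = np.random.default_rng(1).normal(0, 0.01, 250)
print(sv_particle_loglik(r, mu=-9.0, phi=0.97, sigma_eta=0.2))
```

Maximizing this simulated log-likelihood over (mu, phi, sigma_eta) gives a crude estimator; the importance-sampling layer mentioned above serves to reduce the variance of the likelihood evaluation.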

Relevance:

100.00%

Publisher:

Abstract:

CdTe and Cu(In,Ga)Se2 (CIGS) thin film solar cells are fabricated, electrically characterized and modelled in this thesis. We start from the fabrication of CdTe thin film devices, where an R.F. magnetron sputtering system is used to deposit the CdS/CdTe based solar cells. The chlorine post-growth treatment is modified in order to uniformly cover the cell surface and reduce the probability of creating pinholes and shunting pathways, which, in turn, reduces the series resistance. Deionized water etching is proposed, for the first time, as the simplest solution to optimize the effect of shunt resistance, stability and metal-semiconductor inter-diffusion at the back contact. Next, oxygen incorporation during CdTe layer deposition is proposed, a technique that has rarely been examined for R.F. sputtering deposition of such devices. These experiments are characterized electrically and optically by current-voltage characterization, scanning electron microscopy, X-ray diffraction and optical spectroscopy. Furthermore, for the first time, the degradation rate of CdTe devices over time is numerically simulated with the AMPS and SCAPS simulators. It is proposed that the instability of the electrical parameters is coupled with the material properties and external stresses (bias, temperature and illumination). Then, CIGS materials are simulated and characterized by several techniques: surface photovoltage spectroscopy is used (as a novel approach) to extract the band gap of graded band gap CIGS layers and surface or bulk defect states, and the surface roughness is scanned by atomic force microscopy on the nanometre scale to obtain the surface topography of the film. Modified equivalent circuits are proposed, and band gap graded profiles are simulated with the AMPS simulator; several graded profiles are examined in order to optimize their thickness, grading strength and electrical parameters. Furthermore, the transport mechanisms and the Auger generation phenomenon are modelled in CIGS devices.

Relevance:

100.00%

Publisher:

Abstract:

During recent decades, economists' interest in gender-related issues has risen. Researchers aim to show how economic theory can be applied to gender-related topics such as peer effects, labor market outcomes and education. This dissertation aims to contribute to our understanding of the interaction, inequality and sources of differences across genders, and it consists of three empirical papers in the research area of gender economics. The aim of the first paper ("Separating gender composition effect from peer effects in education") is to demonstrate the importance of considering endogenous peer effects in order to identify the gender composition effect. This fact is analytically illustrated by employing Manski's (1993) linear-in-means model. The paper derives an innovative solution to the simultaneous identification of endogenous and exogenous peer effects: the gender composition effect of interest is estimated from auxiliary reduced-form estimates after identifying the endogenous peer effect with Graham's (2008) variance restriction method. The paper applies this methodology to two data sets, from American and Italian schools. The motivation of the second paper ("Gender differences in vulnerability to an economic crisis") is to analyze the different effects of the recent economic crisis on the labor market outcomes of men and women. Using a triple-differences method (before/after the crisis, harder/milder hit sectors, men/women) on British data at the occupation level, the paper shows that men suffer more than women in terms of the probability of losing their job. Several explanations for the findings are proposed. The third paper ("Gender gap in educational outcome") is concerned with a controversial academic debate on the existence, degree and origin of the gender gap in test scores. The existence of a gap both in mean scores and in the variability around the mean is documented and analyzed, and the origins of the gap are investigated by looking at a wide range of possible explanations.
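The triple-differences estimate corresponds to the coefficient on the three-way interaction in a regression of the following form. The sketch uses synthetic data and placeholder variable names, not the British occupation-level data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic occupation-level observations: job-loss outcome by gender,
# sector exposure to the crisis, and period (before/after the onset)
rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "hard_hit": rng.integers(0, 2, n),   # sector badly hit by the crisis
    "post": rng.integers(0, 2, n),       # after the crisis onset
})
df["job_loss"] = (0.05 + 0.04 * df.post * df.hard_hit
                  - 0.02 * df.post * df.hard_hit * df.female
                  + rng.normal(0, 0.1, n))

# the DDD effect is the coefficient on post:hard_hit:female
ddd = smf.ols("job_loss ~ post * hard_hit * female", data=df).fit(
    cov_type="HC1")                      # heteroskedasticity-robust SEs
print(ddd.params["post:hard_hit:female"])
```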

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study is to measure the impact of the national subsidy scheme on the olive and fruit sector in two regions of Albania, Shkodra and Fier. From the methodological point of view, we use a non-parametric approach based on propensity score matching. This method overcomes the problem of the missing counterfactual by constructing a counterfactual scenario. In a first step, the conditional probability of participating in the program is computed; afterwards, different matching estimators are applied to establish whether the subsidies have affected sector performance. One of the strengths of this study lies in the data: cross-sectional primary data was gathered through about 250 interviews. We have not found empirical evidence of significant effects of the government aid program on production. Differences in production found between beneficiaries and non-beneficiaries disappear after adjustment by the conditional probability of participating in the program. This suggests that subsidized farmers would have performed better than non-subsidized households even in the absence of production grants, revealing program self-selection. On the other hand, the scheme has positively affected the farm structure, increasing the area under cultivation, but yields have not increased for beneficiaries compared to non-beneficiaries. These combined results shed light on the reason for the missing impact: it is reasonable to believe that the new plantations, in particular in the case of olives, have not yet reached full production. Therefore, we have reason to expect positive impacts in the future. Concerning the qualitative results, the extension of the area under cultivation is strongly constrained by the small farm size, which, together with a thin land market, makes expansion beyond farm boundaries extremely difficult.
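A minimal version of the matching step, with the propensity score from a logistic regression and one-nearest-neighbour matching of beneficiaries to non-beneficiaries (synthetic data, not the survey sample):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(X, treated, outcome):
    """Propensity score matching: estimate the average treatment effect
    on the treated (ATT) by 1-nearest-neighbour matching on the score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated) \
             .predict_proba(X)[:, 1]               # P(treated | X)
    t, c = treated == 1, treated == 0
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t].reshape(-1, 1))
    matched = outcome[c][idx.ravel()]              # matched controls
    return (outcome[t] - matched).mean()

# synthetic farms: covariates drive both selection and production,
# with no true subsidy effect, so the ATT should be near zero
rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 3))                        # e.g. farm size, age
treated = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)
outcome = 2.0 * X[:, 0] + rng.normal(0, 1, n)
print(psm_att(X, treated, outcome))
```

Comparing the naive beneficiary/non-beneficiary difference with the matched estimate reproduces, in miniature, the self-selection pattern reported above.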

Relevance:

100.00%

Publisher:

Abstract:

The present work reports the outcome of the GIMEMA CML WP study CML0811, an independent trial investigating nilotinib as front-line treatment in chronic phase chronic myeloid leukemia (CML), together with the results of a proteomic analysis of CD34+ cells collected at CML diagnosis, compared to their counterparts from healthy donors. Our study confirmed that nilotinib is highly effective in preventing progression to the accelerated/blast phase, a condition that is still associated with high mortality rates today. Despite the relatively short follow-up, cardiovascular issues, particularly atherosclerotic adverse events (AEs), have emerged, and the frequency of these AEs may counterbalance the anti-leukemic efficacy. The deep molecular response rates in our study compare favorably with those obtained with imatinib in historic cohorts and confirm the findings of the company-sponsored ENESTnd study. Considering the increasing rates of deep molecular response over time that we observed, a significant proportion of patients will be candidates for treatment discontinuation in the coming years, with a higher probability of remaining disease-free in the long term. The additional and complex changes we found at the proteomic level in CML CD34+ cells should be taken into account in the investigation of novel targeted therapies aimed at the eradication of the disease.

Relevance:

100.00%

Publisher:

Abstract:

Urban centers significantly contribute to anthropogenic air pollution, although they cover only a minor fraction of the Earth's land surface. Since the worldwide degree of urbanization is steadily increasing, the anthropogenic contribution of urban centers to air pollution is expected to become more substantial in future air quality assessments. The main objective of this thesis was to obtain a more profound insight into the dispersion and deposition of aerosol particles from 46 individual major population centers (MPCs), as well as their regional and global influence on the atmospheric distribution of several aerosol types. For the first time, this was assessed within one model framework, for which the global model EMAC was applied with different representations of aerosol particles. First, in an approach with passive tracers and a setup in which the results depend only on the source location and on the size and solubility of the tracers, several metrics and a regional climate classification were used to quantify the major outflow pathways, both vertically and horizontally, and to compare the balance between pollution export away from, and pollution build-up around, the source points. Then, in a more comprehensive approach, the anthropogenic emissions of key trace species were changed at the MPC locations to determine the cumulative impact of the MPC emissions on the atmospheric aerosol burdens of black carbon, particulate organic matter, sulfate and nitrate. Ten different mono-modal passive aerosol tracers were continuously released at the same constant rate at each emission point. The results clearly showed that, on average, about five times more mass is advected quasi-horizontally at low levels than is exported into the upper troposphere. The strength of the low-level export is mainly determined by the location of the source, while the vertical transport is mainly governed by the lifting potential and the solubility of the tracers. As with insoluble gas-phase tracers, the low-level export of aerosol tracers is strongest at middle and high latitudes, while the regions of strongest vertical export differ between aerosol tracers (temperate winter dry climates) and gas-phase tracers (tropics). The emitted mass fraction that remains around the MPCs is largest in regions where aerosol tracers have short lifetimes; this mass is also critical for assessing the impact on humans. However, the number of people who live in a strongly polluted region around an urban center depends more on the population density than on the size of the area affected by strong air pollution. Another major result was that fine aerosol particles (diameters smaller than 2.5 micrometers) from MPCs undergo substantial long-range transport, with about half of the emitted mass being deposited more than 1000 km away from the source. In contrast to this diluted remote deposition, some areas around the MPCs experience high deposition rates, especially regions that are frequently affected by heavy precipitation or are situated in poorly ventilated locations. Moreover, most MPC aerosol emissions are removed over land surfaces; in particular, forests receive more deposition from MPC pollutants than other land ecosystems. In addition, it was found that the generic treatment of aerosols has no substantial influence on the major conclusions drawn in this thesis.
Moreover, in the more comprehensive approach, it was found that emissions of black carbon, particulate organic matter, sulfur dioxide and nitrogen oxides from MPCs influence the atmospheric burdens of the various aerosol types very differently, with impacts generally being larger for the secondary species, sulfate and nitrate, than for the primary species, black carbon and particulate organic matter. While the changes in the burdens of sulfate, black carbon and particulate organic matter respond almost linearly to changes in the emission strength, the formation of nitrate was found to be contingent upon many more factors (e.g., the abundance of sulfuric acid) than the strength of the nitrogen oxide emissions alone. The generic tracer experiments were further extended to conduct the first global-scale assessment of the cumulative risk of contamination from multiple nuclear reactor accidents. For this, several factors had to be taken into account: the probability of major accidents, the cumulative deposition field of the radionuclide cesium-137, and a threshold value that defines contamination. After collecting the necessary data and accounting for uncertainties, it was found that the risk is highest in western Europe, the eastern US and Japan, where contamination by major accidents is expected, on average, about every 50 years.
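The accident-frequency ingredient of such a risk assessment is commonly modelled as a Poisson process. The sketch below shows the arithmetic under an assumed constant per-reactor accident rate; the rate and the reactor count are illustrative placeholders, not the values derived in the thesis:

```python
import numpy as np

def p_at_least_one(rate_per_reactor_year, n_reactors, years):
    """P(>= 1 major accident) under a homogeneous Poisson model."""
    lam = rate_per_reactor_year * n_reactors * years
    return 1.0 - np.exp(-lam)

# assumed values: ~440 operating reactors and one major accident
# per 5000 reactor-years (both numbers are illustrative)
rate, n = 1.0 / 5000.0, 440
print("P(accident within 1 yr):", p_at_least_one(rate, n, 1))
print("mean time between accidents (yr):", 1.0 / (rate * n))
```

In the full assessment, this frequency would be combined with the cesium-137 deposition field and the contamination threshold to obtain the regional return periods quoted above.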

Relevance:

100.00%

Publisher:

Abstract:

How to evaluate the cost-effectiveness of repair/retrofit interventions vs. demolition/replacement, and what level of shaking intensity the chosen repair/retrofit technique can sustain, are open questions affecting the pre-earthquake prevention, post-earthquake emergency and reconstruction phases. The (mis)conception that the cost of retrofit interventions increases linearly with the achieved seismic performance (%NBS) often discourages stakeholders from considering repair/retrofit options in a post-earthquake damage situation. Similarly, in the pre-earthquake phase, only the minimum (by-law) level of %NBS might be targeted, leading in some cases to no action. Furthermore, the performance measure compelling owners to take action, the %NBS, is generally evaluated deterministically. Because it does not directly reflect epistemic and aleatory uncertainties, the assessment can result in misleading confidence in the expected performance. The present study aims to contribute to the delicate decision-making process of repair/retrofit vs. demolition/replacement by developing a framework that assists stakeholders with the evaluation of the long-term losses and benefits of an increment in their initial investment (the targeted retrofit level), and by highlighting the uncertainties hidden behind a deterministic approach. For a pre-1970 case study building, different retrofit solutions are considered, targeting different levels of %NBS, and the actual probability of reaching collapse under a suite of ground motions is evaluated, providing a correlation between %NBS and risk. Both a simplified and a probabilistic loss modelling are then undertaken to study the relationship between %NBS and expected direct and indirect losses.
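The correlation between %NBS and risk rests on a collapse fragility estimated from the ground-motion suite. A common approach, shown here as a sketch rather than the thesis procedure, fits a lognormal fragility curve by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_fragility(im, collapsed):
    """Fit a lognormal collapse fragility P(C|IM) = Phi(ln(im/theta)/beta)
    by maximum likelihood from a suite of ground-motion analyses.

    im        : intensity measure of each record (e.g. Sa in g)
    collapsed : 1 if the analysis reached collapse, else 0
    """
    def nll(params):
        theta, beta = params
        p = np.clip(norm.cdf(np.log(im / theta) / beta), 1e-9, 1 - 1e-9)
        return -np.sum(collapsed * np.log(p)
                       + (1 - collapsed) * np.log(1 - p))
    res = minimize(nll, x0=[np.median(im), 0.4],
                   bounds=[(1e-3, None), (1e-3, None)])
    return res.x   # (median capacity theta, dispersion beta)

# hypothetical suite: 40 records, stronger shaking -> more collapses
rng = np.random.default_rng(5)
im = rng.uniform(0.1, 2.0, 40)
collapsed = (rng.random(40) < norm.cdf(np.log(im / 0.9) / 0.5)).astype(int)
theta, beta = fit_fragility(im, collapsed)
```

Evaluating the fitted curve at the design-level intensity gives the collapse probability associated with each retrofit option (each %NBS level).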

Relevance:

100.00%

Publisher:

Abstract:

One of the most important challenges in chemistry and materials science is the connection between the composition of a compound and its chemical and physical properties. In solids, these are greatly influenced by the crystal structure.

The prediction of hitherto unknown crystal structures under external conditions such as pressure and temperature is therefore one of the most important goals of theoretical chemistry. The stable structure of a compound is the global minimum of the potential energy surface, the high-dimensional representation of the enthalpy of the investigated system with respect to its structural parameters. Because the complexity of the problem grows exponentially with the system size, it can only be solved via heuristic strategies.

Improvements to the artificial bee colony method, in which the local exploration of the potential energy surface is carried out by a large number of independent walkers, are developed and implemented. The result is an improved communication scheme between these walkers, which directs the search towards the most promising areas of the potential energy surface.

The minima hopping method uses short molecular dynamics simulations at elevated temperatures to direct the structure search from one local minimum of the potential energy surface to the next. A modification, in which the local information around each minimum is extracted and used to optimize the search direction, is developed and implemented. Our method uses this local information to increase the probability of finding new, lower local minima, which leads to enhanced performance of the global optimization algorithm.

Hydrogen is a highly relevant system, due to the possibility of finding a metallic phase and even a superconductor with a high critical temperature. An application of a structure prediction method to SiH12 finds stable crystal structures in this material; additionally, it becomes metallic at relatively low pressures.
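Minima hopping itself is not part of standard libraries, but SciPy's closely related basin-hopping routine illustrates the minimum-to-minimum search logic on a toy rugged potential (the potential and parameters below are illustrative, not a real potential energy surface):

```python
import numpy as np
from scipy.optimize import basinhopping

# toy rugged "potential energy surface": many local minima,
# global minimum at the origin
def pes(x):
    return np.sum(x**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

# Basin hopping alternates random perturbations ("hops") with local
# minimization and accepts moves with a Metropolis criterion at
# temperature T; minima hopping replaces the random hops with short
# MD runs and adapts the temperature on the fly.
result = basinhopping(pes, x0=np.array([2.7, -1.9]),
                      niter=200, T=1.0, stepsize=0.8)
print(result.x, result.fun)   # should land near the origin
```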