963 results for: hyperbolic fourth-R quadratic equation
Abstract:
We report the quadratic nonlinearity of one- and two-electron oxidation products of the first series of transition metal complexes of meso-tetraphenylporphyrin (TPP). Among many MTPP complexes, only CuTPP and ZnTPP show reversible oxidation/reduction cycles as seen from cyclic voltammetry experiments. While centrosymmetric neutral metalloporphyrins have zero first hyperpolarizability, β, as expected, the cation radicals and dications of CuTPP and ZnTPP have very high β values. The one- and two-electron oxidation of the MTPPs leads to symmetry-breaking of the metal−porphyrin core, resulting in a large β value that is perhaps aided in part by contributions from the two-photon resonance enhancement. The calculated static first hyperpolarizabilities, β0, which are evaluated in the framework of density functional theory by a coupled perturbed Hartree−Fock method, support the experimental trend. The switching of optical nonlinearity has been achieved between the neutral and the one-electron oxidation products but not between the one- and the two-electron oxidation products since dications that are electrochemically reversible are unstable due to the formation of stable isoporphyrins in the presence of nucleophiles such as halides.
Abstract:
In this study, we investigate the qualitative and quantitative effects of an R&D subsidy for a clean technology and a Pigouvian tax on a dirty technology on environmental R&D when it is uncertain how long the research takes to complete. The model is formulated as an optimal stopping problem, in which the number of successes required to complete the R&D project is finite and learning about the probability of success is incorporated. We show that the optimal R&D subsidy with the consideration of learning is higher than that without it. We also find that an R&D subsidy performs better than a Pigouvian tax unless suppliers have sufficient incentives to continue cost-reduction efforts after the new technology successfully replaces the old one. Moreover, by using a two-project model, we show that a uniform subsidy is better than a selective subsidy.
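The optimal-stopping-with-learning setup described above can be sketched numerically. The following toy dynamic program (all parameters are invented for illustration, not taken from the paper) tracks a Beta posterior over the unknown success probability and compares continuing R&D with abandoning the project:

```python
from functools import lru_cache

# Invented parameters: the project needs N_SUCC successes, each trial
# costs COST, completion pays PAYOFF, and the per-trial success
# probability is unknown with a Beta(A0, B0) prior updated after
# every trial (success -> a + 1, failure -> b + 1).
N_SUCC, COST, PAYOFF, HORIZON = 3, 1.0, 20.0, 40
A0, B0 = 1.0, 1.0      # uniform prior on the success probability
DISCOUNT = 0.95

@lru_cache(maxsize=None)
def value(n_left, a, b, t):
    """Optimal expected value of the R&D project: continue or abandon (0)."""
    if n_left == 0:
        return PAYOFF
    if t == HORIZON:
        return 0.0
    p = a / (a + b)    # posterior mean of the success probability
    cont = -COST + DISCOUNT * (
        p * value(n_left - 1, a + 1, b, t + 1)        # success: update a
        + (1 - p) * value(n_left, a, b + 1, t + 1))   # failure: update b
    return max(0.0, cont)  # stopping (abandoning the project) yields 0
```

Raising the payoff or lowering the effective cost with a per-trial subsidy enlarges the set of beliefs at which continuation is optimal, which is the qualitative channel the abstract discusses.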
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, e.g. black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has been no direct experimental evidence contradicting general relativity so far - on the contrary, it has passed a variety of observational tests - it is worth asking: why should the effective theory of gravity be of the exact form of general relativity? If general relativity is modified, how do the predictions of the theory change? Furthermore, how far can we go with the changes before we are faced with contradictions with experiments? Along with the changes, could there be new phenomena that we could measure to find hints of the form of the quantum theory of gravity? This thesis is on a class of modified gravity theories called f(R) models, and in particular on the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars like neutron stars.
Due to the nature of f(R) models, the role of independent connection of the spacetime is emphasized throughout the thesis.
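For orientation, the standard textbook form of the f(R) action and its field equations (general background material, not specific results of the thesis) reads, in the metric formulation,

```latex
S = \frac{1}{2\kappa}\int \mathrm{d}^4x \,\sqrt{-g}\, f(R) + S_{\mathrm{m}}[g_{\mu\nu},\psi],
\qquad
f'(R)\,R_{\mu\nu} - \tfrac{1}{2} f(R)\, g_{\mu\nu}
+ \bigl(g_{\mu\nu}\Box - \nabla_\mu \nabla_\nu\bigr) f'(R)
= \kappa\, T_{\mu\nu}.
```

In the Palatini formulation, which treats the connection as independent of the metric (the role of the independent connection emphasized above), the field equations instead become

```latex
f'(\mathcal{R})\,\mathcal{R}_{\mu\nu}(\Gamma) - \tfrac{1}{2} f(\mathcal{R})\, g_{\mu\nu} = \kappa\, T_{\mu\nu},
\qquad
\nabla^{\Gamma}_{\alpha}\!\left(\sqrt{-g}\, f'(\mathcal{R})\, g^{\mu\nu}\right) = 0.
```

Setting f(R) = R reduces both formulations to general relativity.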
Abstract:
Several excited states of Ds and Bs mesons have been discovered in the last six years: BaBar, CLEO and Belle discovered the very narrow states D(s0)*(2317)+- and D(s1)(2460)+- in 2003, and the CDF and D0 Collaborations reported the observation of two narrow Bs resonances, B(s1)(5830)0 and B*(s2)(5840)0, in 2007. To keep up with experiment, meson excited states should be studied from the theoretical side as well. The theory that describes the interaction between quarks and gluons is quantum chromodynamics (QCD). In this thesis the properties of the meson states are studied using the discretized version of the theory - lattice QCD. This allows us to perform QCD calculations from first principles, and "measure" not just energies but also the radial distributions of the states on the lattice. This gives valuable theoretical information on the excited states, as we can extract the energy spectrum of a static-light meson up to D-wave states (states with orbital angular momentum L=2). We are thus able to predict where some of the excited meson states should lie. We also pay special attention to the order of the states, to detect possible inverted spin multiplets in the meson spectrum, as predicted by H. Schnitzer in 1978. This inversion is connected to the confining potential of the strong interaction. The lattice simulations can also help us understand the strong interaction better, as the lattice data can be treated as "experimental" data and used in testing potential models. In this thesis an attempt is made to explain the energies and radial distributions in terms of a potential model based on a one-body Dirac equation. The aim is to get more information about the nature of the confining potential, as well as to test how well the one-gluon exchange potential explains the short-range part of the interaction.
Abstract:
A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, the testing and development of the MultiTrans code has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been indicated by verifying and validating the code performance for different types of neutral and charged particle transport problems, reported in separate publications.
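As a rough illustration of the lowest-order member of the simplified spherical harmonics family: SP1 reduces to the one-group neutron diffusion equation, which can be solved on a 1D slab with a few lines of finite differences. This is generic textbook numerics, not MultiTrans itself, and all cross sections and dimensions below are invented:

```python
import numpy as np

# SP1 limit of the simplified spherical harmonics hierarchy:
#   -D * phi'' + Sigma_a * phi = S
# solved on a 1D slab with finite differences. All numbers invented.
nx, length = 50, 10.0                # cells, slab width (cm)
h = length / nx
D, sigma_a, src = 1.0, 0.05, 1.0     # diffusion coeff., absorption, source

# Tridiagonal system with zero-flux (phi = 0) boundaries.
A = np.zeros((nx, nx))
for i in range(nx):
    A[i, i] = 2.0 * D / h**2 + sigma_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < nx - 1:
        A[i, i + 1] = -D / h**2
phi = np.linalg.solve(A, np.full(nx, src))  # scalar flux in each cell
```

Higher SPN orders couple several such diffusion-like equations, which is what makes the approximation tractable compared with a full transport solve.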
Abstract:
Half-sandwich complexes of the type [CpM(CO)(n)X] {X = Cl, Br, I; if M = Fe, Ru, n = 2; if M = Mo, n = 3} and [CpNiPPh3X] {X = Cl, Br, I} have been synthesized and their second-order molecular nonlinearity (beta) measured at 1064 nm in CHCl3 by the hyper-Rayleigh scattering technique. Iron complexes consistently display larger beta values than ruthenium complexes, while nickel complexes have marginally larger beta values than iron complexes. In the presence of an acceptor ligand such as CO or PPh3, the role of the halogen atom is that of a pi donor. The better overlap of Cl orbitals with the Fe and Ni metal centres makes Cl a better pi donor than Br or I in the respective complexes. Consequently, the M-pi interaction is stronger in Fe/Ni-Cl complexes. The value of beta decreases as one goes down the halogen group. For the complexes of 4d metal ions, where the metal-ligand distance is larger, the influence of pi orbital overlap appears to be less important, resulting in moderate changes in beta as a function of halogen substitution. (C) 2006 Elsevier B.V. All rights reserved.
Abstract:
We report formation of new noncentrosymmetric oxides of the formula, R3Mn1.5CuV0.5O9 for R = Y, Ho, Er, Tm, Yb and Lu, possessing the hexagonal RMnO3 (space group P6(3)cm) structure. These oxides could be regarded as the x = 0.5 members of a general series R3Mn3-3xCu2xVxO9. Investigation of the Lu-Mn-Cu-V-O system reveals the existence of isostructural solid solution series, Lu3Mn3-3xCu2xVxO9 for 0 < x <= 0.75. Magnetic and dielectric properties of the oxides are consistent with a random distribution of Mn3+, Cu2+ and V5+ atoms that preserve the noncentrosymmetric RMnO3 structure. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
A novel method for functional lung imaging was introduced by adapting the K-edge subtraction (KES) method to in vivo studies of small animals. In this method two synchrotron radiation energies, which bracket the K-edge of the contrast agent, are used for simultaneous recording of absorption-contrast images. Stable xenon gas is used as the contrast agent, and imaging is performed in projection or computed tomography (CT) mode. Subtraction of the two images yields the distribution of xenon, while removing practically all features due to other structures, and the xenon density can be calculated quantitatively. Because the images are recorded simultaneously, there are no movement artifacts in the subtraction image. Time resolution for a series of CT images is one image/s, which allows functional studies. Voxel size is 0.1 mm3, which is an order of magnitude better than in traditional lung imaging methods. The KES imaging technique was used in studies of ventilation distribution and the effects of histamine-induced airway narrowing in healthy, mechanically ventilated, and anaesthetized rabbits. First, the effect of tidal volume on ventilation was studied, and the results show that an increase in tidal volume without an increase in minute ventilation results in a proportional increase in regional ventilation. Second, spiral CT was used to quantify the airspace volumes in lungs under normal conditions and after histamine aerosol inhalation, and the results showed large patchy filling defects in peripheral lungs following histamine provocation. Third, the kinetics of the proximal and distal airway response to histamine aerosol were examined, and the findings show that the distal airways react immediately to histamine and start to recover, while the reaction and the recovery in proximal airways are slower.
Fourth, the fractal dimension of the lungs was studied, and it was found that the fractal dimension is higher at the apical part of the lungs compared to the basal part, indicating structural differences between the apical and basal lung levels. These results provide new insights into lung function and the effects of drug challenge studies. Nowadays the technique is available at synchrotron radiation facilities, but compact synchrotron radiation sources are under development, and in the relatively near future the method may be used at hospitals.
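The subtraction step at the heart of KES can be sketched in a few lines. In this toy model (the attenuation values are illustrative placeholders, not calibration data) the mass attenuation of xenon jumps across its K-edge at about 34.6 keV, while that of tissue is nearly identical at the two closely spaced energies, so the difference image isolates the xenon:

```python
import numpy as np

# Toy K-edge subtraction. Attenuation coefficients are placeholders.
mu_xe_lo, mu_xe_hi = 7.0, 32.0   # cm^2/g for Xe below/above the edge (toy)
mu_bg = 0.25                     # cm^2/g for tissue, taken equal at both energies

rng = np.random.default_rng(0)
xe_areal = np.zeros((64, 64))
xe_areal[20:40, 20:40] = 0.01                 # g/cm^2 of xenon in the airways
bg_areal = 1.0 + 0.1 * rng.random((64, 64))   # g/cm^2 of surrounding tissue

# Log-attenuation images at the two energies (Beer-Lambert law).
A_lo = mu_xe_lo * xe_areal + mu_bg * bg_areal
A_hi = mu_xe_hi * xe_areal + mu_bg * bg_areal

# Subtraction cancels the background and leaves a quantitative xenon map.
xe_map = (A_hi - A_lo) / (mu_xe_hi - mu_xe_lo)
```

In reality the background attenuation differs slightly between the two energies, so the cancellation is approximate rather than exact as in this sketch.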
Abstract:
Nucleation is the first step of the process by which gas molecules in the atmosphere condense to form liquid or solid particles. Despite the importance of atmospheric new-particle formation for both climate and health-related issues, little information exists on its precise molecular-level mechanisms. In this thesis, potential nucleation mechanisms involving sulfuric acid together with either water and ammonia or reactive biogenic molecules are studied using quantum chemical methods. Quantum chemistry calculations are based on the numerical solution of Schrödinger's equation for a system of atoms and electrons subject to various sets of approximations, the precise details of which give rise to a large number of model chemistries. A comparison of several different model chemistries indicates that the computational method must be chosen with care if accurate results for sulfuric acid - water - ammonia clusters are desired. Specifically, binding energies are incorrectly predicted by some popular density functionals, and vibrational anharmonicity must be accounted for if quantitatively reliable formation free energies are desired. The calculations reported in this thesis show that a combination of different high-level energy corrections and advanced thermochemical analysis can quantitatively replicate experimental results concerning the hydration of sulfuric acid. The role of ammonia in sulfuric acid - water nucleation was revealed by a series of calculations on molecular clusters of increasing size with respect to all three coordinates: sulfuric acid, water and ammonia. As indicated by experimental measurements, ammonia significantly assists the growth of clusters along the sulfuric acid coordinate. The calculations presented in this thesis predict that in atmospheric conditions, this effect becomes important as the number of acid molecules increases from two to three.
On the other hand, small molecular clusters are unlikely to contain more than one ammonia molecule per sulfuric acid. This implies that the average NH3:H2SO4 mole ratio of small molecular clusters in atmospheric conditions is likely to be between 1:3 and 1:1. Calculations on charged clusters confirm the experimental result that the HSO4- ion is much more strongly hydrated than neutral sulfuric acid. Preliminary calculations on HSO4- NH3 clusters indicate that ammonia is likely to play at most a minor role in ion-induced nucleation in the sulfuric acid - water system. Calculations of thermodynamic and kinetic parameters for the reaction of stabilized Criegee Intermediates with sulfuric acid demonstrate that quantum chemistry is a powerful tool for investigating chemically complicated nucleation mechanisms. The calculations indicate that if the biogenic Criegee Intermediates have sufficiently long lifetimes in atmospheric conditions, the studied reaction may be an important source of nucleation precursors.
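To illustrate how computed formation free energies of the kind discussed above connect to cluster populations, the following sketch applies the law of mass action; the Delta G and water pressure values are placeholders for illustration, not results from the thesis:

```python
import math

# Hypothetical free energy of adding one water to a sulfuric acid
# molecule; both dG and p_water below are invented placeholder values.
R_GAS = 8.314462618e-3   # kJ/(mol K)
T = 298.15               # K
dG = -5.0                # kJ/mol (placeholder)

K = math.exp(-dG / (R_GAS * T))   # equilibrium constant (1 atm reference)
p_water = 0.02                    # water partial pressure in atm (toy value)
ratio = K * p_water               # [acid*H2O] / [acid] at equilibrium
```

Chaining such equilibrium constants over successive additions gives the hydrate distribution, which is why errors of a few kJ/mol in the free energies (e.g. from neglected anharmonicity) translate into large errors in predicted cluster concentrations.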
Abstract:
Solar UV radiation is harmful for life on planet Earth, but fortunately atmospheric oxygen and ozone absorb almost entirely the most energetic UVC radiation photons. However, part of the UVB radiation and much of the UVA radiation reaches the surface of the Earth, affects human health, the environment and materials, and drives atmospheric and aquatic photochemical processes. In order to quantify these effects and processes there is a need for ground-based UV measurements and radiative transfer modeling to estimate the amounts of UV radiation reaching the biosphere. Satellite measurements, with their near-global spatial coverage and long-term data continuity, offer an attractive option for estimation of the surface UV radiation. This work focuses on radiative transfer theory based methods used for estimation of the UV radiation reaching the surface of the Earth. The objectives of the thesis were to implement the surface UV algorithm originally developed at NASA Goddard Space Flight Center for estimation of the surface UV irradiance from the measurements of the Dutch-Finnish built Ozone Monitoring Instrument (OMI), to improve the original surface UV algorithm especially in relation to snow cover, to validate the OMI-derived daily surface UV doses against ground-based measurements, and to demonstrate how the satellite-derived surface UV data can be used to study the effects of UV radiation. The thesis consists of seven original papers and a summary. The summary includes an introduction to the OMI instrument, a review of the methods used for modeling of the surface UV using satellite data, as well as the conclusions of the main results of the original papers. The first two papers describe the algorithm used for estimation of the surface UV amounts from the OMI measurements as well as the unique Very Fast Delivery processing system developed for processing of the OMI data received at the Sodankylä satellite data centre.
The third and fourth papers present algorithm improvements related to the surface UV albedo of snow-covered land. The fifth paper presents the results of the comparison of the OMI-derived daily erythemal doses with those calculated from ground-based measurement data. It gives an estimate of the expected accuracy of the OMI-derived surface UV doses for various atmospheric and other conditions, and discusses the causes of the differences between the satellite-derived and ground-based data. The last two papers demonstrate the use of the satellite-derived surface UV data. The sixth paper presents an assessment of photochemical decomposition rates in the aquatic environment. The seventh paper presents the use of satellite-derived daily surface UV doses for planning outdoor material weathering tests.
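Erythemal (sunburn-weighted) doses of the kind validated against ground stations are obtained by weighting the spectral irradiance with the standard CIE (McKinlay-Diffey) erythemal action spectrum and integrating over wavelength. A minimal sketch, using a made-up flat spectrum in place of real OMI-derived irradiances:

```python
# CIE (McKinlay-Diffey) erythemal action spectrum, standard piecewise form.
def erythemal_weight(wl_nm):
    """Relative erythemal effectiveness, roughly valid for 250-400 nm."""
    if wl_nm <= 298.0:
        return 1.0
    if wl_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wl_nm))
    return 10.0 ** (0.015 * (140.0 - wl_nm))

# Weighted integral over the UV range; the flat 0.5 W m^-2 nm^-1
# irradiance is a made-up placeholder, not satellite data.
step = 5.0                                            # nm
wavelengths = [290.0 + step * i for i in range(22)]   # 290-395 nm
irradiance = {wl: 0.5 for wl in wavelengths}          # W m^-2 nm^-1 (toy)
dose_rate = sum(erythemal_weight(wl) * irradiance[wl] * step
                for wl in wavelengths)                # erythemal W m^-2
```

Integrating the weighted dose rate over a day gives the daily erythemal dose that the comparison papers evaluate.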
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central for sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study gives new critical knowledge of the problematic consequences that follow from such strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overtly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that due to their rejection of the importance of nation states and the notion of cultural imperialism for cultural analysis, and their replacement with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that at the same time when capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape the society and the culture. I further argue that this shift serves not only analytical but also utopian functions, that is, the longing for a better world in times when such longing is otherwise considered impracticable.
Abstract:
This paper presents a complete asymptotic analysis of a simple model for the evolution of the nocturnal temperature distribution on bare soil in calm clear conditions. The model is based on a simplified flux emissivity scheme that provides a nondiffusive local approximation for estimating longwave radiative cooling near ground. An examination of the various parameters involved shows that the ratio of the characteristic radiative to the diffusive timescale in the problem is of order 10(-3), and can therefore be treated as a small parameter (mu). Certain other plausible approximations and linearization lead to a new equation whose asymptotic solution as mu --> 0 can be written in closed form. Four regimes, consisting of a transient at nominal sunset, a radiative-diffusive boundary ('Ramdas') layer on ground, a boundary layer transient and a radiative outer solution, are identified. The asymptotic solution reproduces all the qualitative features of more exact numerical simulations, including the occurrence of a lifted temperature minimum and its evolution during night, ranging from continuing growth to relatively sudden collapse of the Ramdas layer.
Abstract:
The study analyses European social policy as a political project that proceeds under the guidance of the European Commission. In the name of modernisation, the project aims to build a new idea for the welfare state. To understand the project, it is necessary to distance oneself from both the juridical competence of the European Union and the traditional national welfare state models. The question is about sharing problems, as well as solutions to them: it is the creation and sharing of common views, concepts and images that play a key role in European integration. Drawing on texts and speeches produced by the European Commission, the study throws light on the development of European social policy during the first years of the 2000s. The study "freeze-frames" the welfare debate, which has its starting points in the nation states, in the name of Europe as an entity. The first article approaches the European social model as a story in itself, a preparatory, persuasive narrative that concerns the management of change. The article shows how the audience can be motivated to work towards a set target by using discursive elements in a persuasive manner: the function of a persuasive story is to convince the target audience of the appropriateness of the chosen direction and to shape their identity so that they are favourably disposed to the desired political targets. This is a kind of "intermediate state" where the story, despite its inner contradictions and inaccuracies, succeeds in appearing as an almost self-evident path towards a modern social policy that Europe is currently seen to be in need of. The second article outlines the European social model as a question of governance. Health as a sector of social policy is detached from the old political order, which was based on the welfare state, and is closely linked to economy. At the same time the population is primarily seen as an economic resource.
The Commission is working towards a "Europe of Health" that grapples with the problem of governance with the help of the "healthisation" of society, healthy citizenship and health economics. The way the Commission speaks is guided by the Union's powerful interest to act as "Europe" in the field of welfare policy. At the same time, the traditional separateness of health policy is effaced in order to be able to make health policy reforms a part of the Union's wider modernisation targets. The third article then shows the European social policy as its own area of governance. The article uses an approach based on critical discourse analysis in examining the classification systems and presentation styles adopted by Commission communications, as well as the identities that they help build. In analysing the "new start" of the Lisbon strategy from the perspective of social policy, the article shows how the emphasis has shifted from the persuasive arguments for change with necessary common European targets in the early stages of the strategy towards the implementation of reforms: from a narrative to a vision and from a diagnosis to healing. The phase of global competition represents "the modern" with which European society with its culture and ways of life now has to be matched. The Lisbon strategy is a way to direct this societal change, thus building a modern European social policy. The fourth article describes how the Commission uses its communications policy to build practices and techniques of governance and how it persuades citizens to participate in the creation of a European project of change. This also requires a new kind of agency: agents for whom accountability and responsibilities mean integration into and commitment to European society. Accountability is shaped into a decisive factor in implementing the European Union's strategy of change. As such it will displace hierarchical confrontations and emphasise common action with a view to modernising Europe. 
However, the Union's discourse cannot be described as a political language that would genuinely rouse and convince the audience at the level of everyday life. Keywords: European social policy, EU policy, European social model, European Commission, modernisation of welfare, welfare state, communications, discursiveness.
Abstract:
In Finland, the suicide mortality trend has been decreasing during the last decade and a half, yet suicide was the fourth most common cause of death among both Finnish men and women aged 15-64 years in 2006. However, suicide does not occur equally among population sub-groups. Two notable social factors that position people at different risk of suicide are socioeconomic and employment status: those with low education, those employed in manual occupations, those with low income and those who are unemployed have been found to have an elevated suicide risk. The purpose of this study was to provide a systematic analysis of these social differences in suicide mortality in Finland. Besides studying socioeconomic trends and differences in suicide according to age and sex, different indicators of socioeconomic status were used simultaneously, taking account of their pathways and mutual associations while also paying attention to the confounding and mediatory effects of living arrangements and employment status. Register data obtained from Statistics Finland were used in this study. In some analyses suicides were divided into two groups according to contributory causes of death: the first group consisted of suicide deaths that had alcohol intoxication as one of the contributory causes, and the other group comprised all other suicide deaths. Methods included Poisson and Cox regression models. Despite the decrease in the suicide mortality trend, social differences still exist. Low occupation-based social class proved to be an important determinant of suicide risk among both men and women, but the strong independent effect of education on alcohol-associated suicide indicates that the roots of these differences are probably established in early adulthood, when educational qualifications are obtained and health-behavioural patterns are set.
High relative suicide mortality among the unemployed during times of economic boom suggests that selective processes may be responsible for some of the employment status differences in suicide. However, long-term unemployment seems to have causal effects on suicide, which, especially among men, partly stem from low income. In conclusion, the results in this study suggest that education, occupation-based social class and employment status have causal effects on suicide risk, but to some extent selection into low education and unemployment are also involved in the explanations for excess suicide mortality among the socially deprived. It is also conceivable that alcohol use is to some extent behind social differences in suicide. In addition to those with low education, manual workers and the unemployed, young people, whose health-related behaviour is still to be adopted, would most probably benefit from suicide prevention programmes.
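As a minimal numerical illustration of the kind of group comparison behind such register analyses (all counts below are invented), a mortality rate ratio with an approximate 95% confidence interval can be computed directly from deaths and person-years, using the standard Poisson approximation on the log scale:

```python
import math

# Invented example counts: deaths and person-years at risk in two groups.
deaths_manual, py_manual = 240, 500_000   # manual workers (hypothetical)
deaths_upper, py_upper = 90, 400_000      # upper non-manual (hypothetical)

rate_manual = deaths_manual / py_manual * 100_000   # per 100,000 person-years
rate_upper = deaths_upper / py_upper * 100_000

rr = rate_manual / rate_upper                       # mortality rate ratio
# Standard error of log(RR) under the Poisson approximation.
se_log_rr = math.sqrt(1.0 / deaths_manual + 1.0 / deaths_upper)
ci_low = rr * math.exp(-1.96 * se_log_rr)
ci_high = rr * math.exp(1.96 * se_log_rr)
```

Poisson regression, as used in the study, generalizes this two-group comparison by modelling the log rate as a function of several covariates simultaneously, with person-years as the exposure offset.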
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) could be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
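For readers unfamiliar with SLE, a chordal SLE(kappa) trace can be approximated numerically by composing the inverses of one-step Loewner slit maps driven by sqrt(kappa) times Brownian motion. This is a generic, widely used simulation scheme, not a method from the thesis:

```python
import cmath
import math
import random

# Approximate a chordal SLE(kappa) trace in the upper half-plane.
random.seed(1)
kappa, dt, n_steps = 2.0, 1e-3, 200

# Piecewise-constant samples of the driving function W_t = sqrt(kappa) B_t.
W = [0.0]
for _ in range(n_steps):
    W.append(W[-1] + math.sqrt(kappa * dt) * random.gauss(0.0, 1.0))

def slit_map_inverse(z, w, dt):
    """Inverse of the one-step Loewner map g(z) = w + sqrt((z - w)^2 + 4 dt)."""
    s = cmath.sqrt((z - w) ** 2 - 4.0 * dt)
    if s.imag < 0:   # pick the branch landing in the upper half-plane
        s = -s
    return w + s

# Trace tip gamma(t_n) ~ f_1(f_2(... f_n(W_n) ...)), where f_k inverts step k.
trace = []
for n in range(1, n_steps + 1):
    z = complex(W[n], 0.0)
    for k in range(n, 0, -1):
        z = slit_map_inverse(z, W[k], dt)
    trace.append(z)
```

Reversibility and duality, the properties studied in the fourth article, are statements about the law of such traces: reversibility says the curve traced from the other endpoint has the same law, and duality relates the outer boundary of an SLE(kappa) hull for kappa > 4 to an SLE(16/kappa) curve.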