51 results for first ionization potentials


Relevance: 20.00%

Abstract:

We present a measurement of the ttbar differential cross section with respect to the ttbar invariant mass, dsigma/dM(ttbar), in ppbar collisions at sqrt(s) = 1.96 TeV, using an integrated luminosity of 2.7 fb^-1 collected by the CDF II experiment. The ttbar invariant mass spectrum is sensitive to a variety of exotic particles decaying into ttbar pairs. The result is consistent with the standard model expectation, as modeled by PYTHIA with CTEQ5L parton distribution functions.
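
For orientation, each bin of such a spectrum is turned into a differential cross section by dividing the efficiency-corrected event count by the integrated luminosity and the bin width. The sketch below shows only this arithmetic, with invented bin edges, counts and efficiencies; the published measurement additionally unfolds detector resolution effects.

```python
# Minimal sketch of a binned differential cross section,
# dsigma/dM_i = N_i / (eff_i * L * dM_i). All numbers are
# illustrative placeholders, not CDF data.

luminosity = 2.7e3  # integrated luminosity in pb^-1 (= 2.7 fb^-1)

# (bin low edge, bin high edge, signal count, selection efficiency)
bins = [
    (350.0, 400.0, 120, 0.04),
    (400.0, 450.0, 95, 0.05),
    (450.0, 500.0, 60, 0.05),
]

for lo, hi, n, eff in bins:
    width = hi - lo  # bin width in GeV/c^2
    dsigma_dm = n / (eff * luminosity * width)  # pb per (GeV/c^2)
    print(f"{lo:.0f}-{hi:.0f} GeV/c^2: dsigma/dM = {dsigma_dm:.4f} pb/(GeV/c^2)")
```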

Relevance: 20.00%

Abstract:

We present a measurement of the mass of the top quark using data corresponding to an integrated luminosity of 1.9 fb^-1 of ppbar collisions collected at sqrt(s) = 1.96 TeV with the CDF II detector at Fermilab's Tevatron. This is the first measurement of the top quark mass using top-antitop pair candidate events in the lepton + jets and dilepton decay channels simultaneously. We reconstruct two observables in each channel and use a non-parametric kernel density estimation technique to derive two-dimensional probability density functions from simulated signal and background samples. The observables are the top quark mass and the invariant mass of two jets from the W decay in the lepton + jets channel, and the top quark mass and the scalar sum of transverse energy of the event in the dilepton channel. We perform a simultaneous fit for the top quark mass and the jet energy scale, which is constrained in situ by the hadronic W boson mass. Using 332 lepton + jets candidate events and 144 dilepton candidate events, we measure the top quark mass to be mtop = 171.9 +/- 1.7 (stat + JES) +/- 1.1 (syst) GeV/c^2 = 171.9 +/- 2.0 GeV/c^2.
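
As a rough illustration of the kernel density estimation step, the sketch below derives a two-dimensional probability density from toy "simulated" events with SciPy's gaussian_kde and evaluates it for one hypothetical candidate event. All distributions and numbers here are assumptions made for the example; the actual analysis builds dedicated non-parametric densities from full signal and background Monte Carlo samples.

```python
# Sketch: 2D kernel density estimate over (reconstructed top mass,
# dijet W mass), built from toy simulated events. Illustration only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy "simulated signal" sample (GeV/c^2); widths are invented.
m_top = rng.normal(172.0, 12.0, 5000)   # reconstructed top mass
m_jj = rng.normal(80.4, 8.0, 5000)      # dijet mass from the W decay

# Non-parametric 2D probability density from the simulated sample.
density = gaussian_kde(np.vstack([m_top, m_jj]))

# Relative likelihood of one hypothetical candidate event.
print(density([[171.5], [79.8]]))
```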

Relevance: 20.00%

Abstract:

A combined mass and particle identification fit is used to make the first observation of the decay Bs --> Ds K and to measure the branching fraction of Bs --> Ds K relative to Bs --> Ds pi. This analysis uses 1.2 fb^-1 of integrated luminosity of pbar-p collisions at sqrt(s) = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron collider. We observe a Bs --> Ds K signal with a statistical significance of 8.1 sigma and measure Br(Bs --> Ds K)/Br(Bs --> Ds pi) = 0.097 +/- 0.018 (stat) +/- 0.009 (syst).
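
For intuition about the quoted uncertainty, the statistical and systematic errors on the ratio can be combined in quadrature, the standard naive combination; the published value itself comes from the combined fit, not from this arithmetic.

```python
# Quadrature combination of the quoted uncertainties on
# Br(Bs --> Ds K) / Br(Bs --> Ds pi). Illustrative arithmetic only.
import math

ratio = 0.097
stat, syst = 0.018, 0.009

total = math.sqrt(stat**2 + syst**2)
print(f"ratio = {ratio} +/- {total:.3f} (stat and syst in quadrature)")
print(f"relative uncertainty: {100 * total / ratio:.0f}%")
```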

Relevance: 20.00%

Abstract:

We present the first observation in hadronic collisions of the electroweak production of vector boson pairs (VV, V = W, Z) in which one boson decays to a dijet final state. The data correspond to 3.5 fb^-1 of integrated luminosity of pp̅ collisions at √s = 1.96 TeV collected by the CDF II detector at the Fermilab Tevatron. We observe 1516 ± 239 (stat) ± 144 (syst) diboson candidate events and measure a cross section σ(pp̅ → VV + X) of 18.0 ± 2.8 (stat) ± 2.4 (syst) ± 1.1 (lumi) pb, in agreement with the expectations of the standard model.

Relevance: 20.00%

Abstract:

Lullabies in Kvevlax. Linguistic structures and constructions. The study is a linguistic analysis of the constructions that shape the texts of lullabies sung in Kvevlax in Ostrobothnia, Finland. The empirical goal is to identify linguistic constructions in traditional lullabies that make use of the dialect of the region. The theoretical goal is to test the usability of Construction Grammar (CxG) in analyses of this type of material, and to further develop the formal description of Construction Grammar so as to make it possible to analyze all kinds of linguistically complex texts. The material that I collected in the 1960s comprises approximately 600 lullabies and concomitant interviews with the singers on the use of lullabies. In 1991 I collected additional material in Kvevlax. The number of informants is close to 250. Supplementary material covering the Swedish-language regions of Finland was compiled from the archives of the Society of Swedish Literature in Finland. The first part of the study is mainly based on traditional grammar and gives general information about the language and the structures used in the lullabies. In the detailed analysis of the Kvevlax lullabies in the latter part of the study I use a version of Construction Grammar intended for the linguistic analysis of usage-based texts. The analysis focuses on the most salient constructions in the lullabies. The study shows that Construction Grammar as a method has more general applicability than traditional linguistic methods. The study identifies important constructions, including elements typical of this genre, that structure the text in different variants of the same lullabies. In addition, CxG made it possible to study pragmatic aspects of the interactional, cultural and contextual language used in communication with small children. The constructions found in lullabies are also used in language in general. In addition to giving detailed linguistic descriptions of the texts, Construction Grammar can also explain the multidimensionality of language and the variation in the texts. The use of CxG made it possible to show that variations are not random but follow prototypical linguistic patterns, constructions. Constructions are thus found to be linguistic resources with built-in variation potential.

Relevance: 20.00%

Abstract:

The study addressed a phenomenon that has become common marketing practice: customer loyalty programs. Although they constitute a common type of consumer relationship, knowledge of their nature is limited. The purpose of the study was to create a structured understanding of the nature of customer relationships from both the provider's and the consumer's viewpoints by studying relationship drivers and proposing the concept of relational motivation as a common framework for the analysis of these views. The theoretical exploration focused on the reasons for engaging in customer relationships, for both the consumer and the provider. The themes of buying behaviour, industrial and network marketing, and relationship marketing, as well as the concepts of a customer relationship, customer loyalty, relationship conditions, relational benefits, bonds and commitment, were explored and combined in a new way. Concepts from the study of business-to-business relationships were brought over and their power in explaining the nature of consumer relationships examined. The study provides a comprehensive picture of loyalty programs, which is an important contribution to the academic as well as the managerial discussion. The consumer study provides deep insights into the nature of customer relationships. The study offers a new frame of reference that supports the existing concepts of loyalty and commitment through the introduction of the relationship driver and relational motivation concepts. The result is a novel view of the nature of customer relationships that creates new understanding of the forces leading to loyal behaviour and commitment. The study concludes with managerial implications.

Relevance: 20.00%

Abstract:

Customer loyalty has been a central topic of both marketing theory and practice for several decades. Customer disloyalty, or relationship ending, has received much less attention. Despite the close relation between customer loyalty and disloyalty, they have rarely been addressed in the same study. The thesis bridges this gap by focusing on both loyal and disloyal customers and the factors characterising them. Based on a qualitative study of loyal and disloyal bank customers in the Finnish retail banking market, both factors that are common to the groups and factors that differentiate between them are identified. A conceptual framework of factors that affect customer loyalty or disloyalty is developed and used to analyse the empirical data. According to the framework, customers’ loyalty status (behavioural and attitudinal loyalty) is influenced by positive, loyalty-supporting, and negative, loyalty-repressing factors. Loyalty-supporting factors either promote customer dedication, making the customer want to remain loyal, or act as constraints, hindering the customer from switching. Among the loyalty-repressing factors it is especially important to identify those that act as triggers of disloyal behaviour, making customers switch service providers. The framework further suggests that by identifying the sources of loyalty-supporting and -repressing factors (the environment, the provider, the customer, the provider-customer interaction, or the core service) one can determine which factors are within the control of the service provider. Attitudinal loyalty is approached through a customer’s “feeling of loyalty”, as described by customers both orally and graphically. By combining the graphs with behavioural loyalty, seven customer groups are identified: Stable Loyals, Rescued Loyals, Loyals at Risk, Positive Disloyals, Healing Disloyals, Fading Disloyals, and Abrupt Disloyals. The framework and models of the thesis can be used to analyse factors that affect customer loyalty and disloyalty in different service contexts. Since the empirical study was carried out in a retail bank setting, the thesis has managerial relevance especially for banks. Christina Nordman is associated with CERS, Center for Relationship Marketing and Service Management at the Swedish School of Economics and Business Administration. The doctoral thesis is part of the Göran Collert Research Project in Customer Relationships and Retail Banking and has been funded by The Göran Collert Foundation.

Relevance: 20.00%

Abstract:

This doctoral dissertation takes a buy-side perspective on third-party logistics (3PL) providers' service tiering by applying a linear serial dyadic view to transactions. It takes its point of departure not only from the focus on dyads as units of analysis and how to manage them, but also from the characteristics that create and determine purposeful conditions of longer duration. A conceptual framework is proposed and evaluated on its ability to capture logistics service buyers' perceptions of service tiering. The problem is discussed in the theoretical context of logistics and reflects value appropriation, power dependencies, visibility in linear serial dyads, a movement towards more market-governed modes of transactions (i.e. service tiering) and buyers' risk perceptions of broader utilisation of the logistics services market. Service tiering in a supply chain setting, with its lack of multilateral agreements between supply chain members, is new. The deductive research approach applied, in which theoretically based propositions are empirically tested with quantitative and qualitative data, provides new insight into (contractual) transactions in 3PL. The study findings imply that the understanding of power dependencies and supply chain dynamics in a 3PL context is still in its infancy. The issues identified include separation of service responsibilities, supply chain visibility, price-making behaviour, and supply chain strategies under changing circumstances or the influence of non-immediate supply chain actors. Understanding (or failing to understand) these issues may have considerable implications for the industry. The contingencies may trigger more open-book policies, a larger liability scope for 3PL service providers, or insourcing of critical logistics activities from the first-tier buyer's core business and customer service perspectives. In addition, a sufficient understanding of the issues surrounding service tiering enables proactive responses and the devising of appropriate supply chain strategies. The author concludes that qualitative research designs facilitating data collection on multiple supply chain actors may capture and increase understanding of the impact of broader supply chain strategies. This would enable pattern-matching through an examination of two or more sides of exchange transactions, to measure relational symmetries across linear serial dyads. Indeed, the performance of the firm depends not only on how efficiently it cooperates with its partners, but also on how well its exchange partners cooperate with the organisation's own business.

Relevance: 20.00%

Abstract:

This study examines values education in Japanese schools at the beginning of the millennium. The topic was approached by asking three questions concerning the curricular background, the morality conveyed through textbooks, and the characterization of moral education from a comparative viewpoint: 1) What role did moral education play in the curriculum revision that was initiated in 1998 and implemented in 2002? 2) What kinds of moral responsibility and moral autonomy do the moral texts develop? 3) What does Japanese moral education look like in terms of the comparative framework? The research was based on curriculum research. Its primary empirical data consisted of the national curriculum guidelines for primary school, which came into use in 2002, and the moral texts, Kokoro no nôto, published by the Ministry of Education in the same context. Since moral education was approached in the education reform context, the secondary research material comprised key documents of the revision process from the mid-1990s to 2003. The research material was collected during three fieldwork periods in Japan (in 2002, 2003 and 2005). The text analysis was conducted as a theory-dependent qualitative content analysis. Japanese moral education was analyzed as a product of its own cultural tradition and as a societal answer to current educational challenges. In order to better understand its character, secular moral education was considered from a comparative viewpoint. The theory chosen for the comparative framework, the value-realistic theory of education, represents the European rational education tradition as well as the Christian tradition of values education. Moral education, which was the most important school subject at the beginning of the modern school, was eliminated from the curriculum for political reasons in a school reform after the Second World War, but has gradually regained a stronger position since then. It was reinforced particularly at the turn of the millennium, when a curriculum revision attempted to respond to educational and learning problems by emphasizing qualitative and value aspects. Although the number of moral lessons and their status as a non-official subject remained unchanged, the Ministry of Education made efforts to improve moral education through new curricular emphases, new teaching material and additional in-service training possibilities for teachers. The content of the moral texts was summarized in terms of moral responsibility in four moral areas (intrapersonal, interpersonal, natural-supranatural and societal) as follows: 1) continuous self-development, 2) caring for others, 3) awe of life and of forces beyond human power, and 4) societal contribution. There was a social-societal and emotional emphasis in what was taught. Moral autonomy, which was studied from the perspectives of rational, affective and individuality development, stressed independence in action through self-discipline and responsibility more than rational self-direction. Japanese moral education can be characterized as the education of kokoro (heart) and the development of character, which arises from virtue ethics. It aims to overcome egoistic individualism through reciprocal and interdependent moral responsibility based on responsible interconnectedness.

Relevance: 20.00%

Abstract:

Aerosol particles deteriorate air quality, atmospheric visibility and our health. They affect the Earth's climate by absorbing and scattering sunlight, forming clouds, and also via several feedback mechanisms. The net effect on the radiative balance is negative, i.e. cooling, which means that particles counteract the effect of greenhouse gases. However, particles are one of the poorly known pieces in the climate puzzle. Some airborne particles are natural, some anthropogenic; some enter the atmosphere in particle form, while others form by gas-to-particle conversion. Unless the sources and the dynamical processes shaping the particle population are quantified, they cannot be incorporated into climate models. The molecular-level understanding of new particle formation is still inadequate, mainly due to the lack of suitable measurement techniques for detecting the smallest particles and their precursors. This thesis has contributed to our ability to measure newly formed particles. Three new condensation particle counter applications for measuring the concentration of nanoparticles were developed. The suitability of the methods for detecting both charged and electrically neutral particles and molecular clusters as small as 1 nm in diameter was thoroughly tested both in the laboratory and in field conditions. It was shown that condensation particle counting has reached the size scale of individual molecules, and besides measuring concentrations the counters can be used to obtain size information. In addition to atmospheric research, the particle counters could have various applications in other fields, especially in nanotechnology. Using the new instruments, the first continuous time series of neutral sub-3 nm particle concentrations were measured at two field sites representing two different kinds of environments: the boreal forest and the Atlantic coastline, both of which are known to be hot spots for new particle formation. The contribution of ions to the total concentrations in this size range was estimated, and it could be concluded that the fraction of ions was usually minor, especially in boreal forest conditions. Since the ionization rate is connected to the amount of cosmic rays entering the atmosphere, the relative contribution of neutral and charged nucleation mechanisms extends beyond academic interest and links the research directly to the current climate debate.

Relevance: 20.00%

Abstract:

This dissertation deals with the design, fabrication and applications of microscale electrospray ionization chips for mass spectrometry. The microchip consists of a microchannel that leads to a sharp electrospray tip. The microchannel contains micropillars that produce a powerful capillary action in the channel. The capillary action delivers the liquid sample to the electrospray tip, which converts the liquid sample into gas-phase ions that can be analyzed with mass spectrometry. The microchip uses a high voltage, which can also be utilized as a valve between the microchip and the mass spectrometer. The microchips can be used in various applications, such as analyses of drugs, proteins, peptides or metabolites. The microchip works without pumps for liquid transfer, is usable for rapid analyses, and is sensitive. The performance characteristics of single microchips are studied, and a rotating multitip version of the microchips is designed and fabricated. The microchip can also be used as a microreactor, and reaction products can be detected online with mass spectrometry. This property can be utilized for protein identification, for example. Proteins can be digested enzymatically on-chip and the reaction products, in this case peptides, detected with mass spectrometry. Because reactions occur faster at the microscale due to shorter diffusion lengths, the amount of protein can be very low, which is a benefit of the method. The microchip is well suited to surface-activated reactions because of the high surface-to-volume ratio of the dense micropillar array. For example, a titanium dioxide nanolayer on the micropillar array combined with UV radiation produces photocatalytic reactions that can be used to mimic drug-metabolism biotransformation reactions. Rapid mimicking with the microchip eases the detection of potentially toxic compounds in preclinical research and could therefore speed up the development of new drugs. A micropillar array chip can also be utilized in the fabrication of liquid chromatographic columns. Precisely ordered micropillar arrays offer a very homogeneous column, in which the separation of compounds has been demonstrated using both laser-induced fluorescence and mass spectrometry. Because of the small dimensions of the microchip, the integrated liquid chromatography electrospray microchip is especially well suited to low sample concentrations. Overall, this work demonstrates that the designed and fabricated three-dimensionally sharp silicon/glass electrospray tip is unique and facilitates a stable ion spray for mass spectrometry.

Relevance: 20.00%

Abstract:

The analysis of lipid compositions from biological samples has become increasingly important. Lipids play a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. Moreover, the mechanisms regulating cell membrane lipid compositions are poorly understood, partially because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, its sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by the incomplete separation of signals, differences in the instrument response of different lipids, and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of this thesis is the development of methods for the mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentrations affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids. The fourth article applies the methods developed by simultaneously measuring the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
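
The peak-distribution computation mentioned for the second article can be pictured with the textbook approach: the isotope pattern of a molecule is the repeated convolution of the per-atom isotope distributions of its elements. The sketch below implements that standard method, not the specific algorithm of the thesis, and includes only carbon and hydrogen for brevity.

```python
# Sketch: theoretical isotope pattern by repeated convolution of
# per-atom isotope distributions (textbook method, C and H only).
from collections import defaultdict

# nominal mass shift relative to the lightest isotope -> abundance
ISOTOPES = {
    "C": {0: 0.9893, 1: 0.0107},      # 12C, 13C
    "H": {0: 0.999885, 1: 0.000115},  # 1H, 2H
}

def isotope_pattern(formula):
    """formula: element -> atom count, e.g. {'C': 16, 'H': 32}."""
    dist = {0: 1.0}
    for element, count in formula.items():
        for _ in range(count):
            new = defaultdict(float)
            for shift, p in dist.items():
                for iso_shift, q in ISOTOPES[element].items():
                    new[shift + iso_shift] += p * q
            dist = dict(new)
    return dist

# Hypothetical C16H32 fragment (other elements omitted for brevity)
for shift, p in sorted(isotope_pattern({"C": 16, "H": 32}).items()):
    if p > 1e-4:
        print(f"M+{shift}: {p:.4f}")
```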

Relevance: 20.00%

Abstract:

The four papers summarized in this thesis deal with the Archean and earliest Paleoproterozoic granitoid suites observed in the Suomussalmi district, eastern Finland. Geologically, the area belongs to the Kianta Complex of the Western Karelian Terrane in the Karelian Province of the Fennoscandian shield. Inherited zircons up to 3440 Ma old, together with Sm-Nd and Pb-Pb data, confirm the existence of the previously anticipated Paleoarchean protocrust in Suomussalmi. The general timeline of granitoid magmatism is similar to that of the surrounding areas. TTG magmatism occurred in three distinct phases: ca. 2.95 Ga, 2.83-2.78 Ga and 2.76-2.74 Ga. In Suomussalmi the TTGs sensu stricto (K2O/Na2O < 0.5) belong to the low-HREE type and are interpreted as partial melts of garnet amphibolites that did not significantly interact with mantle peridotites. Transitional TTGs (K2O/Na2O > 0.5), present in Suomussalmi and absent from the surrounding areas, display higher LILE concentrations but otherwise closely resemble the TTGs sensu stricto, and indicate that recycling of felsic crust commenced in Suomussalmi 200 Ma earlier than in the surrounding areas. The youngest TTG phase was coeval with the intrusion of the Likamännikkö quartz alkali feldspar syenite complex (2741 ± 2 Ma). The complex contains angular fragments of ultrabasic rock, which display considerable compositional heterogeneity and are interpreted as cumulates containing clinopyroxene (generally altered to actinolite), apatite, allanite, epidote and albite. The quartz alkali feldspar syenite cannot be regarded as alkaline sensu stricto, despite clear alkaline affinities. Within Likamännikkö there are also calcite carbonatite patches, which display mantle-like O- and C-isotope values as well as trace element characteristics consistent with a magmatic origin, and could thus be among the oldest known carbonatites in the world. The sanukitoid (2.73-2.71 Ga) and quartz diorite (2.70 Ga) suites overlap within error margins and display compositional similarities, but can be differentiated from each other on the basis of the higher Ba, K2O and LREE contents of the sanukitoids. The Likamännikkö complex, the sanukitoids and the quartz diorites are interpreted as originating from metasomatized mantle and mark the diversification of the granitoid clan after 200 Ma of evolution dominated by the TTG suite. Widespread migmatization and the intrusion of anatectic leucogranitoids as dykes and intrusions of varying size took place at 2.70-2.69 Ga, following collisional thickening of the crust. The leucogranitoids and the leucosomes of migmatized TTGs are compositionally alike and characterized by high silica contents and a leucocratic appearance. Owing to the compositional overlap, definitive discrimination between leucogranitoids and transitional TTGs requires isotope dating and/or knowledge of field relationships. The leucogranitoids represent partial melts of the local TTGs, both the sensu stricto and the transitional types, mostly derived under water-fluxed conditions, with possible fluid sources being the late sanukitoids and quartz diorites as well as the dehydrating lower crust. The Paleoproterozoic 2.44-2.39 Ga A-type granitoids of the Kianta Complex, emplaced in an extensional environment, are linked to the coeval and more widespread mafic intrusions and dykes observed over most of the Archean nucleus of the Fennoscandian shield. The A-type intrusions in the Suomussalmi area are interpreted as partial melts of the Archean lower crust and display differences in composition and magnetite content, which indicate differences in the composition and oxidation state of the source.
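
The K2O/Na2O criterion used above to separate TTGs sensu stricto from transitional TTGs is a simple threshold rule; the sketch below applies it to invented whole-rock compositions purely for illustration.

```python
# Toy application of the K2O/Na2O < 0.5 criterion for TTGs sensu
# stricto quoted in the abstract. Compositions (wt%) are invented.
samples = {
    "sample_A": {"K2O": 1.2, "Na2O": 4.8},
    "sample_B": {"K2O": 2.6, "Na2O": 4.0},
}

for name, wt in samples.items():
    ratio = wt["K2O"] / wt["Na2O"]
    kind = "TTG sensu stricto" if ratio < 0.5 else "transitional TTG"
    print(f"{name}: K2O/Na2O = {ratio:.2f} -> {kind}")
```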

Relevance: 20.00%

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). Another significant factor was that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. They were published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea, which is still prevailing, was that the sample should be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and in the beginning of the 20th century carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science. He revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. The essential ideas are to draw samples repeatedly from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the central idea for statisticians at the U.S. Census Bureau to develop the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, in addition to sufficient accuracy in estimation.
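
Laplace's 1781 plan already revolved around the question that still defines survey design: how large must a sample be to reach a desired accuracy? Below is a minimal sketch of the modern textbook answer for estimating a proportion, using the normal approximation with a finite population correction; the numbers are illustrative and this is not Laplace's actual calculation, which rested on inverse probability.

```python
# Sample size for a desired margin of error on a proportion,
# with finite population correction (FPC). Illustrative only.
import math

def required_sample_size(margin, population, p=0.5, z=1.96):
    """Simple random sample size for a 95% CI half-width `margin`."""
    n0 = z**2 * p * (1 - p) / margin**2  # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # FPC-adjusted

# e.g. +/- 1 percentage point for a population of 25 million
print(required_sample_size(margin=0.01, population=25_000_000))  # ~9601
```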