891 results for "unified theories and models of strong and electroweak"


Relevance:

100.00%

Publisher:

Abstract:

Volume (density)-independent pair potentials cannot describe metallic cohesion adequately, as the presence of the free-electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model in which the embedding function is taken to be proportional to the square root of the electron density; models of this type are known as Finnis-Sinclair many-body potentials. In this work we study a particular parametrization of the Finnis-Sinclair potential, called the "Sutton-Chen" model, and a later version, called the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to the rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour and show that an exponentially decaying form of the charge density improves the results significantly. Results for the Sutton-Chen model and our improved version of it are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For the sake of comparison we have also considered two other models in which the distance dependence of the charge density is an exponential multiplied by a polynomial. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model.
We also present results obtained via pure pair potential models, in order to identify advantages and disadvantages of methods used to obtain the parameters of these potentials.
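The Sutton-Chen functional form is compact enough to sketch directly. The power-law density (a/r)^m is the published model; the exponential density passed in below, and its decay constant beta, are only illustrative stand-ins for the kind of modification discussed, not the fitted form used in the thesis:

```python
import math

def sutton_chen_energy(positions, eps, a, c, n, m, density=None):
    """Total energy in the Sutton-Chen form:
    E = eps * sum_i [ 0.5 * sum_{j!=i} (a/r_ij)^n - c * sqrt(rho_i) ],
    with rho_i = sum_{j!=i} density(r_ij).  The standard model uses the
    power-law density (a/r)^m; an alternative form can be passed instead."""
    if density is None:
        density = lambda r: (a / r) ** m  # standard power-law decay
    total = 0.0
    for i, ri in enumerate(positions):
        pair, rho = 0.0, 0.0
        for j, rj in enumerate(positions):
            if i == j:
                continue
            r = math.dist(ri, rj)
            pair += (a / r) ** n     # repulsive pair part
            rho += density(r)        # electron density at atom i
        total += eps * (0.5 * pair - c * math.sqrt(rho))
    return total

# Power-law density vs. an exponentially decaying one (beta is hypothetical):
dimer = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
e_power = sutton_chen_energy(dimer, eps=1.0, a=1.0, c=3.0, n=10, m=8)
beta = 4.0
e_exp = sutton_chen_energy(dimer, eps=1.0, a=1.0, c=3.0, n=10, m=8,
                           density=lambda r: math.exp(-beta * (r - 1.0)))
```

At the dimer separation r = a the two density forms coincide, so both calls return the same energy; they differ, and with them the phonons, only once the lattice expands.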


This thesis contributes to a general theory of project design. Situated within a demand shaped by the challenges of sustainable development, the main objective of this research is to contribute a theoretical model of design that makes it possible to better situate the use of tools and standards for assessing a project's sustainability. The fundamental principles of these normative instruments are analysed along four dimensions: ontological, methodological, epistemological and teleological. Indicators of certain counter-productive effects linked, in particular, to compliance with these standards confirm the need for a theory of qualitative judgement. Our main hypothesis builds on the conceptual framework offered by the notion of the "precautionary principle", whose first formulations date back to the early 1970s and which were aimed precisely at remedying the shortcomings of traditional scientific assessment tools and methods. The thesis is divided into five parts. Beginning with a historical review of classical models of design thinking, it focuses on how the ways of taking sustainability into account have evolved. From this perspective, we observe that theories of "green design" dating from the early 1960s, and theories of "ecological design" from the 1970s and 1980s, ultimately converged with the more recent theories of "sustainable design" that emerged in the early 1990s. The various approaches to the "precautionary principle" are then examined with respect to the question of project sustainability. Standard risk-assessment methods are compared with approaches based on the precautionary principle, revealing certain limits they present for the design of a project.
A first theoretical model of design integrating the main dimensions of the precautionary principle is then sketched. This model offers a global vision for judging a project that integrates sustainable-development principles, and presents itself as an alternative to traditional risk-assessment approaches, which are both deterministic and instrumental. The precautionary-principle hypothesis is then proposed and examined in the specific context of the architectural project. This exploration begins with a presentation of the classical notion of "prudence" as it was historically used to guide architectural judgement. What, then, of the challenges posed to the judgement of architectural projects by the rise of standardized assessment methods (e.g. Leadership in Energy and Environmental Design, LEED)? The thesis proposes a reinterpretation of design theory as put forward by Donald A. Schön as a way of taking into account assessment tools such as LEED. This exercise, however, reveals an epistemological obstacle that must be addressed in a reformulation of the model. In line with constructivist epistemology, a new theoretical model is then confronted with the study and illustration of three contemporary Canadian architecture competitions that adopted the LEED standardized sustainability assessment method. A preliminary series of "tensions" is identified in the process of designing and judging the projects. These tensions are then grouped into conceptual counterparts, constructed at the intersection of the precautionary principle and design theories. They fall into four categories: (1) conceptualization: analogical/logical; (2) uncertainty: epistemological/methodological; (3) comparability: interpretive/analytical; and (4) proposition: universality/contextual relevance.
These conceptual tensions are treated as vectors that correlate with the theoretical model, which they help to enrich without constituting validations in the positivist sense of the term. These confrontations with reality make it possible to better define the epistemological obstacle identified earlier. This thesis thus highlights the generally underestimated impacts of environmental standards on the process of designing and judging projects. It draws, in a non-restrictive way, on the examination of Canadian architecture competitions for public buildings. The conclusion underlines the need for a new form of "reflective prudence" and for a more critical use of current sustainability assessment tools. It calls for an instrumentation founded on global integration rather than on the opposition of environmental approaches.


Catalysis is a very important process from an industrial point of view, since the production of most industrially important chemicals involves catalysis. Solid acid catalysts are appealing because the nature of their acid sites is known, and their chemical behaviour in acid-catalyzed reactions can be rationalized by means of existing theories and models. Mixed oxides crystallizing in the spinel structure are of special interest because the spinel lattice imparts extra stability to the catalyst under various reaction conditions, so that these systems sustain their activities for longer periods. The thesis, entitled "Catalysis By Ferrites And Cobaltites For The Alkylation And Oxidation Of Organic Compounds", presents the preparation, characterization and activity studies of such spinel catalysts; the prepared spinels were modified by incorporating other ions and by changing the stoichiometry. The prepared spinels exhibited good catalytic activity towards the studied reactions, with good product selectivity. The acid-base properties and cation distribution of the spinels were found to control the catalytic activity.


The thesis covers various aspects of the modelling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel with the classical set-up. In the present thesis we mainly study the estimation and prediction of signal-plus-noise models, where the signal and noise are assumed to follow models with symmetric stable innovations. We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are discussed extensively in the second chapter, where we also survey the existing theories and methods for infinite-variance models. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed, and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker-type estimation based on the auto-covariation function, and exemplify the methods by simulation and by application to sea surface temperature data.
We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived. In the fifth chapter of the thesis we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function, and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of sinusoidal signals observed in a symmetric stable noisy environment is discussed in this context. Here we introduce a parametric spectrum analysis and frequency estimate using the power transfer function, an estimate of which is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
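As a minimal illustration of the setting, the sketch below simulates an AR(1) signal driven by symmetric alpha-stable innovations, generated with the Chambers-Mallows-Stuck method; the parameter values are arbitrary and the function names are ours, not the thesis's:

```python
import math, random

def sym_stable(alpha, rng):
    """Symmetric alpha-stable variate (beta = 0 case of the
    Chambers-Mallows-Stuck method), for alpha in (0, 2]."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    if abs(alpha - 1.0) < 1e-12:
        return math.tan(v)  # alpha = 1 reduces to the Cauchy case
    w = rng.expovariate(1.0)
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

def ar1_stable(phi, alpha, n, seed=0):
    """AR(1) series x_t = phi * x_{t-1} + e_t with symmetric alpha-stable
    innovations; for alpha < 2 the innovations have infinite variance."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + sym_stable(alpha, rng)
        series.append(x)
    return series

path = ar1_stable(phi=0.6, alpha=1.5, n=1000)
```

For alpha = 2 the generator reduces to the Gaussian case (up to scale), recovering the classical Box-Jenkins setting; the heavy-tailed bursts that motivate dispersion-based, rather than variance-based, criteria appear as alpha decreases.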


This thesis, entitled "Reliability Modelling and Analysis in Discrete Time", presents some concepts and models useful in the analysis of discrete lifetime data. The present study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed; while describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of the parameters of a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations, very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments.
Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools to resolve practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation which could be the subject for a future work in this area.
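The discrete-time quantities in play can be illustrated with the geometric family: a single geometric distribution has a constant failure rate, while a two-component geometric mixture of the kind studied in Chapter III has a decreasing one. The parameter values below are arbitrary:

```python
def geometric_pmf(p, k):
    """P(X = k) for the geometric distribution on {0, 1, 2, ...}."""
    return p * (1.0 - p) ** k

def failure_rate(pmf, k):
    """Discrete failure rate h(k) = P(X = k) / P(X >= k)."""
    survival = 1.0 - sum(pmf(i) for i in range(k))
    return pmf(k) / survival

# A single geometric has constant failure rate h(k) = p ...
single = lambda k: geometric_pmf(0.3, k)
# ... while a two-component geometric mixture has a decreasing one,
# as longer-lived units come to dominate the surviving population.
mixture = lambda k: 0.5 * geometric_pmf(0.2, k) + 0.5 * geometric_pmf(0.6, k)

h_single = [failure_rate(single, k) for k in range(6)]
h_mix = [failure_rate(mixture, k) for k in range(6)]
```

The decreasing failure rate of the mixture is exactly the kind of behaviour that makes expressing mixture characteristics in terms of the component distributions worthwhile.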


We investigate chaotic, memory, and cooling rate effects in the three-dimensional Edwards-Anderson model by doing thermoremanent (TRM) and ac susceptibility numerical experiments and making a detailed comparison with laboratory experiments on spin glasses. In contrast to the experiments, the Edwards-Anderson model does not show any trace of reinitialization processes in temperature change experiments (TRM or ac). A detailed comparison with ac relaxation experiments in the presence of dc magnetic field or coupling distribution perturbations reveals that the absence of chaotic effects in the Edwards-Anderson model is a consequence of the presence of strong cooling rate effects. We discuss possible solutions to this discrepancy, in particular the smallness of the time scales reached in numerical experiments, but we also question the validity of the Edwards-Anderson model to reproduce the experimental results.
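For concreteness, here is a minimal sketch of the model itself: random +-J couplings on a periodic cubic lattice and a zero-temperature quench, under which the energy is non-increasing by construction. This is not the thermoremanent-magnetization or ac protocol used in the paper, only the Hamiltonian those experiments rest on:

```python
import random

DIRS = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # three forward bond directions

def make_ea(L, rng):
    """Random +-1 spins and +-J nearest-neighbour couplings on an
    L x L x L periodic lattice: the Edwards-Anderson model."""
    spins = [[[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)] for _ in range(L)]
    bonds = [[[[rng.choice((-1, 1)) for _ in range(3)] for _ in range(L)] for _ in range(L)] for _ in range(L)]
    return spins, bonds

def local_field(spins, bonds, L, x, y, z):
    """Sum of J_ij * s_j over the six neighbours of site (x, y, z)."""
    h = 0
    for d, (dx, dy, dz) in enumerate(DIRS):
        h += bonds[x][y][z][d] * spins[(x + dx) % L][(y + dy) % L][(z + dz) % L]
        xm, ym, zm = (x - dx) % L, (y - dy) % L, (z - dz) % L
        h += bonds[xm][ym][zm][d] * spins[xm][ym][zm]
    return h

def energy(spins, bonds, L):
    """H = -sum_<ij> J_ij s_i s_j, each bond counted once."""
    e = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                for d, (dx, dy, dz) in enumerate(DIRS):
                    e -= (bonds[x][y][z][d] * spins[x][y][z]
                          * spins[(x + dx) % L][(y + dy) % L][(z + dz) % L])
    return e

def quench_sweep(spins, bonds, L):
    """One zero-temperature sweep: flip a spin iff it does not raise H.
    Flipping s_i changes H by 2 * s_i * local_field."""
    for x in range(L):
        for y in range(L):
            for z in range(L):
                if 2 * spins[x][y][z] * local_field(spins, bonds, L, x, y, z) <= 0:
                    spins[x][y][z] *= -1

rng = random.Random(1)
L = 4
spins, bonds = make_ea(L, rng)
e0 = energy(spins, bonds, L)
for _ in range(5):
    quench_sweep(spins, bonds, L)
e1 = energy(spins, bonds, L)
```

A finite-temperature Metropolis rule would accept energy-raising flips with probability exp(-dE/T); the quench above is just its T -> 0 limit, and the cooling-rate effects discussed in the paper concern how slowly that limit is approached.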


In the decades since Schumpeter's influential writings, economists have pursued research examining the role of innovation in particular industries, at the firm as well as the industry level. Researchers describe innovations as the main trigger of industry dynamics, while policy makers argue that research and education are directly linked to economic growth and welfare; research and education are thus an important objective of public policy. Firms and public research institutions are regarded as the main actors relevant for the creation of new knowledge, which is finally brought to the market through innovations, and policy makers in turn support innovation. Both groups of actors, policy makers and researchers, agree that innovation plays a central role, yet researchers still neglect the role that public policy plays in the field of industrial dynamics. Therefore, the main objective of this work is to learn more about the interdependencies of innovation, policy and public research in industrial dynamics. The overarching research question of this dissertation asks whether it is possible to analyze patterns of industry evolution (from evolution to co-evolution) based on empirical studies of the role of innovation, policy and public research in industrial dynamics. This work starts with a hypothesis-based investigation of traditional approaches to industrial dynamics: the testing of a basic assumption of the core models of industrial dynamics, and the analysis of evolutionary patterns, albeit with an industry driven by public policy as the example. Subsequently it moves to a more explorative approach, investigating co-evolutionary processes. The underlying research questions include the following: Do large firms have an innovation advantage because of their size, attributable to cost spreading? Do firms that plan to grow have more innovations? What role does public policy play in the evolutionary patterns of an industry?
Are the same evolutionary patterns observable as those described in the industry life cycle (ILC) theories? And is it possible to observe regional co-evolutionary processes of science, innovation and industry evolution? Based on two different empirical contexts, the laser and the photovoltaic industries, this dissertation tries to answer these questions, combining an evolutionary with a co-evolutionary approach. The first chapter introduces the topic and the fields on which this dissertation builds. The second chapter provides a new test of the Cohen and Klepper (1996) model of cost spreading, which explains the relationship between innovation, firm size and R&D, using the example of the photovoltaic industry in Germany. First, it is analyzed whether the cost-spreading mechanism explains size advantages in this industry; this is related to the assumption that the incentives to invest in R&D increase with ex-ante output. Furthermore, it is investigated whether firms that plan to grow have more innovative activities. The results indicate that cost spreading does explain size advantages in this industry and, furthermore, that growth plans lead to a higher level of innovative activity. Moreover, the role that public policy plays in industry evolution has not yet been conclusively analyzed in the field of industrial dynamics. In the case of Germany, the introduction of demand-inducing policy instruments stimulated market and industry growth. While this policy immediately accelerated market volume, its effect on industry evolution is more ambiguous. Chapter three therefore analyzes this relationship using a model of industry evolution in which demand-inducing policies are discussed as a possible trigger of development. The findings suggest that these instruments can have the same effect as a technical advance in fostering the growth of an industry and its shakeout.
The fourth chapter explores the regional co-evolution of firm population size, private-sector patenting and public research in the empirical context of German laser research and manufacturing over more than 40 years from the emergence of the industry to the mid-2000s. The qualitative as well as quantitative evidence is suggestive of a co-evolutionary process of mutual interdependence rather than a unidirectional effect of public research on private-sector activities. Chapter five concludes with a summary, the contribution of this work as well as the implications and an outlook of further possible research.


The and RT0 finite element schemes are among the most promising low order elements for use in unstructured mesh marine and lake models. They are both free of spurious elevation modes, have good dispersive properties and have a relatively low computational cost. In this paper, we derive both finite element schemes in the same unified framework and discuss their respective qualities in terms of conservation, consistency, propagation factor and convergence rate. We also highlight the impact that the local variables placement can have on the model solution. The main conclusion that we can draw is that the choice between elements is highly application dependent. We suggest that the element is better suited to purely hydrodynamical applications while the RT0 element might perform better for hydrological applications that require scalar transport calculations.


The Rio Tinto river in SW Spain is a classic example of acid mine drainage and the focus of an increasing amount of research including environmental geochemistry, extremophile microbiology and Mars-analogue studies. Its 5000-year mining legacy has resulted in a wide range of point inputs including spoil heaps and tunnels draining underground workings. The variety of inputs and importance of the river as a research site make it an ideal location for investigating sulphide oxidation mechanisms at the field scale. Mass balance calculations showed that pyrite oxidation accounts for over 93% of the dissolved sulphate derived from sulphide oxidation in the Rio Tinto point inputs. Oxygen isotopes in water and sulphate were analysed from a variety of drainage sources and displayed delta O-18((SO4-H2O)) values from 3.9 to 13.6 parts per thousand, indicating that different oxidation pathways occurred at different sites within the catchment. The most commonly used approach to interpreting field oxygen isotope data applies water and oxygen fractionation factors derived from laboratory experiments. We demonstrate that this approach cannot explain high delta O-18((SO4-H2O)) values in a manner that is consistent with recent models of pyrite and sulphoxyanion oxidation. In the Rio Tinto, high delta O-18((SO4-H2O)) values (11.2-13.6 parts per thousand) occur in concentrated (Fe = 172-829 mM), low pH (0.88-1.4), ferrous iron (68-91% of total Fe) waters and are most simply explained by a mechanism involving a dissolved sulphite intermediate, sulphite-water oxygen equilibrium exchange and finally sulphite oxidation to sulphate with O-2. In contrast, drainage from large waste blocks of acid volcanic tuff with pyritiferous veins also had low pH (1.7). 
Despite this, it had a low delta O-18((SO4-H2O)) value of 4.0 parts per thousand and high concentrations of ferric iron (Fe(III) = 185 mM, total Fe = 186 mM), suggesting a pathway in which ferric iron is the primary oxidant, water is the primary source of oxygen in the sulphate, and sulphate is released directly from the pyrite surface. However, problems remain with the sulphite-water oxygen exchange model, and recommendations are therefore made for future experiments to refine our understanding of oxygen isotopes in pyrite oxidation. (C) 2009 Elsevier B.V. All rights reserved.
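The interpretation rests on a standard two-source oxygen mass balance for sulphate, in which the delta O-18 of the product reflects the fractions of oxygen derived from water and from dissolved O2. The enrichment factors and delta values below are illustrative, laboratory-style numbers, not those derived in this study:

```python
def delta18O_sulphate(x_water, d18O_water, d18O_O2=23.5,
                      eps_water=3.5, eps_O2=-11.2):
    """Two-source isotope mass balance for sulphate oxygen:
    d18O(SO4) = x_w*(d_w + eps_w) + (1 - x_w)*(d_O2 + eps_O2),
    where x_w is the fraction of sulphate oxygen derived from water.
    d18O_O2 = +23.5 is atmospheric O2; the enrichment factors here are
    illustrative placeholders, not values fitted to the Rio Tinto data."""
    return (x_water * (d18O_water + eps_water)
            + (1.0 - x_water) * (d18O_O2 + eps_O2))

# All-water oxygen (ferric-iron pathway) vs. mostly O2-derived oxygen:
low = delta18O_sulphate(x_water=1.0, d18O_water=-4.0)
high = delta18O_sulphate(x_water=0.25, d18O_water=-4.0)
```

The paper's point is that this simple balance, with laboratory fractionation factors, cannot reach the highest observed values without invoking sulphite-water oxygen equilibrium exchange.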


Indirect and direct models of sexual selection make different predictions regarding the quantitative genetic relationships between sexual ornaments and fitness. Indirect models predict that ornaments should have a high heritability and that strong positive genetic covariance should exist between fitness and the ornament. Direct models, on the other hand, make no such assumptions about the level of genetic variance in fitness and the ornament, and are therefore likely to be more important when environmental sources of variation are large. Here we test these predictions in a wild population of the blue tit (Parus caeruleus), a species in which plumage coloration has been shown to be under sexual selection. Using 3 years of cross-fostering data from over 250 breeding attempts, we partition the covariance between parental coloration and aspects of nestling fitness into genetic and environmental components. Contrary to indirect models of sexual selection, but in agreement with direct models, we show that variation in coloration is only weakly heritable (h^2 < 0.11), and that two components of offspring fitness (nestling size and fledgling recruitment) are strongly dependent on parental effects rather than genetic effects. Furthermore, there was no evidence of significant positive genetic covariation between parental colour and offspring traits. Contrary to direct benefit models, however, we find little evidence that variation in colour reliably indicates the level of parental care provided by either males or females. Taken together, these results indicate that the assumptions of indirect models of sexual selection are not supported by the genetic basis of the traits reported on here.


A comparison of the models of Vitti et al. (2000, J. Anim. Sci. 78, 2706-2712) and Fernández (1995c, Livest. Prod. Sci. 41, 255-261) was carried out using two data sets on growing pigs as input. The two models compared were based on similar basic principles, although their aims and calculations differed. The Vitti model employs the rate:state formalism and describes phosphorus (P) flow between four pools representing the P content of gut, blood, bone and soft tissue in growing goats. The Fernández model describes flow and fractional recirculation between P pools in gut, blood and bone in growing pigs. The results from both models showed similar trends for P absorption from gut to blood and net retention in bone with increasing P intake, with the exception of the 65 kg results from Data Set 2 calculated using the Fernández model. Endogenous loss from blood back to gut increased faster with increasing P intake in the Fernández than in the Vitti model for Data Set 1. However, for Data Set 2, endogenous loss increased with increasing P intake using the Vitti model, but decreased when calculated using the Fernández model. Incorporation of P into bone was not influenced by intake in the Fernández model, while in the Vitti model there was an increasing trend. The Fernández model produced a pattern of decreasing resorption in bone with increasing P intake with one of the data sets, which was not observed when using the Vitti model. The pigs maintained their P homeostasis in blood by regulating P excretion in urine. (c) 2005 Elsevier Ltd. All rights reserved.
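A rate:state pool model of this kind is easy to sketch: every flow is first-order in its source pool, and total P is conserved up to intake and excretion. All rate constants and initial pool sizes below are hypothetical placeholders, not the parameters of either published model:

```python
def simulate_p_flows(days=50, dt=0.01, intake=5.0):
    """Euler integration of a four-pool rate:state phosphorus model in the
    spirit of the Vitti scheme: each flow = rate constant * source pool.
    All constants are hypothetical, for illustration only (g P and /day)."""
    gut, blood, bone, soft = 1.0, 1.0, 10.0, 2.0
    k_abs, k_endo = 0.8, 0.2            # gut -> blood, blood -> gut (endogenous loss)
    k_bone_in, k_bone_out = 0.1, 0.01   # blood <-> bone (accretion / resorption)
    k_soft_in, k_soft_out = 0.1, 0.05   # blood <-> soft tissue
    k_faecal, k_urine = 0.5, 0.05       # gut -> faeces, blood -> urine
    excreted = 0.0
    for _ in range(int(days / dt)):
        absorb, endo = k_abs * gut, k_endo * blood
        faecal, urine = k_faecal * gut, k_urine * blood
        bone_in, bone_out = k_bone_in * blood, k_bone_out * bone
        soft_in, soft_out = k_soft_in * blood, k_soft_out * soft
        gut += dt * (intake + endo - absorb - faecal)
        blood += dt * (absorb + bone_out + soft_out
                       - endo - urine - bone_in - soft_in)
        bone += dt * (bone_in - bone_out)
        soft += dt * (soft_in - soft_out)
        excreted += dt * (faecal + urine)
    return gut, blood, bone, soft, excreted

pools = simulate_p_flows()
```

Raising `intake` in this toy model raises both absorption and endogenous loss, which is the kind of intake-response behaviour on which the two published models were compared; urinary excretion is what buffers the blood pool.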


An example of the evolution of the interacting behaviours of parents and progeny is studied using iterative equations linking the frequencies of the gametes produced by the progeny to the frequencies of the gametes in the parental generation. This population genetics approach shows that a model in which both behaviours are determined by a single locus can lead to a stable equilibrium in which the two behaviours continue to segregate. A model in which the behaviours are determined by genes at two separate loci leads eventually to fixation of the alleles at both loci but this can take many generations of selection. Models of the type described in this paper will be needed to understand the evolution of complex behaviour when genomic or experimental information is available about the genetic determinants of behaviour and the selective values of different genomes. (c) 2007 Elsevier Inc. All rights reserved.
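The paper's actual iterative equations (linking progeny gamete frequencies to parental ones) are not reproduced here, but the same style of recursion can be shown with the textbook single-locus selection map, which likewise iterates allele frequencies to a stable equilibrium:

```python
def next_freq(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection at a single diallelic locus:
    p' = p * (p*w_AA + q*w_Aa) / w_bar, with q = 1 - p."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return p * (p * w_AA + q * w_Aa) / w_bar

# Heterozygote advantage gives a stable internal equilibrium at
# p* = (w_Aa - w_aa) / ((w_Aa - w_AA) + (w_Aa - w_aa)) = 2/3 here,
# so both alleles continue to segregate, as in the one-locus
# behaviour model; fitness values are illustrative.
p = 0.1
for _ in range(500):
    p = next_freq(p, w_AA=0.8, w_Aa=1.0, w_aa=0.6)
```

With directional fitnesses instead (e.g. w_AA >= w_Aa >= w_aa), the same recursion drifts to fixation, mirroring the slow loss of variation the two-locus model exhibits.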


Strategy is a contested concept. The generic literature is characterized by a diverse range of competing theories and alternative perspectives. Traditional models of the competitive strategy of construction firms have tended to focus on exogenous factors. In contrast, the resource-based view of strategic management emphasizes the importance of endogenous factors. The more recently espoused concept of dynamic capabilities extends consideration beyond static resources to focus on the ability of firms to reconfigure their operating routines to enable responses to changing environments. The relevance of the dynamics capabilities framework to the construction sector is investigated through an exploratory case study of a regional contractor. The focus on how firms continuously adapt to changing environments provides new insights into competitive strategy in the construction sector. Strong support is found for the importance of path dependency in shaping strategic choice. The case study further suggests that strategy is a collective endeavour enacted by a loosely defined group of individual actors. Dynamic capabilities are characterized by an empirical elusiveness and as such are best construed as situated practices embedded within a social and physical context.


The scaling of metabolic rates to body size is widely considered to be of great biological and ecological importance, and much attention has been devoted to determining its theoretical and empirical value. Most debate centers on whether the underlying power law describing metabolic rates is 2/3 (as predicted by the scaling of surface area/volume relationships) or 3/4 ("Kleiber's law"). Although recent evidence suggests that empirically derived exponents vary among clades with radically different metabolic strategies, such as ectotherms and endotherms, models such as the metabolic theory of ecology depend on the assumption that there is at least a predominant, if not universal, metabolic scaling exponent. Most analyses that claimed to support the predictions of general models, however, failed to control for phylogeny. We used phylogenetic generalized least-squares (PGLS) models to estimate allometric slopes for both basal metabolic rate (BMR) and field metabolic rate (FMR) in mammals. Metabolic rate scaling conformed to no single theoretical prediction, but varied significantly among phylogenetic lineages: in some lineages we found a 3/4 exponent, in others a 2/3 exponent, and in yet others exponents differed significantly from both theoretical values. Analysis of the phylogenetic signal in the data indicated that the assumptions of neither species-level analysis nor independent contrasts were met. Analyses that assumed no phylogenetic signal in the data (species-level analysis) or a strong phylogenetic signal (independent contrasts) therefore returned estimates of allometric slopes that were erroneous in 30% and 50% of cases, respectively. Hence, quantitative estimation of the phylogenetic signal is essential for determining scaling exponents. The lack of evidence for a predominant scaling exponent in these analyses suggests that general models of metabolic scaling, and the macro-ecological theories that depend on them, have little explanatory power.
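PGLS reduces to ordinary generalized least squares once a phylogenetic covariance matrix V is specified. The sketch below uses noiseless toy data, so both fits recover the same slope exactly; with real, noisy data the species-level and phylogenetic estimates diverge, which is the point made above. The two-clade matrix V is made up for illustration:

```python
import numpy as np

def pgls_slope(log_mass, log_rate, V):
    """Generalized least-squares fit of log(metabolic rate) on log(mass)
    under a phylogenetic covariance matrix V:
    beta = (X' V^-1 X)^-1 X' V^-1 y.
    With V = I this reduces to the species-level (non-phylogenetic)
    regression; independent contrasts correspond to a V built from a
    fully resolved tree with strong signal."""
    X = np.column_stack([np.ones_like(log_mass), log_mass])
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ log_rate)
    return beta[1]  # the allometric exponent

# Toy log-scale data constructed to lie exactly on a 3/4 slope; V is a
# hypothetical covariance with two clades sharing branch length 0.5.
mass = np.array([1.0, 2.0, 3.0, 4.0])
rate = 1.2 + 0.75 * mass
V = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])

b_star = pgls_slope(mass, rate, np.eye(4))  # "no phylogenetic signal"
b_tree = pgls_slope(mass, rate, V)          # with shared ancestry
```

Estimating the *strength* of the phylogenetic signal, rather than assuming V = I or a full tree covariance, amounts to fitting a scaling of V's off-diagonal elements, which is what the analyses above argue is essential.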