981 results for cosmological parameters from CMBR


Relevance:

100.00%

Publisher:

Abstract:

We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first one uses a standard Wald-type statistic. The second approach assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets for the endogenous variables are derived by a projection technique. The latter approach has two advantages: first, its validity is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show that they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases in transfers from Moroccan expatriates.
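
As a minimal sketch of the projection idea (not the paper's Moroccan CGE application), the snippet below approximates a 95% parameter confidence set by Wald-screened draws and takes the range of a toy model output over that set; solve_model, theta_hat and V_hat are illustrative assumptions. Because the interval is the image of the whole parameter set, its coverage does not rely on linearising the model.

```python
# Projection-type confidence interval for an endogenous variable (illustrative sketch).
import numpy as np
from scipy.stats import chi2

def solve_model(theta):
    # Placeholder for a nonlinear model solution mapping parameters to an
    # endogenous variable (e.g., household consumption after a transfer shock).
    return np.log1p(theta[0]) * np.exp(0.5 * theta[1])

rng = np.random.default_rng(0)
theta_hat = np.array([1.0, 0.3])            # point estimate of the free parameters (assumed)
V_hat = np.diag([0.04, 0.01])               # estimated parameter covariance (assumed)
cutoff = chi2.ppf(0.95, df=len(theta_hat))  # 95% Wald region cutoff

draws = rng.multivariate_normal(theta_hat, V_hat, size=20_000)
dev = draws - theta_hat
wald = np.einsum('ij,jk,ik->i', dev, np.linalg.inv(V_hat), dev)
inside = draws[wald <= cutoff]              # draws lying in the parameter confidence set

outputs = np.array([solve_model(t) for t in inside])
print("projection 95% interval:", outputs.min(), outputs.max())
```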

Relevance:

100.00%

Publisher:

Abstract:

This study assesses the suitability of SAR images of medium and high resolution for characterising land-cover types in urban environments. It is based on textural approaches using second-order statistics. More specifically, we look for the texture parameters that best discriminate urban objects. Radarsat-1 images in fine mode with HH polarisation and Radarsat-2 images in fine mode with dual and quad polarisation and in ultra-fine mode with HH polarisation were used for this purpose. The land-cover classes sought were dense built-up areas, medium-density built-up areas, low-density built-up areas, industrial and institutional buildings, low-density vegetation, dense vegetation and water. The nine texture parameters analysed were grouped into families according to their mathematical definition. The similarity/dissimilarity parameters comprise Homogeneity, Contrast, Similarity and Dissimilarity. The disorder parameters are Entropy and the Angular Second Moment. Standard Deviation and Correlation are dispersion parameters, and the Mean forms a family of its own. The experiments show that certain combinations of texture parameters from different families give very good classification results, whereas other combinations of texture parameters with similar mathematical definitions produce poorer results. Moreover, although using several texture parameters improves the classifications, performance levels off beyond three parameters. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To overcome this problem, a radiometric compensation model based on the radar cross-section (RCS) was developed. A radar simulation based on a digital surface model of the area made it possible to extract the backscattering zones of the buildings and to analyse the corresponding backscatter. A rule for compensating cardinal effects, based solely on the responses of objects as a function of their orientation with respect to the illumination plane of the radar beam, was devised. Applying this algorithm to RADARSAT-1 and RADARSAT-2 images in HH, HV, VH and VV polarisations yielded considerable gains and removed most of the classification errors due to cardinal effects.
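
The second-order texture parameters named above are the standard grey-level co-occurrence matrix (GLCM) measures; the sketch below, using scikit-image on a synthetic window, shows how such a feature vector might be computed. The window size and quantisation are assumed choices, not the study's settings.

```python
# GLCM texture features for one image window (illustrative sketch).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
window = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # quantised SAR window (synthetic)

glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)

# Classic co-occurrence measures available directly from scikit-image
features = {name: float(graycoprops(glcm, name).mean())
            for name in ("homogeneity", "contrast", "dissimilarity", "ASM", "correlation")}

# Entropy and GLCM mean computed directly from the normalised co-occurrence matrix
p = glcm.astype(float)                     # shape: (levels, levels, n_dist, n_angle)
logp = np.zeros_like(p)
np.log2(p, out=logp, where=p > 0)
n_slices = p.shape[2] * p.shape[3]
features["entropy"] = float(-(p * logp).sum() / n_slices)
features["mean"] = float((np.arange(32)[:, None, None, None] * p).sum() / n_slices)
print(features)
```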

Relevance:

100.00%

Publisher:

Abstract:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions, but even a skilled operator finds it difficult to attain the optimum values every time. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes.

In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirlosker turn master 35 lathe. S/N ratio and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yielded the optimum machining parameters from the experiments.

Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions in the search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated against published results from the literature. Simulated Annealing (SA), Particle Swarm Optimization (PSO), a Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness and accuracy, and it was observed that they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB, and for each evolutionary method the optimum cutting conditions for better surface finish are provided. The computational results using SA clearly demonstrate that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. PSO is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations; the results show that PSO provides better results and is also more computationally efficient. For the genetic algorithms, the proposed IGA provides better results than the conventional GA; the improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial population scheme to provide a faster search mechanism.
Finally, these algorithms were compared on the specific example of dry turning of SS 420, arriving at the optimum feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures with which nature optimizes its own systems.
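
As an illustration of one of the methods listed (particle swarm optimization), the sketch below minimises a hypothetical second-order response-surface model of surface roughness. The coefficients, parameter bounds and PSO settings are invented for illustration and are not the thesis values.

```python
# Particle swarm optimization of a toy surface-roughness response-surface model.
import numpy as np

def ra(x):
    v, f, d = x[..., 0], x[..., 1], x[..., 2]   # cutting speed, feed, depth of cut
    return 2.0 - 0.01 * v + 8.0 * f + 0.5 * d + 30.0 * f**2 + 5e-5 * v**2 + 0.002 * v * f

lo = np.array([60.0, 0.05, 0.5])    # assumed lower bounds on (speed, feed, depth)
hi = np.array([180.0, 0.25, 2.0])   # assumed upper bounds

rng = np.random.default_rng(2)
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5   # swarm size, iterations, inertia, accelerations
x = rng.uniform(lo, hi, size=(n, 3))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), ra(x)
g = pbest[pbest_f.argmin()].copy()             # global best position

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)                 # keep particles inside the bounds
    f = ra(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print("optimum (speed, feed, depth):", g, "predicted Ra:", float(ra(g)))
```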

Relevance:

100.00%

Publisher:

Abstract:

We design optimal band pass filters for electrons in semiconductor heterostructures, under a uniform applied electric field. The inner cells are chosen to provide a desired transmission window. The outer cells are then designed to transform purely incoming or outgoing waves into Bloch states of the inner cells. The transfer matrix is interpreted as a conformal mapping in the complex plane, which allows us to write constraints on the outer cell parameters, from which physically useful values can be obtained.
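
A minimal sketch of the underlying transfer-matrix bookkeeping is given below for a stack of constant-potential layers with a single effective mass and no applied field, i.e. a simplified setting rather than the field-biased, Bloch-state-matched design described above; the layer widths and barrier heights are illustrative.

```python
# 1D transfer-matrix transmission through piecewise-constant layers (illustrative sketch).
import numpy as np

HBAR2_2M = 0.569  # hbar^2 / (2 m*) in eV*nm^2 for m* = 0.067 m_e (GaAs-like, assumed)

def wavevector(E, V):
    return np.sqrt(complex(E - V) / HBAR2_2M)   # complex for evanescent regions

def transmission(E, widths, potentials):
    """potentials = [lead_in, layer_1, ..., layer_N, lead_out]; widths for the inner layers."""
    ks = [wavevector(E, V) for V in potentials]
    xs = np.concatenate(([0.0], np.cumsum(widths)))      # interface positions [nm]

    def D(k, x):   # matches psi and psi' at position x for wave vector k
        return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                         [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

    M = np.eye(2, dtype=complex)
    for j in range(len(ks) - 1):
        M = M @ np.linalg.solve(D(ks[j], xs[j]), D(ks[j + 1], xs[j]))
    t = 1.0 / M[0, 0]                                     # no wave incoming from the right
    return (ks[-1].real / ks[0].real) * abs(t) ** 2

# double-barrier example: two 0.3 eV barriers around a 5 nm well (hypothetical numbers)
widths = [3.0, 5.0, 3.0]                 # nm
potentials = [0.0, 0.3, 0.0, 0.3, 0.0]   # eV
for E in (0.05, 0.10, 0.15, 0.20):
    print(f"E = {E:.2f} eV  T = {transmission(E, widths, potentials):.3e}")
```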

Relevance:

100.00%

Publisher:

Abstract:

A perusal of the literature shows that most of the earlier works on ecology and productivity in Indian waters have been confined to estuarine ecosystems and the contiguous neritic and oceanic water bodies. Although some information is available on certain aspects of the environmental parameters of the coastal lagoon ecosystem, there is hardly any in-depth study of the ecological and productivity problems of a derelict saline lagoonal environment in India. In view of this, the researcher undertook a study on the subject of ecology and productivity of a typical coastal saline lagoon (Pillaimadam lagoon), situated along the southeast coast of India, for a period of two years (July 1982 to June 1984), and the results of the investigations are embodied in the present thesis, entitled "Studies on the ecology and productivity of saline lagoon".

Relevance:

100.00%

Publisher:

Abstract:

The thesis begins with a review of the basic elements of the general theory of relativity (GTR), which forms the basis for the theoretical interpretation of observations in cosmology. The first chapter also discusses the standard model in cosmology, namely the Friedmann model, its predictions and its problems. We have also made a brief discussion of fractals and of inflation in the early universe in the first chapter. In the second chapter we discuss the formulation of a new approach to cosmology, namely a stochastic approach. In this model, the dynamics of the early universe is described by a set of non-deterministic, Langevin-type equations, and we derive the solutions using the Fokker-Planck formalism. Here we demonstrate how the problems with the standard model can be eliminated by introducing the idea of stochastic fluctuations in the early universe. Many recent observations indicate that the present universe may be approximated by a many-component fluid, and we assume that only the total energy density is conserved. This, in turn, leads to energy transfer between the different components of the cosmic fluid, and fluctuations in such energy transfer can certainly induce fluctuations in the mean value of the factor w in the equation of state p = wρ, resulting in a fluctuating expansion rate for the universe. The third chapter discusses the stochastic evolution of the cosmological parameters in the early universe using the new approach. The penultimate chapter is about the refinements to be made in the present model by means of a new deterministic model. The concluding chapter presents a discussion of other problems with conventional cosmology, such as the fractal correlation of the galactic distribution, and the author attempts an explanation of this problem using the stochastic approach.
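
A toy version of the fluctuating-w idea can be written down directly: let w perform an Ornstein-Uhlenbeck random walk about a mean value and integrate the flat-FRW equations with the Euler-Maruyama scheme. This is only an illustrative sketch, not the thesis's Fokker-Planck treatment, and all numbers are arbitrary.

```python
# Euler-Maruyama integration of a flat FRW universe with a stochastically fluctuating w.
# Units chosen so that H = sqrt(rho), i.e. 8*pi*G/3 = 1; rho' = -3 H (1 + w) rho.
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 1e-4, 50_000
w_mean, tau, sigma = 1 / 3, 0.01, 0.3      # mean w (radiation-like), OU timescale, noise amplitude

rho, a, w = 1.0, 1.0, w_mean
for i in range(steps):
    H = np.sqrt(rho)
    # Ornstein-Uhlenbeck update for the equation-of-state factor w
    w += (-(w - w_mean) / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    rho += -3.0 * H * (1.0 + w) * rho * dt
    a *= np.exp(H * dt)                    # fluctuating expansion rate

print(f"t = {steps * dt:.2f}: a = {a:.3f}, rho = {rho:.4f}, w = {w:.3f}")
```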

Relevance:

100.00%

Publisher:

Abstract:

The use of activated carbons and activated cokes offers an alternative to conventional processes for reducing NOx emissions in flue gases. Nitric oxide is adsorbed on these materials and catalytically reduced to N2. There is as yet no unified explanation of the processes taking place and of the reaction mechanisms. The results of the scientific studies published so far differ widely, if they are not outright contradictory. In this work, the adsorption and reaction of nitric oxide on activated carbons and cokes in the presence of oxygen and water vapour is investigated by measuring NO breakthrough curves and thermal desorption spectra. For the experimental investigations, a test rig consisting of a gas-mixture preparation unit, a fixed-bed reactor and gas analysis equipment was designed and built. The investigations were carried out at temperatures between 100 and 150 °C. The NO, O2 and H2O concentrations were chosen to match the flue-gas composition of coal-fired power plants. The experimental work concentrates on an activated carbon made from oil-palm shells, produced in a rotary-kiln reactor at the Institut für Thermische Energietechnik of the University of Kassel. The experimental results show that NO adsorption, reduction and oxidation, NO2 formation, adsorption and reductive desorption, H2O adsorption and O2 gasification all take place simultaneously during the process. At low temperatures, NO2 formation and adsorption are favoured. NO reduction proceeds via adsorbed NO, with formation of CO2 and CO. O2 gasification creates free active Cf sites for NO reaction and adsorption. Water vapour is adsorbed on the activated carbon and occupies active sites needed for these processes. Kinetic and equilibrium parameters of NO sorption are determined from the experimental results, and a simplified mathematical model of the fixed-bed reactor is set up, which serves to calculate the NO breakthrough curves at different temperatures.
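
A minimal sketch of the kind of simplified fixed-bed model mentioned at the end (plug flow with a linear-driving-force rate law and a linear sorption isotherm) is given below; the bed geometry, velocity and sorption parameters are illustrative, not the fitted values from this work.

```python
# Fixed-bed breakthrough curve with an LDF rate law, explicit upwind scheme (illustrative sketch).
import numpy as np

L, nz = 0.10, 50                     # bed length [m], grid cells
u, eps = 0.05, 0.4                   # interstitial velocity [m/s], bed voidage
K, k_ldf = 500.0, 0.01               # linear isotherm q* = K c, LDF coefficient [1/s] (assumed)
c_in = 1.0                           # normalised inlet NO concentration

dz = L / nz
dt = 0.5 * dz / u                    # CFL-limited time step
t_end = 4000.0
c = np.zeros(nz)                     # fluid-phase concentration profile
q = np.zeros(nz)                     # adsorbed-phase loading profile
phase_ratio = (1.0 - eps) / eps

t, outlet = 0.0, []
while t < t_end:
    dq = k_ldf * (K * c - q)                         # linear driving force
    c_up = np.concatenate(([c_in], c[:-1]))          # upwind neighbour values
    c = c + dt * (-u * (c - c_up) / dz - phase_ratio * dq)
    q = q + dt * dq
    t += dt
    outlet.append(c[-1])                             # breakthrough curve at the bed outlet

print("outlet c/c_in at t_end:", outlet[-1])
```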

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Glaucoma is the third leading cause of blindness worldwide, and timely diagnosis requires assessing the cupping of the optic nerve, which is related to the optic disc area. Some reports suggest that large disc areas (macrodiscs) may be protective, while others associate them with susceptibility to glaucoma. Objective: To establish whether there is an association between macrodiscs and glaucoma in individuals studied with optical coherence tomography (OCT) at the Fundación Oftalmológica Nacional. Methods: Cross-sectional association study including 25 eyes with primary open-angle glaucoma and 74 healthy eyes. Each individual underwent an ophthalmological examination, computerised visual field testing and OCT of the optic nerve. Optic disc areas and the number of macrodiscs were compared between groups; macrodiscs were defined, following Jonas, as an area greater than the mean plus two standard deviations and, following Adabache (who studied a Mexican population), as an area ≥ 3.03 mm². Results: The mean optic disc area was 2.78 mm² in the glaucoma group and 2.80 mm² in the healthy group. According to the Jonas criterion, one macrodisc was observed in the healthy group; according to the Adabache criterion, eight macrodiscs were found in the glaucoma group and twenty-five in the healthy group (OR = 0.92, 95% CI = 0.35 to 2.43). Discussion: There was no significant difference (P = 0.870) in disc area between the two groups, and the percentage of macrodiscs was similar in both groups, although the small number of macrodiscs did not allow statistical conclusions to be drawn about the association between macrodiscs and glaucoma.
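
The reported odds ratio and confidence interval can be reproduced from the stated counts (8 macrodiscs among 25 glaucoma eyes versus 25 among 74 healthy eyes under the Adabache criterion); a minimal sketch with a Wald-type interval is shown below.

```python
# Odds ratio and 95% Wald confidence interval from the 2x2 table implied by the abstract.
import math

a, b = 8, 25 - 8      # glaucoma eyes: macrodisc / no macrodisc
c, d = 25, 74 - 25    # healthy eyes: macrodisc / no macrodisc

odds_ratio = (a / b) / (c / d)
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
print(f"OR = {odds_ratio:.2f}  95% CI = ({lo:.2f}, {hi:.2f})")   # OR = 0.92, CI = (0.35, 2.43)
```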

Relevance:

100.00%

Publisher:

Abstract:

Improvements in the resolution of satellite imagery have enabled the extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a 2006 flood event on the River Dee, North Wales, UK, using a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Using a planar water surface interpolated between the gauged upstream and downstream water elevations as an approximation, the water surface elevations at points along the flooded extent are compared to their 'expected' values. The pattern of errors between the two shows a roughly normal distribution; however, when plotted against coordinates there is obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a t-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this t-test are selected on the basis of their errors, and the selection criteria enable an evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between the approximate expected elevations and the actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
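
A minimal sketch of the planar-surface comparison described above: outline elevations are compared with a plane interpolated between the gauged upstream and downstream levels, and the errors are summarised by their RMSE and a crude lag-1 autocorrelation. The elevations below are synthetic; the study's data are not reproduced.

```python
# Comparison of outline water-surface elevations against a planar approximation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)
n = 120
chainage = np.linspace(0.0, 12_000.0, n)          # distance downstream along the reach [m]
z_up, z_down = 18.5, 12.0                         # gauged water levels [m] (assumed)
z_plane = z_up + (z_down - z_up) * chainage / chainage[-1]

z_obs = z_plane + rng.normal(0.0, 0.8, n)         # synthetic outline elevations from a LiDAR DEM
err = z_obs - z_plane

rmse = np.sqrt(np.mean(err ** 2))
print(f"RMSE = {rmse:.2f} m, mean = {err.mean():.2f} m, std = {err.std(ddof=1):.2f} m")

# crude check for spatial autocorrelation: lag-1 correlation of errors along the outline
lag1 = np.corrcoef(err[:-1], err[1:])[0, 1]
print(f"lag-1 autocorrelation = {lag1:.2f}")
```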

Relevance:

100.00%

Publisher:

Abstract:

The alignment of model amyloid peptide YYKLVFFC is investigated in bulk and at a solid surface using a range of spectroscopic methods employing polarized radiation. The peptide is based on a core sequence of the amyloid beta (A beta) peptide, KLVFF. The attached tyrosine and cysteine units are exploited to yield information on alignment and possible formation of disulfide or dityrosine links. Polarized Raman spectroscopy on aligned stalks provides information on tyrosine orientation, which complements data from linear dichroism (LD) on aqueous solutions subjected to shear in a Couette cell. LD provides a detailed picture of alignment of peptide strands and aromatic residues and was also used to probe the kinetics of self-assembly. This suggests initial association of phenylalanine residues, followed by subsequent registry of strands and orientation of tyrosine residues. X-ray diffraction (XRD) data from aligned stalks is used to extract orientational order parameters from the 0.48 nm reflection in the cross-beta pattern, from which an orientational distribution function is obtained. X-ray diffraction on solutions subject to capillary flow confirmed orientation in situ at the level of the cross-beta pattern. The information on fibril and tyrosine orientation from polarized Raman spectroscopy is compared with results from NEXAFS experiments on samples prepared as films on silicon. This indicates fibrils are aligned parallel to the surface, with phenyl ring normals perpendicular to the surface. Possible disulfide bridging leading to peptide dimer formation was excluded by Raman spectroscopy, whereas dityrosine formation was probed by fluorescence experiments and was found not to occur except under alkaline conditions. Congo red binding was found not to influence the cross-beta XRD pattern.
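
Extracting an orientational order parameter from the azimuthal profile of a fibre reflection reduces to an intensity-weighted average of cos² φ. The sketch below does this for a synthetic Gaussian profile and ignores any geometrical correction relating the reflection normal to the fibril axis; it is not the paper's analysis of the 0.48 nm reflection.

```python
# Orientational order parameter <P2> from an azimuthal intensity profile (illustrative sketch).
import numpy as np

phi = np.radians(np.linspace(0.0, 90.0, 181))          # azimuthal angle from the fibre axis
I = np.exp(-0.5 * (np.degrees(phi) / 15.0) ** 2)       # synthetic azimuthal profile (15 deg width)

weight = I * np.sin(phi)                               # sin(phi) accounts for the solid angle
cos2 = (weight * np.cos(phi) ** 2).sum() / weight.sum()
P2 = 0.5 * (3.0 * cos2 - 1.0)
print(f"<cos^2 phi> = {cos2:.3f}, <P2> = {P2:.3f}")
```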

Relevance:

100.00%

Publisher:

Abstract:

A unique parameterization of the perspective projections in all whole-numbered dimensions is reported. The algorithm for generating a perspective transformation from parameters, and for recovering parameters from a transformation, is a modification of the Givens orthogonalization algorithm. The algorithm for recovering a perspective transformation from a perspective projection is a modification of Roberts' classical algorithm. Both algorithms have been implemented in Pop-11 with call-out to the NAG Fortran libraries. Preliminary Monte Carlo tests show that the transformation algorithm is highly accurate, but that the projection algorithm cannot recover magnitude and shear parameters accurately. However, there is reason to believe that the projection algorithm might improve significantly with the use of many corresponding points, or with multiple perspective views of an object. Previous parameterizations of the perspective transformations in the computer graphics and computer vision literature are discussed.
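
The paper's own parameterization is not reproduced here, but a closely related, standard computation gives the flavour: a 3x4 perspective (camera) matrix can be split into intrinsic, rotation and translation parameters with an RQ factorization, which is itself built from Givens rotations. The matrices below are synthetic.

```python
# Decomposing a 3x4 perspective projection matrix P = K [R | t] via RQ factorization (illustrative sketch).
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    K, R = rq(P[:, :3])
    # fix signs so that the intrinsic matrix has a positive diagonal
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t

# synthetic example: known intrinsics and pose, then recover them
K_true = np.array([[800.0, 2.0, 320.0], [0.0, 780.0, 240.0], [0.0, 0.0, 1.0]])
angle = np.radians(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 2.0])
P = K_true @ np.hstack([R_true, t_true[:, None]])

K, R, t = decompose_projection(P)
print(np.allclose(K, K_true), np.allclose(R, R_true), np.allclose(t, t_true))
```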

Relevance:

100.00%

Publisher:

Abstract:

It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This 'Cartesian' description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise 'blueprint' of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of 'fundamental', measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the 'computational neuroanatomy' strategy for neuroscience databases.
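
A minimal sketch of the 'Cartesian' tracing-file description mentioned above: a dendritic tree stored as a list of cylinders with parent connectivity (in the spirit of SWC-style files), from which a classical morphometric quantity such as branch order can be derived. The five-compartment tree is made up for illustration.

```python
# Cylinder-based dendritic tree and branch-order computation (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Compartment:
    ident: int
    parent: int          # -1 marks the root (soma)
    x: float
    y: float
    z: float
    radius: float

tree = [
    Compartment(1, -1, 0.0, 0.0, 0.0, 5.0),    # soma
    Compartment(2, 1, 10.0, 0.0, 0.0, 1.2),
    Compartment(3, 2, 20.0, 5.0, 0.0, 0.9),    # bifurcation daughter
    Compartment(4, 2, 20.0, -5.0, 0.0, 0.8),   # bifurcation daughter
    Compartment(5, 3, 30.0, 8.0, 0.0, 0.6),
]

by_id = {c.ident: c for c in tree}
children = {}
for c in tree:
    children.setdefault(c.parent, []).append(c.ident)

def branch_order(ident):
    """Number of bifurcations crossed on the path from the soma to a compartment."""
    order, node = 0, by_id[ident]
    while node.parent != -1:
        parent = by_id[node.parent]
        if len(children.get(parent.ident, [])) > 1:
            order += 1
        node = parent
    return order

for c in tree:
    print(f"compartment {c.ident}: branch order {branch_order(c.ident)}")
```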

Relevance:

100.00%

Publisher:

Abstract:

This paper presents evidence for several features of the population of chess players and the distribution of their performances, measured in terms of Elo ratings and by computer analysis of moves. Evidence that ratings have remained stable since the inception of the Elo system in the 1970s is given in several forms: by showing that the population of strong players fits a simple logistic-curve model without inflation, by plotting players' average error against the FIDE category of tournaments over time, and by showing that the skill parameters of a model based on computer analysis keep a nearly constant relation to Elo rating across that time. The distribution of the model's Intrinsic Performance Ratings can hence be used to compare populations that have limited interaction, such as players in a national chess federation and in FIDE, and to ascertain relative drift in their respective rating systems.
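
The logistic relation underlying the Elo system, on which the stability argument above relies, is easy to state explicitly: the sketch below computes the expected score for a given rating difference and inverts an average score back into a performance-rating difference. The numbers are illustrative.

```python
# Elo expected score and its inverse (performance-rating difference).
import math

def expected_score(r_a, r_b):
    """Elo expected score for A vs B: 1 / (1 + 10^((r_b - r_a) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def performance_diff(score):
    """Rating difference implied by an average score (inverse of the logistic above)."""
    return -400.0 * math.log10(1.0 / score - 1.0)

print(expected_score(2700, 2500))   # ~0.76 for a 200-point rating gap
print(performance_diff(0.76))       # ~200 points back again
```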

Relevance:

100.00%

Publisher:

Abstract:

A procedure is presented for obtaining conformational parameters from oriented but non-crystalline polymers. This is achieved by comparison of the experimental wide angle X-ray scattering with that calculated from models but in such a way that foreknowledge of the orientation distribution function is not required. X-ray scattering intensity values for glassy isotactic poly(methylmethacrylate) are analysed by these techniques. The method could be usefully applied to other oriented molecular systems such as liquid crystalline materials.

Relevance:

100.00%

Publisher:

Abstract:

Procedures for obtaining molecular orientational parameters from wide angle X-ray scattering patterns of samples of thermotropic liquid crystalline polymers are presented. The methods described are applied to an extrusion-aligned sample of a random copolyester of poly(ethylene terephthalate) (PET) and p-acetoxybenzoic acid. Values of the orientational parameters are obtained from both the interchain and intrachain maxima in the scattering pattern. The differences in the values so derived suggest some level of local rotational correlation.