897 results for Transformation-based semi-parametric estimators
Abstract:
Contemporary societies face the challenge of integrating into, and adapting to, a process of transformation aimed at building knowledge societies. This process owes much of its momentum to higher education institutions, which constitute a privileged space where a society's future is built from knowledge, and which must confront the new social, economic and political challenges affecting every country in the world. The quest for quality thus becomes a constant process of improvement, and interest in evaluation at the university level emerges. This research therefore addresses evaluation in higher education and engages with the current debate on the changes brought about by institutional evaluations, which pose a challenge because they rest on an awareness grounded in a culture of quality. Self-evaluation is a strategy that allows higher education institutions to carry out comprehensive assessment processes aimed at identifying the weaknesses and the factors that need improvement. The outcome leads to the design and implementation of an improvement plan for the institution, academic programme or curriculum. Guided by Stufflebeam's (1987) systemic CIPP evaluation model, the implementation of self-evaluation was analysed holistically through its context, planning, process and product. The objectives of the thesis are thus to identify how the second self-evaluation was carried out in order to obtain high-quality accreditation, and how the improvement plan was implemented, in the undergraduate programmes in Accounting and Business Management of the Faculty of Administration Sciences of the Universidad del Valle in Colombia. Drawing on neo-institutional theory, the changes that emerged after the self-evaluation were also analysed and interpreted, making it possible to achieve the aims of the research. The methodology follows a case-study strategy in the two academic programmes, with a mixed approach in which a qualitative phase of semi-structured interviews is complemented by a quantitative phase of surveys. Institutional documents from the programmes and the faculty were also considered. These three instruments provided greater objectivity and efficiency during the research. The results reveal that the two programmes studied used procedures and actions consistent with the Universidad del Valle model, although adaptations to their own needs and context were required; this allowed the self-evaluation process to be carried through to completion and gave rise to certain changes. The components Academic Processes and Teaching Staff were the most developed, followed by Organization, Administration and Management, and Human, Physical and Financial Resources. Among the least developed components were Alumni and Institutional Well-being. The conclusions revealed that relying on a strong institutional framework gives the programmes a sense of identity and support.
It should be noted that it is essential, on the one hand, to broaden communication about the self-evaluation and its results and, on the other, to monitor the improvement plans continuously, in order to achieve significant changes and thereby root the culture of quality and innovation more deeply in the academic community. The findings of this thesis can contribute to a better understanding of the implementation of self-evaluation and improvement plans, as well as of the facilitating and limiting factors, the obstacles to evaluation processes, and the generation of change in academic programmes. In this sense, the research serves as a guide and a reflection on the areas where results are weakest. It also reveals the influence of institutional frameworks and of internal and external constraints and tensions, showing a certain degree of agency exercised through strategies by decision-makers in universities. It can be concluded that quality, evaluation, change and innovation are concepts inherent to the perspective of organizational learning and the mobility of knowledge.
Abstract:
Technological change has structuring effects on the organization of care in our health system. Health professionals and patients, the main users of medical innovations, are key actors in the trajectories followed by new health technologies. To develop medical technologies that are more effective, safe and user-friendly, many propose intensifying collaboration between users and developers. This research examines this premise about user participation in medical innovation processes. Its general objective is to better understand the collaboration between the users and developers involved in the transformation of medical innovations. Adopting a sociotechnical analytical framework, this thesis by articles is organized around three objectives: 1) to describe how the scientific literature defines the objectives, methods and issues of user engagement in the development of medical innovations; 2) to analyse the perspectives of users and developers of medical technologies on their collaboration in the innovation process; and 3) to analyse how users are mobilized, in practice, in the development of a medical innovation. The first objective draws on a structured synthesis of the scientific literature (n=101) on user participation in medical innovation processes. This synthesis identified the methods applied or proposed for involving users, the normative arguments put forward, and the main issues raised. The second objective rests on the analysis of three deliberative focus groups and a plenary session involving users and developers (n=19) of medical technologies. The analysis examined their perspectives on various approaches to collaboration in innovation processes. The third objective involves the study of an innovation in electrophysiology during the clinical research phase. This single case study draws on a qualitative analysis of clinical studies (n=57), of editorials and knowledge syntheses in specialized medical journals (n=15) covering a ten-year period (1999 to 2008), and of semi-structured interviews with key actors involved in the innovation process (n=3). This study provided a better understanding of how users make sense of, appropriate and legitimize a medical innovation in a clinical research context. The overall contribution of this thesis is a better understanding of what users bring to medical innovation processes, and of the capacity of that involvement to align technological development more effectively with the objectives of the health system.
Abstract:
Positron emission tomography (PET) is a molecular imaging modality that uses radiotracers labelled with positron-emitting isotopes to probe and quantify biological and physiological processes. It is currently used mainly in oncology, but increasingly also in cardiology, neurology and pharmacology. It is a modality intrinsically capable of providing functional information on cellular metabolism with high sensitivity. Its main limitations are low spatial resolution and a lack of quantification accuracy. To overcome these limits, which stand in the way of broader clinical applications of PET, new acquisition systems are equipped with a large number of small detectors with better detection performance. Image reconstruction uses iterative stochastic algorithms, which are better suited to low-count acquisitions. As a result, reconstruction time has become too long for clinical use. To reduce it, the acquisition data are compressed and accelerated, generally less accurate, versions of the iterative stochastic algorithms are used. The performance gains brought by the larger number of detectors are thus limited by computing-time constraints. To break out of this loop and allow the use of robust reconstruction algorithms, much work has gone into accelerating these algorithms on high-performance GPU (Graphics Processing Units) computing devices. In this work we joined this community effort to develop, and introduce into the clinic, powerful reconstruction algorithms that improve spatial resolution and quantification accuracy in PET. We first worked on strategies for GPU acceleration of PET image reconstruction from list-mode acquisition data. List mode offers many advantages over reconstruction from sinograms: among others, it makes it easy to implement accurate motion correction and time-of-flight (TOF) information to improve quantification accuracy, and it allows the use of spatio-temporal basis functions to perform 4D reconstruction and estimate metabolic kinetic parameters accurately. However, the use of this mode remains very limited in the clinic, where PET is mostly used to estimate the standardized uptake value (SUV), a semi-quantitative measure that limits the functional character of PET. Our contributions are the following: - The development of a new strategy for GPU acceleration of the 3D LM-OSEM (List-Mode Ordered-Subset Expectation-Maximization) algorithm, including the computation of the sensitivity matrix incorporating the patient attenuation factors and the detector normalization coefficients.
The computation time obtained is not only compatible with clinical use of 3D LM-OSEM algorithms, but also makes it possible to envisage fast reconstructions for advanced PET applications such as real-time dynamic studies and the reconstruction of parametric images directly from acquisition data. - The development and GPU implementation of the Multigrid/Multiframe approach to accelerate the LMEM (List-Mode Expectation-Maximization) algorithm. The objective was a new strategy for accelerating the reference algorithm LMEM, which is convergent and powerful but converges very slowly. The results obtained point towards near-real-time reconstructions, both for examinations with large amounts of acquisition data and for gated dynamic acquisitions. Moreover, in the clinic, quantification is often performed from acquisition data stored as sinograms, which are generally compressed. Earlier work has shown, however, that this approach to accelerating reconstruction reduces quantification accuracy and degrades spatial resolution. For this reason, we parallelized and implemented on GPU the AW-LOR-OSEM (Attenuation-Weighted Line-of-Response OSEM) algorithm, a version of 3D OSEM that reconstructs from uncompressed sinograms and incorporates the attenuation and normalization corrections into the sensitivity matrices. We compared two implementation approaches: in the first, the system matrix (SM) is computed on the fly during reconstruction, while the second uses a pre-computed SM with better accuracy. The results show that the first implementation offers a computational efficiency about twice that of the second, and the reported reconstruction times are compatible with clinical use of both strategies.
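As an illustration of the reconstruction scheme named above, the following minimal NumPy sketch implements one list-mode OSEM subset update with a toy system matrix. The row-access function and event list are placeholders, and everything the thesis actually adds (GPU parallelization, TOF, motion correction) is omitted; attenuation and normalization would enter only through the sensitivity term.

```python
import numpy as np

def lm_osem_subset(image, events, sys_row, sensitivity):
    """One list-mode OSEM subset update.

    image       : current activity estimate, shape (J,)
    events      : list-mode events in this subset
    sys_row(e)  : system-matrix row a_e (shape (J,)) for event e, i.e. the
                  detection probabilities of each voxel along that LOR
    sensitivity : per-voxel sensitivity s_j (back-projection over all LORs,
                  where attenuation and normalization factors would be folded in)
    """
    back = np.zeros_like(image)
    for e in events:
        a = sys_row(e)
        proj = a @ image                # forward-project the current estimate
        if proj > 0:
            back += a / proj            # back-project the ratio
    return image * back / sensitivity   # multiplicative EM update

# toy problem: 4 voxels, 50 random LORs (illustration only)
rng = np.random.default_rng(0)
rows = rng.random((50, 4))
image = np.ones(4)
sens = rows.sum(axis=0)
for _ in range(10):                     # 10 passes over a single subset
    image = lm_osem_subset(image, range(50), lambda e: rows[e], sens)
print(image)
```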
Abstract:
Material synthesis and characterization has been one of the major areas of scientific research for the past few decades. Various techniques have been suggested for the preparation and characterization of thin films and bulk samples according to the industrial and scientific applications. Material characterization implies the determination of the electrical, magnetic, optical or thermal properties of the material under study. Though it is possible to study all these properties of a material, we concentrate on the thermal and optical properties of certain polymers. The thermal properties are determined using the photothermal beam deflection technique and the optical properties are obtained from various spectroscopic analyses. In addition, the thermal properties of a class of semiconducting compounds, the copper delafossites, are determined by the photoacoustic technique. Photothermal techniques are among the most powerful tools for non-destructive characterization of materials. They form a broad class that includes laser calorimetry, the pyroelectric technique, photoacoustics, photothermal radiometry, the photothermal beam deflection technique, etc. The choice of a suitable technique depends upon the nature of the sample and its environment, the purpose of the measurement, the nature of the light source used, etc. The polymer samples under the present investigation are thermally thin and optically transparent at the excitation (pump beam) wavelength. The photothermal beam deflection technique is advantageous in that it can be used to determine the thermal diffusivity of samples irrespective of whether they are thermally thick or thermally thin and optically opaque or optically transparent. Hence, of all the above-mentioned techniques, the photothermal beam deflection technique is employed for the determination of the thermal diffusivity of these polymer samples. The semiconducting samples studied, however, are thermally thick and optically opaque, and therefore a much simpler photoacoustic technique is used for their thermal characterization. The production of polymer thin-film samples has gained considerable attention over the past few years. Different techniques like plasma polymerization, electron bombardment, ultraviolet irradiation and thermal evaporation can be used for the preparation of polymer thin films from their respective monomers. Among these, plasma polymerization or glow discharge polymerization has been widely used for polymer thin-film preparation. At the early stages of its discovery, plasma polymerization was not treated as a standard method for the preparation of polymers; the method gained importance only when it was used to make special coatings on metals and began to be recognized as a technique for synthesizing polymers. The well-recognized concept of conventional polymerization is based on molecular processes by which the size of the molecule increases and rearrangement of atoms within a molecule seldom occurs. Polymer formation in plasma, by contrast, is recognized as an atomic process. These films are pinhole-free, highly branched and cross-linked, heat resistant and exceptionally good dielectrics. The optical properties, such as the direct and indirect bandgaps and refractive indices, of the plasma-polymerized thin films prepared are determined from UV-VIS-NIR absorption and transmission spectra.
The possible linkages formed in the polymers are suggested by comparing the FTIR spectra of the monomer and the polymer. The thermal diffusivity has been measured using the photothermal beam deflection technique as stated earlier. This technique measures the refractive index gradient established at the sample surface and in the adjacent coupling medium by passing another optical beam (probe beam) through this region, hence the name probe beam deflection. The deflection is detected using a position-sensitive detector whose output is fed to a lock-in amplifier, from which the amplitude and phase of the deflection can be directly obtained. The amplitude and phase of the deflection signal are suitably analysed to determine the thermal diffusivity. Another class of compounds under the present investigation is the copper delafossites. These samples, in the form of pellets, are thermally thick and optically opaque. The thermal diffusivity of such semiconductors is investigated using the photoacoustic technique, which measures the pressure change using an electret microphone. The output of the microphone is fed to a lock-in amplifier to obtain the amplitude and phase, from which the thermal properties are obtained. The variation of thermal diffusivity with composition is studied.
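The phase analysis mentioned above can be illustrated with a short sketch. Assuming the common one-dimensional approximation in which the deflection phase falls linearly with the pump-probe offset x as phi(x) = phi0 - x*sqrt(pi*f/alpha), the thermal diffusivity alpha follows from the slope of a straight-line fit to phase versus offset. The numbers below are synthetic stand-ins, not measured values from the thesis.

```python
import numpy as np

f = 10.0                          # modulation frequency (Hz)
alpha_true = 1.2e-7               # thermal diffusivity (m^2/s), synthetic value
x = np.linspace(0, 400e-6, 20)    # pump-probe offsets (m)

# linear phase model plus a little lock-in noise (synthetic data)
phase = 0.3 - x * np.sqrt(np.pi * f / alpha_true)
phase += np.random.default_rng(1).normal(0, 0.02, x.size)

slope, _ = np.polyfit(x, phase, 1)     # least-squares straight-line fit
alpha_est = np.pi * f / slope**2       # invert slope = -sqrt(pi*f/alpha)
print(f"estimated diffusivity: {alpha_est:.3e} m^2/s")
```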
Abstract:
Schiff base complexes of transition metal ions have played a significant role in coordination chemistry. The convenient route of synthesis and the thermal stability of Schiff base complexes have contributed significantly to their possible applications in catalysis, biology, medicine and photonics. Significant variations in catalytic activity with structure and type are observed for these complexes. The thesis deals with the synthesis and characterization of transition metal complexes of quinoxaline-based Schiff base ligands and a study of their catalytic activity. The Schiff bases synthesized in the present study are quinoxaline-2-carboxalidine-2-amino-5-methylphenol, 3-hydroxyquinoxaline-2-carboxalidine-2-amino-5-methylphenol and quinoxaline-2-aminothiophenol. They provide great structural diversity during complexation. To the best of our knowledge, the transition metal complexes of quinoxaline-based Schiff bases are poorly utilised in academic and industrial research.
Abstract:
The microwave and electrical applications of some important conducting polymers are analyzed in this investigation. One of the major drawbacks of conducting polymers is their poor processability, and a solution to this is sought here. Conducting polymer thermoplastic composites were prepared by the in-situ polymerization method to improve the extent of miscibility, probably to a semi-IPN level. The attractive features of the conducting composite developed are excellent processability, good microwave and electrical conductivity, good microwave absorption, load sensitivity and satisfactory mechanical properties. The composite shows typical frequency-selective microwave absorption and reflection behaviour.
Abstract:
The thesis covers various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. The thesis mainly studies the estimation and prediction of a signal-plus-noise model, where the signal and noise are assumed to follow models with symmetric stable innovations. The thesis starts with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are discussed extensively in the second chapter, which also surveys the existing theories and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here both the signal and the noise are considered stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite and asymmetric signal extraction filters are derived based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent for signal extraction in processes with infinite variance. Parameter estimation for autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker-type estimation based on the auto-covariation function; the methods are exemplified through simulation and an application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition. The fifth chapter introduces the partial covariation function as a tool for stable time series analysis where covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied and its application to model identification for stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function, and introduce a new information criterion for consistent order estimation of stable autoregressive models. Chapter six explores the application of these techniques in signal processing, namely frequency estimation for sinusoidal signals observed in a symmetric stable noisy environment. Here we introduce a parametric spectrum analysis and a frequency estimate using the power transfer function, which is itself estimated using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
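As a rough illustration of the higher-order Yule-Walker idea described above, the sketch below stacks more Yule-Walker equations than there are parameters and solves them by ordinary least squares. It uses sample autocovariances as a stand-in for the auto-covariation function of the thesis (the appropriate tool when the variance is infinite), so it is only a finite-variance analogue, not the thesis method.

```python
import numpy as np

def acvf(x, lag):
    """Biased sample autocovariance at a given lag."""
    x = x - x.mean()
    return (x[lag:] * x[:len(x) - lag]).mean()

def hoyw(x, p, m):
    """Higher-order Yule-Walker estimate of AR(p) coefficients.

    Stacks m >= p equations  gamma(k) = sum_j phi_j * gamma(k - j)
    for k = p+1 .. p+m (low lags are corrupted by additive noise)
    and solves them by ordinary least squares.
    """
    g = np.array([acvf(x, k) for k in range(p + m + 1)])
    R = np.array([[g[k - j] for j in range(1, p + 1)]
                  for k in range(p + 1, p + m + 1)])
    rhs = g[p + 1: p + m + 1]
    phi, *_ = np.linalg.lstsq(R, rhs, rcond=None)
    return phi

# AR(2) signal in additive white noise (Gaussian here; the thesis targets
# symmetric stable innovations, where covariation replaces covariance)
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 1.5 * x[t - 1] - 0.75 * x[t - 2] + rng.normal()
x += 0.5 * rng.normal(size=5000)
print(hoyw(x, p=2, m=8))    # should be close to [1.5, -0.75]
```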
Abstract:
Gabion-faced retaining walls are essentially semi-rigid structures that can generally accommodate large lateral and vertical movements without excessive structural distress. Because of this inherent feature, they offer technical and economic advantages over conventional concrete gravity retaining walls. Although they can be constructed either as gravity type or reinforced soil type, this work mainly deals with gabion-faced reinforced earth walls, as they are more suitable for greater heights. The main focus of the present investigation was the development of a viable plane-strain two-dimensional nonlinear finite element analysis code which can predict the stress-strain behaviour of gabion-faced retaining walls, both gravity type and reinforced soil type. The gabion facing, backfill soil, in-situ soil and foundation soil were modelled using 2D four-noded isoparametric quadrilateral elements. The confinement provided by the gabion boxes was converted into an induced apparent cohesion as per the membrane correction theory proposed by Henkel and Gilbert (1952). The mesh reinforcement was modelled using 2D two-noded linear truss elements. The interactions between the soil and the mesh reinforcement, as well as between the facing and the backfill, were modelled using 2D four-noded zero-thickness line interface elements (Desai et al., 1974), incorporating a nonlinear hyperbolic formulation for the tangential shear stiffness. The well-known hyperbolic formulation of Duncan and Chang (1970) was used for modelling the nonlinearity of the soil matrix. The failure of the soil matrix, the gabion facing and the interfaces was modelled using the Mohr-Coulomb failure criterion, and the construction stages were also modelled. Experimental investigations were conducted on small-scale model walls (both in the field and in the laboratory) to suggest an alternative fill material for gabion-faced retaining walls; the same tests were also used to validate the finite element programme developed as part of the study. The studies were conducted using different types of gabion fill materials, the variation being achieved by placing coarse aggregate and quarry dust in different proportions as layers one above the other, or by mixing them together in the required proportions. The deformation of the wall face was measured and the behaviour of the walls with varying fill materials was analysed. It was seen that 25% of the fill material in the gabions can be replaced by a soft material (any locally available material) without greatly affecting the deformation behaviour; where some deformation can be tolerated, even up to 50% replacement with soft material is possible. The developed finite element code was validated using the experimental test results and other published results. Encouraged by the close agreement between theory and experiment, an extensive and systematic parametric study was conducted in order to gain a closer understanding of the behaviour of the system. Geometric as well as material parameters were varied to understand their effect on the behaviour of the walls. The final phase of the study consisted of developing a simplified method for the design of gabion-faced retaining walls, based on the limit state method and considering both stability and deformation criteria. The design parameters were selected for the system and converted into dimensionless parameters.
The procedure for fixing the dimensions of the wall was thus simplified by eliminating the conventional trial-and-error procedure. Handy design charts were developed which would serve as a hands-on tool for design engineers on site. Economic studies were also conducted to establish the cost effectiveness of the structures with respect to conventional RCC gravity walls, and cost prediction models and cost breakdown ratios were proposed. The studies as a whole are expected to contribute substantially to understanding the actual behaviour of gabion-faced retaining wall systems, with particular reference to lateral deformations.
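For reference, the Duncan and Chang (1970) hyperbolic formulation used above for the soil nonlinearity can be written as a tangent modulus E_t = (1 - R_f*SL)^2 * K * P_a * (sigma3/P_a)^n, where the stress level SL is the current deviator stress divided by the Mohr-Coulomb failure deviator. A minimal sketch with assumed, purely illustrative parameter values (not those of the thesis):

```python
import math

def tangent_modulus(sig1, sig3, K, n, Rf, c, phi_deg, Pa=101.325):
    """Duncan-Chang (1970) hyperbolic tangent modulus (kPa).

    sig1, sig3 : major/minor principal stresses (kPa)
    K, n       : modulus number and modulus exponent
    Rf         : failure ratio
    c, phi_deg : Mohr-Coulomb cohesion (kPa) and friction angle (deg)
    Pa         : atmospheric pressure (kPa)
    """
    phi = math.radians(phi_deg)
    # deviator stress at failure from the Mohr-Coulomb criterion
    dev_f = (2 * c * math.cos(phi) + 2 * sig3 * math.sin(phi)) / (1 - math.sin(phi))
    sl = (sig1 - sig3) / dev_f            # mobilized stress level
    Ei = K * Pa * (sig3 / Pa) ** n        # initial tangent modulus
    return (1 - Rf * sl) ** 2 * Ei

# illustrative parameters for a granular backfill (assumed values)
print(tangent_modulus(sig1=250, sig3=100, K=600, n=0.5, Rf=0.8, c=0, phi_deg=38))
```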
Abstract:
The present work deals with investigations on some technologically important polymer nanocomposite films and semi-crystalline polypyrrole films. The work presented in the thesis deals with the realization of novel polymer nanocomposites with enhanced functionalities and prospects for applications in fields related to nanophotonics. The development of inorganic/polymer nanocomposites is a rapidly expanding multidisciplinary research area with profound industrial applications. The incorporation of suitable inorganic nanoparticles can endow the resulting nanocomposites with excellent electrical, optical and mechanical properties. The first chapter gives a general introduction to nanotechnology, nanocomposites and conducting polymers. It also emphasizes the significance of ZnO among other semiconductor materials; ZnO forms the inorganic filler in the polymer nanocomposites of the present study. This chapter also gives general ideas on the properties and applications of conducting polymers, with special reference to polypyrrole, and clearly states the objectives of the present investigations. The second chapter deals with the theoretical aspects and details of all the experimental techniques used in the present work for the synthesis of the polymer nanocomposite and polypyrrole samples and their various characterizations. Chapter 3 is based on the preparation and properties of ZnO/polystyrene nanocomposite film samples; the optical properties of these nanocomposite films are discussed in detail. Chapter 4 deals with detailed investigations of the dependence of the optical properties of ZnO/PS nanocomposite films on the size of the nanostructured ZnO filler material; the excellent UV shielding properties of these nanocomposite films form the highlight of this chapter. Chapter 5 gives a detailed analysis of the nonlinear optical properties of ZnO/PS nanocomposite films using the Z-scan technique, and discusses the effect of the ZnO particle size in the composite films on the nonlinear properties. The present study involves two phases of research activity. In the first phase, the linear and nonlinear optical properties of ZnO/polymer nanocomposites are investigated in detail. The second phase of the work is centred on the synthesis of, and related studies on, highly crystalline polypyrrole films. In the present study, nanosized ZnO is synthesized using a wet chemical method at two different temperatures.
Abstract:
This paper reports a novel region-based shape descriptor based on orthogonal Legendre moments. The preprocessing steps for improving the invariance of the proposed Improved Legendre Moment Descriptor (ILMD) are discussed. The performance of the ILMD is compared to that of the MPEG-7 approved region shape descriptor, the angular radial transformation descriptor (ARTD), and the widely used Zernike moment descriptor (ZMD). Set B of the MPEG-7 CE-1 contour database and all the datasets of the MPEG-7 CE-2 region database were used for experimental validation. The average normalized modified retrieval rate (ANMRR) and precision-recall pairs were employed for benchmarking the performance of the candidate descriptors. The ILMD has lower ANMRR values than ARTD for most of the datasets, and ARTD has lower values than ZMD, indicating that the overall performance of the ILMD is better than that of ARTD and ZMD. This result is confirmed by the precision-recall test, where the ILMD was found to have better precision rates for most of the datasets tested. Besides retrieval accuracy, the ILMD is more compact than ARTD and ZMD. The proposed descriptor is useful as a generic shape descriptor for content-based image retrieval (CBIR) applications.
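The Legendre moments underlying the ILMD are the orthogonal moments lambda_pq = ((2p+1)(2q+1)/4) * integral of P_p(x) P_q(y) f(x,y) over [-1,1]^2. A minimal sketch of their discrete computation follows; the ILMD's invariance preprocessing and matching stages are not reproduced here.

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_moments(img, order):
    """Orthogonal Legendre moments lambda_pq of a 2-D image up to `order`.

    The pixel grid is mapped onto [-1, 1] x [-1, 1], the interval on which
    the Legendre polynomials P_p are orthogonal.
    """
    N, M = img.shape
    x = -1 + (2 * np.arange(N) + 1) / N          # pixel centres in [-1, 1]
    y = -1 + (2 * np.arange(M) + 1) / M
    Px = np.array([eval_legendre(p, x) for p in range(order + 1)])
    Py = np.array([eval_legendre(q, y) for q in range(order + 1)])
    norm = np.outer(2 * np.arange(order + 1) + 1,
                    2 * np.arange(order + 1) + 1) / 4.0
    # lambda_pq = norm_pq * sum_ij P_p(x_i) P_q(y_j) f(i,j) dx dy
    return norm * (Px @ img @ Py.T) * (2 / N) * (2 / M)

img = np.zeros((64, 64))
img[16:48, 24:40] = 1.0                          # simple rectangular shape
print(np.round(legendre_moments(img, order=4), 4))
```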
Abstract:
The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data needs many automated methods as well as human experts. This thesis is devoted to the data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, along with some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system, and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps, owing to daily varying daylight and weather conditions for ground-based observations; observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis. They can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection has several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, due to the influence of regular sampling; spurious periods also appear because of long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will benefit the variable star community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
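Of the non-parametric methods named above, Phase Dispersion Minimisation is easy to state compactly: fold the light curve on a trial period, bin the phases, and take the ratio of pooled within-bin variance to total variance; the true period minimizes this statistic. A minimal sketch on synthetic, unevenly sampled data (an illustration of PDM only, not the thesis's modified cubic spline method):

```python
import numpy as np

def pdm_theta(t, mag, period, nbins=10):
    """Stellingwerf (1978) phase dispersion statistic for one trial period."""
    phase = (t / period) % 1.0
    overall = mag.var(ddof=1)
    num, den = 0.0, 0
    for b in range(nbins):
        in_bin = mag[(phase >= b / nbins) & (phase < (b + 1) / nbins)]
        if in_bin.size > 1:
            num += (in_bin.size - 1) * in_bin.var(ddof=1)
            den += in_bin.size - 1
    return (num / den) / overall       # theta << 1 near the true period

# unevenly sampled sinusoidal light curve, true period 0.71 d (synthetic)
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 30, 300))
mag = 12 + 0.3 * np.sin(2 * np.pi * t / 0.71) + rng.normal(0, 0.02, 300)

periods = np.linspace(0.5, 1.0, 2000)
thetas = [pdm_theta(t, mag, P) for P in periods]
print("best period:", periods[int(np.argmin(thetas))])
```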
Abstract:
Salient pole brushless alternators coupled to IC engines are extensively used as stand-by power supply units for meeting industrial power demands. The design of such generators demands a high power-to-weight ratio, high efficiency and low cost per kVA output. Moreover, performance characteristics of such machines, like voltage regulation and short circuit ratio (SCR), are critical when these machines are operated in parallel, and alternators for critical applications like defence and aerospace demand very low harmonic content in the output voltage. While designing such alternators, accurate prediction of machine characteristics, including total harmonic distortion (THD), is essential to minimize development cost and time. Total harmonic distortion in the output voltage of alternators should be as low as possible, especially when powering very sophisticated and critical applications. The output voltage waveform of a practical AC generator is a replica of the space distribution of the flux density in the air gap, and several factors, such as the shape of the rotor pole face, core saturation, slotting and the style of coil disposition, make the realization of a sinusoidal air gap flux wave impossible. These flux harmonics introduce undesirable effects on alternator performance, like high neutral current due to triplen harmonics, voltage distortion, noise, vibration, excessive heating and extra losses resulting in poor efficiency, which in turn necessitate de-rating of the machine, especially when connected to non-linear loads. As an important control unit of a brushless alternator, the excitation system and its dynamic performance have a direct impact on the alternator's stability and reliability. The thesis explores the design and implementation of an excitation system utilizing the third harmonic flux in the air gap of brushless alternators, using an additional auxiliary winding, wound for 1/3rd pole pitch, embedded into the stator slots and electrically isolated from the main winding. In the third harmonic excitation system, the combined effect of two auxiliary windings, one with 2/3rd pitch and another third harmonic winding with 1/3rd pitch, is used to ensure good voltage regulation without an electronic automatic voltage regulator (AVR), and it also reduces the total harmonic content in the output voltage cost-effectively. The design of the third harmonic winding by analytic methods demands accurate calculation of the third harmonic flux density in the air gap of the machine. However, precise estimation of the amplitude of the third harmonic flux in the air gap by conventional design procedures is difficult due to the complex geometry of the machine and the non-linear characteristics of the magnetic materials. As such, prediction of the field parameters by conventional design methods is unreliable, and hence virtual prototyping of the machine is done to enable accurate design of the third harmonic excitation system. In the design and development cycle of electrical machines, it is recognized that the use of analytical and experimental methods followed by expensive and inflexible prototyping is time consuming and no longer cost-effective. Due to advancements in computational capabilities over recent years, finite element method (FEM) based virtual prototyping has become an attractive alternative to well-established semi-analytical and empirical design methods, as well as to the still popular trial-and-error approach followed by costly and time-consuming prototyping.
Hence, by virtually prototyping the alternator using FEM, the important performance characteristics of the machine are predicted. The design of the third harmonic excitation system is done with the help of the results obtained from the virtual prototype of the machine. The third harmonic excitation (THE) system is implemented in a 45 kVA experimental machine and experiments are conducted to validate the simulation results. Simulation and experimental results show that by utilizing the third harmonic flux in the air gap of the machine for excitation purposes during loaded conditions, the triplen harmonic content in the output phase voltage is significantly reduced. The prototype machine with the third harmonic excitation system designed and developed based on FEM analysis proved to be economical due to its simplicity, and has the added advantage of reduced harmonics in the output phase voltage.
Abstract:
Solid waste generation is a natural consequence of human activity and is increasing along with population growth, urbanization and industrialization. Improper disposal of the huge amount of solid waste seriously affects the environment and contributes to climate change through the release of greenhouse gases. Practicing anaerobic digestion (AD) on the organic fraction of municipal solid waste (OFMSW) can reduce emissions to the environment and thereby alleviate environmental problems, together with the production of biogas, an energy source, and digestate, a soil amendment. The amenability of a substrate to biogasification varies from substrate to substrate and with environmental and operating conditions such as pH, temperature, type and quality of substrate, mixing and retention time. The purpose of this research work is therefore to develop a feasible semi-dry anaerobic digestion process for the treatment of OFMSW from Kerala, India, for potential energy recovery and sustainable waste management. The study was carried out in three phases. In the first phase, a batch study of anaerobic digestion of OFMSW was carried out for 100 days at 32°C (mesophilic digestion) for varying substrate concentrations. The aim of this study was to obtain the optimal conditions for biogas production using response surface methodology (RSM). The parameters studied were initial pH, substrate concentration and total organic carbon (TOC). The experimental results showed that the linear model terms of initial pH and substrate concentration and the quadratic model terms of substrate concentration and TOC had significant individual effects (p < 0.05) on biogas yield; however, there was no interactive effect between these variables (p > 0.05). The optimum conditions for maximizing the biogas yield were a substrate concentration of 99 g/l, an initial pH of 6.5 and a TOC of 20.32 g/l. AD of OFMSW at the optimized substrate concentration of 99 g/l [total solids (TS) 10.5%] is a semi-dry digestion system. Under the optimized conditions, the maximum biogas yield was 53.4 L/kg VS (volatile solids). In the second phase, semi-dry anaerobic digestion of organic solid wastes was conducted for 45 days in a lab-scale batch experiment at a substrate concentration of 100 g/l (TS 11.2%) to investigate the start-up performance under thermophilic conditions (50°C). The performance of the reactor was evaluated by measuring the daily biogas production and calculating the degradation of total solids and total volatile solids. The biogas yield at the end of the digestion was 52.9 L/kg VS for the substrate concentration of 100 g/l, and about 66.7% volatile solids degradation was obtained during the digestion. A first-order model based on the availability of substrate as the limiting factor was used for the kinetic studies of the batch anaerobic digestion system; the value of the reaction rate constant, k, obtained was 0.0249 day-1. A laboratory bench-scale reactor with a capacity of 36.8 litres was designed and fabricated to carry out the continuous anaerobic digestion of OFMSW in the third phase. The purpose of this study was to evaluate the performance of the digester at a total solids concentration of 12% (semi-dry) under mesophilic conditions (32°C). The digester was operated with different organic loading rates (OLRs) and a constant retention time.
The performance of the reactor was evaluated using parameters such as pH, volatile fatty acids (VFA), alkalinity, chemical oxygen demand (COD), TOC and ammonia-N, as well as biogas yield. During the reactor's start-up period the process was stable, no inhibition occurred, and the average biogas production was 14.7 L/day. The reactor was fed in continuous mode with different OLRs (3.1, 4.2 and 5.65 kg VS/m3/d) at a constant retention time of 30 days. The highest volatile solids degradation of 65.9%, with a specific biogas production of 368 L/kg VS fed, was achieved at an OLR of 3.1 kg VS/m3/d. Modelling and simulation of the anaerobic digestion of OFMSW in continuous operation was done using an adapted Anaerobic Digestion Model No 1 (ADM1). The adapted model, which has 34 dynamic state variables, considers both biochemical and physicochemical processes and contains several inhibition factors, including three gas components; the number of processes considered is 28. The model is implemented in Matlab® version 7.11.0.584 (R2010b). The model based on the adapted ADM1 was tested to simulate the behaviour of a bioreactor for the mesophilic anaerobic digestion of OFMSW at an OLR of 3.1 kg VS/m3/d, and showed acceptable simulation results.
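The first-order kinetic model used for the batch study is B(t) = B_max * (1 - exp(-k*t)). A minimal fitting sketch with synthetic data chosen to resemble the reported k = 0.0249 day-1 and roughly 53 L/kg VS ultimate yield (an illustration, not the thesis data):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, B_max, k):
    """Cumulative biogas yield under substrate-limited first-order kinetics."""
    return B_max * (1.0 - np.exp(-k * t))

# synthetic cumulative-yield data (L/kg VS) over a 45-day digestion
t = np.arange(0, 46, 3, dtype=float)                 # days
rng = np.random.default_rng(4)
B = first_order(t, 53.0, 0.0249) + rng.normal(0, 1.0, t.size)

popt, _ = curve_fit(first_order, t, B, p0=[50.0, 0.02])
print(f"B_max = {popt[0]:.1f} L/kg VS, k = {popt[1]:.4f} 1/day")
```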
Abstract:
Energy production from biomass and the conservation of ecologically valuable grassland habitats are two important issues in agriculture today. The combination of a bioenergy production which minimises environmental impacts and competition with food production for land, with a conversion of semi-natural grasslands through new utilization alternatives for the biomass, led to the development of the IFBB process. Its basic principle is the separation of biomass into a liquid fraction (press fluid, PF) for the production of electric and thermal energy after anaerobic digestion to biogas, and a solid fraction (press cake, PC) for the production of thermal energy through combustion. This study was undertaken to explore the mass and energy flows as well as quality aspects of the energy carriers within the IFBB process, and to determine their dependency on biomass-related and technical parameters. Two experiments were conducted, in which biomass from semi-natural grassland was conserved as silage and subjected to hydrothermal conditioning and a subsequent mechanical dehydration with a screw press. The methane yield of the PF and the untreated silage was determined in anaerobic digestion experiments in batch fermenters at 37°C, with fermentation times of 13-15 days for the PF and 27-35 days for the silage. Concentrations of dry matter (DM), ash, crude protein (CP), crude fibre (CF), ether extract (EE), neutral detergent fibre (NDF), acid detergent fibre (ADF), acid detergent lignin (ADL) and elements (K, Mg, Ca, Cl, N, S, P, C, H, N) were determined in the untreated biomass and the PC. Higher heating value (HHV) and ash softening temperature (AST) were calculated based on elemental concentrations. The chemical composition of the PF and the mass flows of all plant compounds into the PF were calculated. In the first experiment, biomass from five different semi-natural grassland swards (Arrhenaterion I and II, Caricion fuscae, Filipendulion ulmariae, Polygono-Trisetion) was harvested at one late sampling date (19 July or 31 August) and ensiled. Each silage was subjected to three different temperature treatments (5°C, 60°C, 80°C) during hydrothermal conditioning. Based on the observed methane yields and HHV as energy output parameters, as well as literature-based and observed energy input parameters, energy and greenhouse gas (GHG) balances were calculated for IFBB and two reference conversion processes: whole-crop digestion of untreated silage (WCD) and combustion of hay (CH). In the second experiment, biomass from a single semi-natural grassland sward (Arrhenaterion) was harvested at eight consecutive dates (27/04, 02/05, 09/05, 16/05, 24/05, 31/05, 11/06, 21/06) and ensiled. Each silage was subjected to six different treatments (no hydrothermal conditioning, and hydrothermal conditioning at 10°C, 30°C, 50°C, 70°C, 90°C). The energy balance was calculated for IFBB and WCD. Multiple regression models were developed to predict mass flows, concentrations of elements in the PC, concentrations of organic compounds in the PF, and the energy conversion efficiency of the IFBB process from the temperature of hydrothermal conditioning as well as the NDF and DM concentrations of the silage. Results showed a relative reduction of ash and of all elements detrimental to combustion in the PC, compared to the untreated biomass, of 20-90%. The reduction was highest for K and Cl and lowest for N. The HHV of the PC and the untreated biomass were in a comparable range (17.8-19.5 MJ kg-1 DM), but the AST of the PC was higher (1156-1254°C).
Methane yields of the PF were higher than those of WCD when the biomass was harvested late (end of May and later), were in a comparable range when the biomass was harvested early, and ranged from 332 to 458 LN kg-1 VS. Regarding energy and GHG balances, IFBB, with a net energy yield of 11.9-14.1 MWh ha-1, a conversion efficiency of 0.43-0.51, and GHG mitigation of 3.6-4.4 t CO2eq ha-1, performed better than WCD but worse than CH: WCD produces thermal and electric energy at low efficiency; CH produces only thermal energy, from a low-quality solid fuel, at high efficiency; IFBB produces thermal and electric energy, with a high-quality solid fuel, at medium efficiency. The regression models were able to predict the target parameters with high accuracy (R2=0.70-0.99). Increasing the temperature of hydrothermal conditioning increased mass flows, decreased element concentrations in the PC and had a mixed effect on energy conversion efficiency. Increasing NDF concentration of the silage had a mixed effect on mass flows, decreased element concentrations in the PC and increased energy conversion efficiency. Increasing DM concentration of the silage decreased mass flows, increased element concentrations in the PC and increased energy conversion efficiency. Based on the models, an optimised IFBB process would be obtained with a medium temperature of hydrothermal conditioning (50°C), high NDF concentrations in the silage and medium DM concentrations of the silage.
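A minimal sketch of the kind of multiple linear regression used for these predictions, with synthetic data and assumed, purely illustrative coefficients (the thesis's actual model structures and data are not reproduced):

```python
import numpy as np

# synthetic stand-ins for the three predictors used in the thesis models:
# conditioning temperature (deg C), silage NDF (% DM), silage DM (%)
rng = np.random.default_rng(5)
temp = rng.uniform(10, 90, 40)
ndf = rng.uniform(40, 65, 40)
dm = rng.uniform(25, 55, 40)
# assumed response: energy conversion efficiency (illustrative coefficients)
eff = 0.30 + 0.001 * temp + 0.002 * ndf + 0.001 * dm + rng.normal(0, 0.01, 40)

X = np.column_stack([np.ones_like(temp), temp, ndf, dm])   # design matrix
beta, *_ = np.linalg.lstsq(X, eff, rcond=None)             # OLS fit
pred = X @ beta
r2 = 1 - ((eff - pred) ** 2).sum() / ((eff - eff.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 4), " R^2 =", round(r2, 3))
```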
Abstract:
This thesis describes the use of Particle Image Velocimetry (PIV) for the analysis of self-excited flow phenomena and the evaluation procedure this requires. To investigate such mechanisms, which appear in turbo-compressors as rotating instabilities, data sets from experimental investigations on an annular compressor stator cascade are used. Rotating instabilities are time-dependent flow phenomena that can occur in compressor cascades under high aerodynamic loading. Because no phase information is available, this unsteady flow cannot be captured with conventional PIV systems. The Kármán vortex street and rotating instabilities are both self-excited flow processes, and this similarity is exploited to demonstrate the functionality of the method on the Kármán vortex street. Visualizing the vortex transport with PIV requires a special procedure, since no external signal is available to define the phase angle of this self-excited flow. The methodology is based on coupling the PIV technique with hot-wire anemometry: a simultaneous, time-resolved hot-wire measurement makes it possible to assign a phase angle to the instants of the PIV images. To this end, the hot-wire signal is analysed with an FFT procedure so that the PIV images can be grouped according to their phase angles. The acquired images are marked on the time axis of the hot-wire measurements, and a systematic analysis of the hot-wire signal in the neighbourhood of each PIV measurement yields the fundamental frequency and allows a phase angle to be assigned to the marked PIV position. The velocity components obtained from the PIV images of each class are then averaged. The resulting images of the classes yield the two-dimensional, time-dependent velocity field, in which the vortex motion of the Kármán vortex street becomes visible. In subsequent investigations, time signals from measurements in an annular compressor cascade are analysed, showing that additional filter functions are required. The results ultimately make clear that the transfer of the method developed on the Kármán vortex street succeeds only partially and that further research is needed.
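A minimal sketch of the phase-assignment idea, simplified by assuming a single constant fundamental frequency taken from one FFT of the whole hot-wire record (the thesis analyses the signal locally around each PIV instant and additionally requires filtering):

```python
import numpy as np

def assign_phase(hw_time, hw_signal, piv_times, nclasses=16):
    """Assign each PIV snapshot a phase class from a simultaneous hot-wire
    record of the self-excited oscillation.

    The dominant frequency is taken from the FFT of the hot-wire signal;
    the phase of that component at each PIV trigger time defines the class.
    """
    dt = hw_time[1] - hw_time[0]
    spec = np.fft.rfft(hw_signal - hw_signal.mean())
    freqs = np.fft.rfftfreq(hw_signal.size, dt)
    k = np.argmax(np.abs(spec))
    f0, phi0 = freqs[k], np.angle(spec[k])       # fundamental frequency, phase
    phase = (2 * np.pi * f0 * piv_times + phi0) % (2 * np.pi)
    return (phase / (2 * np.pi) * nclasses).astype(int), f0

# synthetic vortex-shedding signal at 120 Hz with noise (illustration only)
t = np.arange(0, 2.0, 1e-4)
hw = np.sin(2 * np.pi * 120 * t) + 0.2 * np.random.default_rng(6).normal(size=t.size)
piv_t = np.sort(np.random.default_rng(7).uniform(0, 2.0, 200))
classes, f0 = assign_phase(t, hw, piv_t)
# velocity fields sharing a class would then be ensemble-averaged
print(f"dominant frequency: {f0:.1f} Hz; class counts:",
      np.bincount(classes, minlength=16))
```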