896 results for binary and ternary electrocatalysts


Relevance: 30.00%

Abstract:

SnS thin films were prepared using an automated chemical spray pyrolysis (CSP) technique. Single-phase, p-type, stoichiometric SnS films with a direct band gap of 1.33 eV and a very high absorption coefficient (>10^5 cm^-1) were deposited at a substrate temperature of 375 °C. The role of substrate temperature in determining the optoelectronic and structural properties of SnS films was established, and the concentration ratios of the anionic and cationic precursor solutions were optimized. n-type SnS samples were also prepared using the CSP technique at the same substrate temperature of 375 °C, which facilitates sequential deposition of an SnS homojunction. A comprehensive analysis of both types of films was done using X-ray diffraction, energy dispersive X-ray analysis, scanning electron microscopy, atomic force microscopy, optical absorption and electrical measurements. The deposition temperatures required for the growth of other binary sulfide phases of tin, such as SnS2 and Sn2S3, were also determined.
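As context for how a direct band gap such as the reported 1.33 eV is commonly extracted from optical absorption data, here is a minimal Tauc-plot sketch. The absorption values are synthetic and the fitting window is an assumption; this illustrates the standard method, not the authors' exact procedure.

```python
import numpy as np

# Tauc analysis for a direct-gap semiconductor: (alpha * h*nu)^2 is linear in
# photon energy near the absorption edge; extrapolating to zero gives Eg.
h_nu = np.linspace(1.2, 1.8, 200)             # photon energy (eV), synthetic grid
E_g_true = 1.33                               # gap used to fabricate the test data (eV)
alpha = 1e5 * np.sqrt(np.clip(h_nu - E_g_true, 0, None)) / h_nu  # model alpha (cm^-1)

tauc = (alpha * h_nu) ** 2                    # direct-transition Tauc variable

# Fit the linear region above the edge and extrapolate to tauc = 0.
edge = (h_nu > 1.35) & (h_nu < 1.60)
slope, intercept = np.polyfit(h_nu[edge], tauc[edge], 1)
print(f"Estimated direct band gap: {-intercept / slope:.2f} eV")
```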

Relevance: 30.00%

Abstract:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to the analysis of astronomical time series data of variable stars, and hence belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can in turn be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables, such as novae and supernovae, occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. The modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, along with other derived parameters. Of these, the period is the most important, since a wrong period leads to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to the day-night cycle and weather conditions, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though variable star observation is not their primary goal. The Center for Astrostatistics at Pennsylvania State University was established to provide the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data; aliasing due to regular sampling; spurious periods due to long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It would benefit the variable star community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
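To make the period-search step concrete, here is a small sketch of one of the parametric methods named above, the generalised Lomb-Scargle periodogram, applied to a synthetic unevenly sampled light curve and followed by phase folding. It uses astropy's LombScargle; the data and frequency grid are made-up stand-ins, and this is not the thesis's modified cubic-spline method.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)

# Synthetic unevenly sampled light curve: irregular time stamps and
# per-point magnitude errors, as is typical of astronomical series.
t = np.sort(rng.uniform(0, 200, 300))         # observation times (days)
true_period = 3.7                             # days
dy = 0.05 * np.ones_like(t)                   # magnitude uncertainties
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, dy)

# Generalised Lomb-Scargle periodogram (floating mean, weighted points),
# in the spirit of Zechmeister (2009).
frequency, power = LombScargle(t, mag, dy).autopower(
    minimum_frequency=1 / 50, maximum_frequency=5)
best_period = 1 / frequency[np.argmax(power)]
print(f"Best period: {best_period:.3f} d (true {true_period} d)")

# Phase-fold on the recovered period: the shape of the phased light curve
# is what characterises the variable-star type.
phase = (t / best_period) % 1.0
order = np.argsort(phase)
phase, folded_mag = phase[order], mag[order]
```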

Relevance: 30.00%

Abstract:

The TRIM.SP program, which is based on the binary collision approximation, was modified to handle not only repulsive interaction potentials but also potentials with an attractive part. Sputtering yields, average depths and reflection coefficients calculated with four different potentials are compared: three purely repulsive potentials (Molière, Kr-C and ZBL) and an ab initio pair potential calculated specifically for silicon bombardment of silicon. The general trends in the calculated results are similar for all potentials applied, but differences between the repulsive potentials and the ab initio potential occur in the reflection coefficients and in the sputtering yield at large angles of incidence.
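For reference, here is a minimal sketch of one of the repulsive potentials being compared, the ZBL universal screened Coulomb potential, using the standard Ziegler-Biersack-Littmark screening function. It is purely illustrative and is not the TRIM.SP implementation.

```python
import numpy as np

def zbl_potential(r_angstrom, z1, z2):
    """Repulsive ZBL universal screened Coulomb pair potential, in eV.

    V(r) = (z1*z2*e^2 / (4*pi*eps0*r)) * phi(r / a_u), with the standard
    four-exponential universal screening function phi.
    """
    e2_over_4pieps0 = 14.3996   # eV * Angstrom
    a_bohr = 0.529177           # Bohr radius (Angstrom)
    # Universal screening length and screening function (ZBL).
    a_u = 0.8854 * a_bohr / (z1**0.23 + z2**0.23)
    x = r_angstrom / a_u
    phi = (0.18175 * np.exp(-3.19980 * x) +
           0.50986 * np.exp(-0.94229 * x) +
           0.28022 * np.exp(-0.40290 * x) +
           0.02817 * np.exp(-0.20162 * x))
    return z1 * z2 * e2_over_4pieps0 / r_angstrom * phi

# Si-Si interaction (Z = 14), as in the silicon-on-silicon bombardment case.
r = np.linspace(0.1, 3.0, 6)
print(zbl_potential(r, 14, 14))
```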

Relevance: 30.00%

Abstract:

A real-time analysis of renewable energy sources such as arable crops is of great importance for optimised process management, since aspects of ecology and biodiversity must be considered in crop production in order to provide a sustainable energy supply from biomass. This study explored the potential of spectroscopic measurement procedures for the prediction of potassium (K), chloride (Cl) and phosphate (P), of dry matter (DM) yield, metabolisable energy (ME), ash and crude fibre contents (ash, CF), crude lipid (EE) and nitrogen-free extracts (NfE), as well as of crude protein (CP) and nitrogen (N), in pretreated samples and undisturbed crops. Three experiments were conducted: one in a laboratory using near infrared reflectance spectroscopy (NIRS) and two field spectroscopic experiments. Laboratory NIRS measurements were conducted to evaluate to what extent quality parameters can be predicted in press cakes characterised by a wide heterogeneity of their parent material. 210 samples were analysed after mechanical dehydration using a screw press; such press cakes serve as solid fuel for thermal conversion. Field spectroscopic measurements were carried out, with a view to further technical development, on different field-grown crops. A one-year experiment on a binary mixture of grass and red clover examined the impact of different degrees of sky cover on the prediction accuracy of distinct plant parameters. Furthermore, an artificial light source was used to evaluate to what extent such a source can minimise cloud effects on prediction accuracy. A three-year experiment with maize evaluated the potential of off-nadir measurements inside a canopy to predict different quality parameters in total biomass and DM yield using a single sensor for a potential on-the-go application. This approach measures the plants in 50 cm segments, since a sensor adjusted sideways cannot record the entire plant height. Calibration results obtained by nadir top-of-canopy reflectance measurements were compared with calibration results obtained by off-nadir measurements. The results of all experiments confirm the applicability of spectroscopic measurements for the prediction of distinct biophysical and biochemical parameters both in the laboratory and under field conditions. The parameters could to a great extent be estimated with high accuracy. An enhanced calibration basis for the laboratory study and the first field experiment (grass/clover mixture) improved the robustness of the calibration models and allows for an extended application of spectroscopic measurement techniques, even under varying conditions. Furthermore, off-nadir measurements inside a canopy yield higher prediction accuracies, particularly for crops characterised by distinct height increment, as observed for maize.
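As an illustration of the calibration step underlying such studies, here is a minimal chemometric sketch: partial least squares (PLS) regression predicting a quality parameter (e.g., crude protein) from reflectance spectra, assessed by cross-validation. The spectra, dimensions and reference values are placeholders; the thesis's actual calibration software and pre-processing may differ.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical calibration set: 210 samples x 500 spectral bands, plus a
# wet-chemistry reference value (e.g. crude protein, % DM).
X = rng.normal(size=(210, 500))               # reflectance spectra (placeholder)
true_coef = rng.normal(size=500) * 0.05
y = X @ true_coef + rng.normal(0, 0.5, 210)   # reference quality parameter

# PLS handles many collinear wavelengths with comparatively few samples,
# which is why it is the standard workhorse for NIRS calibrations.
pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f}, R^2(CV) = {r2:.2f}")
```

RMSECV and cross-validated R^2 are the usual figures of merit for judging the robustness of such calibration models.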

Relevance: 30.00%

Abstract:

The subject of this thesis is the analysis of various formalisms for computing binary word relations. The basis for all considerations presented here is the model of restart automata, introduced by Jancar et al. in 1995. On the one hand, the concept of input/output and proper relations, already known for restart automata, is investigated further and extended to systems of two restart automata working in parallel and communicating with each other (PC systems). On the other hand, a variant of restart automata is introduced that is modeled on classical automaton models for computing relations. Using these mechanisms it can be shown that some of the classes defined by the input/output and proper relations of restart automata coincide with the traditional relation classes of rational relations and pushdown relations. Furthermore, the concept of parallel communicating automata turns out to be extremely powerful, since already the class of proper relations of monotone PC systems comprises all computable relations. The main part of the thesis deals with so-called restart transducers, which are restart automata extended by an output function. It turns out that this model in particular, with its various extensions and restrictions, is well suited for establishing a comprehensive hierarchy of relation classes. Above all, the various types of monotone restart transducers should be mentioned here; with their help, many interesting new and known relation classes within the length-bounded pushdown relations are characterized. Finally, in contrast to the preceding models, the concept of transducing by observing ("Transducing by Observing"), which is not based on restart automata, is introduced for computing relations. This mechanism, not unlike the restart transducers, is used in the broadest sense to take a different view of the relations defined by restart transducers and to obtain an upper bound on the computational power of restart transducers.
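For orientation, the smallest relation class mentioned above, the rational relations, is exactly the class computed by classical finite-state transducers. Here is a toy sketch of such a transducer; it is purely illustrative and is not a restart transducer, whose rewriting behaviour is considerably more involved.

```python
# A minimal nondeterministic finite-state transducer: transitions map
# (state, input symbol) to a set of (next state, output string) pairs.
# The set of (input, output) pairs it accepts is a rational relation.
transitions = {
    ("q0", "a"): {("q0", "bb")},   # rewrite every 'a' as 'bb'
    ("q0", "b"): {("q0", "b")},    # copy every 'b'
}
initial, finals = "q0", {"q0"}

def transduce(word):
    """Return all outputs related to `word` by the transducer."""
    configs = {(initial, "")}
    for symbol in word:
        configs = {(q2, out + w)
                   for (q1, out) in configs
                   for (q2, w) in transitions.get((q1, symbol), set())}
    return {out for (state, out) in configs if state in finals}

print(transduce("aab"))   # {'bbbbb'}
```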

Relevance: 30.00%

Abstract:

It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and the independent variables are components, we propose a transformation of the components emphasizing their role as dependent and independent variables. A simple linear regression can then be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time-budgets illustrates the method and its graphical features.
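The following is a minimal sketch of the general idea of regressing transformed components: express the dependent and independent components as log-ratios against a reference part, then fit an ordinary linear regression. The specific log-ratio choice here is an assumption for illustration, not necessarily the exact transformation the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-part time-budget compositions (rows sum to 1): e.g. work, leisure, rest.
comp = rng.dirichlet([4, 3, 2], size=50)

# Log-ratios against the third part turn the constrained components into
# unconstrained real variables suitable for simple linear regression.
y = np.log(comp[:, 0] / comp[:, 2])   # dependent component vs reference
x = np.log(comp[:, 1] / comp[:, 2])   # independent component vs reference

slope, intercept = np.polyfit(x, y, 1)
print(f"log-ratio regression: y = {intercept:.2f} + {slope:.2f} x")
# Back-transforming points on this line gives a curve that can be drawn in
# the ternary diagram of the original three parts.
```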

Relevance: 30.00%

Abstract:

There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth, although we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used generates spurious distributions, especially in ternary diagrams. The same happens if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency.
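Here is a sketch of the imputation idea as described: fit a regression between copper and the lower quartile of the detected molybdenum values, then estimate each below-detection Mo value from its own Cu value. The synthetic data, log-log scale and quartile handling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic porphyry-style data: Cu and Mo are lognormal and correlated.
log_cu = rng.normal(np.log(2000), 0.5, 300)            # Cu (ppm), all detected
log_mo = 0.8 * log_cu + rng.normal(-3.0, 0.4, 300)     # Mo (ppm), correlated
cu, mo = np.exp(log_cu), np.exp(log_mo)

dl = 15.0                      # Mo detection limit (ppm) -> "rounded zeros"
censored = mo < dl

# Fit the regression on the lower quartile of the *detected* Mo values,
# i.e. the part of the distribution nearest the detection limit.
det_mo, det_cu = mo[~censored], cu[~censored]
low = det_mo <= np.quantile(det_mo, 0.25)
b, a = np.polyfit(np.log(det_cu[low]), np.log(det_mo[low]), 1)

# Each rounded zero gets its own estimate from its copper value, rather
# than a single constant replacement.
mo_imputed = mo.copy()
mo_imputed[censored] = np.exp(a + b * np.log(cu[censored]))
print(f"{censored.sum()} rounded zeros imputed; range "
      f"{mo_imputed[censored].min():.1f}-{mo_imputed[censored].max():.1f} ppm")
```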

Relevance: 30.00%

Abstract:

The amalgamation operation is frequently used to reduce the number of parts of compositional data, but it is a non-linear operation in the simplex with the usual geometry, the Aitchison geometry. The concept of balances between groups, a particular coordinate system defined over binary partitions of the parts, can be an alternative to amalgamation in some cases. In this work we discuss the proper application of both concepts using a real data set of behavioral measures of pregnant sows.
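To make the contrast concrete, here is a small sketch comparing the two operations on toy compositions: amalgamation simply sums the parts of each group, while the balance between two groups is a normalized log-ratio of their geometric means. The grouping is an arbitrary illustration.

```python
import numpy as np

# Toy 4-part compositions; compare amalgamating parts {1,2} vs {3,4}
# with the corresponding balance coordinate.
x = np.array([[0.40, 0.20, 0.30, 0.10],
              [0.10, 0.30, 0.20, 0.40]])

# Amalgamation: sum the parts of each group (a non-linear operation in
# the Aitchison geometry of the simplex).
amalg = np.stack([x[:, :2].sum(axis=1), x[:, 2:].sum(axis=1)], axis=1)

# Balance between groups R = {1,2} (r parts) and S = {3,4} (s parts):
# b = sqrt(r*s/(r+s)) * ln( g(x_R) / g(x_S) ), with g the geometric mean.
r = s = 2
g_R = np.exp(np.mean(np.log(x[:, :2]), axis=1))
g_S = np.exp(np.mean(np.log(x[:, 2:]), axis=1))
balance = np.sqrt(r * s / (r + s)) * np.log(g_R / g_S)

print(amalg)
print(balance)
```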

Relevance: 30.00%

Abstract:

The theory of compositional data analysis is often focused on the composition only. In practical applications, however, we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections, which on one side show the compositional geometry and on the other side are still comprehensible to a non-expert analyst, readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to systematically ask how to display, explore, describe, and test the relation of a composition to complementary or explanatory data of categorical, real, ratio or again compositional scales. This contribution shows that a few basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) suffice to build appropriate procedures for all these combinations of scales. This has fundamental implications for their software implementation and for how they might be taught to analysts who are not already experts in multivariate analysis.
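As a sketch of the kind of display suggested above, the following projects toy 3-part compositions onto two interpretable balances (from an explicit sequential binary partition) and scatter-plots them colored by a categorical covariable. The partition and data are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Toy 3-part compositions with a two-level categorical covariable.
group = rng.integers(0, 2, 120)
alpha = np.where(group[:, None] == 0, [6, 3, 1], [2, 4, 4])
comp = np.vstack([rng.dirichlet(a) for a in alpha])

# Two interpretable balances chosen by the analyst (a sequential binary
# partition): part 1 vs parts {2,3}, then part 2 vs part 3.
b1 = np.sqrt(2 / 3) * np.log(comp[:, 0] / np.sqrt(comp[:, 1] * comp[:, 2]))
b2 = np.sqrt(1 / 2) * np.log(comp[:, 1] / comp[:, 2])

plt.scatter(b1, b2, c=group, cmap="coolwarm")
plt.xlabel("balance: part1 vs (part2, part3)")
plt.ylabel("balance: part2 vs part3")
plt.title("Compositions in balance coordinates, colored by category")
plt.show()
```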

Relevance: 30.00%

Abstract:

Takes the Tanenbaum (Structured Computer Organisation) approach to show how the application of successive levels of abstraction allows us to understand how computers are made from transistors and how they are programmed.
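A toy sketch of that layered idea: take a single transistor-level primitive (NAND), build Boolean gates from it, and then build arithmetic from the gates, so each level is expressed purely in terms of the one below. The choice of NAND as the sole primitive is an illustrative assumption.

```python
# Level 0: pretend a transistor circuit gives us exactly one primitive, NAND.
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

# Level 1: Boolean gates defined purely in terms of NAND.
def not_(a):      return nand(a, a)
def and_(a, b):   return not_(nand(a, b))
def or_(a, b):    return nand(not_(a), not_(b))
def xor(a, b):    return and_(or_(a, b), nand(a, b))

# Level 2: arithmetic from gates -- a half adder producing sum and carry.
def half_adder(a, b):
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```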

Relevance: 30.00%

Abstract:

The development of protocols for the identification of metal phosphates in phosphate-treated, metal-contaminated soils is a necessary yet problematical step in the validation of remediation schemes involving immobilization of metals as phosphate phases. The potential for Raman spectroscopy to be applied to the identification of these phosphates in soils has yet to be fully explored. With this in mind, a range of synthetic mixed-metal hydroxylapatites has been characterized and added to soils at known concentrations for analysis using both bulk X-ray powder diffraction (XRD) and Raman spectroscopy. Mixed-metal hydroxylapatites in the binary series Ca-Cd, Ca-Pb, Ca-Sr and Cd-Pb, synthesized in the presence of acetate and carbonate ions, were characterized using a range of analytical techniques including XRD, analytical scanning electron microscopy (SEM), infrared spectroscopy (IR), inductively coupled plasma-atomic emission spectrometry (ICP-AES) and Raman spectroscopy. Only the Ca-Cd series displays complete solid solution, although under the synthesis conditions of this study the Cd5(PO4)3OH end member could not be synthesized as a pure phase. Within the Ca-Cd series the cell parameters, IR active modes and Raman active bands vary linearly as a function of Cd content. X-ray diffraction and extended X-ray absorption fine structure spectroscopy (EXAFS) suggest that the Cd is distributed across both the Ca(1) and Ca(2) sites, even at low Cd concentrations. In order to explore the likely detection limits for mixed-metal phosphates in soils for XRD and Raman spectroscopy, soils doped with mixed-metal hydroxylapatites at concentrations of 5, 1 and 0.5 wt.% were then studied. X-ray diffraction could not confirm unambiguously the presence or identity of mixed-metal phosphates in soils at concentrations below 5 wt.%. Raman spectroscopy proved a far more sensitive method for the identification of mixed-metal hydroxylapatites in soils, which could positively identify the presence of such phases in soils at all the dopant concentrations used in this study. Moreover, Raman spectroscopy could also provide an accurate assessment of the degree of chemical substitution in the hydroxylapatites even when present in soils at concentrations as low as 0.1%.
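The reported linear variation of Raman bands with Cd content implies a simple one-variable calibration. Here is a sketch of how such a calibration could be inverted to estimate the degree of substitution; the band positions below are entirely hypothetical stand-ins, not the study's measured values.

```python
import numpy as np

# Hypothetical calibration: position of the strong phosphate nu1 Raman band
# versus Cd mole fraction in the Ca-Cd hydroxylapatite series. The linear
# trend is reported in the study; these numbers are invented for illustration.
cd_fraction = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
band_cm1 = np.array([962.0, 959.8, 957.9, 956.1, 953.9])   # nu1(PO4) position

slope, intercept = np.polyfit(cd_fraction, band_cm1, 1)

def estimate_cd(observed_band_cm1):
    """Invert the linear calibration to estimate the Cd substitution."""
    return (observed_band_cm1 - intercept) / slope

print(f"Band at 957.0 cm^-1 -> x(Cd) ~ {estimate_cd(957.0):.2f}")
```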

Relevance: 30.00%

Abstract:

A physically motivated statistical model is used to diagnose variability and trends in wintertime (October-March) Global Precipitation Climatology Project (GPCP) pentad (5-day mean) precipitation. Quasi-geostrophic theory suggests that extratropical precipitation amounts should depend multiplicatively on the pressure gradient, saturation specific humidity, and the meridional temperature gradient. This physical insight has been used to guide the development of a suitable statistical model for precipitation using a mixture of generalized linear models: a logistic model for the binary occurrence of precipitation and a Gamma distribution model for the wet pentad precipitation amount. The statistical model allows for the investigation of the role of each factor in determining variations and long-term trends. Saturation specific humidity q_s has a generally negative effect on global precipitation occurrence and on the tropical wet pentad precipitation amount, but has a positive relationship with the pentad precipitation amount at mid- and high latitudes. The North Atlantic Oscillation, a proxy for the meridional temperature gradient, is also found to have a statistically significant positive effect on precipitation over much of the Atlantic region. Residual time trends in wet pentad precipitation are extremely sensitive to the choice of the wet pentad threshold because of increasing trends in low-amplitude precipitation pentads; too low a choice of threshold can lead to a spurious decreasing trend in wet pentad precipitation amounts. However, for thresholds that are not too small, the meridional temperature gradient is found to be an important factor in explaining part of the long-term trend in Atlantic precipitation.
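The two-part GLM structure described above is straightforward to sketch. Below is a minimal illustration, assuming statsmodels: a logistic GLM for precipitation occurrence and a Gamma GLM with log link for wet-pentad amounts. The covariates and coefficients are synthetic stand-ins for the three physical factors, not GPCP data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000

# Synthetic stand-ins for the three covariates: pressure gradient,
# saturation specific humidity, meridional temperature gradient (via NAO).
X = sm.add_constant(rng.normal(size=(n, 3)))

# Part 1: binary occurrence of precipitation (logistic GLM).
p_wet = 1 / (1 + np.exp(-(X @ [-0.2, 0.6, -0.3, 0.4])))
wet = rng.random(n) < p_wet
occurrence = sm.GLM(wet.astype(float), X,
                    family=sm.families.Binomial()).fit()

# Part 2: Gamma GLM with log link for the wet-pentad amount.
mu = np.exp(X[wet] @ [1.0, 0.3, 0.5, 0.2])
amount = rng.gamma(shape=2.0, scale=mu / 2.0)
amount_model = sm.GLM(amount, X[wet],
                      family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(occurrence.params)
print(amount_model.params)
```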

Relevance: 30.00%

Abstract:

Genetic association analyses of family-based studies with ordered categorical phenotypes are often conducted using methods designed either for quantitative or for binary traits, which can lead to suboptimal analyses. Here we present an alternative likelihood-based method of analysis for single nucleotide polymorphism (SNP) genotypes and ordered categorical phenotypes in nuclear families of any size. Our approach, which extends our previous work for binary phenotypes, permits straightforward inclusion of covariate, gene-gene and gene-covariate interaction terms in the likelihood, incorporates a simple model for ascertainment, and allows for family-specific effects in the hypothesis test. Additionally, our method produces interpretable parameter estimates and valid confidence intervals. We assess the proposed method using simulated data and apply it to a polymorphism in the C-reactive protein (CRP) gene typed in families collected to investigate human systemic lupus erythematosus. By including sex interactions in the analysis, we show that the polymorphism is associated with anti-nuclear autoantibody (ANA) production in females, while there appears to be no effect in males.
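The paper's family-based likelihood with ascertainment is beyond a short sketch, but the core ingredient, modeling an ordered categorical phenotype with a genotype-by-sex interaction, can be illustrated with a simplified stand-in: proportional-odds ordinal regression on unrelated individuals via statsmodels' OrderedModel. Everything here (data, coding, cut points) is an assumption for illustration, not the authors' method.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
n = 1000

# Simplified data: SNP genotype coded 0/1/2, sex, and their interaction.
geno = rng.integers(0, 3, n)
sex = rng.integers(0, 2, n)                         # 1 = female
latent = 0.5 * geno * sex + rng.logistic(size=n)    # effect only in females
pheno = pd.Series(pd.cut(latent, [-np.inf, -1, 1, np.inf],
                         labels=["low", "mid", "high"], ordered=True))

# Proportional-odds (ordinal logistic) model with a sex interaction term.
X = pd.DataFrame({"geno": geno, "sex": sex, "geno_x_sex": geno * sex})
model = OrderedModel(pheno, X, distr="logit").fit(method="bfgs", disp=False)
print(model.summary())
```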

Relevance: 30.00%

Abstract:

This paper reviews Bayesian procedures for phase 1 dose-escalation studies and compares different dose schedules and cohort sizes. The methodology described is motivated by phase 1 dose-escalation studies in oncology, in which a single dose is administered to each patient and a single binary response ("toxicity" or "no toxicity") is observed. A wider range of applications of the methodology is likely possible. In this paper, results from 10,000-fold simulation runs conducted using the software package Bayesian ADEPT are presented. Four designs were compared under six scenarios. The simulation results indicate slight advantages for designs with more dose levels and smaller cohort sizes.
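As a generic illustration of the Bayesian machinery behind such designs (not the ADEPT model itself), here is a toy grid-based posterior update for a one-parameter logistic dose-toxicity curve, with the next dose chosen to bring the expected toxicity probability near a target. All numbers and the working model are assumptions.

```python
import numpy as np

# Standardized dose metameters and a one-parameter working model:
# P(tox | dose) is logistic in theta * dose, with theta on a grid prior.
doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
theta_grid = np.linspace(0.1, 3.0, 300)
prior = np.ones_like(theta_grid) / theta_grid.size

def p_tox(theta, dose):
    return 1 / (1 + np.exp(-(theta * dose - 3.0)))

def update(prior, dose, n_tox, n_pat):
    """Posterior over theta after n_tox toxicities in n_pat patients."""
    p = p_tox(theta_grid, dose)
    like = p**n_tox * (1 - p)**(n_pat - n_tox)
    post = prior * like
    return post / post.sum()

# After one cohort of 3 at the lowest dose with no toxicities, pick the
# dose whose posterior-mean toxicity probability is closest to 25%.
posterior = update(prior, doses[0], n_tox=0, n_pat=3)
mean_p = np.array([(posterior * p_tox(theta_grid, d)).sum() for d in doses])
next_dose = doses[np.argmin(np.abs(mean_p - 0.25))]
print(f"Recommended next dose metameter: {next_dose:.1f}")
```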

Relevance: 30.00%

Abstract:

In this paper, Bayesian decision procedures are developed for dose-escalation studies based on bivariate observations of undesirable events and signs of therapeutic benefit. The methods generalize earlier approaches that took into account only the undesirable outcomes. Logistic regression models are used to model the two responses, which are both assumed to take a binary form. A prior distribution for the unknown model parameters is suggested, and an optional safety constraint can be included. Gain functions to be maximized are formulated in terms of accurate estimation of the limits of a therapeutic window or optimal treatment of the next cohort of subjects, although the approach could be applied to achieve any of a wide variety of objectives. The designs introduced are illustrated through simulation and retrospective implementation on a completed dose-escalation study. Copyright © 2006 John Wiley & Sons, Ltd.
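To illustrate the bivariate idea in its simplest form, the sketch below models toxicity and benefit with two logistic dose-response curves and picks the dose maximizing a gain, the probability of benefit without toxicity, under an optional safety constraint. Independence of the two endpoints and all numerical values are assumptions made purely for illustration; they are not the paper's models.

```python
import numpy as np

doses = np.linspace(0, 10, 101)

def logistic(x):
    return 1 / (1 + np.exp(-x))

# Two binary endpoints, each modelled by a logistic regression in dose:
p_tox = logistic(0.8 * (doses - 7.0))      # undesirable event, rises late
p_ben = logistic(1.0 * (doses - 3.0))      # therapeutic benefit, rises early

# A simple gain: probability of benefit without toxicity (independence
# assumed here for illustration), plus an optional safety constraint.
gain = p_ben * (1 - p_tox)
safe = p_tox <= 0.30                       # cap on acceptable P(toxicity)

best = doses[safe][np.argmax(gain[safe])]
print(f"Dose maximizing gain within the safety constraint: {best:.1f}")
```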