959 results for Characteristic curves
Abstract:
The red cells found in the red rain in Kerala, India, are now considered a possible extraterrestrial life form. These cells can undergo rapid replication even at an extremely high temperature of 300 °C. They can also be cultured in diverse unconventional chemical substrates. The molecular composition of these cells is yet to be identified. This paper reports the unusual autofluorescence characteristic of the cultured red rain cells. A spectrofluorimetric study was performed to investigate this; it shows a systematic shift of the fluorescence emission peak wavelength as the excitation wavelength is increased. Conventional biomolecules are not known to have this property. Details of this investigation and the results are discussed.
Abstract:
Communication is the process of transmitting data across a channel. Whenever data is transmitted across a channel, errors are likely to occur. Coding theory is a branch of science that deals with finding efficient ways to encode and decode data so that any likely errors can be detected and corrected. There are many methods of encoding and decoding; one of them is algebraic geometric codes, which can be constructed from curves. Cryptography is the science of transmitting messages securely from a sender to a receiver. The objective is to encrypt a message in such a way that an eavesdropper cannot read it. A cryptosystem is a set of algorithms for encryption and decryption. Public key cryptosystems such as RSA and DSS have traditionally been preferred for secure communication through the channel. However, elliptic curve cryptosystems have become a viable alternative, since they provide greater security and use smaller key lengths than other existing cryptosystems. Elliptic curve cryptography is based on the group of points on an elliptic curve over a finite field. This thesis deals with algebraic geometric codes and their relation to cryptography using elliptic curves. Here Goppa codes are used, and the curves used are elliptic curves over a finite field. We relate algebraic geometric codes to cryptography by developing a cryptographic algorithm that includes the processes of encryption and decryption of messages, making use of fundamental properties of elliptic curve cryptography.
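Since the thesis builds on the group of points on an elliptic curve over a finite field, a minimal sketch of that group law may help. The curve y² = x³ + 2x + 3 over GF(97), the base point and the scalars below are illustrative choices, not parameters from the thesis.

```python
# Sketch of elliptic-curve point arithmetic over a small prime field,
# illustrating the group structure elliptic curve cryptography relies on.
# Toy parameters only: real systems use primes of 256 bits or more.

P_MOD = 97          # field prime
A, B = 2, 3         # curve coefficients in y^2 = x^3 + A*x + B

def inv_mod(x, p=P_MOD):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(x, p - 2, p)

def ec_add(p1, p2):
    """Add two points on the curve; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                        # P + (-P) = O
    if p1 == p2:
        s = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD    # tangent slope
    else:
        s = (y2 - y1) * inv_mod(x2 - x1) % P_MOD           # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, point):
    """Double-and-add: compute k * point."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Shared-secret sketch (ECDH-style): both sides compute the same point
G = (3, 6)                          # on the curve: 6^2 = 3^3 + 2*3 + 3 mod 97
alice_pub = scalar_mult(13, G)
shared = scalar_mult(7, alice_pub)  # equals scalar_mult(13, scalar_mult(7, G))
```

The commutativity of scalar multiplication is what makes the key exchange work: both parties arrive at the same curve point from public data plus their own secret scalar.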
Abstract:
The marine atmospheric boundary layer (MABL) plays a vital role in the transport of momentum and heat from the surface of the ocean into the atmosphere. A detailed study of the MABL characteristics was carried out using high-resolution surface-wind data as measured by the QuikSCAT (Quick Scatterometer) satellite. Spatial variations in the surface wind, frictional velocity, roughness parameter and drag coefficient for the different seasons were studied. The surface wind was strong during the southwest monsoon season due to the modulation induced by the Low Level Jetstream. The drag coefficient was larger during this season, due to the strong winds, and was lower during the winter months. The spatial variations in the frictional velocity over the seas were small during the post-monsoon season (~0.2 m s⁻¹). The maximum spatial variation in the frictional velocity was found over the south Arabian Sea (0.3 to 0.5 m s⁻¹) during the southwest monsoon period, followed by the pre-monsoon over the Bay of Bengal (0.1 to 0.25 m s⁻¹). The mean wind-stress curl during the winter was positive over the equatorial region, with a maximum value of 1.5×10⁻⁷ N m⁻³, but on either side of the equatorial belt a negative wind-stress curl dominated. The area averages of the frictional velocity and drag coefficient over the Arabian Sea and Bay of Bengal were also studied. The values of frictional velocity show a variability similar to the intraseasonal oscillation (ISO), and this was confirmed via wavelet analysis. In the case of the drag coefficient, the prominent oscillations were the ISO and the quasi-biweekly mode (QBM). The interrelationship of the drag coefficient and the frictional velocity with wind speed in both the Arabian Sea and the Bay of Bengal was also studied.
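The bulk relations linking wind speed, drag coefficient, frictional velocity and roughness parameter can be sketched as follows. The air density, von Kármán constant, sample drag coefficient and wind speed are assumed textbook values, not results from this study.

```python
import math

RHO_AIR = 1.225     # air density, kg m^-3 (assumed standard value)
KARMAN = 0.4        # von Karman constant

def friction_velocity(u10, cd):
    """u* = sqrt(Cd) * U10, from the bulk stress formula tau = rho*Cd*U10^2."""
    return math.sqrt(cd) * u10

def roughness_length(u10, cd, z=10.0):
    """Invert the neutral log wind profile U(z) = (u*/kappa) * ln(z/z0)
    for the roughness parameter z0, with z the 10 m reference height."""
    ustar = friction_velocity(u10, cd)
    return z / math.exp(KARMAN * u10 / ustar)

# Example: monsoon-strength wind of 12 m/s with a typical open-ocean Cd;
# the resulting u* falls in the 0.3-0.5 m/s range quoted in the abstract.
cd = 1.3e-3
ustar = friction_velocity(12.0, cd)
z0 = roughness_length(12.0, cd)
```

The same bulk formulas also give the wind stress itself as `RHO_AIR * cd * u10**2`, which is the quantity whose curl the study maps.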
Abstract:
The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data calls for many automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
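Folding a time series on a period to produce the phased light curve described above can be sketched in a few lines. The synthetic sinusoid is an illustrative stand-in for real survey data, not data from any survey named in the text.

```python
import numpy as np

def phase_fold(times, period, t0=0.0):
    """Fold observation times on a trial period: phase in [0, 1)."""
    return ((np.asarray(times) - t0) / period) % 1.0

# Toy 'variable star': a sinusoid of period 2.5 d sampled unevenly,
# mimicking the irregular cadence of ground-based photometry.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 200))      # 30 days, uneven sampling
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period)

phase = phase_fold(t, true_period)
# Plotting mag against phase (instead of t) shows the characteristic
# phased light curve; folding on a wrong period scrambles it.
```

Folded at the correct period, magnitudes of neighbouring phases vary smoothly; folded at a wrong period they scatter, which is exactly what dispersion-based period searches exploit.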
One way to identify the type of a variable star and classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (assuming some underlying distribution for the data) and non-parametric methods (assuming no statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
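Of the non-parametric methods listed, Phase Dispersion Minimisation lends itself to a compact sketch. The implementation below is a simplified version of the Stellingwerf (1978) statistic; the bin count, trial-period grid and test signal are illustrative assumptions.

```python
import numpy as np

def pdm_theta(times, mags, period, n_bins=10):
    """Simplified PDM statistic: pooled within-phase-bin variance divided by
    the overall variance. Low theta means the folded curve is coherent,
    so the trial period is a good candidate."""
    phase = (np.asarray(times) / period) % 1.0
    mags = np.asarray(mags)
    overall_var = mags.var(ddof=1)
    num, den = 0.0, 0
    for b in range(n_bins):
        in_bin = mags[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if len(in_bin) > 1:
            num += (len(in_bin) - 1) * in_bin.var(ddof=1)
            den += len(in_bin) - 1
    return (num / den) / overall_var

def pdm_period_search(times, mags, trial_periods):
    """Return the trial period minimising the PDM statistic."""
    thetas = [pdm_theta(times, mags, p) for p in trial_periods]
    return trial_periods[int(np.argmin(thetas))]

# Illustrative test: unevenly sampled sinusoid of period 2.5 d
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 300))
m = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5)
best = pdm_period_search(t, m, np.linspace(1.0, 5.0, 401))
```

Because no Fourier model is assumed, the same search works for non-sinusoidal shapes such as eclipsing-binary light curves, which is the usual argument for non-parametric methods.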
It will be beneficial for the variable star community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
Abstract:
Total energy SCF calculations were performed for noble gas difluorides in a relativistic procedure and compared with analogous non-relativistic calculations. The discrete variational method with numerical basis functions was used. Rather smooth potential energy curves could be obtained. The theoretical Kr-F and Xe-F bond distances were calculated to be 3.5 a.u. and 3.6 a.u., which should be compared with the experimental values of 3.54 a.u. and 3.7 a.u. Although the dissociation energies are off by a factor of about five, it was found that ArF_2 may be a stable molecule. Theoretical ionization energies for the outer levels reproduce the experimental values for KrF_2 and XeF_2 to within 2 eV.
Abstract:
An LCAO-MO (linear combination of atomic orbitals - molecular orbitals) relativistic Dirac-Fock-Slater program is presented, which allows one to calculate accurate total energies for diatomic molecules. Numerical atomic Dirac-Fock-Slater wave functions are used as basis functions. All integrations, as well as the solution of the Poisson equation, are done fully numerically, with a relative accuracy of 10^-5 to 10^-6. The details of the method as well as first results are presented here.
Abstract:
Ab initio fully relativistic SCF molecular calculations of energy eigenvalues as well as coupling-matrix elements are used to calculate the 1sσ excitation differential cross section for Ne-Ne and Ne-O ion-atom collisions. A relativistic perturbation treatment which allows a direct comparison with analogous non-relativistic calculations is also performed.
Abstract:
This thesis addresses the question of how, within a family of abelian t-modules, the subfamily of uniformizable t-modules can be described. Abelian t-modules are higher-dimensional generalizations of Drinfeld modules over algebraic function fields. It is well known that Drinfeld modules in generic characteristic can be parametrized by analytic tori. This fact, however, carries over only to some t-modules, which are called uniformizable. The situation is somewhat analogous to the theory of elliptic curves, tori and abelian varieties over the complex numbers. To decide whether a t-module is uniformizable in this sense, one applies a criterion of Anderson concerning the rigid analytic triviality of the associated t-motives. We apply this criterion to a family of two-dimensional t-modules of rank four depending on coefficients a, b, c, d, and thereby arrive at the equivalent question of the convergence of certain recursively defined sequences. The convergence behaviour of these sequences can be studied well by means of Newton polygons. Finally, this approach yields simply formulated conditions on the coefficients a, b, c, d that either guarantee uniformizability or rule it out.
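Computationally, the Newton polygon technique invoked above amounts to taking the lower convex hull of points (index, valuation) and reading off its slopes. A generic sketch of that hull computation (not the thesis's specific recursion) is:

```python
# Hedged sketch: the lower Newton polygon of a series or polynomial, built
# from points (i, v(a_i)) where v is a valuation of the i-th coefficient.
# The slopes of the polygon govern convergence/growth questions of the kind
# the abstract describes. Input points are illustrative.

def newton_polygon(points):
    """Lower convex hull of (index, valuation) points, left to right
    (a monotone-chain sweep keeping only downward-convex vertices)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def slopes(hull):
    """Successive slopes of the polygon's segments."""
    return [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(hull, hull[1:])]
```

For instance, the points (0,0), (1,2), (2,1), (3,3) have lower hull (0,0), (2,1), (3,3) with slopes 1/2 and 2; increasing slopes reflect the convexity that makes the polygon useful for valuation estimates.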
Abstract:
The stability of the global power supply faces several severe and fundamental threats, in particular steadily increasing power demand, diminishing and degrading fossil and nuclear energy resources, very harmful greenhouse gas emissions, significant energy injustice and a structurally misbalanced ecological footprint. Photovoltaic (PV) power systems are analysed in various respects, focusing on economic and technical considerations of supplemental and substitutional power supply to the constrained conventional power system. To infer the most relevant system approach for PV power plants, several solar resources available for PV systems are compared. By combining the different solar resources and respective economics, two major PV systems are identified as very competitive in almost all regions of the world. The experience curve concept is used as a key technique for developing scenario assumptions on economic projections for the decade of the 2010s. The main drivers of cost reductions in PV systems are the learning rate and the production growth rate, so several relevant aspects are discussed, such as research and development investments, the technical PV market potential, different PV technologies and the energetic sustainability of PV. Three major market segments for PV systems are identified: off-grid PV solutions, decentralised small-scale on-grid PV systems (several kWp) and large-scale PV power plants (tens of MWp). Mainly by applying 'grid-parity' and 'fuel-parity' concepts on a per-country, local market and conventional power plant basis, the global economic market potential for all major PV system segments is derived. The hybridization potential of PV power plants with all relevant power technologies and the global power plant structure are analyzed regarding technical, economic and geographical feasibility. Key success criteria for hybrid PV power plants are discussed and comprehensively analysed for all suitable power plant technologies, i.e. oil-, gas- and coal-fired power plants, wind power, solar thermal power (STEG) and hydro power plants. For the 2010s, detailed global demand curves are derived for hybrid PV-fossil power plants on a per-power-plant, per-country and per-fuel-type basis. The fundamental technical and economic potentials of hybrid PV-STEG, hybrid PV-Wind and hybrid PV-Hydro power plants are considered. The global resource availability for PV and wind power plants is excellent, so knowing the competitive or complementary character of hybrid PV-Wind power plants on a local basis is of utmost relevance. The complementarity of hybrid PV-Wind power plants is confirmed; as a result, almost no reduction of the global economic PV market potential needs to be expected, and more complex power system designs on the basis of hybrid PV-Wind power plants are feasible. The final target of implementing renewable power technologies into the global power system is a nearly 100% renewable power supply. Besides balancing facilities, storage options are needed, in particular for seasonal power storage. Renewable power methane (RPM) offers such options. A comprehensive global and local analysis is performed of a hybrid PV-Wind-RPM combined-cycle gas turbine power system. Such a power system design might be competitive and could offer solutions for nearly all current energy system constraints, including the heating and transportation sectors and even the chemical industry. Summing up, hybrid PV power plants become very attractive, and PV power systems will very likely evolve, together with wind power, into the major and final source of energy for mankind.
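The experience-curve concept used above for the cost projections states that unit cost falls by a fixed learning rate with each doubling of cumulative production. A minimal sketch with illustrative numbers (not figures from this work) is:

```python
import math

def experience_curve_cost(c0, cum0, cum, learning_rate):
    """Unit cost after cumulative production grows from cum0 to cum.

    c = c0 * (cum / cum0) ** (-b), with b = -log2(1 - learning_rate),
    so cost falls by `learning_rate` per doubling of cumulative capacity.
    """
    b = -math.log2(1.0 - learning_rate)
    return c0 * (cum / cum0) ** (-b)

# Example: 3 doublings of cumulative PV capacity at an assumed 20% learning
# rate, starting from an illustrative 4.0 $/Wp at 10 GWp installed.
c = experience_curve_cost(c0=4.0, cum0=10.0, cum=80.0, learning_rate=0.20)
# 3 doublings at 20% per doubling: 4.0 * 0.8**3 = 2.048
```

This is why the production growth rate matters as much as the learning rate: growth determines how quickly the doublings, and hence the cost reductions, accumulate.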
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure on the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To obtain the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly using discretized scalar products. We therefore propose a weighted linear regression approach, where all k-order polynomials are used as predictand variables and weights are proportional to the reference density. Finally, for the case of 2nd-order Hermite polynomials (normal reference) and 1st-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
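The weighted-regression route to Hermite coordinates for the normal-reference case can be sketched as follows. The grid, the weighting and the sanity check are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_coordinates(x, log_density_ratio, max_order):
    """Weighted least-squares coordinates of log(f/phi) in the probabilists'
    Hermite basis He_0..He_k, with weights proportional to the standard
    normal reference density (the regression approach sketched in the text)."""
    w = np.exp(-0.5 * x**2)                 # proportional to N(0,1) density
    # design matrix: He_k(x) for k = 0..max_order
    X = np.column_stack([
        hermeval(x, [0] * k + [1]) for k in range(max_order + 1)
    ])
    wsqrt = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * wsqrt[:, None],
                               log_density_ratio * wsqrt, rcond=None)
    return coef

# Sanity check against a shifted normal: for f = N(mu, 1) the log-ratio is
# log(f/phi) = mu*x - mu^2/2, so the He_1 coordinate should recover mu.
x = np.linspace(-4.0, 4.0, 201)
mu = 0.7
coords = hermite_coordinates(x, mu * x - mu**2 / 2, max_order=2)
```

The weighting downplays the tails of the grid, which is the proposed remedy for the numerical accuracy problem of discretized scalar products mentioned above.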
Abstract:
Introduction: Given the ageing of the population and the high prevalence of rotator cuff injuries, it is no surprise that this pathology is becoming a public health problem. It is known that an increase in the size of a lesion is associated with the onset of symptoms, but no tools exist to predict how the size of a lesion will evolve. With this in mind, a line of research was developed to study the failure mechanism, starting with the construction of a three-dimensional model of a healthy supraspinatus muscle tendon. Materials and methods: The supraspinatus tendon was characterized by applying uniaxial loads to 7 cadaveric humerus-tendon-scapula complexes. The data obtained were fed into a linear isotropic three-dimensional model, analysing the concentration of von Mises stresses. Results: The uniaxial test yielded homogeneous stress-strain curves up to 20% initial strain, giving a Young's modulus of 14.4±2.3 MPa and a Poisson's ratio of 0.14, with a stress concentration in the central zone of the articular side of the tendon, close to its insertion. We found a 5% decrease in stresses when the acromion was removed from the model. Conclusions: The tendon was successfully characterized and a three-dimensional model was obtained. The stress distribution is consistent with that reported in the literature. The acromion has no major influence on the magnitude of the stresses in our model. This is the starting point for studying the failure mechanism.
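The linear-region characterization reported above, with Young's modulus as the slope of the stress-strain curve, can be sketched as a least-squares fit. The synthetic data simply reproduce the reported mean modulus of 14.4 MPa and are not the study's measurements.

```python
import numpy as np

def youngs_modulus(strain, stress_mpa):
    """Young's modulus E as the slope of a linear least-squares fit to the
    initial linear region of a stress-strain curve; returns E in MPa."""
    slope, _intercept = np.polyfit(strain, stress_mpa, 1)
    return slope

# Synthetic linear-region data over the 0-20% strain window the abstract
# describes, built with E = 14.4 MPa (the reported mean value).
strain = np.linspace(0.0, 0.20, 50)
stress = 14.4 * strain                  # MPa
E = youngs_modulus(strain, stress)
```

With real test data the fit would be restricted to the homogeneous initial region, since tendon stress-strain curves become nonlinear at larger strains.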
Abstract:
Exercises and solutions about vector functions and curves.
Abstract:
Forecasting atmospheric blocking is one of the main problems facing medium-range weather forecasters in the extratropics. The European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) provides an excellent basis for medium-range forecasting, as it provides a number of different possible realizations of the meteorological future. This ensemble of forecasts attempts to account for uncertainties in both the initial conditions and the model formulation. Since 18 July 2000, routine output from the EPS has included the field of potential temperature on the potential vorticity (PV) = 2 PV units (PVU) surface, the dynamical tropopause. This has enabled the objective identification of blocking using an index based on the reversal of the meridional potential-temperature gradient. A year of EPS probability forecasts of Euro-Atlantic and Pacific blocking has been produced and is assessed in this paper, concentrating on the Euro-Atlantic sector. Standard verification techniques such as Brier scores, Relative Operating Characteristic (ROC) curves and reliability diagrams are used. It is shown that Euro-Atlantic sector-blocking forecasts are skilful relative to climatology out to 10 days, and are more skilful than the deterministic control forecast at all lead times. The EPS is also more skilful than a probabilistic version of this deterministic forecast, though the difference is smaller. In addition, it is shown that the onset of a sector-blocking episode is less well predicted than its decay. As the lead time increases, the probability forecasts tend towards a model climatology with slightly less blocking than is seen in the real atmosphere. This small under-forecasting bias in the blocking forecasts is possibly related to a westerly bias in the ECMWF model. Copyright © 2003 Royal Meteorological Society
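The Brier score used in the verification above can be sketched directly from its definition, together with a skill score against climatology. The toy probabilities and outcomes are illustrative, not EPS data.

```python
import numpy as np

def brier_score(prob_forecast, outcome):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0 = no blocking, 1 = blocking); lower is better."""
    p = np.asarray(prob_forecast, dtype=float)
    o = np.asarray(outcome, dtype=float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(prob_forecast, outcome, climatology):
    """Skill relative to a constant climatological probability; positive
    values mean the forecast beats climatology."""
    bs = brier_score(prob_forecast, outcome)
    bs_clim = brier_score(np.full(len(outcome), climatology), outcome)
    return 1.0 - bs / bs_clim

# Toy ensemble-derived blocking probabilities vs. observed occurrence
p = [0.9, 0.7, 0.2, 0.1, 0.8]
o = [1, 1, 0, 0, 1]
```

A deterministic control forecast can be scored the same way by treating it as probabilities of 0 or 1, which is how the probabilistic and deterministic forecasts above can be compared on one scale.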
Abstract:
Prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures, is essential for space weather forecasting. To achieve forecast lead times of a day or more, such predictions must be made on the basis of remote solar observations. A number of empirical prediction schemes have been proposed to forecast the transit time and speed of coronal mass ejections (CMEs) at 1 AU. However, the current lack of magnetic field measurements in the corona severely limits our ability to forecast the 1 AU magnetic field strengths resulting from interplanetary CMEs (ICMEs). In this study we investigate the relation between the characteristic magnetic field strengths and speeds of both magnetic cloud and noncloud ICMEs at 1 AU. Correlation between field and speed is found to be significant only in the sheath region ahead of magnetic clouds, not within the clouds themselves. The lack of such a relation in the sheaths ahead of noncloud ICMEs is consistent with such ICMEs being skimming encounters of magnetic clouds, though other explanations are also put forward. Linear fits to the radial speed profiles of ejecta reveal that faster-traveling ICMEs are also expanding more at 1 AU. We combine these empirical relations to form a prediction scheme for the magnetic field strength in the sheaths ahead of magnetic clouds and also suggest a method for predicting the radial speed profile through an ICME on the basis of upstream measurements.