933 results for rainfall-runoff empirical statistical model


Relevance:

100.00%

Publisher:

Abstract:

Growing consumer demand for the welfare of birds in poultry houses has motivated much scientific research into monitoring and classifying welfare according to the production environment. Given the complexity of the interaction between the birds and the aviary environment, correct interpretation of behavior becomes an important way to estimate the welfare of these birds. This study obtained multiple logistic regression models capable of estimating the welfare of broiler breeders in relation to the environment of the aviaries and the behaviors expressed by the birds. In the experiment, several behaviors expressed by breeders housed in a climatic chamber were observed under controlled temperatures and three different ammonia concentrations in the air, monitored daily. From the analysis of the data, two logistic regression models were obtained: the first uses the measured ammonia concentration as a continuous value, while the second uses a binary classification of the ammonia concentration assigned by a person through olfactory perception. The analysis showed that both models classified broiler breeder welfare successfully.
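As an illustration, a minimal sketch of fitting such a logistic welfare classifier in Python, with entirely hypothetical variables (temperature, ammonia concentration, a behavior frequency) standing in for the study's actual predictors:

```python
# Minimal sketch: logistic regression classifying welfare from environmental
# and behavioral variables. All columns and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(25, 3, n),      # chamber temperature (deg C), hypothetical
    rng.uniform(0, 30, n),     # ammonia concentration (ppm), hypothetical
    rng.integers(0, 10, n),    # frequency of a target behavior, hypothetical
])
# Hypothetical welfare label (1 = adequate, 0 = compromised), noisy in ammonia.
y = ((X[:, 1] < 15) ^ (rng.random(n) < 0.1)).astype(int)

model = LogisticRegression()
print(cross_val_score(model, X, y, cv=5).mean())  # rough classification accuracy
```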

Relevance:

100.00%

Publisher:

Abstract:

Prostate-specific antigen (PSA) is a marker commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease that may not cause any symptoms whatsoever. Nevertheless, some cancers are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers that would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether the intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized to make them less prone to interference caused by factors present in some blood samples. The optimized assays were shown to work well and were used to study marker concentrations in the clinical sample panels. The marker levels were measured in preoperative blood samples from prostate cancer patients scheduled for radical prostatectomy, and the association of the markers with cancer stage and grade was studied. Among all tested markers and their combinations, the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA in particular were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. The markers tPSA, fPSA, fPSA-I and hK2 were combined in a statistical model previously shown to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared to that of models based on established clinical predictors, with biopsy outcome as the reference. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with prostate volume, and the model predicted prostate cancer in biopsy as well as the clinical models did. Hence, the measurement of kallikreins in a blood sample could replace the volume measurement, which is time-consuming, requires instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. Its analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
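A rough sketch of the calculated-marker idea, with entirely hypothetical marker panels: fPSA-N is derived as fPSA minus fPSA-I (the abstract's definition) and the discrimination of a marker ratio is compared with tPSA alone via the C-statistic:

```python
# Hypothetical marker panels; the only relation taken from the abstract is
# fPSA-N = fPSA - fPSA-I. AUC (C-statistic) summarizes discrimination.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
biopsy_positive = rng.integers(0, 2, n)              # hypothetical biopsy outcome
tPSA = rng.lognormal(1.5, 0.4, n) + biopsy_positive  # hypothetical ng/mL values
frac_free = np.where(biopsy_positive == 1,
                     rng.uniform(0.05, 0.15, n),     # lower free fraction in cancer
                     rng.uniform(0.15, 0.30, n))
fPSA = frac_free * tPSA
fPSA_I = fPSA * rng.uniform(0.3, 0.7, n)
fPSA_N = fPSA - fPSA_I                               # calculated nicked PSA

print("AUC, tPSA alone:   ", round(roc_auc_score(biopsy_positive, tPSA), 2))
print("AUC, 1 - fPSA/tPSA:", round(roc_auc_score(biopsy_positive, 1 - fPSA / tPSA), 2))
```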

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To analyze the prevalence of, and factors associated with, fragility fractures in Brazilian women aged 50 years and older. METHODS: This cross-sectional population survey, conducted between May 10 and October 31, 2011, included 622 women aged 50 years or older living in a city in southeastern Brazil. A questionnaire was administered to each woman by a trained interviewer. The associations between the occurrence of a fragility fracture after age 50 and sociodemographic data, health-related habits and problems, self-perception of health, and evaluation of functional capacity were determined by the χ2 test and Poisson regression using backward selection. RESULTS: The mean age of the 622 women was 64.1 years. The prevalence of fragility fractures was 10.8%, with 1.8% reporting hip fracture. In the final statistical model, a longer time since menopause (PR 1.03; 95%CI 1.01-1.05; p<0.01) and osteoporosis (PR 1.97; 95%CI 1.27-3.08; p<0.01) were associated with a higher prevalence of fractures. CONCLUSIONS: These findings may provide a better understanding of the risk factors associated with fragility fractures in Brazilian women and emphasize the importance of performing bone densitometry.
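A minimal sketch of the analysis approach, assuming hypothetical survey variables: Poisson regression with a robust covariance is a common way to estimate prevalence ratios (PR) for binary outcomes in cross-sectional data:

```python
# Sketch: prevalence ratios via Poisson regression with robust (HC0)
# standard errors. Data below are simulated stand-ins for the survey.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 622
years_since_menopause = rng.uniform(0, 30, n)
osteoporosis = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([years_since_menopause, osteoporosis]))
# Hypothetical fracture outcome loosely tied to the predictors.
p = 1 / (1 + np.exp(-(-3 + 0.03 * years_since_menopause + 0.7 * osteoporosis)))
fracture = rng.binomial(1, p)

fit = sm.GLM(fracture, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))  # exponentiated coefficients = prevalence ratios
```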

Relevance:

100.00%

Publisher:

Abstract:

Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support the efficient development of such systems, and the reliability of the simulated or verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about networked system development is growing; at the same time, the mathematical background required to use formal techniques is a barrier that keeps network designers from employing them efficiently, so these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems while taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as network development. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, namely wireless sensor network architectures, as well as to a pervasive class of static networks, namely network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (UPPAAL SMC).
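For intuition, statistical model checking estimates the probability that a property holds by running many stochastic simulations. The toy sketch below illustrates the idea with a hypothetical lossy multi-hop delivery model and a Chernoff-Hoeffding sample-size bound; it is not the UPPAAL SMC tool or its API:

```python
# Toy illustration of statistical model checking: estimate the probability
# that a message survives a lossy multi-hop path, with enough simulation
# runs for a given precision/confidence (Chernoff-Hoeffding bound).
import math
import random

def simulate_delivery(hops=5, loss_prob=0.05):
    # One random run of the toy protocol: the message must cross all hops.
    return all(random.random() > loss_prob for _ in range(hops))

epsilon, delta = 0.01, 0.05                      # precision and confidence
n_runs = math.ceil(math.log(2 / delta) / (2 * epsilon**2))
estimate = sum(simulate_delivery() for _ in range(n_runs)) / n_runs
print(f"P(delivery) ~ {estimate:.3f} within +/-{epsilon} w.p. {1 - delta}")
```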

Relevance:

100.00%

Publisher:

Abstract:

Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
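For intuition about density ridges, the sketch below projects a point onto a one-dimensional ridge of a Gaussian kernel density using subspace-constrained mean shift (SCMS), a compact alternative that targets the same ridge definition; the thesis itself develops a trust-region Newton projection instead:

```python
# Projecting a point onto a 1-D ridge of a Gaussian KDE via SCMS: the
# mean-shift step is constrained to the subspace of strongest downward
# curvature (smallest Hessian eigenvalues), so iterates converge to ridge
# points rather than to local maxima.
import numpy as np

def scms_project(x, data, h, ridge_dim=1, n_iter=500, tol=1e-9):
    d = data.shape[1]
    for _ in range(n_iter):
        u = (x - data) / h
        w = np.exp(-0.5 * np.einsum('ni,ni->n', u, u))      # Gaussian kernel weights
        mean_shift = (data * w[:, None]).sum(axis=0) / w.sum() - x
        # Unnormalised KDE Hessian at x.
        H = (np.einsum('ni,nj,n->ij', u, u, w) - np.eye(d) * w.sum()) / h**2
        _, vecs = np.linalg.eigh(H)                         # ascending eigenvalues
        V = vecs[:, :d - ridge_dim]                         # strongest-curvature directions
        step = V @ (V.T @ mean_shift)                       # constrained mean-shift step
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Noisy points along a curve; project an off-curve point onto the density ridge.
rng = np.random.default_rng(3)
t = rng.uniform(-2, 2, 500)
data = np.column_stack([t, t**2]) + rng.normal(0, 0.1, (500, 2))
print(scms_project(np.array([0.5, 1.0]), data, h=0.3))
```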

Relevance:

100.00%

Publisher:

Abstract:

Successful management of rivers requires an understanding of the fluvial processes that govern them. This, in turn, cannot be achieved without a means of quantifying their geomorphology and hydrology and the spatio-temporal interactions between them, that is, their hydromorphology. For a long time it has been laborious and time-consuming to measure river topography, especially in the submerged part of the channel. The measurement of the flow field has been challenging as well, and hence such measurements have long been sparse in natural environments. Technological advancements in remote sensing in recent years have opened up new possibilities for capturing synoptic information on river environments. This thesis presents new developments in fluvial remote sensing of both topography and water flow. A set of close-range remote sensing methods is employed to construct a high-resolution unified empirical hydromorphological model, that is, river channel and floodplain topography together with the three-dimensional areal flow field. Empirical as well as hydraulic theory-based optical remote sensing methods are tested and evaluated using normal colour aerial photographs and sonar calibration and reference measurements on a rocky-bed sub-Arctic river. The empirical optical bathymetry model is developed further by the introduction of a deep-water radiance parameter estimation algorithm that extends the field of application of the model to shallow streams. The effect of this parameter on the model is also assessed in a study of a sandy-bed sub-Arctic river using close-range high-resolution aerial photography, presenting one of the first examples of fluvial bathymetry modelling from unmanned aerial vehicles (UAVs). Further close-range remote sensing methods are added to complete the topography, integrating the river bed with the floodplain to create a seamless high-resolution topography. Boat-, cart- and backpack-based mobile laser scanning (MLS) is used to measure the topography of the dry part of the channel at high resolution and accuracy. Multitemporal MLS is evaluated, along with UAV-based photogrammetry, against terrestrial laser scanning reference data and merged with UAV-based bathymetry to create a two-year series of seamless digital terrain models, which allow the evaluation of the methodology for high-resolution change analysis of the entire channel. The remote sensing based model of hydromorphology is completed by a new methodology for mapping the flow field in 3D. An acoustic Doppler current profiler (ADCP) is deployed on a remote-controlled boat with a survey-grade global navigation satellite system (GNSS) receiver, allowing the positioning of the areally sampled 3D flow vectors in 3D space as a point cloud; the interpolation of this point cloud into a 3D matrix allows quantitative volumetric flow analysis. Multitemporal areal 3D flow field data show the evolution of the flow field during a snow-melt flood event. The combination of the underwater and dry topography with the flow field yields a complete model of river hydromorphology at the reach scale.
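A minimal sketch of a Lyzenga-style empirical optical bathymetry model, with hypothetical radiance values: depth is regressed on log-transformed radiance after subtracting a deep-water radiance term, which is the parameter the thesis estimates with a dedicated algorithm:

```python
# Lyzenga-style empirical bathymetry: ln(L - L_deep) is linear in depth for
# each band, so depth is fitted by multiple linear regression against sonar
# calibration depths. All values below are hypothetical.
import numpy as np

def fit_bathymetry(radiance, deep_water_radiance, calib_depth):
    # radiance: (n_points, n_bands) image values at sonar calibration points
    X = np.log(radiance - deep_water_radiance)        # Lyzenga transform
    A = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(A, calib_depth, rcond=None)
    return coeffs                                     # a0, a1..an

rng = np.random.default_rng(4)
depth = rng.uniform(0.2, 2.0, 100)                    # calibration depths (m)
L_deep = np.array([20.0, 15.0, 10.0])                 # hypothetical deep-water radiance per band
radiance = L_deep + 80 * np.exp(-depth[:, None] * np.array([0.9, 1.2, 1.8]))
print(fit_bathymetry(radiance, L_deep, depth))
```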

Relevance:

100.00%

Publisher:

Abstract:

The determination of the sterilization value for low-acid foods in retorts includes a critical evaluation of the factory's facilities and utilities, validation of the heat processing equipment (by heat distribution assays), and finally heat penetration assays with the product. The intensity of the heat process applied to the food can be expressed by the Fo value (sterilization value, in minutes, at a reference temperature of 121.1 °C with a thermal index z of 10 °C, for Clostridium botulinum spores). For safety reasons, the lowest Fo value obtained in heat penetration assays is frequently adopted as indicative of the minimum process intensity applied; this lowest Fo value should always be higher than the minimum Fo recommended for the food in question. However, using the Fo value at the coldest point can fail to statistically explain all the practical occurrences in food heat treatment processes. Thus, as a result of intensive experimental work, we aimed to develop a new approach to determining the lowest Fo value, which we renamed the critical Fo. The critical Fo is based on a statistical model for the interpretation of the results of heat penetration assays in packages, and it depends not only on the Fo values found at the coldest point of the package and the coldest point of the equipment, but also on the size of the batch of packages processed in the retort, the total processing time in the retort, and the time between CIPs (clean-in-place procedures) of the retort. In the present study we explored the results of the physical measurements used in the validation of food heat processes. Three worked examples of the calculations illustrate the methodology developed and introduce the concept of the critical Fo for the processing of canned food.
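The Fo value is the time integral of the lethal rate 10^((T − 121.1)/z) with z = 10 °C. A minimal sketch of computing it from a measured (here hypothetical) time-temperature profile by the General Method:

```python
# Computing Fo (minutes at 121.1 C, z = 10 C) from a time-temperature
# profile by the General Method: Fo = integral of 10**((T - 121.1)/10) dt.
# The heating profile below is hypothetical.
import numpy as np

T_REF, Z = 121.1, 10.0

def fo_value(time_min, temp_c):
    lethal = 10.0 ** ((np.asarray(temp_c) - T_REF) / Z)
    # Trapezoidal integration over the sampled profile.
    return float(np.sum((lethal[1:] + lethal[:-1]) / 2 * np.diff(time_min)))

time_min = np.arange(0, 61, 1.0)                 # 1-min sampling
temp_c = 121.1 - 50 * np.exp(-time_min / 12)     # cold-spot heating toward retort temp
print(f"Fo = {fo_value(time_min, temp_c):.1f} min")
```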

Relevance:

100.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between the contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
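For reference, the standard textbook relations the abstract builds on (not equations reproduced from the paper) can be written as:

```latex
% Stochastic discount factor pricing and the implied beta representation
% (standard forms, not reproduced from the paper).
\begin{align}
  1 &= \mathbb{E}\!\left[ m_{t+1} R_{i,t+1} \mid I_t \right]
      && \text{(SDF pricing of gross returns)} \\
  m_{t+1} &= b_0(Y_t) + \sum_{k=1}^{K} b_k(Y_t)\, F_{k,t+1}
      && \text{(SDF spanned by factors, state-dependent coefficients)} \\
  \mathbb{E}\!\left[ R_{i,t+1} \mid I_t \right] - R_{f,t}
      &= \sum_{k=1}^{K} \beta_{ik}(Y_t)\, \lambda_k(Y_t)
      && \text{(conditional beta pricing)}
\end{align}
```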

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a homogeneous and rigorous analysis of the sample of white dwarf stars located within 20 pc of the Sun. The main objective of this study is to obtain a statistically viable model of the sample that is most representative of the white dwarf population. Starting from the sample defined by Holberg et al. (2008), the first step was to gather as much information as possible on all the local candidates, in the form of visible spectra and photometric data. Using the most recent white dwarf model atmospheres of Tremblay & Bergeron (2009), together with different analysis techniques, the atmospheric parameters (Teff and log g) of the white dwarfs in this sample were obtained in a homogeneous way. The spectroscopic technique, i.e., measuring Teff and log g by fitting the spectral lines, was applied to every star in our sample for which a visible spectrum showing sufficiently strong lines was available. For the stars with photometric data, the energy distribution, combined with the trigonometric parallax where measured, was used to determine the atmospheric parameters as well as the chemical composition of the star. A revised catalog of the white dwarfs in the solar neighborhood is presented, including all the newly determined atmospheric parameters. The resulting global analysis is then presented, including a study of the distribution of the chemical composition of the local white dwarfs, the mass distribution, and the luminosity function.
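A rough sketch of the photometric technique, with a hypothetical model_flux surrogate standing in for an interpolated model-atmosphere grid such as that of Tremblay & Bergeron (2009): observed fluxes are fitted by synthetic fluxes scaled by the solid angle π(R/D)², with the distance D fixed by the trigonometric parallax:

```python
# Photometric fit sketch: Teff and radius from multiband fluxes plus a
# parallax-based distance. `model_flux` is a hypothetical stand-in for a
# real model-atmosphere grid interpolator.
import numpy as np
from scipy.optimize import least_squares

def model_flux(teff, n_bands):
    # Hypothetical grid surrogate: colors steepen with Teff.
    return (teff / 1e4) ** (4.0 - 0.4 * np.arange(n_bands))

rng = np.random.default_rng(5)
n_bands = 5
d_cm = 15.0 * 3.086e18                         # 15 pc from the parallax, in cm
true_teff, true_radius_km = 9000.0, 7000.0
omega = np.pi * (true_radius_km * 1e5 / d_cm) ** 2
f_obs = omega * model_flux(true_teff, n_bands) * (1 + rng.normal(0, 0.02, n_bands))
f_err = 0.02 * f_obs

def residuals(params):
    teff, radius_km = params
    w = np.pi * (radius_km * 1e5 / d_cm) ** 2   # solid angle pi*(R/D)^2
    return (w * model_flux(teff, n_bands) - f_obs) / f_err

fit = least_squares(residuals, x0=[12000.0, 9000.0],
                    bounds=([3000.0, 1000.0], [40000.0, 20000.0]))
print(fit.x)  # best-fit Teff and radius; log g follows from a mass-radius relation
```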

Relevance:

100.00%

Publisher:

Abstract:

Essential hypertension (EH) is a complex, multifactorial pathology with a strong genetic component. The impact of copy number variability on hypertension is still poorly understood. We hypothesized that common copy number variants (CNVs) could increase or decrease the risk of hypertension. We explored this hypothesis by performing genome-wide associations of CNVs with hypertension, and with hypertension plus type 2 diabetes (T2D), in 21 families from Saguenay-Lac-St-Jean (SLSJ) characterized by early development of hypertension and dyslipidemia. For replication, we had, on the one hand, 3349 diabetic subjects from the ADVANCE cohort selected for vascular complications and, on the other hand, 187 subjects from the Czech Post-MONICA (CTPM) cohort chosen according to the presence/absence of albuminuria and/or metabolic syndrome. Finally, 134 subjects from the CARTaGENE cohort were analyzed for functional validation. We detected two new loci, CNV regions (CNVRs) with quantitative effects, on 17q21.31, associated with hypertension and T2D in the SLSJ subjects and associated with hypertension in the ADVANCE diabetics. A statistical model including the two variants highlighted the essential role of the CNVR1 locus in insulin resistance, in the early onset and duration of diabetes, and in cardiovascular risk. CNVR1 regulates the expression of the pseudogene LOC644172, whose dosage is associated with the prevalence of hypertension and T2D and, more particularly, with cardiovascular risk and vascular age (P < 2×10^-16). Our results suggest that carriers of the duplication at the CNVR1 locus develop an early abnormality of pancreatic beta-cell function and insulin resistance, due to a high dosage of LOC644172, which would in turn perturb the regulation of the functional paralogous gene MAPK8IP1. We also identified six highly heritable CNVRs associated with hypertension in the SLSJ subjects. The combined-effects score of these CNVRs was positively and closely related to the prevalence of hypertension (P = 2×10^-10) and to the age at diagnosis of hypertension. In the SLSJ population, the combined-effects score yields a C-statistic of 0.71 for hypertension and appears as effective as the Framingham risk score for predicting hypertension in subjects under 25 years of age. Only one new CNVR locus, on 19q13.12, where the deletion is associated with a risk of hypertension, was confirmed in the CTPM Caucasians. This CNVR encompasses the FFAR3 gene. In mice, the hypotensive action of propionate has been shown to be partly mediated by Ffar3, through an interplay between the gut flora and the cardiovascular and renal systems. The CNVRs identified in this study affect genes, or are located in QTLs, mostly related to inflammatory and immune responses, the renal system, renal lesion/repair, or speciation. This study suggests that the etiology of hypertension, or of hypertension associated with T2D, is affected by additive or interactive effects of CNVRs.
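A minimal sketch of the combined-effects score idea, with entirely hypothetical dosages and effect sizes: per-CNVR effects are summed into a score whose discrimination for hypertension is summarized by a C-statistic:

```python
# Combined CNVR burden score sketch: weighted sum of per-locus dosage
# deviations, evaluated with a C-statistic (AUC). All data are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n, n_cnvr = 500, 6
dosage = rng.integers(0, 3, size=(n, n_cnvr))     # copy number deviation per CNVR
weights = rng.normal(0.4, 0.1, n_cnvr)            # hypothetical per-CNVR effect sizes
score = dosage @ weights

p = 1 / (1 + np.exp(-(score - score.mean())))     # hypothetical risk link
hypertension = rng.binomial(1, p)
print("C-statistic:", roc_auc_score(hypertension, score))
```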

Relevance:

100.00%

Publisher:

Abstract:

Introduction. In utero, infection of the maternal and fetal membranes, chorioamnionitis, often goes unnoticed and, particularly when associated with acidemia due to umbilical cord occlusion (UCO), as would occur during labor, can cause brain lesions and have long-term perinatal and postnatal neurological repercussions for the fetus. There is currently no way to detect these pathological conditions early in utero in order to prevent or limit this damage. Hypotheses. 1) The fetal electroencephalogram (EEG), obtained from the fetal scalp, could serve as an auxiliary tool to electronic fetal heart rate (FHR) monitoring for the early detection of fetal acidemia and neurological insult; 2) the sampling rate of the fetal ECG (fECG) has an important impact on continuous fetal heart rate variability (fHRV) monitoring for predicting fetal acidemia; 3) the patterns of correlation of fHRV with pro-inflammatory cytokines will reflect the spontaneous versus inflammatory response states of the cholinergic anti-inflammatory pathway (CAP); 4) through the development of a mathematical prediction model, the prediction of pH and base excess (BE) at birth will be possible with only one hour of fECG monitoring. Methods. In a series of fundamental and clinical studies, using sheep and a cohort of women in labor as experimental and clinical models respectively, we modeled 1) a situation of cerebral hypoxia resulting from umbilical cord occlusion sequences of increasing severity until a critical pH limit of 7.00 was reached, as an experimental method analogous to human labor, to test the first and second hypotheses; 2) a moderate fetal inflammation, by administering LPS to another animal cohort, to verify the third hypothesis; and 3) a mathematical prediction model based on clinically validated parameters and measurements to determine the predictive factors of fetal distress, to test the last hypothesis. Results. The series of repetitive UCOs resulted in marked acidosis (arterial pH 7.35±0.01 to 7.00±0.01) and a decrease in EEG amplitudes synchronized with the FHR decelerations induced by the UCOs, accompanied by a pathological drop in arterial blood pressure (BP), as well as a marked increase in fHRV with worsening hypoxia-acidemia at 1000 Hz, but not at 4 Hz, the sampling rate used clinically. The administration of LPS caused systemic inflammation in the fetus, with IL-6 peaking 3 h later and fHRV changes precisely tracking this temporal cytokine profile. Clinically, with our original and validation cohorts, a statistical model based on a matrix of 103 fHRV measures (R2 = 0.90, P < 0.001) allowed the prediction of pH, but not of BE, from one hour of FHR recording before pushing. Conclusions. The decrease in EEG amplitude suggests an adaptive, neuroprotective brain shutdown mechanism, and suggests that the fetal EEG may be a useful complement to FHR monitoring during high-risk labor. The fact that fHRV detects worsening hypoxia-acidemia early in the fetus at 1000 Hz but not at 4 Hz suggests that a more sensitive fetal ECG acquisition mode could provide a solution.
Distinctive profiles of fHRV measures, identified in correlation with inflammation levels, open a new avenue for characterizing the inflammatory profile of the fetal response to infection. Clinically, bedside monitoring that predicts pH and BE at birth from fHRV measures would allow more explicit visual interpretations for more accurate decision-making in obstetrics during labor.
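To illustrate the sampling-rate point, a minimal sketch of two standard time-domain HRV measures computed from hypothetical R-R intervals: quantizing R-peak times to the 250 ms resolution of 4 Hz sampling erases variability that 1 ms resolution (1000 Hz) retains:

```python
# SDNN and RMSSD from R-R intervals, under two R-peak timing resolutions.
# R-R data are hypothetical; the quantization mimics 1000 Hz vs 4 Hz ECG.
import numpy as np

def sdnn(rr_ms):
    return np.std(rr_ms, ddof=1)                   # overall variability

def rmssd(rr_ms):
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # beat-to-beat variability

rng = np.random.default_rng(7)
rr_true = 420 + rng.normal(0, 8, 300)              # hypothetical fetal R-R intervals (ms)
for dt in (1.0, 250.0):                            # timing resolution at 1000 Hz vs 4 Hz
    rr = np.round(rr_true / dt) * dt               # quantize R-peak timing
    print(f"resolution {dt:>5} ms: SDNN={sdnn(rr):5.1f}  RMSSD={rmssd(rr):5.1f}")
```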

Relevance:

100.00%

Publisher:

Abstract:

In a hydraulic turbine, the rotation of the blades in the water creates a low-pressure zone, causing the water to pass from the liquid to the gaseous state. This phase-change phenomenon is called cavitation and is similar to boiling. When the vapor cavities that form implode near the walls, the result is severe erosion of the materials, significantly accelerating the degradation of the turbine. A cavitation-erosion detection system based on vibration measurements, usable on turbines in operation, was therefore installed on four turbine-generator units of a power plant; it provides a precise estimate of the erosion rate in kg/10,000 h. The present project has two main objectives. First, to study the cavitation behavior of a target turbine-generator unit and build a statistical model with the aim of predicting the cavitation variable as a function of the operating variables (such as wicket gate opening, flow rate, upstream and downstream levels, etc.). Second, to develop a methodology allowing the study to be reproduced at other sites. A retrospective study will be carried out, focusing on the data available since the system was updated in 2010. Preliminary results have highlighted the heterogeneity of the cavitation behavior as well as changes in the relationship between cavitation and various operating variables. We propose to develop a suitable probabilistic model, notably using hierarchical clustering and multiple linear regression models.
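A minimal sketch of the proposed modelling approach, with hypothetical operating variables: operating regimes are grouped by hierarchical clustering, then a multiple linear regression for the cavitation variable is fitted within each cluster:

```python
# Hierarchical clustering of operating regimes + per-cluster multiple linear
# regression of the cavitation variable. All variables are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 400
ops = np.column_stack([
    rng.uniform(40, 100, n),     # wicket gate opening (%)
    rng.uniform(100, 300, n),    # flow rate (m^3/s)
    rng.uniform(5, 40, n),       # net head (m), hypothetical
])
cavitation = 0.02 * ops[:, 0] + 0.01 * ops[:, 1] + rng.normal(0, 0.5, n)

labels = fcluster(linkage(ops, method="ward"), t=3, criterion="maxclust")
for k in np.unique(labels):
    mask = labels == k
    r2 = LinearRegression().fit(ops[mask], cavitation[mask]).score(ops[mask], cavitation[mask])
    print(f"cluster {k}: n={mask.sum():3d}  R^2={r2:.2f}")
```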

Relevance:

100.00%

Publisher:

Abstract:

Adolescent idiopathic scoliosis (AIS) is a musculoskeletal pathology. It is a complex three-dimensional spinal curvature that also affects the appearance of the trunk. The clinical follow-up of AIS is decisive for its management. Currently, the Cobb angle, measured from full-spine radiography, is the most common indicator of scoliosis progression. However, cumulative exposure to X-ray radiation increases the risk of certain cancers, so a noninvasive method for identifying scoliosis progression from trunk shape analysis would be helpful. In this study, a statistical model is built from a set of healthy subjects using independent component analysis and a genetic algorithm. Based on this model, a representation of each scoliotic trunk from a set of AIS patients is computed, and the difference between two successive acquisitions is used to determine whether the scoliosis has progressed. This study was conducted on 58 subjects comprising 28 healthy subjects and 30 AIS patients, all of whom had trunk surface acquisitions in upright standing posture. The model detects 93% of the progressive cases and 80% of the nonprogressive cases. The rate of false negatives, representing the proportion of undetected progressions, is thus very low, at only 7%. This study shows that it is possible to perform a scoliotic patient's follow-up using 3-D trunk image analysis, which is based on a noninvasive acquisition technique.
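A rough sketch of the approach with simulated data: an ICA basis is learned from healthy trunk shapes, each acquisition is represented in that basis, and progression is flagged when two successive acquisitions differ by more than a threshold (the paper additionally uses a genetic algorithm to select components; the decision rule below is hypothetical):

```python
# ICA-based trunk representation and a hypothetical progression rule.
# Shapes are simulated flattened surface coordinates.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
healthy = rng.normal(size=(28, 300))        # 28 healthy trunk shapes, flattened
ica = FastICA(n_components=10, random_state=0).fit(healthy)

def progressed(acq1, acq2, threshold=2.0):
    # Compare the two acquisitions in the ICA component space.
    c1, c2 = ica.transform(acq1[None]), ica.transform(acq2[None])
    return np.linalg.norm(c2 - c1) > threshold   # hypothetical decision rule

visit1 = rng.normal(size=300)
visit2 = visit1 + rng.normal(0, 0.5, 300)   # simulated follow-up acquisition
print(progressed(visit1, visit2))
```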

Relevance:

100.00%

Publisher:

Abstract:

There are at present no baseline data on the nature of the various diseases that occur in an orchid population under cultivation in commercial orchid farms maintained by small-scale entrepreneurs, who invest considerable amounts of money, effort and time. Without data on the types of disease symptoms, the causative agents, the nature of the pathogens (bacteria, fungi or other biological agents) and their sources, appropriate and effective control measures cannot be devised for large-scale implementation and effective management, although arbitrary methods are practiced by a few farms. Furthermore, the influence of seasonal variations and environmental factors on disease outbreaks is neither scientifically documented nor statistically verified as to its authenticity. In this context, the primary objective of the present study was to create a data bank on the following aspects: 1. occurrence of different disease symptoms in a Dendrobium hybrid over a period of one year covering all seasons; 2. variations in the environmental parameters at the orchid farms; 3. variations in the characteristics of the water used for irrigation in the selected orchid farm; 4. microbial populations associated with the various disease symptoms; 5. isolation and identification of bacteria from diseased plants; 6. statistical treatment of the quantitative data and development of a statistical model.

Relevance:

100.00%

Publisher:

Abstract:

The study of variable stars is an important topic in modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and has various causes. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify the type of a variable star and classify it is for an expert to inspect the phased light curve visually. For several years now, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to the day-night cycle and weather conditions, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though variable star observation is not their primary intention. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum method (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detections can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state, “The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”. It would benefit the variable star community if basic parameters, such as period, amplitude and phase, could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the “General Catalogue of Variable Stars” or other databases such as the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
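As an example of the kind of automated period search discussed above, a minimal sketch using the Generalised Lomb-Scargle periodogram as implemented in astropy, on a simulated unevenly sampled light curve:

```python
# Automated period search on an unevenly sampled light curve with the
# Generalised Lomb-Scargle periodogram (astropy). The light curve is simulated.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(10)
t = np.sort(rng.uniform(0, 100, 400))            # uneven observation times (days)
true_period = 2.37
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, 400)

frequency, power = LombScargle(t, mag).autopower(minimum_frequency=0.01,
                                                 maximum_frequency=5.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.3f} d (true {true_period} d)")
# Candidate periods are typically vetted by inspecting the phased light curve.
```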