845 results for Interval Variable
Abstract:
We show how to construct a topological Markov map of the interval whose invariant probability measure is the stationary law of a given stochastic chain of infinite order. In particular we characterize the maps corresponding to stochastic chains with memory of variable length. The problem treated here is the converse of the classical construction of the Gibbs formalism for Markov expanding maps of the interval.
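As a classical, much simpler instance of the map-chain correspondence that this abstract inverts (not the paper's construction), the doubling map realizes a memoryless stationary chain:

```latex
% Textbook example: a topological Markov map of the interval whose
% invariant measure is the stationary law of a (zero-memory) chain.
\[
  T(x) = 2x \bmod 1, \qquad x \in [0,1).
\]
% Lebesgue measure is $T$-invariant, and under it the binary digits
% $a_n(x) = \lfloor 2^n x \rfloor \bmod 2$ form an i.i.d. Bernoulli(1/2)
% chain: the zero-memory special case of the chains with memory of
% variable length characterized in the paper.
```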
Abstract:
Negative-ion mode electrospray ionization, ESI(−), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(−)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M − H]⁻ ions, which are identified primarily as naphthenic acids, phenols, and carbazole-analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg KOH g⁻¹. To facilitate the spectral interpretation, three variable selection methods were studied: variable importance in the projection (VIP), interval partial least squares (iPLS), and elimination of uninformative variables (UVE). Of the three, the UVE method was the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg KOH g⁻¹. By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
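As a sketch of the first of the three selection methods named (VIP), the scores can be computed from a fitted scikit-learn PLS model using the standard formula; whether this matches the authors' exact implementation is an assumption, and the UVE and iPLS steps are not shown:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in the projection (VIP) for a fitted PLSRegression."""
    W = pls.x_weights_    # (n_features, n_components)
    T = pls.x_scores_     # (n_samples, n_components)
    Q = pls.y_loadings_   # (n_targets, n_components)
    p = W.shape[0]
    # y-variance explained by each latent component
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)
    w2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (w2 @ ss) / ss.sum())

# Hypothetical usage on a (samples x m/z-features) matrix X and TAN vector y:
# pls = PLSRegression(n_components=5).fit(X, y)
# keep = vip_scores(pls) > 1.0   # common rule of thumb: retain VIP > 1
```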
Abstract:
We report the discovery with XMM-Newton of hard thermal (T ∼ 130 MK) and variable X-ray emission from the Be star HD 157832, a new member of the puzzling class of gamma-Cas-like Be/X-ray systems. Recent optical spectroscopy reveals the presence of a large, dense circumstellar disk seen at intermediate-to-high inclination. With a B1.5V spectral type, HD 157832 is the coolest gamma-Cas analog known. In addition, its non-detection in the ROSAT all-sky survey shows that its average soft X-ray luminosity varied by a factor larger than ∼3 over a time interval of 14 yr. These two remarkable features, a "low" effective temperature and likely high X-ray variability, make HD 157832 a promising object for understanding the origin of the unusually high-temperature X-ray emission in these systems.
Abstract:
Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
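For concreteness, one hypothetical member of this class pays a fixed bonus when the realization lands in the reported interval, minus a width penalty; the functional form and parameters below are illustrative assumptions, not the specific rule proposed in the paper:

```python
def interval_score(lower, upper, x, a=1.0, b=0.5):
    """A deterministic interval scoring rule: the reward depends only on the
    reported interval and the realization x. A hit pays bonus a; interval
    width is penalized at rate b, trading coverage against precision."""
    hit = 1.0 if lower <= x <= upper else 0.0
    return a * hit - b * (upper - lower)

# A narrow, accurate report beats an uninformative wide one:
print(interval_score(4.0, 6.0, 5.2))   # 1.0 - 0.5*2  =  0.0
print(interval_score(0.0, 10.0, 5.2))  # 1.0 - 0.5*10 = -4.0
```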
Abstract:
BACKGROUND: Aminoglycosides are mandatory in the treatment of severe infections in burns. However, their pharmacokinetics are difficult to predict in critically ill patients. Our objective was to describe the pharmacokinetic parameters of high doses of tobramycin administered at extended intervals in severely burned patients. METHODS: We prospectively enrolled 23 burned patients receiving tobramycin in combination therapy for Pseudomonas species infections in a burn ICU over 2 years in a therapeutic drug monitoring program. Trough and post-peak tobramycin levels were measured to adjust drug dosage. Pharmacokinetic parameters were derived from two-point first-order kinetics. RESULTS: Tobramycin peak concentration was 7.4 (3.1–19.6) µg/ml and the Cmax/MIC ratio 14.8 (2.8–39.2). Half-life was 6.9 (range 1.8–24.6) h with a distribution volume of 0.4 (0.2–1.0) l/kg. Clearance was 35 (14–121) ml/min and was weakly but significantly correlated with creatinine clearance. CONCLUSION: Tobramycin had a normal clearance but an increased volume of distribution and a prolonged half-life in burned patients. However, the pharmacokinetic parameters of tobramycin are highly variable in burned patients. These data support extended-interval administration and strongly suggest that aminoglycosides should only be used within a structured pharmacokinetic monitoring program.
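The "two-point first-order kinetics" step amounts to fitting a log-linear decay through the two measured levels; a minimal sketch with standard formulas (the helper and the numbers are illustrative, not the study's data):

```python
import math

def two_point_pk(c1, t1, c2, t2):
    """Elimination rate constant and half-life from two post-distribution
    levels, assuming mono-exponential (first-order) decay between samples."""
    ke = math.log(c1 / c2) / (t2 - t1)  # elimination rate constant, h^-1
    t_half = math.log(2) / ke           # half-life, h
    return ke, t_half

# e.g. a peak of 7.4 ug/ml at 1 h falling to 3.7 ug/ml at 8 h -> t1/2 = 7 h
print(two_point_pk(7.4, 1.0, 3.7, 8.0))
```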
Abstract:
We describe an improved multiple-locus variable-number tandem-repeat (VNTR) analysis (MLVA) scheme for genotyping Staphylococcus aureus. We compare its performance to those of multilocus sequence typing (MLST) and spa typing in a survey of 309 strains. This collection includes 87 epidemic methicillin-resistant S. aureus (MRSA) strains of the Harmony collection, 75 clinical strains representing the major MLST clonal complexes (CCs) (50 methicillin-sensitive S. aureus [MSSA] and 25 MRSA), 135 nasal carriage strains (133 MSSA and 2 MRSA), and 13 published S. aureus genome sequences. The results show excellent concordance between the techniques and demonstrate that the discriminatory power of MLVA is higher than that of both MLST and spa typing. Two hundred forty-two genotypes are discriminated with 14 VNTR loci (diversity index, 0.9965; 95% confidence interval, 0.9947 to 0.9984). Using a cutoff value of 45%, 21 clusters are observed, corresponding to the CCs previously defined by MLST. The variability of the different tandem repeats allows epidemiological studies, as well as follow-up of the evolution of CCs and the identification of potential ancestors. The 14 loci can conveniently be analyzed in two steps, based upon a first-line simplified assay comprising a subset of 10 loci (panel 1) and a second subset of 4 loci (panel 2) that provides higher resolution when needed. In conclusion, the MLVA scheme proposed here, in combination with available online genotyping databases (including http://mlva.u-psud.fr/), multiplexing, and automatic sizing, can provide a basis for almost-real-time, large-scale population monitoring of S. aureus.
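The diversity index quoted above is the Simpson-type index of Hunter and Gaston (1988) commonly used to compare typing schemes; a short sketch of its computation on a toy genotype list (not the study's data):

```python
from collections import Counter

def hunter_gaston_di(genotypes):
    """Discriminatory power of a typing scheme: the probability that two
    strains drawn at random receive different types (Hunter & Gaston 1988)."""
    counts = Counter(genotypes).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy example: 5 strains, 4 genotypes
print(hunter_gaston_di(["A", "A", "B", "C", "D"]))  # 1 - 2/20 = 0.9
```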
Abstract:
The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs across different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) underwent ECG examination, and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the different formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22–0.23 s. Owing to its simplicity and reliability, QTcV was considered the most appropriate correction of the QT interval in dogs.
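A minimal sketch of the selected correction, taken directly from the formula quoted above (seconds throughout; the example dog is hypothetical):

```python
def qtc_van_de_water(qt_s, rr_s):
    """Heart-rate-corrected QT via the linear formula from the abstract:
    QTcV = QT + 0.087 * (1 - RR), with QT and RR in seconds."""
    return qt_s + 0.087 * (1.0 - rr_s)

# A dog at 120 bpm has RR = 60/120 = 0.5 s; with QT = 0.20 s:
print(qtc_van_de_water(0.20, 0.5))  # ~0.2435 s
```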
Abstract:
The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, so one way to identify and classify a variable star is for an expert to visually inspect its phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, together with some derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to the daily cycle of daylight and to weather conditions, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary purpose is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum method (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detections can arise for several reasons, such as power leakage to other frequencies caused by the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
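As a minimal illustration of the periodogram step discussed above (the four methods studied in the thesis and the modified cubic spline are not reproduced here), the sketch below recovers a period from a synthetic, unevenly sampled light curve with astropy's Lomb-Scargle implementation and folds the data on it:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic unevenly sampled light curve: period 0.6 d, small Gaussian noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 300))  # observation times, days
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.6) + rng.normal(0, 0.02, t.size)

# Lomb-Scargle periodogram (one of the parametric, Fourier-based methods)
freq, power = LombScargle(t, mag).autopower()
period = 1.0 / freq[np.argmax(power)]

# Folding on the recovered period gives the phased light curve
phase = (t / period) % 1.0
print(f"recovered period: {period:.4f} d")  # ~0.6000
```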
Abstract:
Quantitative mathematical models are simplifications of reality, and therefore the behaviour obtained by simulating these models differs from the real one. Using complex quantitative models is not a solution because in most cases there is some uncertainty in the real system that cannot be represented with such models. One way to represent this uncertainty is by means of qualitative or semi-qualitative models. A model of this kind in fact represents a set of models. Simulating the behaviour of quantitative models generates a trajectory over time for each output variable. This cannot be the result of simulating a set of models. One way to represent the behaviour in this case is by means of envelopes. The exact envelope is complete, i.e., it includes all possible behaviours of the model, and sound, i.e., every point inside the envelope belongs to the output of at least one instance of the model. Generating such an envelope is usually a very hard task that can be tackled, for example, by means of global optimisation algorithms or consistency checking. For this reason, approximations to the exact envelope are obtained in many cases. A complete but not sound approximation of the exact envelope is an overbounded envelope, whereas a sound but not complete one is underbounded. These properties have been studied for different simulators for uncertain systems.
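One cheap way to approximate an envelope is to sample the uncertain parameter over its interval and take pointwise extremes of the resulting trajectories; as noted in the comments, this produces an underbounded (sound but possibly incomplete) envelope, since sampling can miss extremal behaviours. The toy model and names below are assumptions:

```python
import numpy as np

def sampled_envelope(model, param_range, t, n_samples=200):
    """Monte Carlo approximation of an output envelope: every returned point
    is achieved by some model instance (sound), but behaviours between the
    sampled parameter values may be missed (not complete)."""
    lo, hi = param_range
    runs = np.array([model(p, t) for p in np.linspace(lo, hi, n_samples)])
    return runs.min(axis=0), runs.max(axis=0)

# Hypothetical uncertain first-order decay x(t) = exp(-a*t), a in [0.5, 1.5]
t = np.linspace(0.0, 5.0, 101)
lower, upper = sampled_envelope(lambda a, tt: np.exp(-a * tt), (0.5, 1.5), t)
```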
Abstract:
This work develops a new methodology to discriminate between models for interval-censored data based on bootstrap residual simulation, by observing the deviance difference of one model in relation to another, according to Hinde (1992). Generally, this sort of data can generate a large number of tied observations, and in this case survival time can be regarded as discrete. Therefore, the Cox proportional hazards model for grouped data (Prentice & Gloeckler, 1978) and the logistic model (Lawless, 1982) can be fitted by means of generalized linear models. Whitehead (1989) considered censoring to be an indicator variable with a binomial distribution and fitted the Cox proportional hazards model using the complementary log-log as a link function. In addition, a logistic model can be fitted using the logit as a link function. The proposed methodology arises as an alternative to the score tests developed by Colosimo et al. (2000), in which such models can be obtained for discrete binary data as particular cases of the Aranda-Ordaz asymmetric family; those tests are thus built on the link functions used to generate the fits. The example that motivates this study was the dataset from an experiment carried out on a flax cultivar planted on four substrata susceptible to the pathogen Fusarium oxysporum. The response variable, the time until blighting, was observed at intervals over 52 days. The results were compared with respect to model fit and AIC values.
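The GLM route the abstract describes can be sketched with statsmodels: a binomial fit with a complementary log-log link gives the grouped-data Cox model, and swapping in the logit link gives the logistic competitor (the link class is spelled cloglog in older statsmodels versions). The simulated person-interval data below are a stand-in for the flax dataset, and treating the interval index as a linear term is a simplification:

```python
import numpy as np
import statsmodels.api as sm

# Simulated person-interval records: y = 1 if blighting occurred in the interval
rng = np.random.default_rng(0)
n = 400
substratum = rng.integers(0, 4, n)   # four substrata, coded 0..3
interval = rng.integers(0, 5, n)     # observation-interval index
lin = -2.0 + 0.4 * substratum + 0.3 * interval
y = rng.binomial(1, 1.0 - np.exp(-np.exp(lin)))   # cloglog inverse link
X = sm.add_constant(np.column_stack([substratum, interval]).astype(float))

# Grouped-data Cox PH (Prentice & Gloeckler, 1978): cloglog link
cox = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.CLogLog())).fit()
# Logistic model (Lawless, 1982): logit link
logistic = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Logit())).fit()
print(cox.aic, logistic.aic)  # compare the fits, as in the study
```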
Abstract:
A Fortran computer program is given for the computation of the adjusted average time to signal (AATS) for adaptive X̄ charts with one, two, or all three design parameters variable: the sample size n, the sampling interval h, and the factor k used in determining the width of the action limits. The program calculates the threshold limit used to switch the adaptive design parameters and also provides the in-control average time to signal (ATS).
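The AATS of the adaptive chart requires a Markov-chain model of the parameter-switching rule and is not reproduced here; as a point of reference, the in-control ATS of a fixed-parameter X̄ chart follows directly from the per-sample false-alarm probability (a sketch under the standard normality assumption):

```python
from statistics import NormalDist

def in_control_ats(h, k):
    """In-control average time to signal of a fixed-parameter X-bar chart:
    alpha = P(|Z| > k) per sample, ARL0 = 1/alpha, ATS0 = ARL0 * h."""
    alpha = 2.0 * (1.0 - NormalDist().cdf(k))
    return h / alpha

print(in_control_ats(h=1.0, k=3.0))  # ~370.4 time units of h for 3-sigma limits
```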