916 results for Sampling schemes
Abstract:
This paper introduces a novel method of estimating the Fourier transform of deterministic continuous-time signals from a finite number N of their nonuniformly spaced measurements. These samples, located at a mixture of deterministic and random time instants, are collected at sub-Nyquist rates, since no constraints are imposed on either the bandwidth or the spectral support of the processed signal. It is shown that the proposed estimation approach converges uniformly for all frequencies at the rate N^−5 or faster. This implies that it significantly outperforms its alias-free-sampling-based predecessors, namely the stratified and antithetical stratified estimates, which are shown to converge uniformly at a rate of N^−1. Simulations are presented to demonstrate the superior performance and low complexity of the introduced technique.
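For context, here is a minimal Python sketch (mine, not the paper's) of the classical stratified-sampling Fourier estimator that the abstract uses as the N^−1 baseline; the test signal and function names are illustrative:

    import numpy as np

    def stratified_ft_estimate(x, f, T, N, rng=None):
        """Stratified-sampling estimate of X(f) = integral_0^T x(t) e^{-j 2 pi f t} dt.

        [0, T] is split into N equal strata; one sampling instant is drawn
        uniformly inside each stratum (a nonuniform grid overall), and the
        integral is approximated by the sampled integrand values times the
        stratum width.
        """
        rng = np.random.default_rng(rng)
        edges = np.linspace(0.0, T, N + 1)
        t = edges[:-1] + (T / N) * rng.random(N)  # one random instant per stratum
        return (T / N) * np.sum(x(t) * np.exp(-2j * np.pi * f * t))

    # Example: a two-tone signal observed on [0, 1] from N = 200 samples.
    x = lambda t: np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
    print(stratified_ft_estimate(x, f=5.0, T=1.0, N=200))  # ~0.5 (half the tone amplitude)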
Abstract:
One of the most important measures for preventing wild forest fires is the use of prescribed and controlled burning, as it reduces the available fuel mass. The impact of these management activities on soil physical and chemical properties varies according to the type of both soil and vegetation. Decisions in forest management plans are often based on the results of soil-monitoring campaigns, which tend to be labor-intensive and expensive. In this paper we successfully apply the multivariate statistical technique Robust Principal Component Analysis (ROBPCA) to investigate the effectiveness of the sampling procedure under two different methodologies, in order to assess the possibility of simplifying and reducing the sample collection process and its auxiliary laboratory analysis work towards a cost-effective and competent forest soil characterization.
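ROBPCA itself ships with R packages such as rrcov; the Python sketch below is only a rough stand-in that captures the same idea, building outlier-resistant principal components from a Minimum Covariance Determinant estimate (the function name and toy data are mine, not the authors' pipeline):

    import numpy as np
    from sklearn.covariance import MinCovDet

    def robust_pca(X, n_components=2, random_state=0):
        """Outlier-resistant PCA via the MCD covariance estimate.

        Eigen-decomposes a robust covariance matrix instead of the sample
        covariance, so a few anomalous soil samples cannot dominate the axes.
        """
        mcd = MinCovDet(random_state=random_state).fit(X)
        eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
        order = np.argsort(eigvals)[::-1][:n_components]
        components = eigvecs[:, order]             # robust loading vectors
        scores = (X - mcd.location_) @ components  # robust PC scores
        return scores, components, eigvals[order]

    # Example: 50 soil samples, 6 measured properties, with 3 gross outliers.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 6))
    X[:3] += 15.0  # contaminated measurements
    scores, comps, variances = robust_pca(X)
    print(variances)  # robust variance explained per component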
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties and to generate new hypotheses for future experimentation. A good model and analysis of variations in soil properties, one that permits us to draw sound conclusions and estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected according to key factors such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land color, soil texture, land slope, and solar exposure. Obtaining good-quality data from forest soils is predictably expensive, as it is labor-intensive and demands considerable manpower and equipment both in field work and in laboratory analysis. Moreover, the sample collection scheme to be used in a forest field campaign is not simple to design, since the chosen sampling strategies depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil at all is found, or if large trees block the collection. Consequently, a proficient design of a soil sampling campaign in a forest field is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some soil physical-chemical properties. Two different sampling protocols were considered for monitoring two types of forest soils located in NW Portugal: umbric regosol and lithosol. Two different sampling tools were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results allow us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but a pre-defined grid assumption often fails when the variability of the soil property is not uniform in space. In that case, the sampling grid should be conveniently adapted from one part of the landscape to another, a fact that should be taken into account in the mathematical procedure.
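As a toy illustration of adapting sampling density to non-uniform variability (my sketch, not the authors' protocol), the snippet below distributes a fixed sampling budget across landscape strata in proportion to pilot-survey variability, in the spirit of Neyman allocation:

    import numpy as np

    def neyman_allocation(strata_sizes, strata_sds, n_total):
        """Allocate n_total soil samples across landscape strata.

        Each stratum h receives n_h proportional to N_h * S_h (Neyman
        allocation), so highly variable parts of the landscape are
        sampled more densely than homogeneous ones.
        """
        weights = np.asarray(strata_sizes, float) * np.asarray(strata_sds, float)
        n = np.floor(n_total * weights / weights.sum()).astype(int)
        n[np.argmax(weights)] += n_total - n.sum()  # assign the rounding remainder
        return n

    # Example: three strata (valley, slope, ridge) with pilot-survey SDs.
    print(neyman_allocation(strata_sizes=[40, 35, 25],
                            strata_sds=[0.8, 2.5, 1.2],
                            n_total=60))  # -> [12 36 12]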
Abstract:
Dissertation presented to obtain the Doutoramento (Ph.D.) degree in Biochemistry at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa
Abstract:
An assessment of sewage workers' exposure to airborne cultivable bacteria, fungi and inhaled endotoxins was performed at 11 sewage treatment plants. We sampled the enclosed and unenclosed treatment areas in each plant and evaluated the influence of season (summer and winter) on bioaerosol levels. We also measured workers' personal exposure to endotoxins during special operations where a higher risk of bioaerosol inhalation was assumed. Results show that only fungi are present in significantly higher concentrations in summer than in winter (2331 ± 858 versus 329 ± 95 CFU m⁻³). We also found significantly more bacteria in the enclosed area, near the particle grids for incoming water, than in the unenclosed area near the aeration basins (9455 ± 2661 versus 2435 ± 985 CFU m⁻³ in summer and 11 081 ± 2299 versus 2002 ± 839 CFU m⁻³ in winter). All bioaerosols were frequently above the recommended occupational exposure values. Workers carrying out special tasks such as cleaning tanks were exposed to very high levels of endotoxins (up to 500 EU m⁻³) compared to routine work. The species composition and concentration of airborne Gram-negative bacteria were also studied. A broad spectrum of species within the Pseudomonadaceae and Enterobacteriaceae families predominated in nearly all plants investigated.
Abstract:
Introduction/objectives: Multipatient use of a single-patient CBSD occurred in an outpatient clinic during 4 to 16 months before its notification. We looked for transmission of blood-borne pathogens among exposed patients.
Methods: Exposed patients underwent serology testing for HBV, HCV and HIV. Patients with isolated anti-HBc received one dose of hepatitis B vaccine to look for a memory immune response. Possible transmissions were investigated by mapping visits and sequencing the viral genome if needed.
Results: Of 280 exposed patients, 9 had died without suspicion of blood-borne infection, 3 could not be tested, and 5 declined investigations. Among the 263 (93%) tested patients, 218 (83%) had negative results. We confirmed a known history of HCV infection in 6 patients (1 co-infected with HIV), and also identified resolved HBV infection in 37 patients, of whom 18 were already known. 2 patients were found to have a previously unknown HCV infection. According to the time elapsed from the closest previous visit of an HCV-infected potential source patient, we could rule out nosocomial transmission in one case (14 weeks) but not in the other (1 day). In the latter, however, transmission was deemed very unlikely by 2 reference centers based on the sequences of the E1 and HVR1 regions of the virus.
Conclusion: We did not identify any transmission of blood-borne pathogens in the 263 patients exposed to a single-patient CBSD, despite the presence of potential source cases. Change of needle and disinfection of the device between patients may have contributed to this outcome. Although we cannot exclude transmission of HBV, previous acquisition in endemic countries is a more likely explanation in this multi-national population.
Abstract:
We conducted a molecular study of MRSA isolated in Swiss hospitals, including the first five consecutive isolates recovered from blood cultures and the first ten isolates recovered from other sites in newly identified carriers. Among 73 MRSA isolates, 44 different double locus sequence typing (DLST) types and 32 spa types were observed. Most isolates belonged to the New York/Japan, UK-EMRSA-15, South German and Berlin clones. In a country with low to moderate MRSA incidence, inclusion of non-invasive isolates allowed a more accurate description of the diversity.
Abstract:
All-electron partitioning of wave functions into products Ψ_core Ψ_val of core and valence parts in orbital space results in the loss of core-valence antisymmetry, uncorrelated motion of core and valence electrons, and core-valence overlap. These effects are studied with the variational Monte Carlo method using appropriately designed wave functions for the first-row atoms and positive ions. It is shown that the loss of antisymmetry with respect to interchange of core and valence electrons is the dominant effect, which increases rapidly through the row, while the effect of core-valence uncorrelation is generally smaller. Orthogonality of the core and valence parts partially substitutes for the exclusion principle and is absolutely necessary for meaningful calculations with partitioned wave functions. Core-valence overlap may lead to nonsensical values of the total energy. It has been found that even relatively crude core-valence partitioned wave functions can generally estimate ionization potentials with better accuracy than traditional, non-partitioned ones, provided that they achieve maximum separation (independence) of the core and valence shells accompanied by high internal flexibility of Ψ_core and Ψ_val. Our best core-valence partitioned wave function of that kind estimates the IPs with an accuracy comparable to the most accurate theoretical determinations in the literature.
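To make the partitioning explicit, a sketch of the ansatz in LaTeX (notation inferred from the abstract, not copied from the paper):

    % Partitioned ansatz: separate core and valence factors in orbital space;
    % antisymmetry under core-valence electron exchange is lost unless the
    % antisymmetrizer is reintroduced.
    \Psi_{\mathrm{part}}(\mathbf{r}_{\mathrm{c}},\mathbf{r}_{\mathrm{v}})
      = \Psi_{\mathrm{core}}(\mathbf{r}_{\mathrm{c}})\,
        \Psi_{\mathrm{val}}(\mathbf{r}_{\mathrm{v}})
    % Variational Monte Carlo energy of the partitioned trial function:
    E_{\mathrm{VMC}}
      = \frac{\langle \Psi_{\mathrm{part}}|\hat H|\Psi_{\mathrm{part}}\rangle}
             {\langle \Psi_{\mathrm{part}}|\Psi_{\mathrm{part}}\rangle}
    % Orbital-space orthogonality, the partial substitute for the exclusion
    % principle discussed above:
    \langle \phi_i^{\mathrm{core}} | \phi_j^{\mathrm{val}} \rangle = 0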
Abstract:
A new approach to treating large-Z systems by quantum Monte Carlo has been developed. It naturally leads to the notion of the 'valence energy'. The possibilities of the new approach have been explored by optimizing the wave functions for CuH and Cu and computing the dissociation energy and dipole moment of CuH using variational Monte Carlo. The dissociation energy obtained is about 40% smaller than the experimental value; the method is comparable with SCF and simple pseudopotential calculations. The dipole moment differs from the best theoretical estimate by about 50%, which is again comparable with other methods (complete active space SCF and pseudopotential methods).
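A plausible reading of the 'valence energy' bookkeeping, assuming the standard frozen-core convention rather than the paper's exact definition:

    % Valence energy as total minus frozen-core energy:
    E_{\mathrm{val}} = E_{\mathrm{tot}} - E_{\mathrm{core}}
    % Dissociation energy of CuH from valence energies, assuming the core
    % contribution cancels between molecule and fragments:
    D_e = E_{\mathrm{val}}(\mathrm{Cu}) + E_{\mathrm{val}}(\mathrm{H})
        - E_{\mathrm{val}}(\mathrm{CuH})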
Abstract:
The prediction of a protein's conformation helps us understand its exhibited functions, allows for modeling and allows for the possible synthesis of the studied protein. Our research focuses on a sub-problem of protein folding known as side-chain packing, whose computational complexity has been proven to be NP-hard. The motivation behind our study is to offer the scientific community a means of obtaining faster conformation approximations for small to large proteins than currently available methods. As the size of proteins increases, current techniques become unusable due to the exponential nature of the problem. We investigated the capabilities of a hybrid genetic algorithm / simulated annealing technique to predict the low-energy conformational states of proteins of various sizes and to generate statistical distributions of the studied proteins' molecular ensembles for pKa predictions. Our algorithm produced errors relative to experimental results within acceptable margins and offered considerable speed-up depending on the protein and on the resolution of the rotameric states used.
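A minimal sketch of one way to hybridize a genetic algorithm with simulated-annealing acceptance for rotamer assignment; the toy random energy tables and all parameter choices are illustrative, not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    N_RES, N_ROT = 30, 8                            # residues, rotamers per residue
    pair = rng.normal(size=(N_RES, N_RES, N_ROT, N_ROT))
    pair = (pair + pair.transpose(1, 0, 3, 2)) / 2  # symmetric pairwise energies

    def energy(conf):
        """Total pairwise rotamer-rotamer energy of an assignment."""
        e = 0.0
        for i in range(N_RES):
            for j in range(i + 1, N_RES):
                e += pair[i, j, conf[i], conf[j]]
        return e

    def crossover(a, b):
        """Uniform crossover: each residue inherits its rotamer from a or b."""
        mask = rng.random(N_RES) < 0.5
        return np.where(mask, a, b)

    def ga_sa(pop_size=40, generations=200, t0=2.0, cooling=0.98):
        pop = [rng.integers(0, N_ROT, N_RES) for _ in range(pop_size)]
        fit = [energy(c) for c in pop]
        temp = t0
        for _ in range(generations):
            # Tournament selection of two parents.
            i, j = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
            pa = pop[i[0]] if fit[i[0]] < fit[i[1]] else pop[i[1]]
            pb = pop[j[0]] if fit[j[0]] < fit[j[1]] else pop[j[1]]
            child = crossover(pa, pb)
            child[rng.integers(N_RES)] = rng.integers(N_ROT)  # point mutation
            e_child = energy(child)
            # SA-style replacement: a worse child may still replace the
            # current worst member with probability exp(-dE / T).
            worst = int(np.argmax(fit))
            d_e = e_child - fit[worst]
            if d_e < 0 or rng.random() < np.exp(-d_e / temp):
                pop[worst], fit[worst] = child, e_child
            temp *= cooling
        best = int(np.argmin(fit))
        return pop[best], fit[best]

    best_conf, best_e = ga_sa()
    print("best energy:", round(best_e, 2))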
Abstract:
We provide a theoretical framework to explain the empirical finding that estimated betas are sensitive to the sampling interval even when continuously compounded returns are used. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion, while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete-time representation of the beta depends on the sampling interval and on two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta", and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
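An illustrative simulation (my own parameterization, not the paper's) of the mechanism: a log price built from a permanent Brownian component plus a mean-reverting Ornstein-Uhlenbeck component produces OLS betas that drift as the sampling interval grows:

    import numpy as np

    rng = np.random.default_rng(42)
    dt, n_steps = 1 / 252, 252 * 40            # daily grid, 40 years
    kappa, sig_m, sig_u, rho = 25.0, 0.15, 0.10, 0.6

    # Market log price: pure Brownian (permanent) component.
    dW = rng.normal(0, np.sqrt(dt), n_steps)
    m = np.cumsum(sig_m * dW)

    # Stock log price: permanent loading on the market plus an OU
    # transitory component whose shocks correlate with the market.
    beta_perm = 1.0
    dB = rho * dW + np.sqrt(1 - rho**2) * rng.normal(0, np.sqrt(dt), n_steps)
    u = np.zeros(n_steps)
    for t in range(1, n_steps):
        u[t] = u[t - 1] - kappa * u[t - 1] * dt + sig_u * dB[t]
    s = beta_perm * m + u

    for h in (1, 5, 21, 63):                   # sampling interval in days
        rs = np.diff(s[::h])                   # continuously compounded returns
        rm = np.diff(m[::h])
        C = np.cov(rs, rm)
        print(f"interval {h:3d} days: estimated beta = {C[0, 1] / C[1, 1]:.3f}")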
Abstract:
Network survivability is a very interesting area of technical study as well as a critical concern in network design. Given that more and more data are carried over communication networks, a single failure can disrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using that available capacity. This thesis deals with the design of survivable optical networks that use protection schemes based on p-cycles. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths for the set of requests are defined a priori. Most existing work uses heuristics or solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problems than those already presented in the literature. On the other hand, thanks to the new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on column generation, a technique well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles. We first propose formulations for the master problem and the pricing problem, along with a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in reasonable time, than existing methods. Next, a more compact formulation is proposed for the pricing problem, and we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that find good solutions. We also carry out a systematic comparison between p-cycles and classical shared-protection schemes, performing a precise comparison using unified, column-generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks in the presence of availability requirements and obtain the first lower bounds for this problem.
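A toy sketch of the column generation loop on a small instance (assumptions: SciPy and NetworkX >= 3.1 available; pricing is simplified to scanning a precomputed cycle pool rather than solving the thesis's pricing problem, and link protection by unit-capacity p-cycles stands in for the full path-protection model):

    import numpy as np
    import networkx as nx
    from scipy.optimize import linprog

    # Toy instance: a 5-node ring with two chords; every edge carries one
    # unit of working capacity that the chosen p-cycles must protect.
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)])
    edges = list(G.edges)
    m = len(edges)
    demand = np.ones(m)
    BIG_M = 100.0  # cost of artificial slack columns (keeps the RMP feasible)

    def coverage(cycle):
        """Protection offered per unit p-cycle: 1 for on-cycle edges,
        2 for straddling edges (both endpoints on the cycle), else 0."""
        nodes = set(cycle)
        on = {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
              for i in range(len(cycle))}
        return np.array([1.0 if frozenset(e) in on
                         else 2.0 if e[0] in nodes and e[1] in nodes
                         else 0.0 for e in edges])

    # Candidate pool standing in for the pricing subproblem.
    pool = [c for c in nx.simple_cycles(G) if len(c) >= 3]

    columns, costs = [], []          # generated cycles and their lengths
    while True:
        # Restricted master: min cycle length + M * slack, coverage >= demand.
        n = len(columns)
        A = np.hstack([np.column_stack([coverage(c) for c in columns]),
                       np.eye(m)]) if n else np.eye(m)
        cost_vec = np.array(costs + [BIG_M] * m)
        res = linprog(cost_vec, A_ub=-A, b_ub=-demand, method="highs")
        duals = -res.ineqlin.marginals  # duals of the coverage constraints
        # Pricing: most negative reduced cost len(c) - duals . coverage(c).
        red = [(len(c) - duals @ coverage(c), c) for c in pool]
        best_rc, best_cycle = min(red, key=lambda t: t[0])
        if best_rc >= -1e-9:
            break
        columns.append(best_cycle)
        costs.append(float(len(best_cycle)))
        pool.remove(best_cycle)

    print("spare capacity (LP bound):", res.fun)
    for c, x in zip(columns, res.x[:len(columns)]):
        if x > 1e-6:
            print("p-cycle", c, "copies:", round(x, 3))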
Abstract:
Financial assets are often modeled by stochastic differential equations (SDEs). These equations can describe the behavior of the asset and sometimes of certain model parameters as well. For example, the Heston (1993) model, which belongs to the class of stochastic volatility models, describes the behavior of the asset and of its variance. The Heston model is very attractive because it admits semi-analytical formulas for certain derivatives, as well as a degree of realism. However, most simulation algorithms for this model run into problems when the Feller (1951) condition is not satisfied. In this thesis, we introduce three new simulation algorithms for the Heston model. These new algorithms aim to accelerate the well-known Broadie and Kaya (2006) algorithm; to that end, we use, among other tools, Markov chain Monte Carlo (MCMC) methods and approximations. In the first algorithm, we modify the second step of the Broadie-Kaya method in order to accelerate it: instead of using the second-order Newton method and the inversion approach, we use the Metropolis-Hastings algorithm (see Hastings (1970)). The second algorithm is an improvement of the first: instead of using the true density of the integrated variance, we use the Smith (2007) approximation. This improvement reduces the dimension of the characteristic function and speeds up the algorithm. Our last algorithm is not based on an MCMC method, although we still seek to accelerate the second step of the Broadie and Kaya (2006) method. To achieve this, we use a gamma random variable whose moments are matched to those of the true time-integrated variance random variable. According to Stewart et al. (2007), it is possible to approximate a convolution of gamma random variables (which closely resembles the representation given by Glasserman and Kim (2008) when the time step is small) by a single gamma random variable.
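A sketch of the gamma moment-matching idea in step 2 of a Broadie-Kaya-style step (the conditional-moment proxies below are my crude assumptions, not the thesis's matched moments; the variance endpoint is sampled exactly via the noncentral chi-square CIR transition):

    import numpy as np
    from scipy.stats import ncx2

    rng = np.random.default_rng(1)

    def cir_step_exact(v, kappa, theta, sigma, dt):
        """Exact CIR variance transition via the noncentral chi-square law."""
        c = sigma**2 * (1 - np.exp(-kappa * dt)) / (4 * kappa)
        df = 4 * kappa * theta / sigma**2
        nc = v * np.exp(-kappa * dt) / c
        return c * ncx2.rvs(df, nc, random_state=rng)

    def gamma_moment_matched(mean, var):
        """Gamma draw with the given first two moments (shape-scale form)."""
        return rng.gamma(mean**2 / var, var / mean)

    def heston_step(s, v, r, kappa, theta, sigma, rho, dt):
        """One Broadie-Kaya-style step where the integrated-variance draw is
        replaced by a moment-matched gamma.  The conditional mean uses a
        trapezoidal proxy and the variance a small-time O(dt^3) proxy --
        both illustrative assumptions."""
        v_next = cir_step_exact(v, kappa, theta, sigma, dt)
        iv_mean = 0.5 * (v + v_next) * dt              # trapezoidal proxy
        iv_var = sigma**2 * iv_mean * dt**2 / 3        # rough variance proxy
        iv = gamma_moment_matched(iv_mean, iv_var)
        # Exact Broadie-Kaya log-price update given (v, v_next, iv).
        mu = (np.log(s) + r * dt - 0.5 * iv
              + rho / sigma * (v_next - v - kappa * theta * dt + kappa * iv))
        return np.exp(mu + np.sqrt((1 - rho**2) * iv) * rng.normal()), v_next

    # Example: one year of weekly steps under a Feller-violating parameter set.
    s, v = 100.0, 0.04
    for _ in range(52):
        s, v = heston_step(s, v, r=0.03, kappa=0.5, theta=0.04,
                           sigma=1.0, rho=-0.9, dt=1 / 52)
    print("terminal price:", round(s, 2))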