995 resultados para Small perturbations


Relevância:

60.00% 60.00%

Publicador:

Resumo:

The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows in oceans, rivers, and the atmosphere, conserving mass and momentum. We distinguish two characteristic speeds: the advection speed, i.e., the speed of mass transport, and the gravity-wave speed, i.e., the speed of the surface waves that carry energy and momentum. The Froude number is a dimensionless quantity given by the ratio of the reference advection speed to the reference gravity-wave speed. For the applications above it is typically very small, e.g., 0.01. Time-explicit finite volume methods are the schemes most commonly used for the numerical solution of hyperbolic balance laws. They must satisfy the CFL stability condition, so the time step is roughly proportional to the Froude number. For small Froude numbers, say below 0.2, this leads to high computational cost; moreover, the numerical solutions are dissipative. It is well known that, under suitable conditions, the solutions of the SWE converge to the solutions of the lake equations (zero-Froude-number SWE) as the Froude number tends to zero. In this limit the equations change type from hyperbolic to hyperbolic-elliptic. Furthermore, at small Froude numbers the order of convergence may degrade, or the numerical scheme may break down. In particular, time-explicit schemes have been observed to exhibit incorrect asymptotic behaviour (with respect to the Froude number), which may cause these effects. Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. We want numerical schemes for balance laws to preserve certain equilibrium states exactly; otherwise the scheme may generate spurious flows. The source term approximation is therefore essential. Numerical schemes that preserve equilibrium states are called well-balanced.

In this thesis, we split the SWE into a stiff linear part and a non-stiff part in order to circumvent the severe time step restriction imposed by the CFL condition. The stiff part is approximated implicitly, the non-stiff part explicitly. To this end we use IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization uses the finite volume method. The stiff part is approximated either by finite differences or in a genuinely multidimensional manner. For the multidimensional approximation we use approximate evolution operators that take all infinitely many directions of information propagation into account. The explicit terms are approximated with standard numerical fluxes. We thus obtain a stability condition analogous to that of a purely advective flow, i.e., the time step grows by a factor equal to the reciprocal of the Froude number. The schemes derived in this thesis are asymptotic preserving and well-balanced. The asymptotic preserving property ensures that the numerical solution exhibits the "correct" asymptotic behaviour with respect to small Froude numbers. We present first- and second-order schemes. Numerical results confirm the order of convergence, as well as stability, well-balancedness, and asymptotic preservation. In particular, for some schemes we observe that the order of convergence is almost independent of the Froude number.
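The IMEX idea described here can be illustrated on a scalar model problem. Below is a minimal sketch (Python, with illustrative names; the thesis treats the full SWE, not this toy equation) of a first-order IMEX Euler step in which a stiff linear term is handled implicitly and a non-stiff term explicitly:

```python
import numpy as np

def imex_euler(y0, eps, dt, n_steps):
    """First-order IMEX Euler for y' = -y/eps + sin(t):
    the stiff linear term -y/eps is treated implicitly,
    the non-stiff forcing sin(t) explicitly."""
    y, t = y0, 0.0
    for _ in range(n_steps):
        # explicit part: non-stiff forcing evaluated at the old time level
        rhs = y + dt * np.sin(t)
        # implicit part: solve (1 + dt/eps) * y_new = rhs
        y = rhs / (1.0 + dt / eps)
        t += dt
    return y

# remains stable even when dt >> eps, unlike fully explicit Euler
print(imex_euler(1.0, 1e-4, 0.1, 100))
```

Because only the non-stiff dynamics constrain the time step, this mirrors the advective-type stability condition obtained for the schemes in the thesis.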

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Criminals are common to all societies. To fight against them, the community takes different security measures such as, for example, establishing a police force. Thus, crime causes a depletion of the common wealth, not only through criminal acts but also because of the cost of hiring a police force. In this paper, we present a mathematical model of a criminal-prone self-protected society that is divided into socio-economic classes. We study the effect of a non-null crime rate on a free-of-criminals society, which is taken as a reference system. As a consequence, we define a criminal-prone society as one whose free-of-criminals steady state is unstable under small perturbations of a certain socio-economic context. Finally, we compare two alternative strategies to control crime: (i) enhancing police efficiency, either by enlarging its size or by updating its technology, versus (ii) either reducing criminal appeal or promoting social classes at risk.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Natural ecosystems contain many individuals and species interacting with each other and with their abiotic environment. Such systems can be expected to exhibit complex dynamics in which small perturbations can be amplified to cause large changes. Here, we document the reorganization of an arid ecosystem that has occurred since the late 1970s. The density of woody shrubs increased 3-fold. Several previously common animal species went locally extinct, while other previously rare species increased. While these changes are symptomatic of desertification, they were not caused by livestock grazing or drought, the principal causes of historical desertification. The changes apparently were caused by a shift in regional climate: since 1977 winter precipitation throughout the region was substantially higher than average for this century. These changes illustrate the kinds of large, unexpected responses of complex natural ecosystems that can occur in response to both natural perturbations and human activities.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Through the use of site-directed mutagenesis and chemical rescue, we have identified the proton acceptor for redox-active tyrosine D in photosystem II (PSII). Effects of chemical rescue on the tyrosyl radical were monitored by EPR spectroscopy. We also have acquired the Fourier–transform infrared (FT-IR) spectrum associated with the oxidation of tyrosine D and concomitant protonation of the acceptor. Mutant and isotopically labeled PSII samples are used to assign vibrational lines in the 3,600–3,100 cm−1 region to N-H modes of His-189 in the D2 polypeptide. When His-189 in D2 is changed to a leucine (HL189D2) in PSII, dramatic alterations of both EPR and FT-IR spectra are observed. When imidazole is introduced into HL189D2 samples, results from both EPR and FT-IR spectroscopy argue that imidazole is functionally reconstituted into an accessible pocket and that imidazole acts as a chemical mimic for His-189. Small perturbations of EPR and FT-IR spectra are consistent with access to this pocket in wild-type PSII, as well. Structures of the analogous site in bacterial reaction centers suggest that an accessible pocket, large enough to contain imidazole, is bordered by tyrosine D and His-189 in the D2 polypeptide. These data provide evidence that His-189 in the D2 polypeptide of PSII acts as a proton acceptor for redox-active tyrosine D and that proton transfer to the imidazole ring facilitates the efficient oxidation/reduction of tyrosine D.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

While the elegance and efficiency of enzymatic catalysis have long tempted chemists and biochemists with reductionist leanings to try to mimic the functions of natural enzymes in much smaller peptides, such efforts have only rarely produced catalysts with biologically interesting properties. However, the advent of genetic engineering and hybridoma technology and the discovery of catalytic RNA have led to new and very promising alternative means of biocatalyst development. Synthetic chemists have also had some success in creating nonpeptide catalysts with certain enzyme-like characteristics, although their rates and specificities are generally much poorer than those exhibited by the best novel biocatalysts based on natural structures. A comparison of the various approaches from theoretical and practical viewpoints is presented. It is suggested that, given our current level of understanding, the most fruitful methods may incorporate both iterative selection strategies and rationally chosen small perturbations, superimposed on frameworks designed by nature.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

In this work, we propose a new family of distributions that allows modelling survival data when the hazard function has unimodal and U (bathtub) shapes. We also consider modifications of the Weibull, Fréchet, generalized half-normal, log-logistic, and lognormal distributions. For uncensored and censored data, maximum likelihood estimators are considered for the proposed model in order to verify the flexibility of the new family. In addition, a location-scale regression model is used to assess the influence of covariates on survival times. A residual analysis based on modified deviance residuals is also conducted. Simulation studies, using different parameter settings, censoring percentages, and sample sizes, are carried out in order to examine the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, local influence measures are used, i.e., diagnostic measures based on small perturbations of the data or of the proposed model. Situations may occur in which the assumption of independence between failure and censoring times is not valid. Thus, another objective of this work is to consider an informative censoring mechanism, based on the marginal likelihood, using the log-odd log-logistic Weibull distribution in the modelling. Finally, the described methodologies are applied to real data sets.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

This paper is intended to provide conditions for the stability of the strong uniqueness of the optimal solution of a given linear semi-infinite optimization (LSIO) problem, in the sense of maintaining the strong uniqueness property under sufficiently small perturbations of all the data. We consider LSIO problems such that the family of gradients of all the constraints is unbounded, extending earlier results of Nürnberger for continuous LSIO problems, and of Helbig and Todorov for LSIO problems with bounded set of gradients. To do this we characterize the absolutely (affinely) stable problems, i.e., those LSIO problems whose feasible set (its affine hull, respectively) remains constant under sufficiently small perturbations.
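For orientation, a linear semi-infinite optimization problem of the kind considered can be stated as follows (generic notation, not necessarily the authors' own):

```latex
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{subject to} \quad a_t^{\top} x \ge b_t \;\; \text{for all } t \in T,
```

where the index set $T$ may be infinite; the "family of gradients of all the constraints" is $\{a_t : t \in T\}$, which here is allowed to be unbounded.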

Relevância:

60.00% 60.00%

Publicador:

Resumo:

We have recently proposed the framework of independent blind source separation as an advantageous approach to steganography. Amongst the several characteristics noted was a sensitivity of message reconstruction to small perturbations in the sources. This characteristic is not common in most other approaches to steganography. In this paper we discuss how this sensitivity relates to the joint diagonalisation inside the independent component approach and to the reliance on exact knowledge of secret information, and how it can be used as an additional, inherent security mechanism against malicious attempts to discover the hidden messages. The paper therefore provides an enhanced mechanism that can be used for e-document forensic analysis and can be applied to digital data media of different dimensionality. Here we use a low-dimensional example of biomedical time series, as might occur in the electronic patient health record, where protection of private patient information is paramount.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

It is well established that hydrodynamic journal bearings are responsible for self-excited vibrations and have the effect of lowering the critical speeds of rotor systems. The forces within the oil film wedge, generated by the vibrating journal, may be represented by displacement and velocity coefficients, thus allowing the dynamical behaviour of the rotor to be analysed both for stability purposes and for anticipating the response to unbalance. However, information describing these coefficients is sparse, misleading, and very often not applicable to industrial type bearings. Results of a combined analytical and experimental investigation into the hydrodynamic oil film coefficients operating in the laminar region are therefore presented, the analysis being applied to a 120 degree partial journal bearing having a 5.0 in diameter journal and an L/D ratio of 1.0. The theoretical analysis shows that for this type of popular bearing, the eight linearized coefficients do not accurately describe the behaviour of the vibrating journal based on the theory of small perturbations, because they are masked by the presence of nonlinearity. A method is developed using the second-order terms of a Taylor expansion, whereby design charts are provided that predict the twenty-eight force coefficients both for aligned journals and for varying amounts of journal misalignment. The resulting non-linear equations of motion are solved using a modified Newton-Raphson method, whereby the whirl trajectories are obtained, thus providing a physical appreciation of the bearing characteristics under dynamically loaded conditions.
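The count of twenty-eight coefficients follows from extending the familiar eight-coefficient linearization to second order. In generic notation (not the thesis's own symbols), each film force component, as a function of the journal displacements and velocities $(x, y, \dot{x}, \dot{y})$, is expanded as:

```latex
F_x \approx F_{x0}
+ \frac{\partial F_x}{\partial x}\,x + \frac{\partial F_x}{\partial y}\,y
+ \frac{\partial F_x}{\partial \dot{x}}\,\dot{x} + \frac{\partial F_x}{\partial \dot{y}}\,\dot{y}
+ \frac{1}{2}\sum_{u,v \,\in\, \{x,\,y,\,\dot{x},\,\dot{y}\}}
  \frac{\partial^2 F_x}{\partial u\,\partial v}\,u\,v ,
```

with an analogous expansion for $F_y$. The four first derivatives per component give the eight linear coefficients, and the ten distinct second derivatives per component add a further twenty, for twenty-eight in total.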

Relevância:

60.00% 60.00%

Publicador:

Resumo:

2000 Mathematics Subject Classification: 60J45, 60K25

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and thereby mitigate overfitting. Two approaches to preventing overfitting are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage of the proposed approaches when the training set is small.
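A minimal sketch of the ingredients just mentioned, a graph adjacency matrix over the data and the associated Laplacian smoothness penalty (numpy only; all names are illustrative, and the dissertation's actual regularizers and alignment procedure may differ):

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric k-nearest-neighbour adjacency matrix for the rows of X,
    a simple proxy for local neighborhoods on the manifold."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    A = np.zeros_like(d)
    for i, nbrs in enumerate(np.argsort(d, axis=1)[:, :k]):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)            # symmetrize

def smoothness_penalty(F, A):
    """Graph-Laplacian penalty tr(F^T L F) = 0.5 * sum_ij A_ij ||f_i - f_j||^2:
    small when the feature rows of F vary smoothly over neighbouring points."""
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(F.T @ L @ F)

X = np.random.default_rng(0).standard_normal((20, 2))
A = knn_adjacency(X, k=3)
F_smooth = np.ones((20, 4))              # constant features: maximally smooth
print(smoothness_penalty(F_smooth, A))   # 0.0 for constant features
```

Adding such a penalty to a training loss encourages decisions that vary smoothly across neighboring points; aligning features with the leading eigenvectors of the adjacency matrix is the complementary, capacity-controlling variant.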

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Coprime and nested sampling are well known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations perfectly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher Information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of WSS signals, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples.
The theoretical claims are supported by extensive numerical experiments.
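The $O(M^2)$ degrees of freedom come from the difference co-array, whose distinct lags far outnumber the physical sensors. A minimal sketch, assuming the common "extended" coprime configuration ($M$ sensors at multiples of $N$ plus $2N$ sensors at multiples of $M$); the thesis's exact array geometry may differ:

```python
import numpy as np

def coprime_difference_coarray(M, N):
    """Difference co-array of a coprime pair (M, N): sensors at multiples
    of N (M of them) and multiples of M (2N of them, in the 'extended'
    variant). Returns the sensor positions and the set of distinct lags."""
    positions = np.union1d(N * np.arange(M), M * np.arange(2 * N))
    lags = np.unique((positions[:, None] - positions[None, :]).ravel())
    return positions, lags

positions, lags = coprime_difference_coarray(M=4, N=5)
print(len(positions), len(lags))  # far more distinct lags than physical sensors
```

It is this surplus of co-array lags that is exploited above to eliminate the nuisance perturbation variable and to resolve more sources than sensors.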

Relevância:

30.00% 30.00%

Publicador:

Resumo:

To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large: for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each machine is not required.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Aim: A recent Monte Carlo based study has shown that it is possible to design a diode that measures small field output factors equivalent to those in water. This is accomplished by placing an appropriately sized air gap above the silicon chip (1), with experimental results subsequently confirming that a particular Monte Carlo design was accurate (2). The aim of this work was to test whether a new correction-less diode could be designed using an entirely experimental methodology. Method: All measurements were performed on a Varian iX at a depth of 5 cm, an SSD of 95 cm, and field sizes of 5, 6, 8, 10, 20 and 30 mm. Firstly, the experimental transfer of k_Qclin,Qmsr^fclin,fmsr from a commonly used diode detector (IBA stereotactic field diode, SFD) to another diode detector (Sun Nuclear unshielded diode, EDGEe) was tested. These results were compared to Monte Carlo calculated values for the EDGEe. Secondly, the air gap above the EDGEe silicon chip was optimised empirically. Nine different air gap "tops" were placed above the EDGEe (air depth = 0.3, 0.6, 0.9 mm; air width = 3.06, 4.59, 6.13 mm). The sensitivity of the EDGEe was plotted as a function of air gap thickness for the field sizes measured. Results: The transfer of k_Qclin,Qmsr^fclin,fmsr from the SFD to the EDGEe was correct to within the simulation and measurement uncertainties. The EDGEe detector can be made "correction-less" for field sizes of 5 and 6 mm, but was ∼2% from being "correction-less" at field sizes of 8 and 10 mm. Conclusion: Different materials perturb small fields in different ways. A detector is only "correction-less" if all these perturbations happen to cancel out. Designing a "correction-less" diode is a complicated process; it is therefore reasonable to expect that Monte Carlo simulations should play an important role.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Purpose: Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (OR_det^fclin) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_det^fclin measured using an IBA stereotactic field diode (SFD). k_Qclin,Qmsr^fclin,fmsr was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_Qclin,Qmsr^fclin,fmsr was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) that is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring k_Qclin,Qmsr^fclin,fmsr values from the SFD to unknown diodes was tested by comparing the experimentally transferred k_Qclin,Qmsr^fclin,fmsr values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. Results: 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWeair) produced output factors equivalent to those in water at all field sizes (5–50 mm). The optimal air thickness required for the EDGEe diode was found to be 0.6 mm. The modified diode (EDGEeair) produced output factors equivalent to those in water, except at field sizes of 8 and 10 mm, where it measured approximately 2% greater than the relative dose to water. The experimentally calculated k_Qclin,Qmsr^fclin,fmsr for both the PTWe and the EDGEe diodes (without air) matched Monte Carlo simulated results, thus proving that it is feasible to transfer k_Qclin,Qmsr^fclin,fmsr from one commercially available detector to another using experimental methods and the recommended experimental setup. Conclusions: It is possible to create a diode which does not require corrections for small field output factor measurements. This has been performed and verified experimentally. The ability of a detector to be "correction-free" depends strongly on its design and composition. A non-water-equivalent detector can only be "correction-free" if competing perturbations of the beam cancel out at all field sizes. This should not be confused with true water equivalency of a detector.