966 results for inverse exponential distribution
Abstract:
In this paper we calculate the potential of a prolate spheroidal mass distribution, such as a dark matter halo, with a radially varying eccentricity. The potential is obtained by summing the shell-by-shell contributions of isodensity surfaces, which are taken to be concentric, to share a common polar axis, and to have an axis ratio that varies with radius. Interestingly, the constancy of the potential inside a shell is shown to be a good approximation even when the isodensity contours are dissimilar spheroids, as long as the radial variation in eccentricity is small, as seen in realistic systems. We consider three cases, in which the isodensity contours become more prolate at large radii, become less prolate, or have a constant eccentricity. Other relevant physical quantities, such as the rotation velocity and the net orbital and vertical frequencies due to the halo and an exponential disc of finite thickness embedded in it, are also obtained. We apply this to the kinematical origin of the Galactic warp and show that a prolate halo is not conducive to making long-lived warps, contrary to what has been proposed in the literature. The results obtained for a prolate mass distribution with a variable axis ratio are general and can be applied to other astrophysical systems, such as prolate bars, for a more realistic treatment.
Abstract:
Exponential compact higher-order schemes have been developed for the unsteady convection-diffusion equation (CDE). One of the developed schemes is sixth-order accurate and conditionally stable for Peclet numbers 0 <= Pe <= 2.8, while the other is fourth-order accurate and unconditionally stable. Schemes for two-dimensional (2D) problems are formulated using the alternating direction implicit (ADI) algorithm. Example problems are solved, and the numerical solutions are compared with the analytical solutions for each case.
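As context for the schemes described above, the sketch below solves the 1D unsteady CDE with a plain backward-Euler, central-difference discretization and compares it against the analytical solution of an advecting, diffusing Gaussian; it is not the paper's exponential compact sixth- or fourth-order scheme, and all parameter values are illustrative:

```python
# Illustrative sketch only: a standard backward-Euler, central-difference solver for the
# 1D unsteady convection-diffusion equation u_t + c u_x = D u_xx, NOT the exponential
# compact schemes developed in the paper.
import numpy as np

c, D = 1.0, 0.05                     # convection speed and diffusivity (illustrative)
L_dom, nx, dt, nt = 2.0, 201, 1e-3, 500
x = np.linspace(0.0, L_dom, nx)
h = x[1] - x[0]
t0 = 0.02                            # initial "age" of the Gaussian pulse

def exact(x, t):
    """Exact solution for a Gaussian pulse advected and diffused on an open domain."""
    return np.exp(-(x - 0.5 - c * t) ** 2 / (4.0 * D * (t + t0))) / np.sqrt(4.0 * np.pi * D * (t + t0))

u = exact(x, 0.0)

# Assemble the implicit operator M = I + dt*A, with A from central differences.
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = -c / (2 * h) - D / h**2
    A[i, i]     = 2 * D / h**2
    A[i, i + 1] =  c / (2 * h) - D / h**2
M = np.eye(nx) + dt * A
M[0, :], M[-1, :] = 0.0, 0.0
M[0, 0], M[-1, -1] = 1.0, 1.0        # Dirichlet rows; boundary values taken from the exact solution

for n in range(1, nt + 1):
    rhs = u.copy()
    rhs[0], rhs[-1] = exact(x[0], n * dt), exact(x[-1], n * dt)
    u = np.linalg.solve(M, rhs)

print("max error vs analytical solution:", np.abs(u - exact(x, nt * dt)).max())
```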
Abstract:
Three refractory coarse-grained CAIs from the Efremovka CV3 chondrite, one of which (E65) was previously shown to have formed with live Ca-41, were studied by ion microprobe for their Al-26-Mg-26 and Be-10-B-10 systematics in order to better understand the origin of Be-10. The high-precision Al-Mg data and the inferred Al-26/Al-27 values attest that the precursors of the three CAIs evolved in the solar nebula over a period of a few hundred thousand years before the last melting-crystallization events. The initial Be-10/Be-9 ratios and delta B-10 values defined by the Be-10 isochrons for the three Efremovka CAIs are similar within errors. The published CAI Be-10 data underscore the large range of initial Be-10/Be-9 ratios, in contrast to the relatively small range of Al-26/Al-27 variations in CAIs around the canonical ratio. Two models that could explain the origin of this large Be-10/Be-9 range are assessed from the collateral variations predicted for the initial delta B-10 values: (i) closed-system decay of Be-10 from a "canonical" Be-10/Be-9 ratio and (ii) formation of CAIs from a mixture of solid precursors and nebula gas irradiated for up to a few hundred thousand years. The second scenario is shown to be the most consistent with the data. This shows that the major fraction of Be-10 in CAIs was produced by irradiation of refractory grains, while contributions from trapped galactic cosmic rays and early solar wind irradiation are subordinate. The case for Be-10 production by solar cosmic ray irradiation of solid refractory precursors poses a conundrum for Ca-41: the latter is easily produced by irradiation and so should be more abundant than what is observed in CAIs. In other words, Be-10 production by irradiation from solar energetic particles implies a high Ca-41 abundance in the early solar system, which is not observed in CAIs.
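As a minimal illustration of how an initial Be-10/Be-9 ratio is inferred from an internal isochron (the slope of B-10/B-11 versus Be-9/B-11), the sketch below performs a weighted linear fit; the arrays are hypothetical placeholders, not the Efremovka measurements:

```python
# Sketch of a Be-B internal isochron fit: in a plot of 10B/11B versus 9Be/11B, the slope
# gives the initial 10Be/9Be ratio and the intercept the initial 10B/11B composition.
# The numbers below are hypothetical placeholders, not data from the paper.
import numpy as np

be9_b11 = np.array([ 10.0,  50.0, 150.0, 400.0, 900.0])   # measured 9Be/11B (placeholder)
b10_b11 = np.array([0.248, 0.252, 0.262, 0.285, 0.330])   # measured 10B/11B (placeholder)
sigma   = np.array([0.002, 0.002, 0.003, 0.004, 0.006])   # 1-sigma uncertainties (placeholder)

# Weighted least-squares line; np.polyfit expects weights of 1/sigma for Gaussian errors.
slope, intercept = np.polyfit(be9_b11, b10_b11, 1, w=1.0 / sigma)

print(f"inferred initial 10Be/9Be ~ {slope:.2e}")
print(f"inferred initial 10B/11B  ~ {intercept:.4f}")
```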
Abstract:
The Girsanov linearization method (GLM), proposed earlier in Saha, N., and Roy, D., 2007, "The Girsanov Linearisation Method for Stochastically Driven Nonlinear Oscillators," J. Appl. Mech., 74, pp. 885-897, is reformulated to arrive at a nearly exact, semianalytical, weak and explicit scheme for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the reformulated linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories while weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. The semianalyticity is due to an explicit linearization of the nonlinear drift terms, and it plays a crucial role in keeping the Radon-Nikodym derivative "nearly bounded" above by the inverse of the linearization time step (which means that only a subset of linearized trajectories with low, yet finite, probability exceeds this bound). Drift linearization is conveniently accomplished via the first few (lower-order) terms in the associated stochastic (Ito) Taylor expansion, so as to exclude (multiple) stochastic integrals from the numerical treatment. Similarly, the Radon-Nikodym derivative, which is a strictly positive exponential (super-) martingale, is converted to a canonical form and evaluated over each time step without directly computing the stochastic integrals appearing in its argument. Through their numerical implementations for a few low-dimensional nonlinear oscillators, the proposed variants of the scheme, presently referred to as the Girsanov corrected linearization method (GCLM), are shown to exhibit remarkably higher numerical accuracy over a much larger range of time step sizes than is possible with the local drift-linearization schemes on their own.
Abstract:
Estimation of design quantiles of hydrometeorological variables at critical locations in river basins is necessary for hydrological applications. To arrive at reliable estimates for locations (sites) where no or limited records are available, various regional frequency analysis (RFA) procedures have been developed over the past five decades. The most widely used procedure is based on the index-flood approach and L-moments. It assumes that the values of the scale and shape parameters of the frequency distribution are identical across all sites in a homogeneous region. In a real-world scenario, this assumption may not be valid even if a region is statistically homogeneous. To address this issue, a novel mathematical approach is proposed. It involves (i) identification of an appropriate frequency distribution to fit the random variable being analyzed for the homogeneous region, (ii) use of a proposed transformation mechanism to map observations of the variable from the original space to a dimensionless space in which the form of the distribution does not change and the variation of its parameter values across sites is minimal, (iii) construction of a growth curve in the dimensionless space, and (iv) mapping of the curve back to the original space for the target site by applying the inverse transformation, to arrive at the required quantile(s) for that site. The effectiveness of the proposed approach (PA) in predicting quantiles for ungauged sites is demonstrated through Monte Carlo simulation experiments considering five frequency distributions that are widely used in RFA, and through a case study on watersheds in the conterminous United States. Results indicate that the PA outperforms methods based on the index-flood approach.
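For reference, a minimal sketch of the baseline index-flood procedure (not the proposed transformation-based approach): rescale each gauged record by its at-site mean, pool the rescaled data, fit a regional distribution (a GEV is assumed here purely for illustration), and multiply the regional growth factor by the target site's index value. The records in the demo are synthetic placeholders.

```python
# Sketch of the classical index-flood procedure used as the baseline in RFA
# (not the transformation-based approach proposed in the paper).
import numpy as np
from scipy import stats

def regional_growth_curve(site_records):
    """Pool at-site records rescaled by their index flood (at-site mean) and fit a
    regional distribution; a GEV is used here purely as an illustration."""
    pooled = np.concatenate([np.asarray(q) / np.mean(q) for q in site_records])
    shape, loc, scale = stats.genextreme.fit(pooled)
    return shape, loc, scale

def quantile_at_ungauged_site(index_estimate, params, return_period):
    """Quantile at a target site = index value (e.g. from a regression on
    catchment attributes) times the regional growth factor."""
    shape, loc, scale = params
    growth_factor = stats.genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
    return index_estimate * growth_factor

# Illustrative use with synthetic records (placeholders, not study data):
rng = np.random.default_rng(0)
records = [stats.genextreme.rvs(-0.1, loc=100 * s, scale=30 * s, size=40, random_state=rng)
           for s in (1.0, 1.5, 2.2)]
params = regional_growth_curve(records)
print("100-year quantile at an ungauged site with index 250:",
      quantile_at_ungauged_site(250.0, params, 100))
```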
Abstract:
In this paper, we consider the setting of the pattern maximum likelihood (PML) problem studied by Orlitsky et al. We present a well-motivated heuristic algorithm for deciding the question of when the PML distribution of a given pattern is uniform. The algorithm is based on the concept of a "uniform threshold". This is a threshold at which the uniform distribution exhibits an interesting phase transition in the PML problem, going from being a local maximum to being a local minimum.
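For readers unfamiliar with the setting, the pattern of a sequence replaces each distinct symbol by the rank of its first appearance; a minimal helper is sketched below (the paper's uniform-threshold heuristic itself is not reproduced):

```python
# Minimal helper: the "pattern" of a sequence, in the sense used in the PML literature,
# replaces each distinct symbol by the rank of its first appearance.
def pattern(sequence):
    first_seen = {}
    out = []
    for symbol in sequence:
        if symbol not in first_seen:
            first_seen[symbol] = len(first_seen) + 1
        out.append(first_seen[symbol])
    return out

print(pattern("abracadabra"))   # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
```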
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, different subcarriers over which a codeword is transmitted may experience different signal-to-noise-ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent fiat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM, which is based on the Beta distribution. It is motivated by the central limit approximation for random variables with a finite support. It is simpler and as accurate as the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general, multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and gain new insights.
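For reference, the standard EESM compression of a vector of per-subcarrier SNRs into one equivalent flat-fading SNR is sketched below; the calibration factor beta is specific to each modulation-and-coding scheme, and the values used here are illustrative (the proposed Beta-distribution model is not reproduced):

```python
# Standard exponential effective SNR mapping (EESM): compress the vector of per-subcarrier
# SNRs seen by a codeword into one equivalent flat-fading SNR. The calibration factor beta
# is specific to each modulation-and-coding scheme (values here are illustrative).
import numpy as np

def eesm(snr_linear, beta):
    """snr_linear: per-subcarrier SNRs in linear scale; returns the effective SNR (linear)."""
    snr_linear = np.asarray(snr_linear, dtype=float)
    return -beta * np.log(np.mean(np.exp(-snr_linear / beta)))

# Example: subcarrier SNRs drawn purely for illustration, beta chosen arbitrarily.
rng = np.random.default_rng(1)
snrs_db = rng.normal(10.0, 4.0, size=48)            # per-subcarrier SNRs in dB
snr_eff = eesm(10 ** (snrs_db / 10.0), beta=6.0)
print("effective SNR:", 10 * np.log10(snr_eff), "dB")
```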
Abstract:
We analytically evaluate the large deviation function in a simple model of classical particle transfer between two reservoirs. We illustrate how the asymptotic long-time regime is reached starting from a special propagating initial condition. We show that the steady-state fluctuation theorem holds provided that the distribution of the particle number decays faster than an exponential, implying analyticity of the generating function and a discrete spectrum for its evolution operator.
Abstract:
Electrical impedance tomography (EIT) is a computerized medical imaging technique that reconstructs electrical impedance images of a domain under test from boundary voltage-current data measured by EIT electronic instrumentation, using an image reconstruction algorithm. Being a computed tomography technique, EIT injects a constant current into the patient's body through surface electrodes surrounding the domain to be imaged (Omega) and calculates the spatial distribution of electrical conductivity or resistivity of the closed conducting domain from the potentials developed at the domain boundary (partial derivative Omega). Practical phantoms are essential to study, test and calibrate a medical EIT system and to certify it before it is applied to patients for diagnostic imaging. EIT phantoms are therefore required to generate boundary data for studying and assessing the instrumentation and inverse solvers in EIT. For proper assessment of the inverse solver of a 2D EIT system, a perfect 2D practical phantom is required. Since practical phantoms are assemblies of objects with 3D geometries, developing a practical 2D phantom is a great challenge, and the boundary data generated from practical phantoms with 3D geometry are therefore inappropriate for assessing a 2D inverse solver. Furthermore, the boundary-data errors contributed by the instrumentation are difficult to separate from the errors introduced by the 3D phantoms. Hence, error-free boundary data are essential to assess the inverse solver in 2D EIT. In this direction, a MATLAB-based Virtual Phantom for 2D EIT (MatVP2DEIT) is developed to generate accurate boundary data for assessing 2D EIT inverse solvers and image reconstruction accuracy. MatVP2DEIT is a MATLAB-based computer program that simulates a phantom in the computer and generates boundary potential data as outputs, using combinations of different phantom parameters as inputs. Phantom diameter, inhomogeneity geometry (shape, size and position), number of inhomogeneities, applied current magnitude, background resistivity and inhomogeneity resistivity are all set as phantom variables, which are provided as input parameters to MatVP2DEIT for simulating different phantom configurations. A constant current injection is simulated at the phantom boundary with different current injection protocols, and the boundary potential data are calculated. Boundary data sets are generated for different phantom configurations obtained from different combinations of the phantom variables, and the resistivity images are reconstructed using EIDORS. Boundary data of virtual phantoms containing inhomogeneities with complex geometries are also generated for different current injection patterns using MatVP2DEIT, and the resistivity imaging is studied. The effect of the regularization method on the image reconstruction is also studied with the data generated by MatVP2DEIT. Resistivity images are evaluated by studying the resistivity parameters and contrast parameters estimated from the elemental resistivity profiles of the reconstructed phantom domain. Results show that MatVP2DEIT generates accurate boundary data for different types of single or multiple objects, which are accurate enough to reconstruct the resistivity images in EIDORS.
Spatial resolution studies show that resistivity imaging conducted with boundary data generated by MatVP2DEIT with 2048 elements can reconstruct two circular inhomogeneities placed at a minimum boundary-to-boundary distance of 2 mm. It is also observed that, in MatVP2DEIT with 2048 elements, the boundary data generated for a phantom with a circular inhomogeneity whose diameter is less than 7% of that of the phantom domain can produce resistivity images in EIDORS with a 1968-element mesh. Results also show that MatVP2DEIT accurately generates boundary data for the neighbouring, opposite reference and trigonometric current patterns, which makes it very suitable for resistivity reconstruction studies. MatVP2DEIT-generated data are also suitable for studying the effect of different regularization methods on the reconstruction process. By comparing the reconstructed image with the original geometry defined in MatVP2DEIT, it becomes easier to study the resistivity imaging procedure as well as the inverse-solver performance. Using the proposed MatVP2DEIT software with modified domains, the cross-sectional anatomy of a number of body parts can be simulated on a PC, and the impedance image reconstruction of human anatomy can be studied.
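MatVP2DEIT itself is a MATLAB program and is not reproduced here; as a language-agnostic sketch, the snippet below merely enumerates the neighbouring (adjacent) current-injection protocol for a 16-electrode ring, one of the injection patterns for which boundary data are generated:

```python
# Sketch of the "neighbouring" (adjacent) protocol for a ring of electrodes: current is
# injected through each adjacent electrode pair in turn, and differential voltages are
# read from every other adjacent pair that does not touch the injecting electrodes.
# This only enumerates the protocol; it does not reproduce the MATLAB tool MatVP2DEIT.
def adjacent_protocol(n_electrodes=16):
    frames = []
    for i in range(n_electrodes):
        inject = (i, (i + 1) % n_electrodes)
        measurements = []
        for j in range(n_electrodes):
            pair = (j, (j + 1) % n_electrodes)
            if inject[0] not in pair and inject[1] not in pair:
                measurements.append(pair)
        frames.append((inject, measurements))
    return frames

frames = adjacent_protocol(16)
total = sum(len(m) for _, m in frames)
print("injection frames:", len(frames), "| voltage measurements:", total)  # 16 | 208
```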
Abstract:
With the development of deep sequencing methodologies, it has become important to construct site saturation mutant (SSM) libraries in which every nucleotide/codon in a gene is individually randomized. We describe methodologies for the rapid, efficient, and economical construction of such libraries using inverse polymerase chain reaction (PCR). We show that if the degenerate codon is in the middle of the mutagenic primer, there is an inherent PCR bias due to the thermodynamic mismatch penalty, which decreases the proportion of unique mutants. Introducing a nucleotide bias in the primer can alleviate the problem. Alternatively, if the degenerate codon is placed at the 5' end, there is no PCR bias, which results in a higher proportion of unique mutants. This also facilitates detection of deletion mutants resulting from errors during primer synthesis. This method can be used to rapidly generate SSM libraries for any gene or nucleotide sequence, which can subsequently be screened and analyzed by deep sequencing.
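A minimal sketch of the primer-design idea described above, with the degenerate codon at the 5' end of the mutagenic primer for inverse PCR; the NNK codon and the 21-nt annealing length are assumptions chosen for illustration, not prescriptions from the paper:

```python
# Sketch of inverse-PCR primer design for a site-saturation mutagenesis (SSM) library,
# following the abstract's recommendation of placing the degenerate codon at the 5' end
# of the mutagenic primer. "NNK" and the 21-nt annealing length are illustrative choices.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def ssm_primers(gene, codon_index, anneal_len=21, degenerate="NNK"):
    """Return (forward, reverse) primers that replace codon `codon_index` (0-based)
    with a degenerate codon in an inverse-PCR amplification of the whole plasmid."""
    start = 3 * codon_index
    fwd = degenerate + gene[start + 3 : start + 3 + anneal_len]   # degenerate codon at 5' end
    rev = revcomp(gene[max(0, start - anneal_len) : start])       # anneals just upstream, back-to-back
    return fwd, rev

# Example on a short placeholder sequence (not a real gene):
gene = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTTGTT"
fwd, rev = ssm_primers(gene, codon_index=4)
print("forward:", fwd)
print("reverse:", rev)
```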
Abstract:
An analytical solution is presented that describes the transient temperature distribution in a geothermal reservoir in response to the injection of cold water. The reservoir is composed of a confined aquifer sandwiched between rocks of different thermo-geological properties. The heat transport processes considered are advection, longitudinal conduction in the geothermal aquifer, and conductive heat transfer to the underlying and overlying rocks of different geological properties. The one-dimensional heat transfer equation is solved using the Laplace transform under the assumption of constant density and thermal properties of both rock and fluid. Two simplified solutions are then derived, first neglecting longitudinal conductive heat transport and then neglecting heat transport to the confining rocks. Results show that heat loss to the confining rock layers plays a vital role in slowing down the cooling of the reservoir. The influence of parameters such as the volumetric injection rate, the longitudinal thermal conductivity and the porosity of the porous medium on the transient heat transport is assessed by observing how the transient temperature distribution varies with their values. The effects of the injection rate and the thermal conductivity on the results are found to be profound.
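The solution above is obtained in the Laplace domain; when a transform-domain expression is too unwieldy to invert analytically, a standard numerical route is Gaver-Stehfest inversion. The sketch below only demonstrates that machinery (via mpmath) on a test transform with a known inverse, not on the paper's temperature solution:

```python
# The transient-temperature solution is obtained in the Laplace domain; when a transform
# is too unwieldy to invert analytically, a numerical inversion such as the Gaver-Stehfest
# algorithm is a common fallback. This sketch only demonstrates the machinery on a test
# transform with a known inverse, not on the paper's actual solution.
import mpmath as mp

mp.mp.dps = 30                         # working precision (Stehfest is precision-sensitive)

def F(s):
    # Placeholder Laplace-domain function: F(s) = 1/(s + 1), whose inverse is exp(-t).
    # The reservoir-temperature transform from the paper would go here instead.
    return 1 / (s + 1)

for t in (0.5, 1.0, 2.0):
    numeric = mp.invertlaplace(F, t, method='stehfest')
    print(f"t = {t}: numerical = {float(numeric):.6f}, exact exp(-t) = {float(mp.exp(-t)):.6f}")
```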
Abstract:
It is well known that most developing countries have intermittent water supply and that the quantity of water supplied from the source is not distributed equitably among consumers. Aged pipelines, pump failures and improper management of water resources are some of the main reasons for this. This study presents the application of a nonlinear control technique to overcome this problem in different zones of the city of Bangalore. Water is pumped to the city over a distance of approximately 100 km and an elevation of approximately 400 m. The city has a large, undulating terrain across its zones, which leads to unequal distribution of water. The Bangalore inflow water-distribution system (WDS) has been modeled. A dynamic inversion (DI) nonlinear controller with proportional-integral-derivative (PID) features (DI-PID) is used for valve throttling to achieve the target flows to the different zones of the city. This novel approach to equitable water distribution using DI-PID controllers, which can be used as a decision support system, is discussed in this paper.
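The paper's controller is a dynamic-inversion PID (DI-PID) acting on the calibrated Bangalore WDS model; as a much-reduced illustration, the sketch below runs a plain PID loop that throttles a valve toward a target zone flow on a toy first-order valve-to-flow model, with all gains and parameters chosen arbitrarily:

```python
# Toy PID-based valve throttling toward a target zone flow. This is NOT the paper's
# DI-PID controller or WDS model; it is a plain PID acting on a first-order
# valve-to-flow plant, with every gain and parameter chosen for illustration only.
kp, ki, kd = 0.8, 0.4, 0.05          # PID gains (illustrative)
dt, n_steps = 1.0, 300               # time step [s] and simulation horizon
target_flow = 120.0                  # desired flow to the zone [L/s]
max_flow, tau = 200.0, 20.0          # flow at a fully open valve [L/s], plant time constant [s]

flow, integral, prev_err = 0.0, 0.0, 0.0
for _ in range(n_steps):
    err = target_flow - flow
    derivative = (err - prev_err) / dt
    prev_err = err
    u = kp * err + ki * integral + kd * derivative
    valve = min(100.0, max(0.0, u))  # valve opening [%], saturated
    if valve == u:                   # simple anti-windup: integrate only when unsaturated
        integral += err * dt
    # First-order plant: flow relaxes toward the value implied by the valve opening.
    flow += dt / tau * (max_flow * valve / 100.0 - flow)

print(f"final valve opening: {valve:.1f} %, final flow: {flow:.1f} L/s (target {target_flow:.0f} L/s)")
```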
Abstract:
An experimental charge-density analysis of pyrazinamide (a first-line antitubercular drug) was performed using high-resolution X-ray diffraction data [(sin theta/lambda)(max) = 1.1 angstrom(-1)] measured at 100 (2) K. The structure was solved by direct methods using SHELXS97 and refined with SHELXL97. The total electron density of the pyrazinamide molecule was modeled using the Hansen-Coppens multipole formalism implemented in the XD software. The topological properties of the electron density determined from the experiment were compared with the theoretical results obtained from CRYSTAL09 at the B3LYP/6-31G** level of theory. The crystal structure is stabilized by N-H...N and N-H...O hydrogen bonds, in which the N3-H3B...N1 and N3-H3A...O1 interactions form two types of dimers in the crystal. Hirshfeld surface analysis was carried out to analyze the intermolecular interactions. The fingerprint plot reveals that the N...H and O...H hydrogen-bonding interactions contribute 26.1% and 18.4%, respectively, to the total Hirshfeld surface. The lattice energy of the molecule was calculated using density functional theory (B3LYP) methods with the 6-31G** basis set. The molecular electrostatic potential of the pyrazinamide molecule exhibits extended electronegative regions around O1, N1 and N2. The existence of a negative electrostatic potential (ESP) region just above the upper and lower surfaces of the pyrazine ring confirms the pi-electron cloud.
Abstract:
A model has been developed to simulate the foam characteristics obtained when chemical (water) and physical (Freon) blowing agents are used together for the formation of polyurethane foams. The model considers the rate of reaction, the consequent rise in temperature of the reaction mixture, the nucleation of bubbles, and the mass transfer of CO2 and Freon to them up to the time of gelation. The model is able to explain the experimental results available in the literature. It further predicts that the nucleation period is reduced with an increase in water content (at constant Freon content), whereas with an increase in Freon concentration (at constant water content) the nucleation period decreases only marginally, leading to a narrower bubble-size distribution. The model also predicts that, by adding uniformly sized nuclei initially, the bubble-size distribution can be made independent of the rate of homogeneous nucleation, which thus offers an extra parameter for its control.
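As a toy illustration of the first two ingredients of such a model (reaction rate and the consequent temperature rise), the sketch below integrates an Arrhenius-type conversion-temperature ODE pair; nucleation and CO2/Freon mass transfer are not modelled, and all parameter values are arbitrary rather than the paper's:

```python
# Toy sketch of the first two ingredients of such a foaming model: an Arrhenius-type
# reaction and the adiabatic temperature rise it produces. Nucleation and the mass
# transfer of CO2/Freon to bubbles are NOT modelled here; all parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea = 5.0e5, 45.0e3          # pre-exponential factor [1/s] and activation energy [J/mol]
R = 8.314                      # gas constant [J/(mol K)]
dT_ad = 120.0                  # adiabatic temperature rise at full conversion [K]

def rhs(t, y):
    X, T = y                   # X: conversion of the blowing/gelling reaction, T: temperature [K]
    rate = A * np.exp(-Ea / (R * T)) * (1.0 - X)
    return [rate, dT_ad * rate]

sol = solve_ivp(rhs, (0.0, 120.0), [0.0, 300.0], max_step=0.5)
X_end, T_end = sol.y[0, -1], sol.y[1, -1]
print(f"conversion after 120 s: {X_end:.2f}, temperature: {T_end:.1f} K")
```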
Abstract:
Systematic structural perturbation has been used to fine-tune and understand the luminescence properties of three new 1,8-naphthalimides (NPIs) in solution and in aggregates. The NPIs show blue emission in the solution state, and their fluorescence quantum yields depend on their molecular rigidity. In concentrated solutions of the NPIs, intermolecular interactions were found to quench the fluorescence owing to the formation of excimers. In contrast, upon aggregation (in THF/H2O mixtures), the NPIs show aggregation-induced emission enhancement (AIEE). The NPIs also show moderately high solid-state emission quantum yields (ca. 10-12.7%). The AIEE behaviour of the NPIs depends on their molecular rigidity and the nature of their intermolecular interactions. NPIs 1-3 show different extents of intermolecular (pi-pi and C-H...O) interactions in their solid-state crystal structures, depending on their substituents. Detailed photophysical, computational and structural investigations suggest that an optimal balance of structural flexibility and intermolecular communication is necessary for achieving AIEE characteristics in these NPIs.