994 results for Stochastic Matrix
Abstract:
A transformation is suggested that can convert a non-Gaussian monthly hydrological time series into a Gaussian one. The suggested approach is verified with data from ten Indian rainfall time series. Incidentally, it is observed that once the deterministic trends are removed, the transformation leads to an uncorrelated process for monthly rainfall. The normalization procedure is general enough that it should also be applicable to river discharges; this is verified to a limited extent using data from two Indian river discharge series.
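To make the idea of Gaussianizing a monthly series concrete, the sketch below applies a generic rank-based normal-score transform to a synthetic skewed series. This is only an illustration of such a transformation, not necessarily the one proposed in the paper, and all data and parameters are invented.

```python
# A minimal sketch of one common way to Gaussianize a series (a rank-based
# normal-score transform). The abstract does not state which transformation
# the authors propose; the "rainfall" series here is synthetic.
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(x):
    """Map each value to the standard-normal quantile of its plotting position."""
    ranks = rankdata(x)                 # ranks 1 .. n
    u = (ranks - 0.5) / len(x)          # plotting positions in (0, 1)
    return norm.ppf(u)

# Hypothetical skewed monthly series (gamma-distributed, 20 years of months)
rain = np.random.default_rng(1).gamma(shape=2.0, scale=30.0, size=240)
z = normal_score(rain)
print(round(z.mean(), 3), round(z.std(), 3))   # roughly 0 and 1
```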
Abstract:
Measurement of individual emission sources (e.g., animals or pen manure) within intensive livestock enterprises is necessary to test emission calculation protocols and to identify targets for decreased emissions. In this study, a vented, fabric-covered large chamber (4.5 × 4.5 m, 1.5 m high; encompassing greater spatial variability than a smaller chamber) in combination with on-line analysis (nitrous oxide [N2O] and methane [CH4] via Fourier Transform Infrared Spectroscopy; 1 analysis min⁻¹) was tested as a means to isolate and measure emissions from beef feedlot pen manure sources. An exponential model relating chamber concentrations to ambient gas concentrations, air exchange (e.g., due to poor sealing with the surface; the model becomes linear when air exchange ≈ 0 m³ s⁻¹), and chamber dimensions allowed data to be fitted with high confidence. Alternating manure source emission measurements using the large chamber and the backward Lagrangian stochastic (bLS) technique (5-mo period; bLS validated via tracer gas release, recovery 94-104%) produced comparable N2O and CH4 emission values (no significant difference at P < 0.05). Greater precision of individual measurements was achieved with the large chamber than with the bLS technique (mean ± standard error of variance components: bLS half-hour measurements, 99.5 ± 325 mg CH4 s⁻¹ and 9.26 ± 20.6 mg N2O s⁻¹; large-chamber measurements, 99.6 ± 64.2 mg CH4 s⁻¹ and 8.18 ± 0.3 mg N2O s⁻¹). The large-chamber design is suitable for measurement of emissions from manure on pen surfaces, isolating these emissions from surrounding emission sources, including enteric emissions. © American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America.
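For context, a generic chamber mass balance of the following schematic form produces the exponential behaviour, and the linear limit at zero air exchange, described above; the symbols and parameterization are assumptions, not taken from the paper.

```latex
% Generic chamber mass balance (assumed notation): chamber volume V, covered
% surface area A, surface flux density F, air-exchange rate Q, ambient
% concentration C_a, initial chamber concentration C_0.
\[
  \frac{dC}{dt} \;=\; \frac{FA}{V} \;+\; \frac{Q}{V}\bigl(C_a - C\bigr)
  \quad\Longrightarrow\quad
  C(t) \;=\; C_a + \frac{FA}{Q}
        + \Bigl(C_0 - C_a - \frac{FA}{Q}\Bigr)\,e^{-Qt/V}.
\]
% As Q -> 0 this reduces to linear accumulation, C(t) = C_0 + (FA/V) t, in line
% with the remark that the model is linear when air exchange is about 0 m^3 s^-1.
```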
Abstract:
In irrigated cropping, as in any other industry, profit and risk are interdependent. An increase in profit would normally coincide with an increase in risk, which means that risk can be traded for profit. It is desirable to manage a farm so that it achieves the maximum possible profit for the desired level of risk. This paper identifies risk-efficient cropping strategies that allocate land and water between crop enterprises for a case study of an irrigated farm in Southern Queensland, Australia. This is achieved by applying stochastic frontier analysis to the output of a simulation experiment. The simulation experiment involved changes to the level of business risk by systematically varying the crop sowing rules in a bioeconomic model of the case study farm. This model utilises the multi-field capability of the process-based Agricultural Production Systems Simulator (APSIM) and is parameterised using data collected from interviews with a collaborating farmer. We found that sowing rules that increased the farm area sown to cotton caused the greatest increase in risk-efficiency. Increasing maize area also improved risk-efficiency, but to a lesser extent than cotton. Sowing rules that increased the area sown to wheat reduced the risk-efficiency of the farm business. Sowing rules were identified that had the potential to improve expected farm profit by ca. $50,000 annually without significantly increasing risk. The concept of the shadow price of risk is discussed, and an expression is derived from the estimated frontier equation that quantifies the trade-off between profit and risk.
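The shadow price of risk mentioned above can be made concrete with an assumed functional form for the frontier; the abstract does not report the estimated frontier equation, so the quadratic form below is illustrative only.

```latex
% Illustrative only: if the frontier for expected profit E[pi] at risk level
% sigma were, say, quadratic,
\[
  E[\pi](\sigma) \;=\; \beta_0 + \beta_1\,\sigma + \beta_2\,\sigma^2 ,
\]
% then the shadow price of risk is the slope of the frontier, i.e. the extra
% expected profit gained per unit of additional risk accepted:
\[
  \lambda(\sigma) \;=\; \frac{\partial E[\pi]}{\partial \sigma}
                  \;=\; \beta_1 + 2\,\beta_2\,\sigma .
\]
```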
Abstract:
We have used the density matrix renormalization group (DMRG) method to study the linear and nonlinear optical responses of first-generation nitrogen-based dendrimers with donor-acceptor groups. We have employed the Pariser–Parr–Pople Hamiltonian to model the interacting π electrons in these systems. Within the DMRG method we have used an innovative scheme to target excited states with large transition dipole to the ground state. This method reproduces exact optical gaps and polarization in systems where exact diagonalization of the Hamiltonian is possible. We have used a correction vector method, which tacitly takes into account the contribution of all excited states, to obtain the ground state polarizability, first hyperpolarizability, and two-photon absorption cross sections. We find that the lowest optical excitations as well as the lowest excited triplet states are localized. It is interesting to note that the first hyperpolarizability saturates more rapidly with system size than the linear polarizability, unlike in linear polyenes.
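The correction vector method referred to above can be summarized schematically as follows; sign and broadening conventions vary between implementations, so this is only an outline of the general technique, not the authors' exact working equations.

```latex
% Schematic correction-vector idea (conventions vary): instead of an explicit
% sum over excited states, solve a linear system for the first-order
% correction vector,
\[
  \bigl(H - E_0 - \hbar\omega - i\eta\bigr)\,\lvert\phi_j^{(1)}(\omega)\rangle
  \;=\; \hat{\mu}_j\,\lvert\psi_0\rangle ,
\]
% and obtain dynamic responses such as the polarizability as dipole matrix
% elements with these vectors,
\[
  \alpha_{ij}(\omega) \;=\;
  \langle\psi_0\rvert\hat{\mu}_i\lvert\phi_j^{(1)}(\omega)\rangle
  + \langle\psi_0\rvert\hat{\mu}_j\lvert\phi_i^{(1)}(-\omega)\rangle .
\]
```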
Abstract:
Stochastic filtering is, in general, the estimation of indirectly observed states given observed data; that is, one works with conditional expectations, which are among the most accurate estimates given the observations in the context of a probability space. In my thesis, I present the theory of filtering for two different kinds of observation processes: the first, a diffusion process, is discussed in the first chapter, while the third chapter introduces the second, a counting process. Most of the fundamental results of stochastic filtering are stated in the form of equations, such as the unnormalized Zakai equation, which leads to the Kushner-Stratonovich equation. The latter, also known as the normalized Zakai equation or, equivalently, the Fujisaki-Kallianpur-Kunita (FKK) equation, shows how the estimate differs between a diffusion observation process and a counting observation process. I also present an example for the linear Gaussian case, which underlies the construction of the Kalman-Bucy filter. As the unnormalized and normalized Zakai equations are written in terms of the conditional distribution, a density for these distributions is developed through these equations and stated in Kushner's theorem. However, Kushner's theorem takes the form of a stochastic partial differential equation whose solution must be verified to exist and be unique; this is covered in the second chapter.
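For reference, the equations named above take the following standard forms in the diffusion-observation case (with unit observation noise); the notation is generic and not taken from the thesis.

```latex
% Standard forms for the conditional law of the signal given observations Y_t:
% L is the signal generator, h the observation function, rho_t the unnormalized
% and pi_t the normalized conditional distribution, phi a test function.
\[
  d\rho_t(\varphi) \;=\; \rho_t(L\varphi)\,dt \;+\; \rho_t(h\varphi)\,dY_t
  \qquad\text{(Zakai, unnormalized)}
\]
\[
  d\pi_t(\varphi) \;=\; \pi_t(L\varphi)\,dt
  \;+\; \bigl(\pi_t(h\varphi) - \pi_t(h)\,\pi_t(\varphi)\bigr)\,
        \bigl(dY_t - \pi_t(h)\,dt\bigr)
  \qquad\text{(Kushner-Stratonovich / FKK)}
\]
```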
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion; due to computational difficulties, however, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational number solutions.
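To make the NML criterion concrete, the brute-force sketch below computes the multinomial NML code length directly from its definition. It is exponential in the sample size and is only meant to illustrate what the thesis's efficient algorithms compute, not how they compute it.

```python
# NML code length for a sequence of n observations over K categories,
# summarized by its count vector: -log2 P(x | ML params) + log2 C(K, n),
# where C(K, n) sums the maximized likelihood over all possible data sets.
from math import factorial, log2

def compositions(n, K):
    """All K-tuples of non-negative integers summing to n."""
    if K == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, K - 1):
            yield (first,) + rest

def max_log_lik(counts):
    """log2 of the maximized sequence likelihood for the given counts."""
    n = sum(counts)
    return sum(k * log2(k / n) for k in counts if k > 0)

def multinomial_coeff(counts):
    n = sum(counts)
    c = factorial(n)
    for k in counts:
        c //= factorial(k)
    return c

def log_regret(K, n):
    """log2 of the NML normalizer C(K, n), i.e. the parametric complexity."""
    total = sum(multinomial_coeff(c) * 2 ** max_log_lik(c)
                for c in compositions(n, K))
    return log2(total)

# Example: 10 binary observations with counts (7, 3)
counts = (7, 3)
K, n = len(counts), sum(counts)
nml_code_length = -max_log_lik(counts) + log_regret(K, n)
print(round(nml_code_length, 3), "bits")
```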
Abstract:
Matrix decompositions, in which a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
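The Boolean matrix product underlying the binary decomposition described above can be illustrated with a small sketch; the factor matrices below are toy examples, not the output of any algorithm from the thesis.

```python
# Boolean matrix product used in Boolean matrix factorization: an entry of
# B o C is 1 iff there is some k with B[i,k] = C[k,j] = 1 (OR of ANDs).
import numpy as np

def boolean_product(B, C):
    """Boolean (OR of ANDs) product of two 0/1 matrices."""
    return (B.astype(int) @ C.astype(int) > 0).astype(int)

B = np.array([[1, 0],
              [1, 1],
              [0, 1]])
C = np.array([[1, 1, 0],
              [0, 1, 1]])
A = boolean_product(B, C)      # 3 x 3 reconstructed binary matrix
print(A)
# The reconstruction error of a candidate factorization is typically the
# number of disagreeing entries, e.g. np.sum(A != A_target).
```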
Abstract:
We incorporate various gold nanoparticles (AuNPs) capped with different ligands in two-dimensional films and three-dimensional aggregates derived from N-stearoyl-L-alanine and N-lauroyl-L-alanine, respectively. The assemblies of N-stearoyl-L-alanine afforded stable films at the air-water interface. More compact assemblies were formed upon incorporation of AuNPs in the air-water interface of N-stearoyl-L-alanine. We then examined the effects of incorporation of various AuNPs functionalized with different capping ligands in three-dimensional assemblies of N-lauroyl-L-alanine, a compound that formed a gel in hydrocarbons. The profound influence of nanoparticle incorporation into physical gels was evident from evaluation of various microscopic and bulk properties. The interaction of AuNPs with the gelator assembly was found to depend critically on the capping ligands protecting the Au surface of the gold nanoparticles. Transmission electron microscopy (TEM) showed a long-range directional assembly of certain AuNPs along the gel fibers. Scanning electron microscopy (SEM) images of the freeze-dried gels and nanocomposites indicate that the morphological transformation in the composite microstructures depends significantly on the capping agent of the nanoparticles. Differential scanning calorimetry (DSC) showed that gel formation from sol occurred at a lower temperature upon incorporation of AuNPs having capping ligands that were able to align and noncovalently interact with the gel fibers. Rheological studies indicate that the gel-nanoparticle composites exhibit significantly greater viscoelasticity compared to the native gel alone when the capping ligands are able to interact through interdigitation into the gelator assembly. Thus, it was possible to define a clear relationship between the materials and the molecular-level properties by means of manipulation of the information inscribed on the NP surface.
Location of concentrators in a computer communication network: a stochastic automaton search method
Abstract:
The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals that have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals that can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals, which makes this a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators.) Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability, the cost associated with that point is calculated, and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems, the results were compared with those given by Cooper's (1964, 1967) EAC algorithm, and on average the proposed algorithm was found to perform better.
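A minimal sketch of the search scheme described above is given below. The terminal layout, cost model and all parameter values are invented for illustration, and the per-concentrator capacity constraint and the final local gradient search are omitted for brevity.

```python
# Illustrative stochastic-automaton search over region assignments of
# concentrators (not the authors' code; data and parameters are hypothetical).
import itertools
import math
import random

random.seed(0)

cpu = (0.5, 0.5)                                        # CPU at the centre of a unit square
terminals = [(random.random(), random.random()) for _ in range(30)]
fixed_cost = 0.4                                        # assumed fixed cost per concentrator
m, K = 2, 4                                             # 2 concentrators, 2 x 2 grid of regions
regions = [(rx * 0.5, ry * 0.5) for rx in range(2) for ry in range(2)]   # lower-left corners

def sample_point(region):
    """Uniform point inside a 0.5 x 0.5 region."""
    x0, y0 = region
    return (x0 + 0.5 * random.random(), y0 + 0.5 * random.random())

def cost(concentrators):
    """Fixed costs plus link costs: each terminal to its nearest concentrator, each concentrator to the CPU."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    c = m * fixed_cost
    c += sum(dist(conc, cpu) for conc in concentrators)
    c += sum(min(dist(t, conc) for conc in concentrators) for t in terminals)
    return c

# Each automaton state is an assignment of the m concentrators to the K regions.
states = list(itertools.product(range(K), repeat=m))
avg_cost = {s: None for s in states}
visits = {s: 0 for s in states}
prob = {s: 1.0 / len(states) for s in states}           # start from equal probabilities

for step in range(2000):
    s = random.choices(states, weights=[prob[t] for t in states])[0]
    point = tuple(sample_point(regions[r]) for r in s)  # one concentrator per assigned region
    c = cost(point)
    visits[s] += 1
    avg_cost[s] = c if avg_cost[s] is None else avg_cost[s] + (c - avg_cost[s]) / visits[s]
    # Reinforcement: state probabilities inversely proportional to average cost.
    weights = {t: (1.0 / avg_cost[t]) if avg_cost[t] else 1.0 for t in states}
    total = sum(weights.values())
    prob = {t: w / total for t, w in weights.items()}

best = min((s for s in states if avg_cost[s]), key=lambda s: avg_cost[s])
print("most promising region assignment:", best, "average cost ~", round(avg_cost[best], 3))
```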
Abstract:
Purification of drinking water is routinely achieved by use of conventional coagulants and disinfection procedures. However, there are instances, such as flood events, when turbidity reaches extreme levels, while natural organic matter (NOM) may be an issue throughout the year. Consequently, there is a need to develop technologies that can effectively treat water of high turbidity during flood events and high NOM content year round. Our hypothesis was that pebble matrix filtration potentially offered a relatively cheap, simple and reliable means to clarify such challenging water samples. Therefore, a laboratory-scale pebble matrix filter (PMF) column was used to evaluate turbidity and NOM pre-treatment performance on 2013 Brisbane River flood water. Since the high turbidity was only a seasonal and short-term problem, the general applicability of pebble matrix filters for NOM removal was also investigated. A 1.0 m deep bed of pebbles (the matrix), partly in-filled with either sand or crushed glass, was tested, upon which was situated a layer of granular activated carbon (GAC). Turbidity was measured as a surrogate for suspended solids (SS), whereas total organic carbon (TOC) and UV absorbance at 254 nm were measured as surrogate parameters for NOM. Experiments using natural flood water showed that, without the addition of any chemical coagulants, PMF columns achieved at least 50% turbidity reduction when the source water contained moderate hardness levels. For harder water samples, above 85% turbidity reduction was obtained. The ability to remove 50% of turbidity without chemical coagulants may represent significant cost savings to water treatment plants, with added environmental benefits accruing from reduced sludge formation. A TOC reduction of 35-47% and a UV-254 nm reduction of 24-38% were also observed. In addition to turbidity removal during flood periods, the ability to remove NOM with the pebble matrix filter throughout the year may have the benefit of reducing disinfection by-product (DBP) formation potential and coagulant demand at water treatment plants. Final head losses were remarkably low, reaching only 11 cm at a filtration velocity of 0.70 m/h.
Abstract:
Poly(vinyl alcohol) (PVA) matrix composites reinforced with nanodiamond (ND) particles, with ND content up to 0.6 wt%, were synthesized. Characterization of the composites by transmission electron microscopy (TEM) and small-angle X-ray scattering (SAXS) reveals uniform distribution of the ND particles with no agglomeration in the matrix. Differential scanning calorimetry reveals that the crystallinity of the polymer increases with increasing ND content, indicating a strong interaction between ND and PVA. Nano-indentation was employed to assess the mechanical properties of the composites. Results show that even small additions of ND lead to significant enhancement in the hardness and elastic modulus of PVA. Possible micromechanisms responsible for the enhancement of the mechanical properties are discussed.
Abstract:
Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, NVIDIA GPUs, and Intel Many Integrated Core (MIC) accelerators.
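For illustration, a plain (non-OpenCL) Euler-type simulation of Heston sample trajectories might look as follows; the parameter values are arbitrary and this is not the paper's implementation.

```python
# Minimal NumPy sketch of Heston path simulation with an Euler scheme (full
# truncation), showing the kind of independent sample-trajectory work that
# parallelizes well; all parameters below are assumed for the example.
import numpy as np

rng = np.random.default_rng(42)

S0, v0 = 100.0, 0.04            # initial price and variance (assumed)
mu, kappa, theta = 0.05, 2.0, 0.04
xi, rho = 0.3, -0.7             # vol-of-vol and correlation (assumed)
T, steps, paths = 1.0, 252, 10_000
dt = T / steps

S = np.full(paths, S0)
v = np.full(paths, v0)
for _ in range(steps):
    z1 = rng.standard_normal(paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(paths)
    v_pos = np.maximum(v, 0.0)                      # full truncation keeps the sqrt real
    S *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

print("mean terminal price:", S.mean(), "mean terminal variance:", v.mean())
```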
Abstract:
The paper deals with a method for the evaluation of exhaust mufflers with mean flow. A new set of variables, convective pressure and convective mass velocity, has been defined to replace the acoustic variables. An expression for the attenuation (insertion loss) of a muffler has been proposed in terms of convective terminal impedances and a velocity ratio, along the lines of the one existing for acoustic filters. In order to evaluate the velocity ratio in terms of convective variables, transfer matrices for various muffler elements have been derived from the basic relations of energy, mass and momentum. Finally, the velocity-ratio-cum-transfer-matrix method is illustrated for a typical straight-through muffler.
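For orientation, the classical zero-mean-flow transfer matrix of a uniform pipe, relating upstream and downstream pressure and mass velocity, is shown below; the paper's contribution is the analogous matrices written in convective variables so that mean flow is accounted for.

```latex
% Classical (purely acoustic) transfer-matrix relation for a uniform pipe of
% length l, with wavenumber k and characteristic impedance Y_0, between the
% upstream (1) and downstream (2) pressure p and mass velocity v.
\[
  \begin{pmatrix} p_1 \\ v_1 \end{pmatrix}
  =
  \begin{pmatrix}
    \cos kl & j\,Y_0 \sin kl \\
    \tfrac{j}{Y_0}\,\sin kl & \cos kl
  \end{pmatrix}
  \begin{pmatrix} p_2 \\ v_2 \end{pmatrix}
\]
```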