873 results for Multiple scales method
A FETI-preconditioned conjugate gradient method for large-scale stochastic finite element problems
Abstract:
In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations where the blocks are sparse but coupled. The solution of this algebraic system of equations becomes rapidly challenging when the size of the physical system and/or the level of uncertainty is increased. This paper addresses this challenge by presenting a preconditioned conjugate gradient method for such block systems where the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
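For orientation, the iterative solver described above is, at its core, a preconditioned conjugate gradient loop. A minimal sketch in Python/NumPy follows, with the preconditioner left as a generic callable standing in for the FETI-DP step; function names are illustrative, and the Krylov-subspace reuse for repeated right-hand sides is not shown.

    import numpy as np

    def pcg(A, b, apply_preconditioner, x0=None, tol=1e-8, max_iter=500):
        # Preconditioned conjugate gradient for a symmetric positive-definite A.
        # apply_preconditioner(r) should return M^{-1} r; in the abstract's setting
        # this is where a FETI-DP-type solve on the block system would go.
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        z = apply_preconditioner(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = apply_preconditioner(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x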
Abstract:
In lake ecosystems, both fish and invertebrate predators have dramatic effects on their prey communities. Fish predation selects large cladocerans, while invertebrate predators prefer prey of smaller size. Since invertebrate predators are preferred food items for fish, their occurrence at high densities is often connected with the absence or low numbers of fish. It is generally believed that invertebrate predators can play a significant role only if the density of planktivorous fish is low. However, in the eutrophic, clay-turbid Lake Hiidenvesi (southern Finland), a dense population of predatory Chaoborus flavicans larvae coexists with an abundant fish population. The population covers the stratifying area of the lake and attains a maximum population density of 23,000 ind. m⁻². This thesis aims to clarify the effects of Chaoborus flavicans on the zooplankton community and the environmental factors facilitating the coexistence of fish and invertebrate predators. In the stratifying area of Lake Hiidenvesi, the seasonal succession of cladocerans was exceptional: the spring biomass peak of cladocerans was missing and the highest biomass occurred in midsummer. In early summer, the consumption rate by chaoborids clearly exceeded the production rate of cladocerans, and each year the biomass peak of cladocerans coincided with the minimum chaoborid density. In contrast, consumption by fish was very low, and in each study year cladocerans attained maximum biomass simultaneously with the highest consumption by smelt (Osmerus eperlanus). The results indicated that Chaoborus flavicans was the main predator of cladocerans in the stratifying area of Lake Hiidenvesi. Clay turbidity strongly contributed to the coexistence of chaoborids and smelt at high densities. Turbidity exceeding 30 NTU combined with light intensity below 0.1 μE m⁻² s⁻¹ provides an efficient daytime refuge for chaoborids, but turbidity alone is not an adequate refuge unless combined with low light intensity. In the non-stratifying shallow basins of Lake Hiidenvesi, light intensity exceeds this level at the lake bottom during summer days, preventing Chaoborus from forming a dense population in the shallow parts of the lake. Chaoborus can be successful particularly in deep, clay-turbid lakes where the larvae can remain high in the water column, close to their epilimnetic prey. Suspended clay alters the trophic interactions by weakening the link between fish and Chaoborus, which in turn strengthens the effect of Chaoborus predation on crustacean zooplankton. Since food web management largely relies on manipulations of fish stocks and the cascading effects of such actions, the validity of the method in deep, clay-turbid lakes may be questioned.
Abstract:
We present a new method for establishing correlation between deuterium and its attached carbon in a deuterated liquid crystal. The method is based on transfer of polarization using the DAPT pulse sequence, proposed originally for two spin-1/2 nuclei and extended here to a spin-1/spin-1/2 pair. DAPT utilizes the evolution of magnetization of the spin pair under two blocks of phase-shifted BLEW-12 pulses on one of the spins, separated by a 90-degree pulse on the other spin. The method is easy to implement and, unlike Hartmann-Hahn cross-polarization, does not need to satisfy a matching condition. The experimental results presented demonstrate the efficacy of the method.
Abstract:
The quality of short-term electricity load forecasting is crucial to the operation and trading activities of market participants in an electricity market. In this paper, it is shown that a multiple-equation time-series model, which is estimated by repeated application of ordinary least squares, has the potential to match or even outperform more complex nonlinear and nonparametric forecasting models. The key ingredient of the success of this simple model is the effective use of lagged information by allowing for interaction between seasonal patterns and intra-day dependencies. Although the model is built using data for the Queensland region of Australia, the method is completely generic and applicable to any load forecasting problem. The model's forecasting ability is assessed by means of the mean absolute percentage error (MAPE). For day-ahead forecasts, the MAPE returned by the model over a period of 11 years is an impressive 1.36%. The forecast accuracy of the model is compared with a number of benchmarks, including three popular alternatives and one industrial standard reported by the Australian Energy Market Operator (AEMO). The performance of the model developed in this paper is superior to all benchmarks and outperforms the AEMO forecasts by about a third in terms of the MAPE criterion.
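As a hedged illustration of the modelling style summarised above (one OLS equation per intra-day period, regressors built from lagged loads, accuracy judged by MAPE), the sketch below uses NumPy; the lag choices and regressor layout are assumptions for illustration, not the paper's specification.

    import numpy as np

    def mape(actual, forecast):
        # Mean absolute percentage error, in percent.
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    def fit_and_forecast(history, lags=(1, 2, 7)):
        # history: array of shape (n_days, periods_per_day) of observed loads.
        # One OLS equation per intra-day period, regressing on the same period
        # of previous days (lags measured in days).
        n_days, n_periods = history.shape
        L = max(lags)
        forecast = np.empty(n_periods)
        for p in range(n_periods):
            y = history[L:, p]
            X = np.column_stack([np.ones(n_days - L)] +
                                [history[L - l:n_days - l, p] for l in lags])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            x_new = np.concatenate(([1.0], [history[n_days - l, p] for l in lags]))
            forecast[p] = x_new @ beta   # day-ahead forecast for period p
        return forecast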
Abstract:
Drug-induced liver injury is one of the frequent reasons for the removal of a drug from the market. In recent years there has been pressure to develop more cost-efficient, faster and easier ways to investigate drug-induced toxicity, in order to recognize hepatotoxic drugs in the earlier phases of drug development. A High Content Screening (HCS) instrument is an automated microscope equipped with image analysis software. It makes image analysis faster and reduces the risk of human error by always analyzing the images in the same way. Because less drug and less time are needed for the analysis and multiple parameters can be analyzed from the same cells, the method should be more sensitive, effective and cheaper than conventional assays for cytotoxicity testing. Liver cells are rich in mitochondria, and many drugs target their toxicity to hepatocyte mitochondria. Mitochondria produce the majority of the ATP in the cell through oxidative phosphorylation; they maintain biochemical homeostasis in the cell and participate in cell death. Mitochondria are divided into two compartments by the inner and outer mitochondrial membranes, and oxidative phosphorylation takes place at the inner mitochondrial membrane. A component of the respiratory chain, the protein cytochrome c, activates caspase cascades when released, which leads to apoptosis. The aim of this study was to implement, optimize and compare mitochondrial toxicity HCS assays in live cells and fixed cells in two cellular models: the human HepG2 hepatoma cell line and rat primary hepatocytes. Three different hepato- and mitochondriotoxic drugs (staurosporine, rotenone and tolcapone) were used. Cells were treated with the drugs, incubated with the fluorescent probes, and the images were then analyzed using the Cellomics ArrayScan VTI reader. Finally, the results obtained after optimizing the methods were compared to each other and to the results of the conventional cytotoxicity assays, ATP and LDH measurements. After optimization, the live-cell method and rat primary hepatocytes were selected for use in the experiments. Staurosporine was the most toxic of the three drugs and damaged the cells most quickly. Rotenone was less toxic, but its results were more reproducible, and it would thus serve as a good positive control in screening. Tolcapone was the least toxic. So far, the conventional cytotoxicity analyses worked better than the HCS methods; more optimization is needed to make the HCS method more sensitive, which was not possible in this study due to time constraints.
Abstract:
In receive antenna selection (AS), only signals from a subset of the antennas are processed at any time by the limited number of radio frequency (RF) chains available at the receiver. Hence, the transmitter needs to send pilots multiple times to enable the receiver to estimate the channel state of all the antennas and select the best subset. Conventionally, the sensitivity of coherent reception to channel estimation errors has been tackled by boosting the energy allocated to all pilots to ensure accurate channel estimates for all antennas. Energy for pilots received by unselected antennas is mostly wasted, especially since the selection process is robust to estimation errors. In this paper, we propose a novel training method uniquely tailored for AS, in which one extra pilot symbol is transmitted to generate accurate channel estimates for the antenna subset that actually receives data. Consequently, the transmitter can selectively boost the energy allocated to the extra pilot. We derive closed-form expressions for the proposed scheme's symbol error probability for MPSK and MQAM, and optimize the energy allocated to pilot and data symbols. Through an insightful asymptotic analysis, we show that the optimal solution achieves full diversity and is better than the conventional method.
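A hedged toy sketch of the training idea described above, under an assumed flat-fading model with illustrative names (the paper's actual pilot signalling, error-probability expressions and energy optimisation are not reproduced): a first round of ordinary pilots is used only to rank and select antennas, after which one extra, energy-boosted pilot re-estimates the channels of the selected subset.

    import numpy as np

    def select_then_refine(h, n_rf, e_pilot, e_extra, noise_var=1.0, rng=None):
        # h: true complex channel gains of all receive antennas (toy model).
        rng = np.random.default_rng() if rng is None else rng
        n = len(h)
        noise = lambda size, e: np.sqrt(noise_var / (2 * e)) * (
            rng.standard_normal(size) + 1j * rng.standard_normal(size))
        # Round 1: ordinary pilots on every antenna, used only to rank them.
        h_coarse = h + noise(n, e_pilot)
        selected = np.argsort(np.abs(h_coarse))[-n_rf:]
        # Round 2: one extra, energy-boosted pilot, re-estimating only the
        # channels of the selected subset that will actually receive data.
        h_refined = h[selected] + noise(n_rf, e_extra)
        return selected, h_refined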
Abstract:
A general procedure for arriving at 3-D models of disulphide-rich polypeptide systems based on the covalent cross-link constraints has been developed. The procedure, which has been coded as a computer program, RANMOD, assigns a large number of random, permitted backbone conformations to the polypeptide and identifies stereochemically acceptable structures as plausible models based on strainless disulphide bridge modelling. Disulphide bond modelling is performed using the procedure MODIP, developed earlier in connection with the choice of suitable sites where disulphide bonds could be engineered in proteins (Sowdhamini, R., Srinivasan, N., Shoichet, B., Santi, D.V., Ramakrishnan, C. and Balaram, P. (1989) Protein Engng, 3, 95-103). The method RANMOD has been tested on small disulphide loops, and the structures were compared against preferred backbone conformations derived from an analysis of a putative disulphide sub-database and from model calculations. RANMOD has been applied to disulphide-rich peptides and found to give rise to several stereochemically acceptable structures. The results obtained on the modelling of two test cases, α-conotoxin GI and endothelin I, are presented. Available NMR data suggest that such small systems exhibit conformational heterogeneity in solution. Hence, this approach for obtaining several distinct models is particularly attractive for the study of conformational excursions.
Abstract:
A hybrid technique for modeling two-dimensional fracture problems, which makes use of displacement discontinuity elements and the direct boundary element method, is presented. The direct boundary element method is used to model the finite domain of the body, while displacement discontinuity elements are utilized to represent the cracks. Thus the advantages of the component methods are effectively combined. This method has been implemented in a computer program, and numerical results which show the accuracy of the present method are presented. The cases of bodies containing edge cracks as well as multiple cracks are considered. A direct method and an iterative technique are described. The present hybrid method is most suitable for modeling problems involving crack propagation.
Abstract:
The problem of structural system identification when measurements originate from multiple tests and multiple sensors is considered. An offline solution to this problem using bootstrap particle filtering is proposed. The central idea of the proposed method is the introduction of a dummy independent variable that allows for simultaneous assimilation of multiple measurements in a sequential manner. The method can treat linear/nonlinear structural models and allows for measurements on strains and displacements under static/dynamic loads. Illustrative examples consider measurement data from numerical models and also from laboratory experiments. The results from the proposed method are compared with those from a Kalman filter-based approach and the superior performance of the proposed method is demonstrated. Copyright (C) 2009 John Wiley & Sons, Ltd.
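A minimal bootstrap particle filter sketch in Python, with user-supplied propagation and likelihood functions (these, and all names, are assumptions for illustration); the abstract's dummy independent variable corresponds here to the pseudo-time index over which the pooled measurements are assimilated one by one.

    import numpy as np

    def bootstrap_particle_filter(measurements, propagate, likelihood, particles0, rng=None):
        # measurements: sequence ordered by a dummy "pseudo-time" index, so data
        # from several tests/sensors can be assimilated one after another.
        # propagate(particles): state/parameter evolution (e.g. a random walk).
        # likelihood(y, particles): likelihood of measurement y for each particle.
        rng = np.random.default_rng() if rng is None else rng
        particles = particles0.copy()
        n = particles.shape[0]
        estimates = []
        for y in measurements:
            particles = propagate(particles)
            w = likelihood(y, particles)
            w = w / w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]   # multinomial resampling
            estimates.append(particles.mean(axis=0))             # posterior mean estimate
        return np.array(estimates)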
Abstract:
A systematic method is formulated to carry out theoretical analysis in a multilocus multiallele genetic system. As a special application, the Fundamental Theorem of Natural Selection is proved (in the continuous time model) for a multilocus multiallele system if all pairwise linkage disequilibria are zero.
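For context, the classical continuous-time statement of the Fundamental Theorem of Natural Selection can be written as

    \frac{d\bar{m}}{dt} = \sigma_A^2(m),

i.e. the rate of change of the mean Malthusian fitness \bar{m} equals the additive genetic variance in fitness. The contribution summarised above is to establish this in a multilocus multiallele system under the stated condition that all pairwise linkage disequilibria vanish; the precise form proved in the paper may differ in detail.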
Abstract:
In this letter, we propose a method for blind separation of d co-channel BPSK signals arriving at an antenna array. Our method involves two steps. In the first step, the received data vectors at the output of the array are grouped into 2^d clusters. In the second step, we assign the 2^d d-tuples with ±1 elements to these clusters in a consistent fashion. From the knowledge of the cluster to which a data vector belongs, we estimate the bits transmitted at that instant. Computer simulations are used to study the performance of our method.
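A hedged sketch of the two-step procedure in Python (SciPy's k-means is used for the clustering step purely for illustration, since the abstract does not fix the clustering algorithm, and the consistent tuple-to-cluster assignment of the second step is only indicated):

    import numpy as np
    from itertools import product
    from scipy.cluster.vq import kmeans2

    def cluster_bpsk_snapshots(received, d):
        # received: (N, m) complex array of antenna-array output vectors.
        # Step 1: group the snapshots into 2**d clusters.
        features = np.hstack([received.real, received.imag])
        centroids, labels = kmeans2(features, 2 ** d, minit='++')
        # Step 2 (only indicated here): assign the 2**d d-tuples with +/-1
        # entries to these clusters consistently; each snapshot's bits are then
        # read off from the tuple attached to its cluster.
        candidate_tuples = np.array(list(product([-1, 1], repeat=d)))
        return labels, centroids, candidate_tuples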
Abstract:
In this paper, we have developed a method to compute the fractal dimension (FD) of discrete-time signals, in the time domain, by modifying the box-counting method. The size of the box is dependent on the sampling frequency of the signal. The number of boxes required to completely cover the signal is obtained at multiple time resolutions. The time resolutions are made coarser by decimating the signal. The log-log plot of the total number of boxes required to cover the curve versus the size of the box used appears to be a straight line, whose slope is taken as an estimate of the FD of the signal. Results are provided to demonstrate the performance of the proposed method using parametric fractal signals. The estimation accuracy of the method is compared with that of the Katz, Sevcik, and Higuchi methods. In addition, some properties of the FD are discussed.
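A hedged sketch of a box-counting fractal dimension estimate for a sampled signal follows in Python/NumPy; it illustrates the generic log-log slope idea described above, while the paper's specific box-size convention tied to the sampling frequency and its decimation scheme are not reproduced.

    import numpy as np

    def box_counting_fd(x, scales=(1, 2, 4, 8, 16, 32)):
        # Generic box-counting estimate of the fractal dimension of a 1-D signal
        # (illustrative sketch, not the paper's exact modification).
        x = np.asarray(x, dtype=float)
        x = (x - x.min()) / (x.max() - x.min())        # normalise amplitude to [0, 1]
        n = len(x)
        counts = []
        for s in scales:
            eps = s / n                                 # box edge length in normalised units
            boxes = 0
            for i in range(0, n - s + 1, s):            # non-overlapping windows of s samples
                seg = x[i:i + s]
                boxes += int(np.ceil((seg.max() - seg.min()) / eps)) + 1
            counts.append(boxes)
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
        return slope                                    # estimated fractal dimension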
Abstract:
The weighted-least-squares method based on the Gauss-Newton minimization technique is used for parameter estimation in water distribution networks. The parameters considered are element resistances (single and/or group resistances, Hazen-Williams coefficients, pump specifications) and consumptions (for single or multiple loading conditions). The measurements considered are nodal pressure heads, pipe flows, head losses in pipes, and consumptions/inflows. An important feature of the study is a detailed consideration of the influence of different choices of weights on parameter estimation, for error-free data, noisy data, and noisy data which include bad data. The method is applied to three different networks, including a real-life problem.
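As a generic illustration of the estimation machinery named above, the following Python sketch shows a weighted-least-squares Gauss-Newton iteration; the model, its Jacobian and the weight matrix are user-supplied placeholders, since the abstract does not specify them.

    import numpy as np

    def wls_gauss_newton(simulate, jacobian, measured, W, theta0, iters=20, tol=1e-8):
        # simulate(theta): model-predicted measurements (heads, flows, ...).
        # jacobian(theta): sensitivities of those predictions w.r.t. the parameters.
        # Both are network-specific placeholders, not given by the abstract.
        theta = np.asarray(theta0, dtype=float)
        for _ in range(iters):
            r = measured - simulate(theta)              # weighted residual to be minimised
            J = jacobian(theta)
            delta = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)   # Gauss-Newton step
            theta = theta + delta
            if np.linalg.norm(delta) < tol:
                break
        return theta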
Abstract:
Routing of floods is essential to control the flood flow at the flood control station such that it remains within the specified safe limit. In this paper, the applicability of the extended Muskingum method is examined for routing of floods in a case study of the Hirakud reservoir, Mahanadi river basin, India. The inflows to the flood control station are of two types: a controllable component, which comprises reservoir releases for power and spill, and an uncontrollable component, which comprises inflow from lower tributaries and the intermediate catchment between the reservoir and the flood control station. The Muskingum model is improved to incorporate multiple sources of inflow and a single outflow to route the flood in the reach. Instead of the time-lag and prismoidal flow parameters, suitable coefficients for the various types of inflow were derived using linear programming. Presently, decisions about the operation of the gates of Hirakud dam are taken once every 12 h during floods. However, four time intervals of 24, 18, 12 and 6 h are examined to test the sensitivity of the routing time interval on the computed flood flow at the flood control station. It is observed that the mean relative error decreases with decreasing routing interval for both the calibration and testing phases. It is concluded that the extended Muskingum method can be explored for similar reservoir configurations, such as the Hirakud reservoir, with suitable modifications. (C) 2010 International Association of Hydro-environment Engineering and Research, Asia Pacific Division. Published by Elsevier B.V. All rights reserved.
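For orientation, the classical Muskingum routing recursion is

    O_{t+1} = C_0 I_{t+1} + C_1 I_t + C_2 O_t, \qquad C_0 + C_1 + C_2 = 1,

and the extension discussed above replaces the single inflow by several inflow sources, e.g.

    O_{t+1} = \sum_j \left( a_j I_{j,t+1} + b_j I_{j,t} \right) + c\, O_t,

with the coefficients obtained by linear programming rather than from the time-lag and prismoidal flow parameters. The multi-source form written here is an illustrative assumption; the abstract does not give the exact equation.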
Abstract:
The problem of estimating multiple Carrier Frequency Offsets (CFOs) in the uplink of MIMO-OFDM systems with Co-Channel (CC) and OFDMA-based carrier allocation is considered. A tri-linear data model for a generalized multiuser OFDM system is formulated. A novel blind subspace-based estimation of multiple CFOs, based on the Khatri-Rao product, is proposed for arbitrary carrier allocation schemes in OFDMA systems and for CC users in OFDM systems. The method works where the conventional subspace method fails. The performance of the proposed methods is compared with a pilot-based least-squares method.