937 results for "Editor of flow analysis methods"


Relevance:

100.00%

Publisher:

Abstract:

This work is devoted to the problem of reconstructing the basis weight structure of the paper web with black-box techniques. The data analyzed come from a real paper machine and were collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it provides a flexible and interesting tool for analyzing properties of the paper web. In a simplified version of our algorithm, ARMA and the DFT are used independently to represent the given signal, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, provides a means to separate significant signals from noise.
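The two diagnostics named above are compact enough to sketch directly. This is an illustrative pure-Python version, not the thesis implementation; the test signals and lag count are invented:

```python
import math
import random

def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic: Q = n(n+2) * sum_k rho_k^2 / (n - k).
    A large Q means the series is serially correlated (not white noise)."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    denom = sum(d * d for d in dev)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho = sum(dev[i] * dev[i + k] for i in range(n - k)) / denom
        q += rho * rho / (n - k)
    return n * (n + 2) * q

def rmse(actual, predicted):
    """Root Mean Squared Error between a signal and its model."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]   # no structure
signal = [math.sin(0.3 * i) for i in range(500)]       # strong structure

q_noise = ljung_box_q(noise, 10)
q_signal = ljung_box_q(signal, 10)
```

A structured signal yields a Q far above the chi-square critical value for the chosen number of lags, while the residuals of a well-fitted ARMA model should give a small Q.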

Relevance:

100.00%

Publisher:

Abstract:

Gene turnover rates and the evolution of gene family sizes are important aspects of genome evolution. Here, we use curated sequence data of the major chemosensory gene families from Drosophila (the gustatory receptor, odorant receptor, ionotropic receptor, and odorant-binding protein families) to conduct a comparative analysis among families, exploring different methods to estimate gene birth and death rates, including an ad hoc simulation study. Remarkably, we found that state-of-the-art methods may produce very different rate estimates, which may lead to disparate conclusions regarding the evolution of chemosensory gene family sizes in Drosophila. Among biological factors, we found that a peculiarity of the gene turnover rates of D. sechellia was a major source of bias in global estimates, whereas gene conversion had negligible effects for the families analyzed here. Turnover rates vary considerably among families, subfamilies, and ortholog groups, although all analyzed families were quite dynamic in terms of gene turnover. Computer simulations showed that the methods that use ortholog group information appear to be the most accurate for the Drosophila chemosensory families. Most importantly, these results reveal the potential of rate heterogeneity among lineages to severely bias some turnover rate estimation methods, and the need to further evaluate the performance of these methods across a more diverse sample of gene families and phylogenetic contexts. Using branch-specific codon substitution models, we find further evidence of positive selection in recently duplicated genes, which attests to a nonneutral aspect of the gene birth-and-death process.
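An ad hoc simulation of gene gain and loss, of the kind used to benchmark such estimators, can be sketched as a discrete-time linear birth-death process. The rates, branch length and family sizes below are invented for illustration and are not from the paper:

```python
import random

def evolve_family(size, birth, death, time, dt=0.1, rng=random):
    """Evolve a gene family along a branch of length `time`: in each small
    interval dt, every gene independently duplicates with probability
    birth*dt and is lost with probability death*dt."""
    for _ in range(int(time / dt)):
        if size == 0:
            break  # extinct families stay extinct
        for _ in range(size):
            r = rng.random()
            if r < birth * dt:
                size += 1
            elif r < (birth + death) * dt:
                size -= 1
    return size

rng = random.Random(42)
# With birth == death the expected family size stays at its starting value
sizes = [evolve_family(10, 0.002, 0.002, 50.0, rng=rng) for _ in range(200)]
mean_size = sum(sizes) / len(sizes)
```

Comparing such simulated family sizes against the rates recovered by an estimator is one way to expose the biases the abstract describes.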

Relevance:

100.00%

Publisher:

Abstract:

This thesis analyses the calculations performed by the FanSave and PumpSave energy saving tools. With these programs, the energy consumption of variable speed drive control for fans and pumps can be compared to that of other control methods. FanSave covers centrifugal and axial fans, while PumpSave deals with centrifugal pumps. The programs can also suggest a suitable frequency converter from the ABB range. As initial values, the programs need information about the appliances, such as flow rate and efficiencies. Operating time is an important factor in calculating annual energy consumption; it is specified by its length and profile. Basic theory related to fans and pumps is introduced, without detailed dimensioning instructions. FanSave and PumpSave cover various flow control methods, which are introduced in the thesis in terms of their operating principles and suitability. The squirrel cage motor and the frequency converter are also introduced because of their close association with fan and pump drives. The second part of the thesis compares the results of FanSave and PumpSave calculations with calculations based on performance curves. Laboratory tests were also carried out with a centrifugal fan, an axial fan and a centrifugal pump. With the results of this thesis, the calculations of these programs can be adjusted to be more accurate, and some new features can be added.
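The core of such a comparison is the fan/pump affinity laws: with a variable speed drive, shaft power falls roughly with the cube of the flow, while mechanical control methods save far less. A minimal sketch, with an invented damper model and duty profile (not FanSave's actual calculation):

```python
def vsd_power(p_nominal, flow_fraction):
    """Affinity laws: shaft power scales with the cube of speed/flow."""
    return p_nominal * flow_fraction ** 3

def throttle_power(p_nominal, flow_fraction):
    """Rough linearised model of damper/throttle control
    (hypothetical coefficients, for illustration only)."""
    return p_nominal * (0.35 + 0.65 * flow_fraction)

# Duty profile: fraction of annual hours spent at each flow fraction
profile = {0.5: 0.3, 0.7: 0.4, 0.9: 0.3}
hours = 8760.0
p_nom = 75.0  # nominal shaft power, kW

e_vsd = sum(vsd_power(p_nom, q) * share * hours for q, share in profile.items())
e_thr = sum(throttle_power(p_nom, q) * share * hours for q, share in profile.items())
saving_kwh = e_thr - e_vsd
```

The annual saving grows quickly the more time the fan or pump spends at partial flow, which is why the operating-time profile is such an important input.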

Relevance:

100.00%

Publisher:

Abstract:

Determination of the viability of bacteria by the conventional plating technique is a time-consuming process. Methods based on enzyme activity or membrane integrity are much faster and may be good alternatives. The viability of suspensions of the plant-pathogenic bacterium Clavibacter michiganensis subsp. michiganensis (Cmm) was assessed using the fluorescent probes calcein acetoxymethyl ester (Calcein AM), carboxyfluorescein diacetate (cFDA), and propidium iodide (PI) in combination with flow cytometry. Heat-treated and viable (non-treated) Cmm cells labeled with Calcein AM, cFDA, PI, or combinations of Calcein AM and cFDA with PI could be distinguished by their fluorescence intensity in flow cytometry analysis. Non-treated cells showed relatively high green fluorescence levels due to staining with either Calcein AM or cFDA, whereas damaged (heat-treated) cells showed high red fluorescence levels due to staining with PI. Flow cytometry also allowed rapid quantification of viable Cmm cells labeled with Calcein AM or cFDA and of heat-treated cells labeled with PI. The application of flow cytometry in combination with fluorescent probes therefore appears to be a promising technique for assessing the viability of Cmm cells when they are labeled with Calcein AM or the combination of Calcein AM with PI.
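The readout described above amounts to gating events on the green and red fluorescence channels. A toy version of that gating logic, with invented thresholds and intensities (real gates are set from stained and unstained controls):

```python
def classify_cell(green, red, green_gate=100.0, red_gate=100.0):
    """Simple two-channel gate (hypothetical intensity thresholds):
    red-bright cells are scored membrane-damaged, green-bright cells viable."""
    if red >= red_gate:
        return "damaged"
    if green >= green_gate:
        return "viable"
    return "unstained"

# (green, red) intensities for four synthetic events
events = [(850, 12), (40, 960), (900, 30), (10, 8)]
counts = {}
for g, r in events:
    label = classify_cell(g, r)
    counts[label] = counts.get(label, 0) + 1
viability = counts.get("viable", 0) / len(events)
```

Counting gated events this way is what makes the cytometric assay so much faster than plating.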

Relevance:

100.00%

Publisher:

Abstract:

Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike traditional biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency, among many others. The very terminology of systems biology is foreign to traditional biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as of complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.

Relevance:

100.00%

Publisher:

Abstract:

Euclidean distance matrix analysis (EDMA) methods are used to determine whether a significant difference exists between conformational samples of antibody complementarity determining region (CDR) loops: the isolated L1 loop and L1 in a three-loop assembly (L1, L3 and H3), obtained from Monte Carlo simulation. Once a significant difference is detected, the specific inter-Cα distances that contribute to the difference are identified using EDMA. The estimated and improved mean forms of the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly, CDR loops of the antibody binding site, are described using EDMA and distance geometry (DGEOM). To the best of our knowledge, this is the first time EDMA methods have been used to analyze conformational samples of molecules obtained from Monte Carlo simulations. Therefore, the EDMA methods must be validated using both positive and negative control tests on the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly. The EDMA-I bootstrap null hypothesis tests showed false positive results for comparisons of six samples of the isolated L1 loop, and true positive results for comparisons of conformational samples of the isolated L1 loop and L1 in the three-loop assembly. The bootstrap confidence interval tests revealed true negative results for comparisons of the six samples of the isolated L1 loop, and false negative results for conformational comparisons between the isolated L1 loop and L1 in the three-loop assembly. Different conformational sample sizes were further explored, either by combining the samples of the isolated L1 loop to increase the sample size, or by clustering the samples using a self-organizing map (SOM) to narrow the conformational distribution of the samples being compared. However, neither approach improved the results of the bootstrap null hypothesis or confidence interval tests.
These results show that more work is required before EDMA methods can be used reliably for the comparison of samples obtained by Monte Carlo simulations.
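The basic EDMA objects are easy to state in code: a form matrix of all inter-landmark distances, an estimated mean form, and a ratio statistic comparing two mean forms. This illustrative sketch uses arbitrary 2-D landmarks rather than Cα coordinates, and the statistic is a simplified EDMA-I-style quantity, not the full bootstrap procedure:

```python
import math

def form_matrix(coords):
    """All pairwise inter-landmark Euclidean distances (upper triangle)."""
    n = len(coords)
    return [math.dist(coords[i], coords[j])
            for i in range(n) for j in range(i + 1, n)]

def mean_form(sample):
    """Element-wise mean of the form matrices of a conformational sample."""
    mats = [form_matrix(c) for c in sample]
    return [sum(m[k] for m in mats) / len(mats) for k in range(len(mats[0]))]

def edma_ratio_statistic(form_a, form_b):
    """EDMA-I style statistic: max/min of element-wise distance ratios.
    A value near 1 suggests the two mean forms are similar."""
    ratios = [a / b for a, b in zip(form_a, form_b)]
    return max(ratios) / min(ratios)

# Two synthetic "samples": identical triangles vs stretched triangles
sample_a = [[(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)] for _ in range(5)]
sample_b = [[(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)] for _ in range(5)]
t_same = edma_ratio_statistic(mean_form(sample_a), mean_form(sample_a))
t_diff = edma_ratio_statistic(mean_form(sample_a), mean_form(sample_b))
```

In a bootstrap null hypothesis test, this kind of statistic is recomputed on resampled conformations to obtain its null distribution.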

Relevance:

100.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and CCDs with high resolving power, variable star data are accumulating on the order of petabytes. The huge amount of data needs many automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes; these stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherically active stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as the Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of the methods can be automated, none of the methods stated above can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem in the case of huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial." Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification."
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the General Catalogue of Variable Stars or other databases such as the Variable Star Index, the characteristics of the variability have to be quantified in terms of variable star parameters.
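Of the period-search methods discussed, Phase Dispersion Minimisation is the easiest to sketch: fold the light curve on trial periods and keep the period that makes the phased curve tightest. A simplified toy version on synthetic, unevenly sampled data (the sampling law, star parameters and trial periods are invented):

```python
import math

def phase_fold(times, period):
    """Phases in [0, 1) of each observation for a given trial period."""
    return [(t % period) / period for t in times]

def pdm_theta(times, mags, period, n_bins=10):
    """Simplified PDM statistic (after Stellingwerf 1978): ratio of the
    pooled within-bin variance of the phased light curve to the overall
    variance; the true period minimises theta."""
    phases = phase_fold(times, period)
    mean = sum(mags) / len(mags)
    overall_var = sum((m - mean) ** 2 for m in mags) / (len(mags) - 1)
    bins = [[] for _ in range(n_bins)]
    for ph, m in zip(phases, mags):
        bins[min(int(ph * n_bins), n_bins - 1)].append(m)
    num, den = 0.0, 0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            num += sum((m - mu) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / overall_var

# Synthetic sinusoidal variable with true period 2.5 d, unevenly sampled
times = [0.173 * i ** 1.01 for i in range(300)]
mags = [12.0 + 0.4 * math.sin(2 * math.pi * t / 2.5) for t in times]
trial = [1.7, 2.1, 2.5, 3.0, 3.6]
best = min(trial, key=lambda p: pdm_theta(times, mags, p))
```

At the true period the phased curve is a clean sinusoid and theta is small; at wrong periods the phases scatter and theta approaches 1, which is exactly the sparse-light-curve failure mode the abstract warns about.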

Relevance:

100.00%

Publisher:

Abstract:

The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems, and in systems with constraints, has proven difficult for the available symbolic methods. New efficient methods are therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates, which arise from stoichiometric network analysis. One of the methods, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but worked very well for the attempted high-dimensional models involving more than 20 chemical species. Since the instability of reaction networks may lead to oscillatory behaviour, we also investigate some criteria for their stability using convex coordinates and quantifier elimination techniques.
We also study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All the developed algorithms have been integrated into a common software framework called PoCaB (platform to explore biochemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
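For a two-species network the Hopf condition is small enough to write down directly: a pair of eigenvalues crosses the imaginary axis where the Jacobian's trace vanishes while its determinant stays positive. A numeric sketch using the classic Brusselator, which is a textbook stand-in here, not one of the thesis's models:

```python
def brusselator_jacobian(a, b):
    """Jacobian of the Brusselator x' = a - (b+1)x + x^2*y, y' = b*x - x^2*y,
    evaluated at its unique fixed point (x, y) = (a, b/a)."""
    return [[b - 1.0, a * a],
            [-b, -a * a]]

def trace_det(jac):
    """Hopf test for a 2x2 Jacobian: trace = 0 with det > 0 gives a
    purely imaginary eigenvalue pair."""
    tr = jac[0][0] + jac[1][1]
    det = jac[0][0] * jac[1][1] - jac[0][1] * jac[1][0]
    return tr, det

a = 1.0
# Scan the rate parameter b; the Hopf point is at b = 1 + a^2 = 2
traces = [trace_det(brusselator_jacobian(a, b))[0] for b in (1.5, 2.0, 2.5)]
```

The symbolic methods in the thesis answer the same trace/determinant sign question, but with the rate constants left as symbols, which is what makes quantifier elimination necessary.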

Relevance:

100.00%

Publisher:

Abstract:

Structure is an important physical feature of the soil that is associated with water movement, the soil atmosphere, microorganism activity and nutrient uptake. A soil without any obvious organisation of its components is known as apedal, and this state can have marked effects on several soil processes. Accurate maps of topsoil and subsoil structure are desirable for a wide range of models that aim to predict erosion, solute transport, or the flow of water through the soil. Such maps would also be useful to precision farmers when deciding how to apply nutrients and pesticides in a site-specific way, and for targeting subsoiling and soil structure stabilization procedures. Typically, soil structure is inferred from bulk density or penetrometer resistance measurements, and more recently from soil resistivity and conductivity surveys. The former measurements are both time-consuming and costly, whereas observations by the latter methods can be made automatically and swiftly using a vehicle-mounted penetrometer or resistivity and conductivity sensors. The results of each of these methods, however, are affected by other soil properties, in particular moisture content at the time of sampling, texture, and the presence of stones. Traditional methods of observing soil structure identify the type of ped and its degree of development. Methods of ranking such observations from good to poor for different soil textures have been developed. Indicator variograms can be computed for each category or rank of structure, and these can be summed to give the sum of indicator variograms (SIV). Observations of topsoil and subsoil structure were made at four field sites where the soil had developed on different parent materials. The observations were ranked by four methods, and the indicator variograms and their sums were computed and modelled for each method of ranking.
The individual indicators were then kriged with the parameters of the appropriate indicator variogram model to map the probability of encountering soil with the structure represented by that indicator. The model parameters of the SIVs for each ranking system were used with the data to krige the soil structure classes, and the results are compared with those for the individual indicators. The relations between maps of soil structure and selected wavebands from aerial photographs are examined as a basis for planning surveys of soil structure. (C) 2007 Elsevier B.V. All rights reserved.
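The indicator-variogram machinery is simple to sketch for a one-dimensional transect: turn each structure class into a 0/1 indicator, compute its empirical semivariogram, and sum over classes to get the SIV. The transect values and class names below are invented, and a real survey works in two dimensions with irregular sampling:

```python
def indicator(values, category):
    """0/1 indicator transform for one structure class."""
    return [1.0 if v == category else 0.0 for v in values]

def semivariogram(z, max_lag):
    """Empirical semivariance gamma(h) = mean of 0.5*(z_i - z_{i+h})^2
    along a regularly spaced transect."""
    gam = {}
    for h in range(1, max_lag + 1):
        pairs = [(z[i] - z[i + h]) ** 2 for i in range(len(z) - h)]
        gam[h] = 0.5 * sum(pairs) / len(pairs)
    return gam

def sum_of_indicator_variograms(values, categories, max_lag):
    """SIV: sum the indicator variograms over all structure classes."""
    total = {h: 0.0 for h in range(1, max_lag + 1)}
    for c in categories:
        g = semivariogram(indicator(values, c), max_lag)
        for h in total:
            total[h] += g[h]
    return total

transect = ["good"] * 8 + ["moderate"] * 6 + ["poor"] * 8
siv = sum_of_indicator_variograms(transect, ["good", "moderate", "poor"], 5)
```

The fitted model parameters of each indicator variogram are what feed the kriging step described above.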

Relevance:

100.00%

Publisher:

Abstract:

Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling cyanobacterial behaviour in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes, reservoirs and rivers. A new deterministic mathematical model was developed that simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A parameter sensitivity analysis using a one-at-a-time approach was carried out, with two objectives: to identify the key parameters controlling the growth and movement patterns of cyanobacteria, and to provide a means for model validation. The results suggested that the maximum growth rate and the day length period were the most significant parameters in determining population growth and colony depth, respectively.
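A one-at-a-time sensitivity analysis perturbs each parameter in turn while all the others are held at their baseline values. A toy stand-in for the bloom model, using Monod-limited logistic growth; all parameter names and values here are invented for illustration and are not the paper's:

```python
def simulate_bloom(mu_max=1.2, k_light=30.0, light=100.0, days=6.0, dt=0.1):
    """Toy growth model: logistic growth with a Monod light limitation.
    Returns the final biomass as a fraction of carrying capacity."""
    mu = mu_max * light / (light + k_light)
    x = 0.01
    for _ in range(int(days / dt)):
        x += dt * mu * x * (1.0 - x)
    return x

def oat_sensitivity(base, perturb=0.10):
    """One-at-a-time: perturb each parameter by +10% on its own and record
    the relative change of the model output against the baseline run."""
    baseline = simulate_bloom(**base)
    sens = {}
    for name, value in base.items():
        changed = dict(base, **{name: value * (1.0 + perturb)})
        sens[name] = (simulate_bloom(**changed) - baseline) / baseline
    return sens

sens = oat_sensitivity({"mu_max": 1.2, "k_light": 30.0, "light": 100.0})
```

Ranking the parameters by the magnitude of these relative changes is how the key drivers (here the maximum growth rate) are identified.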

Relevance:

100.00%

Publisher:

Abstract:

Plane wave discontinuous Galerkin (PWDG) methods are a class of Trefftz-type methods for the spatial discretization of boundary value problems for the Helmholtz operator $-\Delta-\omega^2$, $\omega>0$. They include the so-called ultra weak variational formulation from [O. Cessenat and B. Després, SIAM J. Numer. Anal., 35 (1998), pp. 255-299]. This paper is concerned with the a priori convergence analysis of PWDG in the case of $p$-refinement, that is, the study of the asymptotic behavior of relevant error norms as the number of plane wave directions in the local trial spaces is increased. For convex domains in two space dimensions, we derive convergence rates, employing mesh skeleton-based norms, duality techniques from [P. Monk and D. Wang, Comput. Methods Appl. Mech. Engrg., 175 (1999), pp. 121-136], and plane wave approximation theory.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the findings of our study of peer-reviewed papers published at the International Conference on Persuasive Technology from 2006 to 2010. The study indicated that, of the 44 systems reviewed, 23 were reported to be successful, 2 to be unsuccessful, and 19 did not specify whether or not they were successful. 56 different techniques were mentioned, and it was observed that most designers use ad hoc definitions for the techniques or methods used in design. Hence we propose the need for research to establish unambiguous definitions of techniques and methods in the field.

Relevance:

100.00%

Publisher:

Abstract:

We present and analyse a space-time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space-time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell's equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds of high order in the mesh width and in the polynomial degree. The analysis framework also applies to scalar wave problems and to Maxwell's equations in higher space dimensions. Some numerical experiments demonstrate the theoretical results and the faster convergence compared to the non-Trefftz version of the scheme.

Relevance:

100.00%

Publisher:

Abstract:

A sensitive and robust analytical method for the spectrophotometric determination of ethyl xanthate, CH₃CH₂OCS₂⁻, at trace concentrations in pulp solutions from the froth flotation process is proposed. The method is based on the decomposition of ethyl xanthate, EtX⁻, with 2.0 mol L⁻¹ HCl, generating ethanol and carbon disulfide, CS₂. A gas diffusion cell ensures that only the volatile compounds diffuse through a PTFE membrane towards an acceptor stream of deionized water, thus avoiding interference from non-volatile compounds and suspended particles. The CS₂ is selectively detected by UV absorbance at 206 nm (ε = 65,000 L mol⁻¹ cm⁻¹). The measured absorbance is directly proportional to the EtX⁻ concentration in the sample solutions. Beer's law is obeyed over a 1×10⁻⁶ to 2×10⁻⁴ mol L⁻¹ concentration range of ethyl xanthate in the pulp, with an excellent correlation coefficient (r = 0.999) and a detection limit of 3.1×10⁻⁷ mol L⁻¹, corresponding to 38 μg L⁻¹. At flow rates of 200 μL min⁻¹ for the donor stream and 100 μL min⁻¹ for the acceptor channel, a sampling rate of 15 injections per hour could be achieved with RSD < 2.3% (n = 10, 300 μL injections of 1×10⁻⁵ mol L⁻¹ EtX⁻). Two practical applications demonstrate the versatility of the FIA method: (i) evaluation of the free EtX⁻ concentration during a laboratory study of the EtX⁻ adsorption capacity on pulverized sulfide ore (pyrite), and (ii) monitoring of EtX⁻ at different stages (from starting load to washing effluents) of a flotation pilot plant processing a Cu-Zn sulfide ore. (C) 2010 Elsevier B.V. All rights reserved.
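The quantification step is a direct application of the Beer-Lambert law, A = ε·l·c. A small sketch using the molar absorptivity reported above; the 1 cm path length and the ethyl xanthate anion molar mass of about 121.2 g/mol are assumptions for illustration:

```python
def concentration_from_absorbance(absorbance, epsilon=65000.0, path_cm=1.0):
    """Invert Beer's law A = epsilon * l * c for the CS2 chromophore."""
    return absorbance / (epsilon * path_cm)

def mol_per_l_to_ug_per_l(c_mol, molar_mass_g=121.2):
    """Convert a molar EtX- concentration to micrograms per litre."""
    return c_mol * molar_mass_g * 1.0e6

c = concentration_from_absorbance(0.065)            # mol/L at A = 0.065
detection_limit_ug = mol_per_l_to_ug_per_l(3.1e-7)  # molar detection limit
```

Note that converting the 3.1×10⁻⁷ mol L⁻¹ detection limit with this assumed molar mass reproduces the 38 μg L⁻¹ figure quoted in the abstract.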

Relevance:

100.00%

Publisher:

Abstract:

A flow system designed with solenoid valves is proposed for the determination of weak acid dissociable cyanide, based on the reaction with o-phthalaldehyde (OPA) and glycine, which yields a highly fluorescent isoindole derivative. The proposed procedure minimizes the main drawbacks of the reference batch procedure (based on the reaction with barbituric acid and pyridine followed by spectrophotometric detection), i.e., the use of toxic reagents, high reagent consumption and waste generation, low sampling rate, and poor sensitivity. Retention of the sample zone was exploited to increase the conversion rate of the analyte with minimized sample dispersion. Linear response (r = 0.999) was observed for cyanide concentrations in the range 1-200 μg L⁻¹, with a detection limit (99.7% confidence level) of 0.5 μg L⁻¹ (19 nmol L⁻¹). The sampling rate and coefficient of variation (n = 10) were estimated as 22 measurements per hour and 1.4%, respectively. The results of the determination of weak acid dissociable cyanide in natural water samples agreed with those of the batch reference procedure at the 95% confidence level. In addition to the improvement in analytical features compared with the flow system with continuous reagent addition (sensitivity and sampling rate 90% and 83% higher, respectively), the consumption of OPA was 230-fold lower.
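The linear response over 1-200 μg L⁻¹ rests on an ordinary least-squares calibration line. A generic sketch of that step; the fluorescence readings below are synthetic, not the paper's data:

```python
def fit_calibration(conc, signal):
    """Ordinary least squares for signal = intercept + slope * conc."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    return my - slope * mx, slope

def predict_conc(signal, intercept, slope):
    """Invert the calibration line to read a concentration off a signal."""
    return (signal - intercept) / slope

# Synthetic standards spanning the 1-200 ug/L working range
conc = [1.0, 25.0, 50.0, 100.0, 200.0]
signal = [5.0 + 0.8 * c for c in conc]  # perfectly linear, for illustration
intercept, slope = fit_calibration(conc, signal)
```

In practice the quoted r = 0.999 and the 0.5 μg L⁻¹ detection limit are derived from exactly this kind of fit plus the blank's signal scatter.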