Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem in which the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem in which one is interested in whether the local similarities between two sequences are due to the sequences being related or merely to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model whose best local alignment score is as good as or better than the observed one. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
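The p-value definition above can be illustrated with a naive Monte Carlo estimate (a sketch only; the thesis derives analytical upper bounds instead). The scoring parameters and the i.i.d. uniform null model below are illustrative assumptions, not those of the publications:

```python
import random

def smith_waterman(a, b, match=1, mismatch=-1, gap=-2):
    """Best local alignment score via standard dynamic programming."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

def alignment_pvalue(x, y, trials=200, alphabet="ACGT", seed=0):
    """Estimate P(best null score >= observed) by sampling random
    sequence pairs of the same lengths from an i.i.d. uniform null model."""
    rng = random.Random(seed)
    observed = smith_waterman(x, y)
    hits = sum(
        smith_waterman(
            "".join(rng.choice(alphabet) for _ in x),
            "".join(rng.choice(alphabet) for _ in y),
        ) >= observed
        for _ in range(trials)
    )
    return observed, (hits + 1) / (trials + 1)  # pseudocount avoids p = 0
```

For two related sequences the estimated p-value is small; the sampling approach is exactly the kind of costly simulation the framework in the thesis is designed to avoid.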
Abstract:
Special switching sequences can be employed in space-vector-based generation of pulsewidth-modulated (PWM) waveforms for voltage-source inverters. These sequences involve switching a phase twice, switching the second phase once, and clamping the third phase in a subcycle. Advanced bus-clamping PWM (ABCPWM) techniques have been proposed recently that employ such switching sequences. This letter studies the spectral properties of the waveforms produced by these PWM techniques. Further, analytical closed-form expressions are derived for the total rms harmonic distortion due to these techniques. It is shown that the ABCPWM techniques lead to lower distortion than conventional space vector PWM and discontinuous PWM at higher modulation indexes. The findings are validated on a 2.2-kW constant $V/f$ induction motor drive and also on a 100-kW motor drive.
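For context, a conventional space-vector modulator computes the active-vector dwell times in each subcycle as below; the double-switching sequences of ABCPWM then redistribute these intervals (applying one active vector twice) rather than change the dwell-time arithmetic. This is a generic textbook sketch, not code from the letter, and the normalization convention is an assumption:

```python
import math

def svpwm_dwell_times(m, alpha, Ts=1.0):
    """Dwell times of the two active vectors and the zero vector(s) in one
    subcycle of space-vector PWM.
    m: modulation index (in this convention the linear range ends at
       m = sin(60 deg) ~= 0.866); alpha: angle within the 60-degree sector [rad]."""
    k = Ts / math.sin(math.pi / 3)
    t1 = k * m * math.sin(math.pi / 3 - alpha)  # first active vector
    t2 = k * m * math.sin(alpha)                # second active vector
    t0 = Ts - t1 - t2                           # remaining (zero-vector) time
    return t1, t2, t0
```

Conventional SVPWM splits t0 equally between the two zero states, whereas bus-clamping sequences assign all of t0 to one zero state, which is what clamps the third phase for the subcycle.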
Abstract:
When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than that which an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles. This led to serious fatigue failures arising from stress concentrations. Also, many metal-forming processes, fabrication techniques, and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, in the last 80 years, the study and evaluation of stress concentrations has been a primary objective in the study of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli, and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, the finite element methods. With the advent of large, high-speed computers, the development of finite element concepts, and a good understanding of functional analysis, it is now, in principle, possible to obtain with economy satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application.
An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis, and design.
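As a concrete instance of an exact solution for an infinite field, the classical Kirsch result for a circular hole in an infinite plate under remote uniaxial tension gives the hoop stress along the hole edge and the familiar stress concentration factor of 3. This is a standard textbook illustration, not material from the text above:

```python
import math

def kirsch_hoop_stress(sigma, theta):
    """Hoop stress at the edge of a circular hole in an infinite plate
    under remote uniaxial tension sigma applied along theta = 0
    (Kirsch solution, small-deformation linear elasticity)."""
    return sigma * (1.0 - 2.0 * math.cos(2.0 * theta))

# Stress concentration factor = peak hoop stress / remote stress.
# The maximum (3*sigma) occurs at theta = +/-90 deg, transverse to the load;
# at theta = 0 the hoop stress is compressive (-sigma).
scf = max(kirsch_hoop_stress(1.0, math.radians(t)) for t in range(360))
```

The factor of 3 is independent of the hole size, which is exactly why simple equilibrium calculations, which know nothing of the hole's local field, underestimate the peak stress.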
Abstract:
Room-temperature zinc-ion-conducting molten electrolytes based on acetamide, urea, and zinc perchlorate or zinc triflate have been prepared and characterized by various physicochemical, spectroscopic, and electrochemical techniques. The ternary molten electrolytes are easy to prepare and can be handled under ambient conditions. They show excellent stability, high ionic conductivity, relatively low viscosity, and other favorable physicochemical and electrochemical properties that make them good electrolytes for rechargeable zinc batteries. Specific conductivities of 3.4 and 0.5 mS cm⁻¹ at 25 °C are obtained for the zinc-perchlorate- and zinc-triflate-containing melts, respectively. Vibrational spectroscopic data reveal that the free-ion concentration is high in the optimized composition. Rechargeable Zn batteries have been assembled using the molten electrolytes, with gamma-MnO2 as the positive electrode and Zn as the negative electrode. They show excellent electrochemical characteristics with high discharge capacities. This study opens up the possibility of using acetamide-based molten electrolytes as alternative electrolytes in rechargeable zinc batteries. (C) 2009 The Electrochemical Society.
Abstract:
Motivated by a problem from fluid mechanics, we consider a generalization of the standard curve shortening flow problem for a closed embedded plane curve, such that the area enclosed by the curve is forced to decrease at a prescribed rate. Using formal asymptotic and numerical techniques, we derive possible extinction shapes as the curve contracts to a point, depending on the prescribed rate of area decrease; we find there is a wider class of extinction shapes than for standard curve shortening, for which initially simple closed curves are always asymptotically circular. We also provide numerical evidence that self-intersection is possible for non-convex initial conditions, distinguishing between pinch-off and coalescence of the curve interior.
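A quick sanity check of the circular case mentioned above: under standard curve shortening flow each point moves with normal velocity equal to the curvature, so a circle of radius R obeys dR/dt = -1/R and its area shrinks at the constant rate 2π; the generalized problem prescribes this rate instead of letting the flow choose it. A minimal Euler integration (illustrative only, not the asymptotic or numerical machinery of the paper):

```python
import math

def shrink_circle(R0, dt=1e-4, t_end=0.1):
    """Euler-integrate dR/dt = -1/R (a circle under curve shortening flow)
    and return the mean rate of area loss, which should be close to 2*pi."""
    R = R0
    A0 = math.pi * R0 * R0
    steps = round(t_end / dt)
    for _ in range(steps):
        R -= dt / R  # normal velocity = curvature = 1/R, directed inward
    A = math.pi * R * R
    return (A0 - A) / (steps * dt)  # average -dA/dt
```

Analytically A(t) = pi*(R0**2 - 2*t), so the exact rate is 2π regardless of R0, consistent with the circle being the universal extinction shape for the standard flow.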
Abstract:
One major reason for the global decline of biodiversity is habitat loss and fragmentation. Conservation areas can be designed to reduce biodiversity loss, but as resources are limited, conservation efforts need to be prioritized in order to achieve the best possible outcomes. The field of systematic conservation planning developed as a response to opportunistic approaches to conservation that often resulted in biased representation of biological diversity. The last two decades have seen the development of increasingly sophisticated methods that account for information about biodiversity conservation goals (benefits), economic considerations (costs), and socio-political constraints. In this thesis I focus on two general topics related to systematic conservation planning. First, I address two aspects of the question of how biodiversity features should be valued. (i) I investigate the extremely important but often neglected issue of differential prioritization of species for conservation. Species prioritization can be based on various criteria and is always goal-dependent, but it can also be implemented in a scientifically more rigorous way than is the usual practice. (ii) I introduce a novel framework for conservation prioritization, which is based on continuous benefit functions that convert increasing levels of biodiversity feature representation into increasing conservation value, using the principle that more is better. Traditional target-based systematic conservation planning is a special case of this approach, in which a step function is used as the benefit function. We have further expanded the benefit function framework for area prioritization to address issues such as protected area size and habitat vulnerability. In the second part of the thesis I address the application of community-level modelling strategies to conservation prioritization.
One of the most serious issues in systematic conservation planning currently is not the deficiency of methodology for selection and design, but simply the lack of data. Community-level modelling offers a surrogate strategy that makes conservation planning more feasible in data-poor regions. We have reviewed the available community-level approaches to conservation planning. These range from simplistic classification techniques to sophisticated modelling and selection strategies. We have also developed a general and novel community-level approach to conservation prioritization that significantly improves on previously available methods. This thesis introduces further degrees of realism into conservation planning methodology. The benefit-function-based conservation prioritization framework largely circumvents the problematic phase of target setting and, by allowing trade-offs between species representations, provides a more flexible and hopefully more attractive approach for conservation practitioners. The community-level approach seems highly promising and should prove valuable for conservation planning, especially in data-poor regions. Future work should focus on integrating prioritization methods so that multiple factors influencing the prioritization process can be dealt with in combination, and on further testing and refining the community-level strategies using real, large datasets.
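The benefit-function idea can be sketched with a toy greedy selector: each species' conservation value is a concave function of the fraction of its occurrences protected, so additional representation yields diminishing returns and complementary sites are favored. This is a hypothetical illustration, not the software developed in the thesis; the power-law benefit function is an assumed example (a step function at a target fraction would recover target-based planning):

```python
def greedy_prioritize(occ, n_sites, benefit=lambda p: p ** 0.25):
    """Greedy reserve selection: repeatedly add the site that most increases
    total conservation value sum_s benefit(protected_s / total_s).
    occ[site][species] = local occurrence level of each species at each site."""
    n_species = len(occ[0])
    totals = [sum(site[s] for site in occ) for s in range(n_species)]
    protected = [0.0] * n_species
    chosen = []
    remaining = set(range(len(occ)))

    def value(prot):
        return sum(benefit(prot[s] / totals[s]) for s in range(n_species))

    for _ in range(n_sites):
        base = value(protected)
        best_site, best_gain = None, -1.0
        for i in remaining:
            trial = [protected[s] + occ[i][s] for s in range(n_species)]
            gain = value(trial) - base
            if gain > best_gain:
                best_site, best_gain = i, gain
        chosen.append(best_site)
        remaining.discard(best_site)
        protected = [protected[s] + occ[best_site][s] for s in range(n_species)]
    return chosen
```

With two specialist sites and one mixed site, the concave benefit makes the mixed site win first, because partially covering two species is worth more than fully covering one.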
Abstract:
This paper presents a new numerical integration technique on arbitrary polygonal domains. The polygonal domain is mapped conformally to the unit disk using Schwarz-Christoffel mapping, and a midpoint quadrature rule defined on this unit disk is used. This method eliminates the need for the two-level isoparametric mapping usually required. Moreover, the positivity of the Jacobian is guaranteed. Numerical results presented for a few benchmark problems in the context of polygonal finite elements show that the proposed method yields accurate results.
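The disk-quadrature half of the scheme can be sketched as a midpoint rule in polar coordinates, where the radial coordinate r is itself the (always positive) Jacobian of the polar map; the Schwarz-Christoffel step is omitted here. A minimal illustration under these assumptions, not the paper's implementation:

```python
import math

def disk_midpoint_quadrature(f, nr=64, nt=64):
    """Midpoint rule over the unit disk in polar coordinates:
    integral of f dA ~= sum f(x, y) * r * dr * dtheta,
    where r is the (always positive) Jacobian of the polar map."""
    dr, dt = 1.0 / nr, 2.0 * math.pi / nt
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr            # radial midpoint
        for j in range(nt):
            t = (j + 0.5) * dt        # angular midpoint
            x, y = r * math.cos(t), r * math.sin(t)
            total += f(x, y) * r * dr * dt
    return total
```

Integrating f = 1 recovers the disk area pi, and smooth integrands converge at the usual second-order midpoint rate; in the paper, the integrand would additionally carry the Schwarz-Christoffel Jacobian of the polygon-to-disk map.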
Abstract:
Achieving stabilization of telomeric DNA in the G-quadruplex conformation by various organic compounds has been an important goal for medicinal chemists seeking to develop new anticancer agents. Several compounds are known to stabilize G-quadruplexes. However, relatively few are known to induce their formation and/or alter the topology of the preformed quadruplex DNA. Herein, four compounds having the 1,3-phenylene-bis(piperazinyl benzimidazole) unit as a basic skeleton have been synthesized, and their interactions with the 24-mer telomeric DNA sequence from Tetrahymena thermophila, d(T(2)G(4))(4), have been investigated using high-resolution techniques such as circular dichroism (CD) spectropolarimetry, CD melting, emission spectroscopy, and polyacrylamide gel electrophoresis. The data obtained, in the presence of one of three ions (Li+, Na+, or K+), indicate that all the new compounds have a high affinity for G-quadruplex DNA, and the strength of the binding with the G-quadruplex depends on (i) phenyl ring substitution, (ii) the piperazinyl side chain, and (iii) the type of monovalent cation present in the buffer. Results further suggest that these compounds are able to abet the conversion of the intramolecular quadruplex into parallel-stranded intermolecular G-quadruplex DNA. Notably, these compounds are also capable of inducing and stabilizing the parallel-stranded quadruplex from randomly structured DNA in the absence of any stabilizing cation. The kinetics of the structural changes induced by these compounds could be followed by recording the changes in the CD signal as a function of time. The implications of the findings mentioned above are discussed in this paper.
Abstract:
This research investigates techniques to analyse long duration acoustic recordings to help ecologists monitor birdcall activities. It designs a generalized algorithm to identify a broad range of bird species. It allows ecologists to search for arbitrary birdcalls of interest, rather than restricting them to just a very limited number of species on which the recogniser is trained. The algorithm can help ecologists find sounds of interest more efficiently by filtering out large volumes of unwanted sounds and only focusing on birdcalls.
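One simple way to realize such filtering, sketched here as an assumption rather than the algorithm actually developed in this research, is to flag only those frames of a recording whose acoustic energy inside a call's frequency band exceeds a threshold:

```python
import math

def band_energy(frame, fs, f_lo, f_hi):
    """Energy of one audio frame within [f_lo, f_hi] Hz via a naive DFT."""
    n = len(frame)
    energy = 0.0
    for k in range(1, n // 2):
        f = k * fs / n                      # frequency of bin k
        if f_lo <= f <= f_hi:
            re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            energy += re * re + im * im
    return energy

def flag_frames(signal, fs, frame_len, f_lo, f_hi, threshold):
    """Indices of non-overlapping frames whose in-band energy exceeds the
    threshold; everything else is filtered out as unwanted sound."""
    flagged = []
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        if band_energy(signal[i:i + frame_len], fs, f_lo, f_hi) > threshold:
            flagged.append(i // frame_len)
    return flagged
```

A real system would use an FFT and more robust acoustic features, but the principle is the same: discard the bulk of a long recording cheaply so that only candidate birdcall segments need closer inspection.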
Abstract:
Continuous epidural analgesia (CEA) and continuous spinal postoperative analgesia (CSPA) provided by a mixture of local anaesthetic and opioid are widely used for postoperative pain relief. With the introduction of so-called microcatheters, for example, CSPA found its way particularly into orthopaedic surgery. These techniques, however, may be associated with dose-dependent side-effects such as hypotension, weakness in the legs, and nausea and vomiting. At times, they may fail to offer sufficient analgesia, e.g., because of a misplaced catheter. The correct position of an epidural catheter might be confirmed by the supposedly easy and reliable epidural stimulation test (EST). The aims of this thesis were to determine a) whether the efficacy, tolerability, and reliability of CEA might be improved by adding the α2-adrenergic agonists adrenaline and clonidine to CEA, and by the repeated use of EST during CEA; and b) the feasibility of CSPA given through a microcatheter after vascular surgery. Studies I–IV were double-blinded, randomized, and controlled trials; Study V was of a diagnostic, prospective nature. Patients underwent arterial bypass surgery of the legs (I, n=50; IV, n=46), total knee arthroplasty (II, n=70; III, n=72), and abdominal surgery or thoracotomy (V, n=30). Postoperative lumbar CEA consisted of regular mixtures of ropivacaine and fentanyl, either without or with adrenaline (2 µg/ml (I) and 4 µg/ml (II)) or clonidine (2 µg/ml (III)). CSPA (IV) was given through a microcatheter (28G) and contained either ropivacaine (max. 2 mg/h) or a mixture of ropivacaine (max. 1 mg/h) and morphine (max. 8 µg/h). The epidural catheter tip position (V) was evaluated both by EST, at the moment of catheter placement and several times during CEA, and by epidurography as the reference diagnostic test. CEA and CSPA were administered for 24 or 48 h. Study parameters included pain scores assessed with a visual analogue scale, requirements of rescue pain medication, vital signs, and side-effects.
Adrenaline (I and II) had no beneficial influence on the efficacy or tolerability of CEA. The total amounts of epidurally infused drugs were even increased in the adrenaline group in Study II (p=0.02, RM ANOVA). Clonidine (III) augmented pain relief with lowered amounts of epidurally infused drugs (p=0.01, RM ANOVA) and a reduced need for rescue oxycodone given i.m. (p=0.027, MW-U; median difference 3 mg (95% CI 0–7 mg)). Clonidine did not contribute to sedation, and its influence on haemodynamics was minimal. CSPA (IV) provided satisfactory pain relief with only limited blockade of the legs (no inter-group differences). EST (V) was often associated with technical problems and difficulties of interpretation; e.g., it failed to identify the four patients whose catheters were outside the spinal canal already at the time of catheter placement. As adjuvants to lumbar CEA, clonidine only slightly improved pain relief, while adrenaline did not provide any benefit. The role of EST applied at the time of epidural catheter placement or repeatedly during CEA remains open. The microcatheter CSPA technique appeared effective and reliable, but needs to be compared with routine CEA after peripheral arterial bypass surgery.
Abstract:
Four new 5-aminoisophthalates of cobalt and nickel have been prepared employing hydro/solvothermal methods: [Co2(C8H5NO4)2(C4H4N2)(H2O)2]·3H2O (I), [Ni2(C8H5NO4)2(C4H4N2)(H2O)2]·3H2O (II), [Co2(H2O)(μ3-OH)2(C8H5NO4)] (III), and [Ni2(H2O)(μ3-OH)2(C8H5NO4)] (IV). Compounds I and II are isostructural, having anion-deficient CdCl2-related layers bridged by a pyrazine ligand, giving rise to a bilayer arrangement. Compounds III and IV have one-dimensional M−O(H)−M chains connected by the 5-aminoisophthalate units, forming a three-dimensional structure. The coordinated as well as the lattice water molecules of I and II could be removed and reinserted by simple heating−cooling cycles under atmospheric conditions. The removal of the coordinated water molecule is accompanied by changes in the coordination environment around the M2+ centers (M = Co, Ni) and in the color of the samples (purple to blue, Co; green to dark yellow, Ni). This change has been examined by a variety of techniques that include in situ single-crystal-to-single-crystal transformation studies and in situ IR and UV−vis spectroscopic studies. Magnetic studies indicate antiferromagnetic behavior in I and II, field-induced magnetism in III, and canted antiferromagnetic behavior in IV.
Abstract:
A considerable amount of work has been dedicated to the development of analytical solutions for the flow of chemical contaminants through soils. Most of the analytical solutions for complex transport problems are closed-form series solutions. The convergence of these solutions depends on the eigenvalues obtained from a corresponding transcendental equation. The difficulty in obtaining exact solutions from analytical models thus encourages the use of numerical solutions for parameter estimation, even though the latter models are computationally expensive. In this paper, a combination of two swarm-intelligence-based algorithms is used for accurate estimation of design transport parameters from the closed-form analytical solutions. Estimation of eigenvalues from a transcendental equation is treated as a multimodal, discontinuous function optimization problem. The eigenvalues are estimated using an algorithm based on the glowworm swarm strategy. Parameter estimation for the inverse problem is handled using a standard PSO algorithm. Integration of these two algorithms enables an accurate estimation of design parameters using closed-form analytical solutions. The present solver is applied to a real-world inverse problem in environmental engineering. The inverse model based on swarm intelligence techniques is validated, and its accuracy in parameter estimation is shown. The proposed solver quickly estimates the design parameters with great precision.
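To illustrate why the eigenvalue step is multimodal, consider a typical transport-type transcendental equation beta*tan(beta) = H, which has infinitely many positive roots, one per branch of the tangent. The sketch below uses a plain scan-and-bisect search rather than the paper's glowworm swarm algorithm, and this particular equation is an assumed example; rewriting it in the continuous form beta*sin(beta) - H*cos(beta) = 0 avoids the discontinuities at the tangent's poles:

```python
import math

def eigenvalues(H, n_roots, step=1e-3):
    """First n_roots positive roots of beta*tan(beta) = H, found by scanning
    the equivalent continuous form f(beta) = beta*sin(beta) - H*cos(beta)
    for sign changes and refining each bracket by bisection."""
    f = lambda b: b * math.sin(b) - H * math.cos(b)
    roots, b = [], step
    while len(roots) < n_roots:
        if f(b) * f(b + step) < 0:          # bracket found
            lo, hi = b, b + step
            for _ in range(60):             # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        b += step
    return roots
```

Each root then feeds one term of the closed-form series solution; the swarm approach in the paper targets the same set of roots but treats their location as a multimodal optimization problem.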
Abstract:
Sugarcane has garnered much interest for its potential as a viable renewable energy crop. While the use of sugar juice for ethanol production has been in practice for years, interest is growing in using the fibrous co-product known as bagasse for producing renewable fuels and bio-based chemicals. The success of these efforts, and the development of new varieties of energy canes, could greatly increase the use of sugarcane and sugarcane biomass for fuels while enhancing industry sustainability and competitiveness. Sugarcane-Based Biofuels and Bioproducts examines the development of a suite of established and developing biofuels and other renewable products derived from sugarcane and sugarcane-based co-products, such as bagasse. Chapters provide broad-ranging coverage of sugarcane biology, biotechnological advances, and breakthroughs in production and processing techniques. This text brings together essential information regarding the development and utilization of new fuels and bioproducts derived from sugarcane. Authored by experts in the field, Sugarcane-Based Biofuels and Bioproducts is an invaluable resource for researchers studying biofuels, sugarcane, and plant biotechnology, as well as for sugar and biofuels industry personnel.
Abstract:
Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented for designing an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can effectively treat nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values). However, since the system parameters for a realistic model patient can differ from those of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e., the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates a new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both the stability of the cancer cell dynamics and the boundedness of the network weights. From simulation studies, this adaptive control design approach is found to be very effective in treating CML for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.
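The nominal dynamic-inversion step can be illustrated on a scalar toy plant (not the CML cell-dynamics model): for xdot = f(x) + g(x)*u, the control input is chosen to cancel the plant nonlinearity and impose stable linear error dynamics. The plant and gains below are hypothetical:

```python
def dynamic_inversion_control(x, x_des, f, g, k=2.0):
    """Nominal dynamic-inversion control for xdot = f(x) + g(x)*u:
    choose u so the tracking error e = x - x_des obeys edot = -k*e."""
    return (-k * (x - x_des) - f(x)) / g(x)

def simulate(x0, x_des, f, g, dt=0.01, steps=1000):
    """Euler simulation of the closed loop; x converges to x_des as long
    as the model used by the controller matches the true plant."""
    x = x0
    for _ in range(steps):
        u = dynamic_inversion_control(x, x_des, f, g)
        x += (f(x) + g(x) * u) * dt
    return x
```

The catch motivating the adaptive augmentation above: if the true patient's f differs from the model's f, the cancellation is imperfect and the imposed error dynamics no longer hold, which is exactly when the neuro-adaptive term earns its keep.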
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has made computed tomography perfusion (CTp) and angiography (CTA) feasible in evaluating acute ischemic stroke. This study focuses on new multidetector computed tomography techniques, namely whole-brain and first-pass CT perfusion, plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11; R² was 0.73; and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps such as cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients within 3 hours of stroke onset.
A monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all of these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). The effect of IV-rtPA on the affected brain was also investigated by measuring the volume of salvaged tissue in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain than did controls. Carotid CTA was compared with carotid DSA in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.