978 results for "Normalization constraint"


Relevance: 10.00%

Publisher:

Abstract:

The term water stress refers to the effects of low water availability on microbial growth and physiology. Water availability has been proposed as a major constraint on the use of microorganisms for the bioremediation of contaminated sites. Sphingomonas wittichii RW1 is a bacterium capable of degrading the xenobiotic compounds dibenzofuran and dibenzo-p-dioxin, and has potential for targeted bioremediation. The aim of the current work was to identify genes implicated in water stress in RW1 by means of transposon mutagenesis and mutant growth experiments. Conditions of low water potential were mimicked by adding NaCl to the growth media. Three different mutant selection or separation methods were tested, which, however, recovered different mutants. Recovered transposon mutants with poorer growth under salt-induced water stress carried insertions in genes involved in proline and glutamate biosynthesis, as well as in a gene putatively involved in aromatic compound catabolism. Transposon mutants growing more poorly on medium with lowered water potential also included ones with insertions in genes of more general function, such as transcriptional regulation, an elongation factor, a cell division protein, RNA polymerase β, or an aconitase.

Relevance: 10.00%

Publisher:

Abstract:

Promazine hydrochloride was accidentally injected into the antecubital artery of a 42-year-old woman, resulting in severe ischemia of the second and third fingers of her right hand, which lasted for four days before she was hospitalized. Vasodilation combining an axillary plexus block with intravenous sodium nitroprusside did not improve the ischemia, and local thrombolysis was performed using recombinant tissue-type plasminogen activator (50 mg over 8 hours), resulting in normalization of digital pressure in one of the two affected fingers. The outcome was favourable and amputation was avoided.

Relevance: 10.00%

Publisher:

Abstract:

This Master's thesis was done for the Loviisa unit of Hollming Works Oy. At the time of writing, the company was starting the serial assembly of wind turbine machineries. The aim of the work was to develop the operation of the machinery assembly workshop. Before the work began, a product workshop for the assembly of wind turbine machineries had been set up in the company. The workshop assembles heavy subassemblies, which are finally combined into a machinery, also called a nacelle. After this, in the outfitting phase, various electrical and hydraulic systems, among others, are installed in the machinery, and at the end of the outfitting phase the operation of the machinery is tested. In the last phase, a fibreglass cover is installed on top of the machinery. The first part of the thesis reviews the fundamentals of production and material control, assembly development, and layout design; on this basis, a layout plan based on assembly cells was drawn up for the wind turbine workshop. Production control in the plan is based on bottleneck (theory-of-constraints) control: the outfitting phase, which forms the production bottleneck, pulls the subassemblies from the earlier phases, and after outfitting and testing, the machinery moves under push control to the installation of the fibreglass cover. The system aims at efficient use of space, a short lead time, and a small amount of work in progress.

Relevance: 10.00%

Publisher:

Abstract:

Knowledge of the pathological diagnosis before deciding on the best strategy for treating parasellar lesions is of prime importance, owing to the relatively high morbidity and side-effects of open direct approaches to this region, which is known to be rich in important vasculo-nervous structures. When imaging is not evocative enough to ascertain an accurate pathological diagnosis, a percutaneous biopsy through the transjugal-transoval route (of Hartel) may be performed to guide the therapeutic decision. The chapter is based on the authors' experience in 50 patients who underwent the procedure over the past ten years. There was no mortality and only little (mostly transient) morbidity. The pathological diagnostic accuracy of the method proved good, with a sensitivity of 0.83 and a specificity of 1. In the chapter the authors first recall the surgical anatomy background from personal laboratory dissections. They then describe the technical procedure, as well as the tissue harvesting method. Finally, they define the indications together with the decision-making process. Owing to the constrained trajectory of the biopsy needle inserted through the foramen ovale, the accessible lesions are only those located in the Meckel trigeminal cave, the posterior sector of the cavernous sinus compartment, and the upper part of the petroclival region. The authors advise performing this percutaneous biopsy when imaging does not provide sufficient evidence of the pathological nature of the lesion to support a therapeutic decision. The goal is to avoid unnecessary open surgery or radiosurgery, as well as inappropriate chemo- or radiotherapy.
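For reference, the reported accuracy figures have their usual definitions (generic notation, not taken from the chapter): with TP, FN, TN and FP the numbers of true-positive, false-negative, true-negative and false-positive biopsy diagnoses,

\[ \text{sensitivity} = \frac{TP}{TP + FN} = 0.83, \qquad \text{specificity} = \frac{TN}{TN + FP} = 1, \]

i.e. 83% of positive cases were correctly identified and there were no false positives.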

Relevance: 10.00%

Publisher:

Abstract:

MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Methods are therefore needed to estimate evolutionary distances between expression profiles, as well as a neutral reference against which to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance, and neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with conserved but specific patterns of expression. Surprisingly, we find that both Pearson's and Euclidean distances, used as measures of expression similarity between genes, depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the dataset. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
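As a minimal illustration of the two distances discussed above (hypothetical data and function names, not the paper's code), consider two profiles that share the same expression pattern but differ only in normalization:

    import numpy as np

    def pearson_distance(x, y):
        # 1 - Pearson correlation: insensitive to scale and offset.
        return 1.0 - np.corrcoef(x, y)[0, 1]

    def euclidean_distance(x, y):
        return np.linalg.norm(x - y)

    rng = np.random.default_rng(0)
    profile = rng.random(20)      # hypothetical profile over 20 conditions
    rescaled = 2.0 * profile      # same pattern, different normalization

    print(pearson_distance(profile, rescaled))   # ~0: pattern judged identical
    print(euclidean_distance(profile, rescaled)) # grows with the scale factor,
                                                 # i.e. depends on normalization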

Relevance: 10.00%

Publisher:

Abstract:

Machines can often be divided into subsystems: control and regulation systems, force-producing actuators, and force-transmitting mechanisms. The individual subsystems have been simulated with computers for several decades, but combining the subsystems into one simulation is a more recent development. In mechanism modelling, for example, the force produced by an actuator has often been described as a constant or as a force varying as a function of time. Correspondingly, in actuator analysis, the load transmitted to the actuator by the mechanism has been described as a constant force or as a time-dependent load representing the duty cycle. When the subsystems are separated from each other in this way, examining the interactions between them is very inaccurate, and accounting for the effect of one subsystem on the behaviour of the whole system is difficult. Numerical modelling methods particularly suited to computers have been developed for the dynamics of mechanisms. Most of the methods are based on the Lagrangian approach, which allows modelling with freely chosen coordinate variables. To enable a numerical solution, the system of differential-algebraic equations produced by the method must be manipulated, for example by differentiating the constraint equations twice. In the original numerical solution of the method, all generalized coordinates describing the mechanism are integrated at every time step. In methods derived from this basic approach, either the independent generalized coordinates are integrated and the dependent coordinates are solved from the constraint equations, or the size of the equation system is reduced, for example by using different rotation coordinates in the velocity and acceleration analyses than in the position analysis. Most integration methods were originally intended for ordinary differential equations (ODEs), so the algebraic constraint equations describing the joints may cause problems. Correcting violations of the joint constraints, i.e. stabilization, is therefore very important for the success of dynamic simulation of mechanisms and for the correctness of the results. The principle of virtual work, used in deriving the modelling methods, assumes that the constraint forces do no work, i.e. that no displacement against the constraints occurs. Especially in longer analyses of complex systems, the joint constraints are not satisfied exactly; the energy balance of the system is then violated, and virtual energy that breaks the principle of virtual work accumulates in the system, so the results no longer hold. This report examines different types of modelling and solution methods and compares their performance in the numerical solution of simple mechanisms. The methods are assessed in terms of solution efficiency, satisfaction of the joint constraints, and conservation of the energy balance.
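For reference, the Lagrange-multiplier formulation alluded to above can be written in the standard form (generic notation, not taken from the report):

\[ M(q)\,\ddot{q} + \Phi_q^{\mathsf{T}}\lambda = Q(q,\dot{q},t), \qquad \Phi(q,t) = 0, \]

where \(\Phi(q,t)\) collects the joint constraint equations and \(\lambda\) the constraint forces. Differentiating the constraints twice gives \(\Phi_q\,\ddot{q} = \gamma(q,\dot{q},t)\), which makes numerical integration possible but lets the position- and velocity-level constraints drift. One classic stabilization scheme, Baumgarte's method, counteracts this drift by replacing \(\ddot{\Phi} = 0\) with

\[ \ddot{\Phi} + 2\alpha\,\dot{\Phi} + \beta^{2}\Phi = 0, \qquad \alpha, \beta > 0. \]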

Relevance: 10.00%

Publisher:

Abstract:

Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program that reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and an optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; they combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and in the steel structures of flat and ridge roofs. This thesis demonstrates that the most time-consuming part of modelling is significantly reduced, modelling errors are reduced, and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested on optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
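The abstract does not spell the selection rule out; as an illustration of how an evolutionary algorithm can rank candidates without constraint weight factors, the sketch below implements a widely used parameter-free feasibility rule (Deb's rules), which may differ from the rule developed in the thesis:

    # Parameter-free comparison of two candidate designs: no penalty weights.
    def total_violation(constraints, x):
        # Sum of positive parts of the constraints g_i(x) <= 0.
        return sum(max(0.0, g(x)) for g in constraints)

    def better(x, y, objective, constraints):
        vx = total_violation(constraints, x)
        vy = total_violation(constraints, y)
        if vx == 0.0 and vy == 0.0:
            return objective(x) < objective(y)  # both feasible: lower objective wins
        if vx == 0.0 or vy == 0.0:
            return vx == 0.0                    # feasible beats infeasible
        return vx < vy                          # both infeasible: less violation wins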

Relevance: 10.00%

Publisher:

Abstract:

The present study was done with two different servo-systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model were studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These systems are nonlinear in nature and their dynamic equations have several unknown parameters, and system identification is a prerequisite to the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, in order to find the best parameters of a servo-hydraulic system with flexible load. DE offers fast convergence and accurate solutions regardless of the initial values of the parameters. The control of hydraulic servo-systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since a change in system parameters under the same gains will cause overshoot or even loss of stability. The highly non-linear behaviour of these devices makes them ideal subjects for sophisticated controllers. This study applies a second-order reference model to the position control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. To compensate for the lack of damping in the hydraulic system, acceleration feedback was used. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research concerned a Permanent Magnet Linear Synchronous Motor (PMLSM). A recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter are used to avoid the local-minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig; it works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to drive the flexible load to the desired position reference as fast as possible and without excessive oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using a Kalman filter. The proposed controller is implemented and tested in a linear motor test drive, and the responses are presented.
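As a minimal sketch of bounded parameter identification with DE in the spirit described above (the model, data and parameter names are placeholders, not the actual servo-hydraulic model of the study):

    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 1.0, 200)
    measured = np.exp(-3.0 * t) * np.sin(12.0 * t)   # stand-in measurement

    def model(params, t):
        damping, freq = params
        return np.exp(-damping * t) * np.sin(freq * t)

    def cost(params):
        # Sum-of-squares error between simulated and measured response.
        return np.sum((model(params, t) - measured) ** 2)

    bounds = [(0.1, 10.0), (1.0, 50.0)]              # box limits on parameters
    result = differential_evolution(cost, bounds, seed=1)
    print(result.x)                                  # recovers ~(3.0, 12.0)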

Relevance: 10.00%

Publisher:

Abstract:

The aim of this study was to replicate the 5- and 6-factor second-order structures of the 16PF-5. For the 5-factor structure, the structure obtained by Russell and Karol (1995) is taken as the theoretical reference, and for the 6-factor structure (including an additional Reasoning factor), the one obtained in American samples by Cattell and Cattell (1995). Three procedures are used to study replicability: a) exploratory factor analysis, b) analysis of the orthogonal Procrustes structure, and c) analysis of the congruence indices among the three factor matrices. The factor matrices obtained in the present study are similar to those reported in the reference studies, although the Procrustes solution turns out to be slightly more alike. The congruence indices are generally acceptable, so it is concluded that the 16PF-5 shows good replicability.
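A common congruence index between factor solutions is Tucker's coefficient; the sketch below (illustrative, not the authors' code) computes it column-wise between two loading matrices:

    import numpy as np

    def tucker_congruence(a, b):
        # Congruence between two loading vectors (one factor per solution).
        return (a @ b) / np.sqrt((a @ a) * (b @ b))

    def congruence_matrix(A, B):
        # A, B: (variables x factors) loading matrices from two studies.
        return np.array([[tucker_congruence(A[:, i], B[:, j])
                          for j in range(B.shape[1])]
                         for i in range(A.shape[1])])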

Relevance: 10.00%

Publisher:

Abstract:

Optimization models in metabolic engineering and systems biology typically focus on optimizing a single criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimizing for maximum yield in a given condition may entail unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. To overcome the numerical difficulties of handling multiple criteria in the optimization, we propose a heuristic approach based on the epsilon-constraint method that reduces the computational burden of generating a set of Pareto-optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to laboratory testing, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach with a case study that optimizes ethanol production in the fermentation of Saccharomyces cerevisiae.
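As a minimal sketch of the epsilon-constraint idea (toy power-law objectives standing in for a GMA model; not the paper's formulation): one objective is optimized while the other is held above a threshold eps, and sweeping eps traces a set of Pareto points.

    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: -(x[0] ** 0.5) * (x[1] ** 0.5)   # maximize a power-law rate
    f2 = lambda x: 1.0 - x[0]                       # competing criterion

    pareto = []
    for eps in np.linspace(0.1, 0.9, 9):
        res = minimize(f1, x0=[0.5, 0.5],
                       bounds=[(1e-3, 1.0), (1e-3, 1.0)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x, e=eps: f2(x) - e}])
        pareto.append((-res.fun, f2(res.x)))        # one Pareto point per eps

    print(pareto)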

Relevance: 10.00%

Publisher:

Abstract:

In the classical theorems of extreme value theory, the limits of suitably rescaled maxima of sequences of independent, identically distributed random variables are studied. The vast majority of the literature on the subject deals with affine normalization. We argue that more general normalizations are natural from a mathematical and physical point of view, and we work them out. The problem is approached using the language of renormalization-group transformations in the space of probability densities. The limit distributions are fixed points of the transformation, and the study of its differential around them allows a local analysis of the domains of attraction and the computation of finite-size corrections.
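Schematically (generic notation, not necessarily the authors'): for the maximum \(M_n\) of \(n\) i.i.d. variables, the classical theorems study affine limits of the form

\[ \Pr\!\left[\frac{M_n - b_n}{a_n} \le x\right] \;\xrightarrow{\,n\to\infty\,}\; G(x). \]

Doubling the sample size defines a transformation on distribution functions,

\[ (\mathcal{T}F)(x) = F\bigl(g(x)\bigr)^{2}, \]

where \(g\) is the (possibly nonlinear) rescaling applied at each doubling; a limit law is a fixed point, \(G(g(x))^{2} = G(x)\), and linearizing \(\mathcal{T}\) around \(G\) gives access to the domains of attraction and the finite-size corrections mentioned above.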

Relevance: 10.00%

Publisher:

Abstract:

Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years these problems have gone from an entertainment to a research area that is interesting in two respects. On the one hand, Sudoku problems, being a variant of Gerechte designs and Latin squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behaviour they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications, and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modelling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balancing they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables with the same value in all the solutions of an instance) and the hardness of GSP.
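As a minimal sketch of GSP as a CSP with all-different constraints over rows, columns, and m-by-n rectangular blocks (a plain backtracking solver for illustration; the actual experiments rely on dedicated CSP and SAT solvers):

    def solve_gsp(grid, m, n):
        N = m * n                                   # side length / domain size
        def ok(r, c, v):
            if any(grid[r][j] == v for j in range(N)): return False
            if any(grid[i][c] == v for i in range(N)): return False
            br, bc = r - r % m, c - c % n           # top-left cell of the block
            return all(grid[br + i][bc + j] != v
                       for i in range(m) for j in range(n))
        for r in range(N):
            for c in range(N):
                if grid[r][c] == 0:                 # 0 marks a hole
                    for v in range(1, N + 1):
                        if ok(r, c, v):
                            grid[r][c] = v
                            if solve_gsp(grid, m, n): return True
                            grid[r][c] = 0
                    return False                    # no value fits: backtrack
        return True                                 # no holes left: solved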

Relevance: 10.00%

Publisher:

Abstract:

Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion Problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate with problem hardness.
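As a minimal sketch of backbone identification (brute force over an explicitly enumerated solution set, viable only for small instances and purely illustrative, not the method used in the paper):

    from itertools import product

    def backbone(solutions):
        # solutions: all solved grids of one instance (lists of lists).
        # A cell is a backbone variable if it takes the same value in every
        # solution; given cells trivially qualify and would be filtered out.
        first = solutions[0]
        N = len(first)
        return {(r, c): first[r][c]
                for r, c in product(range(N), range(N))
                if all(s[r][c] == first[r][c] for s in solutions)}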

Relevance: 10.00%

Publisher:

Abstract:

Tractable cases of the binary CSP mainly fall into two classes: constraint language restrictions and constraint graph restrictions. To better understand and identify the hardest binary CSPs, in this work we propose methods to increase their hardness by increasing the balance of both the constraint language and the constraint graph. The balance of a constraint is increased by maximizing the number of domain elements with the same number of occurrences. The balance of the graph is defined using the classical definition from graph theory. In this sense we present two graph models: a first model that increases the balance of a graph by maximizing the number of vertices with the same degree, and a second one that additionally increases the girth of the graph, because a high girth implies a high treewidth, an important parameter for the hardness of binary CSPs. Our results show that our more balanced graph models and constraints produce instances that are harder, by several orders of magnitude, than typical random binary CSP instances. We also detect, at least for sparse constraint graphs, a higher treewidth for our graph models.
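As a minimal illustration of the first graph model's balancing idea (a random regular graph maximizes the number of vertices with the same degree; this sketch does not control girth and is not the paper's generator):

    import networkx as nx

    n_vars, degree = 30, 4
    balanced = nx.random_regular_graph(degree, n_vars, seed=0)
    typical = nx.gnp_random_graph(n_vars, degree / (n_vars - 1), seed=0)

    print(sorted(d for _, d in balanced.degree()))  # every vertex has degree 4
    print(sorted(d for _, d in typical.degree()))   # degrees spread around 4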

Relevance: 10.00%

Publisher:

Abstract:

PURPOSE: All methods presented to date to map both conductivity and permittivity rely on multiple acquisitions to quantitatively compute the magnitude of the radiofrequency transmit field, B1+. In this work, we propose a method to compute both conductivity and permittivity based solely on relative receive coil sensitivities (B1-), which can be obtained in one single measurement, without the need either to explicitly perform transmit/receive phase separation or to make assumptions regarding those phases. THEORY AND METHODS: To demonstrate the validity and the noise sensitivity of our method, we used electromagnetic finite-difference simulations of a 16-channel transceiver array. To experimentally validate our methodology at 7 Tesla, multi-compartment phantom data were acquired using a standard 32-channel receive coil system and two-dimensional (2D) and 3D gradient-echo acquisitions. The reconstructed electric properties were correlated with those measured using dielectric probes. RESULTS: The method was demonstrated both in simulations and on phantom data, with correlations to both the modeled and the bench measurements close to identity. The noise properties were modeled and understood. CONCLUSION: The proposed methodology allows the electrical properties of a sample to be determined quantitatively using any MR contrast, the only constraints being the need for 4 or more receive coils and high SNR. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
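For context, most electrical-properties mapping builds on the homogeneous-region Helmholtz relation between a measured RF field component B and the complex permittivity (generic form with an \(e^{i\omega t}\) convention; signs flip for the opposite convention, and the paper's receive-sensitivity-based formulation may differ in detail):

\[ \frac{\nabla^{2}B}{B} = -\mu_0\,\omega^{2}\varepsilon_c, \qquad \varepsilon_c = \varepsilon - \frac{i\sigma}{\omega}, \]

so that

\[ \sigma = \frac{1}{\mu_0\,\omega}\,\operatorname{Im}\frac{\nabla^{2}B}{B}, \qquad \varepsilon = -\frac{1}{\mu_0\,\omega^{2}}\,\operatorname{Re}\frac{\nabla^{2}B}{B}. \]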