918 resultados para Trial and error
Resumo:
The XSophe computer simulation software suite, consisting of a daemon, the XSophe interface and the computational program Sophe, is a state-of-the-art package for the simulation of electron paramagnetic resonance spectra. The Sophe program performs the computer simulation and includes a number of new technologies, including the SOPHE partition and interpolation schemes, a field segmentation algorithm, homotopy, parallelisation and spectral optimisation. The SOPHE partition and interpolation scheme, along with the field segmentation algorithm, greatly increases the speed of simulations for most systems. Multidimensional homotopy provides an efficient method for accurately tracing energy levels, and hence transitions, in the presence of energy level anticrossings and looping transitions, and allows computer simulations in frequency space. Recent enhancements to Sophe include a generalised treatment of distributions of orientational parameters, termed the mosaic misorientation linewidth model, and a faster, more efficient algorithm for the calculation of resonant field positions and transition probabilities. For complex systems, parallelisation enables their simulation on a parallel computer, and the optimisation algorithms in the suite provide the experimentalist with the possibility of finding the spin Hamiltonian parameters in a systematic manner rather than through a trial-and-error process. The XSophe software suite has been used to simulate multifrequency EPR spectra (200 MHz to 600 GHz) from isolated spin systems (S ≥ ½) and coupled centres (Si, Sj ≥ ½). Griffin, M.; Muys, A.; Noble, C.; Wang, D.; Eldershaw, C.; Gates, K.E.; Burrage, K.; Hanson, G.R. "XSophe, a Computer Simulation Software Suite for the Analysis of Electron Paramagnetic Resonance Spectra", 1999, Mol. Phys. Rep., 26, 60-84.
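As an illustration of the level-tracing idea mentioned above (following eigenvalues continuously through anticrossings), the minimal Python sketch below tracks the eigenvalues of a toy field-dependent Hamiltonian by eigenvector overlap rather than by energy ordering. This is not Sophe's homotopy algorithm; the 3-level Hamiltonian, coupling strength and field grid are arbitrary placeholders.

```python
# Minimal sketch of level tracking by eigenvector continuity (a homotopy-like
# idea, not Sophe's actual algorithm): follow each eigenvalue of H(B) across a
# field sweep by matching eigenvectors with the previous step, so that levels
# are not mislabelled at anticrossings as plain energy ordering would be.
# The 3-level Hamiltonian below is an arbitrary toy model, not a real spin system.
import numpy as np

D = np.diag([1.0, 0.0, -1.0])                  # field-dependent (crossing) part
V = 0.05 * (np.ones((3, 3)) - np.eye(3))       # small coupling -> anticrossings

fields = np.linspace(-2.0, 2.0, 401)
levels = np.zeros((len(fields), 3))

_, prev_vecs = np.linalg.eigh(fields[0] * D + V)
for i, B in enumerate(fields):
    vals, vecs = np.linalg.eigh(B * D + V)
    # permutation that maximises overlap with the previous step's eigenvectors
    # (a full implementation would use a proper assignment algorithm)
    order = np.argmax(np.abs(prev_vecs.conj().T @ vecs), axis=1)
    levels[i], prev_vecs = vals[order], vecs[:, order]

# levels[:, k] now follows level k smoothly through the anticrossings.
print(levels[::100])
```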
Resumo:
Support vector machines (SVMs) have recently emerged as a powerful technique for solving problems in pattern classification and regression. Best performance is obtained from the SVM when its parameters have their values optimally set. In practice, good parameter settings are usually obtained by a lengthy process of trial and error. This paper describes the use of a genetic algorithm to evolve these parameter settings for an application in mobile robotics.
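As a rough illustration of the approach described (evolving SVM parameter settings with a genetic algorithm rather than tuning them by trial and error), the sketch below evolves C and gamma for an SVM on a synthetic dataset. It is not the paper's implementation; the dataset, population size, mutation scale and selection rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's actual method): evolving SVM hyperparameters
# (C, gamma) with a simple genetic algorithm instead of manual trial and error.
# Assumes scikit-learn and numpy are available; settings are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(genome):
    # Genome encodes log10(C) and log10(gamma); fitness is CV accuracy.
    C, gamma = 10.0 ** genome[0], 10.0 ** genome[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform(low=[-2, -4], high=[3, 1], size=(20, 2))   # initial population
for generation in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the best half
    children = parents[rng.integers(0, 10, size=10)] + rng.normal(0, 0.3, (10, 2))
    pop = np.vstack([parents, children])                      # elitism + mutated offspring

best = pop[np.argmax([fitness(g) for g in pop])]
print("best log10(C), log10(gamma):", best)
```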
Resumo:
Eukaryotic membrane proteins cannot be produced in a reliable manner for structural analysis. Consequently, researchers still rely on trial-and-error approaches, which most often yield insufficient amounts. This means that membrane protein production is recognized by biologists as the primary bottleneck in contemporary structural genomics programs. Here, we describe a study to examine the reasons for successes and failures in recombinant membrane protein production in yeast, at the level of the host cell, by systematically quantifying cultures in high-performance bioreactors under tightly defined growth regimes. Our data show that the most rapid growth conditions of those chosen are not the optimal production conditions. Furthermore, the growth phase at which the cells are harvested is critical: we show that it is crucial to grow cells under tightly controlled conditions and to harvest them prior to glucose exhaustion, just before the diauxic shift. The differences in membrane protein yields that we observe under different culture conditions are not reflected in corresponding changes in mRNA levels of FPS1, but rather can be related to the differential expression of genes involved in membrane protein secretion and yeast cellular physiology. Copyright © 2005 The Protein Society.
Resumo:
Background: The production of high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences that has yet to be addressed in a truly rational manner. Typically, eukaryotic protein production experiments have relied on varying expression construct cassettes such as promoters and tags, or culture process parameters such as pH, temperature and aeration, to enhance yields. These approaches require repeated rounds of trial-and-error optimization and cannot provide a mechanistic insight into the biology of recombinant protein production. We published an early transcriptome analysis that identified genes implicated in successful membrane protein production experiments in yeast. While there has been a subsequent explosion in such analyses in a range of production organisms, no one has yet exploited the genes identified. The aim of this study was to use the results of our previous comparative transcriptome analysis to engineer improved yeast strains and thereby gain an understanding of the mechanisms involved in high-yielding protein production hosts.
Results: We show that tuning BMS1 transcript levels in a doxycycline-dependent manner resulted in optimized yields of functional membrane and soluble protein targets. Online flow microcalorimetry demonstrated that there had been a substantial metabolic change to cells cultured under high-yielding conditions, and in particular that high-yielding cells were more metabolically efficient. Polysome profiling showed that the key molecular event contributing to this metabolically efficient, high-yielding phenotype is a perturbation of the ratio of 60S to 40S ribosomal subunits from approximately 1:1 to 2:1, and correspondingly of 25S:18S ratios from 2:1 to 3:1. This result is consistent with the role of the gene product of BMS1 in ribosome biogenesis.
Conclusion: This work demonstrates the power of a rational approach to recombinant protein production by using the results of transcriptome analysis to engineer improved strains, thereby revealing the underlying biological events involved.
Resumo:
In recent years structured packings have become more widely used in the process industries because of their improved volumetric efficiency. Most structured packings consist of corrugated sheets placed in the vertical plane; the corrugations provide a regular network of channels for vapour-liquid contact. Until recently it has been necessary to develop new packings by trial and error, testing new shapes in the laboratory. The orderly, repetitive nature of the channel network produced by a structured packing suggests it may be possible to develop improved structured packings by applying computational fluid dynamics (CFD) to calculate the packing performance and evaluate changes in shape, so as to reduce the need for laboratory testing. In this work the CFD package PHOENICS has been used to predict the flow patterns produced in the vapour phase as it passes through the channel network. A particular novelty of the approach is to set up a method of solving the Navier-Stokes equations for any particular intersection of channels. The flow pattern of the streams leaving the intersection is then made the input to the downstream intersection. In this way the flow pattern within a section of packing can be calculated. The resulting heat or mass transfer performance can be calculated by other standard CFD procedures. The CFD predictions revealed a circulation developing within the channels which produces a loss in mass transfer efficiency. The calculations explained and predicted a change in mass transfer efficiency with the depth of the sheets, an effect that was also shown experimentally. New shapes of packing were proposed to remove the circulation and these were evaluated using CFD. A new shape was chosen and manufactured; it was tested experimentally and found to have a higher mass transfer efficiency than the standard packing.
Resumo:
Gas absorption, the removal of one or more constituents from a gas mixture, is widely used in chemical processes. In many gas absorption processes the gas mixture is already at high pressure, and in recent years organic solvents have been developed for physical absorption at high pressure followed by low-pressure regeneration of the solvent and recovery of the absorbed gases. Until now the discovery of new solvents has usually been by expensive and time-consuming trial-and-error laboratory tests. This work describes a new approach, whereby a solvent is selected from considerations of its molecular structure by applying recently published methods of predicting gas solubility from the molecular groups which make up the solvent molecule. The removal of the acid gases carbon dioxide and hydrogen sulfide from methane or hydrogen was used as a commercially important example. After a preliminary assessment to identify promising molecular groups, more than eighty new solvent molecules were designed and evaluated by predicting gas solubility. The other important physical properties were also predicted by appropriate theoretical procedures, and a commercially promising new solvent was chosen to have a high solubility for acid gases, a low solubility for methane and hydrogen, a low vapour pressure, and a low viscosity. The solvent chosen, of molecular structure CH3-CO-CH2-CH2-CO-CH3, was tested in the laboratory and shown to have physical properties close to those predicted, except for vapour pressure: gas solubilities were within 10% of prediction but lower, viscosity was within 10% but higher, and the vapour pressure was significantly lower than predicted. A computer program was written to predict gas solubility in the new solvent at the high pressures (25 bar) used in practice, based on the group contribution method of Skold-Jorgensen (1984). Before using this with the new solvent, acetonyl acetone, the method was shown to be sufficiently accurate by comparing predicted values of gas solubility with experimental solubilities from the literature for 14 systems at up to 50 bar. A test of the commercial potential of the new solvent was made by means of two design studies which compared the size of plant and approximate relative costs of absorbing acid gases with the new solvent against two commonly used solvents: refrigerated methanol (Rectisol process) and dimethyl ethers of polyethylene glycol (Selexol process). Both studies showed, in terms of capital and operating costs, a significant advantage for plant designed around the new solvent process.
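The following sketch illustrates only the general shape of a group-contribution estimate of the kind described (predicting a solubility-related quantity as a sum of contributions from the solvent's molecular groups). The group increments are hypothetical placeholders, not values from Skold-Jorgensen (1984) or from the thesis.

```python
# Illustrative sketch only: screening a solvent from its molecular groups by
# summing per-group increments. Assumed model: ln(Henry constant) for a given
# gas approximated as a sum of contributions from the solvent's functional
# groups. All numbers below are hypothetical placeholders.

HYPOTHETICAL_INCREMENTS = {            # ln(H) increments per group (made up)
    "CH3": -0.10,
    "CH2": -0.05,
    "C=O": -0.60,
}

def estimate_ln_henry(groups):
    """Sum hypothetical group increments for a solvent built from `groups`."""
    return sum(HYPOTHETICAL_INCREMENTS[g] * n for g, n in groups.items())

# Acetonyl acetone, CH3-CO-CH2-CH2-CO-CH3: two CH3, two CH2 and two C=O groups.
acetonyl_acetone = {"CH3": 2, "CH2": 2, "C=O": 2}
print("estimated ln(H) (placeholder units):", estimate_ln_henry(acetonyl_acetone))
```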
Resumo:
Understanding the structures and functions of membrane proteins is an active area of research within bioscience. Membrane proteins are key players in essential cellular processes such as the uptake of nutrients, the export of waste products, and the way in which cells communicate with their environment. It is therefore not surprising that membrane proteins are targeted by over half of all prescription drugs. Since most membrane proteins are not abundant in their native membranes, it is necessary to produce them in recombinant host cells to enable further structural and functional studies. Unfortunately, achieving the required yields of functional recombinant membrane proteins is still a bottleneck in contemporary bioscience. This has highlighted the need for defined and rational optimization strategies based upon experimental observation rather than relying on trial and error. We have published a transcriptome and subsequent genetic analysis that has identified genes implicated in high-yielding yeast cells. These results have highlighted a role for alterations to a cell's protein synthetic capacity in the production of high yields of recombinant membrane protein: paradoxically, reduced protein synthesis favors higher yields. These results highlight a potential bottleneck at the protein folding or translocation stage of protein production.
Resumo:
The slowdown in the drug discovery pipeline is owing, in part, to a lack of structural and functional information available for new drug targets. Membrane proteins, the targets of well over 50% of marketed pharmaceuticals, present a particular challenge. As they are not naturally abundant, they must be produced recombinantly for the structural biology that is a prerequisite to structure-based drug design. Unfortunately, however, obtaining high yields of functional, recombinant membrane proteins remains a major bottleneck in contemporary bioscience. While repeated rounds of trial-and-error optimization have not revealed (and cannot reveal) mechanistic details of the biology of recombinant protein production, examination of the host response has provided new insights. To this end, we published an early transcriptome analysis that identified genes implicated in high-yielding yeast cell factories, which has enabled the engineering of improved production strains. These advances offer hope that the bottleneck of membrane protein production can be relieved rationally.
Resumo:
The “trial and error” method is fundamental to Master Mind decision algorithms. On the basis of Master Mind games and strategies we consider some data mining methods for tests using students as teachers. Voting, twins, opposite, simulate and observer methods are investigated. For a pure database these combinatorial algorithms are faster than many AI and Master Mind methods. The complexities of these algorithms are compared with those of basic combinatorial methods in AI. ACM Computing Classification System (1998): F.3.2, G.2.1, H.2.1, H.2.8, I.2.6.
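For readers unfamiliar with Master Mind solvers, the sketch below shows the basic trial-and-error loop they build on: guess, receive black/white peg feedback, and keep only the candidate codes consistent with all feedback so far. It is a generic illustration, not one of the voting, twins, opposite, simulate or observer methods investigated in the paper.

```python
# Minimal sketch of the trial-and-error loop underlying Master Mind solvers:
# repeatedly guess a code still consistent with every feedback seen so far.
from itertools import product
from collections import Counter

COLOURS, PEGS = 6, 4

def feedback(secret, guess):
    """Return (black, white) pegs for a guess against a secret code."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def solve(secret):
    candidates = list(product(range(COLOURS), repeat=PEGS))
    trials = 0
    while True:
        guess = candidates[0]            # simplest policy: first consistent code
        trials += 1
        fb = feedback(secret, guess)
        if fb == (PEGS, 0):
            return guess, trials
        candidates = [c for c in candidates if feedback(c, guess) == fb]

print(solve((2, 4, 4, 1)))               # finds the secret in a handful of trials
```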
Resumo:
Water-alternating-gas (WAG) is an enhanced oil recovery method combining the improved macroscopic sweep of water flooding with the improved microscopic displacement of gas injection. The optimal design of the WAG parameters is usually based on numerical reservoir simulation via trial and error, limited by the reservoir engineer’s availability. Employing optimisation techniques can guide the simulation runs and reduce the number of function evaluations. In this study, robust evolutionary algorithms are utilised to optimise hydrocarbon WAG performance in the E-segment of the Norne field. The first objective function is selected to be the net present value (NPV), and two global semi-random search strategies, a genetic algorithm (GA) and particle swarm optimisation (PSO), are tested on different case studies with different numbers of controlling variables, which are sampled from the set of water and gas injection rates, bottom-hole pressures of the oil production wells, cycle ratio, cycle time, the composition of the injected hydrocarbon gas (miscible/immiscible WAG) and the total WAG period. In progressive experiments, the number of decision-making variables is increased, increasing the problem complexity while potentially improving the efficacy of the WAG process. The second objective function is selected to be the incremental recovery factor (IRF) within a fixed total WAG simulation time, and it is optimised using the same optimisation algorithms. The results from the two optimisation techniques are analysed, and their performance, convergence speed and the quality of the optimal solutions found by the algorithms in multiple trials are compared for each experiment. The distinctions between the optimal WAG parameters resulting from NPV and oil recovery optimisation are also examined. This is the first known work optimising over this complete set of WAG variables. The first use of PSO to optimise a WAG project at the field scale is also illustrated. Compared to the reference cases, the best overall values of the objective functions found by GA and PSO were 13.8% and 14.2% higher, respectively, if NPV is optimised over all the above variables, and 14.2% and 16.2% higher, respectively, if IRF is optimised.
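As an illustration of how such an optimiser replaces trial-and-error simulation runs, the sketch below applies a basic particle swarm optimisation loop to a placeholder objective standing in for the simulator-evaluated NPV (or IRF), with the decision vector standing in for normalised WAG controls. The swarm coefficients, bounds and objective are illustrative assumptions, not those used in the study.

```python
# Minimal particle swarm optimisation sketch over a placeholder objective.
# In the study the objective would be NPV or incremental recovery factor
# returned by the reservoir simulator; here it is a simple stand-in.
import numpy as np

rng = np.random.default_rng(1)
dim, n_particles, iters = 5, 20, 100          # e.g. 5 normalised WAG control variables
lo, hi = np.zeros(dim), np.ones(dim)          # normalised variable bounds

def objective(x):                              # placeholder for simulator NPV
    return -np.sum((x - 0.6) ** 2)

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best normalised WAG settings found:", gbest)
```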
Resumo:
In the past, many papers have been presented which show that the coating of cutting tools often yields decreased wear rates and reduced coefficients of friction. Although different theories have been proposed, covering areas such as hardness theory, diffusion barrier theory, thermal barrier theory, and reduced friction theory, most have not dealt with the question of how and why the coating of tool substrates with hard materials such as titanium nitride (TiN), titanium carbide (TiC) and aluminium oxide (Al2O3) transforms the performance and life of cutting tools. This project discusses the complex interrelationship between the thermal barrier function and the relatively low sliding friction coefficient of TiN on an undulating tool surface, and presents the results of an investigation into the cutting characteristics and performance of EDMed surface-modified carbide cutting tool inserts. The tool inserts were coated with TiN by the physical vapour deposition (PVD) method. PVD coating is also known as ion plating, the general term for the coating method in which the film is created by attracting ionised metal vapour (in this case titanium) and ionised gas onto a negatively biased substrate surface. Coating by PVD was chosen because it is carried out at a temperature of not more than 500 °C, whereas the chemical vapour deposition (CVD) process is carried out at a very high temperature of about 850 °C and in two stages of heating up the substrates; the high temperatures involved in CVD affect the strength of the (tool) substrates. In this study, comparative cutting tests using TiN-coated control specimens with no EDM surface structures and TiN-coated EDMed tools with a crater-like surface topography were carried out on mild steel grade EN-3. Various cutting speeds were investigated, up to 40% above the tool manufacturer’s recommended speed. Fifteen minutes of cutting were carried out for each insert at the speeds investigated; conventional tool inserts normally have a tool life of approximately 15 minutes of cutting. After every five cuts (passes), microscopic pictures of the tool wear profiles were taken in order to monitor the progressive wear on the rake face and on the flank of the insert. The power load was monitored for each cut using an on-board meter on the CNC machine to establish the amount of power needed for each stage of operation; the spindle drive for the machine is an 11 kW motor. The results obtained confirmed the advantages of cutting at all speeds investigated using EDMed coated inserts, in terms of reduced tool wear and low power loads. Moreover, the surface finish on the workpiece was consistently better for the EDMed inserts. The thesis discusses the relevance of the finite element method in the analysis of metal cutting processes, so that metal machinists can design, manufacture and deliver tools to the market quickly and on time without going through a trial-and-error approach for new products. Improvements in manufacturing technologies require better knowledge of modelling metal cutting processes; the use of computational models has great value in reducing or even eliminating the number of experiments traditionally used for tool design, process selection, machinability evaluation, and chip breakage investigations. In this work, theoretical and experimental investigations of metal machining were given particular attention.
Finite element analysis (FEA) was given priority in this study to predict tool wear and coating deformations during machining. Particular attention was devoted to the complicated mechanisms usually associated with metal cutting, such as interfacial friction, the heat generated by friction, severe strain in the cutting region, and high strain rates. It is therefore concluded that a roughened contact surface comprising peaks and valleys coated with a hard material (TiN) provides wear-resisting properties, as the coating becomes entrapped in the valleys and helps reduce friction at the chip-tool interface. The contributions to knowledge are: a. A wear-resisting surface structure for application to contact surfaces and structures in metal cutting and forming tools, with the ability to give a wear-resisting surface profile. b. A technique for designing tools with a roughened surface comprising peaks and valleys covered in a conformal coating of a material such as TiN or TiC; this wear-resisting structure has a surface roughness profile composed of valleys which entrap residual coating material during wear, thereby enabling the entrapped coating material to give improved wear resistance. c. Knowledge of how tool life is increased through wear resistance, hardness and chemical stability at high temperatures, because of the reduced friction at the tool-chip and work-tool interfaces due to the coating, which leads to reduced heat generation at the cutting zones. d. The finding that undulating surface topographies on cutting tips tend to hold coating materials longer in the valleys, giving enhanced protection to the tool, so that the tool can cut 40% faster and last 60% longer than conventional tools on the market today.
Resumo:
The purpose of this bachelor's thesis is to find an answer to the question of how strong a DRM system can be before consumers no longer accept it. DRM systems exist at many levels of strictness, but they are not suitable as such for every platform: digital rights management in the games industry follows its own rules, distinct from those of, for example, the music industry. In addition, there is a currently accepted level of DRM from which it can be risky to deviate. The study is qualitative in nature, applying both discourse analysis and content analysis. The research material consists of the texts of various online discussion threads, on the basis of which an answer to the research question is sought. The threads are classified by strength according to how strong the DRM-related news item was that gave rise to each thread. Because the material is informal language that always carries its own meaning in context, the chosen methods are suitable for analysing it. Based on the analyses of the different threads, it can be said that DRM cannot be stricter than the level that currently prevails. Even a small deviation from this level can cause great resentment among consumers, to the point where the company loses revenue. The current level has been reached through various experiments from which consumers have suffered, so they will not willingly accept any stricter level than the one prevailing at the moment. If a company finds that it must tighten the level, the tightening must be done gradually and disguised with additional features. Consumers are aware of their rights and will not readily give them up any more than is necessary.
Resumo:
Dissertation composed of two articles.
Resumo:
The plan to emplace large quantities of radioactive materials in the saline rocks of salt domes practically rules out retrievability. In assessing the long-term behaviour of the rocks, the mine workings and the diapir as a whole, errors cannot be excluded and cannot be corrected, just as in the emplacement process itself. The applicants do not treat the geoscientific aspects of emplacement (subproject 6) competently and devote disproportionately little attention to the associated problems. They lack the planning care appropriate to such a project, handle the available data imprecisely or selectively in their argumentation, and give the impression of intending to proceed below the Earth's surface according to the 'trial-and-error' principle. Salt domes are fundamentally unstable rock bodies in tectonic terms. The rocks that predominantly make them up are the most water-soluble in the Earth's crust; they react most sensitively to mechanical and thermal stress and are the most reactive in possible interactions between the emplaced material and the host medium. Salt domes are the rock bodies that respond most sensitively to mining interventions, especially when the dissolution equilibrium at the salt table is disturbed, when artificial cavities in the interior trigger creep (convergence) of the entire salt body, and when emplacement brings thermal loads higher than the temperatures ever associated with the formation and transformation of these rocks. The fact that extractive mining is possible in diapirs despite this sensitivity is no proof of their suitability as a repository. The geosciences have conceptual models for interpreting salt genesis, salt ascent and rock-mechanical behaviour; some of these models are generally accepted as 'textbook truth', while others are controversially debated as hypotheses. Long-term predictions of rock behaviour are not reliable when they rest on disputed conceptual models of the nature and behaviour of rocks. The selection of the salt dome preceded its geoscientific exploration. The few published data on the regional geology do not suggest a salt dome that would be particularly easy to control by mining engineering. The location of the diapir within the distribution area of water-rich Quaternary channel systems argues against the choice of site, as do the complicated internal tectonics to be expected and the politically imposed impossibility of investigating the Gorleben-Rambow structure as a whole. The selection of the plant site, including the shaft facility and the injection of tritium-bearing water on the salt dome, pre-empted by land purchases at the Gorleben site, must be judged a wrong decision. The accident scenario of a water inrush, which cannot be ruled out, could have a destructive effect at the surface on the stability of the huge buildings and storage basins and thus cause contamination of the surroundings. Geoscientific reasons, experience from mining engineering, and the expectation that faulty actions cannot be excluded lead the author to the conviction that the final disposal of radioactive waste in salt can be neither recommended nor justified.
Resumo:
The effectiveness and value of entrepreneurship education is much debated within the academic literature. The individual’s experience is advocated as being key to shaping entrepreneurial education and design through a multiplicity of theoretical concepts. Latent, pre-nascent and nascent entrepreneurship (doing) studies within the accepted literature provide an exceptional richness and diversity of thought; however, there is a paucity of research into latent entrepreneurship education. In addition, Tolman’s early work shows the existence of cases whereby a novel problem is solved without trial and error, and sees such previous learning situations and circumstances as “examples of latent learning and reasoning” (Deutsch, 1956, p. 115). Latent learning has historically been the cause of much academic debate; however, Coon’s (2004, p. 260) work refers to “latent (hidden) learning … (as being) … without obvious reinforcement and remains hidden until reinforcement is provided”, and this forms the working definition for the purpose of this study.