898 results for Non-conventional models of career
Abstract:
Hierarchical visualization systems are desirable because a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex high-dimensional data sets. We extend an existing locally linear hierarchical visualization system, PhiVis [1], in several directions: (1) we allow for non-linear projection manifolds (the basic building block is the Generative Topographic Mapping, GTM), (2) we introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree, (3) we describe folding patterns of the low-dimensional projection manifold in the high-dimensional data space by computing and visualizing the manifold's local directional curvatures. Quantities such as magnification factors [3] and directional curvatures are helpful for understanding the layout of the nonlinear projection manifold in the data space and for further refinement of the hierarchical visualization plot. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. We demonstrate the principle of the approach on a complex 12-dimensional data set and mention possible applications in the pharmaceutical industry.
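As a minimal, illustrative sketch of the probabilistic building block behind such EM-trained hierarchical visualization systems (this is generic mixture-model code, not the authors' PhiVis/GTM implementation; all names and parameter values are assumptions), the E-step responsibilities that assign data points to local models can be computed as follows:

```python
import numpy as np

def e_step_responsibilities(X, means, var, weights):
    """E-step of EM for an isotropic Gaussian mixture: posterior
    responsibility r[n, k] of local model k for data point n."""
    d = X.shape[1]
    # Squared distance of every point to every component mean.
    sq = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    # Log-density under each isotropic Gaussian component.
    log_p = -0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var)
    log_w = np.log(weights)[None, :] + log_p
    # Normalize in log space for numerical stability.
    log_w -= log_w.max(axis=1, keepdims=True)
    r = np.exp(log_w)
    return r / r.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))        # e.g. a 12-dimensional data set
means = rng.normal(size=(3, 12))      # three local models in the hierarchy
r = e_step_responsibilities(X, means, var=1.0, weights=np.ones(3) / 3)
```

In a hierarchical system, the responsibilities of a parent model weight the data when fitting its children, which is what allows the tree to be built interactively in a top-down fashion.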
Abstract:
A recent method for phase equilibria, the AGAPE method, has been used to predict activity coefficients and excess Gibbs energy for binary mixtures with good accuracy. The theory, based on a generalised London potential (GLP), accounts for intermolecular attractive forces. Unlike existing prediction methods, for example UNIFAC, the AGAPE method uses only information derived from accessible experimental data and molecular information for pure components. Presently, the AGAPE method has some limitations, namely that the mixtures must consist of small, non-polar compounds with no hydrogen bonding, at low to moderate pressures and at conditions below the critical conditions of the components. The distinction between vapour-liquid equilibria and gas-liquid solubility is rather arbitrary, and it seems reasonable to extend these ideas to solubility. The AGAPE model uses a molecular lattice-based mixing rule. By judicious use of computer programs, a methodology was created to examine a body of experimental gas-liquid solubility data for gases such as carbon dioxide, propane, n-butane or sulphur hexafluoride, which all have critical temperatures a little above 298 K, dissolved in benzene, cyclo-hexane and methanol. Within this methodology the value of the GLP as an ab initio combining rule for such solutes in very dilute solutions in a variety of liquids has been tested. Using the GLP as a mixing rule involves the computation of rotationally averaged interactions between the constituent atoms, and new calculations have had to be made to discover the magnitude of the unlike pair interactions. These numbers have been seen as significant in their own right in the context of the behaviour of infinitely-dilute solutions. A method for extending this treatment to "permanent" gases has also been developed.
The findings from the GLP method and from the more general AGAPE approach have been examined in the context of other models for gas-liquid solubility, both "classical" and contemporary, in particular those derived from equations-of-state methods and from reference solvent methods.
Abstract:
The main theme of research of this project concerns the study of neural networks to control uncertain and non-linear control systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A great part of this project is devoted to opening the frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks, which enabled successful design of stable adaptive control in the state-feedback case, are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results.
Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
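For readers unfamiliar with the baseline algorithm, the following is a minimal global-best PSO sketch, not the dispersive variant the thesis proposes; the inertia and acceleration coefficients are common textbook defaults, and all names are illustrative assumptions:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (g)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Stochastic velocity update toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Sphere function: a standard benchmark on which PSO converges quickly.
best_x, best_f = pso_minimize(lambda z: float((z ** 2).sum()), dim=5)
```

Premature convergence arises when the whole swarm collapses onto `g` too early; the dispersive variant described above counteracts exactly this tendency.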
Abstract:
Objectives Particle delivery to the airways is an attractive prospect for many potential therapeutics, including vaccines. Developing strategies for inhalation of particles provides a targeted, controlled and non-invasive delivery route but, as with all novel therapeutics, in vitro and in vivo testing are needed prior to clinical use. Whilst advanced vaccine testing demands the use of animal models to address safety issues, the production of robust in vitro cellular models would take account of the ethical framework known as the 3Rs (Replacement, Reduction and Refinement of animal use), by permitting initial screening of potential candidates prior to animal use. There is thus a need for relevant, realistic in vitro models of the human airways. Key findings Our laboratory has designed and characterised a multi-cellular model of human airways that takes account of the conditions in the airways and recapitulates many salient features, including the epithelial barrier and mucus secretion. Summary Our human pulmonary models recreate many of the obstacles to successful pulmonary delivery of particles and therefore represent a valid test platform for screening compounds and delivery systems.
Abstract:
The rainbow smelt (Osmerus mordax) is an anadromous teleost that produces type II antifreeze protein (AFP) and accumulates modest urea and high glycerol levels in plasma and tissues as adaptive cryoprotectant mechanisms in sub-zero temperatures. It is known that glyceroneogenesis occurs in liver via a branch in glycolysis and gluconeogenesis and is activated by low temperature; however, the precise mechanisms of glycerol synthesis and trafficking in smelt remain to be elucidated. The objective of this thesis was to provide further insight using functional genomic techniques [e.g. suppression subtractive hybridization (SSH) cDNA library construction, microarray analyses] and molecular analyses [e.g. cloning, quantitative reverse transcription - polymerase chain reaction (QPCR)]. Novel molecular mechanisms related to glyceroneogenesis were deciphered by comparing the transcript expression profiles of glycerol (cold temperature) and non-glycerol (warm temperature) accumulating hepatocytes (Chapter 2) and livers from intact smelt (Chapter 3). Briefly, glycerol synthesis can be initiated from both amino acids and carbohydrate; however, carbohydrate appears to be the preferred source when it is readily available. In glycerol accumulating hepatocytes, levels of the hepatic glucose transporter (GLUT2) plummeted and transcript levels of a suite of genes (PEPCK, MDH2, AAT2, GDH and AQP9) associated with the mobilization of amino acids to fuel glycerol synthesis were all transiently higher. In contrast, in glycerol accumulating livers from intact smelt, glycerol synthesis was primarily fuelled by glycogen degradation with higher PGM and PFK (glycolysis) transcript levels. Whether initiated from amino acids or carbohydrate, there were common metabolic underpinnings. Increased PDK2 (an inhibitor of PDH) transcript levels would direct pyruvate derived from amino acids and/or DHAP derived from G6P to glycerol as opposed to oxidation via the citric acid cycle.
Robust LIPL (triglyceride catabolism) transcript levels would provide free fatty acids that could be oxidized to fuel ATP synthesis. Increased cGPDH (glyceroneogenesis) transcript levels were not required for increased glycerol production, suggesting that regulation is more likely by post-translational modification. Finally, levels of a transcript potentially encoding glycerol-3-phosphatase, an enzyme not yet characterized in any vertebrate species, were transiently higher. These comparisons also led to the novel discoveries that increased G6Pase (glucose synthesis) and increased GS (glutamine synthesis) transcript levels were part of the low temperature response in smelt. Glucose may provide increased colligative protection against freezing; whereas glutamine could serve to store nitrogen released from amino acid catabolism in a non-toxic form and/or be used to synthesize urea via purine synthesis-uricolysis. Novel key aspects of cryoprotectant osmolyte (glycerol and urea) trafficking were elucidated by cloning and characterizing three aquaglyceroporin (GLP)-encoding genes from smelt at the gene and cDNA levels in Chapter 4. GLPs are integral membrane proteins that facilitate passive movement of water, glycerol and urea across cellular membranes. The highlight was the discovery that AQP10ba transcript levels always increase in posterior kidney only at low temperature. This AQP10b gene paralogue may have evolved to aid in the reabsorption of urea from the proximal tubule. This research has contributed significantly to a general understanding of the cold adaptation response in smelt, and more specifically to the development of a working scenario for the mechanisms involved in glycerol synthesis and trafficking in this species.
Abstract:
Wind energy installations are increasing in power systems worldwide and wind generation capacity tends to be located some distance from load centers. A conflict may arise at times of high wind generation when it becomes necessary to curtail wind energy in order to maintain conventional generators on-line for the provision of voltage control support at load centers. Using the island of Ireland as a case study and presenting commercially available reactive power support devices as possible solutions to the voltage control problems in urban areas, this paper explores the reduction in total generation costs resulting from the relaxation of the operational constraints requiring conventional generators to be kept on-line near load centers for reactive power support. The paper shows that by 2020 there will be possible savings of €87m per annum and a reduction in wind curtailment of more than a percentage point if measures are taken to relax these constraints.
Abstract:
This paper proposes extended nonlinear analytical models, third-order models, of compliant parallelogram mechanisms. These models accurately capture the effects of the large axial force over a transverse motion range of up to 10% of the beam length, by incorporating the terms associated with the high-order (up to third-order) axial force. Firstly, the free-body diagram method is employed to derive the nonlinear analytical model for a basic compliant parallelogram mechanism based on load-displacement relations of a single beam, geometry compatibility conditions, and load-equilibrium conditions. The procedures for the forward solutions and inverse solutions are described. Nonlinear analytical models for guided compliant multi-beam parallelogram mechanisms are then obtained. A case study of the compound compliant parallelogram mechanism, composed of two basic compliant parallelogram mechanisms in symmetry, is further implemented. This work estimates the internal axial force change, the transverse force change, and the transverse stiffness change with the transverse motion using the proposed third-order model, in comparison with the first-order model proposed in the prior art. In addition, FEA (finite element analysis) results validate the accuracy of the third-order model for a typical example. It is shown that in the case study the slenderness ratio significantly affects the discrepancy between the third-order model and the first-order model, and that the third-order model can capture a non-monotonic transverse stiffness curve if the beam is thin enough.
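For context, the nominal (load-free) transverse stiffness of a two-beam parallelogram flexure is a standard textbook first-order result, not the paper's third-order model; with each beam fixed-guided it can be sketched as:

```latex
% First-order transverse stiffness of a parallelogram flexure with two
% fixed-guided beams, where E is Young's modulus, L the beam length, and
% I = b t^3 / 12 the second moment of area of a rectangular cross-section
% (width b, thickness t):
k_t \approx 2 \cdot \frac{12\,E I}{L^{3}} = \frac{24\,E I}{L^{3}}
```

The third-order models described above refine this constant value by adding axial-force-dependent terms, which is why they can predict a stiffness that varies, even non-monotonically, with the transverse motion.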
Abstract:
Diffuse intrinsic pontine glioma (DIPG) is a rare and incurable brain tumor that arises in the brainstem of children predominantly between the ages of 6 and 8. Its intricate morphology and involvement of normal pons tissue precludes surgical resection, and the standard of care today remains fractionated radiation alone. In the past 30 years, there have been no significant advances made in the treatment of DIPG. This is largely because we lack good models of DIPG and therefore have little biological basis for treatment. In recent years, however, due to increased biopsy and acquisition of autopsy specimens, research is beginning to unravel the genetic and epigenetic drivers of DIPG. Insight gleaned from these studies has led to improvements in approaches to both model these tumors in the lab and to potentially treat them in the clinic. This review will detail the initial strides toward modeling DIPG in animals, which included allograft and xenograft rodent models using non-DIPG glioma cells. Important advances in the field came with the development of in vitro cell and in vivo xenograft models derived directly from autopsy material of DIPG patients or from human embryonic stem cells. Finally, we will summarize the progress made in the development of genetically engineered mouse models of DIPG. Cooperation of studies incorporating all of these modeling systems to both investigate the unique mechanisms of gliomagenesis in the brainstem and to test potential novel therapeutic agents in a preclinical setting will result in improvement in treatments for DIPG patients.
Abstract:
The development of non-equilibrium group IV nanoscale alloys is critical to achieving new functionalities, such as the formation of a direct bandgap in a conventional indirect bandgap elemental semiconductor. Here, we describe the fabrication of uniform diameter, direct bandgap Ge1-xSnx alloy nanowires, with a Sn incorporation up to 9.2 at.%, far in excess of the equilibrium solubility of Sn in bulk Ge, through a conventional catalytic bottom-up growth paradigm using noble metal and metal alloy catalysts. Metal alloy catalysts permitted a greater inclusion of Sn in Ge nanowires compared with conventional Au catalysts, when used during vapour-liquid-solid growth. The addition of an annealing step close to the Ge-Sn eutectic temperature (230 °C) during cool-down further facilitated the excessive dissolution of Sn in the nanowires. Sn was distributed throughout the Ge nanowire lattice with no metallic Sn segregation or precipitation at the surface or within the bulk of the nanowires. The non-equilibrium incorporation of Sn into the Ge nanowires can be understood in terms of a kinetic trapping model for impurity incorporation at the triple-phase boundary during growth.
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation and audience fragmentation. This has favoured the development of non-conventional advertising formats. This study provides empirical evidence for this theoretical development. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at a cognitive level, as they generate higher levels of both unaided and aided recall in all analyzed formats when compared to the spot.
Abstract:
Calculations of synthetic spectropolarimetry are one means to test multidimensional explosion models for Type Ia supernovae. In a recent paper, we demonstrated that the violent merger of a 1.1 and 0.9 M⊙ white dwarf binary system is too asymmetric to explain the low polarization levels commonly observed in normal Type Ia supernovae. Here, we present polarization simulations for two alternative scenarios: the sub-Chandrasekhar mass double-detonation and the Chandrasekhar mass delayed-detonation model. Specifically, we study a 2D double-detonation model and a 3D delayed-detonation model, and calculate polarization spectra for multiple observer orientations in both cases. We find modest polarization levels (<1 per cent) for both explosion models. Polarization in the continuum peaks at ∼0.1–0.3 per cent and decreases after maximum light, in excellent agreement with spectropolarimetric data of normal Type Ia supernovae. Higher degrees of polarization are found across individual spectral lines. In particular, the synthetic Si II λ6355 profiles are polarized at levels that match remarkably well the values observed in normal Type Ia supernovae, while the low degrees of polarization predicted across the O I λ7774 region are consistent with the non-detection of this feature in current data. We conclude that our models can reproduce many of the characteristics of both flux and polarization spectra for well-studied Type Ia supernovae, such as SN 2001el and SN 2012fr. However, the two models considered here cannot account for the unusually high level of polarization observed in extreme cases such as SN 2004dt.
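The polarization levels quoted above refer to the standard degree of linear polarization derived from the Stokes parameters; this is the conventional definition, not a formula specific to the paper:

```latex
% Degree of linear polarization from the Stokes parameters I, Q, U
% (the quantity whose continuum level peaks at ~0.1-0.3 per cent in
% the models discussed):
P = \frac{\sqrt{Q^{2} + U^{2}}}{I}
```

Asymmetries in the ejecta leave incomplete cancellation of the electron-scattering polarization across the projected photosphere, which is why more aspherical explosion models predict larger $P$.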
Abstract:
The use of macroalgae (seaweed) as a potential source of biofuels has attracted considerable worldwide interest. Since brown algae, especially the giant kelp, grow very rapidly and contain considerable amounts of polysaccharides, coupled with low lignin content, they represent attractive candidates for bioconversion to ethanol through yeast fermentation processes. In the current study, powdered dried seaweeds (Ascophyllum nodosum and Laminaria digitata) were pre-treated with dilute sulphuric acid and hydrolysed with commercially available enzymes to liberate fermentable sugars. Higher sugar concentrations were obtained from L. digitata compared with A. nodosum, with glucose and rhamnose being the predominant sugars, respectively, liberated from these seaweeds. Fermentation of the resultant seaweed sugars was performed using two non-conventional yeast strains, Scheffersomyces (Pichia) stipitis and Kluyveromyces marxianus, based on their abilities to utilise a wide range of sugars. Although the yields of ethanol were quite low (at around 6 g/L), macroalgal ethanol production was slightly higher using K. marxianus compared with S. stipitis. The results obtained demonstrate the feasibility of obtaining ethanol from brown algae using relatively straightforward bioprocess technology, together with non-conventional yeasts. Conversion efficiency of these non-conventional yeasts could be maximised by operating the fermentation process based on the physiological requirements of the yeasts.
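As a back-of-the-envelope illustration of the conversion-efficiency point above (the sugar concentration here is a hypothetical figure, not taken from the study), the reported ~6 g/L ethanol can be compared against the stoichiometric maximum of ~0.51 g ethanol per g glucose:

```python
# Illustrative conversion-efficiency arithmetic; the sugar concentration
# is an assumed value, not a result from the study.
sugar_g_per_l = 25.0          # hypothetical fermentable sugar concentration
ethanol_g_per_l = 6.0         # yield reported as "around 6 g/L"

theoretical_max = 0.51 * sugar_g_per_l        # g/L at full conversion
efficiency = ethanol_g_per_l / theoretical_max
```

Under these assumed numbers the fermentation would be running at well under half of the theoretical maximum, which is the gap that tuning the process to the yeasts' physiological requirements aims to close.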
Abstract:
Historic vaulted masonry structures often need strengthening interventions that can effectively improve their structural performance, especially during seismic events, while respecting the existing setting and modern conservation requirements. In this context, the use of innovative materials such as fiber-reinforced composites has been shown to be an effective solution that can satisfy both aspects. This work aims to provide insight into the computational modeling of a full-scale masonry vault strengthened with fiber-reinforced composite materials, and to analyze the influence of the arrangement of the reinforcement on the efficiency of the intervention. First, a parametric model of a cross vault focusing on a realistic representation of its micro-geometry is proposed. Then, numerical pushover analyses of several barrel vaults reinforced with different reinforcement configurations are performed. Finally, the results are collected and discussed in terms of the force-displacement curves obtained for each proposed configuration.