988 results for design III
Item 1005-C
Abstract:
The medically significant genus Chlamydia is a class of obligate intracellular bacterial pathogens that replicate within vacuoles, termed inclusions, in host eukaryotic cells. Chlamydia's developmental cycle involves two forms: an infectious extracellular form, known as an elementary body (EB), and a non-infectious form, known as the reticulate body (RB), which replicates inside the inclusion. The RB surface is covered in projections that are in intimate contact with the inclusion membrane. Late in the developmental cycle, these reticulate bodies differentiate into the elementary body form. In this paper, we present a hypothesis for the modulation of these developmental events involving the contact-dependent type III secretion (TTS) system. TTS surface projections mediate intimate contact between the RB and the inclusion membrane. Below a certain number of projections, detachment of the RB provides a signal for late differentiation of RB into EB. We use data to develop a mathematical model investigating this hypothesis. If the hypothesis proves accurate, then we have shown that increasing the number of inclusions per host cell will increase the number of infectious progeny EB up to some optimal number of inclusions; for more inclusions than this optimum, the infectious yield is reduced because of spatial restrictions. We also predict that a reduction in the number of projections on the surface of the RB (introduced as early as possible during development) will significantly reduce the burst size of infectious EB particles. Many of the results predicted by the model can be tested experimentally and may lead to the identification of potential targets for drug design. © Society for Mathematical Biology 2006.
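The detachment hypothesis above can be caricatured in a few lines of code. All parameters here (initial projection count, detachment threshold, projections splitting between daughters at division) are hypothetical illustrations, not values from the paper's model:

```python
# Toy sketch of the hypothesis: RBs divide, projections are shared between
# daughters, and an RB whose projection count falls below a threshold
# detaches from the inclusion membrane and differentiates into an EB.

def simulate_burst(initial_projections=64, threshold=8, generations=12):
    """Count EBs produced when RBs differentiate once projections < threshold."""
    rbs = [initial_projections]  # projection counts of active RBs
    ebs = 0
    for _ in range(generations):
        next_gen = []
        for p in rbs:
            for child in (p // 2, p - p // 2):  # projections split at division
                if child < threshold:
                    ebs += 1   # detachment signals late RB -> EB differentiation
                else:
                    next_gen.append(child)
        rbs = next_gen
        if not rbs:
            break
    return ebs
```

Consistent with the paper's prediction, starting an RB with fewer surface projections yields a smaller burst of EBs in this caricature.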
Abstract:
This work is concerned with the assessment of a newer version of the spout-fluid bed in which the gas is supplied from a common plenum and the distributor controls the operating behaviour. Thus the main body of the work deals with the effect of distributor design on the mixing and segregation of solids in a spout-filled bed. The effect of distributor design in the conventional fluidised bed, and of variation of the gas inlet diameter in a spouted bed, were also briefly investigated for purposes of comparison. Large particles were selected for study because they are becoming increasingly important in industrial fluidised beds but have not been thoroughly investigated. The mean particle diameters of the fractions ranged from 550 to 2400 µm, and their specific gravities from 0.97 to 2.45. Only work carried out with binary systems is reported here. The effects of air velocity, particle properties, bed height, the relative amounts of jetsam and flotsam, and initial conditions on the steady-state concentration profiles were assessed with selected distributors. The work is divided into three sections. Sections I and II deal with the fluidised bed and spouted bed systems. Section III covers the development of the spout-filled bed and its behaviour with reference to distributor design, and it is shown how the benefits of both spouting and fluidising phenomena can be exploited. In the fluidisation zone, better mixing is achieved by distributors which produce a large initial bubble diameter. Some common features exist between the behaviour of unidensity jetsam-rich systems and different-density flotsam-rich systems. The shape factor does not seem to have an effect as long as it is restricted to the minor component. However, in the case of the major component, particle shape significantly affects the final results. Studies of aspect ratio showed that there is a maximum (1.5) above which slugging occurs and the effect of the distributor design is nullified.
A mixing number was developed for unidensity spherical rich systems, which proved to be extremely useful in quantifying the variation in mixing and segregation with changes in distributor design.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard) with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
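As an illustration of the task the back-end automates (point observations in, a prediction out), here is a deliberately simple inverse-distance-weighting interpolator. INTAMAP's actual back-end uses model-based geostatistics in R with uncertainty encoded in UncertML; this stdlib sketch only shows the shape of the interface:

```python
# Illustrative stand-in for an automatic point-interpolation back-end.
import math

def idw_predict(obs, x, y, power=2.0):
    """Inverse-distance-weighted prediction at (x, y) from [(xi, yi, zi), ...]."""
    num = den = 0.0
    for xi, yi, zi in obs:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi            # exact interpolation at a data point
        w = d ** -power
        num += w * zi
        den += w
    return num / den

obs = [(0, 0, 1.0), (1, 0, 3.0), (0, 1, 5.0)]
print(idw_predict(obs, 0, 0))    # 1.0 at a data point
```

A real automatic service must additionally handle anisotropy, extreme values and known error distributions, as the abstract notes; IDW does none of this and is used here purely for shape.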
Abstract:
In this paper we present the design and analysis of an intonation model for text-to-speech (TTS) synthesis applications using a combination of Relational Tree (RT) and Fuzzy Logic (FL) technologies. The model is demonstrated using the Standard Yorùbá (SY) language. In the proposed intonation model, phonological information extracted from text is converted into an RT. An RT is a sophisticated data structure that symbolically represents the peaks and valleys, as well as the spatial structure, of a waveform in the form of trees. An initial approximation to the RT, called a Skeletal Tree (ST), is first generated algorithmically. The exact numerical values of the peaks and valleys on the ST are then computed using FL. Quantitative analysis of the results gives RMSEs of 0.56 and 0.71 for peaks and valleys, respectively. Mean Opinion Scores (MOS) of 9.5 and 6.8, on a scale of 1-10, were obtained for intelligibility and naturalness, respectively.
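The RMSE figures quoted above follow the usual definition, which can be reproduced in form (though not, of course, in value) with a minimal sketch:

```python
# Root-mean-square error between predicted and reference contour values,
# as would be computed for the peak and valley targets (illustrative data).
import math

def rmse(predicted, target):
    """RMSE between two equal-length sequences of values."""
    assert len(predicted) == len(target)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predicted, target))
                     / len(predicted))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ≈ 1.1547
```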
Abstract:
The purpose of the study was to compare the English III success of students whose home language is Haitian Creole (SWHLIHC) with that of the more visible African American high school students in the Miami Dade County Public Schools System, in an effort to offer insight that might assist educators in facilitating the educational success of SWHLIHC in American Literature class. The study was guided by two important theories on how students interact with and learn from literature: Reader Response Theory, which advocates giving students the opportunity to become involved in the literature experience (Rosenblatt, 1995), and Critical Literacy, a theory developed by Paulo Freire and Henry Giroux, which espouses a critical approach to the analysis of society that enables people to analyze social problems through lenses that reveal social inequities and assist in transforming society into a more equitable entity. Data for the study - 10th grade reading FCAT scores, English III/American Literature grades, and promotion to English IV records for the school year 2010-2011 - were retrieved from the records division of the Miami Dade County Public Schools System. The study used a quantitative methods approach, the central feature of which was an ex post facto design with hypotheses (Newman, Newman, Brown, & McNeely, 2006). The ex post facto design with hypotheses was chosen because the researcher postulated hypotheses about the relationships that might exist between the performances of SWHLIHC and those of African American students on the three above-mentioned variables. This type of design supported the researcher's purpose of comparing these performances. One-way analysis of variance (ANOVA), two-way ANOVAs, and chi-square tests were used to examine the two groups' performances on the 10th grade reading FCAT, their English III grades, and their promotion to English IV.
The study findings show that there was a significant difference in the performance of SWHLIHC and African American high school students on all three variables. SWHLIHC performed significantly higher on English III success and promotion to English IV. African American high school students performed significantly higher on the reading FCAT.
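The one-way ANOVA used in the analysis reduces to comparing between-group and within-group variance. A minimal stdlib sketch of the F statistic, on illustrative data rather than the study's records:

```python
# One-way ANOVA F statistic: ratio of between-group to within-group
# mean squares across k independent groups of observations.

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The statistic would then be compared to an F distribution with (k − 1, n − k) degrees of freedom to test for a significant difference between the groups.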
Abstract:
Bacteria are known to release a large variety of small molecules known as autoinducers (AI), which effect quorum sensing (QS) initiation. The interruption of QS affects bacterial communication, growth and virulence. Three novel classes of S-ribosylhomocysteine (SRH) analogues were developed as potential inhibitors of S-ribosylhomocysteinase (the LuxS enzyme) and AI-2 modulators of QS. The synthesis of 2-deoxy-2-bromo-SRH analogues was attempted by coupling of the corresponding 2-bromo-2-deoxypentafuranosyl precursors with the homocysteinate anion. The displacement of the bromide from C2, rather than the expected substitution of the mesylate from C5, was observed. The synthesis of 4-C-alkyl/aryl-S-ribosylhomocysteine analogues involved the following steps: (i) conversion of D-ribose to the ribitol-4-ulose; (ii) diastereoselective addition of various alkyl, aryl or vinyl Grignard reagents to the 4-ketone intermediate; (iii) oxidation of the primary hydroxyl group at C1 followed by intramolecular ring closure to the corresponding 4-C-alkyl/aryl-substituted ribono-1,4-lactones; (iv) displacement of the activated 5-hydroxyl group with the protected homocysteinate. Treatment of the 4-C-alkyl/aryl-substituted SRH analogues with lithium triethylborohydride effected reduction of the ribonolactone to the ribose (hemiacetal), and subsequent global deprotection with trifluoroacetic acid provided the 4-C-alkyl/aryl-SRHs. The 4-[thia]-SRH analogues were prepared from 1-deoxy-4-thioribose through coupling of the α-fluoro thioethers (thioribosyl fluorides) with the homocysteinate anion. The 4-[thia]-SRH analogues showed a concentration-dependent effect on las gene activity (50% inhibition at 200 µg/mL). The most active was the 1-deoxy-4-[thia]-SRH analogue with the sulfur atom in the ring oxidized to the sulfoxide, which decreased las gene activity to approximately 35% without affecting the rhl gene. None of the tested compounds had an effect on bioluminescence or on the total growth of V. harveyi, although slight inhibition of QS was observed.
Abstract:
In the Jakarta Metropolitan Region (JMR), the lack of co-ordination and appropriate governance has resulted in paralyzing traffic jams at the metropolitan scale that cannot be resolved by a single government entity. The issue of metropolitan governance is especially crucial here as the JMR lacks an established and formally pre-designed system of governance (e.g., in a constitution or other legal regulations). Instead, it relies on the interaction, coordination and cooperation of a multitude of different stakeholders, ranging from local and regional authorities to private entities and citizens. This chapter offers a discussion of the various governance approaches relating to an appropriate institutional design for transportation issues at the metropolitan scale. The case used is a regional Bus Rapid Transit (BRT) system as an extension to the metropolitan transport system. Institutional design analysis is applied to the case, and three possible improvements were identified - i) a 'Megapolitan' concept, ii) a regional spatial plan and iii) inter-local government cooperation - which correspond to current debates on the metropolitan governance approaches of regionalism, localism and new regionalism. The findings, which are relevant to similar metropolitan regions, suggest i) that improvements at the meso-level of institutional design are more readily accepted and effective than improvements at the macro-level and ii) that the appropriate institutional design for governing metropolitan transportation in the JMR requires enhanced coordination and cooperation amongst four important actors - local governments, the regional agency, the central government, and private companies.
Abstract:
Currently, no standard mix design procedure is available for CIR-emulsion in Iowa. The CIR-foam mix design process developed during the previous phase was applied to CIR-emulsion mixtures with varying emulsified asphalt contents. Dynamic modulus tests, dynamic creep tests, static creep tests and raveling tests were conducted to evaluate the short- and long-term performance of CIR-emulsion mixtures at various testing temperatures and loading conditions. A potential benefit of this research is a better understanding of CIR-emulsion material properties in comparison with those of CIR-foam material, which would allow for the selection of the most appropriate CIR technology and of the type and amount of the optimum stabilization material. Dynamic modulus, flow number and flow time of CIR-emulsion mixtures using CSS-1h were generally higher than those using HFMS-2p. Flow number and flow time of CIR-emulsion using RAP materials from Story County were higher than those from Clayton County. Flow number and flow time of CIR-emulsion with 0.5% emulsified asphalt were higher than those of CIR-emulsion with 1.0% or 1.5%. Raveling loss of CIR-emulsion with 1.5% emulsified asphalt was significantly less than that with 0.5% or 1.0%. Test results in terms of dynamic modulus, flow number, flow time and raveling loss of CIR-foam mixtures were generally better than those of CIR-emulsion mixtures. Given the limited RAP sources used for this study, it is recommended that the CIR-emulsion mix design procedure be validated against several RAP sources and emulsion types.
Abstract:
Computer games are significant since they embody our youngsters' engagement with contemporary culture, including both play and education. These games rely heavily on visuals, systems of sign and expression based on concepts and principles of Art and Architecture. We are researching a new genre of computer games, ‘Educational Immersive Environments’ (EIEs), to provide educational materials suitable for the school classroom. Close collaboration with subject teachers is necessary, but we feel a specific need to engage with the practicing artist, the art theoretician and the art historian. Our EIEs are loaded with multimedia (but especially visual) signs which act to direct the learner and provide the ‘game-play’ experience, forming semiotic systems. We suggest the hypothesis that computer games are a space of deconstruction and reconstruction (DeRe): when players enter the game, their physical world and their culture are torn apart; they move in a semiotic system which serves to reconstruct an alternate reality where disbelief is suspended. The semiotic system draws heavily on visuals which direct the players' interactions and produce motivating gameplay. These can establish a reconstructed culture and an emerging game narrative. We have recently tested our hypothesis and have used this in developing design principles for computer game designers. Yet there are outstanding issues concerning the nature of the visuals used in computer games, and so questions for contemporary artists. Currently, the computer game industry employs artists in a ‘classical’ role in the production of concept sketches, storyboards and 3D content. But this is based on a specification from the client which restricts the artist's intellectual freedom. Our DeRe hypothesis places the artist at the generative centre, to inform the game designer how art may inform our DeRe semiotic spaces.
This must of course begin with the artists' understanding of DeRe at this time when our ‘identities are becoming increasingly fractured, networked, virtualized and distributed’. We hope to persuade artists to engage with the medium of computer game technology to explore these issues. In particular, we pose several questions to the artist: (i) How can particular ‘periods’ in art history be used to inform the design of computer games? (ii) How can specific artistic elements or devices be used to design ‘signs’ to guide the player through the game? (iii) How can visual material be integrated with other semiotic strata such as text and audio?
Abstract:
This portfolio thesis describes work undertaken by the author under the Engineering Doctorate program of the Institute for System Level Integration. It was carried out in conjunction with the sponsor company Teledyne Defence Limited. A radar warning receiver is a device used to detect and identify the emissions of radars. Such receivers were originally developed during the Second World War and are found today on a variety of military platforms as part of the platform's defensive systems. Teledyne Defence has designed and built components and electronic subsystems for the defence industry since the 1970s. This thesis documents part of the work carried out to create Phobos, Teledyne Defence's first complete radar warning receiver. Phobos was designed to be the first low-cost radar warning receiver. This was made possible by the reuse of existing Teledyne Defence products, commercial off-the-shelf hardware and advanced UK government algorithms. The challenges of this integration are described and discussed, with detail given of the software architecture and the development of the embedded application. Performance of the embedded system as a whole is described and qualified within the context of a low-cost system.
Abstract:
Interest in renewable energy has increased considerably in recent years due to the concerns raised over the environmental impact of conventional energy sources and their price volatility. In particular, wind power has enjoyed dramatic global growth in installed capacity over the past few decades. Nowadays, the advancement of the wind turbine industry represents a challenge for several engineering areas, including materials science, computer science, aerodynamics, analytical design and analysis methods, testing and monitoring, and power electronics. In particular, the technological improvement of wind turbines is currently tied to the use of advanced design methodologies, allowing designers to develop new and more efficient design concepts. Integrating mathematical optimization techniques into the multidisciplinary design of wind turbines constitutes a promising way to enhance the profitability of these devices. In the literature, wind turbine design optimization is typically performed deterministically. Deterministic optimizations do not consider any degree of randomness affecting the inputs of the system under consideration, and result, therefore, in a unique set of outputs. However, given the stochastic nature of the wind and the uncertainties associated, for instance, with wind turbine operating conditions or geometric tolerances, deterministically optimized designs may be inefficient. Therefore, one of the ways to further improve the design of modern wind turbines is to take the aforementioned sources of uncertainty into account in the optimization process, achieving robust configurations with minimal performance sensitivity to factors causing variability.
The research work presented in this thesis deals with the development of a novel integrated multidisciplinary design framework for the robust aeroservoelastic design optimization of multi-megawatt horizontal axis wind turbine (HAWT) rotors, accounting for the stochastic variability of the input variables. The design system is based on a multidisciplinary analysis module integrating the several simulation tools needed to characterize the aeroservoelastic behavior of wind turbines and determine their economic performance by means of the levelized cost of energy (LCOE). The reported design framework is portable and modular in that any of its analysis modules can be replaced with counterparts of user-selected fidelity. The presented technology is applied to the design of a 5-MW HAWT rotor to be used at sites of wind power density class from 3 to 7, where the mean wind speed at 50 m above the ground ranges from 6.4 to 11.9 m/s. Assuming the mean wind speed to vary stochastically in this range, the rotor design is optimized by minimizing the mean and standard deviation of the LCOE. Airfoil shapes, spanwise distributions of blade chord and twist, internal structural layup and rotor speed are optimized concurrently, subject to an extensive set of structural and aeroelastic constraints. The effectiveness of the multidisciplinary and robust design framework is demonstrated by showing that the probabilistically designed turbine achieves more favorable probabilistic performance than that of the initial baseline turbine and of a turbine designed deterministically.
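The robust objective described above, minimising the mean and standard deviation of LCOE over a stochastically varying mean wind speed, can be sketched as follows. The lcoe() function here is a made-up toy cost model for illustration only, not the thesis's aeroservoelastic analysis module:

```python
# Robust-design scoring: sample the uncertain mean wind speed over the
# stated 6.4-11.9 m/s range and penalise both mean and spread of LCOE.
import random
import statistics

def lcoe(design_radius, mean_wind_speed):
    """Toy cost model: cost grows with rotor size, yield with wind speed cubed."""
    cost = 1.0 + 0.05 * design_radius ** 2.5            # capital cost proxy
    energy = design_radius ** 2 * mean_wind_speed ** 3  # annual yield proxy
    return cost / energy * 1e4

def robust_score(design_radius, n_samples=1000, seed=0):
    """Mean + standard deviation of LCOE over sampled mean wind speeds."""
    rng = random.Random(seed)
    samples = [lcoe(design_radius, rng.uniform(6.4, 11.9))
               for _ in range(n_samples)]
    return statistics.mean(samples) + statistics.pstdev(samples)
```

A robust optimizer would then minimise robust_score over the design variables, rather than minimising lcoe at a single nominal wind speed.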
Abstract:
Chapter 1: Under the average common value function, we select almost uniquely the mechanism that gives the seller the largest portion of the true value in the worst situation, among all the direct mechanisms that are feasible, ex-post implementable and individually rational. Chapter 2: Strategy-proof, budget-balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus, over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, a large probability to the agent with the pth highest valuation, and the remaining probability to the agent with the (p+1)th highest valuation. When “envy freeness” is replaced by the weaker condition “voluntary participation”, the optimal mechanism differs only when p is much less than n. Chapter 3: One group is to be selected among a set of agents. Agents have preferences over the size of the group if they are selected, and preferences over size as well as the “stand-outside” option are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents to either maintain or shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.
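The Chapter 2 allocation rule can be written down directly. The "large probability" q given to the agent with the pth highest valuation is left as a parameter here, since the abstract does not state its closed form:

```python
# Sketch of the Chapter-2 rule for p identical objects and n > p agents:
# top p-1 agents by valuation each get an object, the p-th highest gets
# probability q of an object, the (p+1)-th gets the remaining 1 - q.

def allocate(valuations, p, q=0.9):
    """Return each agent's (expected) share of the p identical objects."""
    order = sorted(range(len(valuations)), key=lambda i: -valuations[i])
    shares = [0.0] * len(valuations)
    for i in order[: p - 1]:
        shares[i] = 1.0            # top p-1 agents receive an object for sure
    shares[order[p - 1]] = q       # p-th highest: a large probability
    shares[order[p]] = 1.0 - q     # (p+1)-th highest: the remainder
    return shares
```

Note that the expected shares sum to p, so the rule is budget balanced in objects; all other agents receive nothing.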
Abstract:
This thesis describes a collection of studies into the electrical response of a III-V MOS stack comprising metal/GaGdO/GaAs layers as a function of fabrication process variables, and the findings of those studies. As a result of this work, areas of improvement in the gate process module of a III-V heterostructure MOSFET were identified. Compared to a traditional bulk silicon MOSFET design, one featuring a III-V channel heterostructure with a high-dielectric-constant oxide as the gate insulator provides numerous benefits, for example: the insulator can be made thicker for the same capacitance, the operating voltage can be made lower for the same current output, and improved output characteristics can be achieved without reducing the channel length further. It is known that transistors composed of III-V materials are particularly susceptible to damage induced by radiation and plasma processing. These devices utilise sub-10 nm gate dielectric films, which are prone to contamination, degradation and damage. Therefore, throughout the course of this work, process damage and contamination issues, as well as various techniques to mitigate or prevent them, have been investigated through comparative studies of III-V MOS capacitors and transistors comprising various forms of metal gates, various thicknesses of GaGdO dielectric, and a number of GaAs-based semiconductor layer structures. Transistors fabricated before this work commenced showed problems with threshold voltage control. Specifically, MOSFETs designed for normally-off (VTH > 0) operation exhibited below-zero threshold voltages. With the results obtained during this work, it was possible to gain an understanding of why the transistor threshold voltage shifts as the gate length decreases, and of what pulls the threshold voltage downwards, preventing normally-off device operation. Two main culprits for the negative VTH shift were found.
The first was radiation damage induced by the gate metal deposition process, which can be prevented by slowing down the deposition rate. The second was the layer of gold added on top of platinum in the gate metal stack, which reduces the effective work function of the whole gate due to its electronegativity. Since the device was designed for a platinum-only gate, this could explain the below-zero VTH. It could be prevented either by using a platinum-only gate, or by matching the layer structure design to the actual gate metal used for future devices. Post-metallisation thermal annealing was shown to mitigate both these effects. However, if post-metallisation annealing is used, care should be taken to ensure it is performed before the ohmic contacts are formed, as the thermal treatment was shown to degrade the source/drain contacts. In addition, the programme of studies this thesis describes also found that if the gate contact is deposited before the source/drain contacts, it causes a shift in threshold voltage towards negative values as the gate length decreases, because the ohmic contact anneal process affects the properties of the underlying material differently depending on whether it is covered with the gate metal or not. In terms of surface contamination, this work found that it causes device-to-device parameter variation, and a plasma clean is therefore essential. This work also demonstrated that the parasitic capacitances in the system, namely the contact-periphery-dependent gate-ohmic capacitance, play a significant role in the total gate capacitance. This is true to such an extent that reducing the distance between the gate and the source/drain ohmic contacts in the device would help shift the threshold voltages closer to the designed values. The findings made available by the collection of experiments performed for this work have two major applications.
Firstly, these findings provide useful data for the study of the possible phenomena taking place inside the metal/GaGdO/GaAs layers and interfaces as a result of the chemical processes applied to them. In addition, these findings support recommendations on how best to approach the fabrication of devices utilising these layers.
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, an a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and is compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented, with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for the improvement of this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the philosophies underlying, and to the application of the methods of, statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance.
Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reduction of the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong where these assumptions no longer hold.
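The continuous-design machinery reviewed in Chapters 2-3 can be illustrated with a standard Wynn-type vertex-direction algorithm for D-optimality (a textbook method, shown here for illustration; it is not the new algorithm the thesis presents). For simple linear regression f(x) = (1, x) on candidate points spanning [-1, 1], the equivalence theorem predicts equal mass 1/2 at the two endpoints:

```python
# Wynn-type vertex-direction algorithm for a continuous D-optimal design:
# at each step, shift mass toward the candidate point maximising the
# standardised variance d(x, xi) = f(x)^T M(xi)^{-1} f(x).

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def d_optimal_weights(candidates, iters=2000):
    f = lambda x: (1.0, x)                       # regression functions (1, x)
    w = [1.0 / len(candidates)] * len(candidates)  # uniform starting measure
    for n in range(iters):
        # information matrix M(xi) = sum_i w_i f(x_i) f(x_i)^T
        M = [[sum(wi * f(x)[a] * f(x)[b] for wi, x in zip(w, candidates))
              for b in range(2)] for a in range(2)]
        Minv = inv2(M)
        d = [sum(f(x)[a] * Minv[a][b] * f(x)[b]
                 for a in range(2) for b in range(2)) for x in candidates]
        j = max(range(len(candidates)), key=lambda i: d[i])
        step = 1.0 / (n + 2)                     # Wynn's step-length sequence
        w = [(1.0 - step) * wi for wi in w]
        w[j] += step
    return w

w = d_optimal_weights([-1.0, 0.0, 1.0])          # mass concentrates at +/-1
```

The slow O(1/n) convergence of this step sequence is one motivation for the improved algorithms such a review considers.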