872 results for Large-scale experiments


Relevance:

90.00%

Publisher:

Abstract:

Over 50% of clinically marketed drugs target membrane proteins, in particular G protein-coupled receptors (GPCRs). GPCRs are vital to living cells, playing an active role in many processes, which makes them integral to drug development. In nature, GPCRs are not sufficiently abundant for research, and their structural integrity is often lost during extraction from cell membranes. The objectives of this thesis were to increase the recombinant yield of a GPCR, the human adenosine A2A receptor (hA2AR), by investigating bioprocess conditions in large-scale Pichia pastoris and small-scale Saccharomyces cerevisiae cultivations, and to investigate the extraction of hA2AR from membranes using novel polymers. An increased yield of hA2AR from P. pastoris was achieved by investigating the methanol feeding regime. A slow exponential feed during induction (μlow) was compared with a faster exponential feed (μhigh) in 35 L pilot-scale bioreactors. Overall hA2AR yields were higher for the μlow cultivation (536.4 pmol g-1) than for the μhigh cultivation (148.1 pmol g-1). hA2AR levels were maintained under cytotoxic methanol conditions and, unexpectedly, pre-induction levels of hA2AR were detected. Small-scale bioreactor work showed that Design of Experiments (DoE) could be applied to screen bioprocess conditions for optimal hA2AR yields. Optimal conditions were obtained for S. cerevisiae using a d-optimal screen and response surface methodology: 22°C, pH 6.0 and 30% dissolved oxygen (DO), without dimethyl sulphoxide. A polynomial equation was generated to predict hA2AR yields as conditions varied. Regarding the extraction, poly(maleic anhydride-styrene) (PMAS) was successful in solubilising hA2AR from P. pastoris membranes compared with dodecyl-β-D-maltoside (DDM) detergent. Variants of PMAS worked well as solubilising agents with either 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) or cholesteryl hemisuccinate (CHS). Moreover, esterification of PMAS improved solubilisation, suggesting that increased hydrophobicity stabilises hA2AR during extraction. Overall, hA2AR yields were improved in both P. pastoris and S. cerevisiae, and efficient extraction using novel polymers was achieved.
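The response-surface step described above lends itself to a short illustration. The sketch below fits a full second-order polynomial over temperature, pH and dissolved oxygen by ordinary least squares and uses it to predict yield at a new condition; the design points, yield values and the placement of the optimum are invented for illustration and are not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Face-centred design over temperature (°C), pH and dissolved oxygen (%);
# the levels and responses below are invented for illustration only.
corners = [(t, p, d) for t in (20, 30) for p in (5.0, 7.0) for d in (30, 50)]
faces = [(20, 6.0, 40), (30, 6.0, 40), (25, 5.0, 40),
         (25, 7.0, 40), (25, 6.0, 30), (25, 6.0, 50)]
X = np.array(corners + faces + [(25, 6.0, 40)] * 3, dtype=float)

# Hypothetical yields (pmol g-1) with an optimum placed near 22 °C / pH 6.0 / 30% DO.
t, p, d = X.T
y = (520 - 0.8 * (t - 22) ** 2 - 40 * (p - 6.0) ** 2
     - 0.05 * (d - 30) ** 2 + rng.normal(0, 10, len(X)))

def design(points):
    """Full second-order model matrix: intercept, linear, interaction and squared terms."""
    t, p, d = points.T
    return np.column_stack([np.ones(len(points)), t, p, d, t * p, t * d, p * d,
                            t ** 2, p ** 2, d ** 2])

coeffs, *_ = np.linalg.lstsq(design(X), y, rcond=None)

def predict(temp, pH, do):
    """Predicted yield at a new condition from the fitted polynomial."""
    return (design(np.array([[temp, pH, do]])) @ coeffs).item()

print(round(predict(22.0, 6.0, 30.0), 1))  # condition close to the reported optimum
```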

Relevance:

90.00%

Publisher:

Abstract:

There is an urgent need for fast, non-destructive and quantitative two-dimensional dopant profiling of modern and future ultra-large-scale semiconductor devices. The low-voltage scanning electron microscope (LVSEM) has emerged to satisfy this need, in part, whereby it is possible to detect different secondary electron yields (brightness in the SEM signal) between p-type and n-type doped regions, as well as different brightness levels within the same dopant type. However, the mechanism that gives rise to this secondary electron (SE) contrast effect is not fully understood. A review of the different models that have been proposed to explain this SE contrast is given. We report on new experiments that support the proposal that this contrast is due to the establishment of metal-to-semiconductor surface contacts. Further experiments showing the effect of instrument parameters, including the electron dose, the scan speed and the electron beam energy, on the SE contrast are also reported. Preliminary results on the dependence of the SE contrast on the presence of a metal-oxide-semiconductor (MOS) surface structure are also reported. Copyright © 2005 John Wiley & Sons, Ltd.

Relevance:

90.00%

Publisher:

Abstract:

Full text: Semiconductor quantum dot (QD) lasers are attractive for multiple technological applications in biophotonics. Simultaneous two-state lasing of ground-state (GS) and excited-state (ES) electrons and holes in QD lasers is possible within a certain parameter range. It has already been investigated in steady-state operation and in dynamical regimes and is currently a subject of intensive research. It has been shown that the relaxation frequency in the two-state lasing regime is not a function of the total intensity [1], as might traditionally be expected. In this work we study damped relaxation oscillations in a QD laser simultaneously operating at two transitions, and find that, under various pumping conditions, the frequency of oscillations may decrease, increase or remain constant in time, as shown in Fig. 1. The studied QD laser structure was grown on a GaAs substrate by molecular-beam epitaxy. The active region included five layers of self-assembled InAs QDs separated by a GaAs spacer from a 5.3 nm thick covering layer of InGaAs, and was processed into 4 mm-wide mesa stripe devices. The 2.5 mm long lasers, with high- and anti-reflection coatings on the rear and front facets, lase simultaneously at the GS (around 1265 nm) and ES (around 1190 nm) over the whole range of pumping. Pulsed electrical pumping from a high-power (up to 2 A current) pulse source was used to achieve high-output-power operation. We simultaneously detect the total output and the ES output alone using a Bragg filter that transmits the short-wavelength and reflects the long-wavelength radiation. A typical QD laser does not demonstrate relaxation oscillations because of the strong damping [2]. This is confirmed for the low (I < 0.68 A) and high (I > 1.2 A) ranges of pump current in our experiments. The situation is different over a short range of medium currents (0.68 A < I < 1.2 A).

Relevance:

90.00%

Publisher:

Abstract:

Advances in the area of industrial metrology have generated new technologies that are capable of measuring components with complex geometry and large dimensions. However, no standard or best-practice guides are available for the majority of such systems. Therefore, these new systems require appropriate testing and verification in order for the users to understand their full potential prior to their deployment in a real manufacturing environment. This is a crucial stage, especially when more than one system can be used for a specific measurement task. In this paper, two relatively new large-volume measurement systems, the mobile spatial co-ordinate measuring system (MScMS) and the indoor global positioning system (iGPS), are reviewed. These two systems utilize different technologies: the MScMS is based on ultrasound and radiofrequency signal transmission and the iGPS uses laser technology. Both systems have components with small dimensions that are distributed around the measuring area to form a network of sensors allowing rapid dimensional measurements to be performed in relation to large-size objects, with typical dimensions of several decametres. The portability, reconfigurability, and ease of installation make these systems attractive for many industries that manufacture large-scale products. In this paper, the major technical aspects of the two systems are briefly described and compared. Initial results of the tests performed to establish the repeatability and reproducibility of these systems are also presented. © IMechE 2009.
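As a rough illustration of the kind of repeatability and reproducibility figures such tests produce, the sketch below computes repeatability (pooled spread of repeated measurements within one sensor-network layout) and reproducibility (spread of the per-layout means across reconfigured networks) from hypothetical length measurements; the numbers are invented and the statistics follow common gauge-study definitions rather than the paper's exact procedure.

```python
import numpy as np

# Hypothetical repeated measurements (mm) of the same reference length,
# taken with three different sensor-network configurations.
measurements = {
    "layout_A": np.array([1500.12, 1500.08, 1500.15, 1500.10, 1500.09]),
    "layout_B": np.array([1500.25, 1500.22, 1500.27, 1500.21, 1500.24]),
    "layout_C": np.array([1499.98, 1500.02, 1500.05, 1500.01, 1500.00]),
}

# Repeatability: pooled standard deviation of the repeats within each layout.
within_vars = [m.var(ddof=1) for m in measurements.values()]
repeatability = np.sqrt(np.mean(within_vars))

# Reproducibility: standard deviation of the layout means.
layout_means = np.array([m.mean() for m in measurements.values()])
reproducibility = layout_means.std(ddof=1)

print(f"repeatability   ≈ {repeatability:.3f} mm")
print(f"reproducibility ≈ {reproducibility:.3f} mm")
```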

Relevance:

90.00%

Publisher:

Abstract:

In this talk we investigate the use of spectrally shaped amplified spontaneous emission (ASE) to emulate highly dispersed wavelength division multiplexed (WDM) signals in an optical transmission system. Such a technique offers various simplifications to large-scale WDM experiments. Not only does it reduce transmitter complexity by removing the need for multiple source lasers, it also potentially reduces the test and measurement complexity by requiring only the centre channel of a WDM system to be measured in order to estimate WDM worst-case performance. The use of ASE as a test and measurement tool is well established in optical communication systems, and several measurement techniques will be discussed [1, 2]. One of the most prevalent uses of ASE is in the measurement of receiver sensitivity, where ASE is introduced in order to degrade the optical signal-to-noise ratio (OSNR) and the resulting bit error rate (BER) is measured at the receiver. From an analytical point of view, noise has been used to emulate system performance: the Gaussian Noise model is used as an estimate of highly dispersed signals and has attracted considerable interest [3]. The work presented here extends the use of ASE by using it to emulate highly dispersed WDM signals and, in the process, reduce WDM transmitter complexity and receiver measurement time in a lab environment. Results thus far have indicated [2] that such a transmitter configuration is consistent with an AWGN model for transmission, with modulation format complexity and nonlinearities playing a key role in estimating the performance of systems utilising the ASE channel emulation technique. We conclude this work by investigating techniques capable of characterising the nonlinear and damage limits of optical fibres and the resultant information capacity limits. References: 1. McCarthy, M. E., N. Mac Suibhne, S. T. Le, P. Harper, and A. D. Ellis, "High spectral efficiency transmission emulation for non-linear transmission performance estimation for high order modulation formats," 2014 European Conference on Optical Communication (ECOC), IEEE, 2014. 2. Ellis, A., N. Mac Suibhne, F. Gunning, and S. Sygletos, "Expressions for the nonlinear transmission performance of multi-mode optical fiber," Opt. Express, Vol. 21, 22834-22846, 2013. 3. Vacondio, F., O. Rival, C. Simonneau, E. Grellier, A. Bononi, L. Lorcy, J. Antona, and S. Bigo, "On nonlinear distortions of highly dispersive optical coherent systems," Opt. Express, Vol. 20, 1022-1032, 2012.
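To make the receiver-sensitivity use of ASE concrete, the sketch below maps OSNR to a theoretical bit error rate for Gray-coded QPSK under an AWGN assumption. The symbol rate, the 12.5 GHz reference bandwidth and the single-polarization OSNR-to-SNR conversion are stated assumptions for illustration, not parameters from the talk.

```python
import numpy as np
from scipy.special import erfc

def qpsk_ber_from_osnr(osnr_db, symbol_rate=28e9, ref_bw=12.5e9):
    """Theoretical BER of Gray-coded QPSK in AWGN, given OSNR in dB
    (0.1 nm / 12.5 GHz reference bandwidth, single polarization assumed)."""
    osnr = 10 ** (osnr_db / 10)
    snr = osnr * 2 * ref_bw / symbol_rate   # per-symbol electrical SNR
    ebn0 = snr / 2                          # 2 bits per QPSK symbol
    return 0.5 * erfc(np.sqrt(ebn0))

for osnr_db in (10, 12, 14, 16):
    print(osnr_db, "dB OSNR ->", f"{qpsk_ber_from_osnr(osnr_db):.2e}")
```

In a lab measurement the loop above would be replaced by stepping the ASE loading and counting errors, with the AWGN curve serving as the back-to-back reference.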

Relevance:

90.00%

Publisher:

Abstract:

We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier–Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager–Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
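The numerical core of a minimum action method can be illustrated on a toy gradient system. The sketch below discretizes a path between the two minima of a one-dimensional double well and minimizes the Freidlin–Wentzell action with a generic quasi-Newton optimizer; the double-well potential, time horizon and discretization are illustrative choices, not the geophysical models studied here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bistable dynamics dx/dt = b(x) + noise, with b = -V'(x), V(x) = (x^2 - 1)^2 / 4.
def drift(x):
    return -x * (x ** 2 - 1)

T, N = 10.0, 200                 # time horizon and number of path segments
dt = T / N
x0, x1 = -1.0, 1.0               # the two coexisting attractors

def action(interior):
    """Discrete Freidlin-Wentzell action S = 0.5 * sum |dx/dt - b(x)|^2 dt."""
    path = np.concatenate(([x0], interior, [x1]))
    vel = np.diff(path) / dt
    mid = 0.5 * (path[:-1] + path[1:])          # midpoint rule for the drift
    return 0.5 * np.sum((vel - drift(mid)) ** 2) * dt

guess = np.linspace(x0, x1, N + 1)[1:-1]         # straight-line initial path
result = minimize(action, guess, method="L-BFGS-B")
optimal_path = np.concatenate(([x0], result.x, [x1]))
print("minimum action ≈", round(result.fun, 4))
```

For this gradient toy problem the minimizer recovers the expected instanton-like path climbing over the potential barrier; the geophysical applications replace the scalar drift with the discretized quasi-geostrophic dynamics.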

Relevance:

90.00%

Publisher:

Abstract:

Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is unacceptably high. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale testing at the Wall of Wind (WoW) and small-scale testing in a Boundary Layer Wind Tunnel (BLWT), combined with a Computational Fluid Dynamics (CFD) approach, was adopted. This provided new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the roof deck, to attics, to room partitions. Effects of sudden breaching, existing dominant openings on building envelopes, and compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction was necessary for low-wind-speed testing facilities; for example, a building without volume correction experienced a response four times faster and exhibited 30-40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated, whereas one-sided openings aggravated, the IP; (iii) larger dominant openings produced a higher IP on the building envelope, and an off-centre opening on the wall exhibited 30-40% higher IP than centrally located openings; (iv) compartmentalization amplified the intensity of the IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for A_inlet/A_outlet > 1 than for A_inlet/A_outlet < 1, which in effect reduced the mixing of air inside the building and hence the ventilation effectiveness; (ii) the presence of multi-room partitioning increased the pressure differential and consequently the air exchange rate. Overall, good agreement was found between the observed large-scale, small-scale and CFD-based IP responses. Comparisons with ASCE 7-10 consistently demonstrated that the code underestimated peak positive and suction IP.
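The link between pressure differential and air exchange rate mentioned above can be illustrated with the standard orifice-flow relation; the discharge coefficient, opening area, building volume and pressure value below are assumptions chosen for illustration, not measurements from the WoW or BLWT tests.

```python
import math

def airflow_m3_per_s(delta_p_pa, opening_area_m2, discharge_coeff=0.61, rho=1.2):
    """Orifice-flow estimate of volumetric airflow: Q = Cd * A * sqrt(2 * dP / rho)."""
    return discharge_coeff * opening_area_m2 * math.sqrt(2.0 * delta_p_pa / rho)

def air_changes_per_hour(q_m3_per_s, volume_m3):
    """Convert a volumetric flow into air changes per hour for a given room volume."""
    return q_m3_per_s * 3600.0 / volume_m3

# Illustrative numbers: 50 Pa pressure differential across a 0.5 m^2 opening
# ventilating a 250 m^3 single-room volume.
q = airflow_m3_per_s(delta_p_pa=50.0, opening_area_m2=0.5)
print(f"Q ≈ {q:.2f} m^3/s, ACH ≈ {air_changes_per_hour(q, 250.0):.0f}")
```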

Relevance:

90.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale, high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation in which a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata of real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
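The time-dilation idea behind coupling real hosts to a slower-than-real-time simulator can be sketched in a few lines: real wall-clock time is scaled by a time dilation factor (TDF) so that, from the application's perspective, the simulated network appears to keep up. This is only a conceptual illustration of the technique, not SVEET's actual implementation.

```python
import time

class DilatedClock:
    """Maps real (wall-clock) time to virtual time using a time dilation
    factor (TDF): with tdf = 10, ten real seconds appear as one virtual second,
    giving a slow simulator ten times more wall-clock time per virtual second."""

    def __init__(self, tdf: float):
        self.tdf = tdf
        self.start = time.monotonic()

    def virtual_now(self) -> float:
        return (time.monotonic() - self.start) / self.tdf

    def sleep_virtual(self, seconds: float) -> None:
        # A virtual delay costs tdf times as much real time.
        time.sleep(seconds * self.tdf)

clock = DilatedClock(tdf=10.0)
clock.sleep_virtual(0.1)                 # application asks for 100 ms of virtual delay
print(round(clock.virtual_now(), 2))     # ~0.1 virtual seconds have elapsed
```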

Relevance:

90.00%

Publisher:

Abstract:

Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW co-simulation platform is used to ease debugging and verification in very large-scale integration (VLSI) design. To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of the memory controller in the Verilog Hardware Description Language (Verilog HDL) that reduces memory consumption and the load on the processor; (2) the testing part of the neural network algorithm is hardwired to improve speed and performance; and (3) the design takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z).

Relevance:

90.00%

Publisher:

Abstract:

Globally, small-scale fisheries (SSFs) are driven by climate, governance, and market factors of social-ecological change, presenting both challenges and opportunities. The ability of small-scale fishermen and buyers to adapt to changing conditions allows participants to survive economic or environmental disturbances and to benefit from optimal conditions. The study presented here identifies key large-scale factors that drive SSFs in California to shift focus among targets and that dictate long-term trends in landings. We use Elinor Ostrom's Social-Ecological System (SES) framework to apply an interdisciplinary approach when identifying potential factors and when understanding the complex dynamics of these fisheries. We analyzed the interactions among Monterey Bay SSFs over the past four decades since the passage of the Magnuson-Stevens Fishery Conservation and Management Act of 1976. In this region, the Pacific sardine (Sardinops sagax), northern anchovy (Engraulis mordax), and market squid (Loligo opalescens) fisheries comprise a tightly linked system where shifting focus among fisheries is a key element of adaptive capacity and reduced social and ecological vulnerability. Using a cluster analysis of landings, we identified four modes from 1974 to 2012 that were dominated by squid, sardine, anchovy, or lacked any dominance, enabling us to identify external drivers attributed to a change in fishery dominance at seven distinct transition points. Overall, we show that market and climate factors drive the transitions among dominance modes. Governance phases most dictated long-term trends in landings and are best viewed as a response to changes in perceived biomass and thus a proxy for biomass. Our findings suggest that, globally, small-scale fishery managers should consider enabling shifts in effort among fisheries and retaining existing flexibility, as adaptive capacity is a critical determinant of social and ecological resilience.
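A cluster analysis of annual landings compositions, of the kind described above, can be sketched as follows. The landings figures are synthetic and k-means with four clusters is just one reasonable choice; the study's exact clustering method is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic annual landings (tonnes) for squid, sardine and anchovy, 1974-2012.
years = np.arange(1974, 2013)
landings = np.abs(rng.normal(loc=(5000, 3000, 2000), scale=1500,
                             size=(len(years), 3)))

# Cluster years by the *composition* of landings (shares, not totals),
# so a squid-dominated year groups with other squid-dominated years.
shares = landings / landings.sum(axis=1, keepdims=True)
modes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shares)

# Candidate transition points: years where the dominance mode changes.
transitions = years[1:][modes[1:] != modes[:-1]]
print("mode per year:", dict(zip(years.tolist(), modes.tolist())))
print("candidate transition years:", transitions.tolist())
```

With real landings data, the transition years flagged this way would then be cross-referenced against the market, climate and governance drivers discussed in the study.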

Relevance:

90.00%

Publisher:

Abstract:

The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing effective demand. The goal of this research was to maximize the operational profit of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. For integrating the two decisions, a Mixed-Integer Linear Program (MILP) was formulated which can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all the selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation exhibits a block-diagonal structure and can be decomposed into one or more sub-problems (i.e. one sub-problem for each customer order) and a master problem by applying Dantzig-Wolfe decomposition principles. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was developed. Various approximation algorithms were developed to further improve the runtime. The experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity and an objective of maximizing profit with a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job-shop environment with multiple resources; both regular and overtime resources are considered. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system to be used on a daily basis to help make intelligent decisions in an MTO operation.
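A drastically simplified version of the coupled acceptance/capacity decision can be written as a small MILP. The sketch below, using the PuLP modelling library with invented order data and a single aggregate resource, only illustrates the structure of accept/reject binaries tied to deadline-driven capacity constraints; it is not the dissertation's job-shop formulation or its Branch-and-Price algorithm.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# Hypothetical orders: profit, processing hours required, deadline (period index).
orders = {
    "O1": {"profit": 900,  "hours": 12, "deadline": 1},
    "O2": {"profit": 650,  "hours": 10, "deadline": 2},
    "O3": {"profit": 1200, "hours": 18, "deadline": 2},
    "O4": {"profit": 400,  "hours": 6,  "deadline": 1},
}
capacity_per_period = 20          # aggregate regular-time hours in each period
periods = [1, 2]

prob = LpProblem("order_acceptance", LpMaximize)
accept = LpVariable.dicts("accept", orders, cat=LpBinary)

# Maximize the total profit of accepted orders.
prob += lpSum(orders[o]["profit"] * accept[o] for o in orders)

# Cumulative capacity: accepted work due by period t must fit in t periods,
# which guarantees deadline feasibility on a single aggregate resource.
for t in periods:
    prob += (
        lpSum(orders[o]["hours"] * accept[o]
              for o in orders if orders[o]["deadline"] <= t)
        <= capacity_per_period * t
    )

prob.solve()
print({o: int(value(accept[o])) for o in orders})
```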

Relevance:

90.00%

Publisher:

Abstract:

Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper interval from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower interval from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at P_confining = P_differential average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
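The porosities referred to above come from Archie's Law, which relates formation resistivity to porosity. The sketch below implements the standard form with assumed values for the pore-water resistivity and the Archie coefficients (a, m); these constants and the resistivities are illustrative, not those used for Hole 504B.

```python
import numpy as np

def archie_porosity(bulk_resistivity, water_resistivity, a=1.0, m=2.0):
    """Invert Archie's Law, R_t = a * R_w * phi**(-m), for porosity phi."""
    formation_factor = bulk_resistivity / water_resistivity
    return (a / formation_factor) ** (1.0 / m)

# Illustrative bulk resistivities (ohm-m) for a few depths in the upper crust,
# with an assumed seawater-like pore fluid resistivity.
bulk = np.array([8.0, 30.0, 120.0, 500.0])
rw = 0.2
for rt, phi in zip(bulk, archie_porosity(bulk, rw)):
    print(f"R_t = {rt:6.1f} ohm-m  ->  porosity ≈ {phi:.3f}")
```

Porosities obtained this way from the large-scale resistivity experiment are what the logged Vp values are compared against through the Hyndman et al. (1984) and Toksöz et al. (1976) velocity-porosity relations.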

Relevance:

90.00%

Publisher:

Abstract:

Category hierarchy is an abstraction mechanism for efficiently managing large-scale resources. In an open environment, a category hierarchy will inevitably become inappropriate for managing resources that constantly change with unpredictable patterns, and an inappropriate category hierarchy misleads the management of resources. The increasing dynamicity and scale of online resources increase the need to maintain category hierarchies automatically. Previous studies of category hierarchies mainly focus either on the generation of a category hierarchy or on the classification of resources under a pre-defined category hierarchy; the automatic maintenance of category hierarchies has been neglected. Making abstractions among categories and measuring the similarity between categories are the two basic operations in generating a category hierarchy. Humans are good at making abstractions but limited in their ability to calculate similarities across large-scale resources; computational models are good at calculating such similarities but limited in their ability to make abstractions. To take advantage of both the human view and computational ability, this paper proposes a two-phase approach to automatically maintaining a category hierarchy at two scales by detecting internal pattern changes in categories. The global phase clusters resources to generate a reference category hierarchy and measures the similarity between categories to detect inappropriate categories in the initial category hierarchy; the accuracy of the clustering approach in generating the reference hierarchy determines the rationality of the global maintenance. The local phase detects topical changes and then adjusts inappropriate categories with three local operations. The global phase can quickly target inappropriate categories top-down and carry out cross-branch adjustment, which also accelerates the local-phase adjustments; the local phase detects and adjusts the local-range inappropriate categories that are not adjusted in the global phase. By combining the two complementary phases, the approach can significantly improve the topical cohesion and accuracy of a category hierarchy. A new measure is proposed for evaluating a category hierarchy that considers not only the balance of the hierarchical structure but also the accuracy of classification. Experiments show that the proposed approach is feasible and effective in adjusting inappropriate category hierarchies. The proposed approach can be used to maintain the category hierarchy for managing various resources in dynamic application environments. It also provides a way to specialize a current online category hierarchy so that resources are organized with more specific categories.
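The global-phase idea of comparing categories by similarity can be pictured with a standard text pipeline. The sketch below builds a TF-IDF centroid for each existing category and prints the pairwise centroid similarities, which unusually high values would flag for adjustment; the toy documents and category names are illustrative assumptions, not the paper's datasets or its clustering algorithm.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy resources grouped under an existing (possibly outdated) category hierarchy.
categories = {
    "databases":        ["relational schema design", "sql query optimisation"],
    "machine_learning": ["neural network training", "gradient descent optimisation"],
    "deep_learning":    ["neural network layers", "convolutional network training"],
}

docs, labels = [], []
for cat, texts in categories.items():
    docs += texts
    labels += [cat] * len(texts)

X = TfidfVectorizer().fit_transform(docs).toarray()
names = list(categories)
centroids = np.vstack([X[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
                       for c in names])

# Inter-category similarity: unusually similar centroids flag categories
# that the local phase might merge or re-split.
sim = cosine_similarity(centroids)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]:>16} vs {names[j]:<16} similarity = {sim[i, j]:.2f}")
```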

Relevance:

90.00%

Publisher:

Abstract:

Aircraft manufacturing industries are looking for solutions to increase their productivity. One such solution is to apply metrology systems during production and assembly processes. The Metrology Process Model (MPM) (Maropoulos et al., 2007) has been introduced, which emphasises the integration of metrology applications with assembly planning, manufacturing processes and product design. Measurability analysis is part of the MPM, and its aim is to check the feasibility of measuring designed large-scale components. Measurability analysis has been integrated in order to provide an efficient matching system. The metrology database is structured by developing a Metrology Classification Model; the feature-based selection model is also explained. By combining the two classification models, a novel approach and selection process for an integrated measurability analysis system (MAS) are introduced, and such an integrated MAS can provide much more meaningful matching results for operators. © Springer-Verlag Berlin Heidelberg 2010.
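At its simplest, the matching step can be pictured as a capability filter: each candidate measurement system is checked against the measurand's size and uncertainty requirement. The system records and thresholds below are invented for illustration and are not part of the published classification models.

```python
# Hypothetical capability records for large-volume measurement systems;
# names, ranges and uncertainties are invented for illustration.
systems = [
    {"name": "system_A", "max_range_m": 40.0, "uncertainty_mm": 0.25},
    {"name": "system_B", "max_range_m": 30.0, "uncertainty_mm": 1.00},
    {"name": "system_C", "max_range_m": 80.0, "uncertainty_mm": 0.05},
]

def measurable_by(feature_size_m, required_uncertainty_mm):
    """Return the systems whose working range covers the feature and whose
    stated uncertainty meets the feature's tolerance requirement."""
    return [s["name"] for s in systems
            if s["max_range_m"] >= feature_size_m
            and s["uncertainty_mm"] <= required_uncertainty_mm]

print(measurable_by(feature_size_m=25.0, required_uncertainty_mm=0.5))
```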

Relevance:

90.00%

Publisher:

Abstract:

Graphene, first isolated in 2004 and the subject of the 2010 Nobel Prize in Physics, has generated a tremendous amount of research interest in recent years due to its remarkable mechanical and electrical properties. However, difficulties in large-scale production and low as-prepared surface area have hindered commercial applications. In this dissertation, a new material is described that incorporates the superior electrical properties of graphene edge planes into the high-surface-area framework of carbon nanotube forests using a scalable and reproducible technology.

The objectives of this research were to investigate the growth parameters and mechanisms of a graphene-carbon nanotube hybrid nanomaterial termed “graphenated carbon nanotubes” (g-CNTs), examine the applicability of g-CNT materials for applications in electrochemical capacitors (supercapacitors) and cold-cathode field emission sources, and determine materials characteristics responsible for the superior performance of g-CNTs in these applications. The growth kinetics of multi-walled carbon nanotubes (MWNTs), grown by plasma-enhanced chemical vapor deposition (PECVD), was studied in order to understand the fundamental mechanisms governing the PECVD reaction process. Activation energies and diffusivities were determined for key reaction steps and a growth model was developed in response to these findings. Differences in the reaction kinetics between CNTs grown on single-crystal silicon and polysilicon were studied to aid in the incorporation of CNTs into microelectromechanical systems (MEMS) devices. To understand processing-property relationships for g-CNT materials, a Design of Experiments (DOE) analysis was performed for the purpose of determining the importance of various input parameters on the growth of g-CNTs, finding that varying temperature alone allows the resultant material to transition from CNTs to g-CNTs and finally carbon nanosheets (CNSs): vertically oriented sheets of few-layered graphene. In addition, a phenomenological model was developed for g-CNTs. By studying variations of graphene-CNT hybrid nanomaterials by Raman spectroscopy, a linear trend was discovered between their mean crystallite size and electrochemical capacitance. Finally, a new method for the calculation of nanomaterial surface area, more accurate than the standard BET technique, was created based on atomic layer deposition (ALD) of titanium oxide (TiO2).
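The extraction of an activation energy from growth-rate measurements follows a standard Arrhenius analysis. The sketch below fits ln(rate) against 1/T for synthetic data; the temperatures, rates and the assumed ~1.5 eV activation energy are invented for illustration rather than taken from the PECVD experiments.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Synthetic growth rates (um/min) at several substrate temperatures (K),
# generated from an assumed activation energy of ~1.5 eV plus noise.
T = np.array([923.0, 973.0, 1023.0, 1073.0, 1123.0])
noise = 1 + 0.05 * np.random.default_rng(2).normal(size=T.size)
rate = 3e7 * np.exp(-1.5 * 1.602e-19 / (1.381e-23 * T)) * noise

# Arrhenius fit: ln(rate) = ln(A) - Ea / (R * T), so the slope of ln(rate)
# versus 1/T gives -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea_kJ_per_mol = -slope * R / 1000.0
print(f"activation energy ≈ {Ea_kJ_per_mol:.0f} kJ/mol "
      f"(≈ {Ea_kJ_per_mol / 96.485:.2f} eV)")
```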