Abstract:
Background: The large-scale production of G-protein coupled receptors (GPCRs) for functional and structural studies remains a challenge. Recent successes have been made in the expression of a range of GPCRs using Pichia pastoris as an expression host. P. pastoris has a number of advantages over other expression systems, including the ability to post-translationally modify expressed proteins, relatively low production costs and the ability to grow to very high cell densities. Several previous studies have described the expression of GPCRs in P. pastoris using shaker flasks, which allow culturing of small volumes (500 ml) with moderate cell densities (OD600 ≈ 15). The use of bioreactors, which allow straightforward culturing of large volumes together with optimal control of growth parameters (including pH and dissolved oxygen) to maximise cell densities and expression of the target receptors, is an attractive alternative. The aim of this study was to compare the levels of expression of the human Adenosine 2A receptor (A2AR) in P. pastoris under control of a methanol-inducible promoter in both flask and bioreactor cultures. Results: Bioreactor cultures yielded an approximately fivefold increase in cell density (OD600 ≈ 75) compared to flask cultures prior to induction, and a doubling in functional expression level per mg of membrane protein, representing a significant optimisation. Furthermore, analysis of a C-terminally truncated A2AR, terminating at residue V334, yielded the highest levels (200 pmol/mg) so far reported for expression of this receptor in P. pastoris. This truncated form of the receptor also proved resistant to C-terminal degradation, in contrast to the WT A2AR, and is therefore more suitable for further functional and structural studies. Conclusion: Large-scale expression of the A2AR in P. pastoris bioreactor cultures results in significant increases in functional expression compared to traditional flask cultures.
Abstract:
High-resolution simulations over a large tropical domain (∼20°S–20°N and 42°E–180°E) using both explicit and parameterized convection are analyzed and compared to observations during a 10-day case study of an active Madden-Julian Oscillation (MJO) event. The parameterized convection model simulations at both 40 km and 12 km grid spacing have a very weak MJO signal and little eastward propagation. A 4 km explicit convection simulation using Smagorinsky subgrid mixing in the vertical and horizontal dimensions exhibits the best MJO strength and propagation speed. 12 km explicit convection simulations also perform much better than the 12 km parameterized convection run, suggesting that the convection scheme, rather than horizontal resolution, is key for these MJO simulations. Interestingly, a 4 km explicit convection simulation using the conventional boundary layer scheme for vertical subgrid mixing (but still using Smagorinsky horizontal mixing) completely loses the large-scale MJO organization, showing that relatively high resolution with explicit convection does not guarantee a good MJO simulation. Models with a good MJO representation have a more realistic relationship between lower-free-tropospheric moisture and precipitation, supporting the idea that moisture-convection feedback is a key process for MJO propagation. There is also increased generation of available potential energy and conversion of that energy into kinetic energy in models with a more realistic MJO, which is related to larger zonal variance in convective heating and vertical velocity, larger zonal temperature variance around 200 hPa, and larger correlations between temperature and ascent (and between temperature and diabatic heating) between 500 and 400 hPa.
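The moisture-precipitation relationship used here to separate good and poor MJO simulations is a standard conditional-mean diagnostic that is easy to illustrate. A minimal sketch with purely synthetic data (no values from the study; the bin edges and fields are illustrative assumptions): bin-mean precipitation as a function of lower-free-tropospheric moisture.

```python
import numpy as np

def precip_vs_moisture(moisture, precip, bins):
    """Bin-mean precipitation conditioned on lower-free-tropospheric
    moisture: the moisture-convection relationship diagnostic."""
    idx = np.digitize(moisture, bins)
    return np.array([precip[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])

# Synthetic fields: precipitation picking up sharply at high moisture.
rng = np.random.default_rng(0)
q = rng.uniform(0.0, 1.0, 10_000)                        # normalized moisture
p = np.maximum((q - 0.5) * 20 + rng.normal(0, 1, 10_000), 0.0)
print(precip_vs_moisture(q, p, np.linspace(0, 1, 11)).round(2))
# Bin means rise steeply in the moist bins, as in the better MJO models.
```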
Abstract:
Atmospheric Rivers (ARs), narrow plumes of enhanced moisture transport in the lower troposphere, are a key synoptic feature behind winter flooding in midlatitude regions. This article develops an algorithm which uses the spatial and temporal extent of the vertically integrated horizontal water vapor transport for the detection of persistent ARs (lasting 18 h or longer) in five atmospheric reanalysis products. Applying the algorithm to the different reanalyses in the vicinity of Great Britain during the winter half-years of 1980–2010 (31 years) demonstrates generally good agreement of AR occurrence between the products. The relationship between persistent AR occurrences and winter floods is demonstrated using winter peaks-over-threshold (POT) floods (with on average one flood peak per winter). In the nine study basins, the proportion of winter POT-1 floods associated with persistent ARs ranged from approximately 40% to 80%. A Poisson regression model was used to describe the relationship between the number of ARs in the winter half-years and large-scale climate variability. A significant negative dependence was found between AR totals and the Scandinavian Pattern (SCP), with a greater frequency of ARs associated with lower SCP values.
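The persistence criterion at the core of the detection step can be sketched compactly. A minimal illustration, assuming a 6-hourly series of vertically integrated vapor transport (IVT) magnitude at a fixed location and a hypothetical threshold of 500 kg m⁻¹ s⁻¹; the algorithm's actual spatial-extent criteria and threshold choice are not reproduced here:

```python
import numpy as np

def persistent_ar_events(ivt, threshold=500.0, step_hours=6, min_hours=18):
    """Return (start, end) index pairs of runs where IVT exceeds the
    threshold for at least `min_hours` (18 h or longer, as in the paper)."""
    above = ivt > threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * step_hours >= min_hours:
                events.append((start, i - 1))
            start = None
    if start is not None and (len(above) - start) * step_hours >= min_hours:
        events.append((start, len(above) - 1))
    return events

# Synthetic 6-hourly IVT series (kg m^-1 s^-1): one 24 h event, one short spike.
ivt = np.array([200, 250, 620, 700, 680, 640, 300, 550, 280, 240])
print(persistent_ar_events(ivt))  # [(2, 5)] -- only the 24 h exceedance counts
```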
Large-scale atmospheric dynamics of the wet winter 2009–2010 and its impact on hydrology in Portugal
Abstract:
The anomalously wet winter of 2010 had a very important impact on the Portuguese hydrological system. Given the detrimental effects that reduced precipitation in Portugal has on environmental and socio-economic systems, the 2010 winter was predominantly beneficial, reversing the precipitation deficits accumulated during the previous hydrological years. The recorded anomalously high precipitation amounts contributed to an overall increase in river runoff and dam recharge in the 4 major river basins. In synoptic terms, the 2010 winter was characterised by an anomalously strong westerly flow component over the North Atlantic that triggered high precipitation amounts. A dynamically coherent enhancement in the frequency of mid-latitude cyclones close to Portugal, accompanied by significant increases in the occurrence of cyclonic, south and south-westerly circulation weather types, is also noteworthy. Furthermore, the prevalence of a strong negative phase of the North Atlantic Oscillation (NAO) emphasises the main dynamical features of the 2010 winter. A comparison of the hydrological and atmospheric conditions between the 2010 winter and the 2 previous anomalously wet winters (1996 and 2001) was also carried out to isolate not only their similarities but also their contrasting conditions, highlighting the limitations of estimating winter precipitation amounts in Portugal using solely the NAO phase as a predictor.
Abstract:
It is known that, despite companies' efforts to improve the quality of their products, design and assembly defects result in large costs, both for the repairs themselves and for providing feedback to the origin of each defect. The purpose of this paper is to study these types of defects and the defect rates in design and assembly. The paper presents a web-based questionnaire answered by 29 companies. The results show that the defect rate (defects per product) spanned from 0.01 to 10. Design and assembly defects accounted for 46% and 23%, respectively, of all recorded defects. A case study is also presented, performed at a company that recently implemented a modular architecture. In this company, defects from 5,700 integrated product architectures are compared with defects from 431 modular architectures. The average defect rate increased by 21.5% – from 0.65 to 0.79 – when the more modular architecture was implemented. Furthermore, the study showed that assembly defects decreased while design defects increased. The results presented in this paper will also support the development of the MPV (Module Property Verification) method, which is briefly described.
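The quoted percentage follows directly from the two defect rates; a quick check of the arithmetic:

```python
# Defect rates quoted in the study: defects per product before and after
# the shift to a modular architecture.
integrated_rate = 0.65   # 5,700 integrated product architectures
modular_rate = 0.79      # 431 modular architectures

increase = (modular_rate - integrated_rate) / integrated_rate
print(f"{increase:.1%}")  # 21.5%
```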
Abstract:
This paper analyses the static and dynamic behavior of a railroad track model in the laboratory. Measurements of stresses and strains on a large-scale railroad track apparatus were studied. The model includes compacted soil representing the final layers of the platform, a ballast layer, and ties (steel, wooden, and pre-stressed concrete). The soil and the soil-ballast interface were instrumented with pneumatic stress gauges. Settlement measurement devices were positioned at the same levels as the load cells. Loads were applied by hydraulic actuators, both statically and dynamically. After the prescribed number of load cycles, at pre-determined intervals, stresses and strains were measured. Observations indicate that the stress and strain distributions transmitted by wooden or steel ties behave similarly. A more favorable behavior was observed with pre-stressed concrete mono-block ties. A non-linear response was observed after a threshold number of cycles was surpassed, showing that the strain modulus increases with the number of cycles. © 2009 IOS Press.
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time since the problem is combinatorial and non-linear, involving several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Two further challenges are now faced: designing SR plans for larger systems that are as good as those for relatively smaller ones, and SR plans for multiple faults that are as good as those for a single fault. In order to tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN and a new heuristic. This heuristic focuses the application of NDE operators on alarming network zones according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
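Both evolutionary algorithms rank candidate SR plans by Pareto dominance. A minimal sketch of that core comparison, with two hypothetical objectives (switching operations and unrestored load) standing in for the papers' actual objectives and constraints:

```python
def dominates(a, b):
    """True if plan `a` Pareto-dominates plan `b` (all objectives minimised):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(plans):
    """Filter a population of SR plans down to its Pareto front."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

# Hypothetical plans: (number of switching operations, unrestored load in MW).
plans = [(12, 0.0), (8, 1.5), (15, 0.0), (8, 3.0)]
print(non_dominated(plans))  # [(12, 0.0), (8, 1.5)]
```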
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases, to assess the reliability and accuracy of different model versions. The most effective versions were then applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. In contrast, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
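The diffusive shallow water approximation that both models adopt drops the inertial terms, so the inter-cell flux follows from the water-surface slope and a friction law. A minimal 1D sketch of one explicit finite-volume update, under assumptions not taken from the work (Manning friction, uniform grid; CA2D's actual scheme, wetting/drying and 2D treatment are not reproduced):

```python
import numpy as np

def diffusive_wave_step(h, z, dx, dt, n=0.03):
    """One explicit update of water depth h over bed elevation z using
    the diffusive (zero-inertia) shallow water approximation."""
    eta = z + h                          # water-surface elevation
    s = (eta[1:] - eta[:-1]) / dx        # surface slope at cell interfaces
    h_face = np.maximum(h[:-1], h[1:])   # interface depth
    # Manning-based unit discharge, directed down the surface slope.
    q = -np.sign(s) * h_face ** (5.0 / 3.0) * np.sqrt(np.abs(s)) / n
    dh = np.zeros_like(h)
    dh[:-1] -= q * dt / dx               # outflow from the left cell...
    dh[1:] += q * dt / dx                # ...is inflow to the right cell
    return h + dh

# Flat bed with a 1 m deep mound of water: one step moves water outward.
z = np.zeros(50)
h = np.zeros(50); h[20:25] = 1.0
h = diffusive_wave_step(h, z, dx=10.0, dt=0.1)
print(h[18:27].round(3))
# [0. 0.105 0.895 1. 1. 1. 0.895 0.105 0.] -- repeated stepping must
# respect the explicit scheme's stability limit on dt.
```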
Abstract:
Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties, and therefore to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients and second derivatives at the CCSD and CCSD(T) levels of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the ‘High accuracy Extrapolated Ab initio Thermochemistry’ (HEAT) protocols, the effects of increased basis-set size and higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized to molecules containing second-row atoms in the case of vinyl chloride, which allowed the different experimentally reported values to be discriminated. In the case of the benzene molecule it was shown that even for molecules of this size chemical accuracy can be achieved. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries, provided that vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory to the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. An NMR chemical shift study of the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatments. The equilibrium structure of diacetylene has been determined from the combination of experimental rotational constants of thirteen isotopic species and zero-point vibrational corrections calculated at various quantum-chemical levels; these empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations of the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination to date of the molecular equilibrium structure of ferrocene is presented.
Abstract:
Over the last few decades, bioinformatics has played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem is to learn as much as possible about its coding regions. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. This thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called “The Bologna Annotation Resource Plus” (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is done by way of cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications developed during my doctorate include the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, as a web server for functional and structural protein sequence annotation, BAR+ is freely available at http://bar.biocomp.unibo.it/bar2.0.
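Transfer-by-homology within a cluster can be illustrated in a few lines. A minimal sketch with a hypothetical frequency-threshold rule in place of BAR+'s actual statistical validation, and with hypothetical sequence IDs:

```python
from collections import Counter

def transfer_annotations(cluster, annotations, min_fraction=0.5):
    """Transfer a GO/Pfam term to every sequence in the cluster when the
    term occurs in at least `min_fraction` of its annotated members."""
    annotated = [s for s in cluster if annotations.get(s)]
    if not annotated:
        return set()
    counts = Counter(term for s in annotated for term in annotations[s])
    return {term for term, c in counts.items()
            if c / len(annotated) >= min_fraction}

# Hypothetical cluster: three annotated sequences plus one unannotated target.
cluster = ["P1", "P2", "P3", "target"]
annotations = {"P1": {"GO:0016787"}, "P2": {"GO:0016787", "GO:0005737"},
               "P3": {"GO:0016787"}}
print(transfer_annotations(cluster, annotations))
# {'GO:0016787'} -- found in 3/3 annotated members; GO:0005737 (1/3) is not transferred
```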
Abstract:
The energy harvesting research field has grown considerably in the last decade due to increasing interest in energy-autonomous sensing systems, which require smart and efficient interfaces for extracting power from the energy source, as well as power management (PM) circuits. This thesis investigates the design trade-offs involved in minimizing the intrinsic power of PM circuits, in order to allow operation with very weak energy sources. For validation purposes, three different integrated power converter and PM circuits for energy harvesting applications are presented. They have been designed for nano-power operation, and the single-source converters can operate with input power lower than 1 μW. The first IC is a buck-boost converter for piezoelectric transducers (PZ) implementing Synchronous Electrical Charge Extraction (SECE), a non-linear energy extraction technique. Moreover, the Residual Charge Inversion technique is exploited for extracting energy from PZ with weak and irregular excitations (i.e. lower voltage), and the implemented PM policy, named Two-Way Energy Storage, considerably reduces the start-up time of the converter, improving the overall conversion efficiency. The second proposed IC is a general-purpose buck-boost converter for low-voltage DC energy sources up to 2.5 V. An ultra-low-power MPPT circuit has been designed to track variations in source power. Furthermore, a capacitive boost circuit has been included, allowing the converter to start up from a source voltage VDC0 = 223 mV. A nano-power programmable linear regulator is also included in order to provide a stable voltage to the load. The third IC implements a heterogeneous multi-source buck-boost converter. It provides up to 9 independent input channels, of which 5 are specific to PZ (with SECE) and 4 to DC energy sources with MPPT. The inductor is shared among channels, and an arbiter, designed with asynchronous logic to reduce energy consumption, avoids simultaneous access to the buck-boost core with a dynamic schedule based on source priority.
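For the DC-source converter, the role of the MPPT block can be illustrated with the textbook perturb-and-observe scheme. This is a generic sketch with a toy resistive source model, not a description of the nano-power circuit actually designed in the thesis:

```python
def perturb_and_observe(power_at, v0, dv=0.01, steps=100):
    """Generic perturb-and-observe MPPT: nudge the operating voltage and
    keep moving in whichever direction increased the harvested power."""
    v, p_prev, direction = v0, power_at(v0), +1
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy source: a 1 V open-circuit voltage behind a 100 ohm internal
# resistance, so the extracted power peaks at half the open-circuit voltage.
power = lambda v: max(v * (1.0 - v) / 100.0, 0.0)
print(round(perturb_and_observe(power, v0=0.2), 2))  # ~0.5 V, the MPP
```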
Abstract:
Biotic and abiotic phenological observations can be collected at scales from continental to local. Plant phenological observations can only be recorded where there is vegetation; fog, snow and ice are available as phenological parameters wherever they appear. The singular strength of phenological observations is the possibility of spatial intensification down to a microclimatic scale, at which meteorological measurement equipment is too expensive for intensive campaigns. The omnipresence of region-specific phenological parameters allows monitoring for a spatially much more detailed assessment of climate change than is possible with weather data. We demonstrate this concept with phenological observations from a special network in the Canton of Berne, Switzerland, with up to 600 observation sites (more than 1 per 10 km² of the inhabited area). Classic cartography, gridding, integration into a Geographic Information System (GIS) and large-scale analysis are the steps towards a detailed knowledge of the topoclimatic conditions of a mountainous area. Examples of urban phenology provide other types of spatially detailed applications. Large potential for phenological mapping in future analyses lies in combining traditionally observed species-specific phenology with remotely sensed and modelled phenology, which provide strong spatial information. This is a long journey from cartographic intuition to algorithm-based representations of phenology.
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. The creation of these tests is time-consuming, costly and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at-fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research exploits that to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method to organize the most efficient faults: most methods for determining the minimum number of inputs to cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware, this research is able to process data faster and use a simpler method for minimizing inputs.
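The netlist instrumentation idea can be sketched with a toy gate-level representation (a hypothetical format, not the tool's actual one): the chosen net is routed through a 2:1 multiplexer whose select line lets the emulator force a stuck-at value at run time.

```python
def insert_fault_mux(netlist, net):
    """Rewire `net` through a 2:1 mux: sel=0 passes the fault-free value,
    sel=1 forces the injected stuck-at value."""
    rewired = [(gate, [f"{net}_muxed" if n == net else n for n in ins], out)
               for (gate, ins, out) in netlist]
    mux = ("MUX2", [net, f"{net}_fault_val", f"{net}_fault_sel"], f"{net}_muxed")
    return rewired + [mux]

# Hypothetical netlist entries: (gate_type, input_nets, output_net).
netlist = [("AND2", ["a", "b"], "n1"), ("OR2", ["n1", "c"], "y")]
for gate in insert_fault_mux(netlist, "n1"):
    print(gate)
# ('AND2', ['a', 'b'], 'n1')
# ('OR2', ['n1_muxed', 'c'], 'y')      <- the consumer now reads the mux output
# ('MUX2', ['n1', 'n1_fault_val', 'n1_fault_sel'], 'n1_muxed')
```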
Abstract:
Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
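The decomposition idea behind the hybrid method can be sketched with a greedy list scheduler standing in for the iterative MILP phase; everything here is hypothetical, and the changeover, no-wait, precedence and storage constraints handled by the real formulation are omitted:

```python
def schedule_subsets(subsets, durations, unit_of):
    """Phase-2 sketch: schedule the operation subsets one after another,
    appending each operation to its unit's timeline (a greedy stand-in
    for the paper's iterative MILP step)."""
    unit_free = {}            # unit -> time at which it next becomes free
    start = {}
    for subset in subsets:    # phase 1: operations split into subsets
        for op in sorted(subset, key=lambda o: -durations[o]):
            u = unit_of[op]
            start[op] = unit_free.get(u, 0)
            unit_free[u] = start[op] + durations[op]
    return start, max(unit_free.values())

# Hypothetical make-and-pack instance: two make units, one shared packer.
durations = {"make_A": 4, "make_B": 3, "pack_A": 2, "pack_B": 2}
unit_of = {"make_A": "M1", "make_B": "M2", "pack_A": "P1", "pack_B": "P1"}
start, makespan = schedule_subsets(
    [["make_A", "make_B"], ["pack_A", "pack_B"]], durations, unit_of)
print(start, makespan)  # pack_B waits for the shared packer; makespan 4
```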