12 results for System components
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Earth System Models (ESMs) have been successfully developed over the past few years and are currently being used to simulate present-day climate and to make seasonal-to-interannual predictions of climate change. Supercomputer performance plays an important role in climate modelling, since one of the challenging issues for climate modellers is to couple Earth system components efficiently and accurately on present-day computer architectures. At the Barcelona Supercomputing Center (BSC), we work with the EC-Earth System Model. EC-Earth is an ESM that currently consists of an atmosphere (IFS) and an ocean (NEMO) model, which communicate with each other through the OASIS coupler. Additional modules (e.g. for chemistry and vegetation) are under development. The EC-Earth ESM has been ported successfully to different high-performance computing platforms (e.g. IBM P6 AIX, CRAY XT-5, Intel-based Linux clusters, SGI Altix) at different sites in Europe (e.g. KNMI, ICHEC, ECMWF). The objective of the first phase of the project was to identify and document the issues related to the portability and performance of EC-Earth on the MareNostrum supercomputer, a system based on IBM PowerPC 970MP processors running a SUSE Linux distribution. EC-Earth was successfully ported to MareNostrum, and a compilation incompatibility was solved by a two-step compilation approach using the XLF version 10.1 and 12.1 compilers. In addition, EC-Earth performance was analysed with respect to scalability, and traces were analysed with the Paraver software. This analysis showed that running EC-Earth with a larger number of IFS CPUs (>128) is not feasible at the moment, since some issues exist with the IFS-NEMO load balance and MPI communications.
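As an illustrative aside (not the OASIS coupler itself, and with made-up physics), the coupling pattern described above can be caricatured as two components that advance independently and exchange boundary fields at fixed coupling intervals; the exchange points are exactly where an IFS-NEMO load imbalance would surface:

```python
# Toy caricature of a coupled atmosphere-ocean time loop. The constants
# (300.0 K relaxation target, 0.1 and 0.5 coefficients) are arbitrary
# illustrative values, not EC-Earth parameters.

def step_atmosphere(sst):
    """Stand-in for IFS: compute a surface flux from sea-surface temperature."""
    return 0.1 * (300.0 - sst)

def step_ocean(sst, flux):
    """Stand-in for NEMO: update sea-surface temperature from the received flux."""
    return sst + 0.5 * flux

def run_coupled(n_coupling_steps, sst=290.0):
    """Alternate the two components, exchanging fields once per coupling step."""
    for _ in range(n_coupling_steps):
        flux = step_atmosphere(sst)   # atmosphere -> coupler -> ocean
        sst = step_ocean(sst, flux)   # ocean -> coupler -> atmosphere
    return sst

final_sst = run_coupled(100)
```

In a real run each `step_*` call is a parallel model integration, so if one component finishes its interval much earlier than the other, its processors idle at the exchange, which is the balance issue noted above.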
Abstract:
The velocity of dripline flushing in subsurface drip irrigation (SDI) systems affects system design, cost, management, performance, and longevity. A 30-day field study was conducted at Kansas State University to analyze the effect of four targeted flushing velocities (0.23, 0.30, 0.46, and 0.61 m/s) for a fixed 15 min duration of flushing and three flushing frequencies (no flushing, or flushing every 15 or 30 days) on SDI emitter discharge and on sediments within the dripline and removed in the flushing water. At the end of the field experiment (371 h), the amount of solids carried away by the flushing water and retained in every lateral was determined, as well as the laboratory-measured discharge of every single emitter within each dripline. Greater dripline flushing velocities, which also resulted in greater flushing volumes, tended to result in greater amounts of solids in the flushing water, but the differences were not always statistically significant. Neither the frequency of flushing nor the interaction of flushing frequency and velocity significantly affected the amount of solids in the flushing water. There was a greater concentration of solids in the beginning one-third of the 90 m laterals, particularly for treatments with no flushing or with slower dripline flushing velocities. As flushing velocity, and concurrently flushing volume, increased, there was a tendency toward greater solids removal and/or a more equal distribution within the dripline. At the end of the field study, the average emitter discharge as measured in the laboratory for a total of 3970 emitters was 0.64 L/h, which was significantly less (by approximately 2.5%) than the discharge of new and unused emitters. Only six emitters were nearly or fully clogged, with discharges between 0% and 5% of those of new and unused emitters. Flushing velocity and flushing frequency did not have consistent significant effects on emitter discharge, and those numerical differences that did exist were small (<3%).
Emitter discharge was approximately 3% less at the distal ends of the driplines (last 20% of the dripline). Although not a specific factor in the study, the results on solids removal during flushing and solids retention within the different dripline sections suggest that increasing the duration of flushing may be a more cost-effective management option than increasing the dripline flushing velocity through SDI system design. Finally, although microirrigation system components have improved over the years, the need for flushing to remove solids and reduce clogging potential has not been eliminated.
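As a side note (not from the study), the coupling between flushing velocity and flushing volume mentioned above follows directly from pipe geometry; a minimal sketch, assuming a hypothetical 16 mm inside diameter for illustration:

```python
import math

def flushing_velocity(flow_lph: float, inside_diameter_mm: float) -> float:
    """Mean flushing velocity (m/s) from flushline flow rate (L/h) and
    dripline inside diameter (mm): v = Q / A."""
    area_m2 = math.pi * (inside_diameter_mm / 1000.0 / 2.0) ** 2
    flow_m3s = flow_lph / 1000.0 / 3600.0
    return flow_m3s / area_m2

def flushing_volume_l(velocity_ms: float, inside_diameter_mm: float,
                      duration_min: float) -> float:
    """Water volume (L) passing a cross-section at a given velocity over
    the flushing duration: V = v * A * t."""
    area_m2 = math.pi * (inside_diameter_mm / 1000.0 / 2.0) ** 2
    return velocity_ms * area_m2 * duration_min * 60.0 * 1000.0

# For the study's 0.46 m/s treatment and 15 min duration, an assumed 16 mm
# dripline would pass roughly 83 L per flushing event.
volume = flushing_volume_l(0.46, 16.0, 15.0)
```

This is why the abstract treats higher velocity and higher volume as concurrent: for a fixed duration and diameter, volume scales linearly with velocity.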
Abstract:
Report for the scientific sojourn carried out at the Model-based Systems and Qualitative Reasoning Group (Technical University of Munich), from September until December 2005. Constructed wetlands (CWs), or modified natural wetlands, are used all over the world as wastewater treatment systems for small communities because they can provide high treatment efficiency with low energy consumption and low construction, operation and maintenance costs. Their treatment process is very complex because it includes physical, chemical and biological mechanisms such as microbial oxidation, microbial reduction, filtration, sedimentation and chemical precipitation, and these processes can be influenced by different factors. In order to guarantee the performance of CWs, an operation and maintenance program must be defined for each Wastewater Treatment Plant (WWTP). The main objective of this project is to provide computer support for the definition of the most appropriate operation and maintenance protocols to guarantee the correct performance of CWs. To reach this objective, the definition of models representing the knowledge about CWs has been proposed: the components involved in the sanitation process, the relations among these units, and the processes that remove pollutants. Horizontal Subsurface Flow CWs are chosen as a case study and filtration is selected as the first process to be modelled. However, the goal is to represent the process knowledge in such a way that it can be reused for other types of WWTP.
Abstract:
Colour image segmentation based on the hue component presents some problems due to the physical process of image formation. One of these problems is colour clipping, which appears when at least one of the sensor components is saturated. We have designed a system, working on a trained set of colours, to recover the chromatic information of those pixels whose colour has been clipped. The chromatic correction method is based on the fact that hue and saturation are invariant to uniform scaling of the three RGB components. The proposed method has been validated by means of a specific colour image processing board that has allowed its execution in real time. We show experimental results of the application of our method.
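The invariance the method relies on is easy to check numerically: scaling an RGB triple by a common factor changes only the value channel, not hue or saturation. A small illustrative check (the colour and scale factor are arbitrary, not from the paper):

```python
import colorsys

# Arbitrary unclipped colour and uniform scaling factor.
r, g, b = 0.8, 0.5, 0.2
k = 0.6

h1, s1, v1 = colorsys.rgb_to_hsv(r, g, b)
h2, s2, v2 = colorsys.rgb_to_hsv(k * r, k * g, k * b)

# Hue and saturation are unchanged under uniform scaling...
assert abs(h1 - h2) < 1e-9
assert abs(s1 - s2) < 1e-9
# ...while only the value (intensity) channel scales by k.
assert abs(v2 - k * v1) < 1e-9
```

This is what makes chromatic recovery of clipped pixels plausible: clipping corrupts intensity, but the hue/saturation pair of a correctly scaled counterpart can still identify the colour.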
Abstract:
Process supervision is the activity of monitoring process operation in order to deduce the conditions needed to maintain normality, including when faults are present. Depending on the number, distribution and heterogeneity of a process's variables, behaviour situations, sub-processes, etc., human operators and engineers cannot easily handle the information. This leads to the need to automate supervision activities. Nevertheless, the difficulty of dealing with the information complicates the design and development of software applications. We present an approach called "integrated supervision systems": it proposes the coordination of multiple supervisors, each supervising a sub-process, whose interactions make it possible to supervise the global process.
Abstract:
This paper addresses the application of PCA to categorical data prior to diagnosing a patient data set using a Case-Based Reasoning (CBR) system. The particularity is that standard PCA techniques are designed to deal with numerical attributes, but our medical data set contains many categorical attributes, so alternative methods such as RS-PCA are required. Thus, we propose to hybridize RS-PCA (Regular Simplex PCA) and a simple CBR. Results show that the hybrid system produces diagnoses of a medical data set similar to those obtained when using the original attributes. These results are quite promising, since they allow diagnosis with less computational effort and memory storage.
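The pipeline the abstract describes (dimensionality reduction on categorical attributes, then case retrieval) can be sketched as follows. This is not the authors' RS-PCA: as a stand-in, categorical attributes are one-hot encoded and reduced with standard SVD-based PCA, and the CBR is reduced to its nearest-neighbour retrieval step; the patient records and diagnoses are made up for illustration:

```python
import numpy as np

# Hypothetical categorical patient records; each position is a categorical attribute.
records = [
    ("high", "yes", "a"),
    ("low",  "no",  "b"),
    ("high", "no",  "a"),
    ("low",  "yes", "b"),
]
labels = [1, 0, 1, 0]  # hypothetical diagnoses

# Per-attribute vocabularies, fixed from the case base.
vocab = [sorted({r[j] for r in records}) for j in range(len(records[0]))]

def encode(record):
    """One-hot encode one record against the fixed vocabularies."""
    return np.array([float(record[j] == v)
                     for j in range(len(vocab)) for v in vocab[j]])

X = np.stack([encode(r) for r in records])

# Standard PCA via SVD (a stand-in for RS-PCA, which handles categoricals directly).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2
Z = (X - mean) @ Vt[:k].T   # case base projected onto k principal components

def diagnose(record):
    """CBR retrieval step: label of the nearest stored case in the reduced space."""
    z = (encode(record) - mean) @ Vt[:k].T
    return labels[int(np.argmin(np.linalg.norm(Z - z, axis=1)))]
```

The memory/computation saving claimed in the abstract comes from retrieving over the k-dimensional projections `Z` instead of the full one-hot vectors `X`.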
Abstract:
This paper derives a model of markets with system goods and two technological standards. An established standard incurs lower unit production costs but causes a negative externality. The paper derives the conditions for policy intervention and compares the effects of direct and indirect cost-reducing subsidies in two markets with system goods in the presence of externalities. If consumers are committed to the technology by purchasing one of the components, direct subsidies are preferable. For a medium-low cost difference between technological standards and a low externality cost, it is optimal to provide a direct subsidy only to the first technology adopter. The higher the externality cost, the more technology adopters should be provided with direct subsidies. This effect is robust in all extensions. In the absence of consumers' commitment to a technological standard, indirect and direct subsidies are both desirable. In this case, the subsidy to the first adopter is lower than the subsidy to the second adopter. Moreover, for a low cost difference between technological standards and a low externality cost, the first firm chooses the superior standard without policy intervention. Finally, perfect compatibility between components based on different technological standards enhances the advantage of indirect subsidies for medium-high externality costs and cost differences between technological standards. Journal of Economic Literature Classification Numbers: C72, D21, D40, H23, L13, L22, L51, O25, O33, O38. Keywords: technological standards; complementary products; externalities; cost-reducing subsidies; compatibility.
Abstract:
Report for the scientific sojourn carried out at the Institute for Computational Molecular Science of Temple University, United States, from 2010 to 2012. Two-component systems (TCS) are used by pathogenic bacteria to sense the environment within a host and activate mechanisms related to virulence and antimicrobial resistance. A prototypical example is the PhoQ/PhoP system, which is the major regulator of virulence in Salmonella. Hence, PhoQ is an attractive target for the design of new antibiotics against foodborne diseases. Inhibition of PhoQ-mediated bacterial virulence does not result in growth inhibition, presenting less selective pressure for the generation of antibiotic resistance. Moreover, PhoQ is a histidine kinase (HK), and HKs are absent in animals. Nevertheless, the design of satisfactory HK inhibitors has proven to be a challenge. To compete with intracellular ATP concentrations, the affinity of an HK inhibitor must be in the micromolar-to-nanomolar range, whereas the current lead compounds have at best millimolar affinities. Moreover, drug selectivity depends on the conformation of a highly variable loop, referred to as the "ATP-lid", which is difficult to study by X-ray crystallography due to its flexibility. I have investigated the binding of different HK inhibitors to PhoQ. In particular, all-atom molecular dynamics simulations have been combined with enhanced sampling techniques in order to provide structural and dynamic information on the conformation of the ATP-lid. Transient interactions between these drugs and the ATP-lid have been identified, and the free energies of the different binding modes have been estimated. The results obtained pinpoint the importance of protein flexibility in HK-inhibitor binding and constitute a first step toward developing more potent and selective drugs.
The computational resources of the hosting institution as well as the experience of the members of the group in drug binding and free energy methods have been crucial to carry out this work.
Abstract:
A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and the spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements with respect to previous ones: the effects of the selection of the sample, the observational errors, the galactic rotation and the interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors relies extensively on numerical methods, thereby avoiding the need to simplify the equations and the bias such simplification could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.
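The key idea of treating observational errors as an intrinsic part of the likelihood, rather than correcting for them afterwards, can be shown with a deliberately simplified toy problem (this is not the authors' formulation; the population parameters, error size and one-parameter Gaussian model are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: true values scattered around a population mean,
# observed through Gaussian measurement noise.
true_mean, intrinsic_sd, obs_error = 10.0, 1.0, 0.5
true_vals = rng.normal(true_mean, intrinsic_sd, size=2000)
observed = true_vals + rng.normal(0.0, obs_error, size=2000)

# Marginal model with the error built in: observed ~ N(mu, intrinsic_sd^2 + obs_error^2).
def neg_log_likelihood(mu):
    var = intrinsic_sd**2 + obs_error**2
    return 0.5 * np.sum((observed - mu) ** 2) / var

# Maximum-likelihood estimate by brute-force grid search (numerical, no
# closed-form simplification needed).
grid = np.linspace(8.0, 12.0, 4001)
mu_hat = grid[np.argmin([neg_log_likelihood(m) for m in grid])]
```

The actual method maximizes a far richer likelihood (luminosity, kinematics, spatial distribution, selection effects) over many parameters, but the structure is the same: the error model widens the likelihood rather than being applied as an external correction to the data.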
Abstract:
We have developed an activator/repressor expression system for budding yeast in which tetracyclines control, in opposite ways, the ability of tetR-based activator and repressor molecules to bind tetO promoters. This combination allows tight expression of tetO-driven genes in both a direct (tetracycline-repressible) and a reverse (tetracycline-inducible) dual system. Ssn6 and Tup1, which are components of a general repressor complex in yeast, have been tested for their repressing properties in the dual system, using lacZ and CLN2 as reporter genes. Ssn6 gives better results and allows complete switching-off of the regulated genes, although increasing the levels of the Tup1-based repressor by expressing it from a stronger promoter improves the repressing efficiency of the latter. Effector-mediated shifts between expression and non-expression conditions are rapid. The dual system described here may be useful for the functional analysis of essential genes whose conditional expression can be tightly controlled by tetracyclines.