870 results for Integrated circuits Ultra large scale integration
Abstract:
We study the interaction between dark sectors by considering the momentum transfer caused by dark matter scattering elastically within the dark energy fluid. Describing the dark scattering in analogy to the Thomson scattering that couples baryons and photons, we examine the impact of dark scattering on CMB observations. Performing a global fit with the latest observational data, we find that for a dark energy equation of state w < -1 the CMB gives tight constraints on dark matter-dark energy elastic scattering. Assuming a dark matter particle of proton mass, we derive an elastic scattering cross section of σ_D < 3.295 × 10⁻¹⁰ σ_T, where σ_T is the Thomson scattering cross section. For w > -1, however, the constraints are poor. For w = -1, σ_D can formally take any value.
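As a schematic reading of the analogy invoked above (an illustration, not the paper's actual perturbation equations), the Thomson drag rate coupling photons and baryons is replaced by a dark drag rate, which makes explicit why the quoted bound requires assuming a dark matter particle mass:

```latex
% Schematic analogy (assumed form for illustration only):
% Thomson drag rate coupling photons and baryons
\Gamma_{\mathrm{T}} \;\propto\; a\, n_e\, \sigma_{\mathrm{T}},
\qquad
% dark drag rate coupling dark matter and dark energy
\Gamma_{\mathrm{D}} \;\propto\; a\, n_{\mathrm{DM}}\, \sigma_{\mathrm{D}}
  \;=\; a\, \frac{\rho_{\mathrm{DM}}}{m_{\mathrm{DM}}}\, \sigma_{\mathrm{D}}.
% Since \rho_{DM} is fixed by cosmology, the data effectively constrain
% \sigma_D / m_DM, so a bound on \sigma_D alone requires fixing m_DM
% (here taken equal to the proton mass).
```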
Abstract:
This paper presents the development of a mathematical model to optimize the management and operation of the Brazilian hydrothermal system. The system consists of a large set of individual hydropower plants and a set of aggregated thermal plants. The energy generated in the system is interconnected by a transmission network so it can be transmitted to centers of consumption throughout the country. The proposed optimization model is capable of handling different types of constraints, such as interbasin water transfers, water supply for various purposes, and environmental requirements. Its overall objective is to produce energy to meet the country's demand at minimum cost. Called HIDROTERM, the model integrates a database with basic hydrological and technical information to run the optimization model, and provides an interface to manage the input and output data. The optimization model uses the General Algebraic Modeling System (GAMS) package and can invoke different linear as well as nonlinear programming solvers. The optimization model was applied to the Brazilian hydrothermal system, one of the largest in the world. The system is divided into four subsystems with 127 active hydropower plants. Preliminary results under different scenarios of inflow, demand, and installed capacity demonstrate the efficiency and utility of the model. From this and other case studies in Brazil, the results indicate that the methodology developed is suitable for different applications, such as operation planning, capacity expansion, operational rule studies, and trade-off analysis among multiple water users. DOI: 10.1061/(ASCE)WR.1943-5452.0000149. (C) 2012 American Society of Civil Engineers.
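The abstract does not include the model formulation; as a hedged sketch of the kind of linear hydrothermal dispatch problem such a model solves (all numbers and variable names below are invented for the example, and SciPy's LP solver stands in for the GAMS solvers actually used), a toy single-reservoir, two-period version might look like:

```python
# Minimal hydrothermal dispatch sketch (illustrative only; data are invented).
# HIDROTERM itself is a GAMS model with detailed hydro production functions;
# this toy linear version only shows the structure of the problem.
import numpy as np
from scipy.optimize import linprog

demand = np.array([90.0, 120.0])     # load to meet in each period (MWh)
inflow_energy = 150.0                # total hydro energy available over both periods (MWh)
thermal_cost = 50.0                  # fuel cost of the aggregated thermal plant ($/MWh)
h_max, g_max = 100.0, 80.0           # per-period hydro and thermal generation limits (MWh)

# Decision variables: x = [h1, h2, g1, g2] (hydro and thermal generation per period).
c = np.array([0.0, 0.0, thermal_cost, thermal_cost])   # only thermal generation costs fuel

A_ub = np.array([
    [-1.0,  0.0, -1.0,  0.0],        # demand balance, period 1: h1 + g1 >= demand[0]
    [ 0.0, -1.0,  0.0, -1.0],        # demand balance, period 2: h2 + g2 >= demand[1]
    [ 1.0,  1.0,  0.0,  0.0],        # hydro energy budget: h1 + h2 <= inflow_energy
])
b_ub = np.array([-demand[0], -demand[1], inflow_energy])

bounds = [(0.0, h_max), (0.0, h_max), (0.0, g_max), (0.0, g_max)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("hydro per period:", res.x[:2], "thermal per period:", res.x[2:], "total cost:", res.fun)
```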
Abstract:
Background Current recommendations for antithrombotic therapy after drug-eluting stent (DES) implantation include prolonged dual antiplatelet therapy (DAPT) with aspirin and clopidogrel for >= 12 months. However, the impact of such a regimen for all patients receiving any DES system remains unclear based on the scientific evidence available to date. Also, several other shortcomings have been identified with prolonged DAPT, including bleeding complications, compliance, and cost. The second-generation Endeavor zotarolimus-eluting stent (E-ZES) has demonstrated efficacy and safety despite short-duration DAPT (3 months) in the majority of studies. Still, the safety and clinical impact of short-term DAPT with E-ZES in the real world are yet to be determined. Methods The OPTIMIZE trial is a large, prospective, multicenter, randomized (1:1) non-inferiority clinical evaluation of short-term (3 months) vs long-term (12 months) DAPT in patients undergoing E-ZES implantation in daily clinical practice. Overall, 3,120 patients were enrolled at 33 clinical sites in Brazil. The primary composite endpoint is death (any cause), myocardial infarction, cerebral vascular accident, and major bleeding at 12-month clinical follow-up post-index procedure. Conclusions The OPTIMIZE clinical trial will determine the clinical implications of DAPT duration with the second-generation E-ZES in real-world patients undergoing percutaneous coronary intervention. (Am Heart J 2012;164:810-816.e3.)
Abstract:
A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10¹⁸ eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10¹⁸ eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10¹⁸ eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
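The abstract does not spell out the analysis; as a hedged, generic sketch of the first-harmonic (Rayleigh) analysis in right ascension that large-scale anisotropy searches of this kind build on (the event sample below is synthetic, uniform exposure is assumed, and the function names are ours, not the Observatory's code):

```python
# First-harmonic (Rayleigh) analysis in right ascension: estimate a dipolar
# modulation from a set of arrival directions (synthetic, isotropic toy data).
import numpy as np

def first_harmonic(alpha_rad):
    """Return amplitude r and phase (rad) of the first harmonic in right ascension."""
    n = len(alpha_rad)
    a = (2.0 / n) * np.sum(np.cos(alpha_rad))
    b = (2.0 / n) * np.sum(np.sin(alpha_rad))
    return np.hypot(a, b), np.arctan2(b, a)

rng = np.random.default_rng(0)
alpha = rng.uniform(0.0, 2.0 * np.pi, 10_000)   # toy sample of right ascensions

r, phase = first_harmonic(alpha)
# Chance probability of an amplitude >= r arising from an isotropic sky (Rayleigh test).
p_iso = np.exp(-len(alpha) * r**2 / 4.0)
print(f"amplitude = {r:.4f}, phase = {np.degrees(phase):.1f} deg, P(isotropy) = {p_iso:.2f}")
```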
Abstract:
The Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a long-term (20 years) research effort aimed at understanding the functioning of the Amazonian ecosystem. In particular, the strong biosphere-atmosphere interaction is a key component, looking at the exchange processes between vegetation and the atmosphere with a focus on aerosol particles. Two aerosol components are the most visible: the natural biogenic emissions of aerosols and VOCs, and the biomass burning emissions. A large effort was made to characterize natural biogenic aerosols, providing detailed organic characterization and optical properties. The biomass burning component in Amazonia is important in terms of aerosol and trace gas emissions, with deforestation rates decreasing from 27,000 km² in 2004 to about 5,000 km² in 2011. Biomass burning emissions in Amazonia increase concentrations of aerosol particles, CO, ozone, and other species, and also change the surface radiation balance in a significant way. Long-term monitoring of aerosols and trace gases was performed at two sites: a background site in Central Amazonia, 55 km north of Manaus (the ZF2 ecological reservation), and a monitoring station in Porto Velho, Rondonia state, a site heavily impacted by biomass burning smoke. Several instruments were operated to measure aerosol size distribution, optical properties (absorption and scattering at several wavelengths), and composition of organic (OC/EC) and inorganic components, among other measurements. AERONET and MODIS measurements from 5 long-term sites show a large year-to-year variability due to climatic and socio-economic issues. Aerosol optical depths of more than 4 at 550 nm were observed frequently over biomass burning areas. In the pristine Amazonian atmosphere, aerosol scattering coefficients ranged between 1 and 200 Mm⁻¹ at 450 nm, while absorption ranged between 1 and 20 Mm⁻¹ at 637 nm. A strong seasonal behavior was observed, with greater aerosol loadings during the dry season (Jul-Nov) as compared to the wet season (Dec-Jun). During the wet season in Manaus, aerosol scattering (450 nm) and absorption (637 nm) coefficients averaged, respectively, 14 and 0.9 Mm⁻¹. Angstrom exponents for scattering were lower during the wet season (1.6) in comparison to the dry season (1.9), which is consistent with the shift from biomass burning aerosols, predominant in the fine mode, to biogenic aerosols, predominant in the coarse mode. Single scattering albedo, calculated at 637 nm, did not show a significant seasonal variation, averaging 0.86. In Porto Velho, even in the wet season it was possible to observe an impact from anthropogenic aerosol. Black carbon was measured at concentrations as high as 20 µg/m³ in the dry season, showing strong aerosol absorption. This work presents a general description of aerosol optical properties in Amazonia, during both the wet and dry seasons.
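As a short, generic illustration of two quantities quoted above: the 450 nm scattering and 637 nm absorption values below are the wet-season averages given in the abstract, while the 700 nm scattering value is a placeholder chosen for the example, so the resulting numbers are illustrative only and not the campaign's results.

```python
# Generic aerosol optics bookkeeping: scattering Angstrom exponent and single
# scattering albedo (SSA). Input values are partly placeholders (see lead-in).
import numpy as np

def angstrom_exponent(sigma_1, lambda_1, sigma_2, lambda_2):
    """Angstrom exponent alpha, assuming sigma ~ lambda**(-alpha)."""
    return -np.log(sigma_1 / sigma_2) / np.log(lambda_1 / lambda_2)

scat_450, scat_700 = 14.0, 6.9        # scattering coefficients, Mm^-1 (700 nm value invented)
abs_637 = 0.9                         # absorption coefficient, Mm^-1

alpha = angstrom_exponent(scat_450, 450.0, scat_700, 700.0)

# Extrapolate scattering to 637 nm with the same power law, then form the SSA
# at that wavelength (scattering / total extinction).
scat_637 = scat_450 * (450.0 / 637.0) ** alpha
ssa_637 = scat_637 / (scat_637 + abs_637)

print(f"alpha ~ {alpha:.2f}, sigma_s(637 nm) ~ {scat_637:.1f} Mm^-1, SSA(637 nm) ~ {ssa_637:.2f}")
```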
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges remain: designing SR plans for larger systems that are as good as those for relatively smaller ones, and plans for multiple faults that are as good as those for a single fault. In order to tackle both challenges, this paper proposes a method combining NSGA-N, MEAN, and a new heuristic. The heuristic focuses the application of NDE operators on network zones in alarm, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
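As a hedged, generic sketch of the multi-objective machinery such algorithms rely on (plain NSGA-II-style non-dominated sorting on toy objective vectors; this is not the paper's NDE operators, subpopulation tables, or heuristic):

```python
# Pareto dominance and non-dominated sorting on minimization objectives
# (e.g., number of switching operations vs. out-of-service load) -- toy values only.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(objs):
    """Split a list of objective vectors into successive Pareto fronts (index lists)."""
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Each candidate SR plan scored by (switching operations, unrestored load in MW), both minimized.
plans = [(12, 0.0), (8, 1.5), (15, 0.0), (8, 0.5), (20, 3.0)]
print(non_dominated_fronts(plans))   # first front holds the non-dominated plans
```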
Abstract:
A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10¹⁸ eV at the Pierre Auger Observatory is reported. For the first time, these large-scale anisotropy searches are performed as a function of both the right ascension and the declination and expressed in terms of dipole and quadrupole moments. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Upper limits on dipole and quadrupole amplitudes are derived under the hypothesis that any cosmic ray anisotropy is dominated by such moments in this energy range. These upper limits provide constraints on the production of cosmic rays above 10¹⁸ eV, since they allow us to challenge an origin from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions.
Abstract:
The continuous increase of genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines just raw nucleotide sequences. This is only the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. The annotation process is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time consuming when applied at this large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is to be independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was reported to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was reported to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive prediction rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15,000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, allowing the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
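The thesis describes the Balanced SVM as a modification of the standard SVM training algorithm; as a hedged approximation of the same idea (counteracting training-set imbalance so the majority class is not over-predicted), the sketch below uses scikit-learn's per-class error weights as a stand-in rather than the thesis's actual modification, and the feature vectors are synthetic placeholders, not real sequence-derived features.

```python
# Counteracting class imbalance in an SVM classifier: a generic sketch.
# class_weight="balanced" approximates the goal of the thesis's Balanced SVM;
# the data are random placeholders, not amino acid composition features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_major, n_minor = 900, 100          # strongly imbalanced two-class toy problem
X = np.vstack([rng.normal(0.0, 1.0, (n_major, 20)),
               rng.normal(0.7, 1.0, (n_minor, 20))])
y = np.array([0] * n_major + [1] * n_minor)

# Per-class weights inversely proportional to class frequency, so the minority
# class is not swamped by the majority one during training.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y)
print("predicted minority fraction:", (clf.predict(X) == 1).mean())
```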
Abstract:
Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows, or tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete its implementation and use it to implement two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using this last application we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
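As a hedged illustration of the coordination model discussed above (a minimal in-process tuple space with the classic write/read/take operations; the class and method names are ours, not the Globus Toolkit service's actual interface):

```python
# Minimal in-memory tuple space sketch: write, read (non-destructive), and take
# (destructive) with template matching, where None in a template matches any field.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def write(self, tup):
        with self._lock:
            self._tuples.append(tuple(tup))

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def read(self, template):
        """Return a matching tuple without removing it, or None."""
        with self._lock:
            return next((t for t in self._tuples if self._match(template, t)), None)

    def take(self, template):
        """Remove and return a matching tuple, or None."""
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._match(template, t):
                    return self._tuples.pop(i)
        return None

ts = TupleSpace()
ts.write(("task", 42, "pending"))
print(ts.take(("task", None, "pending")))   # -> ('task', 42, 'pending')
```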
Abstract:
Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer chooses the best device for the particular application and the best device for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
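As a hedged, generic illustration of the table look-up idea referred to above (a nonlinearity sampled on a finite grid of bias voltages is turned into a continuous function by interpolation; the toy drain-current model, the voltage grids, and SciPy's standard grid interpolator are our assumptions, standing in for the thesis's sampling-theory-based approximation algorithm):

```python
# Table look-up model sketch: a nonlinear characteristic I_D(V_GS, V_DS) "measured"
# on a finite voltage grid is interpolated to give a continuous approximation.
# The grid values come from a rough analytic toy model, not real device data.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

vgs = np.linspace(-2.0, 0.0, 21)           # gate-source bias grid (V)
vds = np.linspace(0.0, 10.0, 41)           # drain-source bias grid (V)

def toy_id(vgs, vds, vt=-1.5, beta=0.05):
    """Very rough FET-like drain current used only to populate the look-up table."""
    vov = np.maximum(vgs - vt, 0.0)
    return beta * vov**2 * np.tanh(vds)     # saturates smoothly with V_DS

VG, VD = np.meshgrid(vgs, vds, indexing="ij")
table = toy_id(VG, VD)                      # the "measured" grid of drain currents

# Continuous (piecewise-linear) approximation over the sampled voltage domain.
id_model = RegularGridInterpolator((vgs, vds), table)

print(id_model([[-0.73, 3.3]]))             # drain current at an off-grid bias point
```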