901 results for Many-core systems
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. Every embedded system contains one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system; hence, processor power optimization is crucial for satisfying power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. A fast and accurate method for processor power estimation at design time helps the designer explore a large space of design possibilities and make optimal choices when developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific application is key to choosing appropriate algorithms and writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of functions in the code that repeat during execution and building the power model on their average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over simulation-based methods.
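As an illustration of the comparison-count energy model described in this abstract, the following sketch instruments insertion sort to count key comparisons and feeds the count into a linear energy model. The baseline and per-comparison energies are hypothetical placeholders, not the measured LEON3 parameters.

```python
# Minimal sketch of a comparison-count energy model for insertion sort.
# e_base and e_cmp below are illustrative placeholders, not measured values.

def insertion_sort_count(a):
    """Sort a copy of `a`, returning (sorted list, number of key comparisons)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

def estimated_energy(n_comparisons, e_base=1.0e-6, e_cmp=2.5e-9):
    """Linear energy model E = e_base + C * e_cmp (illustrative parameters, joules)."""
    return e_base + n_comparisons * e_cmp

if __name__ == "__main__":
    import random
    n = 100
    _, c = insertion_sort_count([random.random() for _ in range(n)])
    print(f"{c} comparisons; ~{n * (n - 1) / 4:.0f} expected on average")
    print(f"estimated energy: {estimated_energy(c):.3e} J")
```

For random inputs the comparison count concentrates around roughly n^2/4, which is why an average-case model over the number of repetitions can stand in for cycle-accurate simulation.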
Abstract:
We find ourselves, after the close of the twentieth century, looking back at a mass of responses to the knowledge organization problem. Many institutions, such as the Dewey Decimal Classification (Furner, 2007), have grown up to address it. Increasingly, many diverse discourses are appropriating the problem and crafting a wide variety of responses, including many artistic interpretations of the act and products of knowledge organization. These surface as responses to the expressive power or limits of the Library and Information Studies institutions (e.g., DDC) and their often primarily utilitarian gaze. One way to make sense of this diversity is to approach the study from a descriptive stance, inventorying the population of types of KOS. This population perspective approaches the phenomenon of types and boundaries of Knowledge Organization Systems (KOS) as one that develops out of particular discourses, for particular purposes. For example, both the DDC and Martianus Capella, a 5th-century encyclopedist, are KOS in this worldview; both are part of the population of KOS. Approaching the study of KOS from the population perspective allows the researcher a systematic look at the diversity emergent from the constellation of different factors of design and implementation. However, it is not enough to render a model of core types; we must also consider the borders of KOS. Fringe types of KOS inform research, specifically with regard to the basic principles of design and implementation used by those outside the scholarly and professional discourse of Library and Information Studies. Four examples of fringe types of KOS are presented in this paper. Applying a rubric developed in previous papers, our aim here is to show how the conceptual anatomy of these fringe types relates to more established KOS, thereby laying bare the definitions of domain, purpose, structure, and practice. Fringe types, like Beghtol's examples (2003), are drawn from areas outside of Library and Information Studies proper, and reflect the reinvention of structures to fit particular purposes in particular domains. The four fringe types discussed in this paper are (1) Roland Barthes' text S/Z, which "indexes" the text of an essay with particular "codes" meant to expose the literary rhythm of the work; (2) Mary Daly's Wickedary, a reference work crafted for radical liberation theology, specifically designed to remove patriarchy from the language used by what the author calls "wild women"; (3) Luigi Serafini's Codex Seraphinianus, a work of book art that plays on the trope of the universal encyclopedia and the back-of-the-book index; and (4) Martianus Capella's Marriage of Mercury and Philology, a fifth-century encyclopedia. We compared these using previous analytic taxonomies (Wright, 2008; Tennis, 2006; Tudhope, 2006; Soergel, 2001; Hodge, 2000).
Abstract:
The use of composite resins in dentistry is well accepted for restoring anterior and posterior teeth. Many polishing protocols have been evaluated for their effect on the surface roughness of restorative materials. This study compared the effect of different polishing systems on the surface roughness of microhybrid composites. Thirty-six specimens were prepared for each composite [Charisma® (Heraeus Kulzer), Fill Magic® (Vigodent), TPH Spectrum® (Dentsply), Z100® (3M/ESPE) and Z250® (3M/ESPE)] and submitted to surface treatment with Enhance® and PoGo® (Dentsply) points, sequential Sof-Lex XT® aluminum oxide disks (3M/ESPE), and felt disks (TDV) combined with Excel® diamond polishing paste (TDV). Average surface roughness (Ra) was measured with a mechanical roughness tester. The data were analyzed by two-way ANOVA with repetition of the factorial design and the Tukey-Kramer test (p<0.01). The F-test result for treatments and resins was high (p<0.0001 for both), indicating that the effects of the surface treatment and of the resin type on surface roughness were highly significant. For the interaction between polishing system and resin type, a p value of 0.0002 was obtained, indicating a statistically significant difference. An Ra of 1.3663 was obtained for the Sof-Lex/TPH Spectrum interaction, whereas the Ra for the felt disk+paste/Z250 interaction was 0.1846. In conclusion, the Sof-Lex polishing system produced a higher surface roughness on TPH Spectrum resin than the other interactions.
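The analysis named in this abstract, a two-way factorial ANOVA with interaction, can be set up as in the sketch below. The Ra values are synthetic (the study's raw data are not reproduced here), and statsmodels is an assumed tool choice.

```python
# Sketch of a two-way factorial ANOVA with interaction on synthetic
# roughness data; factor levels mirror the study design, values do not.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
resins = ["Charisma", "FillMagic", "TPH", "Z100", "Z250"]
systems = ["Enhance_PoGo", "SofLex", "Felt_paste"]
rows = [(r, s, rng.normal(0.5, 0.1))
        for r in resins for s in systems for _ in range(6)]
df = pd.DataFrame(rows, columns=["resin", "polish", "Ra"])

# Two-way ANOVA with interaction: Ra ~ resin + polish + resin:polish
model = ols("Ra ~ C(resin) * C(polish)", data=df).fit()
print(anova_lm(model, typ=2))
```

A significant resin:polish interaction term, as the study reports, means the effect of the polishing system depends on which resin it is applied to.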
Abstract:
Over the last couple of decades, many methods for synchronizing chaotic systems have been proposed with communications applications in view. Yet their performance has proved disappointing in the face of the nonideal character of usual channels linking transmitter and receiver, that is, due to both noise and signal propagation distortion. Here we consider a discrete-time master-slave system that synchronizes despite channel bandwidth limitations, and an allied communication system. Synchronization is achieved by introducing a digital filter that limits the spectral content of the feedback loop responsible for producing the transmitted signal.
Abstract:
Dihydroorotate dehydrogenase (DHODH) catalyzes the oxidation of dihydroorotate to orotate in the fourth step of the de novo pyrimidine synthesis pathway. In rapidly proliferating mammalian cells, the pyrimidine salvage pathway is insufficient to overcome deficiencies in the de novo pathway for nucleotide synthesis. Moreover, as certain parasites lack salvage enzymes and rely solely on the de novo pathway, DHODH inhibition has turned out to be an efficient way to block pyrimidine biosynthesis. Escherichia coli DHODH (EcDHODH) is a class 2 DHODH, found associated with cytosolic membranes through an N-terminal extension. We used electron spin resonance (ESR) to study the interaction of EcDHODH with vesicles of 1,2-dioleoyl-sn-glycero-phosphatidylcholine/detergent. Changes in vesicle dynamic structure induced by the enzyme were monitored via spin labels located at different positions of phospholipid derivatives. Two-component ESR spectra are obtained for the 5- and 10-phosphatidylcholine labels in the presence of EcDHODH, whereas the other probes show a single-component spectrum. The appearance of an additional spectral component with features related to the fast-motion regime of the probe is attributed to the formation of a defect-like structure in the membrane hydrophobic region. This is probably the mechanism used by the protein to capture the quinones used as electron acceptors during catalysis. The use of specific spectral simulation routines allows us to characterize the ESR spectra in terms of changes in polarity and mobility around the spin-labeled phospholipids. We believe this is the first report of direct evidence concerning the binding of a class 2 DHODH to membrane systems.
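As a toy illustration of what a two-component ESR spectrum looks like, the sketch below sums a broad (slow-motion) and a narrow (fast-motion) first-derivative Lorentzian line. This is not the dedicated spectral simulation routine used in the study; all linewidths, centers, and weights are arbitrary.

```python
# Toy two-component ESR spectrum: weighted sum of a broad (slow-motion)
# and a narrow (fast-motion) first-derivative Lorentzian line.
import numpy as np
import matplotlib.pyplot as plt

def lorentzian_derivative(B, B0, width):
    """First derivative of a Lorentzian absorption line."""
    x = (B - B0) / width
    return -2 * x / (width * (1 + x**2) ** 2)

B = np.linspace(330, 340, 2000)              # magnetic field axis (mT)
slow = lorentzian_derivative(B, 335.0, 0.8)  # broad, slow-motion component
fast = lorentzian_derivative(B, 335.0, 0.15) # narrow, fast-motion component
spectrum = 0.7 * slow + 0.3 * fast           # two-component composite

plt.plot(B, spectrum)
plt.xlabel("Magnetic field (mT)")
plt.ylabel("dA/dB (arb. units)")
plt.show()
```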
Abstract:
The existence of quantum correlations other than entanglement (as revealed by quantum discord), and their role in quantum-information processing (QIP), is a current subject of discussion. In particular, it has been suggested that this nonclassical correlation may provide computational speedup for some quantum algorithms. In this regard, bulk nuclear magnetic resonance (NMR) has been successfully used as a test bench for many QIP implementations, although it has also been continuously criticized for not presenting entanglement in most of the systems used so far. In this paper, we report a theoretical and experimental study of the dynamics of quantum and classical correlations in an NMR quadrupolar system. We present a method for computing the correlations from experimental NMR deviation-density matrices and show that, under the action of the nuclear-spin environment, relaxation produces a monotonic decay of the correlations in time. Although the experimental realizations were performed in a specific quadrupolar system, the main results presented here apply to any system that uses the deviation-density-matrix formalism.
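A minimal sketch of one ingredient of such an analysis: the total correlation (quantum mutual information) of a two-qubit state, computed directly from its density matrix. Computing the discord itself additionally requires a minimization over local measurements, omitted here; the Werner state below is a standard example that is separable for small mixing yet still correlated.

```python
# Total correlation I(A:B) = S(A) + S(B) - S(AB) for a two-qubit state.
# Discord would further require minimising over local measurements.
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), ignoring zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    """Reduce a 4x4 two-qubit state to qubit 0 or qubit 1."""
    r = rho.reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# Werner state p|Phi+><Phi+| + (1-p) I/4: unentangled for p < 1/3,
# but still carrying nonclassical correlations.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = 0.3
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

I_AB = (von_neumann_entropy(partial_trace(rho, 0))
        + von_neumann_entropy(partial_trace(rho, 1))
        - von_neumann_entropy(rho))
print(f"quantum mutual information: {I_AB:.4f} bits")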
Abstract:
Cuboctahedron (CUB) and icosahedron (ICO) model structures are widely used in the study of transition-metal (TM) nanoparticles (NPs); however, they might not provide a reliable description for small TM NPs such as the Pt(55) and Au(55) systems in the gas phase. In this work, we combined density-functional theory calculations with atomic configurations generated by the basin-hopping Monte Carlo algorithm within the empirical Sutton-Chen embedded-atom potential. We identified alternative configurations of lower energy than the ICO and CUB model structures; e.g., our lowest-energy structures are 5.22 eV (Pt(55)) and 2.01 eV (Au(55)) lower than ICO. The energy gain is obtained by Pt and Au diffusion from the ICO core region to the NP surface, which is driven by the surface compression on the small (only 12-atom) ICO core region. Therefore, in the lowest-energy configurations, the core shrinks from 13 atoms (ICO, CUB) to about 9 atoms, while the NP surface grows from 42 atoms (ICO, CUB) to about 46 atoms. The present mechanism can provide an improved atom-level understanding of small TM NP reconstructions.
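A minimal sketch of a basin-hopping search over a Sutton-Chen-type potential, here for a 13-atom cluster so it runs quickly. The potential parameters are illustrative rather than the fitted Pt/Au values, and the paper additionally re-relaxes candidate structures with DFT.

```python
# Basin-hopping global search with a Sutton-Chen-type embedded-atom
# potential for a small cluster; parameters are illustrative only.
import numpy as np
from scipy.optimize import basinhopping

EPS, A, N, M, C = 1.0, 1.0, 10, 8, 34.41   # illustrative Sutton-Chen parameters

def sutton_chen(x):
    """Total Sutton-Chen energy for flattened coordinates x (natoms*3,)."""
    pos = x.reshape(-1, 3)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-interaction
    pair = 0.5 * np.sum((A / d) ** N, axis=1)   # repulsive pair term
    rho = np.sum((A / d) ** M, axis=1)          # local "density" term
    return EPS * np.sum(pair - C * np.sqrt(rho))

rng = np.random.default_rng(1)
x0 = rng.uniform(-1.5, 1.5, size=13 * 3)        # random 13-atom start

result = basinhopping(sutton_chen, x0, niter=50, stepsize=0.3,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print(f"lowest energy found: {result.fun:.4f} (arbitrary units)")
```

Each basin-hopping step perturbs the coordinates and locally relaxes them, which is the same hop-then-minimize pattern the paper uses to escape the ICO/CUB basins.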
Abstract:
An improved flow-based procedure is proposed for turbidimetric sulphate determination in waters. The flow system was designed with solenoid micro-pumps in order to improve mixing conditions and to minimize reagent consumption as well as waste generation. Stable baselines were observed in view of the pulsed flow characteristic of systems designed with solenoid micro-pumps, making washing solutions unnecessary. The nucleation process was improved by stopping the flow prior to the measurement, thus avoiding the need for sulphate addition. When a 1-cm optical path flow cell was employed, linear response was achieved within 20-200 mg L(-1), described by the equation S = -0.0767 + 0.00438C (mg L(-1)), r = 0.999. The detection limit was estimated as 3 mg L(-1) at the 99.7% confidence level and the coefficient of variation was 2.4% (n = 20). The sampling rate was estimated as 33 determinations per hour. A long-pathlength (100-cm) flow cell based on a liquid-core waveguide was exploited to increase sensitivity in turbidimetry. Baseline drifts were avoided by a periodic washing step with EDTA in alkaline medium. Linear response was observed within 7-16 mg L(-1), described by the equation S = -0.865 + 0.132C (mg L(-1)), r = 0.999. The detection limit was estimated as 150 μg L(-1) at the 99.7% confidence level and the coefficient of variation was 3.0% (n = 20). The sampling rate was estimated as 25 determinations per hour. The results obtained for freshwater and rainwater samples were in agreement with those achieved by batch turbidimetry at the 95% confidence level.
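Given the calibration lines quoted above, converting a measured turbidimetric signal into a sulphate concentration is a matter of inverting S = intercept + slope·C, as in this small sketch (the example signal values are hypothetical).

```python
# Invert the calibration lines quoted in the abstract: C = (S - intercept)/slope.
def sulphate_conc(S, slope, intercept):
    return (S - intercept) / slope

# 1-cm cell: S = -0.0767 + 0.00438*C, linear over 20-200 mg/L
print(sulphate_conc(0.36, 0.00438, -0.0767))   # ~99.7 mg/L

# 100-cm liquid-core waveguide cell: S = -0.865 + 0.132*C, linear over 7-16 mg/L
print(sulphate_conc(0.50, 0.132, -0.865))      # ~10.3 mg/L
```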
Abstract:
Many factors affect the airflow patterns, thermal comfort, contaminant removal efficiency and indoor air quality at individual workstations in office buildings. In this study, four ventilation systems were used in a test chamber designed to represent an area of a typical office building floor and reproduce the real characteristics of a modern office space. Measurements of particle concentration and thermal parameters (temperature and velocity) were carried out for each of the following types of ventilation systems: (a) conventional air distribution with ceiling supply and return; (b) conventional air distribution with ceiling supply and return near the floor; (c) underfloor air distribution; and (d) split system. The measurements aimed to analyse the particle removal efficiency in the breathing zone and the impact of particle concentration on an individual at the workstation. The efficiency of each ventilation system was analysed by measuring particle size and concentration, ventilation effectiveness and the indoor/outdoor ratio. Each ventilation system showed a different airflow pattern, and the efficiency of each system in removing particles from the breathing zone showed no correlation with particle size across the various methods of analysis used.
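Two of the indices mentioned above can be written in their common textbook forms; the paper's exact definitions may differ, and the concentrations below are hypothetical.

```python
# Common textbook forms of two ventilation indices (the study's exact
# definitions may differ); all concentrations here are hypothetical.
def removal_effectiveness(c_exhaust, c_breathing, c_supply=0.0):
    """epsilon = (C_exhaust - C_supply) / (C_breathing - C_supply)."""
    return (c_exhaust - c_supply) / (c_breathing - c_supply)

def indoor_outdoor_ratio(c_indoor, c_outdoor):
    return c_indoor / c_outdoor

# hypothetical particle concentrations (#/cm^3)
print(removal_effectiveness(c_exhaust=42.0, c_breathing=35.0, c_supply=5.0))  # ~1.23
print(indoor_outdoor_ratio(c_indoor=35.0, c_outdoor=60.0))                    # ~0.58
```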
Abstract:
Cooling towers are widely used as a cooling medium in many industrial and utility plants, and their thermal performance is of vital importance. Despite the wide interest in cooling tower design and rating and their importance in energy conservation, there are few investigations concerning the integrated analysis of cooling systems. This work presents an approach for the systemic performance analysis of a cooling water system, combining experimental design with mathematical modeling. An experimental investigation was carried out to characterize the mass transfer in the packing of the cooling tower as a function of the liquid and gas flow rates, with results within the range of the measurement accuracy. An integrated model was then developed that relies on the mass and heat transfer of the cooling tower, as well as on the hydraulic and thermal interactions with a heat exchanger network. The integrated model of the cooling water system was simulated, and the temperature results agree with experimental data from the real operation of the pilot plant. A case study illustrates the interactions in the system and the need for a systemic analysis of the cooling water system. The proposed mathematical and experimental analysis should be useful for the performance analysis of real-world cooling water systems.
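Packing mass-transfer characterization of the kind described above is commonly expressed through the Merkel number, Me = KaV/L. The sketch below evaluates the Merkel integral numerically under simple counterflow assumptions; the saturated-air enthalpy fit and operating conditions are illustrative, not the pilot-plant data.

```python
# Merkel-type packing characterisation: Me = KaV/L = integral of
# cp*dT / (h_sat(T) - h_air) over the water temperature range.
# Enthalpy fit and operating conditions are illustrative.
import numpy as np
from scipy.integrate import quad

CP_W = 4.186  # kJ/(kg K), specific heat of water

def h_sat(T):
    """Rough cubic fit of saturated-air enthalpy (kJ/kg dry air) vs. T (deg C)."""
    return 4.7926 + 2.568 * T - 0.029834 * T**2 + 0.0016657 * T**3

def merkel_number(T_in, T_out, h_air_in, L_over_G):
    """Integrate cp*dT/(h_sat - h_air) from water outlet to inlet temperature."""
    def integrand(T):
        # counterflow energy balance: air enthalpy rises with water temperature
        h_air = h_air_in + L_over_G * CP_W * (T - T_out)
        return CP_W / (h_sat(T) - h_air)
    value, _ = quad(integrand, T_out, T_in)
    return value

print(f"Me = {merkel_number(T_in=40.0, T_out=30.0, h_air_in=60.0, L_over_G=1.0):.3f}")
```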
Abstract:
Due to the broadband characteristic of chaotic signals, many of the methods that have been proposed for synchronizing chaotic systems do not usually present satisfactory performance when applied to bandlimited communication channels. Here, the effects of the bandwidth limitations imposed by the channel on the synchronous solution of a discrete-time chaotic master-slave network are investigated. The discrete-time system considered in this study is the Hénon map. It is analytically shown that synchronism can be achieved in such a network by introducing a digital filter in the feedback loop responsible for generating the chaotic signal that is sent to the slave node. Numerical simulations relating the filter parameters, such as its order and cut-off frequency, to the maximum Lyapunov exponent of the master node, which determines whether the transmitted signal is chaotic, are also presented. These results can be useful for practical communication schemes based on chaos.
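A simplified numerical sketch in the spirit of this scheme: a Hénon map whose quadratic feedback uses an FIR-filtered state, with the same filtered signal driving an identical slave. The parameter values and the two-tap filter are illustrative assumptions; the paper's analysis concerns how the filter's order and cut-off affect the master's Lyapunov exponent.

```python
# Filtered master-slave Hénon sketch: the quadratic nonlinearity of both
# nodes is driven by the FIR-filtered transmitted signal s(n), so the
# slave error obeys e(n+1) = B * e(n-1) and decays regardless of the filter.
# The filter taps and parameters are illustrative.
import numpy as np

A, B = 1.4, 0.3
H = np.array([0.9, 0.1])      # illustrative FIR taps inside the feedback loop
STEPS = 60

xm = np.zeros(STEPS); ym = np.zeros(STEPS)   # master
xs = np.zeros(STEPS); ys = np.zeros(STEPS)   # slave, different initial state
xs[0], ys[0] = 0.5, -0.2

for n in range(1, STEPS):
    # transmitted signal: FIR filter over the master's recent states
    s = H[0] * xm[n-1] + (H[1] * xm[n-2] if n >= 2 else 0.0)
    xm[n] = 1 - A * s**2 + ym[n-1]
    ym[n] = B * xm[n-1]
    # the slave's nonlinearity is fed by the *received* signal s
    xs[n] = 1 - A * s**2 + ys[n-1]
    ys[n] = B * xs[n-1]

print("final sync error:", abs(xm[-1] - xs[-1]))
```

Because both nodes square the same received signal, the state mismatch contracts by the factor B every two steps; whether the master itself remains chaotic under a given filter is exactly what the Lyapunov-exponent simulations in the paper address.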
Abstract:
Fluorescent proteins from the green fluorescent protein family strongly interact with CdSe/ZnS and ZnSe/ZnS nanocrystals at neutral pH. Green-emitting CdSe/ZnS nanocrystals and the red-emitting fluorescent protein dTomato constitute a 72%-efficiency FRET system with the largest alteration of the overall photoluminescence profile upon complex formation observed so far. Substituting ZnSe/ZnS for CdSe/ZnS nanocrystals as energy donors enabled the use of a green fluorescent protein, GFP5, as the energy acceptor. Violet-emitting ZnSe/ZnS nanocrystals and green GFP5 constitute a system with 43% FRET efficiency and an unusually strong sensitized emission. ZnSe/ZnS-GFP5 provides a cadmium-free, high-contrast FRET system that covers only the high-energy part of the visible spectrum, leaving room for simultaneous use of the yellow and red color channels. Fluorescence anisotropy measurements confirmed the depolarization of the GFP5 sensitized emission.
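Given a measured FRET efficiency, the implied donor-acceptor distance follows from the standard relation E = R0^6/(R0^6 + r^6). The Förster radius R0 is pair-specific and not given in the abstract, so this sketch reports distances in units of R0.

```python
# Donor-acceptor distance implied by a FRET efficiency, via the standard
# relation E = R0^6/(R0^6 + r^6)  =>  r = R0 * ((1-E)/E)^(1/6).
def distance_over_R0(E):
    return ((1 - E) / E) ** (1 / 6)

print(f"E = 72%: r ~ {distance_over_R0(0.72):.2f} R0")  # CdSe/ZnS - dTomato pair
print(f"E = 43%: r ~ {distance_over_R0(0.43):.2f} R0")  # ZnSe/ZnS - GFP5 pair
```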
Abstract:
Introducing a pharmaceutical product on the market involves several stages of research. The scale-up stage comprises the integration of the previous phases of development. This phase is extremely important, since many process limitations that do not appear on the small scale become significant in the transposition to a large one. Since the scientific literature presents only a few reports on the characterization of emulsified systems during scale-up, this work aimed at evaluating the physical properties of non-ionic and anionic emulsions during their manufacturing phases: laboratory stage and scale-up. Prototype non-ionic (glyceryl monostearate) and anionic (potassium cetyl phosphate) emulsified systems had their physical properties evaluated by determination of droplet size (D[4,3], μm) and rheological profile. Transposition occurred from a 500 g batch to a 50,000 g one. Semi-industrial manufacturing involved distinct conditions of agitation intensity and homogenization. Comparing the non-ionic and anionic systems, it was observed that anionic emulsifiers generated systems with smaller droplet size and higher viscosity at laboratory scale. In addition, for the concentrations tested, increasing the glyceryl monostearate emulsifier content provided formulations with better physical characteristics. For systems with potassium cetyl phosphate, droplet size increased with emulsifier concentration, suggesting inadequate stability. The scale-up provoked more significant alterations in the rheological profile and droplet size of the anionic systems than of the non-ionic ones.
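The droplet-size metric used above, D[4,3], is the volume-weighted (De Brouckere) mean diameter; the sketch below computes it from a hypothetical droplet-size histogram.

```python
# Volume-weighted mean diameter D[4,3] from a droplet-size histogram.
import numpy as np

def d43(diameters, counts):
    """De Brouckere mean: D[4,3] = sum(n_i * d_i^4) / sum(n_i * d_i^3)."""
    d = np.asarray(diameters, dtype=float)
    n = np.asarray(counts, dtype=float)
    return np.sum(n * d**4) / np.sum(n * d**3)

# hypothetical size classes (um) and droplet counts
print(f"D[4,3] = {d43([0.5, 1, 2, 5, 10], [120, 300, 180, 40, 5]):.2f} um")
```

Because of the fourth-power weighting, D[4,3] is dominated by the largest droplets, which is why it is a sensitive indicator of coalescence during scale-up.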
Abstract:
The Brazilian Network of Food Data Systems (BRASILFOODS) has maintained the Brazilian Food Composition Database-USP (TBCA-USP) (http://www.fcf.usp.br/tabela) since 1998. Besides the constant compilation, analysis and update work on the database, the network tries to innovate through the introduction of food information that may contribute to decreasing the risk of non-transmissible chronic diseases, such as the profile of carbohydrates and flavonoids in foods. In 2008, individually analyzed carbohydrate data for 112 foods, and 41 data points on the glycemic response produced by foods widely consumed in the country, were included in the TBCA-USP. Data (773) on the different flavonoid subclasses of 197 Brazilian foods were compiled, and the quality of each data point was evaluated according to the USDA's data quality evaluation system. In 2007, BRASILFOODS/USP and INFOODS/FAO organized the 7th International Food Data Conference, "Food Composition and Biodiversity". This conference was a unique opportunity for interaction between renowned researchers and participants from several countries, and it allowed the discussion of aspects that may improve the food composition area. During this period, the LATINFOODS Regional Technical Compilation Committee and BRASILFOODS disseminated the Form and Manual for Data Compilation, version 2009, to Latin America, taught a Food Composition Data Compilation course and carried out many activities related to data production and compilation.
Abstract:
We present a review of recent developments on entanglement and nonclassical effects in collective two-atom systems, and offer a uniform physical picture of the many predicted phenomena. The collective effects have brought into sharp focus some of the most basic features of quantum theory, such as nonclassical states of light and entangled states of multiatom systems. Entangled states are linear superpositions of the internal states of the system that cannot be separated into product states of the individual atoms. This property is recognized as an entirely quantum-mechanical effect and has played a crucial role in many discussions of the nature of quantum measurements and, in particular, in the development of quantum communications. Much of the fundamental interest in entangled states is connected with their practical applications, ranging from quantum computation, information processing, cryptography, and interferometry to atomic spectroscopy.
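For two two-level atoms, the entangled states referred to here are commonly written as the symmetric and antisymmetric collective (Dicke) states, shown below in their standard textbook form:

```latex
% Maximally entangled collective states of two two-level atoms
% (symmetric and antisymmetric Dicke states), in standard notation:
\begin{align}
  |s\rangle &= \frac{1}{\sqrt{2}}\bigl(|e_1 g_2\rangle + |g_1 e_2\rangle\bigr), \\
  |a\rangle &= \frac{1}{\sqrt{2}}\bigl(|e_1 g_2\rangle - |g_1 e_2\rangle\bigr).
\end{align}
```

Neither state factorizes into a product of single-atom states, which is precisely the non-separability property the abstract identifies as an entirely quantum-mechanical effect.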