905 results for "Automatic energy management"
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Along the southern Brazilian coast, Tijucas Bay is known for its unique muddy tidal flats associated with chenier plains. Previous field observations pointed to very high suspended sediment concentrations (SSCs) in the inner parts of the bay and in the estuary of the Tijucas River, suggesting the presence of fluid mud. In this study, the occurrences of suspended sediment and fluid mud were examined during a large-scale, high-resolution 2-day field campaign on 1-2 May 2007, encompassing survey lines spanning nearly 80 km, 75 water sampling stations for near-bottom density estimates, and ten sediment sampling stations. Wave refraction modeling provided qualitative wave energy estimates as a function of different incidence directions. The results show that SSC increases toward the inner bay near the water surface, but seaward near the bottom. This suggests that suspended sediment is supplied by the local rivers, in particular the Tijucas. Near-surface SSCs were of the order of 50 mg l⁻¹ close to the shore, but exceeded 100 mg l⁻¹ near the bottom in the deeper parts of the bay. Fluid mud thickness and location given by densimetry and echo-sounding agreed in some places but were mostly discordant. The best agreement was observed where wave energy was high during the campaign. The discrepancy between the two methods may be an indication of the existence of fluid mud that is recorded by one method but not the other: agreement is considered an indication of fluidization, whereas disagreement indicates more consolidation. Wave modeling suggests that waves from the ENE and SE are the most effective in supplying energy to the inner bay, which may induce the liquefaction of mud deposits to form fluid mud. Nearshore mud resuspension and weak horizontal currents result in sediment-laden offshore flow, which explains the higher SSCs measured in the deeper parts of the bay, besides providing a mechanism for fine-sediment export to the inner shelf.
Abstract:
Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree induction algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.
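The comprehensibility argument rests on the fact that a decision tree is built from explicit threshold rules on input features. As a minimal illustration of the kind of split a tree-induction algorithm searches for — this is plain information-gain splitting in the spirit of C4.5, not the paper's automatically designed algorithm, and the docking-like features and data below are invented:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def best_split(rows, labels):
    """Pick the (feature, threshold) pair that maximizes information gain."""
    base = entropy(labels)
    best = (None, None, 0.0)
    for f in range(len(rows[0])):
        for t in sorted(set(r[f] for r in rows)):
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            gain = base - (len(left) / len(labels)) * entropy(left) \
                        - (len(right) / len(labels)) * entropy(right)
            if gain > best[2]:
                best = (f, t, gain)
    return best

# Toy docking-like data: [binding distance, torsion count] -> FEB class
rows = [[1.2, 3], [1.5, 4], [3.8, 2], [4.1, 5]]
labels = ["good", "good", "poor", "poor"]
f, t, gain = best_split(rows, labels)
print(f"rule: feature {f} <= {t}  (gain {gain:.2f})")
```

A full inducer applies `best_split` recursively to the two partitions; the resulting explicit rule paths are what a domain specialist can read and validate.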
Abstract:
The purpose of this study was to assess the effect of low-level laser therapy on subjects with intra-articular temporomandibular disorders (IA-TMD), and to quantify and compare the severity of signs and symptoms before, during, and after the laser applications. The sample consisted of 45 subjects randomly divided into three groups (G) of 15 subjects each: G-I: 15 individuals with IA-TMD submitted to an energy dose of 52.5 J/cm²; G-II: dose of 105.0 J/cm²; and G-III: placebo group (0 J/cm²). In all groups, the applications were performed on condylar points and on the masseter and anterior temporalis muscles. Two weekly sessions were held for five weeks, totaling 10 applications. The assessed variables were mandibular movements and painful symptoms evoked by muscle palpation. These variables were measured before starting the study, immediately after the first, fifth, and tenth laser applications, and finally 32 days after completing the applications. The results showed statistically significant differences for G-I and G-II at the 1% level, both between doses and between assessments. It was therefore concluded that low-level laser therapy increased the mean mandibular range of motion and reduced painful symptoms in the groups that received effective treatment, which did not occur in the placebo group.
Abstract:
Background. Rest myocardial perfusion imaging (MPI) is effective in managing patients with acute chest pain in developed countries. We aimed to define the role and feasibility of rest MPI in low-to-middle income countries. Methods and Results. Low-to-intermediate risk patients (n = 356) presenting with chest pain to ten centers in eight developing countries were injected with a Tc-99m-based tracer, and standard imaging was performed. The primary outcome was a composite of death, non-fatal myocardial infarction (MI), recurrent angina, and coronary revascularization at 30 days. Sixty-nine patients had a positive MPI (19.4%), and 52 patients (14.6%) had a primary outcome event. An abnormal rest-MPI result was the only variable which independently predicted the primary outcome [adjusted odds ratio (OR) 8.19, 95% confidence interval 4.10-16.40, P = .0001]. The association of MPI result and the primary outcome was stronger (adjusted OR 17.35) when only the patients injected during pain were considered. Rest-MPI had a negative predictive value of 92.7% for the primary outcome, improving to 99.3% for the hard event composite of death or MI. Conclusions. Our study demonstrates that rest-MPI is a reliable test for ruling out MI when applied to patients in developing countries. (J Nucl Cardiol 2012;19:1146-53.)
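The headline statistics here are simple functions of a 2×2 contingency table. A sketch of how the odds ratio and the negative predictive value are computed — the counts below are hypothetical, chosen only to be consistent with the reported totals (356 patients, 69 positive scans, 52 events), not the study's actual table:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Odds ratio and negative predictive value from a 2x2 table.

    tp: positive MPI and event;  fp: positive MPI, no event;
    fn: negative MPI but event;  tn: negative MPI, no event.
    """
    odds_ratio = (tp * tn) / (fp * fn)
    npv = tn / (tn + fn)          # P(no event | negative test)
    return odds_ratio, npv

# Hypothetical counts, not the study's raw data
or_, npv = diagnostic_stats(tp=30, fp=39, fn=22, tn=265)
print(f"OR = {or_:.1f}, NPV = {npv:.1%}")
```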
Abstract:
Abstract Introduction Several studies have shown that maximizing stroke volume (or increasing it until a plateau is reached) by volume loading during high-risk surgery may improve post-operative outcome. This goal could be achieved simply by minimizing the variation in arterial pulse pressure (ΔPP) induced by mechanical ventilation. We tested this hypothesis in a prospective, randomized, single-centre study. The primary endpoint was the length of postoperative stay in hospital. Methods Thirty-three patients undergoing high-risk surgery were randomized either to a control group (group C, n = 16) or to an intervention group (group I, n = 17). In group I, ΔPP was continuously monitored during surgery by a multiparameter bedside monitor and minimized to 10% or less by volume loading. Results Both groups were comparable in terms of demographic data, American Society of Anesthesiologists score, and type and duration of surgery. During surgery, group I received more fluid than group C (4,618 ± 1,557 versus 1,694 ± 705 ml (mean ± SD), P < 0.0001), and ΔPP decreased from 22 ± 7.5% to 9 ± 1% (P < 0.05) in group I. The median duration of postoperative stay in hospital (7 versus 17 days, P < 0.01) was lower in group I than in group C. The number of postoperative complications per patient (1.4 ± 2.1 versus 3.9 ± 2.8, P < 0.05), as well as the median duration of mechanical ventilation (1 versus 5 days, P < 0.05) and stay in the intensive care unit (3 versus 9 days, P < 0.01), was also lower in group I. Conclusion Monitoring and minimizing ΔPP by volume loading during high-risk surgery improves postoperative outcome and decreases the length of stay in hospital. Trial registration NCT00479011
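The quantity being minimized has a standard definition: the respiratory variation in pulse pressure over one mechanical breath, normalized by the mean of its extreme values. A minimal sketch — the 52/40 mmHg values are invented for illustration:

```python
def delta_pp(pp_max, pp_min):
    """Respiratory variation in arterial pulse pressure:
    dPP(%) = 100 * (PPmax - PPmin) / ((PPmax + PPmin) / 2),
    where PPmax and PPmin are the largest and smallest pulse
    pressures observed over one mechanical breath (in mmHg)."""
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Example: pulse pressure swings between 52 and 40 mmHg over one breath
print(f"dPP = {delta_pp(52, 40):.1f}%")   # ≈ 26.1%, above the 10% target
```

A monitor implementing the study protocol would trigger volume loading whenever this value exceeds 10%.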
Abstract:
As agriculture is increasingly required to be environmentally sound, indicators and methodologies for assessing sustainability need to be adopted. In Brazil, blending biodiesel into diesel is mandatory, and soybean oil is its main feedstock. The material embodiment quantifies the convergence of inputs into the crop; moreover, these material flows are necessary for any environmental analysis. This study evaluated distinct production scenarios, as well as conventional versus GMO crops, through material embodiment and energy analysis. GMO crops demanded fewer indirectly applied inputs. The energy balance scaled linearly with yield, whereas the EROI was not affected by the increases in input and yield.
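The two indicators compared are straightforward ratios over the embodied-energy inventory. A hedged sketch — the input categories and MJ/ha figures below are invented placeholders, not the study's coefficients:

```python
def energy_indicators(output_mj_ha, inputs_mj_ha):
    """Energy balance (output - input) and EROI (output / input) per hectare,
    given the embodied energy of each input converging into the crop."""
    total_in = sum(inputs_mj_ha.values())
    return output_mj_ha - total_in, output_mj_ha / total_in

# Hypothetical soybean scenario (MJ/ha); real coefficients come from
# the material-embodiment inventory of each input.
inputs = {"seed": 1200, "fertilizer": 4500, "pesticides": 900,
          "diesel_operations": 3800}
balance, eroi = energy_indicators(output_mj_ha=45000, inputs_mj_ha=inputs)
print(f"balance = {balance} MJ/ha, EROI = {eroi:.2f}")
```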
Abstract:
One of the contemporary environmental issues is the progressive and diverse generation of solid waste in urban or specific areas, which demands solutions because the traditional methods of treatment and disposal are becoming unviable over the years; consequently, a significant share of this waste ends up at an inappropriate final destination. The diverse solid waste generated as a result of human activities must be allocated appropriately according to the specific legislation in force, through landfill, incineration, and other procedures established by the competent bodies. Likewise, the waste generated in port activities or coming from vessels requires classification and segregation for proper disposal. This article presents a methodology for the collection, transportation, treatment, and disposal of port solid waste, together with an application of automation technology that makes its implementation possible.
Abstract:
This paper describes VPL, a Virtual Programming Lab module for Moodle, developed at the University of Las Palmas de Gran Canaria (ULPGC) and released as free software under the GNU/GPL license. For students, it is a simple development environment with auto-evaluation capabilities. For instructors, it is a student work management system, with features that facilitate the preparation of assignments, manage submissions, check for plagiarism, and carry out assessments with the aid of powerful and flexible assessment tools based on program testing, all of this independent of the programming language used for the assignments and taking critical security issues into account.
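Assessment "based on program testing" can be pictured as black-box grading: run the submission on prepared inputs and compare its output with the expected one. A minimal Python sketch of that idea — this is not VPL's actual implementation, and it omits the sandboxing and security machinery a real system needs:

```python
import os
import subprocess
import sys
import tempfile

def grade(source, test_cases):
    """Run a student's Python program against (stdin, expected stdout)
    pairs and return the fraction of cases passed (VPL-style black-box
    testing, without the sandboxing a real grader requires)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    passed = 0
    try:
        for stdin_data, expected in test_cases:
            out = subprocess.run([sys.executable, path], input=stdin_data,
                                 capture_output=True, text=True, timeout=5)
            passed += out.stdout.strip() == expected
    finally:
        os.unlink(path)
    return passed / len(test_cases)

student = "print(int(input()) * 2)"
print(grade(student, [("3", "6"), ("10", "20"), ("0", "0")]))  # 1.0
```

Language independence, as in VPL, would come from swapping the interpreter invocation for a compile-and-run step per language.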
Abstract:
Process algebraic architectural description languages provide a formal means for modeling software systems and assessing their properties. In order to bridge the gap between system modeling and system implementation, in this thesis an approach is proposed for automatically generating multithreaded object-oriented code from process algebraic architectural descriptions, in a way that preserves – under certain assumptions – the properties proved at the architectural level. The approach is divided into three phases, which are illustrated by means of a running example based on an audio processing system. First, we develop an architecture-driven technique for thread coordination management, which is completely automated through a suitable package. Second, we address the translation of the algebraically-specified behavior of the individual software units into thread templates, which will have to be filled in by the software developer according to certain guidelines. Third, we discuss performance issues related to the suitability of synthesizing monitors rather than threads from software unit descriptions that satisfy specific constraints. In addition to the running example, we present two case studies about a video animation repainting system and the implementation of a leader election algorithm, in order to summarize the whole approach. The outcome of this thesis is the implementation of the proposed approach in a translator called PADL2Java and its integration in the architecture-centric verification tool TwoTowers.
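The monitor-versus-thread trade-off of the third phase can be pictured with a classic guarded buffer: the unit's actions become methods guarded by conditions under one lock, so no dedicated coordination thread is needed. A Python sketch — illustrative only, since the thesis targets Java code generated by PADL2Java, and the action names here are invented:

```python
import threading

class Monitor:
    """A synthesized monitor in the spirit of the thesis's third phase:
    a unit's algebraic actions become guarded methods on a shared lock
    (illustrative sketch, not PADL2Java output)."""
    def __init__(self, capacity):
        lock = threading.Lock()
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)
        self._buffer, self._capacity = [], capacity

    def put(self, item):                 # action 'put' of the unit
        with self._not_full:
            while len(self._buffer) >= self._capacity:
                self._not_full.wait()
            self._buffer.append(item)
            self._not_empty.notify()

    def get(self):                       # action 'get' of the unit
        with self._not_empty:
            while not self._buffer:
                self._not_empty.wait()
            item = self._buffer.pop(0)
            self._not_full.notify()
            return item

m = Monitor(capacity=2)
results = []
consumer = threading.Thread(
    target=lambda: results.extend(m.get() for _ in range(3)))
consumer.start()
for sample in [1, 2, 3]:                 # producer side
    m.put(sample)
consumer.join()
print(results)   # [1, 2, 3]
```

The performance point is visible here: the buffer itself consumes no thread, only its clients do.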
Abstract:
Large-scale wireless ad hoc networks of computers, sensors, PDAs etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit elementary data to a sink, where they are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction in energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which occurs in geographic-based routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid allows multidimensional data management, since nodes' virtual coordinates can act as a distributed database without requiring special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored and managed. We show how a location service can be easily implemented so that any search is reduced to a simple query, as for any other data type. WGrid has then been extended by adopting a replication methodology; we call the resulting algorithm WRGrid.
Just like WGrid, WRGrid acts as a distributed database without requiring special implementation or reorganization, and any kind of data can be distributed, stored and managed. We have evaluated the benefits of replication on data management, finding from experimental results that it can halve the average number of hops in the network. The direct consequences are a significant improvement in energy consumption and workload balancing among sensors (number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can cope with sensor disconnections/connections due to sensor failures without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures that may happen due to crashes, battery exhaustion of devices, or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. Performance has been compared to existing algorithms in order to validate the results.
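The flavor of routing on virtual coordinates, with neither GPS nor flooding, can be conveyed with a toy prefix-tree rule: coordinates are bit strings, and a node either descends toward a destination lying in its subtree or hands the message to its parent. This is an illustrative simplification, not WGrid's actual algorithm nor its multi-path W*Grid extension:

```python
def route(parent, children, src, dst):
    """Tree routing on prefix-structured virtual coordinates: descend
    when the destination lies in our subtree (dst starts with our
    coordinate), otherwise forward to the parent. Returns the hop path."""
    path, cur = [src], src
    while cur != dst:
        if dst.startswith(cur):
            cur = next(c for c in children[cur] if dst.startswith(c))
        else:
            cur = parent[cur]
        path.append(cur)
    return path

# Tiny network whose virtual coordinates form a binary prefix tree
parent = {"00": "0", "01": "0", "010": "01", "011": "01"}
children = {"0": ["00", "01"], "01": ["010", "011"],
            "00": [], "010": [], "011": []}
print(route(parent, children, "00", "011"))   # ['00', '0', '01', '011']
```

Since every hop is decided locally from the coordinate strings alone, no node ever needs a physical position or a broadcast query.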
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes in order to prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. The two frameworks are again compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
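The core argument for in-network aggregation is a message-count one: without it, every reading travels its full multi-hop path to the sink; with it (whether a simple average or a combination of CS measurements), each node transmits a single packet per round. A toy count on a small routing tree — a deliberate simplification that ignores packet sizes and the CS reconstruction side:

```python
def messages_raw(tree, root):
    """Without aggregation, every reading is forwarded hop by hop:
    each node costs (its depth) transmissions to reach the sink."""
    def depth_sum(node, d):
        return d + sum(depth_sum(c, d + 1) for c in tree[node])
    return depth_sum(root, 0)

def messages_aggregated(tree, root):
    """With in-network aggregation, each node combines its children's
    packets with its own reading and sends exactly one packet upward."""
    def count(node):
        return (0 if node == root else 1) + sum(count(c) for c in tree[node])
    return count(root)

# Two-level binary routing tree rooted at the sink
tree = {"sink": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"],
        "a1": [], "a2": [], "b1": [], "b2": []}
print(messages_raw(tree, "sink"), messages_aggregated(tree, "sink"))  # 10 6
```

The gap widens with depth, which is why aggregation (and CS in particular) directly extends network lifetime.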
Abstract:
MultiProcessor Systems-on-Chip (MPSoC) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, occupying an important role both in everyday-life products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics and medicine. Despite the incredible performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called “Walls”, that have hindered processor development. After the famous “Power Wall”, which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the “Thermal Wall” and the “Utilization Wall” are the current key limiters of performance improvement. The former concerns the damaging effects of high temperature on the chip caused by large power-density dissipation, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limitations on power and temperature budgets. In this thesis we faced these challenges by developing efficient and reliable solutions able to maximize performance while limiting the maximum temperature below a fixed critical threshold and saving energy. This has been possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. The control feasibility, and properties useful for simplifying the control design, have been proved by studying a partial differential equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
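The MPC idea — choose a control decision that optimizes performance against a model's predictions subject to a temperature constraint, then repeat at the next interval — can be sketched with a scalar thermal model. Everything below (the recurrence T' = aT + bP, its coefficients, the power law) is an invented toy, far simpler than the thesis's distributed formulation:

```python
def mpc_step(temp, freqs, power_of, threshold, a=0.9, b=5.0, horizon=3):
    """One receding-horizon decision: pick the highest frequency whose
    predicted temperature trajectory T' = a*T + b*P(f) stays below the
    critical threshold over the horizon (toy scalar model, not the
    thesis's distributed MPC)."""
    for f in sorted(freqs, reverse=True):
        t, p, safe = temp, power_of(f), True
        for _ in range(horizon):
            t = a * t + b * p
            safe = safe and t < threshold
        if safe:
            return f
    return min(freqs)   # thermally safest fallback

power = lambda f: 0.5 * f ** 2          # toy power model, W vs GHz
freq = mpc_step(temp=70.0, freqs=[1.0, 2.0, 3.0], power_of=power,
                threshold=80.0)
print(freq)   # 2.0: 3 GHz would overshoot 80 °C within the horizon
```

A real controller re-solves this at every interval with the newly measured temperature, which is what makes the scheme "receding horizon".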
Abstract:
In the framework of micro-CHP (Combined Heat and Power) energy systems and the Distributed Generation (DG) concept, an Integrated Energy System (IES) able to meet the energy and thermal requirements of specific users was conceived and built, using different types of fuel to feed several micro-CHP energy sources, with the integration of electric generators based on renewable energy sources (RES), electrical and thermal storage systems, and a control system. A 5 kWel Polymer Electrolyte Membrane Fuel Cell (PEMFC) has been studied. Using experimental data obtained from various measurement campaigns, the electrical and CHP performance of the PEMFC system has been determined. The effect of the water management of the anodic exhaust at variable FC loads has been analyzed, and the purge process programming logic was optimized, also leading to the determination of the optimal flooding times as the AC power delivered by the cell varies. Furthermore, the degradation mechanisms of the PEMFC system, in particular those due to flooding of the anodic side, have been assessed using an algorithm that treats the FC as a black box and is able to determine the amount of unreacted H2 and, therefore, the causes producing it. Using experimental data that cover a two-year time span, the ageing suffered by the FC system has been measured and analyzed.
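The black-box hydrogen balance rests on Faraday's law: the stack current fixes how much H2 actually reacts, and comparing that with the metered anodic feed yields the unreacted fraction lost to purges and flooding. A sketch with invented operating numbers — this is the standard electrochemical relation, not the thesis's full algorithm:

```python
F = 96485.0          # Faraday constant, C/mol

def h2_balance(current_a, n_cells, h2_supplied_mol_s):
    """Black-box hydrogen balance of a PEM fuel cell stack: Faraday's
    law gives the H2 actually reacted, I * N / (2F) mol/s (two electrons
    per H2 molecule); the rest of the anodic feed leaves unreacted,
    e.g. through purges or flooding losses."""
    reacted = current_a * n_cells / (2.0 * F)
    return reacted, h2_supplied_mol_s - reacted

# Hypothetical operating point, not the thesis's measured data
reacted, unreacted = h2_balance(current_a=100.0, n_cells=50,
                                h2_supplied_mol_s=0.030)
print(f"reacted {reacted:.4f} mol/s, unreacted {unreacted:.4f} mol/s")
```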
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Hence, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To give four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
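Formula (2) and challenge (c) together suggest an evidence-threshold loop: accumulate hypotheses about a lexical feature from analyzed input, and only commit (or later revise) an entry once one hypothesis clearly dominates. A toy Python sketch — the class, thresholds and valency frames are invented illustrations; ANALYZE-LEARN-REDUCE is far richer:

```python
from collections import Counter

class Lexicon:
    """Toy lexical acquisition in the spirit of formula (2): observations
    extracted from analyzed structures S update the lexicon, and an entry
    is only committed (or revised) once one hypothesis clearly dominates
    the evidence — a crude stand-in for deciding it has seen 'enough'."""
    def __init__(self, min_obs=3, min_ratio=0.8):
        self.evidence = {}               # lemma -> Counter of hypotheses
        self.entries = {}                # lemma -> committed feature value
        self.min_obs, self.min_ratio = min_obs, min_ratio

    def observe(self, lemma, hypothesis):
        c = self.evidence.setdefault(lemma, Counter())
        c[hypothesis] += 1
        best, n = c.most_common(1)[0]
        total = sum(c.values())
        if total >= self.min_obs and n / total >= self.min_ratio:
            self.entries[lemma] = best   # commit, possibly revising

lex = Lexicon()
for frame in ["transitive", "transitive", "transitive", "intransitive"]:
    lex.observe("devour", frame)
print(lex.entries)   # {'devour': 'transitive'}
```

The same mechanism revises falsely acquired knowledge: if later evidence makes a rival hypothesis dominate, the committed entry is overwritten.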