Abstract:
In this work we solve Mathematical Programs with Complementarity Constraints (MPCCs) using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the use of a hyperbolic smoothing function involving a positive parameter that can be decreased to zero. An iterative algorithm was implemented in MATLAB and tested on a set of AMPL problems from the MacMPEC database.
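For illustration only (this is not necessarily the exact smoothing function used in the work): one classical hyperbolic-type smoothing replaces min(a, b) by (a + b − sqrt((a − b)² + 4τ²))/2, which tends to min(a, b) as the positive parameter τ goes to zero, so the complementarity G(x) ≥ 0, H(x) ≥ 0, G(x)·H(x) = 0 can be approximated by a sequence of smooth problems in which τ is driven towards zero. A minimal Python sketch of that outer loop (the thesis implementation is in MATLAB; all names here are illustrative) could look like this:

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_min(a, b, tau):
    # Hyperbolic-type smoothing of min(a, b); recovers min(a, b) as tau -> 0.
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * tau ** 2))

def solve_mpcc(objective, G, H, x0, tau0=1.0, shrink=0.1, n_outer=6):
    """Illustrative outer loop: solve a sequence of smoothed NLPs
    while the positive smoothing parameter tau is driven towards zero."""
    x, tau = np.asarray(x0, dtype=float), tau0
    for _ in range(n_outer):
        cons = [{"type": "eq",
                 "fun": lambda x, t=tau: smoothed_min(G(x), H(x), t)}]
        x = minimize(objective, x, constraints=cons, method="SLSQP").x
        tau *= shrink      # decrease the smoothing parameter
    return x

# Toy usage: minimise (x0 - 1)^2 + (x1 - 1)^2 s.t. x0, x1 >= 0, x0 * x1 = 0.
sol = solve_mpcc(lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2,
                 G=lambda x: x[0], H=lambda x: x[1], x0=[0.5, 0.5])
print(sol)   # expected to approach (1, 0) or (0, 1)
```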
Abstract:
In this paper we present a modified regularization scheme for Mathematical Programs with Complementarity Constraints. In the regularized formulations, the complementarity condition is replaced by a constraint involving a positive parameter that can be decreased to zero. In our approach, both the complementarity condition and the nonnegativity constraints are relaxed. An iterative algorithm was implemented in MATLAB and tested on a set of AMPL problems from the MacMPEC database.
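Purely as an illustration of this kind of relaxation (one common variant from the MPCC literature, not necessarily the authors' exact scheme), the constraints G(x) ≥ 0, H(x) ≥ 0, G(x)·H(x) = 0 can be replaced, for a parameter t > 0 that is driven to zero, by G(x) ≥ −t, H(x) ≥ −t and G(x)·H(x) ≤ t², relaxing both the complementarity and the nonnegativity conditions:

```python
import numpy as np

def relaxed_constraints(G, H, t):
    """Build the relaxed constraint set for one regularisation step.
    Each function returns a value that must be >= 0 to be feasible
    (SciPy-style inequality convention)."""
    return [
        lambda x: G(x) + t,            # relaxed nonnegativity: G(x) >= -t
        lambda x: H(x) + t,            # relaxed nonnegativity: H(x) >= -t
        lambda x: t**2 - G(x) * H(x),  # relaxed complementarity: G(x)H(x) <= t^2
    ]

# Example: for t = 0.1 the point (0.05, 0.05) is feasible,
# but it becomes infeasible once t is decreased to 0.01.
cons = relaxed_constraints(lambda x: x[0], lambda x: x[1], t=0.1)
print([c(np.array([0.05, 0.05])) >= 0 for c in cons])   # [True, True, True]
```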
Abstract:
Pregnancy is a dynamic state and the placenta is a temporary organ that, among other important functions, plays a crucial role in the transport of nutrients and metabolites between the mother and the fetus, which is essential for a successful pregnancy. Among these nutrients, glucose is considered a primary source of energy and is therefore fundamental to ensure proper fetal development. Several studies have shown that glucose uptake depends on several morphological and biochemical placental conditions. Oxidative stress results from an imbalance between reactive oxygen species (ROS) and antioxidants, in favor of the former. During pregnancy, ROS, and therefore oxidative stress, increase due to increased tissue oxygenation. Moreover, the relation between ROS and some pathological conditions during pregnancy is well established. For these reasons, it becomes essential to understand whether oxidative stress can compromise the uptake of glucose by the placenta. To make this study possible, a trophoblastic cell line, the BeWo cell line, was used. Experiments on glucose uptake, under both normal and oxidative stress conditions, were conducted using tert-butylhydroperoxide (tBOOH) as an oxidative stress inducer and 3H-2-deoxy-D-glucose (3H-DG) as a glucose analogue. Afterwards, studies on the involvement of facilitative glucose transporters (GLUT) and of the phosphatidylinositol 3-kinase (PI3K) and protein kinase C (PKC) pathways were conducted, also under normal and oxidative stress conditions. A few antioxidants, both endogenous and dietary, were also tested in order to study their possible ability to reverse the oxidative effect of tBOOH upon apical 3H-DG uptake. Finally, transepithelial studies gave interesting insights into the apical-to-basolateral transport of 3H-DG. Results showed that 3H-DG uptake in BeWo cells is roughly 50% GLUT-mediated and that tBOOH (100 μM; 24 h) decreases apical 3H-DG uptake in BeWo cells by about 33%, by reducing both GLUT-mediated (by 28%) and non-GLUT-mediated (by 40%) 3H-DG uptake. Uptake of 3H-DG and the effect of tBOOH upon 3H-DG uptake are not dependent on PKC and PI3K. Moreover, the effect of tBOOH is not associated with a reduction in GLUT1 mRNA levels. Resveratrol, quercetin and epigallocatechin-3-gallate, at 50 μM, reversed, by at least 45%, the effect of tBOOH upon 3H-DG uptake. Transwell studies showed that the apical-to-basolateral transepithelial transport of 3H-DG is increased by tBOOH. In conclusion, our results show that tBOOH caused a marked decrease in both GLUT- and non-GLUT-mediated apical uptake of 3H-DG by BeWo cells. Given the association of increased oxidative stress levels with several important pregnancy pathologies, and the important role of glucose in fetal development, these results appear highly relevant.
Abstract:
This research work focused on the study of gallinaceous feathers, a waste that may be valorised as a sorbent, to remove the dye Dark Blue Astrazon 2RN (DBA), from Dystar, from water. The study addressed the following aspects: optimization of experimental conditions through factorial design methodology, kinetic studies in a continuous stirred tank adsorber (at pH 7 and 20 °C), equilibrium isotherms (at pH 5, 7 and 9, at 20 and 45 °C) and column studies (at 20 °C and pH 5, 7 and 9). In order to evaluate the influence of the presence of other components on the sorption of the dyestuff, all experiments were performed both for the dyestuff in aqueous solution and in real textile effluent. The pseudo-first and pseudo-second order kinetic models were fitted to the experimental data, the latter providing the best fit for the aqueous solution of dyestuff. For the real effluent, both models fit the experimental results and there is no statistical difference between them. A Central Composite Design (CCD) was used to evaluate the effects of temperature (15 - 45 °C) and pH (5 - 9) on the sorption in aqueous solution. The influence of pH was more significant than that of temperature. The optimal conditions selected were 45 °C and pH 9. Both the Langmuir and Freundlich models could fit the equilibrium data. In the concentration range studied, the highest sorbent capacity was obtained under the optimal conditions in aqueous solution, corresponding to a maximum capacity of 47 ± 4 mg g-1. The Yoon-Nelson, Thomas and Yan models fitted the column experimental data well. The highest breakthrough time for 50% removal, 170 min, was obtained at pH 9 in aqueous solution. The presence of the dyeing agents in the real wastewater decreased the sorption of the dyestuff, mostly at pH 9, which is the optimal pH. The effect of pH is less pronounced in the real effluent than in aqueous solution. This work shows that feathers can be used as a sorbent in the treatment of textile wastewaters containing DBA.
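For reference, the kinetic models mentioned above are usually written as q_t = q_e(1 − e^(−k1·t)) (pseudo-first order) and q_t = q_e²·k2·t / (1 + q_e·k2·t) (pseudo-second order). The following sketch, using synthetic data rather than the study's measurements, shows how such models are typically fitted and compared:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    # q_t = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    # q_t = (qe^2 * k2 * t) / (1 + qe * k2 * t)
    return (qe ** 2 * k2 * t) / (1.0 + qe * k2 * t)

# Synthetic example data (sorbed amount in mg g-1 over time in min);
# a real study would use the measured dye uptake instead.
t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
q = np.array([12, 20, 28, 35, 38, 42, 44], dtype=float)

for model, p0 in [(pseudo_first_order, (45, 0.05)),
                  (pseudo_second_order, (45, 0.001))]:
    popt, _ = curve_fit(model, t, q, p0=p0)
    sse = np.sum((q - model(t, *popt)) ** 2)
    print(model.__name__, popt, "SSE =", round(sse, 2))
```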
Abstract:
This paper presents work in progress to develop an efficient and economical way to directly produce technetium-99m (99mTc) using low-energy cyclotrons. Its importance is well established and relates to the growing worldwide difficulty in delivering 99mTc to the Nuclear Medicine departments that rely on this radioisotope. Since the present delivery strategy has clearly demonstrated its intrinsic limits, our group decided to follow a distinct approach that uses the broad distribution of low-energy cyclotrons and the accessibility of molybdenum-100 (100Mo) as the target material. This is an important issue to consider, since the system presented here, named CYCLOTECH, is not based on the use of Highly Enriched (or even Low Enriched) Uranium-235 (235U), thus fully complying with current international trends and directives concerning the use of this potentially highly critical material. The production technique is based on the nuclear reaction 100Mo(p,2n)99mTc, whose production yields have already been documented. To date, two patent applications have been submitted (the first at the INPI, in Portugal, and the second at the USPTO, in the USA); others are being prepared for submission in the near future. The aim of the CYCLOTECH system is to present 99mTc to Nuclear Medicine radiopharmacists in a routine, reliable and efficient manner that, while remaining flexible, blends entirely with established protocols. To facilitate workflow and radiation protection measures, a target station has been developed that can be installed on most existing PET cyclotrons and that tolerates up to 400 μA of beam current by allowing the beam to strike the target material at a suitably oblique angle. The target station permits the remote and automatic loading and discharge of targets from a carriage of 10 target bodies. In addition, several methods of target material deposition and several target substrates are presented. The aim was to create a cost-effective means of depositing an intermediate target material thickness (25 - 100 μm), with minimal loss, on a substrate able to efficiently remove the heat associated with high beam currents. Finally, the separation techniques presented combine physical and column chemistry. The aim was to extract and deliver 99mTc in the same form now in use in radiopharmacies worldwide. In addition, the target material is recovered and can be recycled.
Abstract:
The main objectives of this work were to study and optimize the treatment process for the effluent from the machines of the Cold-press unit of the Swedwood production line, to characterize the clarified solution obtained in the treatment and study its reintegration into the process, and finally to characterize the glue-paste residue obtained in the treatment and study its possible energetic valorization. After an initial characterization of the effluent, and in accordance with the results of a previous study commissioned by Swedwood from an external company, it was decided to start the effluent treatability study with the physico-chemical process of coagulation/flocculation. In the coagulation/flocculation process, the applicability of different coagulation/flocculation agents (caustic soda, lime, ferric chloride and aluminium sulfate) was studied through jar tests. The best results in this process were obtained with the addition of a lime dose of 500 mg/L of effluent, followed by the addition of 400 mg/L of aluminium sulfate. However, after this treatment the clarified liquid did not meet the requirements either for reintroduction into the manufacturing process or for discharge into water bodies. Complementary treatments were therefore studied. In this second phase, the following treatments were tested: chemical oxidation with Fenton's reagent, biological treatment in an SBR (sequencing batch reactor), and a trickling filter. The analysis of the results obtained with the different treatments showed that the most effective was the biological treatment in the SBR with the addition of activated carbon. It is expected that, at the end of the treatment process, the clarified liquid can be discharged into water bodies or reintroduced into the process. Since the study was carried out only at laboratory scale, it would be useful to validate the results at pilot scale before industrial implementation. Based on the results of the experimental study, an industrial-scale physico-chemical and biological treatment unit was sized for the treatment of the 20 m3 of effluent produced at the plant per week. A unit (drying bed) for treating the sludge produced was also sized. In the physico-chemical treatment unit (coagulation/flocculation), the static settlers must have a working volume of 4.8 m3. Weekly, 36 L of lime suspension (Neutrolac 300) and 12.3 L of 8.3% aluminium sulfate solution are required; the storage tanks for these compounds must have capacities of 43.2 litres and 96 litres, respectively. In this unit, it was estimated that 1.4 m3 of sludge are produced daily. In the biological treatment unit, the biological reactor must have a working volume of 6 m3. For this process to be effective, 2.1 kg of oxygen must be supplied daily. It is estimated that 325 litres of sludge will have to be purged weekly in this process. After purging, the activated carbon, which may be carried away with the sludge, is replenished by adding 100 mg of carbon per litre of mixed liquor. According to the volume of sludge produced in both treatments, the minimum area required for the drying bed is about 27 m2. The economic analysis carried out shows that the acquisition of the equipment costs 22,079.50 euros, the reagents required for one year of operation cost a total of 508.50 euros, and the energy requirements amount to 2,008.45 euros.
Abstract:
The introduction of electricity markets and the integration of Distributed Generation (DG) have been driving changes in the power system's structure. Recently, the smart grid concept has been introduced to guarantee a more efficient operation of the power system, using the advantages of this new paradigm. Basically, a smart grid is a structure that integrates different players, with constant communication between them, to improve power system operation and management. One of the players of major importance in this context is the Virtual Power Player (VPP). In the transportation sector, the Electric Vehicle (EV) is arising as an alternative to conventional vehicles propelled by fossil fuels. The power system can benefit from this massive introduction of EVs, taking advantage of the EVs' ability to connect to the electric network to charge, and of the expected future ability of EVs to discharge to the network using the Vehicle-to-Grid (V2G) capability. This thesis proposes alternative strategies to control these two EV modes with the objective of enhancing the management of the power system. Moreover, the power system must ensure the trips of the EVs that will be connected to the electric network: the EV user specifies the amount of energy that must be charged in order to cover the distance to travel. The introduction of EVs in the power system turns Energy Resource Management (ERM) in a smart grid environment into a complex problem that can take several minutes or hours to reach the optimal solution. Adequate optimization techniques are required to accommodate this kind of complexity while solving the ERM problem in a reasonable execution time. This thesis presents a tool that solves the ERM considering the intensive use of EVs in the smart grid context. The objective is to obtain the minimum-cost ERM considering: the operation cost of DG, the cost of the energy acquired from external suppliers, the EV users' payments and remuneration, and penalty costs. This tool is directed at VPPs that manage specific network areas where a high penetration level of EVs is expected. The ERM is solved using two methodologies: the adaptation of a deterministic technique proposed in a previous work, and the adaptation of the Simulated Annealing (SA) technique. With the purpose of improving the SA performance for this case, three heuristics are additionally proposed, taking advantage of the particularities and specificities of an ERM with these characteristics. A set of case studies is presented in this thesis, considering a 32-bus distribution network and up to 3000 EVs. The first case study solves the scheduling without considering EVs, to be used as a reference case for comparison with the proposed approaches. The second case study evaluates the complexity of the ERM with the integration of EVs. The third case study evaluates the performance of the scheduling with different control modes for EVs. These control modes, combined with the proposed SA approach and with the developed heuristics, aim at improving the quality of the ERM while drastically reducing its execution time. The proposed control modes are uncoordinated charging, smart charging, and V2G capability. The fourth and final case study presents the ERM approach applied to consecutive days.
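As a rough illustration of the metaheuristic component (a generic textbook skeleton, not the adapted SA or the three heuristics developed in the thesis), a simulated annealing loop for a minimisation problem such as resource scheduling can be sketched as follows:

```python
import math
import random

def simulated_annealing(cost, initial_solution, neighbour,
                        t0=1000.0, cooling=0.95, iters_per_temp=50,
                        t_min=1e-3, seed=0):
    """Generic simulated annealing skeleton for a minimisation problem:
    `cost` evaluates a candidate schedule, `neighbour` perturbs it
    (e.g. shifts an EV charging period to another hour)."""
    rng = random.Random(seed)
    current = initial_solution
    best = current
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            candidate = neighbour(current, rng)
            delta = cost(candidate) - cost(current)
            # Always accept improvements; accept worse moves with a
            # probability that shrinks as the temperature decreases.
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= cooling
    return best

# Toy usage: find a vector close to zero.
sol = simulated_annealing(
    cost=lambda x: sum(v * v for v in x),
    initial_solution=[5.0, -3.0],
    neighbour=lambda x, rng: [v + rng.uniform(-0.5, 0.5) for v in x])
print(sol)
```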
Abstract:
In recent years, wireless sensor networks (WSNs) have been hailed as one of the most promising technologies for supporting a wide range of applications. However, outside the research community, few people know what they are and what they can offer, and even fewer have seen these networks used in real-world applications. The main obstacle to the proliferation of these networks is energy, or the lack of it. Even though renewable energy sources are always present in the network's environment, designing devices that can efficiently scavenge that energy in order to sustain the operation of these networks is still an open challenge. Energy scavenging, along with energy efficiency and energy conservation, are the currently available means to sustain the operation of these networks, and can all be framed within the broader concept of “Energetic Sustainability”. A comprehensive study of the several issues related to the energetic sustainability of WSNs is presented in this thesis, with a special focus on today's applicable energy harvesting techniques and devices, and on the energy consumption of commercially available WSN hardware platforms. This work allows the understanding of the different energy concepts involving WSNs and the evaluation of the presented energy harvesting techniques for sustaining wireless sensor nodes. This survey is supported by a novel experimental analysis of the energy consumption of the most widespread commercially available WSN hardware platforms.
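As a small illustration of the kind of back-of-the-envelope energy reasoning involved (the figures below are illustrative, not the platform measurements reported in the thesis), the lifetime of a duty-cycled node can be estimated from its average current draw:

```python
def average_current_mA(i_active_mA, i_sleep_mA, duty_cycle):
    """Duty-cycle-weighted average current draw of a sensor node."""
    return duty_cycle * i_active_mA + (1.0 - duty_cycle) * i_sleep_mA

def lifetime_days(battery_mAh, avg_current_mA):
    """Rough node lifetime estimate, ignoring battery self-discharge
    and voltage-regulation losses."""
    return battery_mAh / avg_current_mA / 24.0

# Illustrative (not measured) figures for a typical mote:
# 20 mA while active, 10 uA asleep, active 1% of the time, 2500 mAh battery.
i_avg = average_current_mA(20.0, 0.010, 0.01)
print(round(i_avg, 3), "mA average ->", round(lifetime_days(2500, i_avg)), "days")
```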
Abstract:
The purpose of this study is to analyse the interlimb relation and the influence of mechanical energy on metabolic energy expenditure during gait. In total, 22 subjects were monitored for electromyographic activity, ground reaction forces and VO2 consumption (metabolic power) during gait. The results demonstrate a moderate negative correlation between the activity of the tibialis anterior, biceps femoris and vastus medialis of the trailing limb during the transition between mid-stance and double support and that of the same muscles of the leading limb during double support, and between these and the gastrocnemius medialis and soleus of the trailing limb during double support. Trailing limb soleus activity during the transition between mid-stance and double support was positively correlated with leading limb tibialis anterior, vastus medialis and biceps femoris activity during double support. Also, the trailing limb centre of mass mechanical work was strongly influenced by the leading limb, although only the mechanical power related to forward progression of both limbs was correlated with metabolic power. These findings demonstrate a consistent interlimb relation in terms of electromyographic activity and centre of mass mechanical work, with the relations occurring in the plane of forward progression being the most important for gait energy expenditure.
Abstract:
This paper presents a complete quadratic programming formulation of the standard thermal unit commitment problem in power generation planning, together with a novel iterative optimisation algorithm for its solution. The algorithm, based on a mixed-integer formulation of the problem, considers piecewise linear approximations of the quadratic fuel cost function that are dynamically updated in an iterative way, converging to the optimum; this avoids the need to resort to quadratic programming, making the solution process much quicker. Extensive computational tests on a broad set of benchmark instances of this problem showed the algorithm to be flexible and capable of easily incorporating different problem constraints. Indeed, it is able to tackle ramp constraints, which, although very important in practice, were rarely considered in previous publications. Most importantly, optimal solutions were obtained for several well-known benchmark instances, including instances of practical relevance, that were not previously known to have been solved to optimality. The computational experiments and their results show that the proposed method is both simple and extremely effective.
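As a simplified illustration of the core idea (a single generating unit, with none of the commitment, ramp or reserve constraints of the actual model), the quadratic fuel cost can be under-approximated by the maximum of tangent cuts, with a new cut added at the output level returned by the previous iteration until the approximation is tight enough:

```python
def fuel_cost(p, a, b, c):
    # Quadratic fuel cost a*p^2 + b*p + c.
    return a * p * p + b * p + c

def tangent_cut(p_k, a, b, c):
    """Linear under-estimator of the quadratic cost, tangent at p_k:
    cost >= f(p_k) + f'(p_k) * (p - p_k)."""
    slope = 2.0 * a * p_k + b
    intercept = fuel_cost(p_k, a, b, c) - slope * p_k
    return slope, intercept

def approx_cost(p, cuts):
    # Piecewise linear approximation = maximum over the accumulated cuts.
    return max(slope * p + intercept for slope, intercept in cuts)

# Dynamic refinement around an operating point (a toy stand-in for the
# MIP re-solve of the real algorithm): start with one cut and keep adding
# cuts at the latest solution until the approximation error is small.
a, b, c = 0.002, 10.0, 500.0
p = 300.0                       # pretend this is the solver's current output level
cuts = [tangent_cut(100.0, a, b, c)]
while fuel_cost(p, a, b, c) - approx_cost(p, cuts) > 1e-6:
    cuts.append(tangent_cut(p, a, b, c))
print(len(cuts), "cuts; gap at p:",
      fuel_cost(p, a, b, c) - approx_cost(p, cuts))
```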
Abstract:
Future distribution systems will have to deal with an intensive penetration of distributed energy resources, ensuring reliable and secure operation according to the smart grid paradigm. SCADA (Supervisory Control and Data Acquisition) is an essential infrastructure for this evolution. This paper proposes a new conceptual design of an intelligent SCADA system with a decentralized, flexible, and intelligent approach, adaptive to the context (context awareness). This SCADA model is used to support the energy resource management undertaken by a distribution network operator (DNO). Resource management considers all the involved costs, power flows, and electricity prices, allowing the use of network reconfiguration and load curtailment. Locational Marginal Prices (LMPs) are evaluated and used in specific situations to apply Demand Response (DR) programs on a global or a local basis. The paper includes a case study using a 114-bus distribution network and load demand based on real data.
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resource management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its adequacy for solving the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios has to be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with a difference of about one percent in the objective function cost value.
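As a bare-bones illustration (this is standard PSO with a crude, linearly tightened velocity limit, not the ASMPSO itself, which also self-parameterises and handles the problem's constraints), the mechanism of adjusting velocity limits during the search can be sketched as follows:

```python
import random

def pso(cost, dim, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO with a velocity limit that is tightened over the run,
    loosely illustrating dynamic velocity-limit adjustment."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for it in range(n_iter):
        # Velocity limit shrinks linearly from 20% to about 2% of the range.
        v_max = (hi - lo) * (0.20 - 0.18 * it / n_iter)
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-v_max, min(v_max, vel[i][d]))
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy usage: minimise a sphere function.
print(pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-10.0, 10.0)))
```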
Abstract:
The TEM family of enzymes has had a crucial impact on the pharmaceutical industry due to its important role in antibiotic resistance. Even with the latest technologies in structural biology and genomics, no 3D structure of a TEM-1/antibiotic complex prior to acylation is known. Therefore, the understanding of their capability to acylate antibiotics is based on the uncomplexed macromolecular structure of the protein. In this work, molecular docking, molecular dynamics simulations, and relative free energy calculations were applied in order to obtain a comprehensive and thorough analysis of the TEM-1/ampicillin and TEM-1/amoxicillin complexes. We described the complexes and analyzed the effect of ligand binding on the overall structure. We clearly demonstrate that the key residues involved in the stability of the ligand (hot-spots) vary with the nature of the ligand. Structural effects such as (i) the distances between interfacial residues (Ser70−Oγ and Lys73−Nζ, Lys73−Nζ and Ser130−Oγ, and Ser70−Oγ and Ser130−Oγ), (ii) side chain rotamer variation (Tyr105 and Glu240), and (iii) the presence of conserved waters can also be influenced by ligand binding. This study supports the hypothesis that TEM-1 undergoes structural modifications upon ligand binding.
Abstract:
Energy storage can be carried out in five categories, namely electrical, electromechanical, mechanical, thermal and chemical. However, the subject discussed here is the means of storing electrical energy, and the storage of electricity is usually accomplished by converting it into other forms of energy, such as chemical, mechanical, thermal or even potential energy [1]. Nowadays there is a growing concern about the way the electricity sector is managed, since it has a high environmental impact. In this sense, there have been some changes, namely with regard to the production of electrical energy. Renewable energy sources are increasingly present in electricity production (Figure 1), since they indirectly reduce the use of fossil fuels, which is their main advantage over conventional power plants.
Abstract:
Modern multicore processors for the embedded market are often heterogeneous in nature. One feature that is often available is a set of multiple sleep states, with varying transition costs for entering and leaving each sleep state. This research effort explores energy-efficient task mapping on such a heterogeneous multicore platform to reduce the overall energy consumption of the system. This is performed in the context of a partitioned scheduling approach and a very realistic power model, which improves over some of the simplifying assumptions often made in the state of the art. The developed heuristic consists of two phases: in the first phase, tasks are allocated so as to minimise their active energy consumption, while the second phase trades off a higher active energy consumption for an increased ability to exploit savings through more efficient sleep states. Extensive simulations demonstrate the effectiveness of the approach.
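As a greatly simplified sketch of a two-phase idea in this spirit (a toy model, not the paper's heuristic or its power model), phase one assigns each task to the core where its active energy is lowest, and phase two tries to vacate cores entirely whenever the deeper sleep state obtained outweighs the extra active energy on the receiving cores:

```python
def total_energy(mapping, active_energy, idle_energy):
    """Active energy of every task plus the idle/sleep overhead of every
    core that hosts at least one task (an empty core is assumed to rest
    in its deepest sleep state at negligible cost)."""
    used = set(mapping.values())
    return (sum(active_energy[(t, c)] for t, c in mapping.items())
            + sum(idle_energy[c] for c in used))

def two_phase_mapping(tasks, cores, active_energy, idle_energy):
    # Phase 1: greedy per-task assignment minimising active energy only.
    mapping = {t: min(cores, key=lambda c: active_energy[(t, c)]) for t in tasks}

    # Phase 2: trade higher active energy for deeper sleep: try to vacate
    # each core entirely and keep the move if total energy drops.
    for c in cores:
        resident = [t for t, host in mapping.items() if host == c]
        if not resident:
            continue
        candidate = dict(mapping)
        for t in resident:
            candidate[t] = min((k for k in cores if k != c),
                               key=lambda k: active_energy[(t, k)])
        if (total_energy(candidate, active_energy, idle_energy)
                < total_energy(mapping, active_energy, idle_energy)):
            mapping = candidate
    return mapping

# Tiny usage example with made-up energies (mJ per hyperperiod).
tasks, cores = ["t1", "t2"], ["big", "little"]
active = {("t1", "big"): 5, ("t1", "little"): 6,
          ("t2", "big"): 4, ("t2", "little"): 5}
idle = {"big": 4, "little": 1}
print(two_phase_mapping(tasks, cores, active, idle))  # both tasks end up on "little"
```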