940 results for Optimal power flow (OPF)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Tendinous lesions are very common in athletic horses. The process of tendon healing is slow and the quality of the new tissue is often inferior to the original, leading in many cases to recurrence of the lesion. One of the main reasons for the limited healing capacity of tendons is their poor vascularization. At present, cell therapy is used in equine practice for the treatment of several disorders, including tendinitis, desmitis and joint disease. However, there is little information regarding the mechanisms of action of these cells during tissue repair. It is known that Mesenchymal Stem Cells (MSCs) release several growth factors at the site of implantation, some of which promote angiogenesis. A comparison of blood flow using power Doppler ultrasonography was performed after the induction of superficial digital flexor tendon tendinitis and the implantation of adipose tissue-derived MSCs, in order to analyze the effect of cell therapy on tendon neovascularization. For the quantification of blood vessels, histopathological examinations were conducted. Increased blood flow and number of vessels were observed in treated tendons up to 30 days after cell implantation, suggesting promotion of angiogenesis by the cell therapy.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The main goal of this work was to develop a simple analytical method for the quantification of glycerol, based on the electrocatalytic oxidation of glycerol on a copper surface adapted to a flow injection system. Under optimal experimental conditions, the peak current response increases linearly with glycerol concentration over the range 60-3200 mg kg(-1) (equivalent to 3-160 mg L(-1) in solution). The repeatability of the electrode response in the flow injection analysis (FIA) configuration was evaluated as 5% (n = 10), and the detection limit of the method was estimated to be 5 mg kg(-1) in biodiesel (equivalent to 250 µg L(-1) in solution) (S/N = 3). The sample throughput under optimised conditions was estimated to be 90 h(-1). Different types of biodiesel samples (B100), differing in the types of vegetable oils or animal fats used to produce the fuels, were analysed (seven samples). The only sample pre-treatment used was an extraction of glycerol from the biodiesel sample at a ratio of 5 mL of water to 250 mg of biodiesel. The proposed method improves on the analytical parameters obtained by other electroanalytical methods for the quantification of glycerol in biodiesel samples, and its accuracy was evaluated using a spike-and-recovery assay, in which all the biodiesel samples gave admissible values according to the Association of Official Analytical Chemists.
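The calibration arithmetic behind these figures can be illustrated with a short sketch: fit a linear calibration curve, estimate the detection limit from the S/N = 3 criterion, and convert the solution concentration to a biodiesel concentration through the 5 mL per 250 mg extraction ratio quoted above. The concentration and current values and the noise level below are hypothetical, not data from the paper.

```python
import numpy as np

# Hypothetical calibration data: glycerol concentration in solution (mg/L)
# and peak current response (arbitrary units); values are illustrative only.
conc = np.array([3, 10, 25, 50, 100, 160], dtype=float)
current = np.array([0.31, 1.02, 2.48, 5.05, 9.90, 16.1])

# Least-squares fit of the linear calibration curve i = m*c + b
slope, intercept = np.polyfit(conc, current, 1)

# Detection limit from the S/N = 3 criterion, using an assumed
# baseline noise standard deviation (hypothetical value).
noise_sd = 0.008
lod_solution = 3 * noise_sd / slope                # mg/L in the aqueous extract

# Convert to a concentration in biodiesel, using the extraction ratio
# quoted in the abstract: 5 mL of water per 250 mg of biodiesel.
ml_water_per_g_biodiesel = 5 / 0.250               # 20 mL/g, numerically equal to L/kg
lod_biodiesel = lod_solution * ml_water_per_g_biodiesel   # mg/kg in biodiesel

print(f"slope = {slope:.3f}, LOD = {lod_solution:.3f} mg/L "
      f"({lod_biodiesel:.1f} mg/kg in biodiesel)")
```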
Abstract:
Background The optimal revascularization strategy for diabetic patients with multivessel coronary artery disease (MVD) remains uncertain for lack of an adequately powered, randomized trial. The FREEDOM trial was designed to compare contemporary coronary artery bypass grafting (CABG) to percutaneous coronary intervention (PCI) with drug-eluting stents in diabetic patients with MVD against a background of optimal medical therapy. Methods A total of 1,900 diabetic participants with MVD were randomized to PCI or CABG worldwide from April 2005 to March 2010. FREEDOM is a superiority trial with a mean follow-up of 4.37 years (minimum 2 years) and 80% power to detect a 27.0% relative reduction. We present the baseline characteristics of patients screened and randomized, and provide a comparison with other MVD trials involving diabetic patients. Results The randomized cohort was 63.1 +/- 9.1 years old and 29% female, with a median diabetes duration of 10.2 +/- 8.9 years. Most (83%) had 3-vessel disease and on average took 5.5 +/- 1.7 vascular medications, with 32% on insulin therapy. Nearly all had hypertension and/or dyslipidemia, and 26% had a prior myocardial infarction. Mean hemoglobin A1c was 7.8% +/- 1.7%, 29% had low-density lipoprotein <70 mg/dL, and mean systolic blood pressure was 134 +/- 20 mm Hg. The mean SYNTAX score was 26.2 with a symmetric distribution. FREEDOM trial participants have baseline characteristics similar to those of contemporary multivessel and diabetes trial cohorts. Conclusions The FREEDOM trial has successfully recruited a high-risk diabetic MVD cohort. Follow-up efforts include aggressive monitoring to optimize background risk factor control. FREEDOM will contribute significantly to the PCI versus CABG debate in diabetic patients with MVD. (Am Heart J 2012;164:591-9.)
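As orientation for the power statement above, the sketch below shows the standard two-proportion sample-size calculation that such a design statement implies. The assumed control-arm event rate is purely hypothetical and chosen only for illustration; the actual trial design accounted for follow-up time and observed event rates, so this will not reproduce the FREEDOM numbers exactly.

```python
from scipy.stats import norm

def n_per_group(p_control, rel_reduction, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-proportion comparison
    (normal approximation, two-sided test)."""
    p_treat = p_control * (1 - rel_reduction)
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_treat) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control)
                    + p_treat * (1 - p_treat)) ** 0.5) ** 2
    return num / (p_control - p_treat) ** 2

# Hypothetical control-arm event rate of 17% over follow-up (illustrative only)
n = n_per_group(0.17, 0.27)
print(f"~{n:.0f} patients per arm, ~{2 * n:.0f} total")
```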
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SOC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSOC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIP (NOCS) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
Abstract:
An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data allows the development of theoretical and practical applications of great interest: the real-time reconstruction of traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. In order for these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. In fact, these data are affected by positioning errors and are often very distant from each other (~2 km). For these reasons, the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data with roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a specifically developed optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
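A minimal sketch of A* shortest-path search on a weighted road graph, the building block of the routing step described above; the toy network, edge costs and straight-line heuristic are illustrative assumptions, not the thesis implementation.

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* search on a weighted graph.
    graph:  dict node -> list of (neighbor, edge_cost)
    coords: dict node -> (x, y), used for the straight-line heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_set = [(h(start), 0.0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(open_set, (g_new + h(nbr), g_new, nbr, path + [nbr]))
    return None, float("inf")

# Toy road network (nodes with planar coordinates, edges with travel costs)
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
graph = {"A": [("B", 1.0), ("C", 1.6)], "B": [("D", 1.5)], "C": [("D", 1.0)]}
print(a_star(graph, coords, "A", "D"))   # -> (['A', 'B', 'D'], 2.5)
```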
Abstract:
In the thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideal to estimate the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. In contrast, we neglect the noise in temperature, since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first one with a constant bias $b$ and the second one with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model through different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps with 1.8° resolution, we show that $\Omega_\Lambda$ accounts for about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ confidence level.
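For orientation, the QML estimator referred to above is conventionally written (in the standard Tegmark form; the notation here is generic and not copied from the thesis) as

$$ \hat{C}_\ell = \sum_{\ell'} \left(F^{-1}\right)_{\ell\ell'} \left[\mathbf{x}^{T}\mathbf{E}^{\ell'}\mathbf{x} - b_{\ell'}\right], \qquad \mathbf{E}^{\ell} = \frac{1}{2}\,\mathbf{C}^{-1}\,\frac{\partial \mathbf{C}}{\partial C_\ell}\,\mathbf{C}^{-1}, \qquad F_{\ell\ell'} = \frac{1}{2}\,\mathrm{Tr}\!\left[\mathbf{C}^{-1}\frac{\partial \mathbf{C}}{\partial C_\ell}\,\mathbf{C}^{-1}\frac{\partial \mathbf{C}}{\partial C_{\ell'}}\right], $$

where $\mathbf{x}$ is the pixel-space data vector (here CMB temperature and galaxy overdensity), $\mathbf{C}$ its total signal-plus-noise covariance matrix, $b_\ell$ the noise bias and $F$ the Fisher matrix; the estimator is unbiased and of minimum variance because its covariance saturates $F^{-1}$.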
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
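To make the constrained-minimization formulation concrete, a generic velocity-tracking functional of the kind described above can be written (an illustrative form with an assumed regularization term; the specific functional and weights of the thesis may differ) as

$$ \min_{\mathbf{B}_c}\ \mathcal{J}(\mathbf{u},\mathbf{B}_c) = \frac{1}{2}\int_\Omega \lvert \mathbf{u}-\mathbf{u}_d\rvert^2\, d\Omega + \frac{\alpha}{2}\int_\Omega \lvert \nabla\mathbf{B}_c\rvert^2\, d\Omega , $$

subject to the incompressible MHD equations for the velocity $\mathbf{u}$, pressure $p$ and magnetic field $\mathbf{B}$, where $\mathbf{u}_d$ is a target velocity field, $\mathbf{B}_c$ is the lifting of the magnetic boundary control into the domain and $\alpha>0$ is a regularization weight. The Lagrange multiplier technique then yields an optimality system made of the state equations, the adjoint equations and a control (gradient) equation, which the gradient-type algorithm mentioned above solves iteratively.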
Abstract:
Hybrid vehicles (HV), comprising a conventional ICE-based powertrain and a secondary energy source to be converted into mechanical power as well, represent a well-established alternative to substantially reduce both fuel consumption and tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. Mechanical, Electric, Hydraulic and Pneumatic Hybrid Vehicles. Among these, the Electric (HEV) and Mechanical (HSF-HV) parallel hybrid configurations are examined throughout this Thesis. To fully exploit the potential of HVs, the hybrid components to be installed must be properly chosen and sized, while an effective Supervisory Control must be adopted to coordinate the way the different power sources are managed and how they interact. Real-time controllers can be derived starting from the optimal benchmark results so obtained. However, the application of these powerful instruments requires a simplified and yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially when the complexity of the system grows, as for the HSF-HV system assessed in this Thesis. The first task of the following dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It will be shown how the chosen modeling paradigm affects both the quality of the solution and the computational effort required, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel-HEV. The emissions level obtained under real-world driving conditions is substantially higher than the usual result obtained in a homologation cycle. For this reason, an on-line control strategy capable of guaranteeing compliance with the desired emissions level, while minimizing fuel consumption and avoiding excessive battery depletion, is the target of the corresponding section of the Thesis.
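A minimal sketch of the backward dynamic-programming recursion used to compute such optimal energy-management benchmarks; the battery model, grids, drive cycle and fuel-cost map below are hypothetical placeholders, not the HSF-HV or HEV models of the Thesis.

```python
import numpy as np

# Hypothetical drive-cycle power demand at the wheels (kW), 1 s steps
p_demand = np.array([10.0, 25.0, 40.0, 15.0, 5.0])

soc_grid = np.linspace(0.4, 0.8, 41)            # discretized state of charge
p_batt_grid = np.linspace(-20.0, 20.0, 41)      # battery power candidates (kW)
batt_kwh = 1.5                                  # usable battery energy (kWh)

def fuel_cost(p_engine):
    """Placeholder engine fuel rate (g/s) as a convex function of power."""
    return 0.0 if p_engine <= 0 else 0.02 * p_engine + 0.0005 * p_engine ** 2

# Backward DP: cost_to_go[i] holds the optimal cost from step k to the end.
# Terminal cost is zero here; a charge-sustaining penalty could be added.
cost_to_go = np.zeros_like(soc_grid)
for k in reversed(range(len(p_demand))):
    new_cost = np.full_like(soc_grid, np.inf)
    for i, soc in enumerate(soc_grid):
        for p_batt in p_batt_grid:
            p_engine = p_demand[k] - p_batt
            soc_next = soc - p_batt / 3600.0 / batt_kwh   # simple integrator model
            if not (soc_grid[0] <= soc_next <= soc_grid[-1]):
                continue                                   # keep SOC inside bounds
            future = np.interp(soc_next, soc_grid, cost_to_go)
            new_cost[i] = min(new_cost[i], fuel_cost(p_engine) + future)
    cost_to_go = new_cost

print("Optimal fuel use from SOC = 0.6:", np.interp(0.6, soc_grid, cost_to_go))
```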
Abstract:
In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent a solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit some peculiar characteristics, such as a small enthalpy drop, low speed of sound and large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model predicts turbine performance within an accuracy of ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm in which the turbine total-to-static efficiency is the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine is the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles. This suggests the possibility of obtaining a more accurate analysis by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and thus the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
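A minimal sketch of a genetic-algorithm design loop with total-to-static efficiency as the objective function, as described above; the design variables, bounds and efficiency surrogate are hypothetical stand-ins for the actual turbine performance model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design variables: [flow coefficient, loading coefficient, degree of reaction]
bounds = np.array([[0.3, 0.8], [0.8, 2.5], [0.2, 0.6]])

def efficiency(x):
    """Placeholder total-to-static efficiency surrogate (peaks near an assumed optimum)."""
    target = np.array([0.5, 1.4, 0.45])
    return 0.90 - np.sum(((x - target) / (bounds[:, 1] - bounds[:, 0])) ** 2)

def evolve(pop_size=40, generations=60, mut_sigma=0.05):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([efficiency(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]        # truncation selection
        kids = []
        while len(kids) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(3)
            child = w * a + (1 - w) * b                         # blend crossover
            child += rng.normal(0, mut_sigma, 3) * (bounds[:, 1] - bounds[:, 0])
            kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.array(kids)
    best = max(pop, key=efficiency)
    return best, efficiency(best)

best_design, best_eta = evolve()
print(best_design, round(best_eta, 4))
```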
Abstract:
Beside the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features that are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow in an efficient and intelligent way as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load or stored in batteries. The microgrid comprises a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load is given by the electrical utilities of a cheese factory. The ESS consists of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, realized and connected to the wind turbine, the photovoltaic plant and the switchboard. Afterwards, different electrochemical storage technologies were studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature salt Na-NiCl2 battery technology. The data acquisition from all electrical utilities provided a detailed load analysis, indicating an optimal storage size corresponding to a 30 kW battery system. Moreover, a container was designed and realized to house the BESS and PCS, meeting all the requirements and safety conditions. Furthermore, a smart control system was implemented in order to handle the different applications of the ESS, such as peak shaving or load levelling.
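A minimal sketch of a rule-based peak-shaving strategy of the kind mentioned above; the threshold, battery limits and load profile are hypothetical placeholders, not measurements from the cheese factory installation.

```python
# Rule-based peak shaving: discharge the battery when net load exceeds a
# threshold, recharge when it is below. Values are illustrative only.
load_kw = [12, 18, 35, 42, 38, 20, 10]   # hourly net load (load minus PV/WT output)
threshold_kw = 30.0                       # grid import limit to enforce
p_batt_max_kw = 15.0                      # converter power limit
capacity_kwh = 30.0
soc_kwh = 20.0                            # initial stored energy

for hour, load in enumerate(load_kw):
    if load > threshold_kw:
        # Discharge to shave the peak, within power and energy limits
        p_batt = min(load - threshold_kw, p_batt_max_kw, soc_kwh)
    else:
        # Recharge from the surplus below the threshold
        p_batt = -min(threshold_kw - load, p_batt_max_kw, capacity_kwh - soc_kwh)
    soc_kwh -= p_batt          # 1 h time step, losses neglected
    grid_kw = load - p_batt
    print(f"h{hour}: load={load:4.0f} kW  battery={p_batt:+5.1f} kW  "
          f"grid={grid_kw:4.1f} kW  SOC={soc_kwh:4.1f} kWh")
```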
Abstract:
An implantable transducer for monitoring the flow of cerebrospinal fluid (CSF) in the treatment of hydrocephalus has been developed, based on measuring the heat dissipation of a local thermal source. The transducer uses passive telemetry at 13.56 MHz for power supply and readout of the measured flow rate. The in vitro performance of the transducer has been characterized using artificial CSF with increased protein concentration and artificial CSF with 10% fresh blood. After fresh blood was added to the artificial CSF, a reduction of flow rate was observed when the sensitive surface of the flow sensor was close to the sedimented erythrocytes. An increase of flow rate was observed when the sensitive surface was in contact with the remaining plasma/artificial-CSF mix above the sediment, which can be explained by an asymmetric flow profile caused by the sedimentation of erythrocytes having increased viscosity compared to artificial CSF. After removal of blood from the artificial CSF, no drift could be observed in the transducer measurement that could be associated with a deposition of proteins on the sensitive surface walls of the packaged flow transducer. The flow sensor specification requirement of ±10% over a flow range between 2 ml/h and 40 ml/h could be confirmed at test conditions of 37 °C.
Abstract:
Detecting small amounts of genetic subdivision across geographic space remains a persistent challenge. Often a failure to detect genetic structure is mistaken for evidence of panmixia, when more powerful statistical tests may uncover evidence for subtle geographic differentiation. Such slight subdivision can be demographically and evolutionarily important, as well as being critical for management decisions. We introduce here a method, called spatial analysis of shared alleles (SAShA), that detects geographically restricted alleles by comparing the spatial arrangement of allelic co-occurrences with the expectation under panmixia. The approach is allele-based and spatially explicit, eliminating the loss of statistical power that can occur with user-defined populations and statistical averaging within populations. Using simulated data sets generated under a stepping-stone model of gene flow, we show that this method outperforms spatial autocorrelation (SA) and ΦST under common real-world conditions: at relatively high migration rates when diversity is moderate or high, especially when sampling is poor. We then use this method to show clear differences in the genetic patterns of 2 nearshore Pacific mollusks, Tegula funebralis (= Chlorostoma funebralis) and Katharina tunicata, whose overall patterns of within-species differentiation are similar according to traditional population genetics analyses. SAShA meaningfully complements ΦST/FST, SA, and other existing geographic genetic analyses and is especially appropriate for evaluating species with high gene flow and subtle genetic differentiation.
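The core of the allele-based test can be sketched as follows: compare the mean geographic distance between pairs of individuals that share an allele with its null distribution under panmixia, obtained by permuting alleles over sampling locations. This is a simplified single-locus illustration with made-up data, not the published SAShA implementation.

```python
import itertools
import random

# Hypothetical single-locus data: (x, y) sampling location and one allele per individual
individuals = [((0, 0), "A"), ((0, 1), "A"), ((5, 5), "B"),
               ((5, 6), "B"), ((0, 2), "A"), ((5, 4), "B")]

def mean_shared_distance(data):
    """Mean geographic distance over all pairs of individuals carrying the same allele."""
    dists = [((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
             for ((x1, y1), a1), ((x2, y2), a2) in itertools.combinations(data, 2)
             if a1 == a2]
    return sum(dists) / len(dists)

observed = mean_shared_distance(individuals)

# Null distribution under panmixia: shuffle alleles across sampling locations
random.seed(1)
locations = [loc for loc, _ in individuals]
alleles = [a for _, a in individuals]
null = []
for _ in range(2000):
    random.shuffle(alleles)
    null.append(mean_shared_distance(list(zip(locations, alleles))))

# One-sided p-value: are shared alleles more geographically clustered than expected?
p_value = sum(d <= observed for d in null) / len(null)
print(f"observed = {observed:.2f}, p = {p_value:.3f}")
```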
Abstract:
A new technique for on-line high resolution isotopic analysis of liquid water, tailored for ice core studies, is presented. We built an interface between a Wavelength Scanned Cavity Ring Down Spectrometer (WS-CRDS) purchased from Picarro Inc. and a Continuous Flow Analysis (CFA) system. The system offers the possibility of performing simultaneous water isotopic analysis of δ18O and δD on a continuous stream of liquid water generated from a continuously melted ice rod. Injection of sub-μl amounts of liquid water is achieved by pumping sample through a fused silica capillary and instantaneously vaporizing it with 100% efficiency in a home-made oven at a temperature of 170 °C. A calibration procedure allows for proper reporting of the data on the VSMOW–SLAP scale. We apply the necessary corrections based on the assessed performance of the system regarding instrumental drifts and the dependence on the water concentration in the optical cavity. The melt rates are monitored in order to assign a depth scale to the measured isotopic profiles. Application of spectral methods yields a combined uncertainty of the system below 0.1‰ and 0.5‰ for δ18O and δD, respectively. This performance is comparable to that achieved with mass spectrometry. Dispersion of the sample in the transfer lines limits the temporal resolution of the technique. In this work we investigate and assess these dispersion effects. By using an optimal filtering method we show how the measured profiles can be corrected for the smoothing effects resulting from the sample dispersion. Considering the significant advantages the technique offers, i.e. simultaneous measurement of δ18O and δD, potentially in combination with chemical components that are traditionally measured on CFA systems, and a notable reduction in analysis time and power consumption, we consider it an alternative to traditional isotope ratio mass spectrometry, with the possibility of being deployed for field ice core studies. We present data acquired in the field during the 2010 season as part of the NEEM deep ice core drilling project in North Greenland.
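A minimal sketch of the kind of optimal (Wiener-type) filtering correction mentioned above, applied to a synthetic profile; the Gaussian transfer function, dispersion length and noise level are assumed for illustration and are not the characterization of the actual CFA system.

```python
import numpy as np

# Correct a dispersion-smoothed isotope profile by Wiener-type optimal filtering.
n, dz = 2048, 0.005                        # samples and depth step (m)
depth = np.arange(n) * dz
true_signal = np.sin(2 * np.pi * depth / 0.20)           # synthetic annual-like cycle

# Smoothing by sample dispersion, modeled as a Gaussian impulse response
sigma_m = 0.02                                            # assumed dispersion length (m)
freqs = np.fft.rfftfreq(n, dz)                            # cycles per metre
transfer = np.exp(-2 * (np.pi * freqs * sigma_m) ** 2)    # FT of the Gaussian kernel

noise_sd = 0.05
measured = np.fft.irfft(np.fft.rfft(true_signal) * transfer, n)
measured += np.random.default_rng(0).normal(0, noise_sd, n)

# Wiener deconvolution: restore attenuated frequencies without amplifying noise
signal_psd = np.abs(np.fft.rfft(true_signal)) ** 2 / n
noise_psd = np.full_like(signal_psd, noise_sd ** 2)
wiener = np.conj(transfer) * signal_psd / (np.abs(transfer) ** 2 * signal_psd + noise_psd)
restored = np.fft.irfft(np.fft.rfft(measured) * wiener, n)

rms_before = np.sqrt(np.mean((measured - true_signal) ** 2))
rms_after = np.sqrt(np.mean((restored - true_signal) ** 2))
print(f"RMS error before: {rms_before:.3f}, after: {rms_after:.3f}")
```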