996 results for export operation methods
Abstract:
In this paper, the calculation of the steady-state operation of a radial/meshed electrical distribution system (EDS) through solving a system of linear equations (non-iterative load flow) is presented. The constant-power-type demand of the EDS is modeled through linear approximations in terms of the real and imaginary parts of the voltage, taking into account the typical operating conditions of EDSs. To illustrate the use of the proposed set of linear equations, a linear model for the optimal power flow with distributed generators is presented. Results using several test and real systems show the excellent performance of the proposed methodology when compared with conventional methods. © 2011 IEEE.
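The linearization at the heart of such a non-iterative load flow can be illustrated on a toy two-bus feeder. The sketch below is our own minimal illustration with assumed per-unit values, not the paper's formulation or test systems: the constant-power load current conj(S)/conj(V) is replaced by its first-order expansion around a flat 1 p.u. voltage, which turns the nodal balance into a small real linear system.

```python
import numpy as np

# Two-bus illustration: slack bus V0 feeds one constant-power load S
# through a line of series admittance y (all values assumed, in p.u.).
y = 1.0 / (0.05 + 0.10j)   # line series admittance
S = 0.5 + 0.2j             # constant-power demand at the load bus
V0 = 1.0 + 0.0j            # slack-bus voltage

# Load current I = conj(S)/conj(V); a first-order expansion around the
# flat voltage V ~ 1 p.u. gives 1/conj(V) ~ 2 - conj(V), hence
#   y*(V - V0) = -conj(S)*(2 - conj(V))
# which rearranges to  y*V - conj(S)*conj(V) = y*V0 - 2*conj(S).
g, b = y.real, y.imag
P, Q = S.real, S.imag

# Split V = Vr + j*Vi into real and imaginary parts -> 2x2 real system.
A = np.array([[g - P, Q - b],
              [b + Q, g + P]])
rhs_c = y * V0 - 2 * np.conj(S)
rhs = np.array([rhs_c.real, rhs_c.imag])

Vr, Vi = np.linalg.solve(A, rhs)   # non-iterative: one linear solve
V = Vr + 1j * Vi
```

On a realistic feeder the same per-bus linearization is stacked into one sparse real system, which is what makes a linear optimal-power-flow formulation with distributed generators tractable.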
Abstract:
Includes bibliography
Abstract:
In this work, a mathematical model to analyze the impact of the installation and operation of dispersed generation units in power distribution systems is proposed. The main focus is to determine the trade-off between the reliability and operational costs of distribution networks when the operation of isolated areas is allowed. In order to increase the system operator's revenue, an optimal power flow makes use of the different energy prices offered by the dispersed generation connected to the grid. Simultaneously, the type and location of the protective devices initially installed on the protection system are reconfigured in order to minimize both the interruption cost and the expenditure of adjusting the protection system to the conditions imposed by the operation of dispersed units. The interruption cost accounts for the energy not supplied to customers in secure parts of the system that are nonetheless affected by the normal tripping of protective devices. The tripping of fuses, reclosers, and overcurrent relays aims to protect the system against both temporary and permanent faults. Additionally, in order to reduce the average duration of the system interruptions experienced by customers, the isolated operation of dispersed generation is allowed by installing directional overcurrent relays with synchronized reclosing capabilities. A 135-bus real distribution system is used to show the advantages of the proposed mathematical model. © 1969-2012 IEEE.
Abstract:
Copepod assemblages from two cascade reservoirs were analyzed during two consecutive years. The upstream reservoir (Chavantes) is a storage system with a high water retention time (WRT of 400 days), and the downstream one (Salto Grande) is a run-of-river system with only 1.5 days WRT. Copepod composition, richness, abundance, and diversity were correlated with the limnological variables and the hydrological and morphometric features. Standard methods were employed for zooplankton sampling and analysis (vertical 50-μm net hauls and counting under a stereomicroscope). Two hypotheses were postulated and confirmed through the data obtained: (1) compartmentalization is more pronounced in the storage reservoir and determines the differences in the copepod assemblage structure; and (2) the assemblages are more homogeneous in the run-of-river reservoir, where the abundance decreases because of the predominance of washout effects. For both reservoirs, the upstream zone is more distinctive. In addition, in the smaller reservoir the influence of the input from tributaries is stronger (turbid waters). Richness did not differ significantly among seasons, but abundance was higher in the run-of-river reservoir during summer. © 2012 Springer Science+Business Media Dordrecht.
Abstract:
Objective: To assess the influence of air abrasion tips and system operation modes on enamel cutting. Methods: Forty bovine teeth were abraded with the air abrasion system Mach 4.1 for 10 and 15 seconds, employing conventional and sonic tips of 0.45-mm inner diameter and a 90° angle, and 27.5-μm aluminum oxide at 5.51 bar air pressure in continuous and pulsed modes. The width and depth of the resulting cuts were measured under SEM. Results: The multivariate analysis of variance revealed that, compared to the sonic tip, the conventional tip produced shallower cuts independent of the operation mode and the application period. Conclusions: The cutting patterns observed in this study suggest that the pulsed mode produced deeper cuts when both the conventional and sonic tips were used, and that the sonic tip cut more dental tissue than the conventional one.
Abstract:
Includes bibliography
Abstract:
Dispute settlement mechanisms help to create a fairly predictable and accurate environment in which economic agents can pursue their activities in the international arena. The World Trade Organization (WTO) Dispute Settlement Body (DSB) has now been in operation for 10 years, and it is fitting, at this point, to assess the progress achieved by Latin America and the Caribbean, the region that made most use of this mechanism during the period and whose countries have made significant gains against protectionism in key export sectors. These successes constitute important precedents which will influence upcoming multilateral negotiations and future trade disputes. This article reviews the work carried out by the DSB, the role of the leading stakeholders in the system (the United States and the European Union), and the progress made by countries of the region in a global context marked by the complexity of trade issues and the legal framework that regulates them. The findings presented in this article are based on the study "Una década de funcionamiento del Sistema de Solución de Diferencias de la OMC: avances y desafíos".
Abstract:
In this paper we report on a search for short-duration gravitational wave bursts in the frequency range 64 Hz-1792 Hz associated with gamma-ray bursts (GRBs), using data from GEO 600 and one of the LIGO or Virgo detectors. We introduce the method of a linear search grid to analyze GRB events with large sky localization uncertainties, for example the localizations provided by the Fermi Gamma-ray Burst Monitor (GBM). Coherent searches for gravitational waves (GWs) can be computationally intensive when the GRB sky position is not well localized, due to the corrections required for the difference in arrival time between detectors. Using a linear search grid we are able to reduce the computational cost of the analysis by a factor of O(10) for GBM events. Furthermore, we demonstrate that our analysis pipeline can improve upon the sky localization of GRBs detected by the GBM, if a high-frequency GW signal is observed in coincidence. We use the method of the linear grid in a search for GWs associated with 129 GRBs observed by satellite-based gamma-ray experiments between 2006 and 2011. The GRBs in our sample had not been previously analyzed for GW counterparts. A fraction of our GRB events are analyzed using data from GEO 600 while the detector was using squeezed-light states to improve its sensitivity; this is the first search for GWs using data from a squeezed-light interferometric observatory. We find no evidence for GW signals, either with any individual GRB in this sample or with the population as a whole. For each GRB we place lower bounds on the distance to the progenitor, under an assumption of a fixed GW emission energy of 10^(-2) M_⊙ c^2, with a median exclusion distance of 0.8 Mpc for emission at 500 Hz and 0.3 Mpc at 1 kHz. The reduced computational cost associated with a linear search grid will enable rapid searches for GWs associated with Fermi GBM events once the advanced LIGO and Virgo detectors begin operation.
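The computational cost this abstract refers to comes from re-aligning the detector data for every candidate arrival-time delay across the sky error box. A rough, self-contained sketch of where those delays come from (our own illustration; the detector positions are approximate, and the angles are generic polar coordinates rather than the RA/Dec and sidereal-time handling a real pipeline uses):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

# Approximate geocentric detector positions in meters (illustrative values,
# not the exact frame constants used by the analysis software).
R_LHO = np.array([-2.1614e6, -3.8346e6, 4.6003e6])   # LIGO Hanford
R_GEO = np.array([3.8564e6, 6.666e5, 5.0197e6])      # GEO 600

def sky_direction(theta, phi):
    """Unit vector toward a sky point (polar angle theta, azimuth phi)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def arrival_delay(theta, phi):
    """GW arrival-time difference between the two detectors, in seconds."""
    return np.dot(R_GEO - R_LHO, sky_direction(theta, phi)) / C

# A coherent search must re-align the data streams for each candidate
# delay; the number of distinct delays over a 2-D sky patch drives the
# cost of a brute-force tiling like this one.
delays = [arrival_delay(t, p)
          for t in np.linspace(0.0, np.pi, 40)
          for p in np.linspace(0.0, 2 * np.pi, 80)]

# Every delay is bounded by the light-travel time along the baseline.
baseline_light_time = np.linalg.norm(R_GEO - R_LHO) / C
```

Because the delay depends only on the projection of the sky direction onto the inter-detector baseline, a one-dimensional "linear" grid of O(10) delay values covers the same patch as the two-dimensional tiling above, which is the cost reduction the search exploits.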
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data. In fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. 
Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
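As a much-simplified illustration of the substructure-counting view of tree kernels discussed in this abstract, the exact-match subtree kernel can be written in a few lines. The tuple representation and function names below are our own, and this naive version rebuilds the bags of subtrees from scratch rather than sharing them across a forest through the DAG construction the thesis proposes:

```python
from collections import Counter

# Trees are nested tuples: (label, child, child, ...).

def subtrees(tree, bag):
    """Collect every full subtree (node plus all descendants) into `bag`."""
    bag[tree] += 1
    for child in tree[1:]:
        subtrees(child, bag)

def subtree_kernel(t1, t2):
    """Count pairs of identical full subtrees across the two trees."""
    b1, b2 = Counter(), Counter()
    subtrees(t1, b1)
    subtrees(t2, b2)
    return sum(b1[s] * b2[s] for s in b1 if s in b2)

# Two toy parse-like trees sharing an NP subtree.
t1 = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',)))
t2 = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',), ('NP', ('D',), ('N',))))
```

Note how two trees with disjoint label sets score zero under this kernel: on datasets with node labels from a large domain most pairs behave this way, which is exactly the sparsity problem described above.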
Abstract:
Traditionally, the study of internal combustion engine operation has focused on steady-state performance. However, the daily driving schedule of automotive engines is inherently related to unsteady conditions. Various operating conditions experienced by (diesel) engines can be classified as transient. Besides the variation of the engine operating point, in terms of engine speed and torque, the warm-up phase can also be considered a transient condition. Chapter 2 deals with this thermal transient condition; more precisely, the main issue is the performance of a Selective Catalytic Reduction (SCR) system during the cold-start and warm-up phases of the engine. The aim of the underlying work is to investigate and identify optimal exhaust-line heating strategies that provide fast activation of the catalytic reactions in the SCR. Chapters 3 and 4 focus on the dynamic behavior of the engine under typical driving conditions. The common approach to dynamic optimization involves the solution of a single optimal-control problem. However, this approach requires the availability of models that are valid throughout the whole engine operating range and actuator ranges. In addition, the result of the optimization is meaningful only if the model is very accurate. Chapter 3 proposes a methodology to circumvent those demanding requirements: an iteration between transient measurements, used to refine a purpose-built model, and a dynamic optimization constrained to the model's validity region. All numerical methods required to implement this procedure are also presented. Chapter 4 proposes an approach to derive a transient feedforward control system in an automated way. It relies on optimal control theory to solve a dynamic optimization problem for fast transients. From the optimal solutions, the relevant information is extracted and stored in maps spanned by the engine speed and the torque gradient.
Abstract:
Revision hip arthroplasty is a surgical procedure consisting of the reconstruction of the hip joint through the replacement of a damaged hip prosthesis. Several factors may give rise to the failure of the artificial device: aseptic loosening, infection, and dislocation represent the principal causes of failure worldwide. The main effect is the formation of bone defects in the region closest to the prosthesis, which weaken the bone structure needed for the biological fixation of the new artificial hip. For this reason, bone reconstruction is necessary before the surgical revision operation. This work arises from the need to test the effects of bone reconstruction for particular bone defects in the acetabulum after hip prosthesis revision. In order to perform biomechanical in vitro tests on hip prostheses implanted in human pelves or hemipelves, a practical definition of a reference frame for this kind of bone specimen is required. The aim of the current study is to create a repeatable protocol to align hemipelvic samples in the testing machine, relying on a reference system based on anatomical landmarks of the human pelvis. In chapter 1, a general overview of the human pelvic bone is presented: anatomy, bone structure, loads, and the principal devices for hip joint replacement. The purpose of chapter 2 is to identify the most common causes of revision hip arthroplasty, analysing data from the most reliable orthopaedic registries in the world. Chapter 3 presents an overview of the most widely used classifications for acetabular bone defects and fractures and the most common techniques for acetabular and bone reconstruction. After a critical review of the scientific literature about reference frames for the human pelvis, in chapter 4 the definition of a new reference frame is proposed. Based on this reference frame, the alignment protocol for the human hemipelvis is presented, together with the statistical analysis that confirms the good repeatability of the method.
Abstract:
BACKGROUND: The arterial switch operation (ASO) is currently the treatment of choice in neonates with transposition of the great arteries (TGA). The outcome in childhood is encouraging, but only limited data exist on long-term outcome into adulthood. METHODS AND RESULTS: We studied 145 adult patients (age > 16, median 25 years) with ASO followed at our institution. Three patients died in adulthood (mortality 2.4 per 1000 patient-years). Most patients were asymptomatic and had normal left ventricular function. Coronary lesions requiring interventions were rare (3 patients) and in most patients related to previous surgery. There were no acute coronary syndromes. Aortic root dilatation was frequent (56% of patients) but rarely significant (>45 mm in 3 patients, maximal diameter 49 mm) and appeared not to be progressive. There were no acute aortic events and no patient required elective aortic root surgery. Progressive neo-aortic valve dysfunction was not observed in our cohort, and only 1 patient required neo-aortic valve replacement. Many patients (42.1%), however, had significant residual lesions or required reintervention in adulthood. Right ventricular outflow tract lesions or dysfunction of the neo-pulmonary valve were frequent, and 8 patients (6%) required neo-pulmonary valve replacement. Cardiac interventions during childhood (OR 3.0, 95% CI 1.7-5.4, P < 0.0001) were strong predictors of outcome (cardiac intervention/significant residual lesion/death) in adulthood. CONCLUSIONS: Adult patients with previous ASO remain free of acute coronary or aortic complications and have low mortality. However, a large proportion of patients require reinterventions or present with significant right-sided lesions. Life-long cardiac follow-up is, therefore, warranted. Periodic noninvasive surveillance for coronary complications appears to be safe in adult ASO patients.
Abstract:
OBJECTIVES: This study analyzes the results of the arterial switch operation for transposition of the great arteries in member institutions of the European Congenital Heart Surgeons Association. METHODS: The records of 613 patients who underwent primary arterial switch operations in each of 19 participating institutions in the period from January 1998 through December 2000 were reviewed retrospectively. RESULTS: A ventricular septal defect was present in 186 (30%) patients. Coronary anatomy was type A in 69% of the patients, and aortic arch pathology was present in 20% of patients with ventricular septal defect. Rashkind septostomy was performed in 75% of the patients, and 69% received prostaglandin. There were 37 hospital deaths (operative mortality, 6%), 13 (3%) for patients with an intact ventricular septum and 24 (13%) for those with a ventricular septal defect (P < .001). In 36% delayed sternal closure was performed, 8% required peritoneal dialysis, and 2% required mechanical circulatory support. Median ventilation time was 58 hours, and intensive care and hospital stay were 6 and 14 days, respectively. Although, of the various preoperative risk factors, the presence of a ventricular septal defect, arch pathology, and coronary anomalies were univariate predictors of operative mortality, only the presence of a ventricular septal defect approached statistical significance (P = .06) on multivariable analysis. Of the various operative parameters, aortic crossclamp time and delayed sternal closure were also univariate predictors; however, only the latter was an independent statistically significant predictor of death. CONCLUSIONS: Results of the procedure in European centers are compatible with those in the literature. The presence of a ventricular septal defect is the clinically most important preoperative risk factor for operative death, approaching statistical significance on multivariable analysis.
Abstract:
Anthropogenic activities have increased phosphorus (P) loading in tributaries to the Laurentian Great Lakes, resulting in eutrophication in areas ranging from small bays to, most notably, Lake Erie. Changes to surface water quality from P loading have resulted in billions of dollars in damage and threaten the health of the world's largest freshwater resource. To understand the factors affecting P delivery under projected increases in urban land and biofuels expansion, two spatially explicit models were coupled. The coupled models predict that the majority of the basin will experience a significant increase in urban-area P sources while the agricultural-intensity and forest sources of P will decrease. Changes in P loading across the basin will be highly variable spatially. Additionally, the impacts of climate change on high-precipitation events across the Great Lakes were examined. Using historical regression relationships for phosphorus concentrations, key Great Lakes tributaries were found to face future changes including decreasing total loads and increases in high-flow loading events. The urbanized Cuyahoga watershed exhibits the most vulnerability to these climate-induced changes, with increases in total loading and storm loading, while the forested Au Sable watershed exhibits greater resilience. Finally, the monitoring network currently in place for sampling the amount of phosphorus entering the U.S. Great Lakes was examined, with a focus on the challenges to monitoring. 
Based on these interviews, the research identified three issues that policy makers interested in maintaining an effective phosphorus monitoring network in the Great Lakes should consider: first, that the policy objectives driving different monitoring programs vary, which results in different patterns of sampling design and frequency; second, that these differences complicate efforts to encourage collaboration; and third, that the methods of funding sampling programs vary from agency to agency, further complicating efforts to generate sufficient long-term data to improve our understanding of phosphorus loading into the Great Lakes. The dissertation combines these three areas of research to present the potential future impacts of P loading in the Great Lakes under changing anthropogenic activities, climate, and monitoring. These manuscripts report new data on future sources, loading, and climate impacts on phosphorus.