899 results for "Lower cost"
Abstract:
Gomesin (Gm) was the first antimicrobial peptide (AMP) isolated from the hemocytes of a spider, the Brazilian mygalomorph Acanthoscurria gomesiana. We have been studying the properties of this interesting AMP, which also displays anticancer, antimalarial, anticryptococcal and anti-Leishmania activities. In the present study, the total syntheses of backbone-cyclized analogues of Gm (two disulfide bonds), [Cys(Acm)2,15]-Gm (one disulfide bond) and [Thr2,6,11,15,d-Pro9]-Gm (no disulfide bonds) were accomplished, and the impact of cyclization on their properties was examined. The consequence of the simultaneous deletion of pGlu1 and Arg16-Glu-Arg18-NH2 on Gm antimicrobial activity and structure was also analyzed. The results obtained showed that the synthetic route that includes peptide backbone cyclization on resin was advantageous and that a combination of 20% DMSO/NMP, EDC/HOBt, 60 °C and conventional heating appears to be particularly suitable for backbone cyclization of bioactive peptides. The biological properties of the Gm analogues clearly revealed that the N-terminal amino acid pGlu1 and the amidated C-terminal tripeptide Arg16-Glu-Arg18-NH2 play a major role in the interaction of Gm with the target membranes. Moreover, backbone cyclization practically did not affect the stability of the peptides in human serum; it either did not affect or enhanced hemolytic activity, but it induced selectivity and, in some cases, modest enhancements of antimicrobial activity and salt tolerance. Because of its high therapeutic index, easy synthesis and lower cost, the [Thr2,6,11,15,d-Pro9]-Gm analogue remains the best Gm-derived AMP developed so far; nevertheless, its high instability in human serum may limit its therapeutic potential. Copyright (c) 2012 European Peptide Society and John Wiley & Sons, Ltd.
Abstract:
Subsurface drip irrigation that uses an emitter protection system to avoid clogging by roots and soil particles may be viable compared to a conventional system. The objective of this work was to evaluate the performance of a system with emitter protection and to compare the results with a system that uses a conventional emitter for subsurface drip irrigation. In the protected system, inexpensive materials were used: polyethylene hose, microtube, a connector, and a dripper to control the flow rate; in the conventional system, a commercial emitter was used. After 12 months of evaluation, the system with the protector showed good performance, with relative average flow rates of 0.97 and 0.98 in pots with and without a crop, respectively, showing no clogging problems and lower cost. In the conventional system, relative flow rates of 0.51 and 0.98 were observed in pots with and without a crop, respectively, along with a root clogging degree of 49.22% and emitters with soil inside. Thus, the emitter with protection proved feasible for subsurface drip irrigation under the conditions used in this research.
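For readers unfamiliar with the metric, the relative flow rate reported above can be read (an assumption about the definition, since the abstract does not spell it out) as the ratio between the average flow rate measured during the evaluation and the flow rate of a new, unclogged emitter:

$$ q_{\mathrm{rel}} = \frac{\bar{q}_{\mathrm{measured}}}{q_{\mathrm{new}}} $$

Under this reading, the value of 0.51 for the conventional emitter in cropped pots corresponds to a flow reduction of roughly 49%, consistent with the 49.22% root clogging degree reported.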
Abstract:
One of the most common dental problems in today's clinics is tooth wear, particularly when related to bruxism. In such cases, the esthetics of the anterior teeth may be compromised when excessive wear of the incisal surfaces occurs. Anterior tooth wear resulting from parafunctional bruxism can be conservatively treated with direct resin composite restorations. This restorative approach has the advantages of good predictability, load resistance, acceptable longevity, preservation of healthy dental tissues, and lower cost when compared with indirect restorations. The use of resin composites to solve esthetic problems, however, requires skill and practice. Thus, the present article demonstrates a conservative approach for restoring the esthetics and function of worn anterior teeth with the aid of direct resin composite restorations and selective occlusal adjustment. CLINICAL SIGNIFICANCE: A conservative approach to restoring anterior teeth with excessive wear is possible with direct resin composites. (J Esthet Restor Dent 24:171-184, 2011)
Abstract:
2-Methylisoborneol (MIB) and geosmin (GSM) are by-products of algae decomposition and, depending on their concentration, can be toxic; even when they are not, they give an unpleasant taste and odor to water. For water treatment companies it is important to constantly monitor their presence in the distributed water and thus avoid customer complaints. Lower-cost and easy-to-read instrumentation would be very promising in this regard. In this study, we evaluate the potential of an electronic tongue (ET) system based on non-specific polymeric sensors and impedance measurements for monitoring MIB and GSM in water samples. Principal component analysis (PCA) applied to the generated data matrix indicated that this ET was capable of discriminating these two contaminants with remarkable reproducibility in either distilled or tap water, at concentrations as low as 25 ng L-1. Nonetheless, this analysis methodology was rather qualitative and laborious, and the outputs it provided were largely subjective. Also, data analysis based on PCA severely restricts automation of the measuring system or its use by non-specialized operators. To circumvent these drawbacks, a fuzzy controller was designed to perform sample classification quantitatively while providing outputs in simpler data charts. For instance, the ET, along with the fuzzy controller, achieved a 100% hit rate in the quantification of MIB and GSM samples in distilled and tap water, and the hit rate could be read directly from the plot. The lower cost of these polymeric sensors, combined with the special features of the fuzzy controller (ease of programming and numerical outputs), provided the initial requirements for developing an automated ET system to monitor odorant species in water production and distribution. (C) 2012 Elsevier B.V. All rights reserved.
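As an illustration of the kind of analysis described above, the sketch below applies PCA to toy impedance-like features for two groups of water samples. It is a minimal sketch with made-up data, assuming NumPy and scikit-learn are available; it is not the paper's sensor data or its fuzzy controller.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Made-up impedance features: 6 sensing units x 3 frequencies per sample,
# for water samples containing MIB or GSM (values are illustrative only).
n_per_class = 10
mib = rng.normal(loc=1.00, scale=0.05, size=(n_per_class, 18))
gsm = rng.normal(loc=1.15, scale=0.05, size=(n_per_class, 18))
X = np.vstack([mib, gsm])

# Project onto the first two principal components, as in a PCA score plot
scores = PCA(n_components=2).fit_transform(X)
print("mean PC1, MIB samples:", scores[:n_per_class, 0].mean().round(3))
print("mean PC1, GSM samples:", scores[n_per_class:, 0].mean().round(3))
```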
Abstract:
Many pathways can be used to synthesize polythiophene derivatives. The polycondensation reactions performed with organometallics are preferred since they lead to regioregular polymers (with a high content of head-to-tail coupling), which have enhanced conductivity and luminescence. However, these pathways have several steps, and the reactants are highly moisture sensitive and expensive. On the other hand, the oxidative polymerization using FeCl3 is a one-pot reaction that requires less moisture-sensitive and lower-cost reactants, although the most common reaction conditions lead to polymers with low regioregularity. Here, we report that by changing the reaction conditions, such as the FeCl3 addition rate and the reaction temperature, poly(3-octylthiophene)s with different regioregularities can be obtained, reaching about 80% head-to-tail coupling. Different molar mass distributions and polydispersities were obtained. The preliminary results suggest that the oxidative polymerization process could be improved to yield polythiophenes with a higher regioregularity degree and narrower molar mass distributions simply by adjusting some reaction conditions. We also verified that it is possible to extract with solvent part of the lower-regioregularity fraction of the polymer, further improving the regioregularity degree. (C) 2011 Wiley Periodicals, Inc. J Appl Polym Sci, 2012
Abstract:
OBJECTIVE: Hypertension is a major issue in public health, and the financial costs associated with hypertension continue to increase. Cost-effectiveness studies focusing on antihypertensive drug combinations, however, have been scarce. The cost-effectiveness ratios of the traditional treatment (hydrochlorothiazide and atenolol) and the current treatment (losartan and amlodipine) were evaluated in patients with grade 1 or 2 hypertension (HT1-2). For patients with grade 3 hypertension (HT3), a third drug was added to the treatment combinations: enalapril was added to the traditional treatment, and hydrochlorothiazide was added to the current treatment. METHODS: Hypertension treatment costs were estimated on the basis of the purchase prices of the antihypertensive medications, and effectiveness was measured as the reduction in systolic blood pressure and diastolic blood pressure (in mm Hg) at the end of a 12-month study period. RESULTS: When the purchase price of the brand-name medication was used to calculate the cost, the traditional treatment presented a lower cost-effectiveness ratio [US$/mm Hg] than the current treatment in the HT1-2 group. In the HT3 group, however, there was no difference in cost-effectiveness ratio between the traditional treatment and the current treatment. The cost-effectiveness ratio differences between the treatment regimens maintained the same pattern when the purchase price of the lower-cost medication was used. CONCLUSIONS: We conclude that the traditional treatment is more cost-effective (US$/mm Hg) than the current treatment in the HT1-2 group. There was no difference in cost-effectiveness between the traditional treatment and the current treatment for the HT3 group.
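The cost-effectiveness ratio used in this study is simply the treatment cost divided by the blood-pressure reduction achieved (US$/mm Hg). The sketch below shows that arithmetic with made-up figures; the costs and reductions are illustrative assumptions, not the study's data.

```python
# Hypothetical annual acquisition costs (US$) and systolic BP reductions (mm Hg)
annual_cost_usd = {
    "traditional (hydrochlorothiazide + atenolol)": 60.0,
    "current (losartan + amlodipine)": 240.0,
}
sbp_reduction_mmhg = {
    "traditional (hydrochlorothiazide + atenolol)": 18.0,
    "current (losartan + amlodipine)": 20.0,
}

for regimen, cost in annual_cost_usd.items():
    cer = cost / sbp_reduction_mmhg[regimen]  # cost-effectiveness ratio
    print(f"{regimen}: {cer:.2f} US$/mm Hg")
```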
Abstract:
Background: Measurement of vital capacity (VC) by spirometry is the most widely used technique for lung function evaluation; however, this form of assessment is costly, and further investigation of other reliable, lower-cost methods is necessary. Objective: To analyze the correlation between direct vital capacity measured with a ventilometer and with an incentive inspirometer in patients before and after cardiac surgery. Methodology: Cross-sectional comparative study with patients undergoing cardiac surgery. Respiratory parameters were evaluated through the measurement of VC performed with the ventilometer and the inspirometer. Data normality was assessed with the Kolmogorov-Smirnov test, correlation with the Pearson correlation coefficient, and comparison of variables between the pre- and postoperative periods with Student's t test. Data are presented as mean, standard deviation and relative frequency when needed. The significance level was set at 5%. Results: We studied 52 patients undergoing cardiac surgery: 20 patients in the preoperative period, with VC-ventilometer 32.95 ± 11.4 ml/kg and VC-inspirometer 28.9 ± 11 ml/kg (r = 0.7, p < 0.001). In the postoperative period, 32 patients were evaluated, with VC-ventilometer 28.27 ± 12.48 ml/kg and VC-inspirometer 26.98 ± 11 ml/kg (r = 0.95, p < 0.001), showing a very high correlation between the two assessment methods. Conclusion: There was a high correlation between direct VC measured with the ventilometer and the incentive spirometer before and after CABG surgery. Nevertheless, further studies are needed to evaluate the impact of this method on lowering hospital costs.
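The correlation analysis described above reduces to computing Pearson's r between paired measurements from the two devices. A minimal sketch with hypothetical paired values follows (assuming SciPy is available; the numbers are not the study's data).

```python
from scipy.stats import pearsonr

# Hypothetical paired direct vital capacity values (ml/kg) measured in the
# same patients with the ventilometer and the incentive inspirometer.
vc_ventilometer = [35.1, 28.4, 41.0, 22.7, 30.9, 26.3, 33.5, 24.8]
vc_inspirometer = [33.0, 27.1, 38.5, 21.9, 29.4, 24.8, 31.2, 23.5]

r, p = pearsonr(vc_ventilometer, vc_inspirometer)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```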
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, as well as the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. In chapter 2 the increasing interest towards power quality in electrical systems is illustrated by reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The aspect of the quality of the voltage provided by utilities and influenced by customers at the various points of a network has emerged only in recent years, in particular as a consequence of the energy market liberalization. Usually, the concept of quality of the delivered energy has been associated mostly with its continuity; hence, reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve the system reliability too. Given the wide range of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting the line voltages. The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superimposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the monitored network reliability, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems by defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of such an approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way, the performance of the location procedure is tested first under ideal and then under realistic operating conditions. In chapter 5 the measurement system designed to implement the transient detection and fault location method is presented. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then, the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement by means of a numerical procedure. In the last chapter a device is described that has been designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study has been carried out with the aim of providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
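To give a concrete flavour of how registered transient arrival times can be turned into a fault position, the sketch below implements the classical two-ended travelling-wave formula for a single line. It is a simplified illustration under assumed values (line length, propagation speed), not the distributed, multi-station method developed in the thesis.

```python
def locate_fault(delta_t_s, line_length_km, wave_speed_km_s=2.9e5):
    """Two-ended travelling-wave fault location on a single line.

    delta_t_s: arrival-time difference t_A - t_B (seconds) of the transient
    wavefront at the two line ends. Returns the distance from end A in km.
    The propagation speed is assumed close to the speed of light in the line.
    """
    return (line_length_km + wave_speed_km_s * delta_t_s) / 2.0

# Example: on a 20 km feeder the wavefront reaches end A 10 us before end B,
# so the fault lies closer to A.
print(f"estimated fault position: {locate_fault(-10e-6, 20.0):.2f} km from end A")
```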
Abstract:
The ever-increasing spread of automation in industry places the electrical engineer in a central role as a promoter of technological development in a sector, the use of electricity, that underlies all machinery and production processes. Moreover, the spread of drives for motor control and of static converters with ever more complex structures confronts the electrical engineer with new challenges, whose solution hinges on the implementation of digital control techniques that meet the requirements of inexpensiveness and efficiency of the final product. The successful application of solutions using non-conventional static converters arouses increasing interest in science and industry due to the promising opportunities; at the same time, however, new problems emerge whose solution is still under study and debate in the scientific community. During the Ph.D. course several themes have been developed that, while attracting the recent and growing interest of the scientific community, leave ample space for further research activity and for industrial applications. The first area of research is related to the control of three-phase induction motors with high dynamic performance and to sensorless control in the high-speed range. The operation of the induction machine without position or speed sensors is of interest to industry because of the increased reliability and robustness of this solution, combined with a lower cost of production and purchase compared to the other technologies available on the market. In this dissertation, control techniques will be proposed which are able to exploit the total dc-link voltage and at the same time exploit the maximum torque capability over the whole speed range with good dynamic performance. The proposed solution preserves the simplicity of tuning of the regulators. Furthermore, in order to validate the effectiveness of the presented solution, it is assessed in terms of performance and complexity and compared to two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter. Another important research area is connected with the development of technology for vehicular applications. In this field, dynamic performance and low power consumption are among the most important goals of an effective algorithm. In this direction, a control scheme for the induction motor is presented that integrates, within a coherent solution, some of the features commonly required of an electric vehicle drive. The main features of the proposed control scheme are the capability to exploit the maximum torque over the whole speed range, a weak dependence on the motor parameters, good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency. The second part of this dissertation is dedicated to multi-phase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multiphase systems allow power to be redistributed over a higher number of phases, thus making possible the construction of electronic converters which would otherwise be very difficult to achieve due to the limits of present power electronics.
Multiphase drives have an intrinsic reliability, given by the possibility that the fault of a phase, caused by the failure of a component of the converter, can be handled without loss of machine operation or the application of a pulsating torque. The control of the spatial harmonics of the air-gap magnetic field with order higher than one allows torque noise to be reduced and high-torque-density motor and multi-motor applications to be obtained. In one of the next chapters a control scheme able to increase the motor torque by adding a third harmonic component to the air-gap magnetic field will be presented. Above the base speed the control system reduces the motor flux in such a way as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. Multi-motor applications consist of a well-defined number of multiphase machines with series-connected stator windings; with an appropriate permutation of the phases, these machines can be independently controlled with a single multi-phase inverter. In this dissertation this solution will be presented, and an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter will be described. Finally, the modulation strategies for a multi-phase inverter will be illustrated. The problem of the space vector modulation of multiphase inverters with an odd number of phases is solved in different ways: an algorithmic approach and a look-up table solution will be proposed. The inverter output voltage capability will be investigated, showing that the proposed modulation strategy is able to fully exploit the dc input voltage in either sinusoidal or non-sinusoidal operating conditions. All these aspects are considered in the next chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on the three-phase inverter. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature. Furthermore, in the same chapter, a complete electric drive based on a matrix converter is presented. In Chapter 4 a control strategy suitable for electric vehicles is illustrated. Chapter 5 describes the mathematical model of multi-phase induction machines, whereas Chapter 6 analyzes the multi-phase inverter and its modulation strategies. Chapter 7 discusses the minimization of the power losses in IGBT multi-phase inverters with carrier-based pulse width modulation. In Chapter 8 an extended stator flux vector control for a seven-phase induction motor is presented. Chapter 9 concerns high-torque-density applications, and in Chapter 10 different fault-tolerant control strategies are analyzed. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
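As a toy companion to the modulation discussion, the sketch below builds carrier-based duty cycles for a five-phase inverter with an injected third spatial harmonic in the reference voltages. It is a generic, simplified sketch (assuming NumPy), not the space vector modulation algorithm or look-up table solution developed in the thesis.

```python
import numpy as np

def five_phase_references(m1, m3, theta):
    """Per-phase reference voltages (as a fraction of Vdc/2) for a five-phase
    machine: a fundamental of amplitude m1 plus a third spatial harmonic of
    amplitude m3, with phases displaced by 72 electrical degrees."""
    shifts = 2 * np.pi * np.arange(5) / 5
    return m1 * np.cos(theta - shifts) + m3 * np.cos(3 * (theta - shifts))

# Carrier-based PWM: map references from [-1, 1] to duty cycles in [0, 1]
theta = 0.3  # electrical angle, rad
duties = 0.5 * (1.0 + five_phase_references(0.9, 0.1, theta))
print(np.round(duties, 3))
```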
Abstract:
The thesis main topic is the conflict between disclosure in financial markets and the need for confidentiality of the firm. After a recognition of the major dynamics of information production and dissemination in the stock market, the analysis moves to the interactions between the information that a firm is typically interested in keeping confidential, such as trade secrets or the data usually covered by patent protection, and the countervailing demand for disclosure arising from financial markets. The analysis demonstrates that despite the seeming divergence between the informational contents typically disclosed to investors and the information usually covered by intellectual property protection, the overlapping areas are nonetheless wide, and the conflict between transparency in financial markets and the firm's need for confidentiality arises frequently and systematically. Indeed, the company's disclosure policy is based on a continuous trade-off between the costs and the benefits related to the public dissemination of information. Such costs are mainly represented by the competitive harm caused by competitors' access to sensitive data, while the benefits mainly refer to the lower cost of capital that the firm obtains as a consequence of more disclosure. Secrecy shields the value of costly produced information against third parties' free riding and therefore constitutes a means to protect the firm's incentives toward the production of new information, especially toward technological and business innovation. Excessively demanding standards of transparency in financial markets might hinder such incentives and thus jeopardize the dynamics of innovation production. Within Italian securities regulation, two sets of rules are most relevant to this issue: the first is the rule that mandates issuers to promptly disclose all price-sensitive information to the market on an ongoing basis; the second is the duty to disclose in the prospectus all the information "necessary to enable investors to make an informed assessment" of the issuer's financial and economic perspectives. Both rules impose high disclosure standards and have potentially unlimited scope. Yet they have safe harbours aimed at protecting the issuer's need for confidentiality. Despite the structural incompatibility between the public dissemination of information and the firm's need to keep certain data confidential, there are certain ways to convey information to the market while preserving the firm's need for confidentiality. Such means are insider trading and selective disclosure: both are based on mechanics whereby the process of price reaction to the new information takes place without any corresponding public release of data. Therefore, they offer a solution to the conflict between disclosure and the need for confidentiality that enhances market efficiency and at the same time preserves the private incentives toward innovation.
Abstract:
The recent advent of next-generation sequencing technologies has revolutionized the way the genome is analyzed. This innovation allows deeper information to be obtained at a lower cost and in less time, and provides data that are discrete measurements. One of the most important applications of these data is differential analysis, that is, investigating whether a gene exhibits a different expression level across two (or more) biological conditions (such as disease states, treatments received and so on). As for the statistical analysis, the final aim is statistical testing, and for modeling these data the Negative Binomial distribution is considered the most adequate, especially because it allows for "overdispersion". However, the estimation of the dispersion parameter is a very delicate issue because little information is usually available for estimating it. Many strategies have been proposed, but they often result in procedures based on plug-in estimates, and in this thesis we show that this discrepancy between the estimation and the testing framework can lead to an uncontrolled type I error. We propose a mixture model that allows each gene to share information with other genes that exhibit similar variability. Afterwards, three consistent statistical tests are developed for differential expression analysis. We show that the proposed method improves the sensitivity of detecting differentially expressed genes with respect to common procedures, since it is the best at reaching the nominal value for the type I error while keeping power high. The method is finally illustrated on prostate cancer RNA-seq data.
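To make the testing framework concrete, the sketch below performs a Negative Binomial likelihood-ratio test for one gene with a plug-in dispersion value, i.e. the kind of plug-in procedure whose type I error behaviour the thesis questions; it is not the proposed mixture model. The counts and the dispersion value are made up, and SciPy/NumPy are assumed.

```python
import numpy as np
from scipy.stats import nbinom, chi2

def nb_loglik(counts, mu, alpha):
    """Negative Binomial log-likelihood with mean mu and dispersion alpha
    (variance = mu + alpha * mu**2), in SciPy's (n, p) parameterization."""
    n = 1.0 / alpha
    p = n / (n + mu)
    return nbinom.logpmf(counts, n, p).sum()

def lrt_pvalue(group_a, group_b, alpha=0.1):
    """Likelihood-ratio test of a common mean vs. group-specific means,
    treating the dispersion alpha as known (a plug-in estimate)."""
    a, b = np.asarray(group_a), np.asarray(group_b)
    pooled = np.concatenate([a, b])
    ll_null = nb_loglik(pooled, pooled.mean(), alpha)
    ll_alt = nb_loglik(a, a.mean(), alpha) + nb_loglik(b, b.mean(), alpha)
    return chi2.sf(2.0 * (ll_alt - ll_null), df=1)

# Hypothetical read counts for one gene under two conditions
print(f"p-value: {lrt_pvalue([25, 31, 22, 28], [55, 61, 48, 70]):.4f}")
```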
Abstract:
Fuel cells are a topic of high interest in the scientific community right now because of their ability to efficiently convert chemical energy into electrical energy. This thesis is focused on solid oxide fuel cells (SOFCs) because of their fuel flexibility, and is specifically concerned with the anode properties of SOFCs. The anodes are composed of a ceramic material (yttria-stabilized zirconia, or YSZ) and a conducting material. Recent research has shown that an infiltrated anode may offer better performance at a lower cost. This thesis focuses on the creation of a model of an infiltrated anode that mimics the underlying physics of the production process. Using the model, several key parameters for anode performance are considered: the initial volume fraction of YSZ in the slurry before sintering, the final porosity of the composite anode after sintering, and the size of the YSZ and conducting particles in the composite. The performance measures of the anode, namely the percolation threshold and the effective conductivity, are analyzed as a function of these important input parameters. Simple two- and three-dimensional percolation models are used to determine the conditions under which the full infiltrated anode would be investigated. These simpler models showed that the aspect ratio of the anode has no effect on the threshold or effective conductivity, and that cell sizes of 30³ are needed to obtain accurate conductivity values. The full model of the infiltrated anode is able to predict the performance of the SOFC anodes, and it can be seen that increasing the size of the YSZ particles decreases the percolation threshold and increases the effective conductivity at low conductor loadings. Similar trends are seen for a decrease in final porosity and a decrease in the initial volume fraction of YSZ.
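The percolation aspect of such a model can be illustrated with a much simpler toy: 2D random site percolation on a square lattice, checking whether occupied sites form a cluster spanning the sample. This is a minimal sketch (assuming NumPy and SciPy), not the thesis's infiltrated-anode model, but it shows how a percolation threshold is estimated numerically.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(1)

def spans(p, n=100):
    """True if sites occupied with probability p form a 4-connected cluster
    joining the top and bottom rows of an n x n square lattice."""
    occupied = rng.random((n, n)) < p
    labels, _ = label(occupied)                 # label connected clusters
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)

# The spanning probability rises sharply near the 2D site-percolation threshold
# (about 0.593), the analogue of the conductor loading threshold in the anode.
for p in (0.50, 0.55, 0.59, 0.63):
    rate = np.mean([spans(p) for _ in range(50)])
    print(f"occupation p = {p:.2f}: spanning fraction = {rate:.2f}")
```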
Abstract:
Solid oxide fuel cells (SOFCs) provide a potentially clean way of using energy sources. One important aspect of a functioning fuel cell is the anode and its characteristics (e.g. conductivity). Infiltration of conductor particles has been shown to be a method for production at lower cost with comparable functionality. While these methods have been demonstrated experimentally, there is a vast range of variables to consider. Because of the long manufacturing time, a model is desired to aid in the development of the desired anode formulation. This thesis aims to (1) use an idealized system to determine the appropriate cell size and aspect ratio for determining the percolation threshold and effective conductivity, and (2) simulate the infiltrated fabrication method to determine the effective conductivity and percolation threshold as a function of ceramic and pore former particle size, particle fraction and the cell's final porosity. The idealized system showed that the aspect ratio of the cell does not affect the cell's functionality and that an aspect ratio of 1 is the most computationally efficient to use. Additionally, at cell sizes greater than 50x50, the conductivity asymptotes to a constant value. Through the infiltrated model simulations, it was found that by increasing the size of the ceramic (YSZ) and pore former particles, the percolation threshold can be decreased and the effective conductivity at low loadings can be increased. Furthermore, by decreasing the porosity of the cell, the percolation threshold and effective conductivity at low loadings can also be increased.
Abstract:
Solid oxide fuel cell (SOFC) technology has the potential to be a significant player in our future energy technology repertoire based on its ability to convert chemical energy into electrical energy. Infiltrated SOFCs, in particular, have demonstrated improved performance at lower cost than traditional SOFCs. An infiltrated electrode comprises a porous ceramic scaffolding (typically constructed from the oxygen ion conducting material) that is infiltrated with electron-conducting and catalytic particles. Two important SOFC electrode properties are the effective conductivity and the three-phase boundary (TPB) density. Researchers study these electrode properties separately and fail to recognize them as competing properties. This thesis aims to (1) develop a method to model the TPB density and use it to determine the effect of porosity, scaffolding particle size, and pore former size on TPB density, and (2) compare the effect of porosity, scaffolding particle size, and pore former size on TPB density and effective conductivity to determine a desired set of parameters for infiltrated SOFC electrode performance. A computational model was used to study the effect of microstructure parameters on the effective conductivity and TPB density of the infiltrated SOFC electrode. From this study, effective conductivity and TPB density are determined to be competing properties of SOFC electrodes. Increased porosity, scaffolding particle size, and pore former particle size increase the effective conductivity for a given infiltrate loading above the percolation threshold. Increased scaffolding particle size and pore former size ratio, however, decrease the TPB density. The maximum TPB density is achievable at porosities between 45% and 60%. The effects of the microstructure parameters are most prominent at low loading, with scaffolding particle size being the most significant factor and pore former size ratio being the least significant.
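To show how a TPB density can be extracted from a discretized microstructure, the sketch below uses a 2D toy grid in which each cell is pore, ionic conductor or electronic conductor, and counts the grid vertices where all three phases meet. It is a simplified 2D stand-in (assuming NumPy), not the computational model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def tpb_density(n, fractions):
    """Toy 2D three-phase boundary density: fraction of interior grid
    vertices whose four surrounding cells contain all three phases
    (0 = pore, 1 = ionic conductor, 2 = electronic conductor)."""
    grid = rng.choice(3, size=(n, n), p=fractions)
    corners = np.stack([grid[:-1, :-1], grid[:-1, 1:], grid[1:, :-1], grid[1:, 1:]])
    has_all = np.ones(corners.shape[1:], dtype=bool)
    for phase in range(3):
        has_all &= (corners == phase).any(axis=0)
    return has_all.mean()

# Illustrative porosity sweep; the remaining solid is split 70/30 between
# ionic and electronic conductor (an arbitrary assumption).
for porosity in (0.30, 0.45, 0.60):
    frac = (porosity, 0.7 * (1 - porosity), 0.3 * (1 - porosity))
    print(f"porosity {porosity:.2f}: TPB density = {tpb_density(200, frac):.3f}")
```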
Abstract:
OBJECTIVE: To compare the efficacy of vaginal misoprostol versus dinoprostone for induction of labor (IOL) in patients with preeclampsia according to the WHO criteria. STUDY DESIGN: Ninety-eight patients were retrospectively analyzed. A total of 47 patients received 3 mg dinoprostone suppositories every 6 h (max. 6 mg/24 h) whereas 51 patients in the misoprostol group received either 50 µg misoprostol vaginally every 12 h, or 25 µg every 6 h (max. 100 µg/24 h). Primary outcomes were vaginal delivery within 24 and 48 h, respectively. RESULTS: The probability of delivering within 48 h was more than three-fold higher in the misoprostol than in the dinoprostone group: odds ratio (OR)=3.48; 95% confidence interval (CI) 1.24, 10.30, whereas no significant difference was observed within 24 h (P=0.34). No correlation was seen between a ripe cervix prior to IOL and delivery within 24/48 h (P=0.33 and P=1.0, respectively). More cesarean sections were performed in the dinoprostone group due to failed IOL (P=0.0009). No significant differences in adverse maternal outcome were observed between both study groups, whereas more neonates (12 vs. 6) of the dinoprostone group were admitted to the NICU (P=0.068). CONCLUSION: This study suggests that misoprostol may have some advantages compared to dinoprostone, including improved efficacy and lower cost of the drug, even in cases of preeclampsia.
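The odds ratio and its confidence interval reported above come from a standard 2x2 table calculation, sketched below with made-up counts (not the study's actual numbers) using the usual log-OR normal approximation.

```python
import math

# Hypothetical 2x2 table for vaginal delivery within 48 h (illustrative only):
a, b = 40, 11   # misoprostol: delivered within 48 h / not delivered
c, d = 28, 19   # dinoprostone: delivered within 48 h / not delivered

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```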