954 results for Objective method
Abstract:
This study presents an economic optimization method for designing telescoping (multidiameter) irrigation laterals with regularly spaced outlets. The proposed analytical hydraulic solution was validated on a pipeline composed of three different diameters. The minimum acquisition cost of the telescoping pipeline was determined by an ideal arrangement of lengths and respective diameters for each of the three segments. The mathematical optimization method, based on Lagrange multipliers, provides a strategy for finding the maximum or minimum of a function subject to certain constraints. In this case, the objective function describes the acquisition cost of the pipes, and the constraints are determined from hydraulic parameters such as the length of the irrigation laterals and the total head loss permitted. The developed analytical solution provides the ideal combination of each pipe segment length and respective diameter, resulting in a decreased acquisition cost.
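The constrained cost-minimization described above can be sketched numerically. The following is an illustration only, not the paper's closed-form Lagrange-multiplier solution: a brute-force scan over the same two equality constraints (fixed total length, fixed head-loss budget), with all costs, friction gradients, and lengths invented for the example.

```python
TOTAL_LEN = 300.0     # m, required lateral length (assumed)
HEAD_LOSS_MAX = 5.0   # m, head-loss budget spent exactly (assumed)

# Per-metre cost and friction gradient (m of head lost per m of pipe)
# for the three candidate diameters, smallest to largest -- assumed values.
COST = (2.0, 3.5, 6.0)
J = (0.030, 0.015, 0.008)

def solve_l2_l3(l1):
    """Given L1, the two equality constraints fix L2 and L3."""
    rhs_len = TOTAL_LEN - l1                  # L2 + L3
    rhs_loss = HEAD_LOSS_MAX - J[0] * l1      # J2*L2 + J3*L3
    l2 = (rhs_loss - J[2] * rhs_len) / (J[1] - J[2])
    return l2, rhs_len - l2

best = None
for step in range(int(TOTAL_LEN * 2) + 1):
    l1 = 0.5 * step
    l2, l3 = solve_l2_l3(l1)
    if l2 >= 0.0 and l3 >= 0.0:
        cost = COST[0] * l1 + COST[1] * l2 + COST[2] * l3
        if best is None or cost < best[0]:
            best = (cost, l1, l2, l3)

cost, l1, l2, l3 = best
print(f"L1={l1:.1f} m, L2={l2:.1f} m, L3={l3:.1f} m, cost={cost:.2f}")
```

Note how the cheapest (smallest) diameter is not used for the whole lateral: it consumes too much of the head-loss budget, which is exactly the trade-off the analytical solution resolves.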
Abstract:
The objective of this study was to investigate the possibility of using hydric restriction as a method for evaluating the vigor of soybean seeds. The soybean seeds, cultivar BRS 245RR, represented by four different seed lots, were characterized by germination and vigor. For the hydric restriction and temperature treatments, the combinations of substrate water potential and temperature were as follows: deionized water (0.0 MPa) or polyethylene glycol (PEG 6000) aqueous solution (-0.1, -0.3 and -0.5 MPa), each at four temperatures (20 ºC, 25 ºC, 30 ºC, and 35 ºC). A completely randomized experimental design was used, with four replications per treatment, and the ANOVA was performed individually for each combination of temperature and substrate water potential. According to the results obtained, the hydric restriction test is as efficient as the accelerated aging test in estimating the vigor of soybean seeds, cv. BRS 245RR, when water potentials of -0.1 MPa or -0.3 MPa at a temperature of 25 ºC, or -0.3 MPa at a temperature of 30 ºC, are used.
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear, involving several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges now arise: designing SR plans for larger systems that are as good as those for relatively smaller ones, and SR plans for multiple faults that are as good as those for a single fault. To tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN and a new heuristic. This heuristic focuses the application of NDE operators on alarming network zones according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
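Multi-objective planners such as those above compare candidate SR plans on several objectives at once (for instance, number of switching operations and load left out of service). A minimal sketch of the Pareto-dominance test that underlies NSGA-II-style ranking; the plan tuples below are invented for illustration.

```python
def dominates(a, b):
    """True if plan `a` is at least as good as `b` in every (minimised)
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(plans):
    """Plans not dominated by any other plan."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

# (switching operations, kW of load not restored) for four hypothetical plans
plans = [(4, 120), (6, 80), (5, 80), (7, 200)]
front = pareto_front(plans)
print(front)  # (4, 120) and (5, 80) are mutually non-dominated
```

The non-dominated set is what the evolutionary algorithms return to the operator, who then picks one plan according to operational priorities.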
Abstract:
This Ph.D. thesis presents a general, robust methodology that can cover any type of 2D acoustic optimization problem. A procedure coupling Boundary Elements (BE) and Evolutionary Algorithms is proposed for systematic geometric modification of road barriers, leading to designs with ever-increasing screening performance. Numerical simulations involving single- and multi-objective optimizations of noise barriers of varied nature are included in this document. The results disclosed justify the implementation of this methodology, leading to optimal solutions of previously defined topologies that, in general, greatly outperform the acoustic efficiency of the classical, widely used barrier designs normally erected near roads.
Abstract:
Geometric nonlinearities of flexure hinges introduced by large deflections often complicate the analysis of compliant mechanisms containing such members, and therefore Pseudo-Rigid-Body Models (PRBMs) were proposed and developed by Howell [1994] to analyze the characteristics of slender beams under large deflection. These models, however, fail to approximate the characteristics of deep (short) beams or of other flexure hinges. Lobontiu's work [2001] contributed to the analysis of diverse flexure hinges, but builds on the assumption of small deflections, which limits the application range of these flexure hinges and cannot capture their stiffness and stress characteristics under large deflection. Therefore, the objective of this thesis is to analyze flexure hinges considering both the effects of large deflection and shear force, guiding the design of flexure-based compliant mechanisms. The main work conducted in the thesis is outlined as follows. 1. Three popular types of flexure hinges (circular, elliptical, and corner-filleted) are chosen for analysis. 2. A Finite Element Analysis (FEA) method based on commercial software (Comsol) is then used to correct the errors produced by the equations proposed by Lobontiu when the chosen flexure hinges undergo large deformation. 3. Three sets of generic design equations for the three types of flexure hinges are proposed on the basis of the stiffness and stress characteristics obtained from the FEA results. 4. A flexure-based four-bar compliant mechanism is finally studied and modeled using the proposed generic design equations. The load-displacement relationships are verified by a numerical example. The results show that the maximum error in the moment-rotation relationship is less than 3.4% for a single flexure hinge, and lower than 5% for the four-bar compliant mechanism, compared with the FEA results.
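The abstract's point that shear matters for deep (short) beams can be illustrated with a textbook comparison: Euler-Bernoulli bending deflection versus a first-order Timoshenko-style shear term for a prismatic cantilever flexure. This is a standard special case, not the thesis's design equations; the dimensions and shear coefficient below are assumed values.

```python
E = 200e9          # Pa, Young's modulus (steel, assumed)
NU = 0.3           # Poisson's ratio (assumed)
G = E / (2 * (1 + NU))
KAPPA = 5.0 / 6.0  # shear coefficient for a rectangular section

def tip_deflection(F, L, b, h):
    """Tip deflection terms of a cantilever of length L, width b, depth h
    under end force F: (bending term, first-order shear term)."""
    I = b * h**3 / 12.0
    A = b * h
    bend = F * L**3 / (3.0 * E * I)
    shear = F * L / (KAPPA * G * A)
    return bend, shear

for L in (0.050, 0.010, 0.005):   # slender beam -> deep (short) beam
    bend, shear = tip_deflection(1.0, L, 0.005, 0.002)
    print(f"L={L*1000:.0f} mm  shear/bending = {shear/bend:.3%}")
```

The shear contribution grows with (h/L)^2, which is why small-deflection, bending-only equations degrade for short hinges.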
Abstract:
DI Diesel engines are widely used in both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system focused on reducing soot and NOx simultaneously while maintaining reasonable fuel economy. In recent years, genetic algorithms (GAs) and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. To make the optimization process less time-consuming, CFD simulations can more conveniently be used to generate a training set for the learning process of an artificial neural network which, once correctly trained, can forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
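The "virtual optimization" loop can be sketched end to end in miniature. Everything below is invented for illustration: an analytic stand-in replaces the expensive CFD runs, a piecewise-linear interpolant stands in for the trained neural network, and a simple GA then searches the surrogate instead of the simulator.

```python
import bisect
import math
import random

random.seed(1)

def cfd_stand_in(x):
    """Placeholder for an expensive CFD run: a soot/NOx-style trade-off
    in one normalised design parameter (invented objective)."""
    return (x - 0.3) ** 2 + 0.05 * math.sin(8 * x)

# 1. Small "CFD" training set -- the costly part in the real workflow.
xs = [i / 14 for i in range(15)]
ys = [cfd_stand_in(x) for x in xs]

# 2. Surrogate standing in for the trained neural network: here just a
#    piecewise-linear interpolation of the training points.
def surrogate(x):
    i = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# 3. "Virtual" GA: tournament selection + Gaussian mutation, evaluating
#    only the cheap surrogate, never the simulator.
pop = [random.random() for _ in range(20)]
for _ in range(40):
    nxt = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)
        parent = a if surrogate(a) < surrogate(b) else b
        nxt.append(min(1.0, max(0.0, parent + random.gauss(0, 0.05))))
    pop = nxt

best = min(pop, key=surrogate)
print(f"surrogate optimum near x = {best:.2f}")
```

The design choice is the one the abstract describes: the simulator is queried only to build the training set, so the thousands of GA evaluations cost almost nothing.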
Abstract:
In the last decade the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. The technique consists of inserting carbon fibre reinforced polymer laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically epoxy adhesive or cement grout. Up to now, research efforts have mainly focused on several structural aspects, such as bond behaviour, flexural and/or shear strengthening effectiveness, and the energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the most widespread adhesives used to bond reinforcements to concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resins, for which proper curing conditions must be assured. Therefore, non-destructive methods that allow monitoring the curing process of epoxy resins in the NSM CFRP system are desirable, in order to obtain continuous information about the effectiveness of curing and the expected bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems.
This objective was pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method), originally developed for monitoring the early stiffness evolution of cement-based materials. The experimental program was composed of two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips, conducted to assess the evolution of the bond behaviour between CFRP and concrete from early ages; and (ii) EMM-ARM tests, carried out to monitor the progressive stiffness development of the structural adhesive used in CFRP applications. To verify the capability of the proposed method for evaluating the elastic modulus of the epoxy, the static E-modulus was determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the possibility of implementing a new method for the continuous monitoring and quality control of NSM CFRP applications.
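The core of EMM-ARM is inferring the evolving elastic modulus of the curing material from the measured resonant frequency of a small composite beam. A sketch of that inversion for the idealised case of a homogeneous cantilever (the real method also accounts for the mould's contribution); the beam dimensions, density, and "measured" frequencies below are all invented.

```python
import math

LAMBDA1 = 1.8751041  # first-mode eigenvalue of a cantilever beam

def modulus_from_frequency(f1, length, mass_per_len, inertia):
    """Invert f1 = (LAMBDA1^2 / (2*pi)) * sqrt(E*I / (m*L^4)) for E."""
    omega_term = 2.0 * math.pi * f1 / LAMBDA1**2
    return omega_term**2 * mass_per_len * length**4 / inertia

# Assumed beam: 550 mm cantilever, 20 mm circular resin cross-section
length = 0.55
d = 0.020
inertia = math.pi * d**4 / 64.0
mass_per_len = 1150.0 * math.pi * d**2 / 4.0   # rho ~ 1150 kg/m3 (assumed)

# Invented frequency readings as the epoxy stiffens during curing
for hours, f1 in [(6, 7.0), (12, 15.0), (48, 19.0)]:
    E = modulus_from_frequency(f1, length, mass_per_len, inertia)
    print(f"t={hours:>2} h  f1={f1:4.1f} Hz  E ~= {E/1e9:.2f} GPa")
```

Because E scales with the square of the measured frequency, continuous ambient-vibration monitoring translates directly into a continuous stiffness curve, which is what makes the method attractive for quality control.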
Abstract:
The main objective of this thesis is to obtain a better understanding of the methods used to assess the stability of a slope. We illustrate the principal variants of the Limit Equilibrium (LE) method found in the literature, focusing our attention on the Minimum Lithostatic Deviation (MLD) method developed by Prof. Tinti and his collaborators (e.g. Tinti and Manucci, 2006, 2008). We had two main goals. The first was to test the MLD method on some real cases: we selected the case of the Vajont landslide, with the objective of reconstructing the conditions that caused the destabilization of Mount Toc, and two sites on the Norwegian margin, where failures have not occurred recently, with the aim of evaluating the present stability state and assessing under which conditions they might be mobilized. The second goal was to study the stability charts by Taylor and by Michalowski, and to use the MLD method to investigate the correctness and adequacy of this engineering tool.
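The Limit Equilibrium idea in its simplest textbook form is the infinite slope, where the factor of safety has a closed expression; for a dry slope with cohesion c it is FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta)). This is a standard special case, not the MLD method itself, and the soil parameters below are invented.

```python
import math

def infinite_slope_fs(c, phi_deg, beta_deg, gamma, z):
    """Factor of safety of an infinite slope (dry, no seepage):
    available shear strength over mobilised shear stress on a plane
    at depth z parallel to the slope surface."""
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    tau_avail = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    tau_mob = gamma * z * math.sin(beta) * math.cos(beta)
    return tau_avail / tau_mob

# Assumed soil: c = 5 kPa, phi = 30 deg, unit weight 19 kN/m3, depth 4 m
for beta in (15, 25, 35):
    fs = infinite_slope_fs(5.0, 30.0, beta, 19.0, 4.0)
    print(f"beta={beta} deg  FS={fs:.2f}")
```

FS > 1 indicates stability and FS < 1 failure; the LE variants the thesis reviews (including MLD) generalise this balance to arbitrary slip surfaces discretised into slices.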
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management due to the potentially catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
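The certification idea can be shown in miniature: a candidate point found in floating point is re-checked in exact rational arithmetic, so a "feasible" answer carries a proof rather than trusting rounded computations. A minimal sketch with Python's `Fraction`; the toy constraint system Ax <= b is invented.

```python
from fractions import Fraction as F

def certify_feasible(A, b, x):
    """Exact check that A @ x <= b, row by row, in rational arithmetic.
    Returns True only if every constraint provably holds."""
    for row, bound in zip(A, b):
        lhs = sum(a * xi for a, xi in zip(row, x))
        if lhs > bound:
            return False
    return True

# Toy LP constraints:  x + y <= 1,  -x <= 0,  -y <= 0
A = [[F(1), F(1)], [F(-1), F(0)], [F(0), F(-1)]]
b = [F(1), F(0), F(0)]

# Candidate from a floating-point solver, rounded to rationals lying in
# the relative interior of the feasible set (1/3 + 1/3 < 1 with slack):
x = [F(1, 3), F(1, 3)]
print(certify_feasible(A, b, x))
```

Aiming at the relative interior is what gives the floating-point candidate enough slack to survive the exact check, which is why the method described above rarely has to answer 'unknown'.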
Abstract:
The objective of this study was to estimate the potential of method restriction as a public health strategy in suicide prevention. Data from the Swiss Federal Statistical Office and the Swiss Institutes of Forensic Medicine from 2004 were gathered and categorized into suicide submethods according to accessibility to restriction of means. Of suicides in Switzerland, 39.2% are accessible to method restriction. The highest proportions were found in private weapons (13.2%), army weapons (10.4%), and jumps from hot-spots (4.6%). The presented method permits the estimation of the suicide prevention potential of a country by method restriction and the comparison of restriction potentials between suicide methods. In Switzerland, reduction of firearm suicides has the highest potential to reduce the total number of suicides.
Abstract:
OBJECTIVE: To compare four different implantation modalities for the repair of superficial osteochondral defects in a caprine model using autologous, scaffold-free, engineered cartilage constructs, and to describe the short-term outcome of successfully implanted constructs. METHODS: Scaffold-free, autologous cartilage constructs were implanted within superficial osteochondral defects created in the stifle joints of nine adult goats. The implants were distributed between four 6-mm-diameter superficial osteochondral defects created in the trochlea femoris and secured in the defect using a covering periosteal flap (PF) alone or in combination with adhesives (platelet-rich plasma (PRP) or fibrin), or using PRP alone. Eight weeks after implantation surgery, the animals were killed. The defect sites were excised and subjected to macroscopic and histopathologic analyses. RESULTS: At 8 weeks, implants that had been held in place exclusively with a PF were well integrated both laterally and basally. The repair tissue manifested an architecture similar to that of hyaline articular cartilage. However, most of the implants that had been glued in place in the absence of a PF were lost during the initial 4-week phase of restricted joint movement. The use of human fibrin glue (FG) led to massive cell infiltration of the subchondral bone. CONCLUSIONS: The implantation of autologous, scaffold-free, engineered cartilage constructs might best be performed beneath a PF without the use of tissue adhesives. Successfully implanted constructs showed hyaline-like characteristics in adult goats within 2 months. Long-term animal studies and pilot clinical trials are now needed to evaluate the efficacy of this treatment strategy.
Abstract:
OBJECTIVE: Measuring peritoneal lactate concentrations could be useful for detecting splanchnic hypoperfusion. The aims of this study were to evaluate the properties of a new membrane-based microdialyzer in vitro and to assess the ability of the dialyzer to detect a clinically relevant decrease in splanchnic blood flow in vivo. DESIGN: A membrane-based microdialyzer was first validated in vitro. The same device was tested afterward in a randomized, controlled animal experiment. SETTING: University experimental research laboratory. SUBJECTS: Twenty-four Landrace pigs of both genders. INTERVENTIONS: In vitro: Membrane microdialyzers were kept in warmed sodium lactate baths with lactate concentrations between 2 and 8 mmol/L for 10-120 mins, and microdialysis lactate concentrations were measured repeatedly (210 measurements). In vivo: An extracorporeal shunt with blood reservoir and roller pump was inserted between the proximal and distal abdominal aorta, and a microdialyzer was inserted intraperitoneally. In 12 animals, total splanchnic blood flow (measured by transit time ultrasound) was reduced by a median 43% (range, 13% to 72%) by activating the shunt; 12 animals served as controls. MEASUREMENTS AND MAIN RESULTS: In vitro: The fractional lactate recovery was 0.59 (0.32-0.83) after 60 mins and 0.82 (0.71-0.87) after 90 mins, with no further increase thereafter. At 60 and 90 mins, the fractional recovery was independent of the lactate concentration. In vivo: Abdominal blood flow reduction resulted in an increase in peritoneal microdialysis lactate concentration from 1.7 (0.3 to 3.8) mmol/L to 2.8 (1.3 to 6.2) mmol/L (p = .006). At the same time, the mesenteric venous-arterial lactate gradient increased from 0.1 (-0.2 to 0.8) mmol/L to 0.3 (-0.3 to 1.8) mmol/L (p = .032), and mesenteric venous-arterial Pco2 gradients increased from 12 (8-19) torr to 21 (11-54) torr (p = .005).
CONCLUSIONS: Peritoneal membrane microdialysis provides a method for the assessment of splanchnic ischemia, with potential for clinical application.
Abstract:
OBJECTIVE: A previous study of radiofrequency neurotomy of the articular branches of the obturator nerve for hip joint pain produced modest results. Based on an anatomical and radiological study, we sought to define a potentially more effective radiofrequency method. DESIGN: Ten cadavers were studied, four of them bilaterally. The obturator nerve and its articular branches were marked by wires. Their radiological relationship to the bone structures on fluoroscopy was imaged and analyzed. A magnetic resonance imaging (MRI) study was undertaken on 20 patients to determine the structures that would be encountered by the radiofrequency electrode during different possible percutaneous approaches. RESULTS: The articular branches of the obturator nerve vary in location over a wide area. The previously described method of denervating the hip joint did not take this variation into account. Moreover, it approached the nerves perpendicularly. Because optimal coagulation requires electrodes to lie parallel to the nerves, a perpendicular approach probably produced only a minimal lesion. In addition, MRI demonstrated that a perpendicular approach is likely to puncture femoral vessels. Vessel puncture can be avoided if an oblique pass is used. Such an approach minimizes the angle between the target nerves and the electrode, and increases the likelihood of the nerve being captured by the lesion made. Multiple lesions need to be made in order to accommodate the variability in location of the articular nerves. CONCLUSIONS: The method that we described has the potential to produce complete and reliable nerve coagulation. Moreover, it minimizes the risk of penetrating the great vessels. The efficacy of this approach should be tested in clinical trials.
Abstract:
PURPOSE: To correlate the dimension of the visual field (VF) tested by Goldmann kinetic perimetry with the extent of visibility of the highly reflective layer between inner and outer segments of photoreceptors (IOS) seen in optical coherence tomography (OCT) images in patients with retinitis pigmentosa (RP). METHODS: In a retrospectively designed cross-sectional study, 18 eyes of 18 patients with RP were examined with OCT and Goldmann perimetry using test target I4e and compared with 18 eyes of 18 control subjects. A-scans of raw scan data of Stratus OCT images (Carl Zeiss Meditec AG, Oberkochen, Germany) were quantitatively analyzed for the presence of the signal generated by the highly reflective layer between the IOS in OCT images. Starting in the fovea, the distance to which this signal was detectable was measured. Visual fields were analyzed by measuring the distance from the center point to isopter I4e. OCT and visual field data were analyzed in a clockwise fashion every 30 degrees, and corresponding measures were correlated. RESULTS: In corresponding alignments, the distance from the center point to isopter I4e and the distance to which the highly reflective signal from the IOS can be detected correlate significantly (r = 0.75, P < 0.0001). The greater the distance in the VF, the greater the distance measured in OCT. CONCLUSIONS: The authors hypothesize that the retinal structure from which the highly reflective layer between the IOS emanates is of critical importance for visual and photoreceptor function. Further research is warranted to determine whether this may be useful as an objective marker of progression of retinal degeneration in patients with RP.
Abstract:
OBJECTIVE: In ictal scalp electroencephalogram (EEG), the presence of artefacts and the wide-ranging patterns of discharges are hurdles to good diagnostic accuracy. Quantitative EEG aids the lateralization and/or localization of epileptiform activity. METHODS: Twelve patients achieving Engel Class I/IIa outcome one year after temporal lobe surgery were selected, with approximately 1-3 ictal EEGs analyzed per patient. The EEG signals were denoised with the discrete wavelet transform (DWT), followed by computation of the normalized absolute slopes and spatial interpolation of the scalp topography, associated with detection of local maxima. For localization, the region with the highest normalized absolute slopes at the time when epileptiform activities were registered (>2.5 times the standard deviation) was designated as the region of onset. For lateralization, the cerebral hemisphere registering the first appearance of normalized absolute slopes >2.5 times the standard deviation was designated as the side of onset. As a comparison, all the EEG episodes were reviewed by two neurologists blinded to clinical information to determine the localization and lateralization of seizure onset by visual analysis. RESULTS: 16/25 seizures (64%) were correctly localized by the visual method and 21/25 seizures (84%) by the quantitative EEG method. 12/25 seizures (48%) were correctly lateralized by the visual method and 23/25 seizures (92%) by the quantitative EEG method. The McNemar test showed p=0.15 for localization and p=0.0026 for lateralization when comparing the two methods. CONCLUSIONS: The quantitative EEG method yielded significantly more correctly lateralized seizure episodes, and there was a trend towards more correctly localized seizures. SIGNIFICANCE: Coupling DWT with the absolute slope method helps clinicians achieve better EEG diagnostic accuracy.
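The absolute-slope feature used above can be sketched on synthetic data. This illustration omits the DWT denoising step, uses one plausible normalisation (window slope relative to baseline slope), and invents the two-channel signal, sampling rate, and discharge parameters; only the idea of flagging the hemisphere whose slope exceeds the 2.5-sigma-style threshold follows the abstract.

```python
import math
import random

random.seed(0)
FS = 256  # Hz, assumed sampling rate

def abs_slope(signal):
    """Mean absolute first difference: a simple slope-magnitude feature."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

def normalised_abs_slope(window, baseline):
    """Window slope expressed relative to the channel's baseline slope."""
    return abs_slope(window) / abs_slope(baseline)

# Baseline: low-amplitude noise on both channels (10 s)
baseline = {ch: [random.gauss(0, 1) for _ in range(FS * 10)] for ch in ("L", "R")}

# Ictal window (2 s): channel "L" carries a fast 7 Hz discharge, "R" does not
t = [i / FS for i in range(FS * 2)]
ictal = {
    "L": [40 * math.sin(2 * math.pi * 7 * ti) + random.gauss(0, 1) for ti in t],
    "R": [random.gauss(0, 1) for _ in t],
}

scores = {ch: normalised_abs_slope(ictal[ch], baseline[ch]) for ch in ("L", "R")}
onset_side = max(scores, key=scores.get)
print(scores, "->", onset_side)
```

The discharging channel's normalised slope clears the 2.5 threshold while the quiet channel stays near 1, which is the quantitative cue that replaces purely visual lateralization.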