Abstract:
In this paper we review the novel meccano method. We summarize the main stages (subdivision, mapping, optimization) of this automatic tetrahedral mesh generation technique and focus the study on complex genus-zero solids. In this case, our procedure requires only a surface triangulation of the solid. A crucial consequence of our method is the volume parametrization of the solid to a cube. Using this result, we construct volume T-meshes for isogeometric analysis. The efficiency of the proposed technique is shown with several examples. A comparison between the meccano method and standard mesh generation techniques is also presented.
Abstract:
This work introduces a new technique for tetrahedral mesh optimization. The procedure relocates boundary and inner nodes without changing the mesh topology. To maintain the boundary approximation while boundary nodes are moved, a local refinement of tetrahedra with faces on the solid boundary is necessary in some cases. New nodes are projected onto the boundary by using a surface parameterization. In this work, the proposed method is applied to tetrahedral meshes of genus-zero solids generated by the meccano method. In this case, the solid boundary is automatically decomposed into six surface patches, which are parameterized onto the six faces of a cube with the Floater parameterization...
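The parameterization step mentioned above can be sketched in a few lines: in Floater's convex-combination approach, each interior vertex of the surface patch is written as a weighted average of its neighbours, and the resulting linear system is solved with the boundary fixed. The sketch below is a simplified assumption in two respects: it uses uniform (Tutte) weights rather than Floater's shape-preserving weights, and a toy five-vertex patch rather than a real surface triangulation.

```python
import numpy as np

def tutte_parameterize(n_vertices, edges, boundary_uv):
    """Map a disk-like triangulated patch into the plane by solving a
    Laplace-type system: each interior vertex is placed at the average
    of its neighbours (uniform weights; Floater's method uses
    shape-preserving weights instead)."""
    boundary = set(boundary_uv)
    interior = [v for v in range(n_vertices) if v not in boundary]
    idx = {v: i for i, v in enumerate(interior)}
    nbrs = {v: set() for v in range(n_vertices)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    A = np.zeros((len(interior), len(interior)))
    rhs = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(nbrs[v])
        for w in nbrs[v]:
            if w in boundary:
                rhs[idx[v]] += boundary_uv[w]
            else:
                A[idx[v], idx[w]] = -1.0
    uv = np.zeros((n_vertices, 2))
    for v, p in boundary_uv.items():
        uv[v] = p
    uv_interior = np.linalg.solve(A, rhs)
    for v in interior:
        uv[v] = uv_interior[idx[v]]
    return uv

# Toy patch: four corners of the unit square plus one interior vertex.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
corners = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]),
           2: np.array([1.0, 1.0]), 3: np.array([0.0, 1.0])}
uv = tutte_parameterize(5, edges, corners)
print(uv[4])  # interior vertex lands at the centroid of the square
```

For a genuine surface patch the same system would be solved with mean-value weights, with the patch boundary mapped to the corresponding cube face.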
Abstract:
Human reactions to vibration have been extensively investigated in the past. Vibration, and whole-body vibration (WBV) in particular, has commonly been considered an occupational hazard for its detrimental effects on human condition and comfort. Although long-term exposure to vibration may produce undesirable side effects, a great part of the literature is dedicated to the positive effects of WBV when used as a method for muscular stimulation and as an exercise intervention. Whole-body vibration training (WBVT) aims to mechanically activate muscles by eliciting neuromuscular activity (muscle reflexes) via vibrations delivered to the whole body. The mechanism most often invoked to explain the neuromuscular outcomes of vibration is this elicited neuromuscular activation. Local tendon vibrations induce activity of the muscle spindle Ia fibers, mediated by monosynaptic and polysynaptic pathways: a reflex muscle contraction known as the Tonic Vibration Reflex (TVR) arises in response to such a vibratory stimulus. In WBVT, mechanical vibrations in a range from 10 to 80 Hz, with peak-to-peak displacements from 1 to 10 mm, are usually transmitted to the patient's body through oscillating platforms. Vibrations are then transferred from the platform to a specific muscle group through the subject's body. To customize WBV treatments, surface electromyography (SEMG) signals are often used to reveal the best stimulation frequency for each subject. The use of concise SEMG parameters, such as root-mean-square values of the recordings, is also common practice; frequently a preliminary session takes place in order to identify the most appropriate stimulation frequency.
Soft tissues act as wobbling masses vibrating in a damped manner in response to mechanical excitation. The muscle tuning hypothesis suggests that the neuromuscular system works to damp the soft-tissue oscillation that occurs in response to vibration: muscles alter their activity to dampen the vibrations, preventing any resonance phenomenon. Muscle response to vibration is, however, a complex phenomenon, as it depends on several parameters, such as muscle tension, muscle or segment stiffness, and the amplitude and frequency of the mechanical vibration. Additionally, while in TVR studies the applied vibratory stimulus and the muscle conditions are completely characterised (a known vibration source is applied directly to a stretched/shortened muscle or tendon), in WBV studies only the stimulus applied to a distal part of the body is known. Moreover, the mechanical response changes with posture: the transmissibility of the vibratory stimulus along the body segments strongly depends on the position held by the subject. The aim of this work was to investigate the effects that vibrations, in particular whole-body vibrations, may have on muscular activity. A new approach to identifying the most appropriate stimulus frequency, based on accelerometers, was also explored. Several subjects, not affected by any known neurological or musculoskeletal disorder, were voluntarily involved in the study and gave their informed, written consent to participate. The device used to deliver vibration to the subjects was a vibrating platform. Vibrations impressed by the platform were exclusively vertical; platform displacement was sinusoidal, with an intensity (peak-to-peak displacement) set to 1.2 mm and a frequency ranging from 10 to 80 Hz. All subjects familiarized themselves with the device and the proper positioning. Two different postures were explored in this study: position 1, half squat; position 2, standing on toes with heels raised.
SEMG signals from the Rectus Femoris (RF), Vastus Lateralis (VL) and Vastus Medialis (VM) were recorded. SEMG signals were amplified using a multi-channel, isolated biomedical signal amplifier. The gain was set to 1000 V/V and a band-pass filter (−3 dB frequencies 10–500 Hz) was applied; no notch filters were used to suppress line interference. Tiny and lightweight (less than 10 g) three-axial MEMS accelerometers (Freescale Semiconductor) were used to measure accelerations on the patient's skin, at the EMG electrode sites. Acceleration signals provided information on the oscillation of the individuals' RF, Biceps Femoris (BF) and Gastrocnemius Lateralis (GL) muscle bellies; they were pre-processed to exclude the influence of gravity. As demonstrated by our results, vibrations generate peculiar, non-negligible motion artifacts on skin electrodes. Artifact amplitude is generally unpredictable; it appeared in all the quadriceps muscles analysed, but in different amounts. Artifact harmonics extend throughout the EMG spectrum, making classic high-pass filters ineffective; however, their contribution was easy to filter out from the raw EMG signal with a series of sharp notch filters (1.5 Hz wide) centred at the vibration frequency and its higher harmonics. However, the use of these simple filters prevents the detection of any EMG power variation within the filtered bands. Moreover, our experience suggests that reducing the motion artifact by using particular electrodes and by accurately preparing the subject's skin is not easily viable; even though some small improvements were obtained, it was not possible to substantially decrease the artifact. In any case, removing those artifacts leads to some loss of true EMG signal. Nevertheless, our preliminary results suggest that the use of notch filters at the vibration frequency and its harmonics is suitable for motion artifact filtering.
In RF SEMG recordings during vibratory stimulation, only a small EMG power increment due to synchronous electromyographic activity of the muscle should be contained in the filtered bands. It is therefore better to remove the artifact, which, in our experience, was found to account for more than 40% of the total signal power. In summary, many variables have to be taken into account: in addition to the amplitude, frequency and duration of the vibration treatment, other fundamental variables were found to be the subject's anatomy, individual physiological condition and positioning on the platform. Studies on WBV treatments that include surface EMG analysis to assess muscular activity during vibratory stimulation should take into account the presence of motion artifacts. Appropriate filtering of artifacts, to reveal the actual effect on muscle contraction elicited by the vibration stimulus, is mandatory. As a result of our preliminary study, simple multi-band notch filtering may help to reduce the randomness of the results. The muscle tuning hypothesis seemed to be confirmed. Our results suggest that the effects of WBV are linked to the actual muscle motion (displacement): the greater the muscle belly displacement, the higher the muscle activity. The maximum muscle activity was found in correspondence with the local mechanical resonance, suggesting a more effective stimulation at the specific resonance frequency of the system. Under the hypothesis that muscle activation is proportional to muscle displacement, treatment optimization could be obtained by simply monitoring local acceleration (resonance). However, our study revealed only short-term effects of the vibratory stimulus; prolonged studies should be designed in order to assess the long-term persistence of these results.
Since the local stimulus depends on the kinematic chain involved, WBV muscle stimulation has to take into account the transmissibility of the stimulus along the body segments, in order to ensure that the vibratory stimulation effectively reaches the target muscle. The combination of local resonance and muscle response should also be investigated further, to prevent hazards to individuals undergoing WBV treatments.
Abstract:
The research activity described in this thesis focuses mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components, and on the study of dynamic simulation techniques applied to integrated building design in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on second-law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then presented. For hydrogen, carbon dioxide, and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented.
Data obtained by means of in situ thermal response tests are reported and evaluated with a finite-element simulation method, implemented through the software package COMSOL Multiphysics. The simulation method allows an accurate determination of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. Moving from the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a complete plant design process for a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to cover 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. This chapter describes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS to evaluate its energy requirements; 2) modelling of the ground-coupled heat pumps by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchangers by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy losses during the operation of the plants.
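The thesis evaluates the thermal response test data with a finite-element model; as a simpler point of reference, the classical infinite line-source approximation estimates the effective ground conductivity directly from the slope of the mean fluid temperature versus the logarithm of time. The sketch below, with invented numbers, is this textbook estimate and not the author's COMSOL procedure.

```python
import numpy as np

def ground_conductivity_line_source(time_s, t_fluid, q_watt, depth_m):
    """Infinite line-source approximation for a thermal response test:
    at late times T(t) ~ (q / (4*pi*lambda*H)) * ln(t) + const, so the
    effective conductivity follows from the slope of T versus ln(t)."""
    slope, _ = np.polyfit(np.log(time_s), t_fluid, 1)
    return q_watt / (4.0 * np.pi * depth_m * slope)

# Synthetic test data generated with a known conductivity of 2.0 W/(m K).
lam, q, depth = 2.0, 6000.0, 100.0           # W/(m K), W, m
t = np.linspace(20 * 3600, 100 * 3600, 200)  # late-time window, seconds
temps = 12.0 + q / (4 * np.pi * lam * depth) * np.log(t)
print(ground_conductivity_line_source(t, temps, q, depth))  # ≈ 2.0
```

In practice the grout properties and borehole resistance make the finite-element evaluation used in the thesis considerably more informative than this single-parameter fit.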
Abstract:
The research activities described in the present thesis were oriented to the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high-definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities focused in particular on modifications of the emissive insert with respect to the standard electrode configuration, which comprises a press-fit hafnium insert in a copper body holder, in order to improve its durability. Based on a thorough analysis of both the scientific and patent literature, different solutions were proposed and tested. First, the behaviour of Hf cathodes operating at high current levels (250 A) in an oxidizing atmosphere was experimentally investigated, optimizing the initial shape of the electrode emissive surface with respect to the expected service life. Moreover, the microstructural modifications of the Hf insert in PAC electrodes were experimentally investigated during the first cycles, in order to understand the phenomena occurring on and under the Hf emissive surface and involved in the electrode erosion process. Thereafter, the research activity focused on producing, characterizing and testing prototypes of composite inserts, combining powders of high-thermal-conductivity (Cu, Ag) and high-thermionic-emissivity (Hf, Zr) materials. The complexity of the thermal plasma torch environment required an integrated approach also involving physical modelling. Accordingly, a detailed line-by-line method was developed to compute the net emission coefficient of Ar plasmas at temperatures ranging from 3000 K to 25000 K and pressures ranging from 50 kPa to 200 kPa, for optically thin and partially self-absorbed plasmas.
Finally, prototype electrodes were studied and realized for a newly developed plasma source, based on the plasma needle concept and devoted to the generation of atmospheric-pressure non-thermal plasmas for biomedical applications.
Abstract:
DI Diesel engines are widely used in both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system, focused on reducing soot and NOx simultaneously while maintaining a reasonable fuel economy. In recent years, genetic algorithms (GAs) and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. To make the optimization process less time-consuming, CFD simulations can more conveniently be used to generate a training set for an artificial neural network which, once correctly trained, can forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
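The virtual-optimization loop can be illustrated schematically: a handful of expensive simulations train a cheap surrogate, and the genetic algorithm then searches the surrogate instead of the CFD code. In this sketch a quadratic least-squares fit stands in for the artificial neural network, a simple single-objective GA stands in for the multi-objective one, and the two design parameters and their response are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive CFD run: an emissions-like response of two
# normalized design parameters (e.g. injection timing, EGR rate).
def cfd_model(x):
    return (x[..., 0] - 0.3) ** 2 + (x[..., 1] - 0.7) ** 2

# 1) Build a cheap surrogate from a small training set of "CFD" samples.
X = rng.random((40, 2))
y = cfd_model(X)
features = lambda x: np.stack(
    [np.ones(x.shape[0]), x[:, 0], x[:, 1],
     x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]], axis=1)
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda x: features(x) @ coef  # the "virtual" engine model

# 2) Genetic algorithm searching the surrogate instead of the CFD code.
pop = rng.random((60, 2))
for gen in range(80):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[:20]]            # truncation selection
    mates = parents[rng.integers(0, 20, size=(60, 2))]
    alpha = rng.random((60, 1))
    pop = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]  # blend crossover
    pop += rng.normal(0, 0.02, pop.shape)                  # mutation
    pop = np.clip(pop, 0.0, 1.0)

best = pop[np.argmin(surrogate(pop))]
print(best)  # close to the optimum [0.3, 0.7] of the underlying model
```

Only the 40 training samples cost a "CFD run"; the thousands of GA evaluations hit the surrogate, which is the entire point of the virtual optimization.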
Abstract:
This study focuses on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized; then results for the characterization of the plasma source and for the investigation of the nanoparticle synthesis phenomenon are presented, aiming at highlighting fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy-probe measurements to validate the temperature field predicted by the model and used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed; by employing models to describe particle trajectories and thermal histories, adapted from ones originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized solid Si precursor in a laboratory-scale ICP system is investigated.
Finally, a discussion of the role of thermo-fluid dynamic fields in nanoparticle formation is presented, as well as a study of the effect of the reaction chamber geometry on the characteristics of the produced nanoparticles and on process yield.
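The calorimetric energy balance mentioned above reduces, in its simplest form, to subtracting the power carried away by the cooling water from the input power; the numbers below are illustrative, not measured values from the project.

```python
def torch_thermal_efficiency(p_input_w, m_dot_kg_s, t_in_c, t_out_c, cp=4186.0):
    """Calorimetric energy balance of an ICP torch: the power removed by
    the cooling water is m_dot * cp * (T_out - T_in); the fraction of the
    input power delivered to the plasma is what remains."""
    p_cooling = m_dot_kg_s * cp * (t_out_c - t_in_c)
    return (p_input_w - p_cooling) / p_input_w

# Illustrative figures: 50 kW input power, 0.5 kg/s of cooling water
# heated by 10 °C -> about 58% of the power reaches the plasma.
print(torch_thermal_efficiency(50e3, 0.5, 20.0, 30.0))
```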
Abstract:
The research activity focused on the investigation of the correlation between the degree of purity, in terms of chemical dopants, of organic small-molecule semiconductors and their electrical and optoelectronic performance once introduced as the active material in devices. The first step of the work addressed the variation in electrical performance of two commercial organic semiconductors after processing by thermal sublimation. In particular, the p-type 2,2′′′-dihexyl-2,2′:5′,2′′:5′′,2′′′-quaterthiophene (DH4T) semiconductor and the n-type 2,2′′′-perfluoro-dihexyl-2,2′:5′,2′′:5′′,2′′′-quaterthiophene (DFH4T) semiconductor underwent several sublimation cycles, with a consequent improvement of the electrical performance in terms of charge mobility and threshold voltage, highlighting the benefits brought by this treatment, through the removal of residual impurities, to the electrical properties of these semiconductors in OFET devices. The second step consisted in a metal-free synthesis of DH4T, which was successfully prepared without organometallic reagents or catalysts in collaboration with Dr. Manuela Melucci from the ISOF-CNR Institute in Bologna. The experimental work then demonstrated that such compounds are responsible for the electrical degradation: the semiconductor obtained by the metal-free route was intentionally doped with tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4) and tributyltin chloride (Bu3SnCl), as well as with an organic impurity, 5-hexyl-2,2':5',2''-terthiophene (HexT3), at different concentrations (1, 5 and 10% w/w). After completing the entire evaluation loop, from fabricating OFET devices by vacuum sublimation with the intentionally doped batches to the final electrical characterization in inert-atmosphere conditions, commercial DH4T, metal-free DH4T and the intentionally doped DH4T were systematically compared.
Indeed, the fabrication of OFETs based on doped DH4T clearly pointed out that vacuum sublimation is not only an efficient purification method for crude semiconductors, but also a reliable way to fabricate high-performing devices.
Abstract:
Combinatorial optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are more and more in demand, in order to quickly find good solutions to complex strategic problems. Resource optimization is nowadays a fundamental building block of successful projects. From the theoretical point of view, combinatorial optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development cannot keep up with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms, designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some solution techniques (such as dynamic programming), remarkable computational benefits can be obtained, lowering execution times by more than an order of magnitude and allowing instances of previously intractable size to be addressed. We approached four notable combinatorial optimization problems: a packing problem, the vehicle routing problem, the single-source shortest path problem and a network design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (guillotine cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claims by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to 40x.
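The kind of inherent parallelism referred to above can be illustrated on the 0/1 knapsack recursion, a building block of packing problems: for each item, the dynamic-programming update over all capacities is data-parallel. The numpy vectorization below is a stand-in for the GPU/multicore kernels developed in the work, not the actual guillotine-cut or VRP codes.

```python
import numpy as np

def knapsack(capacity, items):
    """0/1 knapsack by dynamic programming. For each item, the update of
    dp[c] over all capacities c is independent across c: here it is done
    as one vectorized operation, the same structure a parallel kernel
    would exploit (one thread per capacity)."""
    dp = np.zeros(capacity + 1)
    for weight, value in items:
        if weight <= capacity:
            # all capacities >= weight are updated simultaneously;
            # the right-hand side is evaluated before assignment, so
            # each item is still used at most once (0/1 semantics)
            dp[weight:] = np.maximum(dp[weight:],
                                     dp[:capacity + 1 - weight] + value)
    return int(dp[capacity])

items = [(2, 3), (3, 4), (4, 5), (5, 6)]  # (weight, value) pairs
print(knapsack(5, items))  # → 7 (take the items of weight 2 and 3)
```

The serial loop over items remains, but the inner loop over capacities, which dominates the running time, collapses into a single parallel update; this is the pattern that yields the order-of-magnitude speed-ups reported.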
Abstract:
The aim of this thesis was to establish a method for repeated transfection of in vitro transcribed RNA (IVT-RNA), leading to sustained protein expression lasting for days or even weeks. Once transfected, cells recognize IVT-RNA as "non-self" and initiate defense pathways, leading to an upregulated interferon (IFN) response and stalled translation. In this work, Protein Kinase R (PKR) was identified as the main effector molecule mediating this cellular response. We assessed four strategies to inhibit PKR and the IFN response. A small-molecule PKR inhibitor enhanced protein expression and hampered the induction of IFN transcripts, but had to be excluded due to cytotoxicity. An siRNA-mediated PKR knockdown and the overexpression of a kinase-inactive PKR mutant elevated protein expression, but the down-regulation of the IFN response was insufficient. The co-transfer of viral inhibitors of PKR and of the IFN response was most successful: co-transfection of E3, K3 and B18R enabled repeated IVT-RNA-based transfection of human fibroblasts. Thus, the developed protocol allows continuous expression of IVT-RNA-encoded proteins, which could be the basis for the generation of induced pluripotent stem (iPS) cells for several therapeutic applications in regenerative medicine or drug research.
Abstract:
The ability of the PM3 semiempirical quantum mechanical method to reproduce hydrogen bonding in nucleotide base pairs was assessed. Results of PM3 calculations on the nucleotides 2′-deoxyadenosine 5′-monophosphate (pdA), 2′-deoxyguanosine 5′-monophosphate (pdG), 2′-deoxycytidine 5′-monophosphate (pdC), and 2′-deoxythymidine 5′-monophosphate (pdT) and the base pairs pdA–pdT, pdG–pdC, and pdG(syn)–pdC are presented and discussed. The PM3 method is the first of the parameterized NDDO quantum mechanical models with any ability to reproduce hydrogen bonding between nucleotide base pairs. Intermolecular hydrogen bond lengths between nucleotides displaying Watson–Crick base pairing are 0.1–0.2 Å shorter than experimental results. Nucleotide bond distances, bond angles, and torsion angles about the glycosyl bond (χ), the C4′–C5′ bond (γ), and the C5′–O5′ bond (β) agree with experimental results. There are many possible conformations of nucleotides. PM3 calculations reveal that many of the most stable conformations are stabilized by intramolecular C–H···O hydrogen bonds. These interactions disrupt the usual sugar puckering. The stacking interactions of a dT–pdA duplex are examined at different levels of gradient optimization. The intramolecular hydrogen bonds found in the nucleotide base pairs disappear in the duplex, as a result of the additional constraints on the phosphate group when it is part of a DNA backbone. Sugar puckering is reproduced by the PM3 method for the four bases in the dT–pdA duplex. PM3 underestimates the attractive stacking interactions of base pairs in a B-DNA helical conformation. The performance of the PM3 method as implemented in SPARTAN is contrasted with that implemented in MOPAC. At present, accurate ab initio calculations are too time-consuming to be of practical use, and molecular mechanics methods cannot be used to determine quantum mechanical properties such as reaction paths, transition-state structures, and activation energies.
The PM3 method should therefore be used with extreme caution for the examination of small DNA systems. Future parameterizations of semiempirical methods should incorporate base stacking interactions into the parameterization data set to enhance the predictive ability of these methods.
Abstract:
A recombinant metal-dependent phosphatidylinositol-specific phospholipase C (PI-PLC) from Streptomyces antibioticus has been crystallized by the hanging-drop method with and without heavy metals. The native crystals belonged to the orthorhombic space group P222, with unit-cell parameters a = 41.26, b = 51.86, c = 154.78 Å. The X-ray diffraction results showed significant differences in the crystal quality of samples soaked with heavy atoms. Additionally, drop pinning, which increases the surface area of the drops, was also used to improve crystal growth and quality. The combination of heavy-metal soaks and drop pinning was found to be critical for producing high-quality crystals that diffracted to 1.23 Å resolution.
Abstract:
Development of novel implants in orthopaedic trauma surgery is based on limited datasets from cadaver trials or artificial bone models. A method has been developed whereby implants can be designed in an evidence-based manner, founded on a large anatomical database consisting of more than 2,000 bone datasets extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on this anatomical database. 48 Caucasian and Asian bone models (left and right) from the database were used for the preliminary optimization process and validation of the fibula plate. The implant was constructed to fit bilaterally in a lateral position on the fibula. A biomechanical comparison of the designed implant with the current gold standard in the treatment of distal fibular fractures (locking 1/3 tubular plate) was then conducted. Finally, a clinical surveillance study was performed to evaluate the grade of implant fit achieved. The results showed that with a virtual anatomical database it was possible to design a fibula plate with an optimized fit for a large proportion of the population. Biomechanical testing showed the novel fibula plate to be superior to 1/3 tubular plates in 4-point bending tests. The clinical application showed a very high degree of primary implant fit; only in a small minority of cases was further intra-operative implant bending necessary. Therefore, the goal of developing an implant for the treatment of distal fibular fractures based on the evidence of a large anatomical database could be attained. Biomechanical testing showed good results regarding stability, and the clinical application confirmed the high grade of anatomical fit.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometries and of polymer material properties, the design of complex dies by analytical methods is difficult; for complex dies, iterative methods must be used, and an automated iterative method for die optimization is desirable. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in a commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust-region method were employed for the automated optimization of die geometries. For the trust-region method, a discrete derivative and a BFGS Hessian approximation were used.
To deal with the noise in the objective function, the trust-region method was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of the velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first applies it only to designs which exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
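The two penalty variants can be sketched on a toy one-parameter die model. Both the die responses and the constants below are invented for illustration; only the exponential form of the penalty and the simplex (Nelder-Mead) search follow the text.

```python
import numpy as np
from scipy.optimize import minimize

P_LIMIT = 100.0  # extruder pressure capability (illustrative units)

# Toy stand-ins for the CFD-derived responses: opening the channel
# (larger x) evens out the exit velocity but raises the die pressure.
def velocity_nonuniformity(x):
    return (x[0] - 3.0) ** 2 + 0.5

def die_pressure(x):
    return 40.0 * x[0]

def objective(x, two_sided=False):
    """Velocity-uniformity objective plus an exponentially growing
    pressure penalty. Variant 1 penalizes only designs over the limit;
    variant 2 applies the same exponential below the limit as well."""
    excess = die_pressure(x) - P_LIMIT
    if two_sided:
        penalty = np.exp(0.2 * excess)              # nonzero below the limit too
    else:
        penalty = np.exp(0.2 * excess) - 1.0 if excess > 0 else 0.0
    return velocity_nonuniformity(x) + penalty

# Simplex search: the unconstrained optimum (x = 3, pressure 120) is
# pushed back by the penalty until the pressure limit is respected.
res = minimize(objective, x0=np.array([1.0]), method="Nelder-Mead")
print(res.x, die_pressure(res.x))
```

The one-sided variant leaves feasible designs untouched but creates a kink at the limit; the two-sided variant gives the simplex a smooth slope to follow through the boundary, at the cost of slightly biasing feasible designs away from it.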
Abstract:
This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection while the parallel plate approximation model is incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems switch by minimizing bounce while maintaining robustness to fabrication variability. 
Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
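The single degree-of-freedom picture behind the pull-in behaviour described above is the standard spring-suspended parallel-plate model; the sketch below, with illustrative RF-MEMS-scale numbers, computes the classic pull-in voltage reached when the plate has travelled one third of the initial gap.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_force(v, gap, area):
    """Parallel-plate approximation: F = eps0 * A * V^2 / (2 * g^2)."""
    return EPS0 * area * v ** 2 / (2.0 * gap ** 2)

def pull_in_voltage(k, g0, area):
    """For a plate suspended by a linear spring k above a gap g0, the
    equilibrium becomes unstable at gap 2*g0/3; the voltage there is
    V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A))."""
    return np.sqrt(8.0 * k * g0 ** 3 / (27.0 * EPS0 * area))

# Illustrative numbers: k = 10 N/m, 3 um gap, 100 um x 100 um plate.
k, g0, area = 10.0, 3e-6, (100e-6) ** 2
v_pi = pull_in_voltage(k, g0, area)
# At pull-in the electrostatic force balances the spring force k*g0/3.
print(v_pi, electrostatic_force(v_pi, 2 * g0 / 3, area), k * g0 / 3)
```

Beyond this voltage no static equilibrium exists and the plate snaps in, which is why the closing transient (and its impact velocity) must be shaped by the actuation waveform rather than by a static bias.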