915 results for Two-level scheduling and optimization
Abstract:
Maps depicting spatial pattern in the stability of summer greenness could advance understanding of how forest ecosystems will respond to global changes such as a longer growing season. Declining summer greenness, or "greendown", is spectrally related to declining near-infrared (NIR) reflectance and is observed in most remote sensing time series to begin shortly after peak greenness at the end of spring and extend until the beginning of leaf coloration in autumn. Understanding spatial patterns in the strength of greendown has recently become possible with the advancement of Landsat phenology products, which show that greendown patterns vary at scales appropriate for linking them to proposed environmental forcing factors. This study tested two non-mutually exclusive hypotheses for how leaf measurements and environmental factors correlate with greendown and decreasing NIR reflectance across sites. First, at the landscape scale, we used linear regression to test the effects of maximum greenness, elevation, slope, aspect, solar irradiance and canopy rugosity on greendown. Second, we used leaf chemical traits and reflectance observations to test the effect of nitrogen availability and intrinsic water use efficiency on leaf-level greendown and on landscape-level greendown measured from Landsat. The study was conducted using Quercus alba canopies across 21 sites of an eastern deciduous forest in North America between June and August 2014. Our linear model explained greendown variance with an R² = 0.47, with maximum greenness as the greatest model effect. Subsequent models excluding one model effect at a time revealed that elevation and aspect were the two topographic factors explaining the greatest amount of greendown variance. Regression results also demonstrated important interactions among all three variables, with the strongest interaction showing that aspect had greater influence on greendown at sites with steeper slopes. Leaf-level reflectance was correlated with foliar δ¹³C (a proxy for intrinsic water use efficiency), but foliar δ¹³C did not translate into correlations with landscape-level variation in greendown from Landsat. We therefore conclude that Landsat greendown is primarily indicative of landscape position, with a small effect of canopy structure and no measurable effect of leaf reflectance. With this understanding of Landsat greendown we can better explain the effects of landscape factors on vegetation reflectance, and perhaps on phenology, which would be useful for studying phenology in the context of global climate change.
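The landscape-scale regression described in this abstract could be outlined as follows; this is only an illustrative sketch, not the study's analysis, and the file name, column names, and the linear treatment of aspect are assumptions.

```python
# Hypothetical sketch of a landscape-scale greendown regression
# (file and column names are illustrative; aspect is treated linearly
# here for simplicity, although it is a circular variable).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("greendown_sites.csv")  # assumed columns used below

# Main effects plus a slope:aspect interaction, mirroring the reported
# finding that aspect mattered more at sites with steeper slopes.
model = smf.ols(
    "greendown ~ max_greenness + elevation + slope * aspect"
    " + solar_irradiance + canopy_rugosity",
    data=df,
).fit()

print(model.summary())   # coefficients, interaction terms, overall fit
print(model.rsquared)    # compare against the reported R^2 of 0.47
```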
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, they normally build a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others, can identify good parts, and is aware of the solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
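As a rough illustration of the BOA-style loop outlined above (estimate conditional probabilities from promising rule strings by counting, sample new strings, replace by fitness), here is a minimal sketch; the chain-structured network, parameter values, and placeholder fitness function are assumptions, not the cited implementation.

```python
# Minimal sketch of a BOA-style "learn by counting, then sample" loop.
# A simple chain structure is assumed (rule at step t depends only on the
# rule at step t-1, with the first step sampled uniformly for simplicity).
import random
from collections import defaultdict

N_RULES, N_STEPS, POP, ELITE = 4, 10, 50, 20

def fitness(rule_string):
    # Placeholder: a real scheduler would decode the rule string into a
    # schedule and score it.
    return -sum(rule_string)

def learn_counts(elite):
    """Multinomial 'counting': estimate P(rule_t | rule_{t-1})."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in elite:
        for t in range(1, N_STEPS):
            counts[(t, s[t - 1])][s[t]] += 1
    return counts

def sample(counts):
    """Generate a new rule string from the learned conditional tables."""
    s = [random.randrange(N_RULES)]
    for t in range(1, N_STEPS):
        c = counts.get((t, s[-1]))
        if c:
            rules, w = zip(*c.items())
            s.append(random.choices(rules, weights=w)[0])
        else:                          # unseen context: fall back to uniform
            s.append(random.randrange(N_RULES))
    return s

pop = [[random.randrange(N_RULES) for _ in range(N_STEPS)] for _ in range(POP)]
for _ in range(30):                                      # generations
    elite = sorted(pop, key=fitness, reverse=True)[:ELITE]
    counts = learn_counts(elite)
    pop = elite + [sample(counts) for _ in range(POP - ELITE)]
print(max(pop, key=fitness))
```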
In the LCS approach, each rule has a strength that indicates its current usefulness in the system, and this strength is constantly assessed [4]. To implement more sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength; during schedule construction, rules are then selected using the roulette-wheel strategy. The reinforcement step strengthens the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.
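A minimal sketch of the LCS-style strength bookkeeping described above (constant initialization, roulette-wheel rule selection during construction, reinforcement of the rules used in the previous solution); parameter values and the reward scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch of per-stage rule strengths with roulette-wheel selection and
# reinforcement of used rules. Parameters are illustrative only.
import random

N_STAGES, N_RULES = 10, 4
INITIAL_STRENGTH, REWARD = 1.0, 0.5

# One strength value per (stage, rule) pair, all initialized equally.
strength = [[INITIAL_STRENGTH] * N_RULES for _ in range(N_STAGES)]

def roulette_pick(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

def build_solution():
    """Construct a rule string stage by stage via roulette-wheel selection."""
    return [roulette_pick(strength[t]) for t in range(N_STAGES)]

def reinforce(solution):
    """Strengthen only the rules actually used in the previous solution."""
    for t, rule in enumerate(solution):
        strength[t][rule] += REWARD

solution = build_solution()
reinforce(solution)
print(solution)
```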
Abstract:
Two-phase flow heat exchangers have been shown to have very high efficiencies, but the lack of dependable models and data precludes their use in many cases. Herein, a new method for the measurement of local convective heat transfer coefficients from the outside of a heat-transferring wall has been developed, which yields accurate local measurements of heat flux during two-phase flow. This novel technique uses a chevron-pattern corrugated plate heat exchanger consisting of a specially machined calcium fluoride plate and the refrigerant HFE7100, with heat fluxes up to 1 W cm⁻² and mass fluxes up to 300 kg m⁻² s⁻¹. Because calcium fluoride is largely transparent to infrared radiation, the surface temperature of the plate heat exchanger (PHE) wall in direct contact with the liquid is measured with a mid-wavelength (3.0-5.1 µm) infrared camera. The objective of this study is to develop, validate, and use a unique infrared thermometry method to quantify the heat transfer characteristics of flow boiling within different plate heat exchanger geometries. This new method allows measurements with high spatial and temporal resolution. Furthermore, quasi-local pressure measurements enable us to characterize the performance of each geometry. Validation of this technique will be demonstrated by comparison with accepted single- and two-phase data. The results can be used to develop new heat transfer correlations and optimization tools for heat exchanger designers. The scientific contribution of this thesis is to give PHE developers further tools to identify the heat transfer and pressure drop performance of any corrugated plate pattern directly, without the need to account for the typical error sources introduced by inlet and outlet distribution systems. Furthermore, designers will now gain information on the local heat transfer distribution within one plate heat exchanger cell, which will help them choose the correct corrugation geometry for a given task.
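As a simplified illustration of how local boiling heat transfer coefficients might be extracted from such an infrared wall-temperature field, one could divide the imposed heat flux by the local wall superheat; the array, file name, and property values below are assumptions, not measurements from this work.

```python
# Hypothetical post-processing sketch: local heat transfer coefficient
# h = q'' / (T_wall - T_sat) from an infrared wall-temperature map.
import numpy as np

q_flux = 1.0e4    # imposed heat flux, W/m^2 (i.e. 1 W/cm^2)
T_sat = 61.0      # assumed HFE7100 saturation temperature at test pressure, deg C

T_wall = np.load("ir_wall_temperature.npy")   # 2-D array from the IR camera, deg C

h_local = q_flux / (T_wall - T_sat)           # W/(m^2 K), element-wise

print("mean h =", h_local.mean(), "W/(m^2 K)")
print("spatial std =", h_local.std(), "W/(m^2 K)")
```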
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information. Content recommendation based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impact of the wireless environment, especially capacity constraints and the changing channel. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience given the capacity constraints of the wireless networks, by deciding what content to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions for both system-recommended content ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder because there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit on dedicated spectrum resources and thus do not interfere; and 'in-band', in which they share the spectrum and must mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocations individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions. Additionally, if the social network applications can provide predictions of how social content disseminates, the wireless network can schedule transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination caused by wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
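A toy sketch of the two-phase idea described above (per-base-station local planning followed by a consolidation pass that removes redundant transmissions); the data structures and the greedy consolidation rule are illustrative assumptions, not the dissertation's algorithm.

```python
# Phase 1: each base station plans deliveries independently
# (content_id -> set of target users), e.g. from its own local optimization.
# Phase 2: a consolidation pass ensures each (content, user) pair is served
# by only one base station, trimming overlapping multicast groups.
from collections import defaultdict

local_plans = {
    "bs1": {"video_a": {"u1", "u2"}, "clip_b": {"u3"}},
    "bs2": {"video_a": {"u2", "u4"}, "clip_c": {"u5"}},
}

served = defaultdict(set)          # content_id -> users already covered
consolidated = {}
for bs, plan in local_plans.items():
    consolidated[bs] = {}
    for content, users in plan.items():
        remaining = users - served[content]
        if remaining:
            consolidated[bs][content] = remaining
            served[content] |= remaining

print(consolidated)
# e.g. bs2 no longer retransmits video_a to u2, which bs1 already covers.
```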
Abstract:
Background: Agro-wastes were used for the production of a fibrinolytic enzyme by solid-state fermentation. The process parameters were optimized using a statistical approach to enhance the production of the fibrinolytic enzyme from Bacillus halodurans IND18. The fibrinolytic enzyme was purified and its properties were studied. Results: A two-level full factorial design was used to screen the significant factors. Moisture, pH, and peptone significantly affected enzyme production, and these three factors were selected for further optimization using a central composite design. The optimum medium for fibrinolytic enzyme production was wheat bran medium containing 1% peptone and 80% moisture at pH 8.32. Under these optimized conditions, the production of the fibrinolytic enzyme was 6851 U/g. The enzyme was purified 3.6-fold, with a specific activity of 1275 U/mg. Its molecular mass, determined by sodium dodecyl sulphate-polyacrylamide gel electrophoresis, was 29 kDa. The fibrinolytic enzyme had an optimal pH of 9.0 and was stable over a pH range of 8.0 to 10.0. The optimal temperature was 60°C, and the enzyme was stable up to 50°C. This enzyme activated plasminogen and also degraded the fibrin network of blood clots, suggesting its potential as an effective thrombolytic agent. Conclusions: Wheat bran was found to be an effective substrate for the production of the fibrinolytic enzyme. The purified fibrinolytic enzyme degraded fibrin clots and could be useful as an effective thrombolytic agent.
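For illustration, a two-level full factorial screen of the kind described above can be set up with coded factor levels and analyzed by comparing mean responses at the high and low level of each factor; the factor subset and the response values below are placeholders, not the study's data.

```python
# Illustrative two-level (2^3) full factorial screening design with coded
# levels (-1/+1) and a simple main-effects calculation.
import itertools
import numpy as np

factors = ["moisture", "pH", "peptone"]          # screened factors (assumed subset)
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Placeholder responses (enzyme activity, U/g) for each of the 8 runs.
y = np.array([2100, 2500, 2300, 3100, 3900, 5200, 4400, 6800], dtype=float)

# Main effect of each factor = mean response at +1 minus mean at -1.
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"{name:8s} effect = {effect:+.1f} U/g")
```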
Abstract:
The objective of this study was to determine the optimal feeding level and feeding frequency for the culture of freshwater angelfish (Pterophyllum scalare). A randomized block design in a factorial scheme (3 × 2) with three feeding levels (30, 60 and 90 g/kg of body weight (BW)/day) and two feeding frequencies (1x and 2x/day) was set up in duplicate, representing 24 experimental units. Data were analyzed using two-way ANOVA and the Tukey test for comparison between means. After 84 days, results indicated that both factors influenced fish performance. No interaction between these factors was, however, observed. Increased feeding level and feeding frequency resulted in increased feed intake. The feed conversion ratio was negatively affected by feeding level, but not affected by feeding frequency. Final weights were higher when fish were fed twice daily, at levels of 60 or 90 g/kg BW/day. Specific growth rate was higher when fish received 60 or 90 g/kg BW/day, regardless of the feeding frequency. Survival was not affected by any treatment, with mean survival rates higher than 90%. It is recommended that juveniles be fed at a level of 60 g/kg BW/day with a minimum of two meals per day, to attain optimal survival, growth and feed efficiency.
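A hedged sketch of the 3 × 2 factorial analysis mentioned above (two-way ANOVA followed by Tukey's test); the data file and column names are assumptions for illustration only.

```python
# Illustrative two-way ANOVA with interaction, plus a Tukey post-hoc test,
# for a 3 x 2 factorial design (feeding level x feeding frequency).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("angelfish_growth.csv")   # assumed columns: final_weight,
                                           # feeding_level, feeding_frequency

model = smf.ols(
    "final_weight ~ C(feeding_level) * C(feeding_frequency)", data=df
).fit()
print(anova_lm(model, typ=2))              # main effects and interaction

# Post-hoc comparison of feeding levels, as in the reported Tukey test.
print(pairwise_tukeyhsd(df["final_weight"], df["feeding_level"]))
```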
Abstract:
Dentin adhesion procedures present limitations, especially regarding the long-term stability of the formed hybrid layer. Alternative procedures have been studied in order to improve adhesion to dentin. OBJECTIVE: The aim of this study was to evaluate in vitro the influence of deproteinization or dentin tubular occlusion, as well as the combination of both techniques, on the microtensile bond strength (µTBS) and marginal microleakage of composite resin restorations. MATERIAL AND METHODS: Extracted erupted human third molars were randomly divided into 4 groups. Dentin surfaces were treated with one of the following procedures: (A) 35% phosphoric acid gel (PA) + adhesive system (AS); (B) PA + 10% NaOCl + AS; (C) PA + oxalate + AS; and (D) PA + oxalate + 10% NaOCl + AS. Bond strength data were analyzed statistically by two-way ANOVA and Tukey's test. The microleakage scores were analyzed using the Kruskal-Wallis and Mann-Whitney non-parametric tests. The significance level was set at 0.05 for all analyses. RESULTS: µTBS values were statistically lower for groups D and B, ranking the groups as A>C>B>D. The use of oxalic acid reduced microleakage along the tooth/restoration interface, the reduction being significant when oxalate was used alone. On the other hand, the use of 10% NaOCl, alone or in combination with oxalic acid, resulted in increased microleakage. CONCLUSIONS: Dentin deproteinization with 10% NaOCl, alone or in combination with oxalate, significantly compromised both the adhesive bond strength and the marginal sealing at the interface. Tubular occlusion prior to adhesive system application seems to be a useful technique to reduce marginal microleakage.
Abstract:
OBJECTIVE: Removable partial dentures (RPDs) require specific hygiene care, and the combination of brushing and chemical cleansing is the most commonly recommended approach to control biofilm formation. However, the effect of cleansers on RPD metallic components has not been evaluated. The aim of this study was to evaluate in vitro the effect of different denture cleansers on the weight and ion release of RPDs. MATERIAL AND METHODS: Five specimens (a 12×3 mm metallic disc positioned in a 38×18×4 mm mould filled with resin), 7 cleansing agents [Periogard (PE), Cepacol (CE), Corega Tabs (CT), Medical Interporous (MI), Polident (PO), 0.05% sodium hypochlorite (NaOCl), and distilled water (DW) (control)] and 2 cobalt-chromium alloys [DeguDent (DD) and VeraPDI (VPDI)] were used for each experimental condition. One hundred and eighty immersions were performed, and weight was measured with a high-precision analytical balance. Data were recorded before and after the immersions. Ion release was analyzed using inductively coupled plasma mass spectrometry. Data were analyzed by two-way ANOVA and the Tukey HSD post hoc test at a 5% significance level. RESULTS: Statistical analysis showed that CT and MI caused the greatest weight loss, with a larger change in the VPDI alloy than in DD. The solutions that caused the most ion release were NaOCl and MI. CONCLUSIONS: It may be concluded that 0.05% NaOCl and Medical Interporous tablets are not suitable as auxiliary chemical solutions for RPD care.
Abstract:
Background: HBV-HIV co-infection is associated with increased liver-related morbidity and mortality. However, little is known about the natural history of chronic hepatitis B in HIV-infected individuals under highly active antiretroviral therapy (HAART) receiving at least one of the two drugs that also affect HBV, tenofovir (TDF) and lamivudine (LAM). Information about HBeAg status and HBV viremia in HIV/HBV co-infected patients is scarce. The objective of this study was to search for clinical and virological variables associated with HBeAg status and HBV viremia in patients of an HIV/HBV co-infected cohort. Methods: A retrospective cross-sectional study was performed of HBsAg-positive, HIV-infected patients treated between 1994 and 2007 in two AIDS outpatient clinics located in the Sao Paulo metropolitan area, Brazil. The baseline data were age, sex, CD4+ T cell count, ALT level, HIV and HBV viral load, HBV genotype, and duration of antiretroviral use. The variables associated with HBeAg status and HBV viremia were assessed using logistic regression. Results: A total of 86 HBsAg-positive patients were included in the study. Of these, 48 (56%) were using combination therapy that included LAM and TDF, 31 (36%) were using LAM monotherapy, and 7 patients had no previous use of either drug. Duration of use of TDF and LAM varied from 4 to 21 and 7 to 144 months, respectively. A total of 42 (48.9%) patients were HBeAg positive and 44 (51.1%) were HBeAg negative. The multivariate analysis revealed that use of TDF for longer than 12 months was associated with undetectable HBV DNA viral load (serum HBV DNA level < 60 IU/ml) (p = 0.047). HBeAg positivity was associated with HBV DNA > 60 IU/ml (p = 0.001) and ALT levels above the normal range (p = 0.038). Conclusion: Prolonged use of TDF-containing HAART is associated with undetectable HBV DNA viral load. HBeAg positivity is associated with HBV viremia and increased ALT levels.
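For illustration, the logistic-regression assessment described above might look like the following sketch; the data file, column names, and covariate set are assumptions, not the cohort's actual variables.

```python
# Illustrative logistic regression: association of clinical variables with
# undetectable HBV DNA (< 60 IU/ml), coded 0/1 in the outcome column.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hbv_hiv_cohort.csv")  # assumed columns used below

model = smf.logit(
    "undetectable_hbv_dna ~ tdf_over_12_months + hbeag_positive"
    " + cd4_count + alt_above_normal",
    data=df,
).fit()

print(model.summary())          # coefficients and p-values
print(np.exp(model.params))     # odds ratios
```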
Abstract:
Objective: In this study we evaluated the ablation rate of superficial and deep dentin irradiated with different Er:YAG laser energy levels, and observed the micromorphological aspects of the lased substrates with a scanning electron microscope (SEM). Background Data: Little is known about the effect of Er:YAG laser irradiation on different dentin depths. Materials and Methods: Sixty molar crowns were bisected, providing 120 specimens, which were randomly assigned into two groups (superficial or deep dentin) and later into five subgroups (160, 200, 260, 300, or 360 mJ). The initial mass of each specimen was obtained. After laser irradiation, the final masses were obtained and mass losses were calculated, followed by preparation of the specimens for SEM examination. Mass-loss values were subjected to two-way ANOVA and Fisher's least significant difference multiple-comparison tests (p < 0.05). Results: There was no difference between superficial and deep dentin. A significant and gradual increase in mass loss was observed as energy was raised, regardless of dentin depth. The 360 mJ energy level showed the highest values and was statistically significantly different from the other energy levels. The SEM images showed that deep dentin was more selectively ablated, especially the intertubular dentin, promoting tubule protrusion. At 360 mJ the micromorphological features were similar for both dentin depths. Conclusion: The ablation rate did not depend on dentin depth, and an energy level lower than 360 mJ is recommended to ablate both superficial and deep dentin effectively without causing tissue damage.
Abstract:
In this study, the one- and two-photon absorption spectra of seven azoaromatic compounds (five pseudostilbene-type and two aminoazobenzenes) were theoretically investigated using density functional theory combined with the response-function formalism. The equilibrium molecular structure of each compound was obtained at three different levels of theory: Hartree-Fock, density functional theory (DFT), and second-order Møller-Plesset perturbation theory (MP2). The effect of solvent on the equilibrium structure and the electronic transitions of the compounds was investigated using the polarizable continuum model. For one-photon absorption, the allowed π→π* transition energy was found to depend on the molecular structure and the effect of solvent, while the n→π* and π→π*(n) transition energies exhibited only a slight dependence. An inversion between the bands corresponding to the π→π* and n→π* states due to the effect of solvent was observed for the pseudostilbene-type compounds. To characterize the allowed two-photon absorption transitions of the azoaromatic compounds, the response-function formalism combined with DFT, using the hybrid B3LYP and PBE0 functionals and the long-range corrected CAM-B3LYP functional, was employed. The theoretical results support previous findings based on the three-state model, which takes into account the ground state and two electronic excited states and has already been used to describe and interpret the two-photon absorption spectra of azoaromatic compounds. The highest-energy two-photon allowed transition of the pseudostilbene-type compounds is more strongly affected (approximately 20%) by torsion of the molecular structure than the lowest allowed transition (approximately 10%). To elucidate the effect of solvent on the two-photon absorption spectra, the lowest allowed two-photon transition (dipolar transition) of each compound was analyzed using a two-state approximation and the polarizable continuum model. The results reveal that the effect of solvent drastically increases the two-photon cross section of the dipolar transition of the pseudostilbene-type compounds. In general, the features of both the one- and two-photon absorption spectra of the azoaromatic compounds are well reproduced by the theoretical calculations.
Abstract:
Twenty-two ¹⁴C datings were performed at the central sector of the Paraná coast to define the Holocene regressive barrier evolution. The barrier's Pleistocene substratum was ascribed an age between 40,400 and 30,000 yr BP, but it can also represent the penultimate sea-level highstand during marine isotope stage 5e. The Holocene barrier samples provided ages between 8542-8279 and 2987-2751 cal yr BP, and showed at least six age inversions related to age differences between in situ or short-distance-transported shells or trunk fragments, and long-distance-transported vegetal debris, wood fragments and organic matter samples. The regressive Holocene barrier age was 4402-4135 cal yr BP near the base and 2987-2751 cal yr BP near the top. Most of the vegetal remains were transported by ebb tidal currents from the estuaries to the inner shelf below wave base level during the mid-Holocene highstand; they were transported onshore by storm waves and littoral currents during the sea-level lowering after the sea-level maximum, and were deposited mainly as middle-shoreface swaley cross-stratification facies.
Abstract:
By means of continuum topology optimization, this paper discusses the influence of material gradation and layout on the overall stiffness behavior of functionally graded structures. The formulation incorporates symmetry and pattern repetition constraints, including material gradation effects at both the global and local levels. For instance, constraints associated with pattern repetition are applied by considering material gradation either on the global structure or locally over the specific pattern. By means of pattern repetition, we recover previous results in the literature that were obtained using homogenization and optimization of cellular materials.
Abstract:
Micro-tools offer significant promise in a wide range of applications such as cell manipulation, microsurgery, and micro/nanotechnology processes. Such special micro-tools consist of multi-flexible structures actuated by two or more piezoceramic devices that must generate output displacements and forces at different specified points of the domain and in different directions. The micro-tool structure acts as a mechanical transformer by amplifying and changing the direction of the piezoceramic output displacements. The design of these micro-tools involves minimization of the coupling among movements generated by the various piezoceramics. To obtain enhanced micro-tool performance, the concept of multifunctional and functionally graded materials is extended by tailoring the elastic and piezoelectric properties of the piezoceramics while simultaneously optimizing the multi-flexible structural configuration using multiphysics topology optimization. The design process considers the influence of piezoceramic property gradation as well as its polarization sign. The method is implemented considering a continuum material distribution with special interpolation of fictitious densities in the design domain. As examples, designs of a single piezoactuator, an XY nano-positioner actuated by two graded piezoceramics, and a micro-gripper actuated by three graded piezoceramics are considered. The results show that material gradation plays an important role in improving actuator performance, and may also lead to optimal displacements and coupling ratios with a reduced amount of piezoelectric material. The present examples are limited to two-dimensional models because many of the applications for such micro-tools are planar devices.