943 results for Efficiency models
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
INVESTIGATION INTO CURRENT EFFICIENCY FOR PULSE ELECTROCHEMICAL MACHINING OF NICKEL ALLOY Yu Zhang, M.S. University of Nebraska, 2010 Adviser: Kamlakar P. Rajurkar Electrochemical machining (ECM) is a nontraditional manufacturing process that can machine difficult-to-cut materials. In ECM, material is removed by controlled electrochemical dissolution of an anodic workpiece in an electrochemical cell. ECM has extensive applications in the automotive, petroleum, aerospace, textile, medical, and electronics industries. Improving current efficiency is a challenging task for any electro-physical or electrochemical machining process. Current efficiency is defined as the ratio of the observed amount of metal dissolved to the theoretical amount predicted from Faraday's law, for the same specified conditions of electrochemical equivalent, current, etc. [1]. In macro ECM, electrolyte conductivity greatly influences the current efficiency of the process. Since there is a limit to how far the conductivity of the electrolyte can be enhanced, a process innovation is needed for further improvement in current efficiency in ECM. Pulse electrochemical machining (PECM) is one such approach, in which the electrolyte conductivity is improved by electrolyte flushing during the pulse off-time. The aim of this research is to study the influence of major factors on current efficiency in a macro-scale pulse electrochemical machining process and to develop a linear regression model for predicting the current efficiency of the process. An in-house designed electrochemical cell was used for machining nickel alloy (ASTM B435) by PECM. The effects of current density, type of electrolyte, and electrolyte flow rate on current efficiency under different experimental conditions were studied. Results indicated that current efficiency depends on the electrolyte, the electrolyte flow rate, and the current density.
Linear regression models of current efficiency were compared with twenty new data points graphically and quantitatively. The models developed were close enough to the actual results to be considered reliable. In addition, an attempt was made in this work to consider factors in PECM that had not been investigated in earlier works, by simulating the process with COMSOL software. However, the results of this attempt were not substantially different from those of earlier reported studies.
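As an illustration of the kind of model described above, the following is a minimal ordinary-least-squares sketch in Python. The predictor values and efficiency figures are invented for illustration only (they are not the thesis data), and the 0/1 electrolyte indicator is a hypothetical encoding of electrolyte type.

```python
import numpy as np

# Hypothetical PECM observations (illustrative, not the thesis data):
# columns are current density (A/cm^2), electrolyte flow rate (L/min),
# and a 0/1 indicator for electrolyte type.
X = np.array([
    [10.0, 2.0, 0.0],
    [20.0, 2.0, 0.0],
    [30.0, 4.0, 0.0],
    [10.0, 2.0, 1.0],
    [20.0, 4.0, 1.0],
    [30.0, 4.0, 1.0],
])
# Observed current efficiency (fraction of the Faradaic prediction).
y = np.array([0.55, 0.62, 0.70, 0.48, 0.58, 0.66])

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(current_density, flow_rate, electrolyte):
    """Predicted current efficiency from the fitted linear model."""
    return coef @ np.array([1.0, current_density, flow_rate, electrolyte])
```

Categorical factors such as electrolyte type enter a linear model through dummy (0/1) coding, which is one common way to handle the "type of electrolyte" factor alongside the continuous predictors.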
Abstract:
Chimpanzees have been the traditional referential models for investigating human evolution and stone tool use by hominins. We enlarge this comparative scenario by describing normative use of hammer stones and anvils in two wild groups of bearded capuchin monkeys (Cebus libidinosus) over one year. We found that most individuals habitually use stones and anvils to crack nuts and other encased food items. Further, we found that in adults (1) males use stone tools more frequently than females, (2) males crack high-resistance nuts more frequently than females, (3) efficiency at opening a food by percussive tool use varies according to the resistance of the encased food, (4) heavier individuals are more efficient at cracking high-resistance nuts than smaller individuals, and (5) to crack open encased foods, both sexes select hammer stones on the basis of material and weight. These findings confirm and extend previous experimental evidence concerning tool selectivity in wild capuchin monkeys (Visalberghi et al., 2009b; Fragaszy et al., 2010b). Male capuchins use tools more frequently than females, and body mass is the best predictor of efficiency, but the sexes do not differ in terms of efficiency. We argue that the contrasting pattern of sex differences in capuchins compared with chimpanzees, in which females use tools more frequently and more skillfully than males, may have arisen from the degree of sexual dimorphism in body size, which is larger in capuchins than in chimpanzees. Our findings show the importance of taking sex and body mass into account as separate variables to assess their role in tool use. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Within the nutritional context, micromineral supplementation in bird feed is often made in quantities exceeding requirements, in an attempt to ensure proper animal performance. Dose-response experiments are very common for determining optimal nutrient levels in feed and typically rely on regression models for this purpose. The usual regression analysis routine, however, relies on a priori information about the possible form of the relationship between the response variable and the dose levels. Isotonic regression is a least-squares estimation method that generates estimates preserving the ordering of the data; in the theory of isotonic regression this ordering information is essential, and it is expected to increase fitting efficiency. The objective of this work was to use an isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard lineage. We considered plateau-response models of quadratic-polynomial and linear-exponential form. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulations considering different scenarios for the parametric values. Isotonization of the data yielded an improvement in all the fitting-quality parameters evaluated. Among the models used, the logarithmic model produced parameter estimates most consistent with the values reported in the literature.
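The core of isotonic regression is the pool-adjacent-violators algorithm (PAVA), which produces the least-squares fit constrained to be non-decreasing in the given order. The following is a minimal self-contained sketch (a textbook form, not the authors' implementation):

```python
def isotonic_regression(y, weights=None):
    """Pool Adjacent Violators Algorithm (PAVA): weighted least-squares
    fit that is non-decreasing in the index order of y."""
    n = len(y)
    w = list(weights) if weights is not None else [1.0] * n
    # Each block holds [pooled mean, total weight, number of points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while the ordering is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    # Expand the pooled blocks back to one fitted value per point.
    fitted = []
    for mean, _, count in blocks:
        fitted.extend([mean] * count)
    return fitted
```

For example, `isotonic_regression([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean 2.5, yielding a monotone fit while leaving the already-ordered values untouched.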
Abstract:
[EN] Information about anaerobic energy production and mechanical efficiency over time during short-lasting maximal exercise is scarce and controversial. Bilateral leg press is an interesting muscle contraction model for estimating anaerobic energy production and mechanical efficiency during maximal exercise because it differs greatly from the models used until now. This study examined the changes in muscle metabolite concentration and power output during the first and second half of a set of 10 repetitions to failure (10RM) of bilateral leg press exercise. On two separate days, muscle biopsies were obtained from the vastus lateralis prior to and immediately after a set of 5 or a set of 10 repetitions. During the second set of 5 repetitions, mean power production decreased by 19%, and the share of ATP utilisation accounted for by phosphagen decreased from 54% to 19%, whereas ATP utilisation from anaerobic glycolysis increased from 46% to 81%. Changes in contraction time and power output were correlated with the changes in muscle phosphocreatine (PCr; r = -0.76; P < 0.01) and lactate (r = -0.91; P < 0.01), respectively, and were accompanied by parallel decreases (P < 0.01-0.05) in muscle energy charge (0.6%) and in the muscle ATP/ADP (8%) and ATP/AMP (19%) ratios, as well as by increases in ADP content (7%). The estimated average rate of ATP utilisation from anaerobic sources during the final 5 repetitions fell to 83%, whereas total anaerobic ATP production increased by 9% due to a 30% longer average duration of exercise (18.4 ± 4.0 vs. 14.2 ± 2.1 s). These data indicate that during a set of 10RM bilateral leg press exercise there is a decrease in power output associated with a decrease in the contribution of PCr and/or an increase in muscle lactate. The higher energy cost per repetition during the second 5 repetitions is suggestive of decreased mechanical efficiency.
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, to a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand.
In Numerical Weather Prediction models, tuning of parameters — and in particular an estimate of the model error covariance matrix — may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
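For reference, the Lorenz 1963 convective model used as a testbed above can be integrated in a few lines of Python (standard parameter values sigma = 10, rho = 28, beta = 8/3; this sketches the model only, not the AUS-BDAS assimilation scheme itself):

```python
def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 convective model."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6.0 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

# Integrate 10 time units from an off-attractor initial condition;
# the trajectory settles onto the bounded chaotic attractor.
state = (1.0, 1.0, 1.0)
for _ in range(1000):
    state = rk4_step(lorenz63, state, 0.01)
```

The model's strong sensitivity to initial conditions and its two-lobed regime structure are what make it a demanding benchmark for tracking regime changes in assimilation experiments.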
Abstract:
Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which gave overestimated values for the mesopore distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were developed specifically for monolithic silicas; contrary to particulate supports, these show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; and c) mercury porosimetry using newly recommended mercury contact angle values.

The results of the mesopore characterization of monolithic silica columns by the three methods indicated that all methods were useful with respect to the pore size distribution by volume, but only the ISEC method with the implemented PPM and PNM models gave the average pore size and distribution based on the number average, as well as the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore value but also to assess entrapment. It was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid penetration, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that influences the column back pressure; rather, the surface area to volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio values will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of the named methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.

The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores, owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in small-scaled silica monoliths with a narrow distribution.

The prediction of the optimum regimes of the pore structural parameters for the given target parameters in HPLC separations was performed. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-average size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume is strongly dependent on the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeletons and the external porosity are decisive for column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived.

The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. Column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with increasing total porosity, though this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
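The Hagen-Poiseuille relation invoked above links pressure drop to flow rate in a cylindrical channel, which is why the pore radius enters the permeability picture so strongly. A minimal sketch of the standard relation (illustrative only, not the additional methods developed in the thesis):

```python
import math

def pressure_drop(flow_rate, radius, length, viscosity):
    """Hagen-Poiseuille pressure drop for laminar flow through a
    cylindrical pore: dP = 8 * mu * L * Q / (pi * r^4).

    flow_rate in m^3/s, radius and length in m, viscosity in Pa*s;
    returns the pressure drop in Pa.
    """
    return 8.0 * viscosity * length * flow_rate / (math.pi * radius ** 4)
```

The fourth-power dependence on pore radius means that halving the flow-through pore radius raises the back pressure sixteen-fold at fixed flow rate, which is why pore-structure optimization trades permeability against efficiency so sharply.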
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, are an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out: first, reliability issues in terms of the security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of "energy scavenging systems" are analyzed: an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations is performed.
Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot condition is addressed with the development of an electrical and thermal model.
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) to model the influence of the unresolved small scales (the flow at the smallest scales of turbulence) on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the Dynamic eddy-viscosity model (DEVM), developed by \cite{germano1991dynamic}, while the other is the Explicit Algebraic SGS model (EASSM), by \cite{marstorp2009explicit}. In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code \cite{chevalier2007simson} are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow, at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow predictions at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has clear potential for industrial CFD usage.
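The dynamic eddy-viscosity model mentioned above builds on the static Smagorinsky closure, in which the SGS viscosity is proportional to the filter width squared times the resolved strain-rate magnitude. As a rough illustration of that base closure (standard textbook form with an assumed constant Cs, not the thesis code or the dynamic procedure itself):

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """Static Smagorinsky eddy viscosity: nu_t = (Cs * Delta)^2 * |S|,
    where S is the symmetric part of the resolved velocity gradient
    and |S| = sqrt(2 * S_ij * S_ij).

    grad_u: 3x3 array of velocity gradients du_i/dx_j
    delta:  filter width
    cs:     Smagorinsky constant (an assumed typical value here)
    """
    s = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # strain-rate magnitude
    return (cs * delta) ** 2 * s_mag
```

The dynamic procedure of Germano et al. replaces the fixed Cs with a locally computed coefficient via a test filter, while the EASSM goes further by modeling the anisotropy of the SGS stress tensor rather than assuming a purely eddy-viscosity form.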
Abstract:
Aquatic species can experience different selective pressures on morphology in different flow regimes. Species inhabiting lotic regimes often adapt to these conditions by evolving low-drag (i.e., streamlined) morphologies that reduce the likelihood of dislodgment or displacement. However, hydrodynamic factors are not the only selective pressures influencing organismal morphology, and shapes well suited to flow conditions may compromise performance in other roles. We investigated the possibility of morphological trade-offs in the turtle Pseudemys concinna. Individuals living in lotic environments have flatter, more streamlined shells than those living in lentic environments; however, this flatter shape may also make the shells less capable of resisting predator-induced loads. We tested the idea that "lotic" shell shapes are weaker than "lentic" shell shapes, concomitantly examining effects of sex. Geometric morphometric data were used to transform an existing finite element shell model into a series of models corresponding to the shapes of individual turtles. The models were assigned identical material properties and loaded under identical conditions, and the stresses produced by a series of eight loads were extracted to describe the strength of the shells. "Lotic" shell shapes produced significantly higher stresses than "lentic" shell shapes, indicating that the former are weaker than the latter. Females had significantly stronger shell shapes than males, although these differences were less consistent than the differences between flow regimes. We conclude that, despite the potential for many-to-one mapping of shell shape onto strength, P. concinna experiences a trade-off in shell shape between hydrodynamic and mechanical performance. This trade-off may be evident in many other turtle species, or in any other aquatic species that also depend on a shell for defense.
However, evolution of body size may provide an avenue of escape from this trade-off in some cases, as changes in size can drastically affect mechanical performance while having little effect on hydrodynamic performance.
Abstract:
Purpose of review: Overview of integrated care trials focusing on effectiveness and efficiency published from 2011 to 2013. Recent findings: Eight randomized controlled trials (RCTs) and 21 non-RCT studies were published from 2011 to 2013. Studies differed in several methodological aspects, such as study population, psychotherapeutic approaches used, outcome parameters, follow-up times, fidelity, and implementation of the integrated care model, as well as in the nation-specific healthcare context with different control conditions. This makes it difficult to draw firm conclusions. Most studies demonstrated relevant improvements regarding symptoms (P = 0.001), functioning (P = 0.01), quality of life (P = 0.01), adherence (P < 0.05) and patient satisfaction (P = 0.01), as well as reduction of caregiver stress (P < 0.05). Mean total costs favored, or at least equaled, those of the control conditions, with positive effects on subjective health favoring the integrated care models. Summary: There is increasing interest in the effectiveness and efficiency of integrated care models in patients with mental disorders, specifically in those with severe and persistent mental illness. To increase generalizability, future trials should describe exactly the rationale and content of both the integrated care model and the control conditions.
Abstract:
BACKGROUND: Individual adaptation of the processed patient's blood volume (PBV) should reduce the number and/or duration of autologous peripheral blood progenitor cell (PBPC) collections. STUDY DESIGN AND METHODS: The durations of leukapheresis procedures were adapted by means of an interim analysis of harvested CD34+ cells to obtain the intended CD34+ yield within as few and/or as short as possible leukapheresis procedures. Absolute efficiency (AE; CD34+ cells/kg body weight) and relative efficiency (RE; total CD34+ yield of a single apheresis / total number of preapheresis CD34+ cells) were calculated, assuming intra-apheresis recruitment if RE was greater than 1, and a yield prediction model for adults was generated. RESULTS: A total of 196 adults required a total of 266 PBPC collections. The median AE was 7.99 × 10^6, and the median RE was 1.76. The prediction model for AE showed a satisfactory predictive value for preapheresis CD34+ only. The prediction model for RE also showed a low predictive value (R2 = 0.36). Twenty-eight children underwent 44 PBPC collections. The median AE was 12.13 × 10^6, and the median RE was 1.62. Major complications comprised bleeding episodes related to central venous catheters (n = 4) and severe thrombocytopenia of less than 10 × 10^9 per L (n = 16). CONCLUSION: A CD34+ interim analysis is a suitable tool for individual adaptation of the duration of leukapheresis. During leukapheresis, substantial recruitment of CD34+ cells was observed, resulting in an RE of greater than 1 in more than 75 percent of patients. The upper limit of processed PBV showing intra-apheresis CD34+ recruitment is higher than in a standard large-volume leukapheresis. Therefore, a reduction of the individually needed PBPC collections by means of a further escalation of the processed PBV seems possible.
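The two efficiency measures defined in the abstract reduce to simple ratios; a minimal sketch (function and variable names are illustrative, and the example figures are invented, not the study data):

```python
def absolute_efficiency(total_cd34_yield, body_weight_kg):
    """AE: CD34+ cells collected per kg of patient body weight."""
    return total_cd34_yield / body_weight_kg

def relative_efficiency(total_cd34_yield, preapheresis_cd34):
    """RE: total CD34+ yield of a single apheresis divided by the
    total number of circulating CD34+ cells before apheresis.
    RE > 1 is taken to indicate intra-apheresis recruitment."""
    return total_cd34_yield / preapheresis_cd34
```

For instance, a hypothetical collection of 5.6 × 10^8 CD34+ cells from a 70 kg patient gives an AE of 8 × 10^6 cells/kg, and any yield exceeding the preapheresis circulating pool gives RE > 1, the recruitment criterion used in the study.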
Abstract:
A method is given for proving efficiency of the NPMLE, directly linked to empirical process theory. The general conditions are: appropriate consistency of the NPMLE, differentiability of the model, differentiability of the parameter of interest, local convexity of the parameter space, and a Donsker class condition for the class of efficient influence functions obtained by varying the parameters. For the case that the model is linear in the parameter and the parameter space is convex, as with most nonparametric missing data models, we show that the method leads to an identity for the NPMLE which almost says that the NPMLE is efficient and provides us straightforwardly with a consistency and efficiency proof. This identity is extended to an almost linear class of models which contains biased sampling models. To illustrate, the method is applied to the univariate censoring model, random truncation models, the interval censoring case I model, the class of parametric models, and a class of semiparametric models.