21 results for "Models performance"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Background: Unstable distal femoral fractures in children are challenging lesions with restricted surgical options for adequate stabilization. Elastic nails have become popular for treating femoral shaft fractures, yet their use in distal fractures remains challenging. The aim of this study was to test whether end caps (CAP) inserted into the nail extremity improved the mechanical stabilization of a segmental defect at the distal femoral metaphyseal-diaphyseal junction created in an artificial pediatric bone model. Methods: Two 3.5-mm titanium elastic nails (TEN) were introduced into the medullary canal of pediatric femur models, and a 7.0-mm-thick segmental defect was created at the distal diaphyseal-metaphyseal junction. Nondestructive 4-point bending, axial-bending, and torsion tests were conducted. The end caps were then inserted into the external tips of the nails and screwed into the bone cortex, and the mechanical tests were repeated. Stiffness, displacement, and torque were analyzed using the Wilcoxon nonparametric test for paired samples. Results: In the combined axial-bending tests, the TEN + CAP construct was 8.75% stiffer than nails alone (P < 0.01); in torsion tests, the TEN + CAP construct was 14% stiffer than nails alone (P < 0.01). In contrast, the 4-point bending test showed no differences between the methods (P = 0.91, stiffness; P = 0.51, displacement). Thus, the end caps increased construct stability under torsion and axial-bending loads but not under 4-point bending. Conclusions: These findings indicate that end caps fitted to elastic nails may contribute to the stabilization of fractures of the type our model mimics (small distal fragment, bone comminution, and distal bone fragment loss).
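A minimal sketch of the paired nonparametric comparison described above, using SciPy's Wilcoxon signed-rank test. The stiffness values below are invented placeholders, not the study's measurements:

```python
# Hedged illustration: paired Wilcoxon test comparing the same constructs
# measured with nails alone (TEN) vs. nails plus end caps (TEN + CAP).
from scipy.stats import wilcoxon

stiffness_ten = [41.2, 39.8, 43.5, 40.1, 42.7, 38.9]       # nails alone (N/mm), hypothetical
stiffness_ten_cap = [44.8, 43.1, 46.9, 43.6, 46.2, 42.5]   # nails + end caps (N/mm), hypothetical

stat, p_value = wilcoxon(stiffness_ten, stiffness_ten_cap)  # paired-sample test
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.3f}")
```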
Abstract:
System thinking allows companies to use indicators of subjective constructs, such as recursiveness, cause-effect relationships, and autonomy, for performance evaluation. Thus, the question that motivates this paper is: are Brazilian companies searching for new performance measurement and evaluation models based on system thinking? The study investigates such models, looking for system thinking roots in their frameworks. The research was both exploratory and descriptive, based on a multiple-case strategy comprising four case studies in the chemical sector. The findings showed that the organizational models have some characteristics that can be related to system thinking, such as system control and communication. Complexity and autonomy are poorly formalized by the companies. Within its context, the data suggest that system thinking is suitable for organizational performance evaluation but remains distant from management practice.
Abstract:
When a scaled structure (model or replica) is used to predict the response of a full-size structure (prototype), the model geometric dimensions should relate to the corresponding prototype dimensions by a single scaling factor. However, owing to manufacturing restrictions, this condition cannot be met for some of the dimensions in real structures. Accordingly, the distorted geometry will not comply with the overall geometric scaling factor, violating the Pi theorem requirements for complete dynamic similarity. In the present study, a method that takes geometric distortions into account is introduced, leading to a model similar to the prototype. To assess the performance of this method, three analytical problems of structures subjected to dynamic loads are analysed. It is shown that the replica developed with this technique accurately predicts the full-size structure behaviour even when the studied models have some of their dimensions severely distorted.
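For reference, these are the standard complete-similarity (replica) relations implied by the Pi theorem for a same-material model scaled by a single geometric factor; they are textbook background, not the paper's distortion correction, which modifies them when some dimensions do not obey the single factor:

```latex
% Replica scaling under dynamic loading, same material, geometric factor
% \lambda = L_m / L_p (subscripts m = model, p = prototype); textbook
% relations assumed here, not expressions taken from the paper.
\begin{aligned}
\frac{L_m}{L_p} &= \lambda, &
\frac{t_m}{t_p} &= \lambda, &
\frac{V_m}{V_p} &= 1, \\
\frac{\varepsilon_m}{\varepsilon_p} &= 1, &
\frac{\sigma_m}{\sigma_p} &= 1, &
\frac{F_m}{F_p} &= \lambda^{2}.
\end{aligned}
```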
Abstract:
Background and Purpose: Becoming proficient in laparoscopic surgery depends on the acquisition of specialized skills that can only be obtained through specific training. This training can be achieved in various ways using inanimate models, animal models, or live patient surgery, each with its own pros and cons. Currently, there are substantial data supporting the benefits of animal model training in the initial learning of laparoscopy. Nevertheless, whether these benefits extend to moderately experienced surgeons is uncertain. The purpose of this study was to determine whether training using a porcine model results in a quantifiable gain in laparoscopic skills for moderately experienced laparoscopic surgeons. Materials and Methods: Six urologists with some laparoscopic experience were asked to perform a radical nephrectomy weekly for 10 weeks in a porcine model. The procedures were recorded, and surgical performance was assessed by two experienced laparoscopic surgeons using a previously published surgical performance assessment tool. The obtained data were then submitted to statistical analysis. Results: With training, blood loss was reduced by approximately 45% when comparing the averages of the first and last surgical procedures (P = 0.006). Depth perception improved by close to 35% (P = 0.041), and dexterity improved by close to 25% (P = 0.011). Total operative time showed a trend toward improvement, although it was not significant (P = 0.158). Autonomy, efficiency, and tissue handling were the only aspects that did not show any noteworthy change (P = 0.202, P = 0.677, and P = 0.456, respectively). Conclusions: These findings suggest that there are quantifiable gains in laparoscopic skills obtained from training in an animal model. Our results suggest that these benefits also extend to more advanced stages of the learning curve, but it is unclear how far along the learning curve training with animal models provides a clear benefit for the performance of laparoscopic procedures. Future studies are necessary to confirm these findings and better understand the impact of this learning tool on surgical practice.
Abstract:
An extension of some standard likelihood-based procedures to heteroscedastic nonlinear regression models under scale mixtures of skew-normal (SMSN) distributions is developed. This novel class of models provides a useful generalization of the heteroscedastic symmetrical nonlinear regression models (Cysneiros et al., 2010), since the random-term distributions cover symmetric as well as asymmetric and heavy-tailed distributions such as the skew-t, skew-slash, and skew-contaminated normal, among others. A simple EM-type algorithm for iteratively computing maximum likelihood estimates of the parameters is presented, and the observed information matrix is derived analytically. To examine the performance of the proposed methods, simulation studies are presented showing the robustness of this flexible class against outlying and influential observations and showing that the maximum likelihood estimates based on the EM-type algorithm have good asymptotic properties. Furthermore, local influence measures and one-step approximations of the estimates in the case-deletion model are obtained. Finally, the methodology is illustrated on a data set previously analyzed under the homoscedastic skew-t nonlinear regression model.
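To illustrate the flavor of an EM-type algorithm for scale-mixture regression models, here is a deliberately simplified sketch for the symmetric Student-t special case (homoscedastic, fixed degrees of freedom), not the authors' SMSN heteroscedastic algorithm:

```python
# EM (IRLS) sketch for linear regression with Student-t errors, a simple
# symmetric scale-mixture-of-normals case. The E-step computes the expected
# mixing weights; the M-step is weighted least squares plus a scale update.
import numpy as np

def t_regression_em(X, y, nu=4.0, n_iter=50):
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        resid = y - X @ beta
        delta = resid ** 2 / sigma2
        w = (nu + 1.0) / (nu + delta)                     # E-step: weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # M-step: weighted LS
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n      # M-step: scale
    return beta, sigma2

# Hypothetical usage with simulated heavy-tailed data:
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=4, size=200)
print(t_regression_em(X, y))
```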
Abstract:
In this paper we obtain asymptotic expansions, up to order n^{-1/2} and under a sequence of Pitman alternatives, for the nonnull distribution functions of the likelihood ratio, Wald, score, and gradient test statistics in the class of symmetric linear regression models. This is a wide class of models that encompasses the Student-t model and several other symmetric distributions with longer-than-normal tails. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters. Furthermore, to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes.
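For orientation, these are the standard textbook forms of the four statistics for testing a subset of parameters; they are not the symmetric-model-specific expressions derived in the paper:

```latex
% H_0: \theta_1 = \theta_1^{(0)}, with \theta = (\theta_1^\top, \theta_2^\top)^\top;
% \hat\theta and \tilde\theta are the unrestricted and restricted MLEs,
% \ell(\cdot) the log-likelihood, U(\cdot) the score function, K(\cdot) the
% expected information, and K^{11} the \theta_1 block of K^{-1}.
\begin{aligned}
S_{LR} &= 2\{\ell(\hat\theta) - \ell(\tilde\theta)\}, \\
S_{W}  &= (\hat\theta_1 - \theta_1^{(0)})^\top \{K^{11}(\hat\theta)\}^{-1} (\hat\theta_1 - \theta_1^{(0)}), \\
S_{R}  &= U(\tilde\theta)^\top K(\tilde\theta)^{-1}\, U(\tilde\theta), \\
S_{T}  &= U(\tilde\theta)^\top (\hat\theta - \tilde\theta).
\end{aligned}
```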
Abstract:
Background: In the analysis of the effects of cell treatment, such as drug dosing, identifying changes in gene network structure between normal and treated cells is a key task. A possible way to identify the changes is to compare the structures of networks estimated separately from data on normal and treated cells. However, this approach usually fails to estimate accurate gene networks because of the limited length of the time series data and measurement noise. Thus, approaches that efficiently identify changes in regulations by using time series data from both conditions are needed. Methods: We propose a new statistical approach, based on the state space representation of the vector autoregressive model, that estimates gene networks under two different conditions in order to identify changes in regulations between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of regulations under each condition. The use of the hidden binary variables enables efficient data usage: data from both conditions are used for commonly existing regulations, while only the corresponding data are applied for condition-specific regulations. Also, the similarity of the networks under the two conditions is automatically taken into account through the design of the potential function for the hidden binary variables. For the estimation of the hidden binary variables, we derive a new variational annealing method that searches for the configuration of the binary variables maximizing the marginal likelihood. Results: For the performance evaluation, we use time series data from two topologically similar synthetic networks and confirm that our proposed approach estimates commonly existing regulations, as well as changes in regulations, with higher coverage and precision than other existing approaches in almost all the experimental settings. For a real data application, our proposed approach is applied to time series data from normal human lung cells and human lung cells treated by stimulating EGF receptors and dosing an anticancer drug called Gefitinib. In the treated lung cells, a cancer cell condition is simulated by the stimulation of EGF receptors, but the effect would be counteracted by the selective inhibition of EGF receptors by Gefitinib. However, gene expression profiles actually differ between the conditions, and the genes related to the identified changes are considered possible off-targets of Gefitinib. Conclusions: From the synthetically generated time series data, our proposed approach can identify changes in regulations more accurately than existing methods. By applying the proposed approach to the time series data on normal and treated human lung cells, candidate off-target genes of Gefitinib are found. According to published clinical information, one of the genes may be related to a factor in interstitial pneumonia, which is known as a side effect of Gefitinib.
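As a deliberately naive illustration of the goal (flagging regulatory coefficients of a first-order VAR that differ between two conditions), the sketch below fits a ridge-regularized VAR(1) per condition and thresholds the coefficient differences. The paper instead uses a state-space VAR with hidden binary indicators estimated by variational annealing; the threshold here is an arbitrary assumption:

```python
import numpy as np

def fit_var1_ridge(Y, lam=1.0):
    """Y: (T, G) expression time series; returns (G, G) matrix A with y_t ~ A y_{t-1}."""
    X, Z = Y[:-1], Y[1:]
    G = Y.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(G), X.T @ Z).T

def changed_edges(Y_normal, Y_treated, threshold=0.3):
    A1 = fit_var1_ridge(Y_normal)
    A2 = fit_var1_ridge(Y_treated)
    return np.argwhere(np.abs(A1 - A2) > threshold)   # (target, regulator) index pairs

# Hypothetical usage with simulated data for 5 genes over 20 time points:
rng = np.random.default_rng(1)
Yn, Yt = rng.normal(size=(20, 5)), rng.normal(size=(20, 5))
print(changed_edges(Yn, Yt))
```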
Abstract:
Effects of roads on wildlife and its habitat have been measured using metrics such as the nearest road distance, road density, and effective mesh size. In this work we introduce two new indices: (1) the Integral Road Effect (IRE), which measures the summed effects of the points of a road at a fixed point in the forest; and (2) the Average Value of the Infinitesimal Road Effect (AVIRE), which measures the average of the effects of roads at this point. IRE is formally defined as the line integral of a special function (the infinitesimal road effect) along the curves that model the roads, whereas AVIRE is the quotient of IRE by the length of the roads. Combining tools of the ArcGIS software with a numerical algorithm, we calculated these and other road and habitat cover indices at a sample of points in a human-modified landscape in the Brazilian Atlantic Forest, where data on the abundance of two groups of small mammals (forest specialists and habitat generalists) were collected in the field. We then compared, through the Akaike Information Criterion (AIC), a set of candidate regression models to explain the variation in small mammal abundance, including models with our two new road indices (AVIRE and IRE), models with other road effect indices (nearest road distance, mesh size, and road density), and reference models (containing only habitat indices, or only the intercept without the effect of any variable). Compared to the other road effect indices, AVIRE showed the best performance in explaining the abundance of forest specialist species, whereas the nearest road distance performed best for generalist species. AVIRE and habitat together were included in the best model for both small mammal groups; that is, higher abundance of specialist and generalist small mammals occurred where there was a lower average road effect (lower AVIRE) and more habitat. Moreover, unlike the other road effect indices except mesh size, AVIRE was not significantly correlated with the habitat cover of specialists or generalists, which allows the effect of roads to be separated from the effect of habitat on small mammal communities. We suggest that the proposed indices and GIS procedures could also be useful for describing other spatial ecological phenomena, such as edge effects in habitat fragments.
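In symbols, the two indices as defined verbally in the abstract can be written as follows (the symbol names f, C_j and L are notation introduced here, not taken from the paper):

```latex
% f(p, s): infinitesimal road effect of road point s on forest point p;
% C_1, ..., C_k: curves modelling the roads; L: total road length.
\mathrm{IRE}(p) = \sum_{j=1}^{k} \int_{C_j} f(p, s)\, \mathrm{d}s,
\qquad
\mathrm{AVIRE}(p) = \frac{\mathrm{IRE}(p)}{L},
\qquad
L = \sum_{j=1}^{k} \int_{C_j} \mathrm{d}s .
```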
Abstract:
In the nutritional context, microminerals are often supplemented in poultry feed in quantities exceeding requirements in an attempt to ensure proper animal performance. Dose-response experiments are very common in the determination of nutrient levels for optimal feed balance and typically rely on regression models to achieve this objective. Nevertheless, routine regression analysis generally relies on a priori information about a possible functional relationship between the response variable and the nutrient level. Isotonic regression is a least-squares estimation method that produces estimates preserving the data ordering; in the theory of isotonic regression this ordering information is essential and is expected to increase fitting efficiency. The objective of this work was to use an isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard lineage. We considered plateau-response models of quadratic polynomial and linear-exponential forms. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulations considering different scenarios for the parametric values. The isotonization of the data yielded an improvement in all the fitting quality measures evaluated. Among the models used, the logarithmic model produced parameter estimates more consistent with the values reported in the literature.
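A minimal sketch of the isotonization step using scikit-learn's IsotonicRegression; the Zn doses and tibia-deposition values below are hypothetical placeholders, and a plateau or logarithmic model would then be fitted to the isotonized responses:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

zn_dose = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)            # mg/kg feed, hypothetical
zn_tibia = np.array([155, 210, 248, 241, 270, 266, 281], dtype=float)   # ppm in tibia, hypothetical

iso = IsotonicRegression(increasing=True)
zn_tibia_iso = iso.fit_transform(zn_dose, zn_tibia)   # monotone least-squares (PAVA) fit
print(zn_tibia_iso)                                   # non-decreasing isotonized responses
```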
Abstract:
The present investigation was undertaken to test whether exercise training (ET) associated with AMPK/PPAR agonists (EM) would improve skeletal muscle function in mdx mice. These drugs have the potential to improve oxidative metabolism. This is of particular interest because oxidative muscle fibers are less affected in the course of the disease than their glycolytic counterparts. Therefore, a cohort of 34 male congenic C57BL/10J mdx mice included in this study was randomly assigned to four groups: vehicle solution (V), EM [AICAR (AMPK agonist, 50 mg·kg⁻¹·day⁻¹, i.p.) and GW 1516 (PPAR delta agonist, 2.5 mg·kg⁻¹·day⁻¹, gavage)], ET (voluntary running on an activity wheel), and EM+ET. Functional performance (grip meter and rotarod), aerobic capacity (running test), muscle histopathology, serum creatine kinase (CK), levels of ubiquitinated proteins, expression of oxidative metabolism proteins (AMPK, PPAR, myoglobin, and SCD), and expression of intracellular calcium handling proteins (DHPR, SERCA, and NCX) were analyzed. Treatments started when the animals were two months old and were maintained for one month. A significant functional improvement (p<0.05) was observed in animals submitted to the combination of ET and EM. CK levels were decreased and the expression of proteins related to oxidative metabolism was increased in this group. There were no differences among the groups in the expression of intracellular calcium handling proteins. To our knowledge, this is the first study to test the association of ET with EM in an experimental model of muscular dystrophy. Our results suggest that the association of ET and EM should be further tested as a potential therapeutic approach in muscular dystrophies.
Abstract:
We propose a new general Bayesian latent class model, based on a computationally intensive approach, for evaluating the performance of multiple diagnostic tests in situations in which no gold standard test exists. The modeling represents an interesting and suitable alternative to models with complex structures involving the general case of several conditionally independent diagnostic tests, covariates, and strata with different disease prevalences. The technique of stratifying the population according to different disease prevalence rates does not add marked complexity to the modeling, but it makes the model more flexible and interpretable. To illustrate the proposed general model, we evaluate the performance of six diagnostic screening tests for Chagas disease, considering some epidemiological variables. Serology at the time of donation (negative, positive, inconclusive) was treated as a stratification factor in the model. The general model with stratification of the population performed better than its counterparts without stratification. The group formed by the Biomanguinhos FIOCRUZ-kit tests (c-ELISA and rec-ELISA) is the best option in the confirmation process, presenting a false-negative rate of 0.0002% under the serial testing scheme. We can be 100% certain that the donor is healthy when these two tests give negative results, and that the donor has Chagas disease when they give positive results.
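For context, the generic two-class latent class likelihood for K conditionally independent binary tests in a stratum has the textbook form below; this is only the basic building block, not the authors' full Bayesian specification with covariates and priors:

```latex
% \pi_s: disease prevalence in stratum s; Se_k, Sp_k: sensitivity and
% specificity of test k; t_k \in \{0, 1\}: observed result of test k.
P(T_1 = t_1, \ldots, T_K = t_K \mid s)
  = \pi_s \prod_{k=1}^{K} Se_k^{\,t_k} (1 - Se_k)^{1 - t_k}
  + (1 - \pi_s) \prod_{k=1}^{K} (1 - Sp_k)^{\,t_k}\, Sp_k^{\,1 - t_k}.
```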
Abstract:
During the last three decades, several predictive models have been developed to estimate the somatic production of macroinvertebrates. Although these models have been evaluated for their ability to assess the production of macrobenthos in different marine ecosystems, these approaches have not been applied specifically to sandy beach macrofauna and may not be directly applicable to this transitional environment. Hence, in this study, a broad literature review of sandy beach macrofauna production was conducted, and estimates obtained with cohort-based and size-based methods were collected. The performance of nine models in estimating the production of individual populations from the sandy beach environment, evaluated for all taxonomic groups combined and for individual groups separately, was assessed by comparing the production predicted by the models to the estimates obtained from the literature (observed production). Most of the models overestimated population production compared to observed production estimates, whether for all populations combined or for more specific taxonomic groups. However, estimates from two models developed by Cusson and Bourget provided the best fits to measured production, and thus represent the best alternatives to the cohort-based and size-based methods in this habitat. The consistent performance of one of these Cusson and Bourget models, which was developed for the macrobenthos of sandy substrate habitats (C&B-SS), shows that the performance of a model does not depend on whether it was developed for a specific taxonomic group. Moreover, since some widely used models (e.g., the Robertson model) show very different responses when applied to the macrofauna of different marine environments (e.g., sandy beaches and estuaries), prior evaluation of these models is essential.
Abstract:
This study aims to compare and validate two soil-vegetation-atmosphere transfer (SVAT) schemes: TERRA-ML and the Community Land Model (CLM). Both SVAT schemes are run in standalone mode (decoupled from an atmospheric model) and forced with meteorological in-situ measurements obtained at several tropical African sites. Model performance is quantified by comparing simulated sensible and latent heat fluxes with eddy-covariance measurements. Our analysis indicates that the Community Land Model corresponds more closely to the micrometeorological observations, reflecting the advantages of its higher complexity and physical realism. Deficiencies in TERRA-ML are addressed and its performance is improved: (1) adjusting input data (root depth) to region-specific values (tropical evergreen forest) resolves the dry-season underestimation of evapotranspiration; (2) adjusting the leaf area index and albedo (which depend on hard-coded model constants) resolves overestimations of both latent and sensible heat fluxes; and (3) an unrealistic flux partitioning caused by overestimated superficial water contents is reduced by adjusting the hydraulic conductivity parameterization. CLM is by default more versatile in its global application to different vegetation types and climates. On the other hand, with its lower degree of complexity, TERRA-ML is much less computationally demanding, which leads to faster calculation times in a coupled climate simulation.
Abstract:
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, such as a nonconstant coefficient of variation, may be handled in the three approaches, and we evaluate their performance with respect to the magnitude of the standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers, and we comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses.
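A brief, hypothetical illustration of the GEE approach mentioned above, using statsmodels; the synthetic pretest-posttest data, variable names, and the choice of a Gamma family (variance proportional to the squared mean, i.e., roughly constant coefficient of variation) are assumptions, not the paper's analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical pretest-posttest data: 40 subjects, 2 occasions, 2 arms.
rng = np.random.default_rng(42)
n = 40
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "time": np.tile([0, 1], n),
    "group": np.repeat(rng.integers(0, 2, size=n), 2),
})
mean = np.exp(1.0 + 0.3 * df["time"] + 0.2 * df["group"])
df["y"] = rng.gamma(shape=5.0, scale=mean / 5.0)   # positive response, roughly constant CV

model = smf.gee(
    "y ~ time * group",                          # marginal mean model
    groups="id",                                 # repeated measures clustered by subject
    data=df,
    family=sm.families.Gamma(),                  # variance proportional to mean^2
    cov_struct=sm.cov_struct.Exchangeable(),     # within-subject working correlation
)
print(model.fit().summary())
```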
Abstract:
To shed light on the interactions occurring in fermented milks when using co-cultures of Streptococcus thermophilus with Lactobacillus bulgaricus (StLb) or Lactobacillus acidophilus (StLa), a new co-metabolic model was proposed and checked both in the presence and in the absence of inulin as a prebiotic. For this purpose, the experimental data on concentrations of substrates and fermentation products were used in balances of carbon, degree of reduction, and ATP. S. thermophilus always exhibited faster growth than the other two microorganisms, while the percentage of lactose fermented to lactic acid, the percentage of galactose metabolized, and the levels of diacetyl and acetoin formed depended strongly on the type of co-culture and the presence of inulin. The StLb co-culture led to higher acetoin and lower diacetyl levels compared to StLa, probably because of more reducing conditions or limited acetoin dehydrogenation. Inulin addition to StLa suppressed acetoin accumulation and hindered that of diacetyl, suggesting catabolite repression of alpha-acetolactate synthase expression in S. thermophilus. Both co-cultures showed the highest ATP requirements for biomass growth and maintenance at the beginning of fermentation, consistent with the high energy demand of enzyme induction during the lag phase. Inulin reduced these requirements, making biomass synthesis and maintenance less energy-consuming. Only a fraction of the galactose was released from lactose, consistent with the galactose-positive phenotype of most dairy strains. The galactose fraction metabolized without inulin was about twice that in its presence, which suggests inhibition of the galactose transport system of S. thermophilus by fructose released from partial inulin hydrolysis.
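For reference, the carbon and degree-of-reduction balances used in such co-metabolic models take the generic bioprocess-stoichiometry form below; the notation is introduced here and does not reproduce the paper's specific equations:

```latex
% q_i: net specific production rate of compound i (negative for substrates);
% n_{C,i}: carbon atoms per mole of compound i; \gamma_i: degree of
% reduction per C-mole of compound i (CO2 has \gamma = 0).
\sum_{i} q_i\, n_{C,i} = 0 \quad \text{(carbon balance)},
\qquad
\sum_{i} q_i\, n_{C,i}\, \gamma_i = 0 \quad \text{(degree-of-reduction balance)}.
```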