965 results for linear-threshold model
Abstract:
The myogenic differentiation 1 gene (MYOD1) has a key role in skeletal muscle differentiation and composition through its regulation of the expression of several muscle-specific genes. We first used a general linear mixed model approach to evaluate the association of MYOD1 expression levels with individual beef tenderness phenotypes. MYOD1 mRNA levels measured by quantitative polymerase chain reaction in 136 Nelore steers were significantly associated (P ≤ 0.01) with Warner–Bratzler shear force, measured on the longissimus dorsi muscle after 7 and 14 days of beef aging. Transcript abundance of the muscle regulatory gene MYOD1 was lower in animals with more tender beef. We also performed a coexpression network analysis using whole-transcriptome sequence data generated from 30 samples of longissimus muscle tissue to identify genes that are potentially regulated by MYOD1. The effect of MYOD1 gene expression on beef tenderness may emerge from its function as an activator of muscle-specific gene transcription, such as that of the serum response factor (C-fos serum response element-binding transcription factor) gene (SRF), which determines muscle tissue development, composition, growth and maturation.
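As a generic illustration of the kind of association test described above (not the authors' actual pipeline), the Python sketch below fits a linear mixed model relating a tenderness phenotype to an expression level with a random contemporary-group effect, using statsmodels on synthetic data; the column names, the grouping factor and all values are hypothetical.

    # Hypothetical sketch: association of a gene-expression level with shear force
    # via a linear mixed model (random intercept per contemporary group), synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 136
    df = pd.DataFrame({
        "expr": rng.normal(size=n),                       # expression level (standardised), synthetic
        "group": rng.integers(0, 8, size=n).astype(str),  # contemporary group (assumed random effect)
    })
    df["wbsf"] = 4.0 + 0.5 * df["expr"] + rng.normal(scale=0.8, size=n)  # shear force (kg), synthetic

    fit = smf.mixedlm("wbsf ~ expr", data=df, groups=df["group"]).fit()
    print(fit.summary())  # the 'expr' coefficient estimates the expression-tenderness association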
Abstract:
Twitter is a highly popular social medium that, on the one hand, allows information transmission in real time and, on the other hand, represents a source of open-access, homogeneous text data. We propose an analysis of the most common self-reported COVID symptoms from a dataset of Italian tweets to investigate the evolution of the pandemic in Italy from the end of September 2020 to the end of January 2021. After manually filtering the tweets that actually describe COVID symptoms from the database - which contains words related to fever, cough and sore throat - we discuss the usefulness of such filtering. We then compare our time series with the daily data on new hospitalisations in Italy, with the aim of building a simple linear regression model that accounts for the delay observed between tweets mentioning individual symptoms and new hospitalisations. We discuss both the results and the limitations of linear regression, given that our data suggest that the relationship between the time series of symptom tweets and of new hospitalisations changes towards the end of the acquisition period.
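A minimal sketch of the kind of lagged linear regression described (an assumed form, not the authors' code): regress daily new hospitalisations on the symptom-tweet count shifted by a fixed delay. The delay value, the effect size and the data below are all synthetic placeholders.

    # Hypothetical sketch: new hospitalisations regressed on symptom tweets
    # shifted by a candidate delay (in days); synthetic data for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    days, delay = 120, 7                                  # assumed acquisition length and lag
    tweets = rng.poisson(200, size=days).astype(float)    # daily symptom-tweet counts, synthetic
    hosp = np.empty(days)
    hosp[delay:] = 0.3 * tweets[:-delay] + rng.normal(0, 5, days - delay)  # synthetic admissions
    hosp[:delay] = np.nan                                 # no lagged predictor available yet

    x, y = tweets[:-delay], hosp[delay:]                  # tweets at day t vs admissions at t + delay
    slope, intercept = np.polyfit(x, y, 1)
    print(f"hospitalisations ~ {intercept:.1f} + {slope:.2f} * tweets(t - {delay} days)")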
Abstract:
This doctoral thesis presents a project carried out in secondary schools in the city of Ferrara with the primary objective of demonstrating the effectiveness of an intervention based on Well-Being Therapy (Fava, 2016) in reducing alcohol use and improving lifestyles. In the first part (chapters 1-3), an introduction to risky behaviors and unhealthy lifestyles in adolescence is presented, followed by an examination of the phenomenon of binge drinking and of the concept of psychological well-being. In the second part (chapters 4-6), the experimental study is presented. A three-arm cluster randomized controlled trial including three test periods was implemented. The study involved eleven classes that were randomly assigned to receive a well-being intervention (WBI), a lifestyle intervention (LI) or no intervention (NI). Results were analyzed by linear mixed models and mixed-effects logistic regression with the aim of testing the efficacy of WBI in comparison with LI and NI. The AUDIT-C total score increased more in NI than in WBI (p=0.008) and LI (p=0.003) at 6 months. The odds of being classified as an at-risk drinker were lower in WBI (OR 0.01; 95%CI 0.01–0.14) and LI (OR 0.01; 95%CI 0.01–0.03) than in NI at 6 months. The odds of using e-cigarettes at 6 months (OR 0.01; 95%CI 0.01–0.35) and cannabis at post-test (OR 0.01; 95%CI 0.01–0.18) were lower in WBI than in NI. Sleep hours at night decreased more in NI than in WBI (p = 0.029) and LI (p = 0.006) at 6 months. Internet addiction scores decreased more in WBI (p = 0.003) and LI (p = 0.004) than in NI at post-test. Conclusions about the obtained results, limitations of the study, and future implications are discussed. In the seventh chapter, the project data collected during the pandemic are presented and compared with those from the recent literature.
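To make the analysis strategy concrete, here is a minimal sketch (synthetic data, assumed allocation and effect sizes, not the thesis's dataset) of a linear mixed model with a random intercept per class, the kind used for the continuous outcomes above; the mixed-effects logistic models follow the same clustering logic with a binary outcome.

    # Hypothetical sketch: arm-by-time linear mixed model with a random intercept
    # per class (the randomisation cluster); all numbers are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for cls in range(11):                                 # eleven classes (clusters)
        arm = ["WBI", "LI", "NI"][cls % 3]                # assumed allocation for illustration
        cluster_effect = rng.normal(0, 0.3)
        for student in range(20):
            for t, period in enumerate(["pre", "post", "6m"]):
                score = 2.0 + cluster_effect + (0.5 if arm == "NI" else 0.1) * t + rng.normal(0, 1)
                rows.append({"cls": cls, "arm": arm, "period": period, "auditc": score})
    df = pd.DataFrame(rows)

    fit = smf.mixedlm("auditc ~ C(arm) * C(period)", data=df, groups=df["cls"]).fit()
    print(fit.summary())                                  # arm-by-time terms are the effects of interest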
Abstract:
The great challenges of today place great pressure on the food chain to provide safe and nutritious food that meets regulations and consumer health standards. In this context, Risk Analysis is used to produce an estimate of the risks to human health and to identify and implement effective risk-control measures. The aims of this work were to 1) describe how quantitative risk assessment (QRA) is used to evaluate the risk to consumer health; 2) address the methodology used to obtain models to apply in quantitative microbial risk assessment (QMRA); and 3) evaluate solutions to mitigate the risk. The application of a QCRA to the Italian milk industry enabled the assessment of aflatoxin M1 exposure and its impact on different population categories, and the comparison of risk-mitigation strategies. The results highlighted the most sensitive population categories and showed how more stringent sampling plans reduced the risk. The application of a QMRA to Spanish fresh cheeses showed how contamination of this product with Listeria monocytogenes may generate a risk for consumers. Two risk-mitigation actions were evaluated, i.e. reducing shelf life and domestic refrigerator temperature, both of which proved effective in reducing the risk of listeriosis. A description of the most widely applied protocols for generating data for predictive model development was provided, to increase transparency and reproducibility and to provide the means for better QMRA. The development of a linear regression model describing the fate of Salmonella spp. in Italian salami during the production process and high-pressure processing (HPP) was described. Alkaline electrolyzed water was evaluated for its potential to reduce microbial loads on working surfaces, with results showing its effectiveness. This work showed the relevance of QRA, of predictive microbiology, and of new technologies for ensuring food safety in a more integrated way. Filling data gaps, developing better models and including new risk-mitigation strategies may lead to improvements in the QRAs presented.
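As a generic illustration of the probabilistic exposure step in such an assessment (not the thesis's actual model), the sketch below propagates assumed distributions for contaminant concentration, milk intake and body weight through a Monte Carlo simulation; every distribution and parameter is a placeholder.

    # Hypothetical sketch: Monte Carlo exposure assessment
    # (exposure = concentration * intake / body weight), all distributions assumed.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000                                                         # Monte Carlo iterations
    conc_ng_kg = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=n)    # AFM1 in milk (ng/kg), assumed
    intake_kg_day = np.clip(rng.normal(0.25, 0.08, size=n), 0, None)    # milk intake (kg/day), assumed
    bw_kg = np.clip(rng.normal(70.0, 12.0, size=n), 30, None)           # body weight (kg), assumed

    exposure = conc_ng_kg * intake_kg_day / bw_kg                       # ng/kg bw per day
    print("mean exposure:", round(exposure.mean(), 3), "ng/kg bw/day")
    print("95th percentile:", round(np.percentile(exposure, 95), 3), "ng/kg bw/day")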
Abstract:
We present a new unifying framework for investigating throughput-WIP (work-in-process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then the problem of optimizing a linear objective of throughput-WIP performance can be reformulated as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy to that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing-returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
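A toy sketch of the geometric idea (not the paper's semi-infinite formulation): optimise a linear throughput-minus-holding-cost objective over the convex hull of a few assumed threshold-policy performance pairs; the LP optimum lands on a vertex, i.e. on a single threshold policy. The performance pairs and cost coefficients below are invented for illustration.

    # Toy sketch: linear objective over the convex hull of hypothetical
    # (throughput, WIP) performance pairs, one per threshold policy.
    import numpy as np
    from scipy.optimize import linprog

    pairs = np.array([[0.0, 0.0], [0.5, 1.0], [0.8, 2.5], [0.9, 4.5], [0.95, 7.0]])
    reward_rate, holding_cost = 10.0, 1.0
    c = -(reward_rate * pairs[:, 0] - holding_cost * pairs[:, 1])   # minimise the negative reward

    # Decision variables: convex-combination weights over the threshold policies,
    # so the feasible set is the polygon spanned by their performance pairs.
    res = linprog(c, A_eq=np.ones((1, len(pairs))), b_eq=[1.0], bounds=[(0.0, 1.0)] * len(pairs))
    print("optimal weight vector:", np.round(res.x, 3))    # the optimum sits at a vertex,
    print("optimal objective value:", -res.fun)            # i.e. at a single threshold policy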
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as the efficient mixed-model association (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Owing to its multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
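To illustrate the multiple-testing issue the abstract raises, here is a deliberately simplified single-marker scan on synthetic data with a Bonferroni-corrected threshold; it uses plain per-marker least squares and omits the kinship random effect that a real MLM (EMMA, RMLM, MRMLM) includes, so it is only a sketch of the testing step.

    # Simplified single-marker scan: OLS per SNP on synthetic genotypes,
    # with the stringent Bonferroni threshold discussed above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_ind, n_snp = 200, 5_000
    geno = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)   # 0/1/2 genotype codes, synthetic
    pheno = 0.8 * geno[:, 42] + rng.normal(size=n_ind)             # one causal SNP planted at index 42

    pvals = np.empty(n_snp)
    for j in range(n_snp):
        pvals[j] = stats.linregress(geno[:, j], pheno).pvalue      # per-marker test p value

    bonferroni = 0.05 / n_snp                                      # Bonferroni-corrected threshold
    print("markers passing Bonferroni:", np.where(pvals < bonferroni)[0])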
Abstract:
Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability and additional constraints that enforce the feasibility and convergence of the target-calculation layer. We also consider the case in which there is polytopic uncertainty in the steady-state model used in the target calculation. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
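A toy sketch of the intermediate (target-calculation) layer's role, with assumed model matrices, set point and input bound, and a generic optimiser standing in for a dedicated QP solver: find a reachable steady state as close as possible to the set point handed down by the RTO layer.

    # Toy sketch of target calculation: minimise the distance between the
    # steady-state output and the RTO set point, subject to an input bound.
    import numpy as np
    from scipy.optimize import minimize

    A = np.array([[0.8, 0.1], [0.0, 0.7]])      # assumed discrete-time state matrix
    B = np.array([[0.5], [0.3]])                # assumed input matrix
    C = np.array([[1.0, 0.0]])                  # assumed output matrix
    y_sp = np.array([1.2])                      # optimum set point from the RTO layer (assumed)

    def distance_to_setpoint(u_ss):
        x_ss = np.linalg.solve(np.eye(2) - A, B @ u_ss)   # steady state: x = A x + B u
        return float(np.sum((C @ x_ss - y_sp) ** 2))

    res = minimize(distance_to_setpoint, x0=np.zeros(1), bounds=[(-1.0, 1.0)])  # input bound
    print("reachable target input:", np.round(res.x, 3), "residual distance:", round(res.fun, 4))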
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently assume constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, may incorrectly indicate significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when only limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
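Assuming the PyMC library is available, the sketch below shows one generic way to model overdispersed proportion data with a logit-scale random effect sampled by MCMC; it is not the manuscript's specification, and the priors, data and dimensions are invented.

    # Hedged sketch: binomial model for proportions with a logit-normal
    # random effect capturing overdispersion, sampled by MCMC (assumes PyMC).
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(5)
    n_units, trials = 30, 40
    eta_true = 0.3 + rng.normal(0.0, 0.8, size=n_units)          # unit-level noise (overdispersion)
    y = rng.binomial(trials, 1.0 / (1.0 + np.exp(-eta_true)))    # observed success counts, synthetic

    with pm.Model():
        mu = pm.Normal("mu", 0.0, 2.0)
        sigma = pm.HalfNormal("sigma", 1.0)
        eta = pm.Normal("eta", mu, sigma, shape=n_units)         # logit-scale random effect
        pm.Binomial("y", n=trials, p=pm.math.sigmoid(eta), observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

    print(float(idata.posterior["sigma"].mean()))                # posterior mean of the dispersion scale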
Abstract:
We modified the noninvasive, in vivo technique for strain application in the tibiae of rats (Turner et al., Bone 12:73-79, 1991). The original model applies four-point bending to right tibiae via an open-loop, stepper-motor-driven spring linkage. Depending on the magnitude of the applied load, the model produces new bone formation at the periosteal (Ps.S) or endocortical (Ec.S) surfaces. Due to the spring linkage, however, the range of frequencies at which loads can be applied is limited. The modified system replaces this design with an electromagnetic vibrator. A load transducer in series with the loading points allows calibration, adjustment of the loader's position, and cyclic loading to be completed under load control as a closed servo-loop. Two experiments were conducted to validate the modified system: (1) a strain gauge was applied to the lateral surface of the right tibia of 5 adult female rats and strains were measured at applied loads from 10 to 60 N; and (2) the bone formation response was determined in 28 adult female Sprague-Dawley rats. Loading was applied as a haversine wave with a frequency of 2 Hz for 18 sec, every second day for 10 days. Peak bending loads were applied at 33, 40, 52, and 64 N, and a sham-loading group was included at 64 N. Strains in the tibiae were linear between 10 and 60 N, and the average peak strain at the Ps.S at 60 N was 2664 +/- 250 microstrain, consistent with the results of Turner's group. Lamellar bone formation was stimulated at the Ec.S by applied bending, but not by sham loading. Bending strains above a loading threshold of 40 N increased Ec.S lamellar bone formation rate, bone-forming surface, and mineral apposition rate with a dose response similar to that reported by Turner et al. (J Bone Miner Res 9:87-97, 1994). We conclude that the modified loading system offers precision for applied loads of between 0 and 70 N, versatility in the selection of loading rates up to 20 Hz, and a reproducible bone formation response in the rat tibia. Adjustment of the loader also enables the study of mechanical usage in murine tibiae, an advantage with respect to the increasing variety of transgenic strains available in bone and mineral research. (Bone 23:307-310; 1998) (C) 1998 by Elsevier Science Inc. All rights reserved.
Abstract:
Modeling volatile organic compound (VOC) adsorption onto cup-stacked carbon nanotubes (CSCNT) using the linear driving force model. Volatile organic compounds (VOCs) are an important category of air pollutants, and adsorption has been employed in the treatment (or simply the concentration) of these compounds. The current study used an ordinary analytical methodology to evaluate the properties of a cup-stacked carbon nanotube (CSCNT), a stacking morphology of truncated conical graphene with large numbers of open edges on the outer surface and empty central channels. This work used a Carbotrap bearing a cup-stacked structure (composite); for comparison, Carbotrap was used as a reference (without the nanotube). The retention and saturation capacities of both adsorbents at each concentration used (1, 5, 20 and 35 ppm of toluene and phenol) were evaluated. The composite performed better than Carbotrap; the saturation capacities of the composite were, on average, 67% higher than those of Carbotrap. The Langmuir isotherm model was used to fit the equilibrium data for both adsorbents, and a linear driving force (LDF) model was used to quantify intraparticle adsorption kinetics. The LDF model was suitable for describing the curves.
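To make the kinetic description concrete, here is a minimal sketch of the linear driving force equation coupled to a Langmuir equilibrium loading; every parameter value is assumed for illustration and is not taken from the study.

    # Toy sketch of the linear driving force (LDF) uptake model with a Langmuir
    # equilibrium loading; all parameter values are assumed.
    import numpy as np
    from scipy.integrate import solve_ivp

    q_max, b = 120.0, 0.35        # assumed Langmuir capacity (mg/g) and affinity (1/ppm)
    k_ldf = 0.02                  # assumed LDF mass-transfer coefficient (1/s)
    c_bulk = 20.0                 # assumed constant bulk toluene concentration (ppm)

    q_eq = q_max * b * c_bulk / (1.0 + b * c_bulk)        # Langmuir equilibrium loading q*

    def ldf(t, q):
        return k_ldf * (q_eq - q)                         # LDF: dq/dt = k_ldf * (q* - q)

    sol = solve_ivp(ldf, (0.0, 600.0), [0.0], t_eval=np.linspace(0.0, 600.0, 7))
    print("q* =", round(q_eq, 1), "mg/g")
    print("q(t) =", np.round(sol.y[0], 1))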
Abstract:
The conventional convection-dispersion model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as that of changes in liver blood flow on F and Cl. The extension of this model to include nonlinear kinetics and zonal heterogeneity of the liver is not straightforward and requires the numerical solution of partial differential equations, which is not available in standard nonlinear regression analysis software. In this paper, we describe an alternative compartmental model representation of hepatic disposition (including elimination). The model allows the use of standard software for data analysis and accurately describes the outflow concentration-time profile for a vascular marker after bolus injection into the liver. In an evaluation of a number of different compartmental models, the most accurate model required eight vascular compartments, two of them with back mixing. In addition, the model includes two adjacent secondary vascular compartments to describe the tail section of the concentration-time profile for a reference marker. The model has the added flexibility of being easy to modify to represent various enzyme distributions and nonlinear elimination. Model predictions of F, MTT, CV2, and the concentration-time profile, as well as parameter estimates for experimental data on an eliminated solute (palmitate), are comparable to those for the extended convection-dispersion model.
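A stripped-down sketch of the compartments-in-series idea (flow, volumes and compartment count assumed; the back-mixing and secondary compartments of the full model are omitted): a bolus passes through a chain of well-mixed compartments and the outflow concentration-time profile is recorded.

    # Toy sketch: bolus transit through a chain of well-mixed vascular compartments.
    import numpy as np
    from scipy.integrate import solve_ivp

    n_comp = 8                    # vascular compartments in series (back mixing omitted here)
    flow = 30.0                   # assumed perfusate flow (mL/min)
    vol = 1.0                     # assumed volume of each compartment (mL)
    dose = 1.0                    # bolus amount injected into the first compartment
    k = flow / vol                # inter-compartment transfer rate constant (1/min)

    def chain(t, a):
        da = np.empty_like(a)     # a[i] = amount in compartment i
        da[0] = -k * a[0]
        da[1:] = k * (a[:-1] - a[1:])
        return da

    a0 = np.zeros(n_comp); a0[0] = dose
    sol = solve_ivp(chain, (0.0, 1.0), a0, t_eval=np.linspace(0.0, 1.0, 11))
    outflow_conc = sol.y[-1] / vol                 # outflow concentration-time profile
    print(np.round(outflow_conc, 3))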
Abstract:
This work deals with the numerical simulation of the air stripping process for the pre-treatment of groundwater used for human consumption. The model, established in steady state, presents an exponential solution that is used, together with the Tau Method, to obtain a spectral approximation of the solution of the system of partial differential equations associated with the model in the transient state.
Abstract:
"Published online before print November 20, 2015"
Abstract:
Magdeburg, Univ., Diss, 2007