923 results for Linear mixed effect models
Abstract:
Background/Aim - People of south Asian origin have an excessive risk of morbidity and mortality from cardiovascular disease. We examined the effect of ethnicity on known risk factors and analysed the risk of cardiovascular events and mortality in UK south Asian and white European patients with type 2 diabetes over a 2-year period. Methods - A total of 1486 south Asian (SA) and 492 white European (WE) subjects with type 2 diabetes were recruited from 25 general practices in Coventry and Birmingham, UK. Baseline data included clinical history, anthropometry and measurements of traditional risk factors: blood pressure, total cholesterol and HbA1c. Multiple linear regression models were used to examine ethnic differences in individual risk factors. Ten-year cardiovascular risk was estimated using the Framingham and UKPDS equations. All subjects were followed up for 2 years, and cardiovascular events (CVD) and mortality in the two groups were compared. Findings - Significant differences were noted in the risk profiles of the two groups. After adjustment for clustering and confounding, a significant ethnicity effect remained only for higher HbA1c (0.50 [0.22 to 0.77]; P = 0.0004) and lower HDL (-0.09 [-0.17 to -0.01]; P = 0.0266). Baseline CVD history was predictive of CVD events during follow-up for SA (P < 0.0001) but not WE (P = 0.189). Mean age at death was 66.8 (11.8) years for SA vs. 74.2 (12.1) years for WE, a difference of 7.4 years (95% CI 1.0 to 13.7 years), P = 0.023. The adjusted odds ratio of a CVD event or death from CVD was greater, but not significantly so, in SA than in WE (OR 1.4 [0.9 to 2.2]). Limitations - The small number of events in both groups and the short follow-up period are key limitations. Longer follow-up is required to see whether the observed differences between the ethnic groups persist. Conclusion - South Asian patients with type 2 diabetes in the UK have a higher cardiovascular risk and present with cardiovascular events at a significantly younger age than white Europeans.
Enhanced, ethnicity-specific targets and effective treatments are needed if these inequalities are to be reduced.
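The adjusted ethnicity effects reported above come from multiple linear regression. As a hedged illustration of that model form (simulated data with an invented confounder, not the study's dataset; the true ethnicity effect is set to the 0.50 HbA1c difference quoted in the abstract):

```python
import numpy as np

# Hypothetical sketch of an OLS fit of the kind used to test for an
# ethnicity effect on HbA1c after adjusting for confounders.
# All data are simulated; 'age' is an invented confounder.
rng = np.random.default_rng(42)
n = 1978                                            # 1486 SA + 492 WE subjects
ethnicity = np.r_[np.ones(1486), np.zeros(492)]     # 1 = SA, 0 = WE
age = rng.normal(60, 10, n)
hba1c = 7.0 + 0.50 * ethnicity + 0.02 * age + rng.normal(0, 1.0, n)

# Design matrix: intercept, ethnicity indicator, confounder
X = np.column_stack([np.ones(n), ethnicity, age])
beta, *_ = np.linalg.lstsq(X, hba1c, rcond=None)
print(f"adjusted ethnicity effect on HbA1c: {beta[1]:.2f}")
```

The fitted ethnicity coefficient plays the role of the adjusted HbA1c difference (0.50) reported in the abstract; the study itself additionally adjusted for clustering by practice, which this sketch omits.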
Abstract:
Presentation of an abstract
Abstract:
Activated sludge basins (ASBs) are a key step in wastewater treatment processes, used to eliminate biodegradable pollution from the water discharged to the natural environment. Bacteria found in the activated sludge consume and assimilate nutrients such as carbon, nitrogen and phosphorus under specific environmental conditions. However, applying agitation and aeration regimes that provide the environmental conditions promoting bacterial growth is not easy, and the regimes applied to activated sludge basins strongly influence the efficacy of the wastewater treatment process. The major aims of agitation by submersible mixers are to improve the contact between biomass and wastewater and to prevent biomass settling. The mixers induce a horizontal flow in the oxidation ditch, which can be quantified by the mean horizontal velocity; mean values of 0.3-0.35 m s-1 are recommended as a design criterion to ensure the best conditions for mixing and aeration (Da Silva, 1994). To achieve circulation velocities of this order of magnitude, the positioning and types of mixers are chosen from the plant constructors' experience and the suppliers' data for the impellers. Case studies of existing plants have shown that measured velocities were sometimes not in the range specified in the plant design, illustrating that a design and diagnosis approach is still needed to improve process reliability by eliminating or reducing short circuits, dead zones, and zones of inefficient mixing and poor aeration. The objective of the aeration is to facilitate the quick degradation of pollutants by bacterial growth. A wastewater treatment plant must therefore be adequately aerated; as a result, aeration alone accounts for 60-80% of the plant's total energy consumption (Juspin and Vasel, 2000).
An earlier study (Gillot et al., 1997) illustrated the influence that hydrodynamics have on aeration performance, as measured by the oxygen transfer coefficient. Optimising the agitation and aeration systems can therefore enhance the oxygen transfer coefficient and consequently reduce the operating costs of the wastewater treatment plant. It is critically important to estimate the mass transfer coefficient correctly, as any errors could make the simulations of biological activity physically unrepresentative. The transfer process was therefore rigorously examined in several different types of process equipment to determine the impact that different hydrodynamic regimes and liquid-side film transfer coefficients have on the gas phase and the mass transfer of oxygen. To model the biological activity occurring in ASBs, several generic biochemical reaction models, known as Activated Sludge Models (ASM; Henze et al., 2000), have been developed to characterise different biochemical reaction processes. The ASM1 protocol was selected to characterise the impact of aeration on the bacteria consuming and assimilating ammonia and nitrate in the wastewater. However, one drawback of the ASM protocols is that the hydrodynamics are assumed to be uniform, through the use of perfectly mixed or plug-flow reactors, or of a number of perfectly mixed reactors in series. This makes it very difficult to identify the influence of mixing and aeration on oxygen mass transfer and biological activity. Therefore, to account for the impact of the local gas-liquid mixing regime on the biochemical activity, Computational Fluid Dynamics (CFD) was used, applying the individual ASM1 reaction equations as source terms to a number of scalar equations. The application of ASM1 within CFD (FLUENT) thus enabled the investigation of the oxygen transfer efficiency and the biological removal of carbon and nitrogen in pilot-scale (7.5 cubic metres) and plant-scale (6000 cubic metres) ASBs.
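The liquid-side oxygen transfer discussed above is commonly summarised by a volumetric mass-transfer coefficient kLa. A minimal sketch, with illustrative parameter values not taken from the cited studies, integrates the standard dissolved-oxygen balance dC/dt = kLa*(C_sat - C) - OUR:

```python
# Hedged sketch of the liquid-side oxygen mass-transfer balance that
# underlies kLa estimation:  dC/dt = kLa*(C_sat - C) - OUR.
# Parameter values are illustrative only.
C_sat = 9.1       # saturation dissolved-oxygen concentration, mg/L (~20 C)
kLa = 5.0         # volumetric mass-transfer coefficient, 1/h
OUR = 30.0        # oxygen uptake rate of the biomass, mg/(L*h)
C, dt = 0.0, 0.001    # initial DO and explicit-Euler time step, h

for _ in range(10_000):            # integrate 10 h of operation
    C += dt * (kLa * (C_sat - C) - OUR)

# steady state: C -> C_sat - OUR/kLa = 9.1 - 6.0 = 3.1 mg/L
print(f"DO after 10 h: {C:.2f} mg/L")
```

The steady-state value shows why an error in kLa propagates directly into the simulated oxygen available to the biomass, which is the sensitivity the abstract warns about.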
Both studies have been used to validate the effect that the hydrodynamic regime has on oxygen mass transfer (the circulation velocity and mass transfer coefficient) and the effect that this has on the biological activity on pollutants such as ammonia and nitrate (Cartland Glover et al., 2005). The work presented here is one part of an overall approach to improving the understanding of ASBs and of the impact that their hydraulic and biological performance has on the overall wastewater treatment process.

References
CARTLAND GLOVER G., PRINTEMPS C., ESSEMIANI K., MEINHOLD J. (2005). Modelling of wastewater treatment plants: how far shall we go with sophisticated modelling tools? 3rd IWA Leading-Edge Conference & Exhibition on Water and Wastewater Treatment Technologies, 6-8 June 2005, Sapporo, Japan.
DA SILVA G. (1994). Eléments d'optimisation du transfert d'oxygène par fines bulles et agitateur séparé en chenal d'oxydation. PhD thesis, CEMAGREF Antony, France.
GILLOT S., DERONZIER G., HEDUIT A. (1997). Oxygen transfer under process conditions in an oxidation ditch equipped with fine bubble diffusers and slow speed mixers. WEFTEC, Chicago, USA.
HENZE M., GUJER W., MINO T., van LOOSDRECHT M. (2000). Activated Sludge Models ASM1, ASM2, ASM2D and ASM3. Scientific and Technical Report No. 9, IWA Publishing, London, UK.
JUSPIN H., VASEL J.-L. (2000). Influence of hydrodynamics on oxygen transfer in the activated sludge process. IWA, Paris, France.
Abstract:
The classical economic order quantity model involves two major inventory costs: ordering and inventory holding costs. In this paper we investigate the effect of a firm's cash flow on its purchasing activity. The analysis uses a cash-flow identity closely resembling the identities used in inventory modelling. In our approach, the purchasing and ordering process is analysed with discounted costs. The cost function of the model consists of linear cash holding, linear opportunity cost of spending cash, and linear interest costs. We derive the optimal solution of the proposed model and illustrate it with numerical examples.
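For orientation, the classical economic order quantity baseline that the paper's cash-flow model extends can be sketched as follows. The demand and cost figures are invented, and the paper's discounted cash-holding, opportunity and interest terms are not modelled here:

```python
from math import sqrt

# Classic EOQ baseline (not the paper's extended cash-flow model).
# Parameter values are illustrative only.
D = 1200.0   # annual demand, units/year
K = 50.0     # fixed ordering cost per order
h = 4.0      # holding cost per unit per year

Q_star = sqrt(2 * D * K / h)                   # EOQ formula
total_cost = D * K / Q_star + h * Q_star / 2   # ordering + holding cost
print(f"Q* = {Q_star:.1f} units, annual cost = {total_cost:.2f}")
```

At the optimum the two cost components are equal, which is the balancing property the paper's richer cost function (with cash-related terms) generalises.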
Abstract:
In mixed oligopolies, private firms compete with a public firm that at least partially aims to maximize social surplus. We investigate mixed duopolies in which the firms first build capacities simultaneously and then set their prices, also simultaneously. For the same two-stage game with purely private firms, Kreps and Scheinkman (1983) demonstrated that the first-stage equilibrium capacities are identical to the equilibrium outputs of the Cournot duopoly with the same cost structure and demand. This paper extends Kreps and Scheinkman's result to mixed duopolies with linear demand and constant unit costs: quantity pre-commitment and Bertrand competition yield Cournot outcomes not only with private firms but also when a public firm is involved.
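The Cournot benchmark referred to above is easy to compute under linear demand and constant unit cost. A minimal sketch with illustrative parameter values, for the symmetric private-firm case (the mixed public/private equilibrium analysed in the paper is not reproduced here):

```python
# Cournot duopoly with linear demand P = a - b*(q1 + q2) and constant
# unit cost c: the outcome Kreps and Scheinkman (1983) showed the
# two-stage capacity-then-price game replicates for private firms.
# Parameter values are illustrative only.
a, b, c = 10.0, 1.0, 2.0

q_star = (a - c) / (3 * b)        # symmetric equilibrium output per firm
price = a - b * 2 * q_star        # market-clearing price
profit = (price - c) * q_star     # per-firm equilibrium profit
print(f"q* = {q_star:.3f}, price = {price:.3f}, profit = {profit:.3f}")
```

The first-order condition (a - c - 2*b*q_i - b*q_j = 0) solved at the symmetric point gives the closed form used above.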
Abstract:
Highways are generally designed to serve a mixed traffic flow that consists of passenger cars, trucks, buses, recreational vehicles, etc. The fact that the impacts of these different vehicle types are not uniform creates problems in highway operations and safety. A common approach to reducing the impacts of truck traffic on freeways has been to restrict trucks to certain lane(s) to minimize the interaction between trucks and other vehicles and to compensate for their differences in operational characteristics. The performance of different truck lane restriction alternatives differs under different traffic and geometric conditions. Thus, a good estimate of the operational performance of different truck lane restriction alternatives under prevailing conditions is needed to help make informed decisions on truck lane restriction alternatives. This study develops operational performance models that can be applied to help identify the most operationally efficient truck lane restriction alternative on a freeway under prevailing conditions. The operational performance measures examined in this study include average speed, throughput, speed difference, and lane changes. Prevailing conditions include number of lanes, interchange density, free-flow speeds, volumes, truck percentages, and ramp volumes. Recognizing the difficulty of collecting sufficient data for an empirical modeling procedure that involves a high number of variables, the simulation approach was used to estimate the performance values for various truck lane restriction alternatives under various scenarios. Both the CORSIM and VISSIM simulation models were examined for their ability to model truck lane restrictions. Due to a major problem found in the CORSIM model for truck lane modeling, the VISSIM model was adopted as the simulator for this study.
The VISSIM model was calibrated mainly to replicate the capacity given in the 2000 Highway Capacity Manual (HCM) for various free-flow speeds under the ideal basic freeway section conditions. Non-linear regression models for average speed, throughput, average number of lane changes, and speed difference between the lane groups were developed. Based on the performance models developed, a simple decision procedure was recommended to select the desired truck lane restriction alternative for prevailing conditions.
Abstract:
This dissertation presents dynamic flow experiments with fluorescently labeled platelets to allow for spatial observation of wall attachment in inter-strut spacings and to investigate their relationship to flow patterns. Human blood with fluorescently labeled platelets was circulated through an in vitro system that produced physiologic pulsatile flow in (1) a parallel plate flow chamber that contained two-dimensional (2D) stents featuring completely recirculating flow, partially recirculating flow, and completely reattached flow, and (2) a three-dimensional (3D) cylindrical tube that contained stents of various geometric designs. Flow detachment and reattachment points exhibited very low platelet deposition. Platelet deposition was very low in the recirculation regions of the 3D stents, unlike the 2D stents. Deposition distal to a strut was always high in both 2D and 3D stents. Spirally recirculating regions, not present in the 2D stents, were found in the 3D stents; deposition there was higher than at well-separated regions of recirculation.
Abstract:
As traffic congestion worsens and new roadway construction is severely constrained by the limited availability of land, the high cost of land acquisition, and communities' opposition to the building of major roads, new solutions have to be sought to either make roadway use more efficient or reduce travel demand. There is general agreement that travel demand is affected by land use patterns. However, traditional aggregate four-step models, the prevailing modeling approach at present, assume when estimating trip generation that traffic conditions do not affect people's decision on whether to make a trip. Existing survey data indicate, however, that trip rates differ across geographic areas. The reasons for such differences have not been carefully studied, and attempts to quantify the influence of land use on travel demand beyond employment, households, and their characteristics have had limited success in producing results useful to the traditional four-step models. There may be a number of reasons for this: for example, the representation of the influence of land use on travel demand is aggregated rather than explicit, and land use variables such as density and mix, and accessibility as measured by travel time and congestion, have not been adequately considered. This research employs the artificial neural network (ANN) technique to investigate the potential effects of land use and accessibility on trip productions. Sixty-two variables that may potentially influence trip production are studied, including demographic, socioeconomic, land use and accessibility variables. Different ANN architectures are tested. Sensitivity analysis of the models shows that land use does have an effect on trip production, as do traffic conditions. The ANN models are compared with linear regression models and cross-classification models using the same data.
The results show that the ANN models outperform the linear regression and cross-classification models in terms of RMSE. Future work may focus on finding a representation of traffic conditions based on existing network data and population data that would be available when the variables are needed for prediction.
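The RMSE criterion used for the model comparison above can be sketched as follows; the observed and predicted trip productions are invented for illustration and are not the study's data:

```python
from math import sqrt

# Minimal sketch of the RMSE comparison; all numbers are invented.
observed = [120, 85, 98, 143, 110]   # trip productions per zone
model_a  = [115, 90, 95, 150, 108]   # e.g. ANN predictions (hypothetical)
model_b  = [100, 95, 80, 160, 130]   # e.g. linear regression (hypothetical)

def rmse(pred, obs):
    """Root mean squared error between predictions and observations."""
    return sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# lower RMSE indicates the better-fitting model
print(rmse(model_a, observed), rmse(model_b, observed))
```

In the dissertation this comparison is run on the same held-out data for all three model types, so the RMSE values are directly comparable.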
Abstract:
The two-photon exchange phenomenon is believed to be responsible for the discrepancy observed between the ratios of the proton electric and magnetic form factors measured by the Rosenbluth and polarization transfer methods. This disagreement is about a factor of three at Q2 of 5.6 GeV2. Precise knowledge of the proton form factors is of critical importance in understanding the structure of this nucleon. The theoretical models that estimate the size of the two-photon exchange (TPE) radiative correction are poorly constrained. The TPE contribution can be measured directly by taking the ratio of the electron-proton and positron-proton elastic scattering cross sections, as the TPE effect changes sign with the charge of the incident particle. A test run of a modified beamline was conducted with the CEBAF Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility. This test run demonstrated the feasibility of producing a mixed electron/positron beam of good quality. Extensive simulations performed prior to the run were used to reduce the background rate that limits the production luminosity. A 3.3 GeV primary electron beam was used, resulting in an average secondary lepton beam energy of 1 GeV. Elastic scattering data for both lepton types were obtained at scattering angles up to 40 degrees for Q2 up to 1.5 GeV2. The cross section ratio displayed an epsilon dependence that varied with Q2 at the smaller Q2 values. The magnitude of the average ratio as a function of epsilon was consistent with previous measurements, and with the elastic (Blunden) model, to within the experimental uncertainties. Ultimately, higher luminosity is needed to extend the data range to lower epsilon, where the TPE effect is predicted to be largest.
Abstract:
To provide biological insights into transcriptional regulation, several groups have recently presented models relating the transcription factors (TFs) bound to promoter DNA to the downstream gene's mean transcript level or transcript production rate over time. However, transcript production is dynamic, responding to changes in TF concentrations over time. Also, TFs are not the only factors binding to promoters; other DNA binding factors (DBFs) bind as well, especially nucleosomes, resulting in competition between DBFs for binding at the same genomic location. Additionally, elements other than TFs regulate transcription: within the core promoter, various regulatory elements influence RNAPII recruitment, PIC formation, RNAPII searching for the TSS, and RNAPII initiating transcription. Moreover, it has been proposed that, downstream from the TSS, nucleosomes resist RNAPII elongation.
Here, we provide a machine learning framework to predict transcript production rates from DNA sequences. We applied this framework in the yeast S. cerevisiae in two scenarios: a) predicting the dynamic transcript production rate during the cell cycle for native promoters; and b) predicting the mean transcript production rate over time for synthetic promoters. As far as we know, our framework is the first successful attempt to predict dynamic transcript production rates from DNA sequences alone: on the cell cycle data set, we obtained a Pearson correlation coefficient Cp = 0.751 and a coefficient of determination r2 = 0.564 on the test set for predicting the dynamic transcript production rate over time. Also, for the DREAM6 Gene Promoter Expression Prediction challenge, our fitted model outperformed all participating teams, as well as a model combining the best team's k-mer-based sequence features with biologically mechanistic features from another paper, on all scoring metrics.
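The two evaluation metrics quoted above (Cp and r2) can be computed as follows; this is a toy illustration on invented numbers, not the study's predictions:

```python
from math import sqrt

# Toy illustration of the two reported metrics; values are invented.
y_true = [2.0, 3.5, 1.0, 4.0, 2.5]   # measured production rates
y_pred = [2.2, 3.0, 1.4, 3.6, 2.8]   # model predictions (hypothetical)

n = len(y_true)
mt, mp = sum(y_true) / n, sum(y_pred) / n
cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
st = sqrt(sum((a - mt) ** 2 for a in y_true))
sp = sqrt(sum((b - mp) ** 2 for b in y_pred))
pearson = cov / (st * sp)            # Cp: linear association only

# r2: fraction of variance in y_true explained by the predictions
ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
ss_tot = sum((a - mt) ** 2 for a in y_true)
r2 = 1 - ss_res / ss_tot
print(f"Cp = {pearson:.3f}, r2 = {r2:.3f}")
```

Reporting both is informative because Cp rewards the correct shape of the time course while r2 also penalises systematic offset and scale errors.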
Moreover, our framework is capable of identifying generalizable features by interpreting the highly predictive models, thereby providing support for associated hypothesized mechanisms of transcriptional regulation. With the learned sparse linear models, we obtained results supporting the following biological insights: a) TFs govern the probability of RNAPII recruitment and initiation, possibly through interactions with PIC components and transcription cofactors; b) the core promoter amplifies transcript production, probably by influencing PIC formation, RNAPII recruitment, DNA melting, RNAPII searching for and selecting the TSS, the release of RNAPII from general transcription factors, and thereby initiation; c) there is strong transcriptional synergy between TFs and core promoter elements; d) the regulatory elements within the core promoter region comprise more than the TATA box and the nucleosome-free region, suggesting the existence of still unidentified TAF-dependent and cofactor-dependent core promoter elements in the yeast S. cerevisiae; e) nucleosome occupancy is helpful for representing the regulatory roles of the +1 and -1 nucleosomes on transcription.
Abstract:
Mixtures of Zellner's g-priors have been studied extensively in linear models and have been shown to have numerous desirable properties for Bayesian variable selection and model averaging. Several extensions of g-priors to Generalized Linear Models (GLMs) have been proposed in the literature; however, the choice of the prior distribution of g and the resulting properties for inference have received considerably less attention. In this paper, we extend mixtures of g-priors to GLMs by assigning the truncated Compound Confluent Hypergeometric (tCCH) distribution to 1/(1+g) and illustrate how this prior distribution encompasses several special cases of mixtures of g-priors in the literature, such as the Hyper-g, truncated Gamma, Beta-prime, and Robust priors. Under an integrated Laplace approximation to the likelihood, the posterior distribution of 1/(1+g) is in turn a tCCH distribution, and approximate marginal likelihoods are thus available analytically. We discuss the local geometric properties of the g-prior in GLMs and show that specific choices of the hyper-parameters satisfy the various desiderata for model selection proposed by Bayarri et al., such as asymptotic model selection consistency, information consistency, intrinsic consistency, and measurement invariance. We also illustrate inference using these priors and contrast them to others in the literature via simulation and real examples.
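As a small numerical illustration of one special case mentioned above: under the hyper-g prior of Liang et al. (2008), with density proportional to (1+g)^(-a/2), the transformed quantity 1/(1+g) follows a Beta(a/2 - 1, 1) distribution, so its moments can be checked by simulation. The tCCH prior in the paper generalizes this family; the value of a below is just a common illustrative choice.

```python
import random

# Hedged sketch: under the hyper-g prior, u = 1/(1+g) ~ Beta(a/2 - 1, 1).
# For a = 3, E[u] = (a/2 - 1)/(a/2) = 1/3, so E[g/(1+g)] = 2/3.
random.seed(0)
a = 3.0
u = [random.betavariate(a / 2 - 1, 1) for _ in range(200_000)]

mean_u = sum(u) / len(u)     # Monte Carlo estimate of E[1/(1+g)]
shrink = 1 - mean_u          # estimate of the shrinkage factor E[g/(1+g)]
print(f"E[1/(1+g)] = {mean_u:.3f}, E[g/(1+g)] = {shrink:.3f}")
```

Working on the 1/(1+g) scale, as the paper does, is convenient precisely because this quantity lives on (0, 1) and acts as a shrinkage weight.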
Abstract:
Spectral unmixing (SU) is a technique to characterize the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem is sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main aim of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods via such approximations. The resulting methods are shown to reconstruct the fractional abundances of endmembers considerably better than state-of-the-art methods, for example with lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, because of its notable advantages, such as enforcing the nonnegativity constraints on the two factor matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the energy of SSoM concentrated in the first few subbands.
Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, including smaller values of spectral angle distance (SAD) and abundance angle distance (AAD).
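In its simplest form, the semiblind (known-library) scenario described in the first part reduces to a nonnegative least-squares problem per pixel. A minimal projected-gradient sketch on synthetic data follows; the library, pixel and abundances are invented, and the $\ell_0$ surrogates proposed in the thesis are omitted:

```python
import numpy as np

# Hedged sketch: per-pixel linear unmixing min ||A x - b||^2 s.t. x >= 0,
# solved by projected gradient descent. All data are synthetic.
rng = np.random.default_rng(1)
A = rng.random((20, 5))                          # 20 bands, 5 library signatures
x_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])     # sparse true abundances
b = A @ x_true                                   # noiseless mixed pixel

x = np.full(5, 0.2)                              # initial abundance guess
step = 1.0 / np.linalg.norm(A.T @ A, 2)          # 1/Lipschitz-constant step
for _ in range(5000):
    x -= step * (A.T @ (A @ x - b))              # gradient step
    x = np.clip(x, 0.0, None)                    # project onto x >= 0

print(np.round(x, 2))
```

The thesis's contribution sits on top of this baseline: extra penalty terms approximating the $\ell_0$ norm push small spurious abundances exactly to zero.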
Abstract:
AIMS: To investigate the local, regulatory role of the mucosa in the contractility of bladder strips from normal and overactive bladders, and to examine the effect of botulinum toxin A (BoNT-A).
METHODS: Bladder strips from spontaneously hyperactive rat (SHR) or normal rats (Sprague Dawley, SD) were dissected for myography as intact or mucosa-free preparations. Spontaneous, neurogenic and agonist-evoked contractions were investigated. SHR strips were incubated in BoNT-A (3 h) to assess effects on contractility.
RESULTS: Spontaneous contraction amplitude, force-integral and frequency were not significantly different in SHR mucosa-free strips compared with intact strips. In contrast, spontaneous contraction amplitude and force-integral were smaller in SD mucosa-free strips than in intact strips; frequency was not affected by the mucosa. The frequency of spontaneous contractions in SHR strips was significantly greater than in SD strips. Neurogenic contractions at higher stimulation frequencies were smaller in mucosa-free SHR and SD strips than in intact strips. The mucosa did not affect carbachol-evoked contractions in intact versus mucosa-free strips from SHR or SD bladders. BoNT-A reduced spontaneous contractions in SHR intact strips; the same trend was observed in mucosa-free strips but was not significant. Neurogenic and carbachol-evoked contractions were reduced by BoNT-A in mucosa-free but not intact strips. Depolarisation-induced contractions were smaller in BoNT-A-treated mucosa-free strips.
CONCLUSIONS: The mucosal layer positively modulates spontaneous contractions in strips from normal SD but not overactive SHR bladder strips. The novel finding of BoNT-A reduction of contractions in SHR mucosa-free strips indicates actions on the detrusor, independent of its classical action on neuronal SNARE complexes.
Abstract:
The influence of two types of graphene nanoplatelets (GNPs) on the physico-mechanical properties of linear low-density polyethylene (LLDPE) was investigated. The addition of these two types of GNPs – designated as grades C and M – enhanced the thermal conductivity of the LLDPE, with a more pronounced improvement resulting from the M-GNPs compared to C-GNPs. Improvement in electrical conductivity and decomposition temperature was also noticed with the addition of GNPs. In contrast to the thermal conductivity, C-GNPs resulted in greater improvements in the electrical conductivity and thermal decomposition temperature. These differences can be attributed to differences in the surface area and dispersion of the two types of GNPs.
Abstract:
[EN] To compare the one-year effect of two dietary interventions with the Mediterranean diet (MeDiet) on dietary glycemic load (GL) and glycemic index (GI) in the PREDIMED trial. Methods. Participants were older subjects at high risk for cardiovascular disease; this analysis included 2866 nondiabetic subjects. Diet was assessed with a validated 137-item food frequency questionnaire (FFQ). The GI of each FFQ item was assigned by a 5-step methodology using the International Tables of GI and GL Values. Generalized linear models were fitted to assess the relationship between the intervention group and dietary GL and GI at one year of follow-up, using the control group as the reference.
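The GL assignment described above follows the standard definition GL = GI x available carbohydrate (g) / 100, summed over FFQ items, with dietary GI usually computed as the carbohydrate-weighted mean of item GIs. A hedged sketch with invented food items and intakes (not the PREDIMED FFQ):

```python
# Hedged sketch of dietary GL/GI computation from FFQ-style data.
# Items, GI values and carbohydrate intakes are illustrative only.
ffq_items = [
    # (food, assigned GI, available carbohydrate consumed, g/day)
    ("white bread", 75, 60.0),
    ("lentils",     32, 20.0),
    ("apple",       36, 25.0),
]

# glycemic load: GL = GI * carbohydrate(g) / 100, summed over items
daily_gl = sum(gi * carbs / 100 for _, gi, carbs in ffq_items)

# dietary GI: carbohydrate-weighted mean of item GIs
mean_gi = (sum(gi * carbs for _, gi, carbs in ffq_items)
           / sum(carbs for *_, carbs in ffq_items))
print(f"dietary GL = {daily_gl:.1f}, dietary GI = {mean_gi:.1f}")
```

In the trial these quantities are the outcomes modelled against intervention group in the generalized linear models.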