903 results for Generalization of Ehrenfest's urn Model
Abstract:
The convection-dispersion model and its extended form have been used to describe solute disposition in organs and to predict hepatic availabilities. A range of empirical transit-time density functions has also been used for a similar purpose. The use of the dispersion model with mixed boundary conditions and transit-time density functions has recently been queried by Hisaka and Sugiyama in this journal. We suggest that, consistent with the soil science and chemical engineering literature, the mixed boundary conditions are appropriate provided that concentrations are defined in terms of flux, to ensure continuity at the boundaries and mass balance. It is suggested that the use of the inverse Gaussian or other functions as empirical transit-time densities is independent of any boundary condition consideration. The mixed boundary condition solutions of the convection-dispersion model are the easiest to use when linear kinetics applies. In contrast, the closed conditions are easier to apply in a numerical analysis of nonlinear disposition of solutes in organs. We therefore argue that the choice of hepatic elimination model should be based on pragmatic considerations, giving emphasis to the simplest or easiest solution that will give a sufficiently accurate prediction of hepatic pharmacokinetics for a particular application. (C) 2000 Wiley-Liss Inc. and the American Pharmaceutical Association. J Pharm Sci 89:1579-1586, 2000.
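For reference, the mixed (flux-type) boundary conditions discussed above are commonly written in the Danckwerts form. The following is a sketch of a dimensionless axial convection-dispersion model with these conditions, using generic symbols standard in the literature (dispersion number D_N, first-order elimination rate constant k) rather than the paper's own notation:

```latex
\frac{\partial C}{\partial t}
  = D_N \frac{\partial^2 C}{\partial z^2}
  - \frac{\partial C}{\partial z} - kC,
\qquad
\left. C - D_N \frac{\partial C}{\partial z} \right|_{z=0} = C_{\mathrm{in}},
\qquad
\left. \frac{\partial C}{\partial z} \right|_{z=1} = 0 .
```

The inlet condition equates the entering flux with the convective-plus-dispersive flux just inside the organ, which is what ensures the mass balance referred to in the abstract.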
Abstract:
We consider the electronic properties of layered molecular crystals of the type theta-D2A, where A is an anion and D is a donor molecule such as bis-(ethylenedithia-tetrathiafulvalene) (BEDT-TTF), which is arranged in the theta-type pattern within the layers. We argue that the simplest strongly correlated electron model that can describe the rich phase diagram of these materials is the extended Hubbard model on the square lattice at one-quarter filling. In the limit where the Coulomb repulsion on a single site is large, the nearest-neighbor Coulomb repulsion V plays a crucial role. When V is much larger than the intermolecular hopping integral t, the ground state is an insulator with charge ordering. In this phase antiferromagnetism arises due to a novel fourth-order superexchange process around a plaquette on the square lattice. We argue that the charge-ordered phase is destroyed below a critical nonzero value V_c, of the order of t. Slave-boson theory is used to explicitly demonstrate this for the SU(N) generalization of the model, in the large-N limit. We also discuss the relevance of the model to the all-organic family beta-(BEDT-TTF)2SF5YSO3, where Y = CH2CF2, CH2, CHF.
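The extended Hubbard model named in this abstract has a standard form, sketched here in conventional notation (t, U, V as in the abstract; the sum over angle brackets runs over nearest-neighbor pairs on the square lattice):

```latex
H = -t \sum_{\langle ij \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    + V \sum_{\langle ij \rangle} n_{i} n_{j},
```

with the band one-quarter filled; the charge-ordered insulating phase described above arises when V is much larger than t.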
Abstract:
This article proposes a systemic model composed of the micro and small enterprises (MSE) of the Ribeirão Preto region and the agents that influence their environment. The proposed model was based on Stafford Beer's VSM (Viable System Model) systemic methodology (Diagnosing the system for organizations. Chichester, Wiley, 1985) and on Werner Ulrich's (1983) CSH (Critical Systems Heuristics). The VSM is a model for diagnosing the structure of an organization and its flows of information through the application of cybernetics concepts (Narvarte, In El Modelo del Sistema Viable-MSV: experiencias de su aplicación en Chile. Proyecto Cerebro Colectivo del IAS, Santiago, 2001). The CSH, on the other hand, focuses on the context of the social group from a systemic perspective, as a counterpoint to the organizational management view taken by the VSM. The MSE of Ribeirão Preto and Sertãozinho were analyzed as organizations embedded in systems that relate to and integrate with other systems involving the public administration, representative entities and promotion agencies. The research questions are: what are the bonds of interaction among the subsystems in this process, and who are the agents involved? The systemic approach not only diagnosed a social group, formed by the MSE of Ribeirão Preto and Sertãozinho, public authorities and support entities, but was also able to delineate answers aimed at clarifying obscure questions, providing support for the formulation of efficient actions for the development of this system.
Abstract:
Brazil has consolidated itself as the world's largest producer of sugarcane, sugar and ethanol. The creation of the Programa Nacional do Alcool (PROALCOOL) and the growing use of flexible-fuel cars were among the factors that further stimulated production. Advances in agricultural and industrial research have made Brazil highly competitive in sugar and ethanol globally, as is evident when comparing the quantities produced in the country and its production costs, which have become a major differential. Cost management is therefore of great relevance to sugar and ethanol companies, since it represents a significant rationalization of production processes, with savings of resources and better earnings, besides reducing the operational risk associated with fixed production costs. Thus, the present work aims to analyze the cost structure of sugar and ethanol companies in the Center-South region of the country through an empirical-analytical study based on methodologies and concepts drawn from cost accounting. It is found that a large part of the costs and operational expenses behave as variable costs, a positive factor for the sector since it reduces the operational risk of the activity. The main limitation of this study is the sample, covering five years and 10% of the plants in Brazil, which, although representing 30% of the national production, does not allow generalization of the model.
Abstract:
In this second paper, the three structural measures which have been developed are used in the modelling of a three-stage centrifugal synthesis gas compressor. The goal of this case study is to determine the essential mathematical structure which must be incorporated into the compressor model to accurately model the shutdown of this system. A simple, accurate and functional model of the system is created via the three structural measures. It was found that the model can be correctly reduced into its basic modes and that the order of the differential system can be reduced from 51st to 20th. Of the 31 differential equations, 21 reduce to algebraic relations, 8 become constants and 2 can be deleted, thereby increasing the algebraic set from 70 to 91 equations. An interpretation is also obtained as to which physical phenomena dominate the dynamics of the compressor and whether the compressor will enter surge during the shutdown. Comparisons of the reduced model performance against the full model are given, showing the accuracy and applicability of the approach. Copyright (C) 1996 Elsevier Science Ltd
Abstract:
A number of theoretical and experimental investigations have been made into the nature of purlin-sheeting systems over the past 30 years. These systems commonly consist of cold-formed zed or channel section purlins, connected to corrugated sheeting. They have proven difficult to model due to the complexity of both the purlin deformation and the restraint provided to the purlin by the sheeting. Part 1 of this paper presented a non-linear elasto-plastic finite element model which, by incorporating both the purlin and the sheeting in the analysis, allowed the interaction between the two components of the system to be modelled. This paper presents a simplified version of the first model which has considerably decreased requirements in terms of computer memory, running time and data preparation. The Simplified Model includes only the purlin but allows for the sheeting's shear and rotational restraints by modelling these effects as springs located at the purlin-sheeting connections. Two accompanying programs determine the stiffness of these springs numerically. As in the Full Model, the Simplified Model is able to account for the cross-sectional distortion of the purlin, the shear and rotational restraining effects of the sheeting, and failure of the purlin by local buckling or yielding. The model requires no experimental or empirical input and its validity is shown by its good correlation with experimental results. (C) 1997 Elsevier Science Ltd.
Abstract:
Background: Meta-analysis is increasingly being employed as a screening procedure in large-scale association studies to select promising variants for follow-up studies. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecification, causing power to be suboptimal, or require the evaluation of multiple genetic models, which inflates the number of false-positive associations, ultimately leading to a waste of resources on fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification to optimize screening of genome-wide meta-analysis signals for further replication. Methods: Different methods, meta-analytical models and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait over a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios and between-study heterogeneity (tau(2)). Results: Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was found to be optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs, either in the presence or absence of heterogeneity. Nonetheless, this strategy is sensitive to tau(2) whenever the susceptibility allele is common (MAF >= 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion: Invoking a simple Bonferroni adjustment and testing both multiplicative and recessive models is fast and an optimal strategy in large meta-analysis-based screenings. However, care must be taken when the examined variants are common, where specification of a multiplicative model alone may be preferable.
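The Bonferroni-corrected two-model strategy described in the Results can be sketched in a few lines of stdlib-only Python. This is a minimal illustration, not the authors' code: it assumes a Cochran-Armitage trend test with codings (0, 1, 2) for the multiplicative and (0, 0, 1) for the recessive model, applied to per-genotype case/control counts, and the threshold-halving is the only part taken directly from the strategy as stated.

```python
import math

def trend_p(cases, controls, weights):
    """Two-sided Cochran-Armitage trend test p-value for genotype counts
    (cases and controls are counts for genotypes aa, aA, AA)."""
    n = [a + b for a, b in zip(cases, controls)]
    N, R = sum(n), sum(cases)
    # Test statistic: weighted deviation of case counts from expectation.
    T = sum(w * (r - ni * R / N) for w, r, ni in zip(weights, cases, n))
    var = R * (N - R) / N**3 * (
        N * sum(w * w * ni for w, ni in zip(weights, n))
        - sum(w * ni for w, ni in zip(weights, n)) ** 2)
    z = T / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

def screen_variant(cases, controls, alpha=5e-8):
    """Bonferroni-corrected two-model screen: fit multiplicative (0,1,2)
    and recessive (0,0,1) codings; flag the variant if either p-value
    passes the halved threshold alpha/2."""
    p_mult = trend_p(cases, controls, (0, 1, 2))
    p_rec = trend_p(cases, controls, (0, 0, 1))
    return min(p_mult, p_rec) < alpha / 2, p_mult, p_rec
```

A variant with a strong allele-dose effect passes the screen under either coding, while a variant with identical case and control genotype distributions gives p = 1 under both.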
Abstract:
Vitamin D (VD) is a steroid hormone with multiple functions in the central nervous system (CNS), producing numerous physiological effects mediated by its receptor (VDR). Clinical and experimental studies have shown a link between VD dysfunction and epilepsy. Along these lines, the purpose of our work was to analyze the relative expression of VDR mRNA in the hippocampal formation of rats during the three periods of pilocarpine-induced epilepsy. Male Wistar rats were divided into five groups: (1) control group: rats that received saline 0.9%, i.p., and were killed 7 days after its administration (CTRL, n = 8); (2) SE group: rats that received pilocarpine and were killed 4 h after SE (SE, n = 8); (3) silent group-7 days: rats that received pilocarpine and were killed 7 days after SE (SIL 7d, n = 8); (4) silent group-14 days: rats that received pilocarpine and were killed 14 days after SE (SIL 14d, n = 8); (5) chronic group: rats that received pilocarpine and were killed 60 days after the first spontaneous seizure (Chronic, n = 8). The relative expression of VDR mRNA was determined by real-time PCR. Our results showed an increase in the relative expression of VDR mRNA in the SIL 7d, SIL 14d and Chronic groups, respectively (0.060 +/- 0.024; 0.052 +/- 0.035; 0.085 +/- 0.055), when compared with the CTRL and SE groups (0.019 +/- 0.017; 0.019 +/- 0.025). These data suggest the VDR as a possible candidate participating in the epileptogenesis process of the pilocarpine model of epilepsy. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
Obstetric complications play a role in the pathophysiology of schizophrenia. However, the biological consequences during neurodevelopment until adulthood are unknown. Microarrays have been used for expression profiling in four brain regions of a rat model of neonatal hypoxia as a common factor of obstetric complications. Animals were repeatedly exposed to chronic hypoxia from postnatal day (PD) 4 through day 8 and killed at the age of 150 days. Additional groups of rats were treated with clozapine from PD 120-150. Self-spotted chips containing 340 cDNAs related to the glutamate system ("glutamate chips") were used. The data show differential (up and down) regulation of numerous genes in frontal (FR), temporal (TE) and parietal (PAR) cortex, and in caudate putamen (CPU); evidently, many more genes are upregulated in frontal and temporal cortex, whereas in parietal cortex the majority of genes are downregulated. Because of their primarily presynaptic occurrence, five differentially expressed genes (CPX1, NPY, NRXN1, SNAP-25, and STX1A) were selected for comparisons with clozapine-treated animals by qRT-PCR. Complexin 1 is upregulated in FR and TE cortex but unchanged in PAR by hypoxic treatment; clozapine downregulates it in FR but upregulates it in PAR cortex. Similarly, syntaxin 1A was upregulated in FR but downregulated in TE and unchanged in PAR cortex, whereas clozapine downregulated it in FR but upregulated it in PAR cortex. Hence, hypoxia alters gene expression in a regionally specific manner, which is in agreement with reports on differentially expressed presynaptic genes in schizophrenia. Chronic clozapine treatment may contribute to normalizing synaptic connectivity.
Abstract:
Historically, the cure rate model has been used for modeling time-to-event data in which a significant proportion of patients are assumed to be cured of illnesses, including breast cancer, non-Hodgkin lymphoma, leukemia, prostate cancer, melanoma, and head and neck cancer. Perhaps the most popular type of cure rate model is the mixture model introduced by Berkson and Gage [1]. In this model, it is assumed that a certain proportion of the patients are cured, in the sense that they do not present the event of interest during a long period of time and can be considered immune to the cause of failure under study. In this paper, we propose a general hazard model which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage. The maximum likelihood estimation procedure is discussed. A simulation study analyzes the coverage probabilities of the asymptotic confidence intervals for the parameters. A real data set on children exposed to HIV by vertical transmission illustrates the methodology.
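The Berkson and Gage mixture model referred to above can be sketched in a few lines. This is an illustrative sketch only, not the paper's general hazard model: the latency distribution is taken to be exponential purely for concreteness, and pi denotes the cured fraction.

```python
import math

def mixture_survival(t, pi, lam):
    """Berkson-Gage mixture cure survival: S(t) = pi + (1 - pi) * S0(t),
    with an illustrative exponential latency S0(t) = exp(-lam * t)."""
    return pi + (1 - pi) * math.exp(-lam * t)

def log_likelihood(times, events, pi, lam):
    """Right-censored log-likelihood: uncured density (1-pi)*lam*exp(-lam*t)
    contributes for observed events (event flag 1), the mixture survival
    for censored observations (event flag 0)."""
    ll = 0.0
    for t, d in zip(times, events):
        if d:
            ll += math.log((1 - pi) * lam) - lam * t
        else:
            ll += math.log(mixture_survival(t, pi, lam))
    return ll
```

Note that S(t) tends to pi rather than zero as t grows, which is exactly the "cured proportion" interpretation in the abstract; maximizing the log-likelihood over (pi, lam) recovers the cure fraction from censored data.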
Abstract:
Ligaments undergo finite strain displaying hyperelastic behaviour as the initially tangled fibrils present straighten out, combined with viscoelastic behaviour (strain rate sensitivity). In the present study the anterior cruciate ligament of the human knee joint is modelled in three dimensions to gain an understanding of the stress distribution over the ligament due to motion imposed on the ends, determined from experimental studies. A three dimensional, finite strain material model of ligaments has recently been proposed by Pioletti in Ref. [2]. It is attractive as it separates out elastic stress from that due to the present strain rate and that due to the past history of deformation. However, it treats the ligament as isotropic and incompressible. While the second assumption is reasonable, the first is clearly untrue. In the present study an alternative model of the elastic behaviour due to Bonet and Burton (Ref. [4]) is generalized. Bonet and Burton consider finite strain with constant moduli for the fibres and for the matrix of a transversely isotropic composite. In the present work, the fibre modulus is first made to increase exponentially from zero with an invariant that provides a measure of the stretch in the fibre direction. At 12% strain in the fibre direction, a new reference state is then adopted, after which the material modulus is made constant, as in Bonet and Burton's model. The strain rate dependence can be added, either using Pioletti's isotropic approximation, or by making the effect depend on the strain rate in the fibre direction only. A solid model of a ligament is constructed, based on experimentally measured sections, and the deformation predicted using explicit integration in time. This approach simplifies the coding of the material model, but has a limitation due to the detrimental effect on stability of integration of the substantial damping implied by the nonlinear dependence of stress on strain rate.
At present, an artificially high density is being used to provide stability, while the dynamics are being removed from the solution using artificial viscosity. The result is a quasi-static solution incorporating the effect of strain rate. Alternate approaches to material modelling and integration are discussed, that may result in a better model.
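The elastic part of the material law described above can be sketched in conventional invariant notation. This is a hedged illustration only: the Holzapfel-type exponential form and the constants k_1, k_2 are generic assumptions standing in for the authors' exact expressions, with mu a matrix shear modulus:

```latex
\Psi = \frac{\mu}{2}\,(I_1 - 3) + \Psi_f(I_4),
\qquad
\frac{\partial \Psi_f}{\partial I_4} =
\begin{cases}
\dfrac{k_1}{2 k_2}\left( e^{\,k_2 (I_4 - 1)} - 1 \right), & I_4 \le I_4^{*},\\[2mm]
\text{const.}, & I_4 > I_4^{*},
\end{cases}
```

where I_4 = lambda^2 is the invariant measuring the squared stretch in the fibre direction, the fibre stiffness grows exponentially from zero as in the text, and I_4* marks the transition (the 12% fibre strain mentioned above) beyond which the modulus is held constant as in Bonet and Burton's model.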
Abstract:
Pasminco Century Mine has developed a geophysical logging system to provide new data for ore mining/grade control and the generation of Short Term Models for mine planning. Previous work indicated the applicability of petrophysical logging for lithology prediction; however, the automation of the method was not considered reliable enough for the development of a mining model. A test survey was undertaken using two diamond-drilled control holes and eight percussion holes. All holes were logged with natural gamma, magnetic susceptibility and density. Calibration of the LogTrans auto-interpretation software using only natural gamma and magnetic susceptibility indicated that both lithology and stratigraphy could be predicted. Development of a capability to enforce stratigraphic order within LogTrans increased the reliability and accuracy of interpretations. After the completion of a feasibility program, Century Mine has invested in a dedicated logging vehicle to log blast holes as well as for use in in-fill drilling programs. Future refinement of the system may lead to the development of GPS-controlled excavators for mining ore.
Abstract:
The aim of this study was to determine how well Gray's model of personality [Gray, J.A. (1982). The neuropsychology of anxiety: an enquiry into the functions of the septo-hippocampal system. Oxford: Oxford University Press; Gray, J.A. (1987). The psychology of fear and stress. Cambridge: Cambridge University Press], as measured by the Gray-Wilson Personality Questionnaire (GWPQ), can provide a full description of personality as measured by the primary scales of the Eysenck Personality Profiler (EPP) and the type scales of the short version of the EPQ-R. Factor analysis of the GWPQ, the Anxiety and Impulsivity scales of the EPP, and the Learning Styles Questionnaire (LSQ) showed that the GWPQ seemed to measure general activation and inhibition factors, but not the finer features of Gray's theory. When the GWPQ scales were regressed against each scale of the EPP, it was found that they generally provide only a reasonable explanation of the EPP primary scales. It is concluded that the GWPQ measures general properties of Gray's model, that the Impulsivity and Anxiety scales of the EPP also seem related to the GWPQ scales, and that Gray's model of personality provides only a partial explanation of personality in general. (C) 2002 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
This paper addresses robust model-order reduction of a high-dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Based on a nonlinear, distributed parameter model of the same process, which was validated against experimental data of an existing, pilot-scale BNR activated sludge plant, we developed a state-space model with 154 state variables in this work. A general algorithm for robustly reducing the nonlinear PDE model is presented, and based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to a model with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
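For readers unfamiliar with the family of techniques compared in this abstract, plain balanced truncation of a linear state-space model can be sketched as follows. This is a minimal numpy/scipy illustration of the square-root balancing algorithm under the assumption of a stable, minimal system; it is not the singular perturbation approximation variant and not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    """Reduce a stable LTI system (A, B, C) to order r by balanced truncation.
    Square-root algorithm: balance the Gramians, keep the r states with the
    largest Hankel singular values."""
    # Gramians: A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    L = np.linalg.cholesky(Wc)              # Wc = L L^T
    U, s, _ = np.linalg.svd(L.T @ Wo @ L)   # eigen-decomposition of L^T Wo L
    hsv = np.sqrt(s)                        # Hankel singular values, descending
    T = L @ U / np.sqrt(hsv)                # balancing transformation
    Tinv = np.linalg.inv(T)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], hsv
```

States with small Hankel singular values contribute little to the input-output behaviour, which is why truncating them (or, in the singular perturbation variant, setting their derivatives to zero to preserve the steady-state gain) yields small modelling errors.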
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.