903 results for Generalization of Ehrenfest's urn Model
Abstract:
Management systems standards (MSSs) have developed in an unprecedented manner in the last few years. These MSSs cover a wide array of different disciplines, aims and activities of organisations. Organisations are also populated with an enormous diversity of independent management systems (MSs). An integrated management system (IMS) tends to integrate some or all components of the business. Maximising their integration in one coherent and efficient MS is increasingly a strategic priority and constitutes an opportunity for businesses to be more competitive and, consequently, to promote their sustainable success. Those organisations that are quicker and more efficient in their integration and continuous improvement will have a competitive advantage in obtaining sustainable value in our global and competitive business world. Several scholars have proposed various theoretical approaches regarding the integration of management sub-systems, leading to the conclusion that there is no common practice for all organisations, as they encompass different characteristics. Another author shows that the integration of the individual standardised MSs yields several tangible and intangible gains for organisations, as well as for their internal and external stakeholders. The purpose of this work was to conceive a Flexible, Integrator and Lean model for IMSs, according to ISO 9001 for quality, ISO 14001 for environment and OHSAS 18001 for occupational health and safety (IMS–QES), that can be adapted to and progressively assimilate other MSs, such as SA 8000/ISO 26000 for social accountability, ISO 31000 for risk management and ISO/IEC 27001 for information security management, among others. The IMS–QES model was designed in the real environment of an industrial Portuguese small and medium enterprise that, over the years, has gradually adopted individual MSSs, in whole or in part. The developed model is based on a preliminary investigation conducted through a questionnaire, and the strategy and research methods followed a case study approach. Among the main findings of the survey we highlight: the creation of added value for the business through the elimination of several organisational wastes; the integrated management of the sustainability components; the elimination of conflicts between independent MSs; dialogue with the main stakeholders and commitment to their ongoing satisfaction and increased contribution to the company's competitiveness; and greater valorisation and motivation of employees as a result of the expansion of their skill base, actions and responsibilities, with their consequent empowerment. A set of key performance indicators (KPIs) supports, from a business excellence perspective, the follow-up of the organisation's progress towards the vision and the achievement of the objectives defined for each component of the IMS model. The conceived model was developed in several phases, and the one presented in this work is the last required for the integration of quality, environment, safety and other individual standardised MSs. Overall, the investigation results justified and prioritised the conception of an IMS–QES model, to be implemented at the company where the investigation was conducted, as well as a generic IMS model that is as flexible, integrative and lean as possible, enhancing efficiency and added value both in the present and, fundamentally, in the future.
Abstract:
The aim of this paper is to analyze the forecasting ability of the CARR model proposed by Chou (2005) using the S&P 500. We extend the data sample, allowing for the analysis of different stock market circumstances, and propose the use of various range estimators in order to analyze their forecasting performance. Our results show that there are two range-based models that outperform the forecasting ability of the GARCH model. The Parkinson model is better for upward trends and for volatilities that are higher or lower than the mean, while the CARR model is better for downward trends and for volatilities around the mean.
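As a concrete illustration of one of the range estimators discussed above, the following minimal Python sketch computes the Parkinson volatility estimate from daily high/low prices; the price values are placeholders invented for the example, not S&P 500 data.

import numpy as np

def parkinson_volatility(high, low):
    """Parkinson (1980) range-based volatility estimate from daily high/low prices."""
    log_hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(log_hl ** 2) / (4.0 * np.log(2.0)))

# Placeholder daily highs/lows for illustration only.
high = [101.2, 102.5, 103.1, 101.9, 102.8]
low = [99.8, 100.9, 101.4, 100.2, 101.0]
print(f"Parkinson daily volatility estimate: {parkinson_volatility(high, low):.4f}")

A CARR or GARCH specification would then be fitted to such range or return series to produce the out-of-sample forecasts compared in the paper.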
Abstract:
This paper describes the implementation of a distributed model predictive approach for automatic generation control. Performance results are discussed by comparing classical techniques (based on integral control) with model predictive control solutions (centralized and distributed) for different operational scenarios with two interconnected networks. These scenarios include variable load levels (ranging from a small to a large imbalance between generated power and power consumption) and, simultaneously, variable distance between the interconnected network systems. For the two networks, the paper also examines the impact of load variation in an islanded context (each network isolated from the other).
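The centralized model predictive control idea mentioned above can be sketched, under simplifying assumptions, as a finite-horizon quadratic program solved at every control step; the single-area state-space matrices, horizon length and weights below are illustrative assumptions, not the paper's two-network AGC formulation.

import numpy as np
import cvxpy as cp

# Illustrative discrete-time linearised area model x_{k+1} = A x_k + B u_k + E d_k
# (states: frequency deviation, mechanical power, valve position; assumed values).
A = np.array([[0.9, 0.05, 0.0],
              [0.0, 0.8,  0.1],
              [0.0, 0.0,  0.7]])
B = np.array([[0.0], [0.0], [0.3]])
E = np.array([[-0.1], [0.0], [0.0]])

N = 20                      # prediction horizon
x0 = np.zeros(3)
d = 0.1                     # step load disturbance (per unit)

x = cp.Variable((3, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    # Penalise frequency deviation and control effort.
    cost += 10 * cp.square(x[0, k]) + cp.square(u[0, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + E.flatten() * d,
                    cp.abs(u[0, k]) <= 0.5]        # generation-rate style limit

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", u.value[0, 0])

In a receding-horizon implementation only this first move is applied and the problem is re-solved at the next step; a distributed variant would split the decision variables between the interconnected areas and iterate on the coupling terms.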
Abstract:
In this work, we present the explicit series solution of a specific mathematical model from the literature, the Deng bursting model, which mimics the glucose-induced electrical activity of pancreatic beta-cells (Deng, 1993). For this purpose, we use a technique developed to find analytic approximate solutions for strongly nonlinear problems. This analytical algorithm involves an auxiliary parameter which provides us with an efficient way to ensure the rapid and accurate convergence to the exact solution of the bursting model. By using the homotopy solution, we investigate the dynamical effect of a biologically meaningful bifurcation parameter rho, which increases with the glucose concentration. Our analytical results are found to be in excellent agreement with the numerical ones. This work provides an illustration of how our understanding of biophysically motivated models can be directly enhanced by the application of a new analytic method.
Abstract:
In this article we analytically solve the Hindmarsh-Rose model (Proc R Soc Lond B 221:87-102, 1984) by means of a technique developed for strongly nonlinear problems, the step homotopy analysis method. This analytical algorithm, based on a modification of the standard homotopy analysis method, allows us to obtain a one-parameter family of explicit series solutions for the studied neuronal model. The Hindmarsh-Rose system represents a paradigmatic example of models developed to qualitatively reproduce the electrical activity of cell membranes. By using the homotopy solutions, we investigate the dynamical effect of two chosen biologically meaningful bifurcation parameters: the injected current I and the parameter r, representing the ratio of time scales between spiking (fast dynamics) and resting (slow dynamics). The auxiliary parameter involved in the analytical method provides us with an elegant way to ensure convergent series solutions of the neuronal model. Our analytical results are found to be in excellent agreement with the numerical simulations.
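For reference, the numerical simulations against which the series solutions are checked can be reproduced with a standard ODE integrator; the sketch below integrates the three-variable Hindmarsh-Rose equations with commonly used parameter values (the particular choices of I and r are assumptions made for illustration).

import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, state, I=3.25, r=0.006, a=1.0, b=3.0, c=1.0, d=5.0, s=4.0, x_rest=-1.6):
    """Three-variable Hindmarsh-Rose neuron model."""
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I      # membrane potential (fast)
    dy = c - d * x**2 - y                     # recovery variable (fast)
    dz = r * (s * (x - x_rest) - z)           # adaptation current (slow)
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, 0.0, 0.0], max_step=0.1)
print("final state:", sol.y[:, -1])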
Abstract:
Motivated by the dark matter and the baryon asymmetry problems, we analyze a complex singlet extension of the Standard Model with a Z_2 symmetry (which provides a dark matter candidate). After a detailed two-loop calculation of the renormalization group equations for the new scalar sector, we study the radiative stability of the model up to a high energy scale (with the constraint that the 126 GeV Higgs boson found at the LHC is in the spectrum) and find it requires the existence of a new scalar state mixing with the Higgs with a mass larger than 140 GeV. This bound is not very sensitive to the cutoff scale as long as the latter is larger than 10^10 GeV. We then include all experimental and observational constraints/measurements from collider data, from dark matter direct detection experiments, and from the Planck satellite, and in addition force stability at least up to the grand unified theory scale, to find that the lower bound is raised to about 170 GeV, while the dark matter particle must be heavier than about 50 GeV.
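The radiative-stability analysis summarised above relies on running the scalar couplings to high scales and checking that the potential remains bounded. The toy Python sketch below illustrates only the idea, with a heavily truncated one-loop beta function for a Higgs-like quartic coupling that keeps just the self-coupling and top-Yukawa terms and freezes the Yukawa at its weak-scale value; all coefficients and boundary values are simplifying assumptions, whereas the paper's actual analysis uses the full two-loop equations of the singlet-extended model.

import numpy as np
from scipy.integrate import solve_ivp

Y_T = 0.94          # top Yukawa, frozen at an illustrative weak-scale value

def beta_lambda(t, lam):
    """Truncated one-loop running of the quartic coupling; t = ln(mu / m_t)."""
    return (24.0 * lam**2 + 12.0 * lam * Y_T**2 - 6.0 * Y_T**4) / (16.0 * np.pi**2)

t_max = np.log(1e10 / 173.0)            # run from roughly m_t up to 10^10 GeV
sol = solve_ivp(beta_lambda, (0.0, t_max), [0.13])

lam_end = sol.y[0, -1]
print(f"lambda at 10^10 GeV: {lam_end:.3f}",
      "(potential stays bounded)" if lam_end > 0 else "(lambda runs negative: instability)")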
Abstract:
The efficacy of the flucytosine (5-FC) and fluconazole (FLU) combination in the treatment of a murine experimental model of cryptococcosis was evaluated. Seven groups of 10 BALB/c mice each were intraperitoneally inoculated with 10^7 cells of Cryptococcus neoformans. Six groups were allocated to receive 5-FC (300 mg/kg) and FLU (16 mg/kg), either combined or individually, by daily gavage beginning 5 days after the infection, for 2 and 4 weeks. One group received distilled water and was used as control. The evaluation of treatments was based on: survival time; macroscopic examination of brain, lungs, liver and spleen at autopsy; presence of encapsulated yeasts in microscopic examination of wet preparations of these organs; and cultures of brain homogenate. 5-FC and FLU, individually or combined, significantly prolonged the survival time of the treated animals with respect to the control group (p<0.01). Animals treated for 4 weeks survived significantly longer than those treated for 2 weeks (p<0.01). No significant differences between the animals treated with 5-FC and FLU combined or separately were observed in the survival time and morphological parameters. The combination of 5-FC and FLU does not seem to be more effective than 5-FC or FLU alone in the treatment of this experimental model of cryptococcosis.
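Survival-time comparisons of the kind reported above are typically made with Kaplan-Meier curves and a log-rank test; the short sketch below shows how such a comparison can be run in Python with the lifelines package, using placeholder survival times that are not the study's data.

import numpy as np
from lifelines.statistics import logrank_test

# Illustrative survival times in days (placeholder values, NOT the study's data).
# event = 1 means the animal died during follow-up, 0 means censored at study end.
days_treated = np.array([30, 35, 42, 56, 56, 60, 60, 60, 60, 60])
event_treated = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
days_control = np.array([12, 14, 15, 18, 20, 21, 22, 25, 27, 30])
event_control = np.ones(10, dtype=int)

result = logrank_test(days_treated, days_control,
                      event_observed_A=event_treated,
                      event_observed_B=event_control)
print(f"log-rank p-value: {result.p_value:.4f}")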
Abstract:
3rd Workshop on High-performance and Real-time Embedded Systems (HIRES 2015), 21 January 2015, Amsterdam, Netherlands.
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation traditionally used in transmission networks should be adapted and used in the distribution networks considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs to all players which are using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, which is known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase of the model consists in an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource in the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with large penetration of DER is used to illustrate the application of the proposed model.
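As a rough illustration of the third-phase MW-mile idea, the sketch below splits each line's cost among players in proportion to the flow a tracing method attributes to them; the line costs, the player set and the attributed flows are all invented for the example, and the tracing step itself (Kirschen's or Bialek's algorithm) is taken as given.

import numpy as np

# Illustrative 3-line feeder: annualised cost of each line (monetary units).
line_cost = np.array([1200.0, 800.0, 500.0])

# flows_by_player[p, l]: MW flow on line l attributed to player p by a tracing method
# (placeholder numbers; in the proposed model they come from Kirschen's/Bialek's tracing).
flows_by_player = np.array([[4.0, 2.0, 0.0],    # e.g. a DG unit
                            [3.0, 1.0, 2.0],    # e.g. a V2G aggregator
                            [1.0, 3.0, 1.0]])   # e.g. a direct-load-control DR resource

usage = np.abs(flows_by_player)
charges = usage / usage.sum(axis=0) * line_cost   # split each line's cost by relative use
player_charges = charges.sum(axis=1)

for p, c in enumerate(player_charges):
    print(f"player {p}: allocated network use cost = {c:.2f}")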
Abstract:
We develop a new coinfection model for hepatitis C virus (HCV) and the human immunodeficiency virus (HIV). We consider treatment for both diseases, screening, unawareness and awareness of HIV infection, and the use of condoms. We study the local stability of the disease-free equilibria for the full model and for the two submodels (the HCV-only and HIV-only submodels). We sketch bifurcation diagrams for different parameters, such as the probabilities that a contact will result in an HIV or an HCV infection. We present numerical simulations of the full model where the HIV, HCV and double endemic equilibria can be observed. We also show numerically the qualitative changes of the dynamical behavior of the full model for variations of relevant parameters. We extrapolate the results from the model to actual measures that could be implemented in order to reduce the number of infected individuals.
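The abstract does not reproduce the model equations, so as a hedged illustration of the kind of compartmental dynamics involved, the sketch below integrates a much-simplified SI-type two-disease system with an HCV treatment rate; the compartments, parameters and values are all assumptions and not the authors' coinfection model.

import numpy as np
from scipy.integrate import solve_ivp

def coinfection(t, y, beta_hiv=0.3, beta_hcv=0.4, mu=0.02, treat_hcv=0.1):
    """Toy SI-type coinfection dynamics: S, I_hiv, I_hcv, I_both (proportions)."""
    S, Ihiv, Ihcv, Iboth = y
    N = S + Ihiv + Ihcv + Iboth
    new_hiv = beta_hiv * S * (Ihiv + Iboth) / N
    new_hcv = beta_hcv * S * (Ihcv + Iboth) / N
    dS = mu * N - new_hiv - new_hcv - mu * S + treat_hcv * Ihcv
    dIhiv = new_hiv + treat_hcv * Iboth - beta_hcv * Ihiv * (Ihcv + Iboth) / N - mu * Ihiv
    dIhcv = new_hcv - beta_hiv * Ihcv * (Ihiv + Iboth) / N - (mu + treat_hcv) * Ihcv
    dIboth = (beta_hcv * Ihiv * (Ihcv + Iboth) / N
              + beta_hiv * Ihcv * (Ihiv + Iboth) / N
              - (mu + treat_hcv) * Iboth)
    return [dS, dIhiv, dIhcv, dIboth]

sol = solve_ivp(coinfection, (0.0, 200.0), [0.95, 0.03, 0.02, 0.0], max_step=0.5)
print("final compartment proportions:", np.round(sol.y[:, -1], 4))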
Abstract:
Paracoccidioidomycosis (PCM) is caused by the dimorphic fungus Paracoccidioides brasiliensis (Pb) and is a prevalent systemic mycosis in Latin America. The aim of the present work was to evaluate the dose-response effect of the fungal yeast phase for the standardization of an experimental model of septic arthritis. The experiments were performed with groups of 14 rats that received doses of 10^3, 10^4 or 10^5 P. brasiliensis (Pb18) cells. The fungi were injected in 50 µL of phosphate-buffered saline (PBS) directly into the knee joints of the animals. The following parameters were analyzed in this work: the formation of swelling in the knees injected with yeast cells, the radiological and anatomopathological alterations, and the antibody titer determined by ELISA. After 15 days of infection, signs of inflammation were evident. At 45 days, some features of damage and necrosis were observed in the articular cartilage. The systemic dissemination of the fungus was observed in 11% of the inoculated animals, and it was concluded that the experimental model is able to mimic articular PCM in humans and that the dose of 10^5 yeast cells can be used as the standard in this model.
Abstract:
INTRODUCTION: Insulin resistance is the pathophysiological key to explaining metabolic syndrome. Although clearly useful, the Homeostasis Model Assessment index (an insulin resistance measurement) has not been systematically applied in clinical practice. One of the main reasons is the discrepancy in cut-off values reported in different populations. We sought to evaluate, in a Portuguese population, the ideal cut-off for the Homeostasis Model Assessment index and assess its relationship with metabolic syndrome. MATERIAL AND METHODS: We selected a cohort of individuals admitted electively to a Cardiology ward with a BMI < 25 kg/m2 and no abnormalities in glucose metabolism (fasting plasma glucose < 100 mg/dL and no diabetes). The 90th percentile of the Homeostasis Model Assessment index distribution was used to obtain the ideal cut-off for insulin resistance. We also selected a validation cohort of 300 individuals (no exclusion criteria applied). RESULTS: From 7 000 individuals, 1 784 remained after applying the exclusion criteria. The 90th percentile for the Homeostasis Model Assessment index was 2.33. In the validation cohort, applying that cut-off, 49.3% of individuals had insulin resistance. However, only 69.9% of the metabolic syndrome patients had insulin resistance according to that cut-off. By ROC curve analysis, the ideal cut-off for metabolic syndrome is 2.41. The Homeostasis Model Assessment index correlated with BMI (r = 0.371, p < 0.001) and was an independent predictor of the presence of metabolic syndrome (OR 19.4, 95% CI 6.6 - 57.2, p < 0.001). DISCUSSION: Our study showed that, in a Portuguese population of patients admitted electively to a Cardiology ward, 2.33 is the Homeostasis Model Assessment index cut-off for insulin resistance and 2.41 for metabolic syndrome. CONCLUSION: The Homeostasis Model Assessment index is directly correlated with BMI and is an independent predictor of metabolic syndrome.
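The two cut-off derivations described (the 90th percentile of a lean, normoglycaemic reference group, and a ROC-based threshold against metabolic syndrome) map onto a few lines of analysis code; the sketch below uses synthetic placeholder values, and the Youden-index rule for picking the ROC cut-off is an assumption about how the ideal threshold was chosen.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Placeholder HOMA-IR values for a lean, normoglycaemic reference group (not the study data).
homa_reference = rng.lognormal(mean=0.2, sigma=0.5, size=1784)
cutoff_ir = np.percentile(homa_reference, 90)       # 90th-percentile definition of insulin resistance

# Placeholder validation cohort: HOMA-IR values plus a metabolic-syndrome label.
homa_validation = rng.lognormal(mean=0.5, sigma=0.6, size=300)
has_mets = (homa_validation + rng.normal(0, 0.8, size=300)) > 1.8

fpr, tpr, thresholds = roc_curve(has_mets, homa_validation)
cutoff_mets = thresholds[np.argmax(tpr - fpr)]      # Youden index: maximises sensitivity + specificity - 1
print(f"IR cut-off (90th percentile): {cutoff_ir:.2f}")
print(f"ROC-optimal cut-off for metabolic syndrome: {cutoff_mets:.2f}")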
Abstract:
This work proposes a constitutive model to simulate the nonlinear behaviour of cement based materials subjected to different loading paths. The model incorporates a multidirectional fixed smeared crack approach to simulate crack initiation and propagation, whereas the inelastic behaviour of the material between cracks is treated by a numerical strategy that combines plasticity and damage theories. To capture more realistically the shear stress transfer between the crack surfaces, a softening diagram is assumed for modelling the crack shear stress versus crack shear strain relationship. The plastic damage model is based on a yield function, flow rule and evolution law for the hardening variable, and includes an explicit isotropic damage law to simulate the stiffness degradation and the softening behaviour of cement based materials in compression. This model was implemented into the FEMIX computer program, and experimental tests at the material scale were simulated to appraise the predictive performance of this constitutive model. The applicability of the model to simulating the behaviour of reinforced concrete shear wall panels subjected to biaxial loading conditions, as well as RC beams failing in shear, is also investigated.
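As a one-dimensional illustration of the isotropic damage ingredient mentioned above (not the specific law implemented in FEMIX), the sketch below applies an exponential-softening damage evolution driven by the largest equivalent strain reached so far; the modulus and softening parameters are assumed values.

import numpy as np

E0 = 30e3          # undamaged Young's modulus [MPa] (assumed value)
eps_0 = 1e-4       # strain at damage onset (assumed)
eps_f = 5e-4       # softening parameter controlling the tail (assumed)

def damage(kappa):
    """Exponential-softening isotropic damage law, d in [0, 1)."""
    if kappa <= eps_0:
        return 0.0
    return 1.0 - (eps_0 / kappa) * np.exp(-(kappa - eps_0) / eps_f)

# Monotonic 1D loading: stress = (1 - d) * E0 * strain, with kappa = max strain so far.
kappa = 0.0
for eps in np.linspace(0.0, 2e-3, 9):
    kappa = max(kappa, eps)                 # history variable (damage is irreversible)
    sigma = (1.0 - damage(kappa)) * E0 * eps
    print(f"strain = {eps:.4e}  stress = {sigma:7.2f} MPa  d = {damage(kappa):.3f}")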
Abstract:
Genome-scale metabolic models are valuable tools in the metabolic engineering process, based on the ability of these models to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly, and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic tree-based curation of genome annotations; (4) assembly and standardization of biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of core metabolic model; (7) generation of biomass composition reaction; (8) completion of draft metabolic model; (9) curation of metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
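Once a draft reconstruction exists (steps 5-8 above), a standard downstream computation is flux balance analysis: maximise a biomass objective subject to steady-state mass balances S v = 0 and flux bounds. The sketch below solves a deliberately tiny, made-up network with SciPy's linear programming routine; a genome-scale model would be handled with a dedicated toolbox, but the optimisation problem has the same shape.

import numpy as np
from scipy.optimize import linprog

# Tiny illustrative network (not a real organism): metabolites A, B; reactions:
#   R1: -> A (uptake), R2: A -> B, R3: A -> 2 B, R4: B -> (biomass/export)
S = np.array([[ 1, -1, -1,  0],    # metabolite A balance
              [ 0,  1,  2, -1]])   # metabolite B balance
bounds = [(0, 10), (0, 5), (0, 5), (0, None)]   # flux bounds (uptake limited to 10)

# Maximise flux through R4 (the "biomass" reaction) -> minimise its negative.
c = np.array([0, 0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")

print("optimal fluxes v:", np.round(res.x, 3))
print("maximal biomass flux:", round(-res.fun, 3))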