Abstract:
Experimental models of infection are good tools for establishing immunological parameters that affect the host-pathogen relationship and for designing new vaccines and immune therapies. In this work, we evaluated the evolution of experimental tuberculosis in mice infected with increasing bacterial doses or via distinct routes. We showed that mice infected with low bacterial doses by the intratracheal route developed a progressive infection that was proportional to the inoculum size. In the initial phase of disease, mice developed a specific Th1-driven immune response independent of inoculum concentration. However, in the late phase, mice infected with higher concentrations exhibited a mixed Th1/Th2 response, while mice infected with lower concentrations sustained the Th1 pattern. Significant IL-10 concentrations and more pronounced recruitment of regulatory T cells were also detected at 70 days post-infection with high bacterial doses. These results suggest that mice infected with higher concentrations of bacilli developed an immune response similar to the pattern described for human tuberculosis, wherein patients with progressive tuberculosis exhibit downmodulation of IFN-gamma production accompanied by increased levels of IL-4. Thus, these data indicate that the choice of experimental model is important when evaluating the protective efficacy of new vaccines and therapies against tuberculosis. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Background: Traffic accidents constitute the main cause of death in the first decades of life. Traumatic brain injury (TBI) is the event most responsible for the severity of these accidents. In 1995, the SBN started an educational program for the prevention of traffic accidents, adapting the American "Think First" model to the Brazilian environment, with special effort devoted to the prevention of TBI through the use of seat belts and motorcycle helmets. The objective of the present study was to set up a traffic accident prevention program based on the adapted Think First and to evaluate its impact by comparing epidemiological variables before and after the beginning of the program. Methods: The program was executed in the city of Maringá from September 2004 to August 2005, with educational actions targeting the entire population, especially teenagers and young adults. The program was implemented by building a network of information facilitators and multipliers within organized civil society, with widespread dissemination to the population. To measure the impact of the program, dedicated software was developed for storing and processing the epidemiological variables. Results: The results showed a reduction in trauma severity due to traffic accidents, mainly TBI, after the execution of the program. Conclusions: The adapted Think First was systematically implemented and its impact measured for the first time in Brazil, revealing the usefulness of the program for reducing trauma and TBI severity in traffic accidents through public education and representing a standardized model of implementation in a developing country. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
This paper proposes a novel model for short-term load forecasting in the competitive electricity market. Historical electricity demand data are treated as a time series. The forecast model is based on wavelet multi-resolution decomposition, using the autocorrelation shell representation, and on neural network (multilayer perceptron, or MLP) modeling of the wavelet coefficients. To minimize the influence of noisy low-level coefficients, we applied the practical Bayesian method of Automatic Relevance Determination (ARD) to choose the size of the MLPs, which are then trained to provide forecasts. The individual wavelet-domain forecasts are recombined to form the overall forecast. The proposed method is tested using Queensland electricity demand data from the Australian National Electricity Market. (C) 2001 Elsevier Science B.V. All rights reserved.
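To make the pipeline concrete, here is a minimal sketch of the decompose-forecast-recombine idea, assuming an a-trous (autocorrelation shell) decomposition with a B3-spline filter and one small MLP per scale. The ARD-based selection of MLP sizes is omitted; the layer sizes, lag count, number of scales, and the synthetic demand series are illustrative placeholders, not the paper's settings.

```python
# Sketch: wavelet-domain load forecasting with one MLP per scale.
import numpy as np
from sklearn.neural_network import MLPRegressor

def atrous_decompose(x, n_scales=3):
    """Split x into wavelet detail coefficients plus a smooth residual."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline filter
    smooth, details = x.astype(float), []
    for j in range(n_scales):
        # insert 2^j - 1 zeros between filter taps (the "holes")
        hj = np.zeros((len(h) - 1) * 2**j + 1)
        hj[::2**j] = h
        next_smooth = np.convolve(np.pad(smooth, len(hj) // 2, mode="edge"),
                                  hj, mode="valid")
        details.append(smooth - next_smooth)
        smooth = next_smooth
    return details, smooth          # x == sum(details) + smooth

def forecast_next(x, n_scales=3, n_lags=8):
    """One-step-ahead forecast: fit an MLP per scale, then recombine."""
    details, smooth = atrous_decompose(x, n_scales)
    prediction = 0.0
    for series in details + [smooth]:
        X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
        y = series[n_lags:]
        mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                           random_state=0).fit(X, y)
        prediction += mlp.predict(series[-n_lags:].reshape(1, -1))[0]
    return prediction

# Toy usage with a synthetic daily-cycle demand series
t = np.arange(600)
demand = 100 + 10 * np.sin(2 * np.pi * t / 24) \
         + np.random.default_rng(0).normal(0, 1, 600)
print(forecast_next(demand))
```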
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike the changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
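For orientation, the sketch below estimates the same overflow probability by importance sampling, but with the classical static change of measure that simply exchanges the arrival rate and the second service rate, not the paper's state-dependent measure built from the first buffer's content. The chain is uniformized so both measures share one event clock; the rates, the level L, and the start state are illustrative placeholders.

```python
# Importance sampling for P(second buffer reaches L before emptying)
# in a two-node tandem Jackson network, with the static rate exchange.
import random

def overflow_estimate(lam=1.0, mu1=4.0, mu2=2.0, L=15,
                      start=(0, 1), n_runs=20000, seed=1):
    rng = random.Random(seed)
    total = lam + mu1 + mu2                  # same clock under both measures
    p_arr, p_d1 = mu2 / total, mu1 / total   # tilted probabilities (lam <-> mu2)
    acc = 0.0
    for _ in range(n_runs):
        q1, q2 = start
        lr = 1.0                             # likelihood ratio of the sampled path
        while 0 < q2 < L:
            u = rng.random()
            if u < p_arr:                    # arrival to queue 1
                lr *= lam / mu2
                q1 += 1
            elif u < p_arr + p_d1:           # service at queue 1 (self-loop if empty)
                if q1 > 0:
                    q1, q2 = q1 - 1, q2 + 1
            else:                            # service at queue 2 (q2 > 0 in the loop)
                lr *= mu2 / lam
                q2 -= 1
        if q2 >= L:                          # overflow before emptying
            acc += lr
    return acc / n_runs

print(overflow_estimate())
```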
Abstract:
A simple percolation-theory-based method for determining pore network connectivity from liquid phase adsorption isotherm data, combined with a density functional theory (DFT)-based pore size distribution, is presented in this article. The liquid phase adsorption experiments were performed using eight different esters as adsorbates and the microporous-mesoporous activated carbons Filtrasorb-400, Norit ROW 0.8 and Norit ROX 0.8 as adsorbents. The DFT-based pore size distributions of the carbons were obtained from DFT analysis of argon adsorption data. The mean micropore network coordination numbers, Z, of the carbons were determined from Dubinin-Radushkevich (DR) characteristic plots and fitted saturation capacities using percolation theory. Based on this method, the critical molecular sizes of the model compounds used in this study were also obtained. The incorporation of percolation concepts into the prediction of multicomponent adsorption equilibria is also investigated and found to improve the performance of the ideal adsorbed solution theory (IAST) model for the large molecules utilized in this study. (C) 2002 Elsevier Science B.V. All rights reserved.
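The connectivity idea can be illustrated numerically (this is not the paper's fitting procedure): treat pores as bonds of a lattice, let a structural dilution set the mean coordination number Z, and let a molecule enter a pore only if the pore exceeds its critical size, with probability f read from the cumulative pore size distribution. The sketch below estimates the fraction of pores actually accessible from the particle surface by breadth-first search; the lattice size, Z, and f are placeholder values.

```python
# Toy pore-network percolation: fraction of open pores reachable
# from the boundary of a bond-diluted cubic lattice.
import random
from collections import deque

def accessible_fraction(n=20, Z=4.0, f=0.6, seed=0):
    rng = random.Random(seed)
    open_bonds = []
    for x in range(n):
        for y in range(n):
            for z in range(n):
                for axis in range(3):
                    nb = [x, y, z]
                    nb[axis] += 1
                    if nb[axis] < n:
                        # bond present with prob Z/6, wide enough with prob f
                        if rng.random() < Z / 6.0 and rng.random() < f:
                            open_bonds.append(((x, y, z), tuple(nb)))
    adj = {}
    for a, b in open_bonds:                 # adjacency over open pores
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    # BFS from all boundary sites (pore mouths at the particle surface)
    seen = set(s for s in adj if 0 in s or n - 1 in s)
    queue = deque(seen)
    while queue:
        site = queue.popleft()
        for nxt in adj.get(site, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    reached = sum(1 for a, b in open_bonds if a in seen)
    return reached / max(1, len(open_bonds))

print(accessible_fraction())
```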
Abstract:
The haploid NK model developed by Kauffman can be extended to diploid genomes and can incorporate gene-by-environment interaction effects in combination with epistasis. To provide the flexibility to include a wide range of forms of gene-by-environment interaction, a target population of environment types (TPE) is defined. The TPE consists of a set of E different environment types, each with its own frequency of occurrence. Each environment type conditions a different NK gene network structure, or a different series of gene effects for a given network structure, providing the framework for defining gene-by-environment interactions. Thus, different NK models can be partially or completely nested within the E environment types of a TPE, giving rise to the E(NK) model for a biological system. With this model it is possible to examine how populations of genotypes evolve in the context of properties of the environment that influence the contributions of genes to the fitness values of genotypes. We are using the E(NK) model to investigate how both epistasis and gene-by-environment interactions influence the genetic improvement of quantitative traits by plant breeding strategies applied to agricultural systems. © 2002 Wiley Periodicals, Inc.
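A minimal sketch of the fitness evaluation this implies, assuming for simplicity a haploid binary genome (the paper extends the model to diploids): each of the E environment types carries its own NK contribution tables and a frequency of occurrence, and a genotype's fitness over the TPE is the frequency-weighted mean of its NK fitness across environment types. N, K, E, and the frequencies below are illustrative.

```python
# E(NK)-style fitness: one NK landscape per environment type,
# averaged over the TPE with environment-type frequencies.
import itertools
import random

def make_nk_tables(N, K, rng):
    """One random contribution table per locus over its K+1 inputs."""
    neighbors = [rng.sample([j for j in range(N) if j != i], K)
                 for i in range(N)]
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]
    return neighbors, tables

def nk_fitness(genome, neighbors, tables):
    total = 0.0
    for i in range(len(genome)):
        key = (genome[i],) + tuple(genome[j] for j in neighbors[i])
        total += tables[i][key]
    return total / len(genome)

def enk_fitness(genome, envs, freqs):
    """TPE fitness: weighted average over the E environment types."""
    return sum(f * nk_fitness(genome, nb, tb)
               for f, (nb, tb) in zip(freqs, envs))

rng = random.Random(42)
N, K, E = 10, 2, 3
envs = [make_nk_tables(N, K, rng) for _ in range(E)]
freqs = [0.5, 0.3, 0.2]                  # environment-type frequencies
genome = tuple(rng.randint(0, 1) for _ in range(N))
print(enk_fitness(genome, envs, freqs))
```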
Abstract:
Value has been defined in different theoretical contexts as need, desire, interest, standards/criteria, beliefs, attitudes, and preferences. The creation of value is key to any business, and any business activity is about exchanging some tangible and/or intangible good or service and having its value accepted and rewarded by customers or clients, either inside the enterprise or collaborative network or outside. “Perhaps surprising then is that firms often do not know how to define value, or how to measure it” (Anderson and Narus, 1998, cited by [1]). Woodruff echoed that we need a “richer customer value theory” to provide an “important tool for locking onto the critical things that managers need to know”. In addition, he emphasized that “we need customer value theory that delves deeply into customer’s world of product use in their situations” [2]. In this sense, we proposed and validated a novel “Conceptual Model for Decomposing the Value for the Customer”. In doing so, we were aware that time has a direct impact on customer perceived value, and that suppliers’ and customers’ perceptions change from the pre-purchase to the post-purchase phase, causing some uncertainty and doubt. We wanted to break down value into all its components, as well as all built and used assets (from both endogenous and exogenous perspectives). This component analysis was then transposed into a mathematical formulation using the Fuzzy Analytic Hierarchy Process (AHP), so that the uncertainty and vagueness of value perceptions could be embedded in a model that relates used and built assets in the tangible and intangible deliverables exchanged among the involved parties, with their actual value perceptions.
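As a hedged illustration of how a Fuzzy AHP turns such a component decomposition into weights, the sketch below uses triangular fuzzy pairwise comparisons and Buckley's geometric-mean method with centroid defuzzification; the three components and the comparison matrix are invented for the example, and the authors' formulation may differ in its fuzzy numbers and aggregation.

```python
# Fuzzy AHP weight derivation (Buckley's geometric-mean method)
# over triangular fuzzy pairwise comparisons (l, m, u).
import math

M = [  # made-up comparisons for 3 value components
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]

def fuzzy_weights(M):
    n = len(M)
    # fuzzy geometric mean of each row
    g = [tuple(math.prod(row[j][k] for j in range(n)) ** (1 / n)
               for k in range(3)) for row in M]
    # fuzzy division: reciprocal of column sums with bounds swapped
    tot = [sum(gi[k] for gi in g) for k in range(3)]
    w = [(gi[0] / tot[2], gi[1] / tot[1], gi[2] / tot[0]) for gi in g]
    # centroid defuzzification, then normalize to crisp weights
    crisp = [(l + m + u) / 3 for l, m, u in w]
    s = sum(crisp)
    return [c / s for c in crisp]

print(fuzzy_weights(M))   # weights for the three value components
```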
Abstract:
We introduce a microscopic model for particles with dissimilar patches which displays an unconventional "pinched" phase diagram, similar to the one predicted by Tlusty and Safran in the context of dipolar fluids [Science 290, 1328 (2000)]. The model, based on two types of patch interactions that account, respectively, for chaining and branching of the self-assembled networks, is studied both numerically via Monte Carlo simulations and theoretically via first-order perturbation theory. The dense phase is rich in junctions, while the less-dense phase is rich in chain ends. The model provides a reference system for a deeper understanding of the competition between condensation and self-assembly into equilibrium-polymer chains.
Abstract:
Energy resource scheduling is becoming increasingly important, as the use of distributed resources intensifies and massive gridable vehicle use is envisaged. The present paper proposes a methodology for day-ahead energy resource scheduling in smart grids, considering the intensive use of distributed generation and of gridable vehicles, usually referred to as Vehicle-to-Grid (V2G). The method considers that the energy resources are managed by a Virtual Power Player (VPP) which establishes contracts with V2G owners. It takes into account these contracts, the users' requirements submitted to the VPP, and several discharge price steps. The full AC power flow calculation included in the model allows network constraints to be taken into account. The influence of the requirements of successive days on the day-ahead optimal solution is discussed and considered in the proposed model. A case study with a 33-bus distribution network and V2G is used to illustrate the good performance of the proposed method.
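To show the resource-scheduling structure in miniature, the sketch below reduces the problem to a small linear program with stepwise V2G discharge prices, dropping the paper's full AC power flow, network constraints, and contract details. All numbers (demand profile, prices, capacities, daily V2G energy) are invented for the example.

```python
# Simplified day-ahead scheduling: distributed generation plus two
# V2G discharge price steps, balanced against a 24-hour demand profile.
import numpy as np
from scipy.optimize import linprog

T = 24
demand = 10 + 4 * np.sin(np.linspace(0, 2 * np.pi, T))  # MW, toy profile
dg_price, dg_cap = 50.0, 12.0                           # distributed generation
v2g_prices = [60.0, 80.0]                               # discharge price steps
v2g_step_cap = 2.0                                      # MW per step
v2g_energy = 30.0                                       # MWh contracted per day

# variables: [dg_t (T), v2g_step1_t (T), v2g_step2_t (T)]
c = np.concatenate([np.full(T, dg_price),
                    np.full(T, v2g_prices[0]),
                    np.full(T, v2g_prices[1])])
A_eq = np.hstack([np.eye(T), np.eye(T), np.eye(T)])     # hourly balance
b_eq = demand
A_ub = np.concatenate([np.zeros(T), np.ones(T), np.ones(T)])[None, :]
b_ub = [v2g_energy]                                     # daily V2G energy cap
bounds = [(0, dg_cap)] * T + [(0, v2g_step_cap)] * (2 * T)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.status, res.fun)   # 0 = optimal, total day-ahead cost
```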
Abstract:
The natural gas industry has been confronted with major challenges: strong growth in demand, investment in new gas supply units (GSUs), and efficient technical system management. The right number of GSUs, their best locations in the network, and their optimal allocation to loads constitute a decision problem that can be formulated as a combinatorial programming problem with the objective of minimizing system expenses. Our emphasis is on the formulation, interpretation and development of a solution algorithm that analyzes the trade-off between infrastructure investment expenditure and operating system costs. The location model was applied to a 12-node natural gas network, and its effectiveness was tested in five different operating scenarios.
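The investment-versus-operation trade-off can be illustrated with a toy enumeration (far simpler than the paper's 12-node model and solution algorithm): open a subset of candidate GSUs, allocate each load to its cheapest open GSU, and keep the subset that minimizes fixed plus operating cost. The three candidate sites, four loads, and all costs below are invented.

```python
# Brute-force GSU location/allocation on a tiny invented instance.
from itertools import combinations

fixed_cost = {"A": 100, "B": 120, "C": 90}     # GSU investment cost
supply_cost = {                                # per-load operating cost
    "A": {"n1": 10, "n2": 25, "n3": 40, "n4": 30},
    "B": {"n1": 30, "n2": 10, "n3": 20, "n4": 25},
    "C": {"n1": 35, "n2": 30, "n3": 15, "n4": 10},
}
loads = ["n1", "n2", "n3", "n4"]

best = None
for r in range(1, len(fixed_cost) + 1):
    for subset in combinations(fixed_cost, r):
        cost = sum(fixed_cost[g] for g in subset)
        # each load served by its cheapest open GSU
        cost += sum(min(supply_cost[g][l] for g in subset) for l in loads)
        if best is None or cost < best[0]:
            best = (cost, subset)
print(best)   # (total expense, chosen GSU sites)
```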
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology that is based on statistical failure and repair data of transmission power system components and uses fuzzy-probabilistic modeling of system component outage parameters. Using statistical records allows the fuzzy membership functions of the system component outage parameters to be developed. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violations in the network. This is followed by a remedial action algorithm, based on optimal power flow, to reschedule generation and alleviate constraint violations and, at the same time, to avoid any load curtailment, if possible, or, otherwise, to minimize the total load curtailment for the states identified by the contingency analysis. In order to illustrate the application of the proposed methodology to a practical case, the paper includes a case study for the 1996 IEEE 24-bus Reliability Test System (RTS).
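A minimal sketch of the hybrid fuzzy/Monte Carlo idea, under simplifying assumptions: each component's forced outage rate is a triangular fuzzy number, and for each alpha-cut a crisp Monte Carlo state sampling is run at the interval bounds, yielding a fuzzy estimate of the probability of states requiring remedial action. The contingency analysis and OPF remedial stage are abstracted into a stub (here, "curtailment" simply means two or more components out); the rates and system size are invented.

```python
# Fuzzy forced outage rates + Monte Carlo state sampling per alpha-cut.
import random

def triangular_cut(l, m, u, alpha):
    """Interval of a triangular fuzzy number at a given alpha-cut."""
    return (l + alpha * (m - l), u - alpha * (u - m))

def mc_failure_prob(outage_rates, n_samples, rng, needs_curtailment):
    hits = 0
    for _ in range(n_samples):
        state = [rng.random() < q for q in outage_rates]  # True = out
        hits += needs_curtailment(state)
    return hits / n_samples

def needs_curtailment(state):
    # stub for contingency analysis + OPF remedial actions
    return sum(state) >= 2

rng = random.Random(7)
fuzzy_fors = [(0.01, 0.02, 0.04)] * 10   # triangular FOR per component
for alpha in (0.0, 0.5, 1.0):
    lo = [triangular_cut(*f, alpha)[0] for f in fuzzy_fors]
    hi = [triangular_cut(*f, alpha)[1] for f in fuzzy_fors]
    p_lo = mc_failure_prob(lo, 20000, rng, needs_curtailment)
    p_hi = mc_failure_prob(hi, 20000, rng, needs_curtailment)
    print(f"alpha={alpha:.1f}: P(curtailment) in [{p_lo:.4f}, {p_hi:.4f}]")
```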
Abstract:
This paper presents a methodology to choose the distribution network reconfiguration with the lowest power losses. The proposed methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modeling of system component outage parameters. The proposed hybrid method, using fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a logic programming algorithm is applied to generate all possible reconfigurations for each system state. An AC load flow is then applied to evaluate line flows and bus voltages, to identify any overloading and/or voltage violations, and to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology, the paper includes a case study considering a 115-bus distribution network.
Abstract:
Modern networks, particularly heterogeneous cellular wireless networks, are generally analysed by taking several Key Performance Indicators (KPIs) into account, and a proper balance among them is required in order to guarantee a desired Quality of Service (QoS). A model that integrates a set of KPIs into a single one is presented, using a cost function that combines these KPIs to provide a single evaluation parameter as output for each network node, reflecting network conditions and the performance of common radio resource management strategies. The proposed model enables the implementation of different network management policies, by manipulating KPIs according to users' or operators' perspectives, allowing for better QoS. Results show that different policies can in fact be established, with different impacts on the network, e.g., with median values differing by a factor of more than two.
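A minimal sketch of folding several KPIs into a single node-level figure with a weighted cost function: each KPI is normalized between stated worst and best bounds, its direction encodes whether higher or lower is better, and a policy is just a weight vector. The KPI names, bounds, sample values and weights below are invented, not the paper's.

```python
# Weighted cost function combining normalized KPIs into one node score.
def kpi_cost(kpis, spec, weights):
    """Return a single [0, 1] cost per node; 0 is best."""
    cost = 0.0
    for name, w in weights.items():
        lo, hi, higher_is_better = spec[name]
        x = min(max(kpis[name], lo), hi)        # clamp into the bounds
        norm = (x - lo) / (hi - lo)
        cost += w * (1 - norm if higher_is_better else norm)
    return cost / sum(weights.values())

spec = {  # (worst bound, best-direction bound, higher is better?)
    "blocking":   (0.0, 0.10, False),   # probability, lower is better
    "throughput": (0.0, 50.0, True),    # Mbit/s, higher is better
    "delay":      (0.0, 200.0, False),  # ms, lower is better
}
node = {"blocking": 0.02, "throughput": 30.0, "delay": 80.0}

# two policies: the operator prioritizes blocking, the user throughput/delay
operator_policy = {"blocking": 3, "throughput": 1, "delay": 1}
user_policy = {"blocking": 1, "throughput": 2, "delay": 2}
print(kpi_cost(node, spec, operator_policy), kpi_cost(node, spec, user_policy))
```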
Abstract:
In a heterogeneous cellular network environment, users' behaviour and network deployment configuration parameters have an impact on the overall Quality of Service. This paper proposes a new and simple model that, on the one hand, explores the impact of users' behaviour on the network, taking mobility, multi-service usage and traffic generation profiles as inputs, and, on the other, enables the evaluation of the impact of the network setup configuration on Joint Radio Resource Management (JRRM), assessing basic JRRM performance indicators such as Vertical Handover (VHO) probabilities, average bit rates, and the number of active users, among others. VHO plays an important role in providing seamless transfer of users' sessions when mobile terminals cross the boundaries of different Radio Access Technologies (RATs). Results show that high-bit-rate RATs suffer and generate more influence from/on other RATs, by producing additional signalling traffic towards the JRRM entity. Results also show that the VHO probability can range from 5% up to 65%, depending on the RAT cluster radius and users' mobility profile.
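To illustrate why the VHO probability depends so strongly on cell radius and mobility, the toy simulation below places a session uniformly inside a circular RAT cell, moves the user in a random direction for the session duration, and counts a vertical handover whenever the session ends outside the cell. The geometry, speed and session length are invented and far simpler than the paper's mobility and multi-service profiles.

```python
# Toy estimate of VHO probability vs cell radius for mobile sessions.
import math
import random

def vho_probability(cell_radius=500.0, speed=10.0, session_s=120.0,
                    n_users=20000, seed=3):
    rng = random.Random(seed)
    vho = 0
    for _ in range(n_users):
        # session starts uniformly inside the cell, heads in a random direction
        r = cell_radius * math.sqrt(rng.random())
        phi = rng.uniform(0, 2 * math.pi)
        x, y = r * math.cos(phi), r * math.sin(phi)
        theta = rng.uniform(0, 2 * math.pi)
        x += speed * session_s * math.cos(theta)
        y += speed * session_s * math.sin(theta)
        vho += math.hypot(x, y) > cell_radius   # left the cell: VHO needed
    return vho / n_users

for radius in (300, 500, 1000):
    print(radius, vho_probability(cell_radius=radius))
```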
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability based studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of system component outage parameters. Using statistical records allows the fuzzy membership functions of the system component outage parameters to be developed. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states have been obtained, a network contingency analysis is performed to identify any overloading or voltage violations in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, to reschedule generation and alleviate constraint violations and, at the same time, to avoid any load curtailment, if possible, or, otherwise, to minimize the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the costs of achieving that reduction. This methodology is of great importance in supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the 1996 IEEE 24-bus Reliability Test System (RTS) is presented to illustrate in detail the application of the proposed methodology.