993 results for LIFETIME DISTRIBUTION
Abstract:
This paper proposes a comprehensive approach to the planning of distribution networks and the control of microgrids. First, a Modified Discrete Particle Swarm Optimization (MDPSO) method is used to optimally plan a distribution system upgrade over a 20-year planning period. The optimization is conducted at different load levels according to the anticipated load duration curve and integrated over the system lifetime in order to minimize its total lifetime cost. Since the optimal solution contains Distributed Generators (DGs) to maximize reliability, the DGs must be able to operate in islanded mode, which leads to the concept of microgrids. The second part of the paper therefore reviews some of the challenges of microgrid control in the presence of both inertial (rotating, directly connected) and non-inertial (converter-interfaced) DGs. More specifically, enhanced control strategies based on frequency droop are proposed for DGs to improve smooth synchronization and real-power sharing while minimizing transient oscillations in the microgrid. Simulation studies are presented to show the effectiveness of the control.
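As a rough illustration of the frequency-droop principle this abstract refers to, the sketch below computes a DG's real-power response from the measured frequency deviation; the droop gain, nominal frequency, and unit ratings are illustrative assumptions, not the authors' controller.

```python
# Minimal frequency-droop sketch: each DG adjusts its real-power output in
# proportion to the deviation of the measured frequency from nominal.
# Gains and ratings below are illustrative, not from the paper.

F_NOMINAL = 50.0  # nominal system frequency (Hz); assumption

def droop_power(f_measured, p_rated, droop=0.05):
    """Real-power set-point from a proportional frequency droop.

    droop = 0.05 means a 5% frequency deviation drives the unit
    across its full rated power range.
    """
    delta_f = (F_NOMINAL - f_measured) / F_NOMINAL  # per-unit deviation
    p = p_rated * delta_f / droop                   # proportional response
    return max(-p_rated, min(p_rated, p))           # respect the rating

# Example: frequency sags to 49.9 Hz, so each unit picks up load in
# proportion to its rating -- this is how droop shares real power.
for rating in (100.0, 50.0):  # kW
    print(rating, "kW unit ->", round(droop_power(49.9, rating), 2), "kW")
```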
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as it uses explicit data at a fine level of detail, and computation-intensive, as it requires many interactions between agents, which can learn and pursue goals. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is the electricity distribution network, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plug-ins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same — this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond — e.g. weather for solar panels — or to describe the assets and their relation to one another — e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plug-ins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for speed. Building agent-based models in this way has proven fast for adding new complex behaviours as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is planned as future work with the addition of electric vehicles.
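The asset/agent separation this abstract describes can be sketched in a few lines: the same physical asset is driven by different behaviours depending on the purpose of its installation. The class and attribute names below are illustrative stand-ins, not MODAM's actual API.

```python
# Sketch of the asset/agent separation: the asset holds physical
# characteristics only; the agent holds behaviour. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class BatteryAsset:
    """Physical characteristics only -- no behaviour."""
    capacity_kwh: float
    depth_of_discharge: float  # usable fraction of capacity
    lifetime_cycles: int

class PeakShavingAgent:
    """Behaviour: discharge the battery when demand exceeds a threshold."""
    def __init__(self, asset: BatteryAsset, threshold_kw: float):
        self.asset = asset
        self.threshold_kw = threshold_kw

    def act(self, demand_kw: float) -> float:
        """Return the power (kW) to discharge for this time step."""
        excess = demand_kw - self.threshold_kw
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        return min(max(excess, 0.0), usable)  # crude one-step bound

# Two identical assets could be wrapped by two different agents
# (peak shaving vs. solar smoothing); only the behaviour class changes.
battery = BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8,
                       lifetime_cycles=5000)
agent = PeakShavingAgent(battery, threshold_kw=5.0)
print(agent.act(demand_kw=7.5))  # -> 2.5
```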
Abstract:
In this paper, motivated by observations of non-exponential decay times in the stochastic binding and release of ligand-receptor systems, exemplified by the work of Rogers et al on optically trapped DNA-coated colloids (Rogers et al 2013 Soft Matter 9 6412), we explore the general problem of polymer-mediated surface adhesion using a simplified model of the phenomenon in which a single polymer molecule, fixed at one end, binds through a ligand at its opposite end to a flat surface a fixed distance L away that is uniformly covered with receptor sites. Working within the Wilemski-Fixman approximation to diffusion-controlled reactions, we show that for a flexible Gaussian chain the predicted distribution of times f(t) for which the ligand and receptor are bound is given, for times much shorter than the longest relaxation time of the polymer, by a power law of the form t^(-1/4). We also show that when the effects of chain stiffness are (approximately) incorporated into this model, the structure of f(t) is altered to t^(-1/2). These results broadly mirror the experimental trends in the work cited above.
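As a numerical aside (not from the paper), one way to see how such a power-law bound-time distribution would show up in data: sample times from a truncated law f(t) ~ t^(-1/4) by inverse-CDF sampling and recover the exponent from a log-log histogram fit. The exponent and truncation values below are illustrative.

```python
# Sample bound times from a truncated power law f(t) ~ t^(-a) on
# [t_min, t_max] via the inverse CDF, then recover the exponent from the
# slope of a log-log histogram. a = 1/4 matches the flexible-chain result.
import numpy as np

rng = np.random.default_rng(0)
a, t_min, t_max = 0.25, 1e-3, 1.0   # exponent and truncation; illustrative

b = 1.0 - a                          # CDF ~ (t**b - t_min**b)
u = rng.uniform(size=200_000)
t = (t_min**b + u * (t_max**b - t_min**b)) ** (1.0 / b)

counts, edges = np.histogram(t, bins=np.geomspace(t_min, t_max, 40),
                             density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
slope, _ = np.polyfit(np.log(centers[counts > 0]),
                      np.log(counts[counts > 0]), 1)
print(f"fitted exponent: {slope:.3f} (expected {-a})")
```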
Abstract:
The reminiscence bump is the tendency to recall more autobiographical memories from adolescence and early adulthood than from adjacent lifetime periods. In this online study, the robustness of the reminiscence bump was examined by looking at participants' judgements about the quality of football players. Dutch participants (N = 619) were asked who they thought the five best players of all time were. The participants could select the names from a list or enter the names when their favourite players were not on the list. Johan Cruijff, Pelé, and Diego Maradona were the three most often mentioned players. Participants frequently named football players who reached the midpoint of their career when the participants were adolescents (mode = 17). The results indicate that the reminiscence bump can also be identified outside the autobiographical memory domain.
Abstract:
When autobiographical memories are elicited with word cues, personal events from middle childhood to early adulthood are overrepresented compared to events from other periods. It is, however, unclear whether these memories are also associated with greater recollection. In this online study, we examined whether autobiographical memories from adolescence and early adulthood are recollected more than memories from other lifetime periods. Participants rated personal events that were elicited with cue words on reliving or vividness. Consistent with previous studies, most memories came from the period in which the participants were between 6 and 20 years old. The memories from this period were not relived more or recalled more vividly than memories from other lifetime periods, suggesting that they do not involve more recollection. Recent events had higher levels of reliving and vividness than remote events, and older adults reported a stronger recollective experience than younger adults.
Abstract:
A numerical modeling method has been developed for predicting the lifetime of solder joints with relatively large solder areas under cyclic thermal-mechanical loading conditions. The method is based on Miner's linear damage accumulation rule and the properties of the accumulated plastic strain in front of the crack in a large-area solder joint. The nonlinear distribution of the damage indicator in the solder joint has been taken into account. The method has been used to calculate the lifetime of the solder interconnect in a power module under the mixed cyclic loading conditions found in railway traction control applications. The results show that the solder thickness is a parameter with a strong influence on the damage, and therefore on the lifetime, of the solder joint, while the substrate width and the thickness of the baseplate are much less important for the lifetime.
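Miner's linear damage accumulation rule, on which the method builds, is simple to state: damage fractions from each loading condition add linearly, and failure is predicted when the sum reaches one. A minimal sketch follows; the cycle counts and allowable lives are invented for illustration, not taken from the power-module study.

```python
# Miner's rule: D = sum(n_i / N_i), failure predicted at D >= 1, where
#   n_i = applied cycles under loading condition i
#   N_i = cycles to failure under condition i alone (from a fatigue model)
# All numbers below are illustrative.

def miner_damage(loading):
    """Total damage for a list of (applied_cycles, cycles_to_failure)."""
    return sum(n / N for n, N in loading)

mixed_cycles = [
    (2.0e4, 1.0e5),   # e.g. shallow traction power cycles
    (5.0e3, 4.0e4),   # e.g. deeper thermal cycles
]
D = miner_damage(mixed_cycles)
print(f"accumulated damage: {D:.3f}")
if D < 1.0:
    # Remaining life in repeated 'mixed blocks' if the same mix recurs:
    print(f"predicted blocks to failure: {1.0 / D:.1f}")
```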
Abstract:
The features of two popular models used to describe the observed response characteristics of typical optical oxygen sensors based on luminescence quenching are examined critically. The models are the 'two-site' model and the 'Gaussian distribution in natural lifetime, τ0' model. These models are used to characterise the response features of typical optical oxygen sensors, features which include downward-curving Stern-Volmer plots and increasingly non-first-order luminescence decay kinetics with increasing partial pressure of oxygen, pO2. Neither model appears able to unite these features, let alone the observed disparate array of response features exhibited by the myriad optical oxygen sensors reported in the literature, and still maintain any level of physical plausibility. A model based on a Gaussian distribution in the quenching rate constant, kq, is developed and, although flawed by a limited breadth in distribution, ρ, does produce Stern-Volmer plots that would cover the range in curvature seen with real optical oxygen sensors. A new 'log-Gaussian distribution in τ0 or kq' model is introduced, which has the advantage over a Gaussian distribution model of placing no limitation on the value of ρ. Work on a 'log-Gaussian distribution in τ0' model reveals that the Stern-Volmer quenching plots would show little curvature, even at large ρ values, and the luminescence decays would become increasingly first order with increasing pO2. In fact, with real optical oxygen sensors the opposite is observed, and thus the model appears of little value. In contrast, a 'log-Gaussian distribution in kq' model does produce the trends observed with real optical oxygen sensors, although it is technically restricted in use to those sensors in which the kinetics of luminescence decay are good first order in the absence of oxygen. The latter model gives a good fit to the major response features of sensors that show this feature, most notably the [Ru(dpp)3]2+(Ph4B-)2-in-cellulose optical oxygen sensors. The scope of a log-Gaussian model for further expansion and, therefore, application to optical oxygen sensors, by combining a log-Gaussian distribution in kq with one in τ0, is briefly discussed.
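To make the comparison concrete: for an ideal single-site sensor the Stern-Volmer relation is I0/I = 1 + τ0·kq·pO2, and a spread in kq bends the plot downward because intensities, not ratios, average across sites. The sketch below integrates the quenched intensity over a log-normal (log-Gaussian) kq; all parameter values are invented for illustration and are not from the paper.

```python
# Stern-Volmer response under a log-Gaussian distribution in the quenching
# rate constant kq. Single site: I0/I = 1 + tau0*kq*pO2. With a kq spread,
# <I>/I0 is averaged over the distribution, which curves the plot downward.
import numpy as np

rng = np.random.default_rng(1)
tau0 = 5e-6        # unquenched lifetime (s); illustrative
kq_median = 1e7    # median quenching rate constant (atm^-1 s^-1); illustrative
rho = 0.8          # breadth of the log-Gaussian in kq
kq = kq_median * np.exp(rho * rng.standard_normal(100_000))

for pO2 in (0.05, 0.1, 0.2):                           # atm
    mean_I = np.mean(1.0 / (1.0 + tau0 * kq * pO2))    # <I>/I0 over kq spread
    linear = 1.0 + tau0 * kq_median * pO2              # single-site reference
    print(f"pO2={pO2:.2f}  observed I0/I={1.0/mean_I:.2f}  "
          f"single-site={linear:.2f}")
```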
Abstract:
The common cuttlefish, Sepia officinalis, is a nectobenthic cephalopod that can live in coastal ecosystems subject to strong anthropogenic pressures, and it can thus be vulnerable to exposure to various types of contaminants. The cuttlefish is a species of great importance to the local economy of Aveiro, as the overall catch data for this species in the Ria de Aveiro show. However, studies on this species in the Ria de Aveiro are scarce, so the present study aims to fill this information gap. The cuttlefish enters the Ria de Aveiro in spring and summer to reproduce, returning to deeper waters in winter. The eastern and central regions of the lagoon, closer to the sea, showed the highest abundance, while the northern and southern regions of the main channel had the lowest. This pattern may be related to abiotic factors such as depth, salinity and temperature. At the southernmost point of the Ria de Aveiro (Areão) no cuttlefish were caught; this site had the lowest values of salinity and depth. The cuttlefish shows allometric growth, with females being heavier than males at mantle lengths greater than 82.4 mm. Males reach sexual maturity earlier than females. In the Ria de Aveiro only a single parental generation was found. The cuttlefish is an opportunistic predator, consuming a wide variety of prey from different taxa. The diet was similar across sampling locations but showed significant differences between seasons. S. officinalis was captured at 10 sites in the Ria de Aveiro with different anthropogenic sources of contamination. Nevertheless, the levels of the metals analysed were similar at all sampling sites, with the exception of a restricted area, Laranjo, which showed higher values. The cuttlefish has the ability to accumulate metals in its body. The levels of Fe, Zn, Cu, Cd, Pb and Hg found in the digestive gland and mantle reflect a differential accumulation of metals in the tissues. This accumulation is related to the type and function of the tissue analysed and to the type of metal (essential or non-essential). The metal concentrations in the digestive gland are higher than in the mantle, with the exception of mercury; this may be due to the high affinity of the mantle for the incorporation of methylmercury (MeHg), the most abundant form of mercury. The accumulation of metals can vary over the lifetime, depending on the metal: the concentrations of Zn, Cd and Hg increase throughout life, while Pb decreases and essential metals such as Fe and Cu remain constant. The data collected suggest that the cuttlefish (Sepia officinalis) can be used as a bioindicator of environmental contamination for some metals.
Abstract:
Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications connected with reliability analysis has been discussed in terms of the distribution function. In the opening chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study in this connection by Nair and Sankaran (2009), the present work tries to extend their ideas and develop the necessary theoretical framework for lifetime data analysis. Chapter 1 gives the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 is devoted to the presentation of various concepts and brief reviews that are useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we point out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we study the first two L-moments of residual life and their relevance in various applications of reliability analysis. We show that the first L-moment of the residual life function is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we define the percentile residual life in reversed time (RPRL) and derive its relationship with the reversed hazard rate (RHR). We discuss the characterization problem of the RPRL and demonstrate with an example that the RPRL for a given percentile does not determine the distribution uniquely.
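The vitality function mentioned in this abstract is V(t) = E[X | X > t], the expected lifetime given survival to t, and it is closely tied to the first L-moment of residual life. A quick numerical sketch, using the exponential distribution because its memorylessness gives the known closed form V(t) = t + 1/λ:

```python
# Numerical sketch of the vitality function V(t) = E[X | X > t].
# For an exponential(lam) lifetime, V(t) = t + 1/lam exactly.
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5
x = rng.exponential(1.0 / lam, size=1_000_000)  # simulated lifetimes

for t in (0.0, 1.0, 3.0):
    survivors = x[x > t]                         # condition on X > t
    print(f"t={t}:  V(t) ~= {survivors.mean():.3f}  (exact {t + 1/lam})")
```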
Abstract:
Di Crescenzo and Longobardi (2002) introduced a measure of uncertainty in past lifetime distributions and studied its relationship with the residual entropy function. In the present paper, we introduce a quantile version of the entropy function in past lifetime and study its properties. Unlike the measure of uncertainty given in Di Crescenzo and Longobardi (2002), the proposed measure uniquely determines the underlying probability distribution. The measure is used to study two nonparametric classes of distributions. We prove characterization theorems for some well-known quantile lifetime distributions.
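For orientation, the classical past entropy of Di Crescenzo and Longobardi (2002) and a quantile form of it can be written as below; the second expression follows from the substitution x = Q(p) with q(p) = Q'(p) the quantile density function, and is a sketch of the kind of measure the paper studies, not necessarily its exact notation.

```latex
\bar{H}(t) = -\int_0^{t} \frac{f(x)}{F(t)}\,\log\frac{f(x)}{F(t)}\;dx,
\qquad
\bar{H}\!\left(Q(u)\right) = \log u + \frac{1}{u}\int_0^{u} \log q(p)\;dp .
```

Written this way, the measure depends only on the quantile density q(p), which is one route to the uniqueness property claimed in the abstract.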
Abstract:
Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull-distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
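A sketch of the kind of statistic described: fit the null Weibull model, form the Cox-Snell residuals (unit exponential under the null), and sum the k largest order statistics. The choice of k, the use of scipy's Weibull fit, and the brute-force critical value are illustrative assumptions; the paper instead recommends values of k and gives formulae for estimated critical values.

```python
# Frailty test sketch: under the null Weibull model the Cox-Snell residuals
# r_i = (t_i / scale)**shape are unit exponential; unusually large residuals
# suggest gamma-frailty-like heterogeneity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t = rng.weibull(1.5, size=80) * 2.0               # toy uncensored lifetimes

shape, _, scale = stats.weibull_min.fit(t, floc=0)  # null-model fit
r = np.sort((t / scale) ** shape)                 # ordered Cox-Snell residuals

k = 5                                             # illustrative choice
stat = r[-k:].sum()                               # sum of the k largest

# Rough null critical value by simulation (ignores estimation effects,
# which the paper's estimated-critical-value formulae account for):
null = [np.sort(rng.exponential(size=t.size))[-k:].sum()
        for _ in range(2000)]
print("statistic:", round(stat, 2),
      " approx. 5% critical value:", round(np.quantile(null, 0.95), 2))
```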
Abstract:
In this paper, we formulate a flexible density function from the selection-mechanism viewpoint (see, for example, Bayarri and DeGroot (1992) and Arellano-Valle et al. (2006)) which possesses nice biological and physical interpretations. The new density function contains as special cases many models that have been proposed recently in the literature. In constructing this model, we assume that the number of competing causes of the event of interest has a general discrete distribution characterized by its probability generating function. This function plays an important role in the selection procedure as well as in computing the conditional personal cure rate. Finally, we illustrate how various models can be deduced as special cases of the proposed model.
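The role of the probability generating function can be made explicit with the standard competing-causes construction (a generic sketch, not the paper's exact notation): if N latent causes have i.i.d. activation times with common survival function S(t), and A(s) = E[s^N] is the pgf of N, then

```latex
S_{\mathrm{pop}}(t) \;=\; \Pr(\text{no cause active by } t) \;=\; A\!\left(S(t)\right),
\qquad
p_0 \;=\; S_{\mathrm{pop}}(\infty) \;=\; A(0) \;=\; \Pr(N = 0),
```

so the cure rate is read directly off the pgf at zero, and different choices of the distribution of N recover different published models as special cases.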
Abstract:
The generalized Birnbaum-Saunders distribution pertains to a class of lifetime models including both lighter- and heavier-tailed distributions. This model adapts well to lifetime data, even when outliers exist, and has other good theoretical properties and application perspectives. However, statistical inference tools may not exist in closed form for this model. Hence, simulation and numerical studies are needed, which require a random number generator. Three different ways to generate observations from this model are considered here. These generators are compared through a goodness-of-fit procedure as well as by their effectiveness in recovering the true parameter values in Monte Carlo simulations. The goodness-of-fit procedure may also be used as an estimation method, and its quality is studied here. Finally, through a real data set, the generalized and classical Birnbaum-Saunders models are compared using this estimation method.
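For the classical (not the generalized) Birnbaum-Saunders distribution there is a well-known generator based on a single standard normal draw, which gives a flavour of what the paper's three generators must accomplish for the wider class. The parameter values below are illustrative.

```python
# Standard generator for the classical Birnbaum-Saunders(alpha, beta):
# if Z ~ N(0,1), then T = beta*(alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2
# follows BS(alpha, beta). The generalized model needs other schemes.
import numpy as np

def rbirnbaum_saunders(alpha, beta, size, rng):
    z = rng.standard_normal(size)
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

rng = np.random.default_rng(4)
t = rbirnbaum_saunders(alpha=0.5, beta=2.0, size=100_000, rng=rng)

# Quick sanity check: the median of BS(alpha, beta) is beta.
print("sample median:", round(float(np.median(t)), 3), " (theory: 2.0)")
```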
Abstract:
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right-censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in this case, since they do not require one to obtain, estimate or invert an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out to investigate and compare the finite-sample performance of the likelihood ratio and gradient tests. Our numerical results give evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right-censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented.
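For reference, the two statistics compared in this abstract have simple forms; written here for a simple null hypothesis θ = θ0, with ℓ the log-likelihood, θ̂ the maximum likelihood estimator and U = ∂ℓ/∂θ the score, and both asymptotically χ²-distributed under the null. Neither requires the information matrix that the score and Wald statistics need, which is the advantage cited above.

```latex
\mathrm{LR} = 2\left\{\ell(\hat{\theta}) - \ell(\theta_0)\right\},
\qquad
S_T = U(\theta_0)^{\top}\left(\hat{\theta} - \theta_0\right).
```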