907 results for Unconstrained minimization
Abstract:
Australia ranks high internationally in the prevalence of cannabis and other illicit drug use, with the prevalence of all illicit drug use increasing since the 1970s. Two distinctive features are associated with harms from injecting drug use: high rates of death from heroin overdose and low rates of HIV infection. Australia has largely avoided a punitive and moralistic drug policy, developing instead harm minimization strategies and a robust treatment framework embedded in a strong law enforcement regime. Two illustrations of Australian drug policy are presented: legislation that provides for the expiation of simple cannabis offences by payment of a fine, and the widespread implementation of agonist maintenance treatment for heroin dependence.
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology that is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of the system component outage parameters. Using statistical records makes it possible to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, which reschedules generation to alleviate constraint violations while avoiding any load curtailment if possible or, otherwise, minimizing the total load curtailment for the states identified by the contingency analysis. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study of the 1996 IEEE 24-bus Reliability Test System (RTS).
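As a rough sketch of this hybrid idea (not the paper's implementation: the triangular outage rates, the three-line test case and the toy curtailment rule below are invented, and the optimal-power-flow remedial action is replaced by a simple capacity check), fuzzy outage parameters can be propagated through Monte Carlo sampling by evaluating them at alpha-cuts:

```python
# Illustrative sketch only: hybrid fuzzy / Monte Carlo sampling of component
# outages with a toy curtailment rule. All data below are hypothetical.
import random

# Fuzzy (triangular) outage probabilities per component: (low, mode, high)
fuzzy_outage = {"line_1": (0.01, 0.02, 0.04),
                "line_2": (0.02, 0.03, 0.05),
                "line_3": (0.01, 0.015, 0.02)}

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def sample_state(prob):
    """Monte Carlo draw of component states; True means in service."""
    return {c: (random.random() >= p) for c, p in prob.items()}

def curtailment(state, demand=100.0, line_capacity=60.0):
    """Toy remedial rule: curtail whatever the surviving lines cannot carry."""
    capacity = line_capacity * sum(state.values())
    return max(0.0, demand - capacity)

def fuzzy_expected_curtailment(alpha, n_samples=20000):
    """Lower/upper expected curtailment at one alpha-cut of the fuzzy rates."""
    results = []
    for pick in (0, 1):  # optimistic / pessimistic bound of each interval
        prob = {c: alpha_cut(tri, alpha)[pick] for c, tri in fuzzy_outage.items()}
        total = sum(curtailment(sample_state(prob)) for _ in range(n_samples))
        results.append(total / n_samples)
    return min(results), max(results)

if __name__ == "__main__":
    for alpha in (0.0, 0.5, 1.0):
        lo, hi = fuzzy_expected_curtailment(alpha)
        print(f"alpha={alpha:.1f}: expected curtailment in [{lo:.2f}, {hi:.2f}] MW")
```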
Abstract:
OBJECTIVE: To evaluate dispersal of Aedes aegypti females in an area with no container manipulation and no geographic barriers to constrain mosquito flight. METHODS: A mark-release-recapture experiment was conducted in December 2006, in the dengue endemic urban district of Olaria in Rio de Janeiro, Southeastern Brazil, where there is no evident obstacle to the dispersal of Ae. aegypti females. Mosquito traps were installed in 192 houses (96 Adultraps and 96 MosquiTRAPs). RESULTS: A total of 725 dust-marked gravid females were released and recapture rate was 6.3%. Ae. aegypti females traveled a mean distance of 288.12 m and their maximum displacement was 690 m; 50% and 90% of females flew up to 350 m and 500.2 m, respectively. CONCLUSIONS: Dispersal of Ae. aegypti females in Olaria was higher than in areas with physical and geographical barriers. There was no evidence of a preferred direction during mosquito flight, which was considered random or uniform from the release point.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
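As a point of reference for the constrained least-squares route, the following is a minimal sketch, assuming known endmember signatures and synthetic data; it enforces the sum-to-one constraint through the standard row-augmentation trick rather than any specific algorithm from the cited references:

```python
# Illustrative sketch only: fully constrained least-squares (FCLS-style)
# abundance estimation for the linear mixing model, with synthetic data.
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(M, y, delta=1e3):
    """Estimate abundances a >= 0 with sum(a) ~ 1 such that y ~= M @ a.

    The sum-to-one constraint is imposed (approximately) by appending a
    heavily weighted row of ones to the signature matrix, which turns the
    problem into a plain nonnegative least-squares problem.
    """
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    y_aug = np.concatenate([y, [delta]])
    a, _ = nnls(M_aug, y_aug)
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.uniform(0.0, 1.0, size=(50, 3))            # 50 bands, 3 endmembers
    a_true = np.array([0.6, 0.3, 0.1])                 # true abundance fractions
    y = M @ a_true + 0.001 * rng.standard_normal(50)   # noisy mixed pixel
    print("estimated abundances:", np.round(fcls_abundances(M, y), 3))
```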
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and the noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] also find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
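The following sketch, with invented parameter values, generates data in the spirit of such a generative model: Dirichlet (sum-to-one) abundances, a per-pixel topography modulation factor, mild signature variability, and additive noise. It is only an illustration, not the chapter's exact formulation:

```python
# Hedged illustration of the generative model described above; all sizes
# and noise levels are made up.
import numpy as np

def simulate_pixels(M, n_pixels, alpha=1.0, sig_var=0.01, noise_std=0.005,
                    rng=None):
    """Generate noisy linear mixtures y = gamma * (M + perturbation) @ a + n."""
    rng = rng or np.random.default_rng()
    L, p = M.shape
    A = rng.dirichlet(alpha * np.ones(p), size=n_pixels)      # sum-to-one abundances
    gamma = rng.uniform(0.8, 1.0, size=(n_pixels, 1))         # topography modulation
    Y = np.empty((n_pixels, L))
    for i in range(n_pixels):
        M_i = M + sig_var * rng.standard_normal(M.shape)      # signature variability
        Y[i] = gamma[i] * (M_i @ A[i]) + noise_std * rng.standard_normal(L)
    return Y, A

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    M = rng.uniform(size=(100, 3))        # 100 bands, 3 endmember signatures
    Y, A = simulate_pixels(M, n_pixels=500, rng=rng)
    print(Y.shape, A.sum(axis=1)[:5])     # abundances sum to one per pixel
```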
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
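A hedged sketch of the kind of experiment this limitation suggests (not the chapter's actual study): apply FastICA to simulated linear mixtures with sum-to-one Dirichlet abundances and check how well the recovered sources correlate with the true fractions. All sizes and noise levels are arbitrary:

```python
# Illustrative sketch (assumed): ICA applied to mixtures whose abundances
# violate the independence assumption through the sum-to-one constraint.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_pixels, n_bands, p = 2000, 100, 4
M = rng.uniform(size=(n_bands, p))                  # endmember signatures
A_true = rng.dirichlet(np.ones(p), size=n_pixels)   # sum-to-one abundances
Y = A_true @ M.T + 0.005 * rng.standard_normal((n_pixels, n_bands))

ica = FastICA(n_components=p, random_state=0, max_iter=1000)
S = ica.fit_transform(Y)                            # estimated "independent" sources

# Best absolute correlation of each true abundance with any ICA source;
# values well below 1 indicate fractions that were not separated, the
# failure mode the text attributes to the abundance dependence.
corr = np.abs(np.corrcoef(A_true.T, S.T)[:p, p:])
print(np.round(corr.max(axis=1), 2))
```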
Abstract:
In this paper we address an order processing optimization problem known as the Minimization of Open Stacks Problem (MOSP). This problem consists of finding the best sequence for manufacturing the different products required by customers, in a setting where only one product can be made at a time. The objective is to minimize the maximum number of incomplete customer orders that are being processed simultaneously. We present an integer programming model, based on the existence of a perfect elimination order in interval graphs, which finds an optimal sequence for the customers' orders. Among other economic advantages, manufacturing the products in this optimal sequence reduces the amount of space needed to store incomplete orders.
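As a small illustration of the objective (not the paper's integer programming model), the sketch below evaluates the maximum number of simultaneously open stacks for a given product sequence on a made-up set of customer orders, and brute-forces tiny instances:

```python
# Illustrative sketch only: computing the MOSP objective for a sequence.
from itertools import permutations

# orders[c] = set of products required by customer c (hypothetical example)
orders = {"c1": {0, 2}, "c2": {1, 2, 3}, "c3": {0, 3}, "c4": {1, 4}}

def max_open_stacks(sequence, orders):
    """Maximum number of customer orders open at once under this sequence."""
    position = {prod: i for i, prod in enumerate(sequence)}
    opens = {c: min(position[p] for p in prods) for c, prods in orders.items()}
    closes = {c: max(position[p] for p in prods) for c, prods in orders.items()}
    best = 0
    for t in range(len(sequence)):
        open_now = sum(1 for c in orders if opens[c] <= t <= closes[c])
        best = max(best, open_now)
    return best

if __name__ == "__main__":
    products = sorted({p for prods in orders.values() for p in prods})
    # Brute force over all sequences is feasible only for tiny instances;
    # the paper's interval-graph integer programming model is what scales.
    best_seq = min(permutations(products), key=lambda s: max_open_stacks(s, orders))
    print(best_seq, max_open_stacks(best_seq, orders))
```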
Abstract:
Most distributed generation and smart grid research is dedicated to the study of network operation parameters, reliability, and related topics. However, many of these works use traditional test systems such as the IEEE test systems. This work proposes a voltage magnitude study in the presence of fault conditions, considering the realistic specifications found in countries like Brazil. The methodology combines a hybrid method of fuzzy sets and Monte Carlo simulation, based on fuzzy-probabilistic models, with a remedial action algorithm based on optimal power flow. To illustrate the application of the proposed method, the paper includes a case study that considers a real 12-bus sub-transmission network.
Abstract:
Most distributed generation and smart grid research is dedicated to studies of network operation parameters, reliability, and so on. However, many of these works use traditional test systems, for instance, the IEEE test systems. This paper proposes voltage magnitude and reliability studies in the presence of fault conditions, considering realistic conditions found in countries like Brazil. The methodology considers a hybrid method of fuzzy sets and Monte Carlo simulation, based on fuzzy-probabilistic models, and a remedial action algorithm based on optimal power flow. To illustrate the application of the proposed method, the paper includes a case study that considers a real 12-bus sub-transmission network.
Abstract:
The authors propose a mathematical model to minimize the total project cost when there are multiple resources constrained by maximum availability. They assume that the resources are renewable and that the activities can use any subset of resources, requiring any quantity from a limited real interval. The stochastic nature is introduced by means of a stochastic work content, defined per resource within an activity and following a known distribution; the total cost is the sum of the resource allocation cost and either a tardiness cost or an earliness bonus, depending on whether the project finishes after or before the due date. The model was computationally implemented relying on an interchange of two global optimization metaheuristics, the electromagnetism-like mechanism and evolutionary strategies. Two experiments were conducted, testing the implementation on projects with single and multiple resources, with and without maximum availability constraints. The collected results show good behavior in general and provide a tool to further assist project managers' decision making in the planning phase.
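A minimal sketch of this cost structure, with hypothetical names and figures (the stochastic work content and the metaheuristic search are left out):

```python
# Illustrative sketch only: allocation cost plus tardiness cost or minus
# earliness bonus, depending on the finish time relative to the due date.
def total_cost(allocation_cost, finish_time, due_date,
               tardiness_cost_per_day, earliness_bonus_per_day):
    """Total project cost as described in the abstract (simplified)."""
    if finish_time > due_date:
        return allocation_cost + (finish_time - due_date) * tardiness_cost_per_day
    return allocation_cost - (due_date - finish_time) * earliness_bonus_per_day

# Example: a project that finishes 3 days late.
print(total_cost(allocation_cost=10_000, finish_time=33, due_date=30,
                 tardiness_cost_per_day=500, earliness_bonus_per_day=200))
```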
Abstract:
Quenching process, TRIP, J2-plasticity theory, phase transition, distortion
Abstract:
The Whitehead minimization problem consists of finding a minimum size element in the automorphic orbit of a word, a cyclic word or a finitely generated subgroup in a finite rank free group. We give the first fully polynomial algorithm to solve this problem, that is, an algorithm that is polynomial both in the length of the input word and in the rank of the free group. Earlier algorithms had an exponential dependency on the rank of the free group. It follows that the primitivity problem (deciding whether a word is an element of some basis of the free group) and the free factor problem can also be solved in polynomial time.
Abstract:
It is established that the ratio between step length (SL) and step frequency (SF) is constant over a large range of walking speeds. However, few data are available about the spontaneous variability of this ratio during unconstrained outdoor walking, in particular over a sufficient number of steps. The purpose of the present study was to assess the inter- and intra-subject variability of spatio-temporal gait characteristics [SL, SF and walk ratio (WR = SL/SF)] while walking at different freely selected speeds. Twelve healthy subjects walked three times along a 100-m athletic track at (1) a slower than preferred speed, (2) their preferred speed and (3) a faster than preferred speed. Two professional GPS receivers providing 3D positions assessed the walking speed and SF with high precision (less than 0.5% error). Intra-subject variability was calculated as the variation among eight consecutive 5-s samples. WR was found to be constant at preferred and fast speeds [0.41 (0.04) m·s and 0.41 (0.05) m·s, respectively] but was higher at slow speeds [0.44 (0.05) m·s]. In other words, between slow and preferred speeds, the speed increase was mediated more by a change in SF than in SL. The intra-subject variability of WR was low under the preferred [coefficient of variation, CV = 1.9 (0.6)%] and fast [CV = 1.8 (0.5)%] speed conditions, but higher under the slow speed condition [CV = 4.1 (1.5)%]. On the other hand, the inter-subject variability of WR was 11%, 10% and 12% at slow, preferred and fast walking speeds, respectively. It is concluded that the GPS method is able to capture basic gait parameters over a short period of time (5 s). A specific gait pattern for slow walking was observed. Furthermore, it seems that walking patterns in free-living conditions exhibit low intra-individual variability, but that there is substantial variability between subjects.
Abstract:
Long-term outcomes after kidney transplantation remain suboptimal, despite the great achievements observed in recent years with the use of modern immunosuppressive drugs. Currently, the calcineurin inhibitors (CNI) cyclosporine and tacrolimus remain the cornerstones of immunosuppressive regimens in many centers worldwide, despite their well-described side-effects, including nephrotoxicity. In this article, we review recent CNI-minimization strategies in kidney transplantation, while emphasizing the importance of long-term follow-up and patient monitoring. Finally, accumulating data indicate that low-dose CNI-based regimens may provide an interesting balance between efficacy and toxicity.
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterative conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
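For orientation, a hedged sketch of the iterative conditional modes baseline on a 2-D slice follows, assuming known class means and variances and a simple Potts smoothness term; it is not the paper's implementation, which works on 3-D volumes with estimated model parameters:

```python
# Illustrative sketch only: ICM for a Gaussian mixture MRF on a 2-D image.
import numpy as np

def icm_segment(image, means, variances, beta=1.0, n_iter=10):
    """Label each pixel by minimizing data energy + Potts smoothness energy."""
    K = len(means)
    # Data term: negative Gaussian log-likelihood per class, per pixel.
    data = np.stack([0.5 * np.log(2 * np.pi * variances[k])
                     + (image - means[k]) ** 2 / (2 * variances[k])
                     for k in range(K)], axis=-1)
    labels = data.argmin(axis=-1)          # initial maximum-likelihood labels
    H, W = image.shape
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                best_k, best_e = labels[y, x], np.inf
                for k in range(K):
                    e = data[y, x, k]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != k:
                            e += beta      # Potts penalty for disagreeing neighbors
                    if e < best_e:
                        best_k, best_e = k, e
                labels[y, x] = best_k
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((32, 32), dtype=int)
    truth[:, 16:] = 1
    img = rng.normal(loc=np.where(truth == 1, 1.0, 0.0), scale=0.4)
    seg = icm_segment(img, means=[0.0, 1.0], variances=[0.16, 0.16], beta=0.8)
    print("accuracy:", (seg == truth).mean())
```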
Abstract:
Body accelerations during human walking were recorded by a portable measuring device. A new method for parameterizing body accelerations and finding the pattern of walking is outlined. Two neural networks were designed to recognize each pattern and estimate the speed and incline of walking. Six subjects performed treadmill walking followed by self-paced walking on an outdoor test circuit involving roads of various inclines. The neural networks were first "trained" on known patterns of treadmill walking. Then the inclines, the speeds, and the distance covered during overground walking (outdoor circuit) were estimated. The results show good agreement between actual and predicted variables. The standard deviation of the estimated incline was less than 2.6% and the maximum coefficient of variation of the speed estimation was 6%. To the best of our knowledge, these results constitute the first assessment of speed, incline and distance covered during level and slope walking, and offer investigators a new tool for assessing levels of outdoor physical activity.
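As a loose illustration of this kind of pipeline (entirely synthetic: the features, targets and network size below are invented stand-ins for the study's accelerometer data and networks), a small regressor can be trained to map acceleration features to speed and incline:

```python
# Illustrative sketch only: regressing walking speed and incline from
# hypothetical summary features of body acceleration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
speed = rng.uniform(0.8, 2.0, n)      # m/s
incline = rng.uniform(-10, 10, n)     # percent grade
# Hypothetical acceleration features loosely driven by speed and incline.
features = np.column_stack([
    speed + 0.05 * rng.standard_normal(n),                      # step-frequency proxy
    0.3 * speed + 0.02 * incline + 0.05 * rng.standard_normal(n),
    0.1 * incline + 0.05 * rng.standard_normal(n),
])
targets = np.column_stack([speed, incline])

X_tr, X_te, y_tr, y_te = train_test_split(features, targets, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```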