947 results for Dependent variable problem
Abstract:
This paper addresses the problem of short-term hydro scheduling, particularly for head-dependent cascaded hydro systems. We propose a novel mixed-integer quadratic programming approach that considers not only head-dependency, but also discontinuous operating regions and discharge ramping constraints. The more realistic modeling presented in this paper thus provides an enhanced short-term hydro scheduling. Numerical results from two case studies, based on Portuguese cascaded hydro systems, illustrate the effectiveness of the proposed approach.
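As a rough sketch of the kind of formulation described (generic notation, not the paper's exact model), a head-dependent short-term hydro scheduling MIQP for a cascade might read:

```latex
\begin{align*}
\max_{p,\,q,\,s,\,v,\,u}\ & \sum_{t}\lambda_{t}\sum_{i} p_{i,t} \\
\text{s.t.}\ & p_{i,t} \approx \eta_{i}\, q_{i,t}\, h_{i}(v_{i,t})
  && \text{(head-dependent power; quadratic in discharge and storage)} \\
& v_{i,t} = v_{i,t-1} + a_{i,t} - q_{i,t} - s_{i,t} + \sum_{j \in U_{i}} \bigl(q_{j,t} + s_{j,t}\bigr)
  && \text{(cascade water balance)} \\
& q_{i}^{\min}\, u_{i,t} \le q_{i,t} \le q_{i}^{\max}\, u_{i,t},\quad u_{i,t} \in \{0,1\}
  && \text{(discontinuous operating regions)} \\
& \lvert q_{i,t} - q_{i,t-1} \rvert \le \Delta q_{i}
  && \text{(discharge ramping constraints)}
\end{align*}
```

Here \(\lambda_t\) is the energy price, \(q\), \(s\) and \(v\) denote discharge, spillage and storage, and \(U_i\) indexes the plants immediately upstream of plant \(i\); the quadratic power term is one common way to capture head dependence.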
Abstract:
An improved class of Boussinesq systems of arbitrary order, based on a wave surface elevation and velocity potential formulation, is derived. Dissipative effects and wave generation due to a time-dependent varying seabed are included; thus, high-order source functions are considered. To reduce the system order while maintaining some dispersive characteristics of the higher-order models, an extra O(μ^{2n+2}) term (n ∈ ℕ) is included in the velocity potential expansion. We introduce a nonlocal continuous/discontinuous Galerkin FEM with interior penalty terms to compute the numerical solutions of the improved fourth-order models. The spatial variables are discretized using continuous P2 Lagrange elements. A predictor-corrector scheme, initialized by an explicit Runge-Kutta method, is used for the time integration. Moreover, a CFL-type condition is deduced for the linear problem with constant bathymetry. To demonstrate the applicability of the model, several test cases are considered. Improved stability is achieved.
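As a generic illustration of the time-stepping strategy mentioned (a predictor-corrector scheme started by an explicit Runge-Kutta step), and not the paper's actual semi-discrete Boussinesq system, a minimal Adams-Bashforth/Adams-Moulton pair for an ODE system y' = f(t, y) might look like:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step (used to start the multistep scheme)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def predictor_corrector(f, y0, t0, t_end, dt):
    """Two-step Adams-Bashforth predictor / trapezoidal Adams-Moulton corrector,
    initialized with an explicit Runge-Kutta step."""
    ts = [t0]
    ys = [np.asarray(y0, dtype=float)]
    ys.append(rk4_step(f, t0, ys[0], dt))   # bootstrap the multistep scheme
    ts.append(t0 + dt)
    while ts[-1] < t_end - 1e-12:
        t_nm1, t_n = ts[-2], ts[-1]
        f_nm1, f_n = f(t_nm1, ys[-2]), f(t_n, ys[-1])
        y_pred = ys[-1] + dt * (1.5 * f_n - 0.5 * f_nm1)        # predictor
        y_corr = ys[-1] + dt * 0.5 * (f_n + f(t_n + dt, y_pred))  # corrector
        ys.append(y_corr)
        ts.append(t_n + dt)
    return np.array(ts), np.array(ys)

if __name__ == "__main__":
    # Toy check on y' = -y, whose exact solution is exp(-t).
    ts, ys = predictor_corrector(lambda t, y: -y, [1.0], 0.0, 1.0, 0.01)
    print(ys[-1, 0], np.exp(-1.0))
```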
Abstract:
Energy efficiency plays an important role in reducing CO2 emissions, combating climate change and improving the competitiveness of the economy. The problem presented here concerns the use of stand-alone diesel gen-sets and their high specific fuel consumption when operating at low loads. The variable-speed gen-set concept is explained as an energy-saving solution to improve the efficiency of this system. This paper details how an optimum fuel consumption trajectory is obtained from an experimentally measured diesel engine power map.
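To make the idea concrete, a minimal sketch of selecting, for each load demand, the engine speed that minimizes specific fuel consumption could look as follows; the map below is a made-up placeholder, not the paper's experimental data:

```python
import numpy as np

def bsfc_map(speed_rpm, power_kw):
    """Placeholder brake-specific fuel consumption map (g/kWh).
    A real map would be interpolated from engine test-bench data."""
    # Purely illustrative bowl-shaped surface with a minimum near mid speed/load.
    return 210 + 0.00004 * (speed_rpm - 1800) ** 2 + 0.02 * (power_kw - 40) ** 2

def optimal_speed_trajectory(power_setpoints_kw, speed_grid_rpm):
    """For each power demand, pick the engine speed with the lowest BSFC."""
    trajectory = []
    for p in power_setpoints_kw:
        consumption = [bsfc_map(n, p) for n in speed_grid_rpm]
        trajectory.append((p, speed_grid_rpm[int(np.argmin(consumption))]))
    return trajectory

if __name__ == "__main__":
    speeds = np.arange(1000, 3001, 10)   # candidate engine speeds (rpm)
    powers = [5, 10, 20, 40, 60]         # load steps (kW)
    for p, n in optimal_speed_trajectory(powers, speeds):
        print(f"{p:5.1f} kW -> run at {n} rpm")
```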
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other approaches, using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27], have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward. 
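Under the linear mixing model described above, a minimal sketch of the constrained least-squares route (assuming the endmember signatures are known; synthetic data, not the chapter's implementation) might look like:

```python
import numpy as np
from scipy.optimize import minimize

def unmix_pixel(y, M):
    """Constrained least-squares abundance estimate for one pixel.

    y : (bands,) observed spectrum
    M : (bands, endmembers) known endmember signatures
    Returns abundances that are nonnegative and sum to one, as the linear mixing model requires.
    """
    p = M.shape[1]
    a0 = np.full(p, 1.0 / p)                        # start from uniform abundances
    objective = lambda a: np.sum((y - M @ a) ** 2)  # squared reconstruction error
    constraints = ({"type": "eq", "fun": lambda a: np.sum(a) - 1.0},)
    bounds = [(0.0, 1.0)] * p
    res = minimize(objective, a0, method="SLSQP", bounds=bounds, constraints=constraints)
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.uniform(0.0, 1.0, size=(50, 3))          # 50 bands, 3 synthetic endmembers
    a_true = np.array([0.6, 0.3, 0.1])
    y = M @ a_true + rng.normal(0, 0.001, size=50)   # noisy linear mixture
    print(np.round(unmix_pixel(y, M), 3))            # should be close to a_true
```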
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is well suited to unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A recently introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on the performance of ICA and IFA. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. 
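As a generic illustration of the dimensionality-reduction step mentioned above (a plain SVD projection onto a low-dimensional signal subspace; not MNF or the chapter's specific method), one might write:

```python
import numpy as np

def reduce_dimension(X, k):
    """Project spectral vectors onto a k-dimensional signal subspace via SVD.

    X : (pixels, bands) matrix of observed spectra
    k : target subspace dimension (e.g., the assumed number of endmembers)
    Returns the mean-removed data expressed in the first k right-singular directions,
    plus the basis and mean needed to map back to the original band space.
    """
    mean = X.mean(axis=0)
    Xc = X - mean                        # remove the data mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.uniform(size=(224, 4))                        # 224 bands, 4 synthetic endmembers
    A = rng.dirichlet(np.ones(4), size=5000)              # sum-to-one abundances
    X = A @ M.T + rng.normal(0, 0.01, size=(5000, 224))   # noisy mixed pixels
    Z, basis, mu = reduce_dimension(X, 4)
    print(Z.shape)   # (5000, 4): most of the signal lives in a 4-dimensional subspace
```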
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometry-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
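A toy illustration (not the chapter's EM algorithm) of the constraints the Dirichlet source model enforces by construction, and of the dependence among abundances that the sum-to-one constraint induces:

```python
import numpy as np

# Abundances drawn from a Dirichlet distribution are nonnegative and sum to one
# by construction, and the sum-to-one constraint makes them statistically
# dependent (negatively correlated), which is the property said above to
# compromise ICA/IFA when applied to hyperspectral abundance fractions.
rng = np.random.default_rng(0)
A = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=100000)   # 3 endmembers
print(np.allclose(A.sum(axis=1), 1.0))                  # True: full additivity
print(np.round(np.corrcoef(A.T), 2))                    # off-diagonal entries are negative
```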
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the Degree of Master in Informatics Engineering.
Abstract:
Information systems are widespread and are used by individuals with computing devices as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them is the fact that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can themselves be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory, but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, it verifies whether programs protect the confidentiality of the information they manipulate. As such, we also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
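To convey the idea of a security level that depends on a runtime value, here is a rough dynamic analogue in Python (the thesis enforces this statically through its type system; all names below are illustrative, not the thesis's syntax):

```python
from dataclasses import dataclass

# Dynamic stand-in for a value-dependent security label: the level of the stored
# data depends on a runtime value (the user id), i.e. which logical compartment
# the data belongs to. The thesis checks such flows at compile time instead.

@dataclass
class Labeled:
    value: object
    level: str          # security level, e.g. "public" or "user:<uid>"

def record_for(uid: int, body: str) -> Labeled:
    # The security level of the body depends on the runtime value uid.
    return Labeled(value=body, level=f"user:{uid}")

def read_as(reader_level: str, item: Labeled):
    # A flow from item.level to reader_level is only allowed if the levels match
    # or the data is public; otherwise we flag a potential confidentiality leak.
    if item.level in ("public", reader_level):
        return item.value
    raise PermissionError(f"illegal flow from {item.level} to {reader_level}")

if __name__ == "__main__":
    msg = record_for(42, "private note")
    print(read_as("user:42", msg))      # allowed
    try:
        read_as("user:7", msg)          # rejected: would leak user 42's data
    except PermissionError as e:
        print(e)
```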
Abstract:
PURPOSE: To determine the frequency of coronary artery disease and microalbuminuria and their relation to lipid profile disorders, blood pressure, and clinical and metabolic features. METHODS: Fifty-five type 2 diabetic patients (32 females, 23 males), aged 59.9±9 years and with a known diabetes duration of 11±7.3 years, were studied. Coronary artery disease (CAD) was defined as a positive history of myocardial infarction, typical angina, myocardial revascularization or positive stress testing. Microalbuminuria was defined when two out of three overnight urine samples had a urinary albumin excretion ranging from 20 to 200 µg/min. RESULTS: CAD was present in 24 patients (43.6%). High blood pressure (HBP) was present in 32 patients (58.2%) and was more frequent in the CAD group (p=0.05); HBP increased the risk of CAD 3.7 times (CI [1.14-12]). Microalbuminuria was present in 25 patients (45.5%) and tended to be associated with higher systolic blood pressure (SBP) (p = 0.06), presence of hypertension (p = 0.06) and known diabetes duration (p = 0.08). In the stepwise multiple logistic regression, systolic blood pressure was the only variable that influenced UAE (r = 0.39, r² = 0.14, p = 0.01). The hypertensive patients had higher cholesterol levels (p = 0.04). CONCLUSION: In our sample the frequency of microalbuminuria, hypertension, hypercholesterolemia and CHD was high. Since diabetes is an independent risk factor for cardiovascular disease, the association of other risk factors suggests the need for intensive therapeutic intervention in both primary and secondary prevention.
Abstract:
OBJECTIVE: To assess the association of microalbuminuria with ambulatory blood pressure monitoring in normotensive individuals with insulin-dependent diabetes mellitus. METHODS: Thirty-seven patients underwent determination of the urinary albumin excretion rate by radioimmunoassay and ambulatory blood pressure monitoring. Their mean age was 26.5±6.7 years, and the mean duration of their disease was 8 (1-34) years. Microalbuminuria was defined as urinary albumin excretion ≥20 and <200 µg/min in at least 2 out of 3 urine samples. RESULTS: Nine (24.3%) patients were microalbuminuric. On ambulatory blood pressure monitoring, the microalbuminuric patients had higher mean pressure values, mainly systolic pressure during sleep, compared with the normoalbuminuric patients (120.1±8.3 vs 110.8±7.1 mmHg; p=0.007). The pressure load was higher in the microalbuminuric individuals, mainly the systolic pressure load during wakefulness [6.3% (2.9-45.9%) vs 1.6% (0-16%); p=0.001]. This was the variable that best correlated with urinary albumin excretion (rS=0.61; p<0.001). Systolic pressure load >50% and diastolic pressure load >30% during sleep were associated with microalbuminuria (p=0.008). The pressure drop during sleep did not differ between the groups. CONCLUSION: Microalbuminuric normotensive insulin-dependent diabetic patients show greater mean pressure values and pressure loads on ambulatory blood pressure monitoring, and these variables correlate with urinary albumin excretion.
Abstract:
We say the endomorphism problem is solvable for an element W in a free group F if it can be decided effectively whether, given U in F, there is an endomorphism Φ of F sending W to U. This work analyzes an approach due to C. Edmunds and improved by C. Sims. We prove that, when W is a two-generator word, this approach provides an algorithm that solves the endomorphism problem in time polynomial in the length of U. This result gives a polynomial-time algorithm for solving, in free groups, two-variable equations in which all the variables occur on one side of the equality and all the constants on the other side.
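In symbols (restating the abstract; the specific word used below is only an illustrative example):

```latex
% Endomorphism problem for a fixed W in the free group F = F(a,b):
%   given U \in F, decide whether
\exists\, \varphi \in \operatorname{End}(F) \ \text{such that}\ \varphi(W) = U.
% Connection to one-sided equations: for a two-generator word such as W = a^{2}b^{3},
% the two-variable equation x^{2}y^{3} = U (variables on one side, constants on the
% other) has a solution in F exactly when U is an endomorphic image of W, since any
% assignment a \mapsto x, b \mapsto y extends uniquely to an endomorphism of F.
```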
Abstract:
BACKGROUND: The mutation status of the BRAF and KRAS genes has been proposed as a prognostic biomarker in colorectal cancer. Of these, only the BRAF V600E mutation has been independently validated as prognostic for overall survival and survival after relapse, while the prognostic value of the KRAS mutation is still unclear. We investigated the prognostic value of BRAF and KRAS mutations in various contexts defined by stratifications of the patient population. METHODS: We retrospectively analyzed a cohort of patients with stage II and III colorectal cancer from the PETACC-3 clinical trial (N = 1,423) by assessing the prognostic value of the BRAF and KRAS mutations in subpopulations defined by all possible combinations of the following clinico-pathological variables: T stage, N stage, tumor site, tumor grade and microsatellite instability status. In each such subpopulation, the prognostic value was assessed by log-rank test for three endpoints: overall survival, relapse-free survival, and survival after relapse. The significance level was set to 0.01 for Bonferroni-adjusted p-values, and a second threshold for a trend towards statistical significance was set at 0.05 for unadjusted p-values. The significance of the interactions was tested by Wald test, with a significance level of 0.05. RESULTS: In stage II-III colorectal cancer, the BRAF mutation was confirmed as a marker of poor survival only in subpopulations involving microsatellite-stable and left-sided tumors, with higher effects than in the whole population. There was no evidence of prognostic value in the microsatellite-instable or right-sided tumor groups. We found that BRAF was also prognostic for relapse-free survival in some subpopulations. We found no evidence that KRAS mutations had prognostic value, although a trend was observed in some stratifications. We also show evidence of heterogeneity in the survival of patients with the BRAF V600E mutation. CONCLUSIONS: The BRAF mutation represents an additional risk factor only in some subpopulations of colorectal cancers, having limited prognostic value in others. However, in the subpopulations where it is prognostic, it represents a marker of much higher risk than previously considered. KRAS mutation status does not seem to represent a strong prognostic variable.
Abstract:
In this paper we consider extensions of smooth transition autoregressive (STAR) models to situations where the threshold is a time-varying function of variables that affect the separation of regimes of the time series under consideration. Our specification is motivated by the observation that unusually high/low values for an economic variable may sometimes be best thought of in relative terms. State-dependent logistic STAR and contemporaneous-threshold STAR models are introduced and discussed. These models are also used to investigate the dynamics of U.S. short-term interest rates, where the threshold is allowed to be a function of past output growth and inflation.
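As a sketch of the model class being extended (generic two-regime logistic STAR notation; the time-varying threshold parameterization below is illustrative, not necessarily the paper's exact specification):

```latex
y_t = \phi_1' x_t \,\bigl[1 - G(s_t;\gamma,c_t)\bigr] + \phi_2' x_t \, G(s_t;\gamma,c_t) + \varepsilon_t,
\qquad
G(s;\gamma,c) = \bigl[1 + \exp\{-\gamma (s - c)\}\bigr]^{-1},
```

where \(x_t\) collects lags of \(y_t\), \(s_t\) is the transition variable and, unlike in the fixed-threshold STAR model, the threshold \(c_t\) is allowed to vary over time, for example \(c_t = \delta' z_t\) with \(z_t\) containing past output growth and inflation.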
Abstract:
The main result is a proof of the existence of a unique viscosity solution for a Hamilton-Jacobi equation in which the Hamiltonian is discontinuous with respect to the variable usually interpreted as the spatial one. The generalized solution obtained is continuous, but not necessarily differentiable.
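Schematically (generic notation, not necessarily the paper's exact setting), the equation considered is of Hamilton-Jacobi type:

```latex
u_t(t,x) + H\bigl(x, \nabla_x u(t,x)\bigr) = 0,
```

where the Hamiltonian \(H(x,p)\) is allowed to be discontinuous in the spatial variable \(x\), and solutions are understood in the viscosity sense.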
Abstract:
BACKGROUND: A central question for understanding the evolutionary responses of plant species to rapidly changing environments is the assessment of their potential for short-term (in one or a few generations) genetic change. In our study, we consider the case of Pinus pinaster Aiton (maritime pine), a widespread Mediterranean tree, and (i) test, under different experimental conditions (growth chamber and semi-natural), whether higher recruitment in the wild from the most successful mothers is due to better performance of their offspring; and (ii) evaluate genetic change in quantitative traits across generations at two different life stages (mature trees and seedlings) that are known to be under strong selection pressure in forest trees. RESULTS: Genetic control was high for most traits (h² = 0.137-0.876) under the milder conditions of the growth chamber, but only for ontogenetic change (0.276), total height (0.415) and survival (0.719) under the more stressful semi-natural conditions. Significant phenotypic selection gradients were found in mature trees for traits related to seed quality (germination rate and number of empty seeds). Moreover, female relative reproductive success was significantly correlated with offspring performance for specific leaf area (SLA) in the growth chamber experiment, and stem mass fraction (SMF) in the experiment under semi-natural conditions, two adaptive traits related to abiotic stress response in pines. Selection gradients based on genetic covariance of seedling traits and responses to selection at this stage involved traits related to biomass allocation (SMF) and growth (as decomposed by a Gompertz model) or delayed ontogenetic change, depending also on the testing environment. CONCLUSIONS: Despite the evidence of microevolutionary change in adaptive traits in maritime pine, directional or disruptive changes are difficult to predict due to variable selection at different life stages and environments. At the mature-tree stage, higher female effective reproductive success can be explained by differences in offspring production (due to seed quality) and, to a lesser extent, by seemingly better adapted seedlings. Selection gradients and responses to selection for seedlings also differed across experimental conditions. The distinct processes involved at the two life stages (mature trees or seedlings), together with environment-specific responses, advise caution when predicting likely evolutionary responses to environmental change in Mediterranean forest trees.
Abstract:
Recirculating virgin CD4+ T cells spend their life migrating between the T zones of secondary lymphoid tissues, where they screen the surface of interdigitating dendritic cells. T-cell priming starts when processed peptides or superantigen associated with class II MHC molecules are recognised. Those primed T cells that remain within the lymphoid tissue move to the outer T zone, where they interact with B cells that have taken up and processed antigen. Cognate interaction between these cells initiates immunoglobulin (Ig) class switch-recombination and proliferation of both B and T cells; much of this growth occurs outside the T zones. B cells migrate to follicles, where they form germinal centres, and to extrafollicular sites of B-cell growth, where they differentiate into mainly short-lived plasma cells. T cells do not move to the extrafollicular foci, but to the follicles; there they proliferate and are subsequently involved in the selection of B cells that have mutated their Ig variable-region genes. During primary antibody responses, T-cell proliferation in follicles produces many times the peak number of T cells found at that site: a substantial proportion of the CD4+ memory T-cell pool may originate from growth in follicles.