981 results for Weingarten-type linear map
Abstract:
In this work we present a classification of some of the existing penalty methods (the so-called exact penalty methods) and describe some of their limitations and estimates. With these methods we can solve optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. The approach consists of transforming the original problem into a sequence of unconstrained problems derived from the initial one, so that it can be solved by the methods known for this type of problem. Thus, penalty methods can be used as a first step in solving constrained problems with methods typically used for unconstrained problems. The work ends by discussing a new class of penalty methods for nonlinear optimization that adjusts the penalty parameter dynamically.
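A minimal sketch of the approach, assuming a simple quadratic-penalty variant with a dynamically increased penalty parameter; the objective, constraint and update rule are illustrative and not taken from the work:

    # Penalty method sketch: a constrained problem is replaced by a sequence of
    # unconstrained subproblems whose penalty parameter grows until the
    # constraint is (approximately) satisfied. Illustrative example only.
    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

    def constraint(x):              # equality constraint g(x) = 0
        return x[0] + x[1] - 1.0

    def penalized(x, mu):           # unconstrained subproblem
        return objective(x) + mu * constraint(x) ** 2

    x, mu = np.zeros(2), 1.0
    for _ in range(8):
        # derivative-free solver, in the spirit of not requiring differentiability
        x = minimize(lambda z: penalized(z, mu), x, method="Nelder-Mead").x
        if abs(constraint(x)) < 1e-6:
            break
        mu *= 10.0                  # simple dynamic update of the penalty parameter

    print(x, constraint(x))

Each unconstrained subproblem can be handed to any solver for unconstrained problems, which is exactly the role described above.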
Abstract:
MSc Dissertation in Computer Engineering
Abstract:
We consider the two-Higgs-doublet model as a framework in which to evaluate the viability of scenarios in which the sign of the coupling of the observed Higgs boson to down-type fermions (in particular, b-quark pairs) is opposite to that of the Standard Model (SM), while at the same time all other tree-level couplings are close to the SM values. We show that, whereas such a scenario is consistent with current LHC observations, both future running at the LHC and a future e⁺e⁻ linear collider could determine the sign of the Higgs coupling to b-quark pairs. Discrimination is possible for two reasons. First, the interference between the b-quark and the t-quark loop contributions to the ggh coupling changes sign. Second, the charged-Higgs loop contribution to the γγh coupling is large and fairly constant up to the largest charged-Higgs mass allowed by tree-level unitarity bounds when the b-quark Yukawa coupling has the opposite sign from that of the SM (the change in sign of the interference terms between the b-quark loop and the W and t loops having negligible impact).
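The first effect can be made explicit with a schematic expression for the gluon-fusion rate, writing \kappa_t, \kappa_b for the top- and bottom-quark coupling modifiers and A_t, A_b for the corresponding loop amplitudes (notation introduced here only for illustration):

    \sigma(gg \to h) \;\propto\; \left|\kappa_t A_t + \kappa_b A_b\right|^2
      = \kappa_t^2 \lvert A_t\rvert^2 + \kappa_b^2 \lvert A_b\rvert^2
      + 2\,\kappa_t \kappa_b\, \mathrm{Re}\!\left(A_t A_b^{*}\right).

Flipping the sign of \kappa_b flips the sign of the interference term while leaving the small \lvert A_b\rvert^2 term unchanged, which is what makes the two sign choices distinguishable.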
Abstract:
An improved class of Boussinesq systems of an arbitrary order using a wave surface elevation and velocity potential formulation is derived. Dissipative effects and wave generation due to a time-dependent varying seabed are included. Thus, high-order source functions are considered. For the reduction of the system order and maintenance of some dispersive characteristics of the higher-order models, an extra O(μ^(2n+2)) term (n ∈ ℕ) is included in the velocity potential expansion. We introduce a nonlocal continuous/discontinuous Galerkin FEM with inner penalty terms to calculate the numerical solutions of the improved fourth-order models. The discretization of the spatial variables is made using continuous P2 Lagrange elements. A predictor-corrector scheme with an initialization given by an explicit Runge-Kutta method is also used for the time-variable integration. Moreover, a CFL-type condition is deduced for the linear problem with a constant bathymetry. To demonstrate the applicability of the model, we considered several test cases. Improved stability is achieved.
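A minimal sketch of the time-integration strategy, an explicit Runge-Kutta start followed by a predictor-corrector loop; this generic Adams-Bashforth-2 predictor with a trapezoidal corrector on a scalar ODE stands in for the paper's scheme and is not its actual discretization:

    # Predictor-corrector time stepping initialized by one explicit RK4 step.
    def rk4_step(f, t, u, dt):
        k1 = f(t, u)
        k2 = f(t + dt / 2, u + dt * k1 / 2)
        k3 = f(t + dt / 2, u + dt * k2 / 2)
        k4 = f(t + dt, u + dt * k3)
        return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    def integrate(f, u0, t0, dt, nsteps):
        t, u = t0, float(u0)
        f_prev = f(t, u)                      # right-hand side at the previous time level
        u = rk4_step(f, t, u, dt)             # initialization by explicit Runge-Kutta
        t += dt
        for _ in range(nsteps - 1):
            fn = f(t, u)
            u_pred = u + dt * (1.5 * fn - 0.5 * f_prev)   # Adams-Bashforth-2 predictor
            u = u + dt / 2 * (fn + f(t + dt, u_pred))     # trapezoidal corrector
            f_prev, t = fn, t + dt
        return u

    # du/dt = -u with u(0) = 1; the result should be close to exp(-1)
    print(integrate(lambda t, u: -u, 1.0, 0.0, 0.01, 100))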
Abstract:
Consider scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and a platform, if there exists a feasible task-to-processor assignment, then LPC succeeds in finding such a feasible task-to-processor assignment as well but on a platform in which each processor is 1.5 × faster and has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
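As a rough illustration of the linear-programming core of such an approach, the sketch below sets up the natural fractional relaxation of task-to-processor assignment on a two-type platform; the task utilizations and platform are invented, and LPC's cutting planes and rounding step are not reproduced:

    # Fractional LP relaxation: x[i, p] is the fraction of task i placed on
    # processor p; every task must be fully assigned and no processor may be
    # utilized beyond capacity. Feasibility of the relaxation is checked with an LP.
    import numpy as np
    from scipy.optimize import linprog

    util = np.array([[0.6, 0.3],        # utilization of each task on a type-1 / type-2 processor
                     [0.5, 0.8],
                     [0.2, 0.9]])
    proc_type = [0, 1, 1]               # hypothetical platform: one type-1, two type-2 processors
    n_tasks, n_procs = util.shape[0], len(proc_type)
    idx = lambda i, p: i * n_procs + p

    A_eq = np.zeros((n_tasks, n_tasks * n_procs))      # each task fully assigned
    A_ub = np.zeros((n_procs, n_tasks * n_procs))      # each processor's utilization <= 1
    for i in range(n_tasks):
        for p in range(n_procs):
            A_eq[i, idx(i, p)] = 1.0
            A_ub[p, idx(i, p)] = util[i, proc_type[p]]

    res = linprog(np.zeros(n_tasks * n_procs), A_ub=A_ub, b_ub=np.ones(n_procs),
                  A_eq=A_eq, b_eq=np.ones(n_tasks), bounds=(0, 1), method="highs")
    print("fractional assignment feasible:", res.success)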
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
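As a concrete instance of the linear problem discussed above, the following minimal sketch estimates abundances under the linear mixing model y = Ma + n by nonnegative least squares followed by renormalization; the endmember matrix and pixel are synthetic, and this is not the chapter's code:

    # Linear mixing model: y = M a + n, with a >= 0 and sum(a) = 1.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    M = rng.uniform(0.0, 1.0, size=(224, 3))       # 224 bands, 3 endmember signatures
    a_true = np.array([0.5, 0.3, 0.2])             # abundances: nonnegative, sum to one
    y = M @ a_true + 0.001 * rng.standard_normal(224)

    a_hat, _ = nnls(M, y)                          # enforce nonnegativity
    a_hat /= a_hat.sum()                           # approximate the sum-to-one constraint
    print(a_hat)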
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case of hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
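A minimal sketch of the mixture-of-Gaussians fitting step just mentioned, with the number of components chosen by an information criterion; BIC is used here only as a stand-in for the MDL-based algorithm of [55], and the data are synthetic:

    # Fit Gaussian mixtures of increasing size and keep the one with the
    # smallest Bayesian information criterion (a description-length-style score).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0.0, 1.0, 300),
                        rng.normal(4.0, 0.5, 200)]).reshape(-1, 1)

    fits = [GaussianMixture(n_components=k, random_state=0).fit(x) for k in range(1, 6)]
    best = min(fits, key=lambda gm: gm.bic(x))
    print(best.n_components, best.weights_, best.means_.ravel())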
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
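As noted above, modeling abundance fractions as a mixture of Dirichlet sources enforces positivity and full additivity by construction; a quick synthetic check (mixture weights and Dirichlet parameters are invented):

    # Abundances drawn from a two-component Dirichlet mixture are nonnegative
    # and sum to one in every pixel.
    import numpy as np

    rng = np.random.default_rng(2)
    weights = [0.6, 0.4]                            # mixture weights
    alphas = [np.array([9.0, 3.0, 1.0]),            # Dirichlet parameters per component
              np.array([1.0, 1.0, 6.0])]
    component = rng.choice(len(weights), size=5, p=weights)
    abundances = np.vstack([rng.dirichlet(alphas[c]) for c in component])
    print(abundances)
    print(abundances.sum(axis=1))                   # each row sums to 1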
Abstract:
In this manuscript we tackle the problem of semidistributed user selection with distributed linear precoding for sum rate maximization in multiuser multicell systems. A set of adjacent base stations (BS) form a cluster in order to perform coordinated transmission to cell-edge users, and coordination is carried out through a central processing unit (CU). However, the message exchange between BSs and the CU is limited to scheduling control signaling and no user data or channel state information (CSI) exchange is allowed. In the considered multicell coordinated approach, each BS has its own set of cell-edge users and transmits only to one intended user while interference to non-intended users at other BSs is suppressed by signal steering (precoding). We use two distributed linear precoding schemes, Distributed Zero Forcing (DZF) and Distributed Virtual Signal-to-Interference-plus-Noise Ratio (DVSINR). Considering multiple users per cell and the backhaul limitations, the BSs rely on local CSI to solve the user selection problem. First, we investigate how the signal-to-noise ratio (SNR) regime and the number of antennas at the BSs impact the effective channel gain (the magnitude of the channels after precoding) and its relationship with multiuser diversity. Considering that user selection must be based on the type of implemented precoding, we develop metrics of compatibility (estimates of the effective channel gains) that can be computed from local CSI at each BS and reported to the CU for scheduling decisions. Based on such metrics, we design user selection algorithms that can find a set of users that potentially maximizes the sum rate. Numerical results show the effectiveness of the proposed metrics and algorithms for different configurations of users and antennas at the base stations.
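A minimal sketch of the zero-forcing idea behind DZF and of the effective channel gain left to each user; channels are random, there is a single cell and no user selection, so this is only an illustration and not the paper's DZF/DVSINR algorithm:

    # Zero-forcing precoding: each beamformer lies in the null space of the other
    # users' channels; the effective channel gain is |h_k^H w_k| after precoding.
    import numpy as np

    rng = np.random.default_rng(3)
    n_tx, n_users = 4, 3
    H = (rng.standard_normal((n_users, n_tx)) +
         1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)   # rows: user channels

    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # pseudo-inverse directions
    W /= np.linalg.norm(W, axis=0)                   # unit-norm beamformer per user

    gains = H @ W
    effective_gain = np.abs(np.diag(gains))          # what the scheduling metrics try to estimate
    leakage = np.abs(gains - np.diag(np.diag(gains))).max()   # ~0 for zero forcing
    print(effective_gain, leakage)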
Abstract:
We show that the waterbed effect, i.e., the pass-through of a change in one price of a firm to its other prices, is much stronger if the latter include subscription fees rather than only usage fees. In particular, in mobile network competition with a fixed number of customers, the waterbed effect is full under two-part tariffs, while it is only partial under linear tariffs.
Abstract:
Maximal-length binary sequences have been known for a long time. They have many interesting properties, one of which is that, when taken in blocks of n consecutive positions, they form 2ⁿ-1 different codes in a closed circular sequence. This property can be used for measuring absolute angular positions, as the circle can be divided into as many parts as different codes can be retrieved. This paper describes how a closed binary sequence of arbitrary length can be effectively designed with the minimal possible block length, using linear feedback shift registers (LFSR). Such sequences can be used for measuring a specified exact number of angular positions, using the minimal possible number of sensors that linear methods allow.
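A minimal sketch of the basic building block, assuming the primitive polynomial x⁴ + x³ + 1 for a 4-bit register; the paper's actual contribution, closed sequences of arbitrary (not necessarily 2ⁿ-1) length, is not reproduced here:

    # Fibonacci LFSR for x^4 + x^3 + 1: the 15-bit maximal-length sequence has
    # 15 distinct windows of 4 consecutive bits when read circularly, so each
    # window identifies one absolute angular position.
    def lfsr_sequence(n=4, seed=0b0001):
        state, bits = seed, []
        for _ in range(2 ** n - 1):
            bits.append(state & 1)                      # output bit
            fb = ((state >> 0) ^ (state >> 1)) & 1      # XOR of the tap bits
            state = (state >> 1) | (fb << (n - 1))      # shift right, feed back into the MSB
        return bits

    seq = lfsr_sequence()
    windows = {tuple(seq[(i + j) % len(seq)] for j in range(4)) for i in range(len(seq))}
    print(len(seq), len(windows))                       # 15 bits, 15 distinct 4-bit windows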
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept and then by localizing. Distances between individuals are the only predictor information needed to fit these models. Therefore they are applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
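To illustrate the idea that distances are the only predictor information needed, the sketch below produces a locally weighted prediction directly from a distance matrix; it is a generic local smoother written for illustration, not the dbstats API or the models of the paper:

    # Kernel weights computed from distances alone give a local prediction, so
    # mixed or functional predictors only enter through the distance matrix.
    import numpy as np

    def local_distance_predict(D_new_train, y_train, bandwidth=1.0):
        # D_new_train[i, j] = distance between new point i and training point j
        w = np.exp(-(D_new_train / bandwidth) ** 2)     # Gaussian kernel weights
        w /= w.sum(axis=1, keepdims=True)
        return w @ y_train                              # locally weighted estimate

    D = np.array([[0.2, 1.5, 2.0],
                  [1.8, 0.3, 0.9]])                     # two new points, three training points
    y = np.array([10.0, 14.0, 13.0])
    print(local_distance_predict(D, y))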
Abstract:
The aim of this work is to establish a relationship between schistosomiasis prevalence and social-environmental variables in the state of Minas Gerais, Brazil, through multiple linear regression. The final regression model was established, after a variable selection phase, with a set of spatial variables comprising summer minimum temperature, human development index, and vegetation type. Based on this model, a schistosomiasis risk map was built for Minas Gerais.
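A minimal sketch of the kind of multiple linear regression involved; the covariate names follow the abstract, but all values and coefficients are synthetic:

    # Ordinary least squares of prevalence on the three selected covariates;
    # the fitted model evaluated over a grid of covariates yields a risk map.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    summer_min_temp = rng.uniform(12, 22, n)
    hdi = rng.uniform(0.55, 0.85, n)
    vegetation_type = rng.integers(0, 2, n)              # simplified binary indicator
    X = np.column_stack([np.ones(n), summer_min_temp, hdi, vegetation_type])
    prevalence = X @ np.array([5.0, 0.4, -8.0, 1.5]) + rng.normal(0, 1, n)

    beta, *_ = np.linalg.lstsq(X, prevalence, rcond=None)
    print(beta)                                          # intercept and coefficients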
Abstract:
A version of Matheron’s discrete Gaussian model is applied to cell composition data. The examples are for map patterns of felsic metavolcanics in two different areas. Q-Q plots of the model for cell values representing proportion of 10 km x 10 km cell area underlain by this rock type are approximately linear, and the line of best fit can be used to estimate the parameters of the model. It is also shown that felsic metavolcanics in the Abitibi area of the Canadian Shield can be modeled as a fractal.
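A minimal sketch of estimating parameters from the line of best fit of a Q-Q plot; a plain normal Q-Q plot stands in for the discrete Gaussian model's theoretical quantiles, and the cell values are synthetic:

    # On a straight Q-Q plot the slope and intercept of the best-fit line act as
    # scale and location estimates of the underlying distribution.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    cell_values = rng.normal(loc=0.3, scale=0.1, size=400)    # e.g. proportions per cell

    (osm, osr), (slope, intercept, r) = stats.probplot(cell_values, dist="norm")
    print(slope, intercept, r)                                # r close to 1 => nearly linear plot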
Abstract:
Obesity is associated with a low-grade chronic inflammation state. As a consequence, adipose tissue expresses pro-inflammatory cytokines that propagate inflammatory responses systemically elsewhere, promoting whole-body insulin resistance and consequent islet β-cell exhaustion. Thus, insulin resistance is considered the early stage of type 2 diabetes. However, there is evidence of obese individuals who never develop diabetes, indicating that the mechanisms governing the association between the increase of inflammatory factors and type 2 diabetes are much more complex and deserve further investigation. We studied for the first time the differences in insulin signalling and inflammatory pathways in blood and visceral adipose tissue (VAT) of 20 lean healthy donors and 40 morbidly obese (MO) patients classified by high insulin resistance (high IR) degree and diabetes state. We studied the changes in proinflammatory markers and lipid content in serum, as well as macrophage infiltration, mRNA expression of inflammatory cytokines and transcription factors, activation of kinases involved in inflammation, and expression of insulin signalling molecules in VAT. VAT comparison of these experimental groups revealed that type 2 diabetic-MO subjects exhibit the same pro-inflammatory profile as the high IR-MO patients, characterized by elevated levels of IL-1β, IL-6, TNFα, JNK1/2, ERK1/2, STAT3 and NFκB. Our work rules out the assumption that inflammation should be increased in obese people with type 2 diabetes compared to high-IR obese individuals. These findings indicate that some mechanisms other than systemic and VAT inflammation must be involved in the development of type 2 diabetes in obesity.
Abstract:
BACKGROUND In previous meta-analyses, tea consumption has been associated with lower incidence of type 2 diabetes. It is unclear, however, whether tea consumption is inversely associated with diabetes incidence over the entire range of intake. Therefore, we investigated the association between tea consumption and incidence of type 2 diabetes in a European population. METHODOLOGY/PRINCIPAL FINDINGS The EPIC-InterAct case-cohort study was conducted in 26 centers in 8 European countries and consists of a total of 12,403 incident type 2 diabetes cases and a stratified subcohort of 16,835 individuals from a total cohort of 340,234 participants with 3.99 million person-years of follow-up. Country-specific Hazard Ratios (HR) for incidence of type 2 diabetes were obtained after adjustment for lifestyle and dietary factors using a Cox regression adapted for a case-cohort design. Subsequently, country-specific HR were combined using a random effects meta-analysis. Tea consumption was studied as a categorical variable (0, >0-<1, 1-<4, ≥ 4 cups/day). The dose-response of the association was further explored by restricted cubic spline regression. Country-specific medians of tea consumption ranged from 0 cups/day in Spain to 4 cups/day in the United Kingdom. Tea consumption was inversely associated with incidence of type 2 diabetes; the HR was 0.84 [95%CI 0.71, 1.00] when participants who drank ≥ 4 cups of tea per day were compared with non-drinkers (p(linear trend) = 0.04). Incidence of type 2 diabetes already tended to be lower with tea consumption of 1-<4 cups/day (HR = 0.93 [95%CI 0.81, 1.05]). Spline regression did not suggest a non-linear association (p(non-linearity) = 0.20). CONCLUSIONS/SIGNIFICANCE A linear inverse association was observed between tea consumption and incidence of type 2 diabetes. People who drink at least 4 cups of tea per day may have a 16% lower risk of developing type 2 diabetes than non-tea drinkers.
Abstract:
This thesis explores the possibility of using inductive links for an automotive application where wiring between the electronic control unit (ECU) and the sensors or detectors is difficult or impossible. Two methods have been proposed: 1) monitoring of switched sensors (two possible states) via inductive coupling, and 2) transmission, by the same physical principle, of the power needed to supply remote autonomous sensors. Occupancy and seat-belt detection for removable seats can be implemented with passive wireless systems based on LC resonant circuits, in which the state of the sensors determines the capacitor value and, therefore, the resonance frequency. The changes in frequency are detected by a coil placed in the floor of the vehicle. The system was successfully tested over a range between 0.5 cm and 3 cm. The experiments were carried out using an impedance analyzer connected to a primary coil and commercial sensors connected to a remote circuit. The second proposal consists of remotely transmitting power from a coil placed in the floor of the vehicle to an autonomous device located in the seat. This device monitors the state of the detectors (occupancy and seat belt) and transmits the data through a commercial radio-frequency transceiver or through the same inductive link. The coils required for an operating frequency below 150 kHz were evaluated, and the most appropriate voltage regulator for achieving maximum overall efficiency was studied. Four types of voltage regulators were analyzed and compared from the point of view of power efficiency. Linear shunt voltage regulators provide better power efficiency than the other alternatives, namely linear series regulators and switched buck or boost converters. The efficiencies achieved were around 40%, 25% and 10% for coil distances of 1 cm, 1.5 cm and 2 cm. Experimental tests showed that the autonomous sensors were correctly powered up to distances of 2.5 cm.
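A minimal sketch of the sensing principle of the first method: the switched sensor selects the capacitance of the remote LC circuit, so each state maps to a distinct resonance frequency below the 150 kHz operating limit (component values are illustrative, not taken from the thesis):

    # Resonance frequency f0 = 1 / (2*pi*sqrt(L*C)) for each sensor state.
    import math

    L = 1.0e-3                                     # 1 mH remote coil
    C_states = {"open": 4.7e-9, "closed": 10e-9}   # capacitance selected by the sensor state

    for state, C in C_states.items():
        f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))
        print(f"{state}: {f0 / 1e3:.1f} kHz")      # roughly 73 kHz and 50 kHz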