816 results for penalty-based aggregation functions
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa. There are two data-related hurdles to this. First, most farmers do not live close enough to a rain gauge with a sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts – a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
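The payout mechanism and the basis risk described above can be sketched in a few lines. The trigger/exit thresholds, sum insured, and linear payout ramp below are hypothetical illustration choices, not values from the study:

```python
def wii_payout(rain_index_mm, trigger=100.0, exit_=50.0, sum_insured=1000.0):
    """Linear payout ramp: nothing above the trigger, full sum at the exit.

    All parameter values are hypothetical, chosen only for illustration.
    """
    if rain_index_mm >= trigger:
        return 0.0
    if rain_index_mm <= exit_:
        return sum_insured
    return sum_insured * (trigger - rain_index_mm) / (trigger - exit_)

# Basis risk: the gauge index says "no drought" while the farm lost yield.
rain_at_gauge = 120.0      # above the trigger -> no payout
actual_yield_loss = 400.0  # the farmer's real, uncompensated loss
print(wii_payout(rain_at_gauge))  # 0.0
print(wii_payout(75.0))           # half-way down the ramp -> 500.0
```

Aggregating rainfall in space or time, as the abstract argues, amounts to feeding a smoothed index into such a payout function so that it tracks soil moisture (and hence yield) more closely.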
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
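The Laguerre special case mentioned above can be made concrete. A discrete Laguerre basis with real pole a is a first-order low-pass section followed by repeated all-pass sections, and each filter's impulse response has unit energy (the orthonormality the abstract relies on). The recursion below is a standard textbook form; the pole value and horizon are arbitrary choices for this sketch:

```python
import math

def laguerre_outputs(u, pole, n_filters):
    """Filter the input sequence u through a discrete Laguerre basis.

    First section:  x0[n] = a*x0[n-1] + sqrt(1 - a^2)*u[n]
    Later sections: xk[n] = a*xk[n-1] + x(k-1)[n-1] - a*x(k-1)[n]  (all-pass)
    Returns one output sequence per filter.
    """
    a = pole
    gain = math.sqrt(1.0 - a * a)
    outs = [[0.0] * len(u) for _ in range(n_filters)]
    for n in range(len(u)):
        prev = [outs[k][n - 1] if n > 0 else 0.0 for k in range(n_filters)]
        outs[0][n] = a * prev[0] + gain * u[n]
        for k in range(1, n_filters):
            outs[k][n] = a * prev[k] + prev[k - 1] - a * outs[k - 1][n]
    return outs

# Each impulse response has (numerically) unit energy: the basis is orthonormal.
impulse = [1.0] + [0.0] * 199
for h in laguerre_outputs(impulse, pole=0.6, n_filters=3):
    print(round(sum(v * v for v in h), 6))  # each close to 1.0
```

The pole a is exactly the quantity the paper's gradient computation optimizes; the Kautz and GOBF cases replace the scalar all-pass section with higher-order ones.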
Abstract:
A systematic study on the surface-enhanced Raman scattering (SERS) of 3,6-bi-2-pyridyl-1,2,4,5-tetrazine (bptz) adsorbed onto citrate-modified gold nanoparticles (cit-AuNps) was carried out based on electronic and vibrational spectroscopy and density functional methods. The citrate/bptz exchange was carefully controlled by the stepwise addition of bptz to the cit-AuNps, inducing flocculation and leading to the rise of a characteristic plasmon coupling band in the visible region. This stepwise procedure led to a uniform decrease of the citrate SERS signals and to the rise of characteristic peaks of bptz, consistent with surface binding via the N heterocyclic atoms. In contrast, single addition of a large amount of bptz promoted complete aggregation of the nanoparticles, leading to a strong enhancement of the SERS signals. In this case, from the distinct Raman profiles involved, the formation of a new SERS environment became apparent, conjugating the influence of the local hot spots and charge-transfer (CT) effects. The most strongly enhanced vibrations belong to the a(1) and b(2) representations, and were interpreted in terms of the electromagnetic and the CT mechanisms, the latter involving a significant contribution of vibronic coupling in the system. Copyright (C) 2010 John Wiley & Sons, Ltd.
Abstract:
Soil aggregation is an index of soil structure measured by mean weight diameter (MWD) or by scaling factors often interpreted as fragmentation fractal dimensions (D-f). However, the MWD provides a biased estimate of soil aggregation due to spurious correlations among aggregate-size fractions and to scale-dependency. The scale-invariant D-f rests on weak assumptions to allow particle counts, is sensitive to the selection of the fractal domain, and may frequently exceed a value of 3, implying that D-f is a biased estimate of aggregation. Aggregation indices based on mass may be computed without bias using compositional analysis techniques. Our objective was to elaborate compositional indices of soil aggregation and to compare them to MWD and D-f using a published dataset describing the effect of 7 cropping systems on aggregation. Six aggregate-size fractions were arranged into a sequence of D-1 balances of building blocks that portray the process of soil aggregation. Isometric log-ratios (ilrs) are scale-invariant and orthogonal log contrasts or balances that possess the Euclidean geometry necessary to compute a distance between any two aggregation states, known as the Aitchison distance (A(x,y)). Close correlations (r > 0.98) were observed between MWD, D-f, and the ilr when contrasting large and small aggregate sizes. Several unbiased embedded ilrs can characterize the heterogeneous nature of soil aggregates and be related to soil properties or functions. Soil bulk density and penetration resistance were closely related to A(x,y) with reference to bare fallow. The A(x,y) is easy to implement as an unbiased index of soil aggregation using standard sieving methods and may allow comparisons between studies. (C) 2012 Elsevier B.V. All rights reserved.
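The Aitchison distance between two aggregate-size compositions can be computed as the Euclidean distance between their centered log-ratio (clr) transforms, which is equivalent to the ilr-based distance. The six-fraction compositions below are made-up numbers for illustration only:

```python
import math

def clr(composition):
    """Centered log-ratio transform of a strictly positive composition."""
    logs = [math.log(x) for x in composition]
    g = sum(logs) / len(logs)  # log of the geometric mean
    return [l - g for l in logs]

def aitchison_distance(x, y):
    """Euclidean distance between clr-transformed compositions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(x), clr(y))))

# Hypothetical mass fractions for six aggregate-size classes (sum to 1).
tilled = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]
fallow = [0.25, 0.25, 0.20, 0.15, 0.10, 0.05]
print(round(aitchison_distance(tilled, fallow), 3))
print(aitchison_distance(tilled, tilled))  # 0.0 for identical states
```

Because the clr subtracts the log geometric mean, multiplying either composition by a constant leaves the distance unchanged, which is exactly the scale invariance the abstract requires of an unbiased aggregation index.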
Abstract:
Function approximation is a very important task in environments where computation has to be based on extracting information from data samples in real world processes. Neural networks and wavenets have been recently seen as attractive tools for developing efficient solutions for many real world problems in function approximation. In this paper, it is shown how feedforward neural networks can be built using a different type of activation function referred to as the PPS-wavelet. An algorithm is presented to generate a family of PPS-wavelets that can be used to efficiently construct feedforward networks for function approximation.
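The PPS-wavelet family itself is not defined in the abstract, so as a generic sketch of the idea (a feedforward network whose hidden units apply a wavelet activation), the example below substitutes the well-known Mexican-hat wavelet; the weights, dilations, and translations are hand-picked hypothetical values:

```python
import math

def mexican_hat(t):
    """Mexican-hat wavelet, used here only as a stand-in activation;
    the paper's PPS-wavelets are a different (sigmoid-based) family."""
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

def wavelet_net(x, params):
    """One-hidden-layer wavenet: sum_i w_i * psi((x - b_i) / a_i)."""
    return sum(w * mexican_hat((x - b) / a) for w, a, b in params)

# Three hidden units with hypothetical (weight, dilation, translation) triples.
params = [(0.8, 0.5, -1.0), (-0.4, 1.0, 0.0), (0.6, 0.7, 1.5)]
for x in (-1.0, 0.0, 1.5):
    print(round(wavelet_net(x, params), 4))
```

Training such a network means fitting the weights and the dilation/translation parameters of each unit, which is where a purpose-built family like the PPS-wavelets comes in.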
Abstract:
The aggregation theory of mathematical programming is used to study decentralization in convex programming models. A two-level organization is considered and an aggregation-disaggregation scheme is applied to such a divisionally organized enterprise. In contrast to known aggregation techniques, where the decision variables/production plans are aggregated, it is proposed to aggregate the resources allocated by the central planning department among the divisions. This approach results in a decomposition procedure in which the central unit has no optimization problem to solve and need only average local information provided by the divisions.
Abstract:
In this article, the fuzzy Lyapunov function approach is considered for stabilising continuous-time Takagi-Sugeno fuzzy systems. Previous linear matrix inequality (LMI) stability conditions are relaxed by exploring further the properties of the time derivatives of the premise membership functions and by introducing slack LMI variables into the problem formulation. The relaxed conditions given can also be used with a class of fuzzy Lyapunov functions that depends on the first-order time derivative of the membership functions. The stability results are thus extended to systems with a large number of rules under membership-function order relations, and are used to design parallel-distributed compensation (PDC) fuzzy controllers, which are also obtained in terms of LMIs. Numerical examples illustrate the efficiency of the new stabilising conditions presented. © 2013 Copyright Taylor and Francis Group, LLC.
Abstract:
The zinc endopeptidases meprin α and β are key components in (patho)physiological processes such as inflammation, collagen assembly, and angiogenesis. After their discovery in murine brush-border membranes and human intestinal epithelia, further sites of expression were identified, e.g. leukocytes, cancer cells, and human skin. Animal models, cell cultures, and biochemical analyses point to functions of the meprins in epithelial differentiation, cell migration, matrix remodelling, angiogenesis, connective tissue formation, and immunological processes. Nevertheless, their physiological substrates remain largely unknown. Mass-spectrometry-based proteomics analyses revealed a unique cleavage specificity for acidic amino-acid residues in the P1' position and identified new biological substrate candidates. Among the 269 extracellular proteins identified in a substrate screen, the amyloid precursor protein (APP) and ADAM10 (a disintegrin and metalloprotease 10) emerged as highly promising candidates. Several cleavage sites within the APP protein, generated by different proteases, have different consequences. The β-secretase BACE (β-site APP cleaving enzyme) processes APP at a cleavage site whose processing is regarded as the initial step in the development of Alzheimer's disease. Toxic Aβ (amyloid β) peptides are released into the extracellular space, where they aggregate into senile plaques. Membrane-anchored meprin β has a β-secretase activity, which was confirmed in a cell-culture-based system. The proteolytic efficiency of meprin β was determined in FRET (fluorescence resonance energy transfer) analyses and was higher than that of BACE1 by a factor of 10^4.
Furthermore, it was shown that meprin β removes the first two amino acids, thereby exposing an N-terminal glutamate residue that can subsequently be cyclized into a pyroglutamate by glutaminyl cyclase. Such truncated Aβ peptides are generated only in Alzheimer patients. Owing to their increased hydrophobicity, these peptides show a higher tendency to aggregate and hence increased toxicity. Until now, no protease had been identified that processes this cleavage site. The formation of meprin-mediated N-terminal APP fragments was detected in vitro and in vivo. These N-APP peptides had no cytotoxic effect on murine or human brain cells, although N-APP had previously been identified as a ligand of death receptor 6 (DR6), which is responsible for axonal degeneration processes. In the non-amyloidogenic pathway, ADAM10 processes APP and releases the ectodomain from the cell membrane. We identified the ADAM10 propeptide as a substrate of meprin β and showed in FRET analyses, in vitro and in vivo, that meprin-mediated processing leads to increased ADAM10 activity. In addition, ADAM10 was identified as a sheddase for meprin β. Shedding could be induced by phorbol 12-myristate 13-acetate (PMA) or by the ionophore A23187, and blocked by ADAM10 inhibitors. This work thus uncovered a complex proteolytic network within neurophysiology that may be important for the development of Alzheimer's dementia.
Abstract:
Artificial neural networks are based on computational units that resemble basic information processing properties of biological neurons in an abstract and simplified manner. Generally, these formal neurons model an input-output behaviour, much as biological neurons are often characterized. The neuron is treated as a black box; the spatial extension and temporal dynamics present in biological neurons are most often neglected. Even though artificial neurons are simplified, they can show a variety of input-output relations, depending on the transfer functions they apply. This unit on transfer functions provides an overview of different transfer functions and offers a simulation that visualizes the input-output behaviour of an artificial neuron depending on the specific combination of transfer functions.
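The black-box model described above (weighted sum followed by a transfer function) can be sketched directly. The particular transfer functions shown (threshold, logistic, tanh) and the input/weight values are illustrative; the original unit's simulation may cover other combinations:

```python
import math

def threshold(net, theta=0.0):
    """Binary step: the unit fires iff the net input exceeds the threshold."""
    return 1.0 if net > theta else 0.0

def logistic(net):
    """Smooth sigmoid squashing the net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron(inputs, weights, transfer):
    """Formal neuron: weighted sum (net input) passed to a transfer function."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return transfer(net)

x, w = [1.0, 0.5, -1.0], [0.4, 0.6, 0.2]  # net input = 0.5
print(neuron(x, w, threshold))            # 1.0
print(round(neuron(x, w, logistic), 3))   # 0.622
print(round(neuron(x, w, math.tanh), 3))  # 0.462
```

The same net input of 0.5 yields three different outputs, which is the point the unit's simulation makes: the input-output relation of a formal neuron is determined by its transfer function, not only by its weights.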
Abstract:
Given a reproducing kernel Hilbert space (H, ⟨·,·⟩) of real-valued functions and a suitable measure μ over the source space D ⊂ R, we decompose H as the sum of a subspace of functions centered for μ and its orthogonal complement in H. This decomposition leads to a special case of ANOVA kernels, for which the functional ANOVA representation of the best predictor can be elegantly derived, either in an interpolation or a regularization framework. The proposed kernels appear to be particularly convenient for analyzing the effect of each (group of) variable(s) and computing sensitivity indices without recursivity.
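For a discrete uniform measure μ on a grid, the centering described above reduces to subtracting a rank-one term from the kernel: k0(x,y) = k(x,y) − m(x)m(y)/M, where m(x) is the μ-average of the section k(x,·) and M the double average. The Gaussian kernel, length-scale, and grid below are arbitrary choices for this sketch:

```python
import math

def gauss(x, y, ell=0.5):
    """Gaussian (squared-exponential) kernel; ell is a hypothetical length-scale."""
    return math.exp(-((x - y) ** 2) / (2 * ell * ell))

def centered_kernel(k, grid):
    """Center k w.r.t. the uniform measure on grid:
    k0(x, y) = k(x, y) - m(x)*m(y)/M, with m(x) = mean_t k(x, t)
    and M = mean_{s,t} k(s, t)."""
    def m(x):
        return sum(k(x, t) for t in grid) / len(grid)
    big_m = sum(m(t) for t in grid) / len(grid)
    return lambda x, y: k(x, y) - m(x) * m(y) / big_m

grid = [i / 10 for i in range(11)]  # uniform grid on [0, 1]
k0 = centered_kernel(gauss, grid)
# Every section of k0 integrates to zero against mu: mean_t k0(t, y) = 0.
print(abs(sum(k0(t, 0.3) for t in grid)) < 1e-9)  # True
```

Since each section of k0 averages to zero under μ, functions built from k0 are centered, which is what makes the ANOVA decomposition of the predictor and the resulting sensitivity indices fall out without recursion.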
Abstract:
The selection of predefined analytic grids (partitions of the numeric ranges) to represent input and output functions as histograms has been proposed as a mechanism of approximation in order to control the tradeoff between accuracy and computation times in several areas ranging from simulation to constraint solving. In particular, the application of interval methods for probabilistic function characterization has been shown to have advantages over other methods based on the simulation of random samples. However, standard interval arithmetic has always been used for the computation steps. In this paper, we introduce an alternative approximate arithmetic aimed at controlling the cost of the interval operations. Its distinctive feature is that grids are taken into account by the operators. We apply the technique in the context of probability density functions in order to improve the accuracy of the probability estimates. Results show that this approach has advantages over existing approaches in some particular situations, although computation times tend to increase significantly when analyzing large functions.
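The grid-aware idea can be sketched as interval operations whose results are rounded outward to a predefined grid, so every result stays representable on the partition while still enclosing the exact answer. The grid step and the single addition operator below are illustrative, not the paper's full arithmetic:

```python
import math

def snap_out(lo, hi, step):
    """Round an interval outward to the analytic grid with spacing `step`,
    so the snapped interval always contains the original one."""
    return (math.floor(lo / step) * step, math.ceil(hi / step) * step)

def add(a, b, step):
    """Standard interval addition followed by outward snapping to the grid."""
    lo, hi = a[0] + b[0], a[1] + b[1]
    return snap_out(lo, hi, step)

a, b = (0.12, 0.21), (0.33, 0.38)
print(add(a, b, step=0.05))  # encloses the exact sum [0.45, 0.59]
```

Snapping outward preserves the enclosure guarantee of interval arithmetic at the cost of some widening; the tradeoff between that widening and the cheaper grid-aligned bookkeeping is exactly the accuracy-versus-cost balance the abstract discusses.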