971 results for Set-Valued Functions


Relevance: 40.00%

Abstract:

This work deals with the solvability near the characteristic set Σ = {0} × S¹ of operators of the form L = ∂/∂t + (xⁿa(x) + ixᵐb(x)) ∂/∂x, with b ≢ 0 and a(0) ≠ 0, defined on Ω_ε = (−ε, ε) × S¹, ε > 0, where a and b are real-valued smooth functions on (−ε, ε) and m ≥ 2n. It is shown that, given f belonging to a subspace of finite codimension of C^∞(Ω_ε), there is a solution u ∈ L^∞ of the equation Lu = f in a neighborhood of Σ; moreover, the L^∞ regularity is sharp.

Relevance: 40.00%

Abstract:

procera (pro) is a tall tomato (Solanum lycopersicum) mutant carrying a point mutation in the GRAS region of the gene encoding SlDELLA, a repressor in the gibberellin (GA) signaling pathway. Consistent with the SlDELLA loss of function, pro plants display a GA-constitutive response phenotype, mimicking wild-type plants treated with GA₃. The ovaries from both nonemasculated and emasculated pro flowers had very strong parthenocarpic capacity, associated with enhanced growth of preanthesis ovaries due to more and larger cells. pro parthenocarpy is facultative because seeded fruits were obtained by manual pollination. Most pro pistils had exserted stigmas, thus preventing self-pollination, similar to wild-type pistils treated with GA₃ or auxins. However, Style2.1, a gene responsible for long styles in noncultivated tomato, may not control the enhanced style elongation of pro pistils, because its expression was not higher in pro styles and did not increase upon GA₃ application. Interestingly, a high percentage of pro flowers had meristic alterations, with one additional petal, sepal, stamen, and carpel at each of the four whorls, respectively, thus unveiling a role of SlDELLA in flower organ development. Microarray analysis showed significant changes in the transcriptome of preanthesis pro ovaries compared with the wild type, indicating that the molecular mechanism underlying the parthenocarpic capacity of pro is complex and that it is mainly associated with changes in the expression of genes involved in GA and auxin pathways. Interestingly, it was found that GA activity modulates the expression of cell division and expansion genes and an auxin signaling gene (tomato AUXIN RESPONSE FACTOR7) during fruit-set.

Relevance: 40.00%

Abstract:

Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon and therefore to achieve a better approximation capability for discontinuous signals such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared to those obtained using Zernike moments, showing that the proposed descriptors achieve the same performance in image retrieval tasks in noisy environments while demanding much less computational power at every stage of the query chain.
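Since the abstract does not reproduce the proposed basis, the sketch below uses a hypothetical two-step radial basis purely to illustrate how radial moments and their rotation invariants are computed; the function names and the basis itself are illustrative stand-ins, not the paper's descriptors.

```python
import numpy as np

def radial_moment(img, basis, p):
    """Moment of a square grayscale image over the unit disc, using an
    arbitrary 1-D radial basis `basis` and angular order `p`."""
    n = img.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    x = (2 * xs - n + 1) / (n - 1)   # map the pixel grid to [-1, 1]
    y = (2 * ys - n + 1) / (n - 1)
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = r <= 1.0                  # restrict to the unit disc
    kernel = basis(r) * np.exp(-1j * p * theta)
    return np.sum(img[mask] * kernel[mask])

# hypothetical discontinuous basis: a radial step function with one jump
step_basis = lambda r: np.where(r < 0.5, 1.0, -1.0)
```

The magnitude of `radial_moment(img, step_basis, p)` is unchanged under image rotation, which is the rotation invariance the paper targets; translation and scale invariance would come from the standard image normalization the abstract mentions.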


Relevance: 40.00%

Abstract:

Given an n-ary k-valued function f, gap(f) denotes the essential arity gap of f, that is, the minimal number of essential variables in f that become fictive when any two distinct essential variables in f are identified. In the present paper we study the properties of symmetric functions with non-trivial arity gap (2 ≤ gap(f)). We prove several results concerning the decomposition of symmetric functions with non-trivial arity gap into their minors or subfunctions. We show that all non-empty sets of essential variables in symmetric functions with non-trivial arity gap are separable. ACM Computing Classification System (1998): G.2.0.
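For small n and k these definitions can be checked by brute force. The sketch below is a toy implementation, not anything from the paper: it computes the essential variables and the arity gap of a function given as a Python callable on k-ary tuples, assuming at least two essential variables.

```python
from itertools import product

def essential_vars(f, n, k):
    """Indices of the variables on which the n-ary k-valued f depends."""
    ess = set()
    for i in range(n):
        for args in product(range(k), repeat=n):
            # variable i is essential if changing it alone changes f
            if any(f(args) != f(args[:i] + (v,) + args[i + 1:])
                   for v in range(k)):
                ess.add(i)
                break
    return ess

def arity_gap(f, n, k):
    """Minimal number of essential variables that become fictive when
    two distinct essential variables are identified."""
    ess = sorted(essential_vars(f, n, k))
    drops = []
    for a, i in enumerate(ess):
        for j in ess[a + 1:]:
            # identify x_j with x_i
            g = lambda t, i=i, j=j: f(t[:j] + (t[i],) + t[j + 1:])
            drops.append(len(ess) - len(essential_vars(g, n, k)))
    return min(drops)
```

For addition mod 2, identifying the two variables yields a constant, so gap = 2; for Boolean AND it yields the identity on one variable, so gap = 1.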

Relevance: 30.00%

Abstract:

Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
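As a concrete reference point, the statistical and economic losses discussed above can be written down directly. The sketch below uses the standard textbook formulations (matrix MSE, multivariate QLIKE, and realised minimum-variance portfolio variance), which may differ in detail from the thesis's exact definitions.

```python
import numpy as np

def mse_loss(proxy, forecast):
    """Matrix MSE: squared Frobenius distance between the volatility
    proxy (e.g. a realised covariance matrix) and the forecast."""
    d = proxy - forecast
    return np.sum(d * d)

def qlike_loss(proxy, forecast):
    """Multivariate QLIKE, log det(H) + tr(H^{-1} Sigma); minimised in
    expectation at the true conditional covariance."""
    _, logdet = np.linalg.slogdet(forecast)
    return logdet + np.trace(np.linalg.solve(forecast, proxy))

def portfolio_variance_loss(proxy, forecast):
    """Realised variance of the global minimum-variance portfolio
    built from the forecast covariance."""
    ones = np.ones(forecast.shape[0])
    w = np.linalg.solve(forecast, ones)
    w /= w.sum()
    return w @ proxy @ w
```

Each function takes a proxy and a forecast covariance matrix; ranking competing forecasts by these losses is the discrimination exercise the thesis studies.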

Relevance: 30.00%

Abstract:

In this study we set out to dissociate the developmental time course of automatic symbolic number processing and cognitive control functions in grade 1-3 British primary school children. Event-related potential (ERP) and behavioral data were collected in a physical size discrimination numerical Stroop task. Task-irrelevant numerical information was processed automatically as early as grade 1. Weakening interference and strengthening facilitation indicated the parallel development of general cognitive control and automatic number processing. Relationships among ERP and behavioral effects suggest that control functions play a larger role in younger children and that the automaticity of number processing increases from grade 1 to 3.

Relevance: 30.00%

Abstract:

Power system restoration after a large-area outage involves many factors, and the procedure is usually very complicated. A decision-making support system could therefore be developed to find the optimal black-start strategy. In order to evaluate candidate black-start strategies, some indices, usually both qualitative and quantitative, are employed. However, it may not be possible to directly synthesize these indices, and different degrees of interaction may exist among them. In the existing black-start decision-making methods, qualitative and quantitative indices cannot be well synthesized, and the interactions among different indices are not taken into account. The vague set, an extended version of the well-developed fuzzy set, can be employed to deal with decision-making problems with interacting attributes. Given this background, the vague set is first employed in this work to represent the indices so as to facilitate comparisons among them. Then, the concept of a vague-valued fuzzy measure is presented, and on that basis a mathematical model for black-start decision-making is developed. Compared with the existing methods, the proposed method can deal with the interactions among indices and represent the fuzzy information more reasonably. Finally, an actual power system is used to demonstrate the basic features of the developed model and method.
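In a vague set (Gau and Buehrer's extension of the fuzzy set), each element carries a truth degree t and a falsity degree f with t + f ≤ 1, so the interval [t, 1 − f] bounds the unknown membership. A minimal sketch of such a vague value follows, with the common t − f comparison score; the specific score and measure used in this work are not given in the abstract, so this is only the underlying representation.

```python
from dataclasses import dataclass

@dataclass
class VagueValue:
    t: float  # degree of truth (support for the index)
    f: float  # degree of falsity (evidence against), with t + f <= 1

    def interval(self):
        """Bounds [t, 1 - f] on the true membership degree."""
        return (self.t, 1.0 - self.f)

    def score(self):
        """A common score for comparing vague values."""
        return self.t - self.f
```

Representing each qualitative or quantitative index as such an interval is what allows the otherwise incomparable indices to be compared on a common footing.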

Relevance: 30.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility. The method is also designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
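The pipeline described above (feature extraction, keyed randomization, quantization, binary encoding) can be sketched in a few lines. Everything here, including the block-mean features, the Gaussian random projection and the zero thresholds, is a toy stand-in for the algorithms the dissertation actually studies, chosen only to make the stages concrete.

```python
import numpy as np

def robust_hash(img, key=0, n_bits=64, thresholds=None):
    """Toy robust image hash: block-mean features, a keyed random
    projection (linear, as in most schemes the dissertation surveys),
    then 1-bit quantization and binary encoding. Thresholds would
    normally be learnt from training data; zero is used by default."""
    # feature extraction: means over an 8x8 grid of blocks
    h, w = img.shape
    blocks = img[:h - h % 8, :w - w % 8].reshape(8, h // 8, 8, w // 8)
    feats = blocks.mean(axis=(1, 3)).ravel()
    # keyed randomization: Gaussian random projection seeded by the key
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, feats.size)) @ feats
    # quantization + binary encoding
    t = np.zeros(n_bits) if thresholds is None else thresholds
    return (proj > t).astype(np.uint8)
```

Because the features are block means and the projection is continuous, small perturbations of the image move `proj` only slightly, so most bits survive minor changes; this is exactly the robustness-versus-security trade-off the dissertation analyses at the threshold-learning stage.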

Relevance: 30.00%

Abstract:

Sequences with optimal correlation properties are much sought after for applications in communication systems. In 1980, Alltop (IEEE Trans. Inf. Theory 26(3):350-354, 1980) described a set of sequences based on a cubic function and showed that these sequences were optimal with respect to the known bounds on auto- and cross-correlation. Subsequently these sequences were used to construct mutually unbiased bases (MUBs), a structure of importance in quantum information theory. The key feature of this cubic function is that its difference function is a planar function. Functions with planar difference functions have been called Alltop functions. This paper provides a new family of Alltop functions and establishes the use of Alltop functions for the construction of sequence sets and MUBs.
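Alltop's original construction is easy to reproduce numerically: for a prime p ≥ 5, the p cyclic shifts of the cubic-phase sequence e^(2πi n³/p)/√p have pairwise inner products of magnitude exactly 1/√p, which is the optimality the abstract refers to. The check below is illustrative code, not from the paper.

```python
import numpy as np

def alltop_sequences(p):
    """The p cyclic shifts of Alltop's cubic-phase sequence for a
    prime p >= 5, each normalised to unit energy."""
    n = np.arange(p)
    return np.array([np.exp(2j * np.pi * ((n + k) ** 3 % p) / p) / np.sqrt(p)
                     for k in range(p)])
```

The 1/√p bound holds because the phase difference of two distinct shifts, (n+j)³ − (n+k)³, is a non-degenerate quadratic in n modulo p (its leading coefficient 3(j−k) is nonzero since p ≥ 5), and the corresponding exponential sum is a Gauss sum of magnitude √p.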

Relevance: 30.00%

Abstract:

This paper proposes a method for designing set-point regulation controllers for a class of underactuated mechanical systems in Port-Hamiltonian System (PHS) form. A new set of potential shape variables in closed loop is proposed, which can replace the set of open-loop shape variables (the configuration variables that appear in the kinetic energy). With this choice, the closed-loop potential energy contains free functions of the new variables. By expressing the regulation objective in terms of these new potential shape variables, the desired equilibrium can be assigned and there is freedom to reshape the potential energy to achieve performance whilst maintaining the PHS form in closed loop. This complements contemporary results in the literature, which preserve the open-loop shape variables. As a case study, we consider a robotic manipulator mounted on a flexible base and compensate for the motion of the base while positioning the end effector with respect to the ground reference. We compare the proposed control strategy with special cases that correspond to other energy shaping strategies previously proposed in the literature.
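For context, the open-loop PHS form and the kind of closed loop that potential energy shaping produces can be written as below; this is the generic energy-shaping template, not the paper's specific equations.

```latex
% open-loop port-Hamiltonian mechanical system
\dot{x} = \bigl[J(x) - R(x)\bigr]\nabla H(x) + g(x)\,u,
\qquad
H(q,p) = \tfrac{1}{2}\,p^{\top} M^{-1}(q)\,p + V(q)

% target closed loop: same structure, reshaped potential energy
\dot{x} = \bigl[J_d(x) - R_d(x)\bigr]\nabla H_d(x),
\qquad
H_d(q,p) = \tfrac{1}{2}\,p^{\top} M^{-1}(q)\,p + V_d(q)
```

Here V_d contains the free functions of the new potential shape variables, which is where the equilibrium assignment and performance shaping described in the abstract take place.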

Relevance: 30.00%

Abstract:

This paper translates the concept of sustainable production into the three dimensions of economic, environmental and ecological sustainability to analyze optimal production scales by solving optimization problems. Economic optimization seeks input-output combinations that maximize profits. Environmental optimization searches for input-output combinations that minimize the polluting effects of the materials balance on the surrounding environment. Ecological optimization looks for input-output combinations that minimize the cumulative destruction of the entire ecosystem. Using an aggregate space, the framework illustrates that these optimal scales are often not identical because markets fail to account for all negative externalities. Profit-maximizing firms normally operate at scales larger than the optimal scales from the viewpoints of environmental and ecological sustainability; hence policy interventions are favoured. The framework offers a useful tool for efficiency studies and policy implication analysis. The paper provides an empirical investigation using a data set of rice farms in South Korea.

Relevance: 30.00%

Abstract:

Selection of features that will permit accurate pattern classification is a difficult task. However, if a particular data set is represented by discrete-valued features, it becomes possible to determine empirically the contribution that each feature makes to the discrimination between classes. This paper extends the discrimination bound method so that both the maximum and average discrimination expected on unseen test data can be estimated. These estimation techniques are the basis of a backwards elimination algorithm that can be used to rank features in order of their discriminative power. Two problems are used to demonstrate this feature selection process: classification of the Mushroom Database, and a real-world, pregnancy-related medical risk prediction task, the assessment of the risk of perinatal death.

Relevance: 30.00%

Abstract:

This article is motivated by a lung cancer study in which a regression model is involved and the response variable is too expensive to measure but the predictor variable can be measured easily at relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase the efficiency of regression analysis in the above-mentioned situation. The developed method is applied retrospectively to a lung cancer study in which the interest is to investigate the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria such as A-, D-, and integrated mean square error (IMSE)-optimality are considered in the application. With set size 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, by using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred by using SRS.
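The basic balanced RSS procedure underlying these schemes can be sketched as follows: in each cycle, for each rank r, draw a fresh set of units, rank them by the cheap covariate, and measure only the r-th ranked unit. This is only the balanced baseline; the optimal A-, D- and IMSE-optimal schemes in the article use unbalanced allocations of measurements across ranks.

```python
import random

def ranked_set_sample(population, set_size, cycles, rank_key, seed=0):
    """Balanced ranked-set sampling: per cycle and per rank r, draw
    `set_size` units, rank them by the inexpensive covariate
    (`rank_key`), and keep only the r-th ranked unit for the
    expensive measurement."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(cycles):
        for r in range(set_size):
            group = sorted(rng.sample(population, set_size), key=rank_key)
            chosen.append(group[r])
    return chosen
```

With set size m and c cycles, m × c expensive measurements are made, but m² × c units are ranked; the efficiency gain over SRS comes from this cheap ranking information.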