866 results for Reproducing kernel


Relevance:

10.00%

Publisher:

Abstract:

Zeaxanthin and its isomer lutein are the major carotenoids contributing to the characteristic colour of yellow sweet-corn. From a human health perspective, these two carotenoids are also specifically accumulated in the human macula, where they are thought to protect the photoreceptor cells of the eye from blue-light oxidative damage and to improve visual acuity. As humans cannot synthesise these compounds, they must be obtained from dietary sources containing zeaxanthin and lutein. Compared with most dietary sources, yellow sweet-corn (Zea mays var. rugosa) is a particularly good source of zeaxanthin, although its zeaxanthin concentration is still fairly low relative to what is considered a supplementary dose for improving macular pigment concentration (2 mg/person/day). In the present project, we increased the zeaxanthin concentration in sweet-corn kernels from 0.2–0.3 mg/100 g FW to more than 2.0 mg/100 g FW at the sweet-corn eating stage, substantially reducing the amount of corn required to provide the same dosage of zeaxanthin. This was achieved by altering the carotenoid synthesis pathway to more than double total carotenoid synthesis and to redirect synthesis towards the beta-arm of the pathway, where zeaxanthin is produced. This raised the proportion of zeaxanthin from 22% to 70% of the total carotenoid present. As kernels advance in physiological maturity, carotenoid concentration also increases significantly, mainly through increased synthesis but also through a decline in kernel moisture content. When fully mature, dried kernels can reach zeaxanthin and carotene concentrations of 8.7 mg/100 g and 2.6 mg/100 g, respectively. Although kernels continue to gain zeaxanthin when harvested past the normal harvest maturity stage, the texture of these 'over-mature' kernels is tough, making them less appealing for fresh consumption. The increase in zeaxanthin and other orange carotenoids such as β-carotene also lowers the kernel hue angle of fresh sweet-corn from approximately 90° (yellow) to as low as 75° (orange-yellow). This makes high-zeaxanthin sweet-corn visually distinguishable from standard yellow sweet-corn, which is pigmented predominantly by lutein.
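As a back-of-the-envelope check using only the figures quoted above, the fresh mass m of kernels needed to supply the 2 mg supplementary dose at concentration c (in mg per 100 g FW) drops from roughly 800 g to about 100 g:

```latex
m \;=\; \frac{2\ \text{mg}}{c\ \left[\text{mg}/100\,\text{g}\right]} \times 100\ \text{g},
\qquad
m\big|_{c=0.25} \;\approx\; 800\ \text{g},
\qquad
m\big|_{c=2.0} \;=\; 100\ \text{g}.
```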

Relevance:

10.00%

Publisher:

Abstract:

This monograph dissertation deals with kernel integral operators and their mapping properties on Euclidean domains. The associated kernels are weakly singular; examples are provided by the Green functions of certain elliptic partial differential equations. It is well known that mapping properties of the corresponding Green operators can be used to deduce a priori estimates for the solutions of these equations. In the dissertation, natural size and cancellation conditions are quantified for kernels defined in domains. These kernels induce integral operators, which are then composed with an arbitrary partial differential operator of prescribed order, depending on the size of the kernel. The main object of study is the boundedness of such compositions, and the main result characterizes their Lp-boundedness on suitably regular domains. When the aforementioned kernels are defined on the whole Euclidean space, their partial derivatives of prescribed order turn out to be so-called standard kernels, which arise in connection with singular integral operators. The Lp-boundedness of singular integrals is characterized by the T1 theorem, originally due to David and Journé and published in 1984 (Ann. of Math. 120). The main result of the dissertation can be interpreted as a T1 theorem for weakly singular integral operators. The dissertation also treats special convolution-type weakly singular integral operators defined on Euclidean spaces.
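Schematically, and only as an illustration of the standard setup (the dissertation's precise conditions may differ), a weakly singular kernel of order α on a domain Ω ⊂ ℝⁿ obeys size conditions of the form

```latex
\left|\partial_x^{\beta} K(x,y)\right| \;\le\; C\,|x-y|^{\alpha-|\beta|-n},
\qquad 0<\alpha<n,\quad |\beta|\le\alpha,
```

so that for |β| = α the derivative satisfies the |x−y|^{−n} size estimate of a standard (Calderón–Zygmund) kernel, consistent with the statement above that derivatives of prescribed order become standard kernels.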

Relevance:

10.00%

Publisher:

Abstract:

The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane admit an atomic decomposition. However, a decomposition obtained through a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain into which it is mapped. The lattice of points at which the atoms of the decomposition are evaluated usually follows the geometry of the original domain, but after mapping into another domain this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex spaces LV-infinity, consisting of measurable functions, and HV-infinity, consisting of analytic functions on the unit disk, the latter being a closed subspace of the former. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, HV-infinity is the smallest possible substitute for the space H-infinity of bounded analytic functions. In the second article we extend this result to smoothly bounded strictly pseudoconvex domains. Here the relevant reproducing kernels are usually not known explicitly, so the proof of continuity of the Bergman projection rests on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown using peaking functions first constructed by Bell (1981). Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. The third article generalises this result by constructing an atomic decomposition for HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be represented as a linear combination of atoms whose coefficient sequence belongs to a suitable Köthe co-echelon space.
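In schematic form (the details vary by setting), an atomic decomposition expands every function in the space as a series of normalized reproducing kernels evaluated along a lattice of points:

```latex
f(z) \;=\; \sum_{k} \lambda_k\, \frac{K(z,a_k)}{\lVert K(\cdot,a_k)\rVert},
\qquad (\lambda_k)_k \in X,
```

where K is the reproducing (Bergman) kernel, {a_k} is the lattice of sample points, and the coefficient space X is ℓ^p in the classical Coifman–Rochberg setting and, as stated above, a suitable Köthe co-echelon space for HV-infinity.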

Relevance:

10.00%

Publisher:

Abstract:

The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model, which accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel, is developed. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code that customizes the solver to incorporate the models for sorption and insect mortality. Two types of fumigant delivery are studied, namely fan-forced from the base of the silo and tablet from the top of the silo. An analysis of the predicted phosphine distribution shows that during fan-forced fumigation, the position of the leaky area strongly influences the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects near the top of the silo may not be eradicated. The position of a leak does not, however, affect the phosphine distribution during tablet fumigation. For such fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
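The insect-mortality component can be pictured with a simple exposure model. The sketch below uses a Haber-type dose-response law with hypothetical parameters k and n; it illustrates the kind of concentration-time mortality model described above, not the calibrated model from the study.

```python
import numpy as np

def insect_survival(conc, dt, k, n):
    """Surviving fraction under an exposure history `conc` (one value per
    time step of length dt), using a Haber-type dose-response law:
        dS/dt = -k * C(t)**n * S
    k and n are hypothetical values, not the study's calibrated ones."""
    S, out = 1.0, [1.0]
    for C in conc:
        S *= np.exp(-k * C**n * dt)  # exact step for constant C over dt
        out.append(S)
    return np.array(out)

# constant 200 ppm exposure, hourly steps, for 5 days (120 h)
surv = insect_survival(np.full(120, 200.0), dt=1.0, k=1e-4, n=0.6)
print(f"surviving fraction after 5 days: {surv[-1]:.3f}")
```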

Relevance:

10.00%

Publisher:

Abstract:

Natural selection generally operates at the level of the individual, or more specifically at the level of the gene. As a result, individual selection does not always favour traits that benefit the population or species as a whole. The spread of an individual gene may even act to the detriment of the organism in which it finds itself. Thus selection at the level of the individual can affect processes at the level of the organism, the group, or even the species. As most behaviours ultimately affect births, deaths and the distribution of individuals, it seems inevitable that behavioural decisions will have an impact on population dynamics and population densities. Behavioural decisions often involve costs through the allocation of energy into behavioural strategies, such as investment in armaments for fighting over resources, or increased mortality due to injury or elevated predation risk. Similarly, behaviour may act to benefit the population in terms of higher survival and increased fecundity; examples include increased investment through parental care, choosing a mate based on the nuptial gifts it may supply, and choosing territories in the face of competition. Investigating the impact of behaviour on population ecology may seem a trivial task, but it is likely to have important consequences at several levels. For example, antagonistic behaviour may occasionally become so extreme that it increases the risk of extinction, and such extinction risk may have important implications for conservation. As a corollary, such behaviour may also act as a macroevolutionary force, weeding out populations with traits which, whilst beneficial to individuals in the short term, ultimately result in population extinction. In this thesis, I examine how behaviours, specifically conflict and competition over a resource and aspects of behaviour involved in sexual selection, can affect population densities, and what the implications are for the evolution and ecology of the populations in question. I find that both behaviours related to individual conflict and mating strategies can have effects at the level of the population, but that various factors, such as feedback between selection and population densities or macroevolution caused by species extinctions, may act to limit the intensity of the conflicts we observe in nature.

Relevance:

10.00%

Publisher:

Abstract:

Anadromous whitefish is one of the most important species in the Finnish coastal fisheries of the Gulf of Bothnia. To compensate for reproduction lost to river damming and to support the fisheries, several million one-summer-old whitefish are released into the Gulf of Bothnia every year. Since the Gulf also holds naturally reproducing whitefish, and wild and stocked fish cannot be separated in the catch, the stocking impact can only be estimated by marking the stocked fish. Owing to the small size and large number of released whitefish, the scattered fishery, and the large area over which the whitefish migrate, most traditional fish-marking methods were either unsuitable (e.g. Carlin tags) or proved too expensive (e.g. coded wire tags). Fluorescent pigment spraying offers a fast and cost-effective way to mass-mark young fish. However, the results are not always satisfactory, owing to poor long-term retention of the marks in some species, so the method has to be tested, and suitable marking conditions determined, for each species. This thesis is based on work carried out while developing the fluorescent pigment spraying method for marking one-summer-old whitefish fingerlings, and it draws together the results of mass-marking whitefish fingerlings released in the Gulf of Bothnia. Fluorescent pigment spraying is suitable for one-summer-old whitefish longer than 8 cm in total length. The water temperature during marking should not exceed 10 °C. A suitable spraying pressure is 6 bar, measured at the compressor outlet, and the nozzle of the spraying gun should be held about 20 cm from the fish. Under such conditions, marking results in long-term retention of the mark with little or no mortality. The stress level of the fish (measured as muscle water content) rises during the marking procedure, but if the fish are allowed to recover after marking, the overall stress level remains within the limits observed during normal handling in the capture-loading-transport-stocking procedure. Marked whitefish fingerlings are released into the sea at a larger size and later in the season than wild whitefish; nevertheless, the stocked individuals migrate to the southern feeding grounds in a pattern similar to the wild ones. The catch produced by whitefish stocking in the Gulf of Bothnia varied between released fingerling groups but was within the limits reported elsewhere in Finland. Releases in the southern Bothnian Bay produced a larger catch than those in the northern Bothnian Bay. The size of the released fingerlings appeared to have some effect on survival during the first winter at sea; however, when the different marking groups were compared, mean fingerling size was not related to stocking success.

Relevance:

10.00%

Publisher:

Abstract:

Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected at runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the CEs bound in this way are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantages of an ASIC together with the hardware reconfigurability and programmability of an FPGA or an instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set-extensible processors determine sequences of instructions that occur repeatedly within the application and turn them into custom instructions at design time to speed up their execution. We extend this scheme further: a kernel is compiled into custom instructions whose constituents bear a strong producer-consumer relationship (and are not limited to frequently occurring instruction sequences). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing. To reduce the overhead of data transfer between custom instructions, direct communication paths are employed among them. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by a factor of log(n) for an n-point FFT compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area compared to an ASIC, while consuming 0.1x the power of an FPGA implementation. In addition, the configuration overhead of the FPGA implementation is 1,000x that of REDEFINE.
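The producer-consumer grouping can be pictured with a toy dataflow pass. The sketch below is illustrative only: the op names, the `fuse_chains` helper, and the greedy rule are all assumptions of mine, not REDEFINE's actual compiler algorithm. It fuses linear chains of a dataflow graph into candidate custom instructions, which is what removes intermediate loads and stores.

```python
from collections import defaultdict

# A tiny dataflow graph: op -> list of ops consuming its result.
edges = {
    "load_a": ["mul"], "load_b": ["mul"],
    "mul": ["add"], "add": ["store"],
    "store": [],
}

def fuse_chains(consumers):
    """Greedily fuse linear producer-consumer chains into groups,
    each group being a candidate custom instruction."""
    producers = defaultdict(list)
    for p, cs in consumers.items():
        for c in cs:
            producers[c].append(p)
    groups, seen = [], set()
    for op in consumers:
        if op in seen:
            continue
        chain = [op]
        seen.add(op)
        # extend while the tail feeds exactly one op with one producer
        while len(consumers[chain[-1]]) == 1:
            nxt = consumers[chain[-1]][0]
            if nxt in seen or len(producers[nxt]) != 1:
                break
            chain.append(nxt)
            seen.add(nxt)
        groups.append(chain)
    return groups

print(fuse_chains(edges))
# -> [['load_a'], ['load_b'], ['mul', 'add', 'store']]
```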

Relevance:

10.00%

Publisher:

Abstract:

The problem of unsupervised anomaly detection arises in a wide variety of practical applications. While one-class support vector machines have demonstrated their effectiveness as an anomaly detection technique, their ability to model large datasets is limited by the memory and time complexity of training. To address this issue for supervised learning of kernel machines, there has been growing interest in random projection methods as an alternative to the computationally expensive problems of kernel matrix construction and support vector optimisation. In this paper we leverage the theory of nonlinear random projections and propose the Randomised One-class SVM (R1SVM), an efficient and scalable anomaly detection technique that can be trained on large-scale datasets. Our empirical analysis on several real-life and synthetic datasets shows that our randomised 1SVM algorithm achieves accuracy comparable to or better than deep autoencoders and traditional kernelised approaches for anomaly detection, while being approximately 100 times faster in training and testing.
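In the spirit of R1SVM (though not necessarily its exact construction), one can approximate the RBF kernel with random Fourier features and then train a linear one-class SVM in the projected space. The sketch below uses scikit-learn with illustrative hyperparameters; `SGDOneClassSVM` is a linear online solver, so training cost grows linearly with the number of samples.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDOneClassSVM   # scikit-learn >= 1.0
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
X_train = rng.randn(10000, 20)                  # "normal" training data
X_test = np.vstack([rng.randn(50, 20),          # normal points
                    rng.randn(5, 20) + 6.0])    # obvious anomalies

model = make_pipeline(
    RBFSampler(gamma=0.1, n_components=200, random_state=0),  # random features
    SGDOneClassSVM(nu=0.05, random_state=0),    # linear one-class SVM
)
model.fit(X_train)
pred = model.predict(X_test)                    # +1 inlier, -1 anomaly
print("indices flagged as anomalies:", np.where(pred == -1)[0])
```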

Relevance:

10.00%

Publisher:

Abstract:

State-of-the-art image-set matching techniques typically model each image set, implicitly, with a Gaussian distribution. Here, we propose to go beyond these representations and model image sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, demonstrate the benefits of our approach over state-of-the-art image-set matching methods.
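A minimal sketch of the modelling pipeline, under assumptions of mine rather than the paper's exact construction: represent each image set by a Gaussian KDE and compare sets via a Monte-Carlo estimate of the symmetrised KL divergence (a Csiszár f-divergence), turned into a similarity score. Note that exp(−γD) is not automatically positive definite; establishing valid kernels on the statistical manifold is precisely one of the paper's contributions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def sym_kl(X, Y, n_samples=2000):
    """Monte-Carlo estimate of KL(p||q) + KL(q||p) between two KDEs
    fitted to the row-wise samples in X and Y."""
    p, q = gaussian_kde(X.T), gaussian_kde(Y.T)
    xs = p.resample(n_samples, seed=0)   # samples drawn from p
    ys = q.resample(n_samples, seed=1)   # samples drawn from q
    kl_pq = np.mean(np.log(p(xs)) - np.log(q(xs)))
    kl_qp = np.mean(np.log(q(ys)) - np.log(p(ys)))
    return kl_pq + kl_qp

def divergence_similarity(X, Y, gamma=0.5):
    # heuristic similarity score; NOT guaranteed positive definite
    return np.exp(-gamma * sym_kl(X, Y))

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(300, 5))  # image set 1: 300 descriptors in R^5
B = rng.normal(0.3, 1.0, size=(300, 5))  # image set 2, slightly shifted
print(divergence_similarity(A, B))
```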

Relevance:

10.00%

Publisher:

Abstract:

Many conventional statistical machine learning algorithms generalise poorly if distribution bias exists in the datasets. Distribution bias arises, for example, in domain generalisation, where knowledge acquired from multiple source domains must be applied to previously unseen target domains. We propose Elliptical Summary Randomisation (ESRand), an efficient domain generalisation approach comprising a randomised kernel and elliptical data summarisation. ESRand learns a domain-interdependent projection to a latent subspace that minimises the existing biases in the data while maintaining the functional relationship between domains. In the latent subspace, ellipsoidal summaries replace the samples to enhance generalisation by further removing bias and noise from the data. Moreover, the summarisation enables large-scale data processing by significantly reducing the size of the data. Through comprehensive analysis, we show that our subspace-based approach outperforms state-of-the-art results on several activity recognition benchmark datasets, while keeping the computational complexity significantly low.
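The two ingredients can be sketched as follows, with all names and steps being illustrative assumptions rather than ESRand's actual algorithm: a randomised (random-Fourier-feature) map into a latent space, and an ellipsoidal summary (mean and covariance) per domain that replaces the raw samples.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler

def elliptical_summary(X, feature_map):
    """Replace one domain's samples by an ellipsoid (mean, covariance)
    in the randomised latent space."""
    Z = feature_map.transform(X)
    return Z.mean(axis=0), np.cov(Z, rowvar=False)

rng = np.random.default_rng(0)
feature_map = RBFSampler(gamma=0.2, n_components=100, random_state=0)
feature_map.fit(rng.normal(size=(10, 30)))   # fix the random projection

# two "domains" with a small distribution shift between them
domains = [rng.normal(size=(5000, 30)) + shift for shift in (0.0, 0.5)]
summaries = [elliptical_summary(X, feature_map) for X in domains]

# downstream learning now sees two (mean, covariance) pairs
# instead of 10,000 raw samples
for mu, cov in summaries:
    print(mu.shape, cov.shape)
```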

Relevance:

10.00%

Publisher:

Abstract:

We derive a very general expression for the survival probability and the first passage time distribution of a particle executing Brownian motion in full phase space, with an absorbing boundary condition at a point in position space, valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and an approximate first passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances, so knowledge of these variances enables one to compute a lower bound on the survival probability and consequently the first passage time distribution. As examples, we compute these for a Gaussian Markovian process and, in the non-Markovian case, for an exponentially decaying friction kernel and for a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with a rate equal to the transition-state rate constant.
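Schematically, the logical chain is: the first passage time density is the negative time derivative of the survival probability, and Jensen's inequality converts an exact but intractable average into a bound that requires only second moments (the stated variances), with exponential decay emerging at long times:

```latex
F(t) \;=\; -\frac{\mathrm{d}S(t)}{\mathrm{d}t},
\qquad
\bigl\langle e^{X}\bigr\rangle \;\ge\; e^{\langle X\rangle}
\;\Longrightarrow\;
S(t) \;\ge\; S_{\mathrm{lb}}\bigl(\sigma_{xx},\,\sigma_{vv},\,\sigma_{xv}\bigr),
\qquad
S(t)\,\sim\, e^{-k_{\mathrm{TST}}\,t}\quad (t\to\infty).
```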

Relevance:

10.00%

Publisher:

Abstract:

Dynamic systems involving convolution integrals with decaying kernels, of which fractionally damped systems are a special case, are non-local in time and hence infinite dimensional. Straightforward numerical solution of such systems up to time t needs O(t^2) computations, owing to the repeated evaluation of integrals over intervals that grow like t. Finite-dimensional, local approximations are thus desirable. We present an approximation method that first rewrites the evolution equation as a coupled infinite-dimensional system with no convolution, and then uses Galerkin approximation with finite elements to obtain linear, finite-dimensional, constant-coefficient approximations for the convolution. This paper is a broad generalization, based on a new insight, of our prior work with fractional-order derivatives (Singh & Chatterjee 2006 Nonlinear Dyn. 45, 183-206). In particular, the decaying kernels we can address are now generalized to Laplace transforms of known functions; the power-law kernel of fractional-order differentiation is a special case. The approximation can be refined easily, and its local nature allows numerical solution up to time t with O(t) computations. Examples with several different kernels show excellent performance. A key feature of our approach is that the dynamic system in which the convolution integral appears is itself approximated by another system, as distinct from numerically approximating just the solution for given initial values; this allows non-standard uses of the approximation, e.g. in stability analyses.
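A minimal sketch of the underlying idea (not the paper's Galerkin construction): write the decaying kernel as a Laplace transform, discretise it as a short sum of exponentials, and carry one local ODE state per exponential. The cost up to time T is then O(T) rather than O(T^2). The quadrature nodes and weights below are crude illustrative choices.

```python
import numpy as np

# Crude quadrature for K(t) = t**-0.5, written as a Laplace transform:
#   t**-0.5 = (1/sqrt(pi)) * integral_0^inf s**-0.5 * exp(-s*t) ds
s = np.logspace(-2, 3, 12)                              # decay-rate nodes
w = np.sqrt(s) * np.log(s[1] / s[0]) / np.sqrt(np.pi)   # log-grid weights

def convolve_local(f, dt, T):
    """Approximate (K*f)(t) = int_0^t K(t-u) f(u) du on a time grid,
    using one local ODE state per exponential: z_i' = -s_i z_i + f."""
    n = int(round(T / dt))
    z = np.zeros_like(s)                # auxiliary ODE states
    decay = np.exp(-s * dt)
    out = np.empty(n)
    for k in range(n):
        fk = f(k * dt)
        z = z * decay + fk * (1.0 - decay) / s  # exact step, piecewise-constant f
        out[k] = w @ z                  # local estimate of the convolution
    return out

# sanity check: for f = 1, (K*1)(t) = 2*sqrt(t), so the value at t = 1
# should come out within a few per cent of 2
print(convolve_local(lambda t: 1.0, dt=1e-3, T=1.0)[-1])
```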

Relevance:

10.00%

Publisher:

Abstract:

Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The training time complexity of SVMs depends on the data size, which usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines, reducing the training time without compromising generalization performance. Experiments with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
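A generic version of cluster-abstraction-based selective sampling (in ordinary feature space rather than the paper's kernel-induced space, and with illustrative choices throughout) might look like the following: cluster the data, train a coarse SVM on cluster representatives, keep only the points from clusters whose representatives become support vectors, and retrain on that reduced set.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def selective_sample_svm(X, y, n_clusters=200, **svc_args):
    km = MiniBatchKMeans(n_clusters=n_clusters, n_init=3, random_state=0).fit(X)
    # label each cluster centre by majority vote of its members
    center_y = np.array([
        np.bincount(y[km.labels_ == c]).argmax() if np.any(km.labels_ == c) else 0
        for c in range(n_clusters)
    ])
    coarse = SVC(**svc_args).fit(km.cluster_centers_, center_y)
    # keep raw points only from clusters whose centres became support
    # vectors, i.e. clusters near the decision boundary, then retrain
    keep = np.isin(km.labels_, coarse.support_)
    return SVC(**svc_args).fit(X[keep], y[keep])

X, y = make_classification(n_samples=20000, n_features=10, random_state=0)
model = selective_sample_svm(X, y, kernel="rbf", C=1.0)
print("support vectors in final model:", model.support_vectors_.shape[0])
```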

Relevance:

10.00%

Publisher:

Abstract:

During the last few decades, mean-field models, in which large-scale magnetic fields and differential rotation arise from the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict constraints on mean-field models. Moreover, most present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena; for lack of this knowledge, these effects are often prescribed in an ad hoc fashion. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions. However, the problem is too large by several orders of magnitude for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than direct global calculations. In the present dissertation three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and of hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even where mean-field theory cannot supply the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that captures the main physical effects in an adequately realistic manner. In the present study, the Reynolds stresses and turbulent heat transport responsible for the generation of differential rotation were determined, along with the mixing-length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid-rotation regime were studied. The third part of the study applies the local results in mean-field models, a task we begin by using the results on the alpha-effect and turbulent pumping in mean-field models of the solar dynamo.
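For orientation, the small-scale effects discussed here enter the standard mean-field induction equation through the turbulent electromotive force; in its simplest isotropic parameterization,

```latex
\frac{\partial \overline{B}}{\partial t}
 \;=\; \nabla\times\bigl(\overline{U}\times\overline{B} + \mathcal{E}\bigr),
\qquad
\mathcal{E} \;=\; \alpha\,\overline{B}
 \;+\; \gamma\times\overline{B}
 \;-\; \eta_{\mathrm{t}}\,\nabla\times\overline{B},
```

where α is the alpha-effect, γ a turbulent pumping velocity, and η_t the turbulent diffusivity. These are precisely the coefficients that the local 3D calculations described above are used to determine.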

Relevance:

10.00%

Publisher:

Abstract:

Automatic identification of software faults has enormous practical significance. It requires characterizing program execution behavior and applying appropriate data mining techniques to the chosen representation. In this paper, we use sequences of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatically identifying fault causes. Spectrum kernels and SVMs are used for the former, while latent semantic analysis is used for the latter. The techniques are demonstrated on the intrusion dataset containing system call traces. The results show that the kernel techniques are as accurate as the best available results but faster by orders of magnitude. We also show that latent semantic indexing is capable of revealing fault-specific features.
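The p-spectrum kernel itself is simple to state: two sequences are compared via the dot product of their length-p subsequence count vectors. A minimal sketch follows; the trace contents are made-up examples, not drawn from the dataset used above.

```python
from collections import Counter

def spectrum_kernel(a, b, p=2):
    """K(a, b) = dot product of length-p subsequence (k-mer) counts."""
    grams = lambda s: Counter(tuple(s[i:i + p]) for i in range(len(s) - p + 1))
    ca, cb = grams(a), grams(b)
    return sum(ca[g] * cb[g] for g in ca.keys() & cb.keys())

# hypothetical system-call traces
trace1 = ["open", "read", "read", "write", "close"]
trace2 = ["open", "read", "write", "write", "close"]
print(spectrum_kernel(trace1, trace2))   # -> 3 shared, count-weighted 2-grams
```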