998 results for commercial representation



Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is also feasible in this case.
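As a quick reminder of the approximation invoked in the abstract, the display below is the standard Laplace argument under a flat prior; it is a generic textbook statement, not a formula taken from the articles themselves.

```latex
% With a flat prior p(\theta) \propto 1 the posterior is proportional to the
% likelihood, so the posterior mode coincides with the maximum likelihood
% estimate \hat\theta.  A second-order expansion of the log-posterior gives
\log p(\theta \mid y) \approx \text{const}
  - \tfrac{1}{2}\,(\theta - \hat\theta)^{\mathsf T} J(\hat\theta)\,(\theta - \hat\theta),
\qquad
J(\hat\theta) = -\left.\frac{\partial^{2} \log L(\theta; y)}
                             {\partial\theta\,\partial\theta^{\mathsf T}}\right|_{\theta = \hat\theta},
% so that, approximately,
\theta \mid y \;\sim\; \mathrm{N}\!\bigl(\hat\theta,\; J(\hat\theta)^{-1}\bigr),
% which is the sense in which maximum likelihood inference can be read as a
% Gaussian approximation of the posterior at its mode.
```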


Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) could be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves considered simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The best known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.


Abstract:

Rapid screening tests and an appreciation of the simple genetic control of Alternaria brown spot (ABS) susceptibility have existed for many years, and yet the application of this knowledge to commercial-scale breeding programs has been limited. Detached leaf assays were first demonstrated more than 40 years ago, and reliable data suggesting that a single gene determines susceptibility have been emerging for at least 20 years. However, it is only recently that the requirement for genetic resistance in new hybrids has become a priority, following increased disease prevalence in Australian mandarin production areas previously considered too dry for the pathogen. Almost all of the high-fruit-quality parents developed so far by the Queensland-based breeding program are susceptible to ABS, necessitating the screening of their progeny to avoid commercialisation of susceptible hybrids. This is done effectively and efficiently by spraying 3–6-month-old hybrid seedlings with a spore suspension derived from a toxin-producing field isolate of Alternaria alternata, then incubating these seedlings in a cool room at 25°C and high humidity for 5 days. Susceptible seedlings show clear disease symptoms and are discarded. Analysis of observed and expected segregation ratios loosely supports the hypothesis of a single dominant gene for susceptibility, but does not rule out the possibility of alternative genetic models. After implementing routine screening for ABS resistance for three seasons, we now have more than 20,000 hybrids growing in field progeny blocks that have been screened for resistance to ABS.
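The segregation-ratio check mentioned above can be illustrated with a minimal sketch: a chi-square goodness-of-fit test of susceptible versus resistant seedling counts against the ratio expected under the single-dominant-gene model (1:1 for a cross of a heterozygous susceptible parent with a resistant parent). The counts and function name are placeholders, not data or code from the breeding program.

```python
from scipy.stats import chisquare

def segregation_test(n_susceptible, n_resistant, ratio=(1, 1)):
    """Chi-square goodness-of-fit test of observed susceptible/resistant
    counts against an expected Mendelian segregation ratio."""
    total = n_susceptible + n_resistant
    r_s, r_r = ratio
    expected = [total * r_s / (r_s + r_r), total * r_r / (r_s + r_r)]
    return chisquare([n_susceptible, n_resistant], f_exp=expected)

# Hypothetical counts for one family; a 1:1 ratio is expected when a
# heterozygous susceptible parent is crossed with a resistant parent.
stat, p = segregation_test(112, 98, ratio=(1, 1))
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```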


Abstract:

Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit, taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and the second paper of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find martingales common to different processes and, in that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and that these will be well described by Loewner evolutions with random driving forces.
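For orientation, the standard definition of chordal SLE referred to throughout the abstract (a textbook formula, not one specific to this thesis) is the Loewner equation with a Brownian driving term:

```latex
% Chordal Loewner evolution in the upper half-plane \mathbb{H}:
\partial_t g_t(z) \;=\; \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
% and SLE(\kappa) is obtained by taking the driving function W_t = \sqrt{\kappa}\, B_t
% with B_t a standard Brownian motion; the parameter \kappa > 0 is the real
% variable that selects the universality class of the lattice model.
```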


Abstract:

This greenhouse study investigated the efficacy of acibenzolar-S-methyl (Bion®) treatment of lower leaves of passionfruit (Passiflora edulis f. sp. flavicarpa) on Passionfruit woodiness disease and the activities of two pathogenesis-related proteins, chitinase and β-1,3-glucanase, after inoculation with passionfruit woodiness virus (PWV). All Bion® concentrations reduced disease symptoms, but the concentration of 0.025 g active ingredient (a.i.)/l was the most effective, reducing disease severity in systemic leaves by 23, 29 and 30% compared with water-treated controls at 30, 40 and 50 days post inoculation (dpi) with PWV, respectively. Correspondingly, relative virus concentration as determined by DAS-ELISA in the upper, untreated leaves (new growth) above the site of inoculation at 50 dpi was reduced by 17 and 22% in plants treated with 0.025 and 0.05 g a.i./l, respectively. Bion® treatment and subsequent inoculation with PWV increased chitinase and β-1,3-glucanase activities in the new leaves above the site of inoculation at 30 dpi with PWV. It was concluded that the optimal protective Bion® treatment concentrations were 0.025 and 0.05 g a.i./l.


Abstract:

This PhD Thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations with m particles and n anti-particles. Geometrically, the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, which is a typical example of an infinite-dimensional Grassmannian manifold one encounters in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H a subgroup of it. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation for loop groups (of compact groups), which in turn are vitally important in string theory / conformal field theory. The Thesis consists of three chapters: the first chapter is an introduction to the background material and the other two chapters are individually written research articles. The first article deals in a new way with the well-known question in Yang-Mills theory of when one can lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second quantized Dirac operator. In general there is an obstruction to this (called the Mickelsson-Faddeev anomaly), and various geometric interpretations for this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation for the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence not in the landscape of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to be able to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sums make sense as well-defined operators acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
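The finite-dimensional analogue invoked in the abstract can be written out explicitly; the display below is the standard Plücker / Borel-Weil statement, included only for orientation and not quoted from the thesis.

```latex
% Plücker embedding of the Grassmannian of k-planes in a finite-dimensional
% complex vector space V:
\iota : \mathrm{Gr}(k, V) \hookrightarrow \mathbb{P}\bigl(\Lambda^{k} V\bigr),
\qquad W \longmapsto [\, w_1 \wedge \cdots \wedge w_k \,],
% for any basis w_1, \dots, w_k of W.  The determinant line bundle of the
% tautological subbundle S satisfies \det S \cong \iota^{*}\mathcal{O}(-1),
% and the holomorphic sections of its dual recover an exterior power:
H^{0}\bigl(\mathrm{Gr}(k, V), \det S^{*}\bigr) \;\cong\; \Lambda^{k} V^{*}.
```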


Abstract:

This paper presents an effective feature representation method in the context of activity recognition. Efficient and effective feature representation plays a crucial role not only in activity recognition, but also in a wide range of applications such as motion analysis, tracking and 3D scene understanding. In the context of activity recognition, local features are increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational requirements, their performance is still limited for real-world applications due to a lack of contextual information and models not being tailored to specific activities. We propose a new activity representation framework to address the shortcomings of the popular, but simple, bag-of-words approach. In our framework, multiple-instance SVM (mi-SVM) is first used to identify positive features for each action category, and the k-means algorithm is used to generate a codebook. Then locality-constrained linear coding is used to encode the features into the generated codebook, followed by spatio-temporal pyramid pooling to convey the spatio-temporal statistics. Finally, an SVM is used to classify the videos. Experiments carried out on two popular datasets of varying complexity demonstrate significant performance improvement over the baseline bag-of-features method.
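For reference, the sketch below implements only the baseline bag-of-features stages that the paper improves on (k-means codebook, hard-assignment histogram encoding, SVM classification); the mi-SVM feature selection, locality-constrained linear coding and spatio-temporal pyramid pooling of the proposed framework are deliberately omitted, and all names and shapes are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptors, k=400, seed=0):
    """Cluster local spatio-temporal descriptors (n_descriptors x dim) into k visual words."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(descriptors)

def encode_video(codebook, video_descriptors):
    """Hard-assignment word histogram for one video, L1-normalised."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_classifier(codebook, videos, labels):
    """videos: list of per-video descriptor arrays; labels: activity classes."""
    X = np.vstack([encode_video(codebook, v) for v in videos])
    return SVC(kernel="rbf", C=10.0).fit(X, labels)
```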


Abstract:

Many problems in analysis have been solved using the theory of Hodge structures. P. Deligne started to treat these structures in a categorical way. Following him, we introduce the categories of mixed real and complex Hodge structures. The category of mixed Hodge structures over the field of real or complex numbers is a rigid abelian tensor category, and in fact a neutral Tannakian category. Therefore it is equivalent to the category of representations of an affine group scheme. Direct sums of pure Hodge structures of different weights over the real or complex numbers can be realized as representations of the torus group whose complex points form the Cartesian product of two punctured complex planes. Mixed Hodge structures turn out to consist of the data of a direct sum of pure Hodge structures of different weights together with a nilpotent automorphism. Therefore mixed Hodge structures correspond to representations of a certain semidirect product of a nilpotent group and the torus group acting on it.
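The torus group mentioned above is usually described as the Deligne torus; the display below records that standard description and the Hodge decomposition it governs, as background rather than as a statement quoted from the thesis.

```latex
% The Deligne torus, obtained from the multiplicative group by Weil restriction
% of scalars, and its groups of real and complex points:
\mathbb{S} \;=\; \mathrm{Res}_{\mathbb{C}/\mathbb{R}}\,\mathbb{G}_m,
\qquad \mathbb{S}(\mathbb{R}) \cong \mathbb{C}^{\times},
\qquad \mathbb{S}(\mathbb{C}) \cong \mathbb{C}^{\times} \times \mathbb{C}^{\times}.
% A pure real Hodge structure of weight n on V is a decomposition
V \otimes_{\mathbb{R}} \mathbb{C} \;=\; \bigoplus_{p+q=n} V^{p,q},
\qquad \overline{V^{p,q}} = V^{q,p},
% and a direct sum of such structures of varying weights is the same datum as
% an algebraic representation of \mathbb{S} on V.
```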


Abstract:

The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and we experimentally show that segmenting sequences using this segmentation model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering, so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and we show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier detection algorithms in sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
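The optimal dynamic programming algorithm referred to above is classical; the minimal sketch below shows the O(n^2 k) recursion for a one-dimensional sequence with a squared-error segment cost. It is an illustrative re-implementation of the standard algorithm, not code from the thesis.

```python
import numpy as np

def segment_cost(prefix, prefix_sq, a, b):
    """Squared error of representing points a..b-1 (half-open) by their mean."""
    n = b - a
    s = prefix[b] - prefix[a]
    return (prefix_sq[b] - prefix_sq[a]) - s * s / n

def optimal_segmentation(x, k):
    """Partition sequence x into k contiguous segments minimising total squared error.
    Returns (error, segment start indices)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    prefix = np.concatenate(([0.0], np.cumsum(x)))
    prefix_sq = np.concatenate(([0.0], np.cumsum(x * x)))
    err = np.full((k + 1, n + 1), np.inf)   # err[j, i]: best cost of first i points in j segments
    back = np.zeros((k + 1, n + 1), dtype=int)
    err[0, 0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for t in range(j - 1, i):        # t: start index of the last segment
                c = err[j - 1, t] + segment_cost(prefix, prefix_sq, t, i)
                if c < err[j, i]:
                    err[j, i], back[j, i] = c, t
    bounds, i = [], n                        # recover segment starts by walking back-pointers
    for j in range(k, 0, -1):
        i = back[j, i]
        bounds.append(i)
    return err[k, n], sorted(bounds)

# Illustrative data with three obvious levels:
print(optimal_segmentation([1, 1, 1, 5, 5, 5, 9, 9, 9], k=3))   # total error 0.0, starts [0, 3, 6]
```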


Abstract:

The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, where one attempts to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and seven peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e. the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
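As a minimal illustration of the "maximization of independence" idea mentioned above (in the ordinary linear domain, not the nonlinear contrast domain studied in the publications), the sketch below runs FastICA on patches drawn from a user-supplied stack of grayscale natural images; all names and sizes are assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

def sample_patches(images, patch=16, n_patches=20000, seed=0):
    """Draw random patch x patch windows from images of shape (n_images, height, width)
    (the image stack is assumed to be supplied by the user)."""
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    out = np.empty((n_patches, patch * patch))
    for i in range(n_patches):
        k = rng.integers(n)
        y, x = rng.integers(h - patch), rng.integers(w - patch)
        out[i] = images[k, y:y + patch, x:x + patch].ravel()
    return out - out.mean(axis=1, keepdims=True)   # remove each patch's mean luminance

def learn_filters(images, patch=16, n_components=64):
    """Estimate independent components of patch statistics; the rows of the
    unmixing matrix act as linear receptive-field-like filters."""
    X = sample_patches(images, patch=patch)
    ica = FastICA(n_components=n_components, max_iter=500, random_state=0)
    ica.fit(X)
    return ica.components_.reshape(n_components, patch, patch)
```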


Abstract:

Sirex woodwasp was detected in Queensland in 2009 and rapidly established in softwood plantations (Pinus radiata and P. taeda) in southern border regions. Biocontrol inoculations of Deladenus siricidicola began soon after, and adults were monitored to assess the success of the programme. Wasp size, sex ratios, emergence phenology and nematode parasitism rates were recorded, along with the assessment of wild-caught females. Patterns varied within and among seasons, but overall, P. taeda appeared to be a less suitable host than P. radiata, producing smaller adults, lower fat body content and fewer females. Sirex emerging from P. taeda also showed lower levels of nematode parasitism, possibly due to interactions with the more abundant blue-stain fungus in this host. Sirex adults generally emerged between November and March, with distinct peaks in January and March, separated by a marked drop in emergence in early February. Temperature provided the best correlate of seasonal emergence, with fortnights with higher mean minimum temperatures having higher numbers of Sirex emerging. This has implications for the anticipated northward spread of Sirex into sub-tropical coastal plantation regions. Following four seasons of inundative release of nematodes in Queensland, parasitism rates remain low and have resulted in only partial sterilization of infected females.


Abstract:

Limitations in quality bedding material have resulted in the growing need to re-use litter during broiler farming in some countries, which can be of concern from a food-safety perspective. The aim of this study was to compare the Campylobacter levels in ceca and litter across three litter treatments under commercial farming conditions. The litter treatments were (a) the use of new litter after each farming cycle; (b) an Australian partial litter re-use practice; and (c) a full litter re-use practice. The study was carried out on two farms over two years (Farm 1 from 2009–2010 and Farm 2 from 2010–2011), across three sheds (35,000 to 40,000 chickens/shed) on each farm, adopting the three different litter treatments across six commercial cycles. A random sampling design was adopted to test litter and ceca for Campylobacter and Escherichia coli, prior to commercial first thin-out and final pick-up. Campylobacter levels varied little across litter practices and farming cycles on each farm and were in the range of 8.0–9.0 log CFU/g in ceca and 4.0–6.0 log MPN/g for litter. Similarly, E. coli levels in ceca were approximately 7.0 log CFU/g. At first thin-out and final pick-up, the statistical analysis for both litter and ceca showed that the three-way interaction (treatments by farms by times) was highly significant (P < 0.01), indicating that the patterns of Campylobacter emergence/presence across time varied between farms, cycles and pick-ups. The emergence and levels of both organisms were not influenced by litter treatments across the six farming cycles on both farms. Either C. jejuni or C. coli could be the dominant species across litter and ceca, and this phenomenon could not be attributed to specific litter treatments. Irrespective of the litter treatments in place, cycle 2 on Farm 2 remained Campylobacter-free. These outcomes suggest that litter treatments did not directly influence the time of emergence or the levels of Campylobacter and E. coli during commercial farming.


Abstract:

The Queensland (QLD) fishery for spanner crabs primarily lands live crab for export overseas, with gross landings valued at around A$5 million per year. Quota-setting rules are used to assess and adjust the total allowable harvest (quota) around an agreed target harvest of 1631 t, capped at a maximum of 2000 t. The quota varies based on catch rate indicators from the commercial fishery and a fishery-independent survey. Quota management applies only to ‘Managed Area A’, which includes waters between Rockhampton and the New South Wales (NSW) border. This report has been prepared to inform Fisheries Queensland (Department of Agriculture and Fisheries) and stakeholders of catch trends and the estimated quota of spanner crabs in Managed Area A for the forthcoming annual quota periods (1 June 2016–31 May 2018). The quota calculations followed the methodology developed by the crab fishery Scientific Advisory Group (SAG) between November 2007 and March 2008. The QLD total reported spanner crab harvest was 1170 t for the 2015 calendar year. In 2015, a total of 55 vessels were active in the QLD fishery, down from 262 vessels at the fishery’s peak activity in 1994. Recent spanner crab harvests from NSW waters have averaged about 125 t per year, but fell to 80 t in 2014–2015. The spanner crab Managed Area A commercial standardised catch rate averaged 0.818 kg per net-lift in 2015, 22.5% below the target level of 1.043. Compared to 2014, mean catch rates in 2015 were marginally improved south of Fraser Island. The NSW–QLD survey catch rate in 2015 was 20.541 crabs per ground-line, 33% above the target level of 13.972. This represented an increase in survey catch rates of about four crabs per ground-line, compared to the 2014 survey. The QLD spanner crab total allowable harvest (quota) was set at 1923 t in the 2012-13 and 2013-14 fishing years, 1777 t in 2014-15 and 1631 t in 2015-16. The results from the current analysis indicate that the quota for the next two fishing years should be retained at the base quota of 1631 t.


Abstract:

The Australian fishery for spanner crabs is the largest in the world, with the larger Queensland (QLD) sector’s landings primarily exported live overseas and a gross value of production (GVP) of approximately A$5 million per year. Spanner crabs are unique in that they may live up to 15 years, significantly longer than blue swimmer crabs (Portunus armatus) and mud crabs (Scylla serrata), the two other important crab species caught in Queensland. Spanner crabs are caught using a flat net called a dilly, on which the crabs become entangled by their swimming legs. Quota-setting rules are used to assess and adjust the total allowable harvest (quota) around an agreed target harvest of 1631 t, capped at a maximum of 2000 t. The quota varies based on catch rate indicators from the commercial fishery and a fishery-independent survey from the previous two years, compared to target reference points. Quota management applies only to ‘Managed Area A’, which includes waters between Rockhampton and the New South Wales (NSW) border. This report has been prepared to inform Fisheries Queensland (Department of Agriculture and Fisheries) and stakeholders of catch trends and the estimated quota of spanner crabs in Managed Area A for the forthcoming quota period (1 June 2015–31 May 2016). The quota calculations followed the methodology developed by the crab fishery Scientific Advisory Group (SAG) between November 2007 and March 2008. The total reported spanner crab harvest was 917 t for the 2014 calendar year, almost all of which was taken from Managed Area A. In 2014, a total of 59 vessels were active in the QLD fishery, the lowest number since the 1994 peak of 262 vessels. Recent spanner crab harvests from NSW waters have been about 125 t per year. The spanner crab Managed Area A commercial standardised catch rate averaged 0.739 kg per net-lift in 2014, 24% below the target level of 1.043. Mean catch rates declined in the commercial fishery in 2014, with the magnitude of the decrease highest in the area north of Fraser Island. The NSW–QLD survey catch rate in 2014 was 16.849 crabs per ground-line, 22% above the target level of 13.972. This represented a decrease in survey catch rates of 0.366 crabs per ground-line, compared to the 2013 survey. The Queensland spanner crab total allowable harvest (quota) was set at 1923 t in 2012 and 2013. In 2014, the quota was calculated at the base level of 1631 t. However, given that the 2012 fishery-independent survey was not undertaken for financial reasons, stakeholders proposed that the total allowable commercial catch (TACC) be decreased to 1777 t, a level halfway between the 2012/13 quota of 1923 t and the recommended base quota of 1631 t. The results from the current analysis indicate that the quota for the 2015-2016 financial year should be decreased from 1777 t to the base quota of 1631 t.


Abstract:

The usual task in music information retrieval (MIR) is to find occurrences of a monophonic query pattern within a music database, which can contain both monophonic and polyphonic content. The so-called query-by-humming systems are a famous instance of content-based MIR. In such a system, the user's hummed query is converted into symbolic form to perform search operations in a similarly encoded database. The symbolic representation (e.g., textual, MIDI or vector data) is typically a quantized and simplified version of the sampled audio data, yielding faster search algorithms and space requirements that can be met in real-life situations. In this thesis, we investigate geometric approaches to MIR. We first study some musicological properties often needed in MIR algorithms, and then give a literature review of traditional (e.g., string-matching-based) MIR algorithms and novel techniques based on geometry. We also introduce some concepts from digital image processing, namely mathematical morphology, which we use to develop and implement four algorithms for geometric music retrieval. The symbolic representation in the case of our algorithms is a binary 2-D image. We use various morphological pre- and post-processing operations on the query and the database images to perform template matching / pattern recognition on the images. The algorithms are basically extensions of classic image correlation and hit-or-miss transformation techniques used widely in template matching applications. They aim to be a future extension to the retrieval engine of C-BRAHMS, which is a research project of the Department of Computer Science at the University of Helsinki.
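To indicate the flavour of the correlation-based matching described above, here is a minimal sketch that finds exact occurrences of a binary query pattern inside a binary 2-D (pitch-by-time) database image. The piano-roll encoding, array sizes and variable names are illustrative assumptions; the thesis algorithms add morphological pre- and post-processing and partial-match scoring on top of this kind of operation.

```python
import numpy as np
from scipy.signal import correlate2d

def exact_matches(database_img, query_img):
    """Positions (top-left corners) where every 'on' pixel of the binary query
    pattern is also 'on' in the binary database image (erosion-style matching)."""
    db = np.asarray(database_img, dtype=int)
    q = np.asarray(query_img, dtype=int)
    # Cross-correlation counts, for each placement, how many query pixels are matched.
    score = correlate2d(db, q, mode="valid")
    return np.argwhere(score == q.sum())

# Tiny illustrative piano-roll-like images (rows = pitch, columns = time steps).
database = np.zeros((12, 16), dtype=int)
database[[3, 5, 7], [2, 3, 4]] = 1          # an ascending three-note figure
query = np.zeros((5, 3), dtype=int)
query[[0, 2, 4], [0, 1, 2]] = 1             # the same figure as a query pattern
print(exact_matches(database, query))        # e.g. [[3 2]]
```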