960 results for Syntactic Projection
Abstract:
The United States is the world's single biggest market area, where the demand for graphic papers has increased by 80 % during the last three decades. However, during the last two decades there have been very big, unpredictable changes in the graphic paper markets. For example, the consumption of newsprint started to decline in the late 1980s, which was surprising in light of historical consumption and projections, and it has declined ever since. The aim of this study was to see how magazine paper consumption will develop in the United States until 2030. The long-term consumption projection was made using mainly two methods. The first was trend analysis, to see how and whether consumption has changed since 1980. The second was a qualitative estimate. These estimates are then compared to the so-called classical model projections that are usually mentioned and used in the forestry literature. The purpose of the qualitative analysis is to study magazine paper end-use purposes and to analyze how, and with what intensity, changes in society will affect magazine paper consumption in the long term. The framework of this study covers theories such as technology adoption, electronic substitution, electronic publishing and Porter's threat of substitution. Because this study deals with markets that have shown signs of structural change, a very substantial part of it covers recent developments and the newest available studies and statistics. The following were among the key findings. Different end-uses face very different futures: electronic substitution is very likely in some end-use purposes, but not in all. Young people, i.e. future consumers, have very different manners, habits and technological opportunities than their parents did, and these will have substantial effects on magazine paper consumption in the long term. The study concludes that the change in magazine paper consumption is more likely to be gradual (evolutionary) than a sudden collapse (revolutionary). It is also probable that the years of fast-growing magazine paper consumption are behind us. Beyond the decelerating growth, the consumption of magazine papers will decline slowly in the long term; the further into the future the projection extends, the faster the decline.
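The first method lends itself to a compact illustration. Below is a minimal Python sketch of trend analysis and extrapolation to 2030; the consumption series is invented for illustration and is not the study's data.

```python
# Minimal trend-analysis sketch: fit a least-squares linear trend to annual
# consumption and extrapolate it to 2030. The series is illustrative only.
import numpy as np

years = np.arange(1980, 2011)
# invented magazine-paper consumption figures (million tonnes), NOT study data
consumption = 5.0 + 0.08 * (years - 1980) - 0.004 * (years - 1980) ** 2

slope, intercept = np.polyfit(years, consumption, 1)  # linear trend
projection_2030 = slope * 2030 + intercept
print(f"trend: {slope:+.3f} Mt/year, projected 2030 consumption: {projection_2030:.2f} Mt")
```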
Abstract:
Climate projections over the next two to four decades indicate that most of Australia’s wheat-belt is likely to become warmer and drier. Here we used a shire-scale, dynamic stress-index model that accounts for the impacts of rainfall and temperature on wheat yield, and a range of climate change projections from global circulation models, to spatially estimate yield changes assuming no adaptation and no CO2 fertilisation effects. We modelled five scenarios: a baseline climate (climatology, 1901–2007), and two emission scenarios (“low” and “high” CO2) for two time horizons, namely 2020 and 2050. The potential benefits from CO2 fertilisation were analysed separately using a point-level functional simulation model. Irrespective of the emissions scenario, the 2020 projection showed negligible changes in the modelled yield relative to the baseline climate, using either the shire-scale or the functional point-scale model. For the 2050 high-emissions scenario, changes in modelled yield relative to the baseline ranged from −5 % to +6 % across most of Western Australia, parts of Victoria and southern New South Wales, and from −5 % to −30 % in northern NSW, Queensland and the drier environments of Victoria, South Australia and inland Western Australia. Taking into account CO2 fertilisation effects across a north–south transect through eastern Australia cancelled most of the yield reductions associated with increased temperatures and reduced rainfall by 2020, and attenuated the expected yield reductions by 2050.
Abstract:
The paper evaluates the effect of future climate change (as per the CSIRO Mk3.5 A1FI future climate projection) on cotton yield in Southern Queensland and Northern NSW, eastern Australia, using the biophysical simulation model APSIM (Agricultural Production Systems sIMulator). The simulations of cotton production show that changes in the influential meteorological parameters caused by climate change would lead to decreased future cotton yields without the effect of CO2 fertilisation: by 2050 the yields would decrease by 17 %. Including the effects of CO2 fertilisation ameliorates the effect of decreased water availability, and yields increase by 5.9 % by 2030 but then decrease by 3.6 % by 2050. Importantly, it was necessary to increase irrigation amounts by almost 50 % to maintain adequate soil moisture levels. The effect of CO2 was found to have an important positive impact on the yield in spite of deleterious climate change. This implies that the physiological response of plants to climate change needs to be thoroughly understood to avoid making erroneous projections of yield and potentially stifling investment or increasing risk.
Abstract:
This paper describes the application of vector spaces over Galois fields to obtain a formal description of a picture in the form of a very compact, non-redundant, unique syntactic code. Two different methods of encoding are described. Both methods consist in identifying the given picture as a matrix (called the picture matrix) over a finite field. In the first method, the eigenvalues and eigenvectors of this matrix are obtained. The eigenvector expansion theorem is then used to reconstruct the original matrix. If several of the eigenvalues happen to be zero, this scheme results in a considerable compression. In the second method, the picture matrix is reduced to a primitive diagonal form (Hermite canonical form) by elementary row and column transformations. These sequences of elementary transformations constitute a unique and unambiguous syntactic code, called the Hermite code, for reconstructing the picture from the primitive diagonal matrix. A good compression of the picture results if the rank of the matrix is considerably lower than its order. An important aspect of this code is that it preserves the neighbourhood relations in the picture, and the primitive diagonal form remains invariant under translation, rotation, reflection, enlargement and replication. It is also possible to derive the codes for these transformed pictures from the Hermite code of the original picture by simple algebraic manipulation. This code will find extensive applications in picture compression, storage, retrieval and transmission, and in designing pattern recognition and artificial intelligence systems.
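The second method admits a compact sketch. The following Python code is a hedged illustration (not the paper's implementation): it reduces a binary picture matrix over GF(2) to the primitive diagonal form diag(I_r, 0) by elementary row and column operations, records the operation sequence as the code, and reconstructs the picture by replaying the sequence in reverse; the function names and the operation encoding are my own.

```python
# Sketch of Hermite-style encoding over GF(2): the recorded elementary
# operations form the 'code'; storing them plus the rank replaces the matrix.
import numpy as np

def hermite_code_gf2(picture):
    """Reduce a 0/1 picture matrix to diag(I_r, 0) over GF(2), recording ops."""
    A = (picture % 2).astype(np.uint8)
    m, n = A.shape
    ops, pivots, r = [], [], 0
    for c in range(n):                                  # row phase: Gauss-Jordan to RREF
        piv = next((k for k in range(r, m) if A[k, c]), None)
        if piv is None:
            continue
        if piv != r:
            A[[r, piv]] = A[[piv, r]]; ops.append(("rswap", r, piv))
        for k in range(m):
            if k != r and A[k, c]:                      # clear column c elsewhere
                A[k] ^= A[r]; ops.append(("radd", r, k))
        pivots.append(c); r += 1
    for j in range(n):                                  # column phase: zero non-pivot columns
        if j not in pivots:
            for i in range(r):
                if A[i, j]:                             # pivot column pivots[i] equals e_i
                    A[:, j] ^= A[:, pivots[i]]; ops.append(("cadd", pivots[i], j))
    for i, c in enumerate(pivots):                      # move pivots onto the diagonal
        if c != i:
            A[:, [i, c]] = A[:, [c, i]]; ops.append(("cswap", i, c))
    return A, ops                                       # A == diag(I_r, 0)

def decode(D, ops):
    """Rebuild the picture: every GF(2) elementary op is its own inverse,
    so replaying the recorded sequence in reverse order undoes the reduction."""
    A = D.copy()
    for kind, a, b in reversed(ops):
        if kind == "rswap":   A[[a, b]] = A[[b, a]]
        elif kind == "radd":  A[b] ^= A[a]
        elif kind == "cswap": A[:, [a, b]] = A[:, [b, a]]
        else:                 A[:, b] ^= A[:, a]        # "cadd"
    return A

picture = (np.random.default_rng(0).random((8, 8)) < 0.4).astype(np.uint8)
D, ops = hermite_code_gf2(picture)
assert np.array_equal(decode(D, ops), picture)          # lossless reconstruction
```

A low-rank picture matrix yields a short operation list, which is where the compression claimed above comes from.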
Abstract:
The topic of this dissertation is the geometric and isometric theory of Banach spaces. The work is motivated by the well-known Banach–Mazur rotation problem, which asks whether every transitive separable Banach space is isometrically a Hilbert space. A Banach space X is said to be transitive if the isometry group of X acts transitively on the unit sphere of X. In fact, some symmetry conditions weaker than transitivity are studied in the dissertation. One such condition is an almost isometric version of transitivity. Another investigated condition is convex-transitivity, which requires that the closed convex hull of the orbit of any point of the unit sphere under the rotation group is the whole unit ball. Following the tradition developed around the rotation problem, some contemporary problems are studied. Namely, we attempt to characterize Hilbert spaces by using convex-transitivity together with the existence of a 1-dimensional bicontractive projection on the space, and some mild geometric assumptions. The convex-transitivity of some vector-valued function spaces is studied as well. The thesis also touches on the convex-transitivity of Banach lattices and related geometric cases.
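For reference, the symmetry conditions named above have compact standard formulations. Writing $\mathcal{G}_X$ for the isometry group of $X$, $S_X$ for its unit sphere and $B_X$ for its closed unit ball:

```latex
\begin{align*}
  X \text{ is transitive:}        &\quad \forall\, x, y \in S_X \;\exists\, T \in \mathcal{G}_X :\; Tx = y,\\
  X \text{ is almost transitive:} &\quad \overline{\mathcal{G}_X x} = S_X \quad \text{for every } x \in S_X,\\
  X \text{ is convex-transitive:} &\quad \overline{\mathrm{conv}}\,(\mathcal{G}_X x) = B_X \quad \text{for every } x \in S_X.
\end{align*}
```

Each condition is weaker than the one before it, which is what makes convex-transitivity a natural setting for the characterization problems studied here.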
Abstract:
The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane have an atomic decomposition. However, a decomposition resulting from a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain it has been mapped into. The lattice of points at which the atoms of the decomposition are evaluated usually follows the geometry of the original domain, but after mapping the domain into another, this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain, but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex spaces LV-infinity of measurable functions and HV-infinity of analytic functions on the unit disk, the latter being a closed subspace of the former. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, the space HV-infinity is the smallest possible substitute for the space H-infinity of analytic functions. In the second article we extend the above result to a smoothly bounded strictly pseudoconvex domain. Here the related reproducing kernels are usually not known explicitly, and thus the proof of continuity of the Bergman projection is based on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown by using peaking functions first constructed by Bell (1981). Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. This result is generalised in the third article by constructing an atomic decomposition for the space HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be represented as a linear combination of atoms such that the coefficient sequence belongs to a suitable Köthe co-echelon space.
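For orientation, the Coifman–Rochberg decomposition on the standard weighted Bergman space $A^p_\alpha$ of the unit disk has the following shape (a sketch of the classical statement; for suitable $b$ and a sufficiently dense lattice $\{a_k\}$ in the disk):

```latex
f(z) \;=\; \sum_{k=1}^{\infty} c_k\,
  \frac{\bigl(1 - |a_k|^2\bigr)^{(pb - 2 - \alpha)/p}}
       {\bigl(1 - \overline{a_k}\, z\bigr)^{b}},
\qquad (c_k)_k \in \ell^p,
```

with the exponent chosen so that each atom has norm comparable to one. The first article builds an analogue of this directly from the geometry of a regulated domain, with the lattice $\{a_k\}$ adapted to that geometry instead of being pushed forward by a Riemann map.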
Abstract:
Syntheses and structural characterization of Ni(II) chelates of a new series of symmetric and unsymmetric tetradentate linear ligands are described. Preparative routes involve either the direct reaction between a metal complex and arenediazonium salts or a simple metal incorporation into the independently synthesized ligands. Recent X-ray structure determination of the 4,9-dimethyl-5,8-diazadodeca-4,8-diene-2,11-dione-3,10-di(4′-methylphenyl)hydrazonatonickel(II) complex reveals the geometry around the Ni(II) to be very close to square planar. The distortion expected from the disposition of bulky aromatic groups on the neighbouring nitrogens is minimized by their projection in opposite directions from the plane. PMP, IR and electronic spectral data for the complexes are in good agreement with this structure.
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which the suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast are processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
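The receptive-field result refers to a family of statistical learning techniques that can be sketched compactly. Below is a minimal, hedged Python sketch of independence maximization via standard ICA (scikit-learn's FastICA) on image patches; it is not the thesis code, and the random array is a placeholder: with real natural images, the learned filters come out localized, oriented and bandpass, i.e., simple-cell-like.

```python
# Sketch: learn filters by maximizing independence (ICA) over image patches.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
images = rng.standard_normal((10, 256, 256))   # placeholder for natural images

patches = []
for _ in range(20000):                         # sample 8x8 patches
    img = images[rng.integers(len(images))]
    y, x = rng.integers(0, 256 - 8, size=2)
    p = img[y:y + 8, x:x + 8].ravel()
    patches.append(p - p.mean())               # remove the DC component
X = np.asarray(patches)

ica = FastICA(n_components=49, random_state=0, max_iter=500)
ica.fit(X)
filters = ica.components_.reshape(-1, 8, 8)    # candidate receptive fields
```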
Abstract:
With the aim of increasing peanut production in Australia, the Australian peanut industry has recently considered growing peanuts in rotation with maize at Katherine in the Northern Territory—a location with a semi-arid tropical climate and surplus irrigation capacity. We used the well-validated APSIM model to examine potential agronomic benefits and long-term risks of this strategy under the current and warmer climates of the new region. Yield of the two crops, irrigation requirement, total soil organic carbon (SOC), nitrogen (N) losses and greenhouse gas (GHG) emissions were simulated. Sixteen climate stressors were used; these were generated by using the global climate models ECHAM5, GFDL2.1, GFDL2.0 and MRIGCM232 with a median sensitivity under two Special Report on Emissions Scenarios over the 2030 and 2050 timeframes, plus the current climate (baseline) for Katherine. Effects were compared at three levels of irrigation and three levels of N fertiliser applied to maize grown in rotations of wet-season peanut and dry-season maize (WPDM), and wet-season maize and dry-season peanut (WMDP). The climate stressors projected average temperature increases of 1°C to 2.8°C in the dry (baseline 24.4°C) and wet (baseline 29.5°C) seasons for the 2030 and 2050 timeframes, respectively. Increased temperature caused a reduction in yield of both crops in both rotations. However, the overall yield advantage of WPDM increased from 41% to up to 53% compared with the industry-preferred sequence of WMDP under the worst climate projection. Increased temperature increased the irrigation requirement by up to 11% in WPDM, but caused a smaller reduction in total SOC accumulation and smaller increases in N losses and GHG emissions compared with WMDP. We conclude that although increased temperature will reduce productivity and total SOC accumulation, and increase N losses and GHG emissions, in Katherine or similar northern Australian environments the WPDM sequence should be preferred over the industry-preferred sequence because of its overall yield and sustainability advantages in warmer climates. Any limitations on irrigation resulting from climate change could, however, limit these advantages.
Abstract:
This thesis examines the mythology in and social reality behind a group of texts from the Nag Hammadi and related literature, to which certain leaders of the early church attached the label, Ophite, i.e., snake people. In the mythology, which essentially draws upon and rewrites the Genesis paradise story, the snake's advice to eat from the tree of knowledge is positive, the creator and his angels are demonic beasts and the true godhead is depicted as an androgynous heavenly projection of Adam and Eve. It will be argued that this unique mythology is attested in certain Coptic texts from the Nag Hammadi and Berlin 8502 Codices (On the Origin of the World, Hypostasis of the Archons, Apocryphon of John, Eugnostos, Sophia of Jesus Christ), as well as in reports by Irenaeus (Adversus Haereses 1.30), Origen (Contra Celsum 6.24-38) and Epiphanius (Panarion 26). It will also be argued that this so-called Ophite evidence is essential for a proper understanding of Sethian Gnosticism, often today considered one of the earliest forms of Gnosticism; there seems to have occurred a Sethianization of Ophite mythology. I propose that we replace the current Sethian Gnostic category by a new one that not only adds texts that draw upon the Ophite mythology alongside these Sethian texts, but also arranges the material in smaller typological units. I also propose we rename this remodelled and expanded Sethian corpus "Classic Gnostic." I have divided the thesis into four parts: (I) Introduction; (II) Myth and Innovation; (III) Ritual; and (IV) Conclusion. In Part I, the sources and previous research on Ophites and Sethians will be examined, and the new Classic Gnostic category will be introduced to provide a framework for the study of the Ophite evidence. Chapters in Part II explore key themes in the mythology of our texts, first by text comparison (to show that certain texts represent the Ophite mythology and that this mythology is different from Sethianism), and then by attempting to unveil social circumstances that may have given rise to such myths. Part III assesses heresiological claims of Ophite rituals, and Part IV is the conclusion.
An FETI-preconditioned conjugate gradient method for large-scale stochastic finite element problems
Abstract:
In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations where the blocks are sparse but coupled. The solution of this algebraic system of equations becomes rapidly challenging when the size of the physical system and/or the level of uncertainty is increased. This paper addresses this challenge by presenting a preconditioned conjugate gradient method for such block systems where the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
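The iterative kernel described above can be sketched generically. The following minimal Python code shows preconditioned conjugate gradients with the preconditioner passed in as an abstract operator; a plain Jacobi (diagonal) solve stands in for the paper's FETI-DP preconditioner with Krylov-subspace reuse, which is far more elaborate.

```python
# Sketch of PCG on an SPD system A x = b; M_solve(r) applies the preconditioner.
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p               # update the search direction
        rz = rz_new
    return x

rng = np.random.default_rng(1)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)                 # SPD stand-in for the coupled block system
b = rng.standard_normal(200)
x = pcg(A, b, lambda r: r / np.diag(A))         # Jacobi stand-in for FETI-DP
assert np.linalg.norm(A @ x - b) <= 1e-6 * np.linalg.norm(b)
```

For multiple and repeated right-hand sides, the paper accelerates this loop by reusing Krylov information across solves; the sketch omits that machinery.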
Abstract:
The problem of unsupervised anomaly detection arises in a wide variety of practical applications. While one-class support vector machines have demonstrated their effectiveness as an anomaly detection technique, their ability to model large datasets is limited by their memory and time complexity for training. To address this issue for supervised learning of kernel machines, there has been growing interest in random projection methods as an alternative to the computationally expensive problems of kernel matrix construction and support vector optimisation. In this paper we leverage the theory of nonlinear random projections and propose the Randomised One-class SVM (R1SVM), an efficient and scalable anomaly detection technique that can be trained on large-scale datasets. Our empirical analysis on several real-life and synthetic datasets shows that our randomised 1SVM algorithm achieves accuracy comparable to or better than deep autoencoders and traditional kernelised approaches for anomaly detection, while being approximately 100 times faster in training and testing.
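The construction admits a compact sketch. Below is a hedged Python illustration of the randomisation idea using scikit-learn: an approximate RBF feature map via nonlinear random projections (random Fourier features) followed by a linear one-class SVM trained in the randomised feature space. Parameters and data are illustrative, not the paper's settings.

```python
# Sketch of the R1SVM idea: random nonlinear features + linear one-class SVM.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.standard_normal((5000, 20))               # 'normal' training data
X_test = np.vstack([rng.standard_normal((100, 20)),     # normal test points
                    rng.standard_normal((10, 20)) + 4]) # shifted anomalies

rff = RBFSampler(gamma=0.1, n_components=256, random_state=0)
Z_train = rff.fit_transform(X_train)                    # randomised feature map

ocsvm = OneClassSVM(kernel="linear", nu=0.05).fit(Z_train)
scores = ocsvm.decision_function(rff.transform(X_test)) # low score => anomaly
```

Training the linear machine on the 256-dimensional random features avoids building the full kernel matrix, which is the source of the reported speed-up.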
Abstract:
Many conventional statistical machine learning algorithms generalise poorly if distribution bias exists in the datasets. For example, distribution bias arises in the context of domain generalisation, where knowledge acquired from multiple source domains needs to be used in previously unseen target domains. We propose Elliptical Summary Randomisation (ESRand), an efficient domain generalisation approach that comprises a randomised kernel and elliptical data summarisation. ESRand learns a domain-interdependent projection to a latent subspace that minimises the existing biases in the data while maintaining the functional relationship between domains. In the latent subspace, ellipsoidal summaries replace the samples to enhance generalisation by further removing bias and noise from the data. Moreover, the summarisation enables large-scale data processing by significantly reducing the size of the data. Through comprehensive analysis, we show that our subspace-based approach outperforms state-of-the-art results on several activity recognition benchmark datasets, while keeping the computational complexity significantly low.
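One plausible reading of the summarisation step can be sketched as follows (illustrative only, not ESRand itself): project each domain through a shared randomised feature map, then collapse its samples into an ellipsoidal summary given by their mean and covariance.

```python
# Hedged sketch: ellipsoidal (mean + covariance) summaries of randomly
# projected domains; an illustrative reading of the abstract, not ESRand.
import numpy as np
from sklearn.kernel_approximation import RBFSampler

def elliptical_summary(X, rff):
    Z = rff.transform(X)             # shared randomised nonlinear projection
    return Z.mean(axis=0), np.cov(Z, rowvar=False)  # ellipsoid centre and shape

rng = np.random.default_rng(0)
rff = RBFSampler(gamma=0.5, n_components=64, random_state=0)
rff.fit(np.zeros((1, 10)))           # fix one feature map for every domain

domains = [rng.standard_normal((1000, 10)) + shift for shift in range(3)]
summaries = [elliptical_summary(X, rff) for X in domains]  # 3 small summaries
```

Each domain of 1000 samples shrinks to one mean vector and one covariance matrix, which is the sense in which summarisation enables large-scale processing.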
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has allowed computed tomography perfusion (CTp) and CT angiography (CTA) to be used in evaluating acute ischemic stroke. This study focuses on new multidetector computed tomography techniques, namely whole-brain and first-pass CT perfusion plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11, R² was 0.73, and differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery and vein, stratified by region-of-interest (ROI) location, in 102 patients within 3 hours of stroke onset. A monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). We also investigated the effect of IV-rtPA on the affected brain by measuring the volume of salvaged tissue in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain than did controls. Carotid CTA was compared with carotid digital subtraction angiography (DSA) in grading stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
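The probability-of-infarction curve lends itself to a small sketch. Below is a hedged Python illustration with synthetic data (the study's actual fitting procedure is not reproduced here): logistic regression of tissue infarction outcomes on normalized pCBV yields a probability-of-infarction curve of the kind described.

```python
# Sketch: fit P(infarction | normalized pCBV) with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pcbv = rng.uniform(0, 150, 2000)                  # % of contralateral pCBV (synthetic)
p_true = 1 / (1 + np.exp(0.15 * (pcbv - 60)))     # low pCBV -> infarction likely
infarcted = rng.random(2000) < p_true             # synthetic tissue outcomes

model = LogisticRegression().fit(pcbv.reshape(-1, 1), infarcted)
grid = np.linspace(0, 150, 151).reshape(-1, 1)
prob = model.predict_proba(grid)[:, 1]            # the probability curve
```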
Abstract:
Objective: We aimed to assess the impact of task demands and individual characteristics on threat detection in baggage screeners. Background: Airport security staff work under time constraints to ensure optimal threat detection. Understanding the impact of individual characteristics and task demands on performance is vital to ensure accurate threat detection. Method: We examined threat detection in baggage screeners as a function of event rate (i.e., number of bags per minute) and time on task across 4 months. We measured performance in terms of the accuracy of detection of Fictitious Threat Items (FTIs) randomly superimposed on X-ray images of real passenger bags. Results: Analyses of the percentage of correct FTI identifications (hits) show that longer shifts with high baggage throughput result in worse threat detection. Importantly, these significant performance decrements emerge within the first 10 min of these busy screening shifts only. Conclusion: Longer shift lengths, especially when combined with high baggage throughput, increase the likelihood that threats go undetected. Application: Shorter shift rotations, although perhaps difficult to implement during busy screening periods, would ensure more consistently high vigilance in baggage screeners and, therefore, optimal threat detection and passenger safety.