Abstract:
This paper presents a deterministic modelling approach to predicting diffraction loss for an innovative Multi-User-Single-Antenna (MUSA) MIMO technology proposed for rural Australian environments. To calculate diffraction loss, six receivers were considered around an access point in a selected rural environment. Generated terrain profiles for the six receivers are presented in this paper. Simulation results using classical diffraction models and diffraction theory, accounting for rural Australian terrain data, are also presented. Results show that in an area of 900 m by 900 m surrounding the receivers, path loss due to diffraction can range between 5 dB and 35 dB. Diffraction loss maps can help determine the optimal location for receivers of MUSA-MIMO systems in rural areas.
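The abstract does not name the specific diffraction models used; a common classical choice for terrain obstructions is the single knife-edge model. Below is a minimal Python sketch (an illustration, not the authors' code) of the Fresnel-Kirchhoff diffraction parameter and the ITU-R P.526 loss approximation:

```python
import numpy as np

def knife_edge_loss_db(h, d1, d2, wavelength):
    """Approximate single knife-edge diffraction loss (ITU-R P.526 style).

    h          -- obstacle height above the transmitter-receiver line of sight (m)
    d1, d2     -- distances from transmitter and receiver to the obstacle (m)
    wavelength -- carrier wavelength (m)
    """
    # Fresnel-Kirchhoff diffraction parameter
    v = h * np.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))
    if v <= -0.78:
        return 0.0  # obstacle well below the path: negligible diffraction loss
    # ITU-R P.526 approximation, valid for v > -0.78
    return 6.9 + 20.0 * np.log10(np.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Hypothetical example: a 10 m obstruction midway along a 1.8 km link at 2.4 GHz
wavelength = 3e8 / 2.4e9
print(knife_edge_loss_db(h=10.0, d1=900.0, d2=900.0, wavelength=wavelength))
```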
Abstract:
In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited by the acoustic emission (AE) technique in a wide spectrum of areas, ranging from nondestructive testing for the detection of material defects to the monitoring of microseismic activity. The AE technique is also used for defect source identification and failure detection. AE waves consist of P waves (primary/longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. Because AE waves propagate in various modes, determining the source location is difficult. To use the acoustic emission technique for accurate source identification, an understanding of how AE signals propagate to various locations in a plate structure is essential. Such an understanding can also assist in placing sensors for optimum detection of AE signals and in characterising the source. In practice, AE signals radiate from the source as stress waves, and unless the type of stress wave is known it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves to identify the source location and its characteristics in a steel plate, as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in simulating different stress waves.
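Once the dominant wave mode, and hence its propagation velocity, has been identified, the classical arrival-time equations can recover the source location. A minimal 2-D sketch (hypothetical sensor layout, timings and wave speed, not the paper's FEA model), solving the time-of-arrival equations by least squares:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sensor positions (m) on a plate and measured arrival times (s);
# these values correspond to a source near (0.4, 0.4) emitting at t0 = 1e-5 s.
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
t_arr = np.array([1.23e-4, 9.25e-5, 9.25e-5, 3.83e-5])
c = 5000.0  # assumed velocity of the identified wave mode (m/s)

def residuals(params):
    x, y, t0 = params
    dist = np.linalg.norm(sensors - [x, y], axis=1)
    return dist - c * (t_arr - t0)  # zero when source position and time fit all sensors

# Initial guess: centre of the sensor array, emission time zero
sol = least_squares(residuals, x0=[0.25, 0.25, 0.0])
x, y, t0 = sol.x
print(f"Estimated source: ({x:.3f}, {y:.3f}) m, emitted at t0 = {t0:.2e} s")
```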
Abstract:
Growing evidence suggests that a novel member of the order Chlamydiales, Waddlia chondrophila, is a potential agent of miscarriage in humans and abortion in ruminants. Due to the lack of genetic tools to manipulate chlamydiae, genomic analysis is proving to be the most incisive tool for stimulating investigations into the biology of these obligate intracellular bacteria. 454/Roche and Solexa/Illumina technologies were thus used to sequence and assemble de novo the full genome of the first representative of the Waddliaceae family, W. chondrophila. The bacterium possesses a 2,116,312 bp chromosome and a 15,593 bp low-copy-number plasmid that might integrate into the bacterial chromosome. The Waddlia genome displays numerous repeated sequences, indicating genome dynamics different from those of classical chlamydiae, which almost completely lack repetitive elements. Moreover, W. chondrophila exhibits many virulence factors also present in classical chlamydiae, including a functional type III secretion system, as well as a large complement of specific factors for resistance to host or environmental stresses. Large families of outer membrane proteins were identified, indicating that these highly immunogenic proteins are not Chlamydiaceae-specific and might have been present in their last common ancestor. Enhanced metabolic capability for the synthesis of nucleotides, amino acids, lipids and other co-factors suggests that the common ancestor of the modern Chlamydiales may have been less dependent on its eukaryotic host. The fine-detailed analysis of biosynthetic pathways brings us closer to possibly developing a synthetic medium to grow W. chondrophila, a critical step in the development of genetic tools. As a whole, the availability of the W. chondrophila genome opens new possibilities in Chlamydiales research, providing new insights into the evolution of members of the order Chlamydiales and the biology of the Waddliaceae.
Abstract:
Reset/inhibitor nets are Petri nets extended with reset arcs and inhibitor arcs. These extensions can be used to model cancellation and blocking. A reset arc allows a transition to remove all tokens from a certain place when the transition fires. An inhibitor arc can stop a transition from being enabled if the place contains one or more tokens. While reset/inhibitor nets increase the expressive power of Petri nets, they also result in increased complexity of analysis techniques. One way of speeding up Petri net analysis is to apply reduction rules. Unfortunately, many of the rules defined for classical Petri nets do not hold in the presence of reset and/or inhibitor arcs. Moreover, new rules can be added. This is the first paper systematically presenting a comprehensive set of reduction rules for reset/inhibitor nets. These rules are liveness and boundedness preserving and are able to dramatically reduce models and their state spaces. It can be observed that most of the modeling languages used in practice have features related to cancellation and blocking. Therefore, this work is highly relevant for all kinds of application areas where analysis is currently intractable.
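The reset and inhibitor semantics described above are compact enough to state in code. A minimal Python sketch (an illustrative encoding, not the paper's formalism) of the enabling and firing rules:

```python
# Minimal reset/inhibitor net: places hold token counts, a transition carries arcs.
marking = {"p1": 2, "p2": 0, "p3": 1}

t = {
    "consume": {"p1": 1},  # ordinary input arcs: tokens required per place
    "produce": {"p2": 1},  # output arcs: tokens added per place
    "inhibit": ["p2"],     # inhibitor arcs: place must be empty to enable
    "reset":   ["p3"],     # reset arcs: emptied on firing, never block enabling
}

def enabled(t, m):
    # Ordinary arcs need enough tokens; inhibitor arcs need empty places.
    return (all(m[p] >= n for p, n in t["consume"].items())
            and all(m[p] == 0 for p in t["inhibit"]))

def fire(t, m):
    m = dict(m)
    for p, n in t["consume"].items():
        m[p] -= n
    for p in t["reset"]:
        m[p] = 0  # reset arc: remove ALL tokens, however many are present
    for p, n in t["produce"].items():
        m[p] += n
    return m

if enabled(t, marking):
    print(fire(t, marking))  # {'p1': 1, 'p2': 1, 'p3': 0}
```

Note that the reset arc plays no role in enabling; it only affects the result of firing, which is precisely what breaks many classical reduction rules.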
Abstract:
This paper analyses the Australian Values Education Program (VEP) within the framework of late-classical political economy. Using analytical methods from systemic functional linguistics and critical discourse analysis, we demonstrate that the VEP is an unwitting restatement of the principles of ideology as developed by the likes of Destutt de Tracy and the Young Hegelians. We conclude that the sudden shock of globalisation, and the post-national cultures it has entailed, is in many ways similar to the shock of formal nationalism that emerged in the late seventeenth and early eighteenth centuries. The overall result of the VEP for the Australian school system is a massive procedural burden that is unlikely to produce the results at which the program is aimed.
Abstract:
In this paper, I show how new spaces are being prefigured for colonisation in the language of contemporary technology policy. Drawing on a corpus of 1.3 million words collected from technology policy centres throughout the world, I show the role of policy language in creating the foundations of an emergent form of political economy. The analysis is informed by principles from critical discourse analysis (CDA) and classical political economy. It foregrounds a functional aspect of language called process metaphor to show how aspects of human activity are prefigured for mass commodification by the manipulation of irrealis spaces. I also show how the fundamental element of any new political economy, the property element, is being largely ignored. The potential creation of a global space as concrete as landed property – electromagnetic spectrum – has significant ramifications for the future of social relations in any global “knowledge economy”.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space – classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm – using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
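The full transductive SDP is lengthy; a widely cited special case learns the kernel matrix as a linear combination of fixed base kernels, maximizing alignment with the training labels subject to positive semidefiniteness and a trace constraint. A minimal sketch with cvxpy (hypothetical data and base kernels, not the paper's complete program):

```python
import numpy as np
import cvxpy as cp

# Hypothetical training data and three base Gram matrices
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
sym = lambda M: 0.5 * (M + M.T)  # guard against floating-point asymmetry
kernels = [
    sym(X @ X.T),                                               # linear
    sym(np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))),  # Gaussian
    sym((1.0 + X @ X.T) ** 2),                                  # polynomial
]
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

mu = cp.Variable(len(kernels))
K = sum(mu[i] * kernels[i] for i in range(len(kernels)))  # linear matrix expression
# Maximize alignment with the ideal kernel y y^T; keep K PSD with fixed trace
prob = cp.Problem(cp.Maximize(cp.trace(K @ np.outer(y, y))),
                  [K >> 0, cp.trace(K) == len(y)])
prob.solve()
print("learned kernel weights:", mu.value)
```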
Abstract:
This volume brings together a number of essays that seek to explore the nature of early modern scholarship, ostensibly with special regard to the themes of interdisciplinarity and collaboration. As one might expect, the essays thus cover a gamut of topics – political manoeuvring, philosophical debates, gift-giving and dramatic performance – and each study is important and useful in its own right. As a whole, however, this collection serves more as a starting point for an exploration of its themes than as an authoritative overview of the subject at hand.
Abstract:
Follicle classification is an important aid to the understanding of follicular development and atresia. Some bovine primordial follicles have the classical primordial shape, but ellipsoidal follicles with some cuboidal granulosa cells at the poles are far more common. Preantral follicles have one of two basal lamina phenotypes, either a single aligned layer or one with additional layers. In antral follicles <5 mm in diameter, half of the healthy follicles have columnar basal granulosa cells and additional layers of basal lamina, which appear as loops in cross-section (‘loopy’). The remainder have aligned, single-layered follicular basal laminas with rounded basal cells, and contain better quality oocytes than the loopy/columnar follicles. In sizes >5 mm, only aligned/rounded phenotypes are present. Dominant and subordinate follicles can be identified by ultrasound and/or histological examination of pairs of ovaries. Atretic follicles <5 mm are either basal atretic or antral atretic, named on the basis of the location in the membrana granulosa where cells die first. Basal atretic follicles differ considerably in their biology from antral atretic follicles. In follicles >5 mm, only antral atresia is observed. The concentrations of follicular fluid steroid hormones can be used to classify atresia and distinguish some of the different types of atresia; however, this method is unlikely to identify follicles early in atresia, and hence may misclassify them as healthy. Other biochemical and histological methods can be used, but since cell death is a part of normal homoeostasis, deciding when a follicle has entered atresia remains somewhat subjective.
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points of time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
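The abstract does not detail the CCGA's encoding; the defining idea of cooperative coevolution is to evolve the resource allocation and the schedule as two populations whose individuals are evaluated only jointly. A toy Python skeleton (hypothetical encodings and QoS fitness, purely illustrative):

```python
import random

random.seed(1)
N_TASKS, N_RESOURCES, POP, GENS = 8, 4, 20, 50

def random_allocation():  # task -> resource assignment
    return [random.randrange(N_RESOURCES) for _ in range(N_TASKS)]

def random_schedule():    # priority ordering of the tasks
    order = list(range(N_TASKS)); random.shuffle(order); return order

def fitness(alloc, sched):
    # Toy QoS score: cheaper resources and a shorter makespan are better.
    # A real implementation would simulate execution against the deadline.
    cost = sum(1.0 + r for r in alloc)  # pretend higher resource ids cost more
    makespan = max(sched.index(t) / (1 + alloc[t]) for t in range(N_TASKS))
    return -(cost + 10.0 * makespan)

allocs = [random_allocation() for _ in range(POP)]
scheds = [random_schedule() for _ in range(POP)]
for _ in range(GENS):
    # Evaluate each population against the best partner from the other one
    best_a = max(allocs, key=lambda a: fitness(a, scheds[0]))
    best_s = max(scheds, key=lambda s: fitness(best_a, s))
    # The populations evolve separately, coupled only through joint fitness
    allocs = [best_a] + [[g if random.random() > 0.2 else
                          random.randrange(N_RESOURCES) for g in best_a]
                         for _ in range(POP - 1)]
    scheds = [best_s] + [random_schedule() for _ in range(POP - 1)]
print("best fitness:", fitness(best_a, best_s))
```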
Abstract:
A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition that takes into account that learning within a small model can be much easier than within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not demonstrate strong margin adaptivity.
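For reference, one common statement of the Mammen-Tsybakov margin condition in binary classification is given below (the paper's exact variant may differ):

```latex
% One common form of the Mammen--Tsybakov margin condition, with
% regression function \eta(x) = \mathbb{P}(Y = 1 \mid X = x):
% there exist C > 0 and \alpha \ge 0 such that
\mathbb{P}\bigl( 0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr)
  \le C\, t^{\alpha}
  \qquad \text{for all } t > 0.
```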
Abstract:
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined; these are predominantly coarse or blob-like. The set of features that characterise particulate textures is different from that which characterises classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to the more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed which can be adapted to the grey level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis. Results from the application of the developed algorithm to road surface macrotexture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination (R²) exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
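The dissertation's edge-based granulometry is not reproduced here; for orientation, the classical morphological granulometry it builds on estimates a particle-size distribution from the image mass removed by openings at increasing scales. A short scipy sketch on a synthetic image (illustrative only):

```python
import numpy as np
from scipy import ndimage

# Synthetic particulate image: a smoothed random field of bright blobs
rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((128, 128)), sigma=3)

def pattern_spectrum(image, max_radius=10):
    """Classical morphological granulometry: the mass removed by openings
    with square windows of increasing size approximates the size distribution."""
    masses = []
    for r in range(max_radius + 1):
        size = 2 * r + 1
        opened = ndimage.grey_opening(image, size=(size, size))
        masses.append(opened.sum())
    # Negative differences of the opening mass form the pattern spectrum
    return -np.diff(masses)

spectrum = pattern_spectrum(img)
dominant_radius = int(np.argmax(spectrum)) + 1
print("dominant particle radius ~", dominant_radius, "px")
```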