875 results for Theory and Algorithms
Abstract:
This paper proposes a preliminary classification of knowledge organization research, divided among epistemology, theory, and methodology, plus three spheres of research: design, study, and critique. The work is situated in a metatheoretical framework drawn from sociological thought. Example works are presented along with their preliminary classification. The classification is then briefly described as a comparison tool that can be used to demonstrate overlap and divergence among cognate discourses of knowledge organization (such as ontology engineering).
Abstract:
Ethos is the spirit that motivates ideas and practices. When we talk casually about the ethos of a town, state, or country, we are describing the fundamental, or at least underlying, rationale for action as we see it. Ideology is a way of looking at things: the set of ideas that constitute one's goals, expectations, and actions. In this brief essay I want to create a space where we might talk about ethos and ideology in knowledge organization from a particular point of view, combining ideas and inspiration from the Arts and Crafts movement of the early twentieth century, critical theory in extant knowledge organization work, the work of Slavoj Žižek, and the work of Thich Nhat Hanh on Engaged Buddhism. I will expand more below, but we can say here and now that there are many open questions about ethos and ideology in and of knowledge organization, both its practice and its products. Many of them in classification, positioned as they are around the identity politics of race, gender, and other marginalized groups, ask the classificationist to be mindful of the choice of terms and the relationships between terms. From this work we understand that race and gender require special consideration, which manifests as a particular concern for the form of representation inside extant schemes. Even with these advances in our understanding, there are still other categories about which we must make decisions and take action. For example, there are ethical decisions about fiduciary resource allocation, political decisions about standards adoption, and even broader zeitgeist considerations, like the question of Fordist conceptions (Day, 2001; Tennis, 2006) of the mechanics of description and representation present in much of today's practice. Just as taking action in a particular way is an ethical concern, so too is avoiding a lack of action. Scholars in knowledge organization have also looked at the absence of what we might call right action in the context of cataloguing and classification. This leads to some of the problems above, and hints at the larger ethical concern of watching a subtle semantic violence go on without intervention (Bowker and Star, 2001; Bade, 2006). The problem is not whether to act or not act, but how to act or not act in an ethical way, or at least with ethical considerations. The action advocated by an ethical consideration for knowledge organization is an engaged one, and it is here that we can take a nod from the contemporary ethical theory advanced by Engaged Buddhism. In this context we can see the manifestation of fourteen precepts that guide ethical action and warn against lack of action.
Abstract:
Fluvial sediment transport is controlled by hydraulics, sediment properties and arrangement, and flow history across a range of time scales. This physical complexity has led to ambiguous definition of the reference frame (Lagrangian or Eulerian) in which sediment transport is analysed. A general Eulerian-Lagrangian approach accounts for the inertial characteristics of particles in a Lagrangian (particle-fixed) frame, and for the hydrodynamics in an independent Eulerian frame. The necessary Eulerian-Lagrangian transformations are simplified under the assumption of an ideal Inertial Measurement Unit (IMU), rigidly attached at the centre of mass of a sediment particle. Real, commercially available IMU sensors can provide high-frequency data on the accelerations and angular velocities (hence forces and energy) experienced by grains during entrainment and motion, if adequately customized. IMUs are subject to significant error accumulation, but they can be used for statistical parametrisation of an Eulerian-Lagrangian model, for coarse sediment particles and over the temporal scale of individual entrainment events. In this thesis an Eulerian-Lagrangian model is introduced and evaluated experimentally. Absolute inertial accelerations were recorded at a 4 Hz frequency from a spherical instrumented particle (111 mm diameter and 2383 kg/m³ density) in a series of entrainment threshold experiments on a fixed idealised bed. The grain-top inertial acceleration entrainment threshold was approximated at 44 and 51 mg for slopes of 0.026 and 0.037, respectively. The saddle inertial acceleration entrainment threshold was at 32 and 25 mg for slopes of 0.044 and 0.057, respectively. For the evaluation of the complete Eulerian-Lagrangian model, two prototype sensors are presented: an idealised (spherical) one with a diameter of 90 mm, and an ellipsoidal one with axes of 100, 70 and 30 mm. Both are instrumented with a complete IMU, capable of sampling 3D inertial accelerations and 3D angular velocities at 50 Hz. After signal analysis, the results can be used to parametrise sediment movement, but they do not contain positional information. The two sensors (spherical and ellipsoidal) were tested in a series of entrainment experiments, similar to the evaluation of the 111 mm prototype, for a slope of 0.02. The spherical sensor entrained at discharges of 24.8 ± 1.8 l/s, while the same threshold for the ellipsoidal sensor was 45.2 ± 2.2 l/s. Kinetic energy calculations were used to quantify the particle-bed energy exchange under fluvial (discharge at 30 l/s) and non-fluvial conditions. All the experiments suggest that the effect of the inertial characteristics of coarse sediments on their motion is comparable to the effect of hydrodynamic forces. The coupling of IMU sensors with advanced telemetric systems can lead to the tracking of Lagrangian particle trajectories at a frequency and accuracy that will permit the testing of diffusion/dispersion models across the range of particle diameters.
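As a rough illustration of the kind of processing such IMU records call for (a minimal sketch, not the thesis code: the thresholding logic and sample data are assumptions, with the 44 mg value borrowed from the abstract):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def entrainment_events(accel_xyz, threshold_mg):
    """Flag samples where the net inertial acceleration exceeds a
    threshold given in milli-g (cf. the abstract's 44 mg / 51 mg values).

    accel_xyz: (N, 3) array of absolute accelerations in m/s^2, as logged
    by an IMU rigidly attached at the particle's centre of mass
    (the ideal-IMU assumption).
    """
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    net = np.abs(magnitude - G)           # remove the static 1 g component
    return net > threshold_mg * 1e-3 * G  # boolean mask of candidate events

# Synthetic 4 Hz record: two samples at rest, then a jolt crossing 44 mg.
samples = np.array([
    [0.00, 0.00, 9.81],
    [0.02, 0.01, 9.83],
    [0.30, 0.10, 10.40],  # |a| - g ≈ 0.60 m/s^2 ≈ 61 mg -> flagged
])
print(entrainment_events(samples, threshold_mg=44.0))  # [False False True]
```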
Abstract:
We present the first results of a study on meson spectroscopy using a covariant formalism based on the Covariant Spectator Theory. Our approach is derived directly in Minkowski space and approximates the Bethe–Salpeter equation by effectively taking into account the contributions from both ladder and crossed-ladder diagrams in the $q\bar{q}$ interaction kernel. A general Lorentz structure of the kernel is tested, and chiral constraints on the kernel are discussed. Results for the pion form factor are also presented.
Abstract:
The present thesis reports on the various research projects to which I contributed during my PhD, working with several research groups, and whose results have been communicated in a number of scientific publications. The main focus of my research activity was to learn, test, exploit and extend the recently developed vdW-DFT (van der Waals corrected Density Functional Theory) methods for computing the structural, vibrational and electronic properties of ordered molecular crystals from first principles. A secondary, and more recent, research activity has been the analysis, with microelectrostatic methods, of Molecular Dynamics (MD) simulations of disordered molecular systems. While only rather unreliable methods based on empirical models were practically usable until a few years ago, accurate calculations of the crystal energy are now possible, thanks to very fast modern computers and to the excellent performance of the best vdW-DFT methods. Accurate energies are particularly important for describing organic molecular solids, since they often exhibit several alternative crystal structures (polymorphs), with very different packing arrangements but very small energy differences. Standard DFT methods do not describe the long-range electron correlations which give rise to the vdW interactions. Although weak, these interactions are extremely sensitive to the packing arrangement, and neglecting them used to be a problem. The calculation of reliable crystal structures and vibrational frequencies has become possible only recently, thanks to the development of good representations of the vdW contribution to the energy (known as "vdW corrections").
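As a rough sketch of what a pairwise "vdW correction" adds on top of a plain DFT energy (a generic Grimme-D2-style damped dispersion sum, not the specific vdW-DFT methods used in the thesis; all parameter values are placeholders):

```python
import numpy as np

def vdw_correction(coords, c6, r0, s6=0.75, d=20.0):
    """Damped pairwise dispersion energy in the D2 spirit:
        E_disp = -s6 * sum_{i<j} C6_ij / r_ij^6 * f_damp(r_ij),
    with f_damp(r) = 1 / (1 + exp(-d * (r / r0_ij - 1))).

    coords: (N, 3) atomic positions; c6, r0: per-atom parameters
    (placeholders here; real schemes tabulate them per element).
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6_ij = np.sqrt(c6[i] * c6[j])  # geometric-mean combination rule
            r0_ij = r0[i] + r0[j]           # sum of vdW radii
            f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0_ij - 1.0)))
            e -= s6 * c6_ij / r**6 * f_damp
    return e
```

The damping function switches the correction off at short range, where ordinary DFT already describes the interaction, which is why neglected long-range dispersion used to bias polymorph energy rankings.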
Abstract:
This dissertation explores the entanglement between the visionary capacity of feminist theory to shape sustainable futures and the active contribution of feminist speculative fiction to the conceptual debate about the climate crisis. Over the last few years, increasing critical attention has been paid to ecofeminist perspectives on climate change, which see the patriarchal domination of nature, considered to go hand in hand with the oppression of women, as a core cause of the climate crisis. What remains to be thoroughly scrutinised is the linkage between ecofeminist theories and other ethical stances capable of countering colonising epistemologies of mastery and dominion over nature. This dissertation intervenes in the debate about the master narrative of the Anthropocene, and about the one-dimensional perspective that often characterises its literary representations, from a feminist perspective that also aims at decolonising the imagination; it looks at literary texts that consider the patriarchal domination of nature in its intersections with other injustices that play out within the Anthropocene, with a particular focus on race, colonialism, and capitalism. After an overview of the linkages between gender and climate change and between feminism and the environmental humanities, it introduces the genre of climate fiction, examining its main tropes. In an attempt to find alternatives to the mainstream narrative of the Anthropocene (namely to its gender-neutrality, colour-blindness, and anthropocentrism), it focuses on contemporary works of speculative fiction by four Anglophone women authors that particularly address the inequitable impacts of climate change experienced not only by women, but also by sexualised, racialised, and naturalised Others. These texts were chosen because of their specific engagement with the relationship between climate change, global capitalism, and a blind trust in techno-fixes on the one hand, and the structural inequalities generated by patriarchy, racism, and intersecting systems of oppression on the other.
Abstract:
This thesis deals with the efficient solution of optimization problems of practical interest. The first part of the thesis deals with bin packing problems. The bin packing problem (BPP) is one of the oldest and most fundamental combinatorial optimization problems. The bin packing problem and its generalizations arise often in real-world applications, from manufacturing, logistics and the transportation of goods, to scheduling. After an introductory chapter, I will present applications of two of the most natural extensions of bin packing: Chapter 2 will be dedicated to an application of two-dimensional bin packing to a problem of scheduling a set of computational tasks on a computer cluster, while Chapter 3 deals with the generalization of the BPP in three dimensions that arises frequently in logistics and transportation, often complemented with additional constraints on the placement of items and the characteristics of the solution, such as guarantees on the stability of the items (to avoid potential damage to the transported goods), on the distribution of the total weight between the bins, and on compatibility with loading and unloading operations. The second part of the thesis, and in particular Chapter 4, considers the Transmission Expansion Problem (TEP), where an electrical transmission grid must be expanded so as to satisfy future energy demand at minimum cost, while maintaining some guarantees of robustness to potential line failures. These problems are gaining importance in a world where the shift towards renewable energy can impose a significant geographical reallocation of generation capacities, resulting in the necessity of expanding current power transmission grids.
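For concreteness, here is a minimal sketch of the classic first-fit-decreasing heuristic for the one-dimensional BPP from which these generalizations start (an illustration, not one of the thesis's algorithms):

```python
def first_fit_decreasing(items, capacity):
    """Greedy heuristic for 1D bin packing: sort items by decreasing size,
    then put each item into the first bin that still has room, opening a
    new bin when none does. Uses at most ~(11/9)*OPT + O(1) bins."""
    bins = []   # each bin is a list of item sizes
    loads = []  # current total size in each bin
    for item in sorted(items, reverse=True):
        for i, load in enumerate(loads):
            if load + item <= capacity:
                bins[i].append(item)
                loads[i] += item
                break
        else:  # no existing bin fits: open a new one
            bins.append([item])
            loads.append(item)
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# -> [[8, 2], [4, 4, 1, 1]]: two bins
```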
Abstract:
This PhD thesis focuses on studying the classical scattering of massive/massless particles off black holes, and on investigating double copy relations between classical observables in gauge theories and gravity. This is done in the Post-Minkowskian approximation, i.e. a perturbative expansion of observables controlled by the gravitational coupling constant κ = √(32πG_N), with G_N being the Newtonian coupling constant. The investigation is performed using the Worldline Quantum Field Theory (WQFT), which combines a worldline path integral describing the scattering objects with a QFT path integral in the Born approximation, describing the intermediate bosons exchanged in the scattering event by the massive/massless particles. We introduce the WQFT by deriving a relation between the Kosower-Maybee-O'Connell (KMOC) limit of amplitudes and worldline path integrals; we then use it to study the classical Compton amplitude and higher-point amplitudes. We also present a nice application of our formulation to the case of Hard Thermal Loops (HTL), explicitly evaluating hard thermal currents in gauge theory and gravity. Next we move to the investigation of the classical double copy (CDC), which is a powerful tool to generate integrands for classical observables related to the binary inspiral problem in General Relativity. In order to use a Bern-Carrasco-Johansson (BCJ) like prescription directly at the classical level, one has to identify a double copy (DC) kernel encoding the locality structure of the classical amplitude. Such a kernel is evaluated by using a theory in which scalar particles interact through bi-adjoint scalars. We show here how to push the classical double copy forward so as to account for spinning particles, in the framework of the WQFT. Here the quantization procedure on the worldline allows us to fully reconstruct the quantum theory on the gravitational side. Finally, we investigate how to describe the scattering of massless particles off black holes in the WQFT.
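For reference, the weak-field conventions this coupling constant points to, in a normalisation commonly used in the WQFT/PM literature (an assumed convention, not quoted from the thesis):

```latex
% Weak-field expansion of the metric and the PM coupling
% (assumed convention, as commonly normalised in the WQFT literature):
g_{\mu\nu} = \eta_{\mu\nu} + \kappa\, h_{\mu\nu},
\qquad \kappa = \sqrt{32\pi G_N},
% so the n-th Post-Minkowskian correction to a classical observable
% scales as \kappa^{2n} \propto G_N^{\,n}.
```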
Abstract:
This thesis offers an introduction to geometric deep learning. The first part presents the main concepts of graph theory and introduces a diffusion dynamics on graphs, in analogy with the heat equation. Then, starting from the linear classifier, the architectures that led to the conception of graph convolutional networks are introduced. In conclusion, examples of some algorithms used in geometric deep learning are analysed, and an implementation of them on the Cora dataset, a dataset with graph structure, is shown.
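A minimal sketch of the graph diffusion dynamics mentioned above (the heat-equation analogue dx/dt = -Lx on a graph, integrated with explicit Euler steps; the toy graph is invented):

```python
import numpy as np

# Toy undirected graph (4 nodes) given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian, L = D - A

def diffuse(x, L, dt=0.1, steps=50):
    """Explicit-Euler integration of the graph heat equation dx/dt = -L x.
    Node values flow along edges and equalise over time, as heat would."""
    for _ in range(steps):
        x = x - dt * (L @ x)
    return x

x0 = np.array([1.0, 0.0, 0.0, 0.0])  # all "heat" starts at node 0
print(diffuse(x0, L))                # values approach the uniform mean 0.25
```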
Abstract:
Scalar-tensor theories of gravity are a class of theories alternative to general relativity in which the gravitational interaction is described both by the metric and by a scalar field. A characteristic example is Brans-Dicke theory, introduced as an extension of general relativity intended to make it conform with Mach's principle. This thesis presents an analysis of the main aspects of this theory, studying its theoretical foundations and the cosmological model that derives from it, while also highlighting its limits and critical issues; the results of the experiments performed so far to test the foundations and predictions of the model are then presented.
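For reference, the Jordan-frame Brans-Dicke action analysed in the thesis, in a standard textbook normalisation (sign and unit conventions are our assumptions):

```latex
% Jordan-frame Brans-Dicke action (standard form; conventions assumed):
S = \frac{1}{16\pi} \int d^4x \, \sqrt{-g}
    \left( \phi R
         - \frac{\omega}{\phi}\, g^{\mu\nu}\,
           \partial_\mu \phi \, \partial_\nu \phi \right)
  + S_m\!\left[g_{\mu\nu}, \psi_m\right]
% The scalar field phi acts as an inverse gravitational "constant",
% G_eff ~ 1/phi, and omega is the Brans-Dicke parameter; general
% relativity is recovered in the limit omega -> infinity.
```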
Abstract:
With the advent of cheaper and faster DNA sequencing technologies, assembly methods have changed greatly. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing read lengths between 35 and 400 base pairs. Reconstructing an organism's genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short, fixed-length DNA sequences of length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was used to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16 to 20x coverage. We were able to successfully re-assemble entire genomes up to 100,000 nucleotides in length.
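A minimal sketch of the n-gram indexing idea described above (shared fixed-length substrings shortlist candidate read joins instead of comparing all read pairs; the reads and the value of n are invented, and the probabilistic join metric is omitted):

```python
from collections import defaultdict
from itertools import combinations

def ngram_index(reads, n):
    """Map every length-n substring (n-gram) to the set of reads containing it."""
    index = defaultdict(set)
    for rid, read in enumerate(reads):
        for i in range(len(read) - n + 1):
            index[read[i:i + n]].add(rid)
    return index

def candidate_joins(reads, n):
    """Pairs of reads sharing at least one n-gram: putative joins,
    avoiding the exhaustive search over all possible read pairs."""
    index = ngram_index(reads, n)
    pairs = set()
    for rids in index.values():
        pairs.update(combinations(sorted(rids), 2))
    return pairs

reads = ["ACGTACGGA", "ACGGATTCA", "TTCAGGGAC"]
print(candidate_joins(reads, n=4))  # -> {(0, 1), (1, 2)}
```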
Abstract:
As systems for the computer-aided design and production of mechanical parts have developed, there has arisen a need for techniques for the comprehensive description of the desired part, including its 3-D shape. The creation and manipulation of shapes is generally known as geometric modelling. It is desirable that links be established between geometric modellers and machining programs. Currently, unbounded APT and some bounded-geometry systems are widely used in the manufacturing industry for machining operations such as milling, drilling, boring and turning, applied mainly to engineering parts. APT systems, however, are presently only linked to wire-frame drafting systems. The combination of a geometric modeller and APT will provide a powerful manufacturing system for industry, from the initial design right through to part manufacture using NC machines. This thesis describes a recently developed interface (ROMAPT) between a bounded geometry modeller (ROMULUS) and an unbounded NC processor (APT). A new set of theoretical functions and practical algorithms for the computer-aided manufacturing of 3D solid geometric models has been investigated. This work has led to the development of a sophisticated computer program, ROMAPT, which provides a new link between CAD (in the form of the geometric modeller ROMULUS) and CAM (in the form of the APT NC system). ROMAPT has been used to machine some engineering prototypes successfully, both in soft foam material and in aluminium. This demonstrates that the theory and algorithms developed by the author for the computer-aided manufacturing of 3D solid models are both valid and applicable. ROMAPT allows the full potential of a solid geometric modeller (ROMULUS) to be further exploited for NC applications without requiring major investment in a new NC processor. ROMAPT supports output in APT-AC, APT4 and the CAM-I SSRI NC languages.
Abstract:
The Chihuahuan Desert is one of the most biologically diverse ecosystems in the world, but it suffers serious degradation because of changes in fire regimes that result in large catastrophic fires. My study was conducted in the Sierra La Mojonera (SLM) natural protected area in Mexico. The purpose of this study was to implement the use of FARSITE fire modeling as a fire management tool to develop an integrated fire management plan at SLM. Firebreaks proved to detain 100% of wildfire outbreaks. The rosetophilous scrub experienced the fastest rate of fire spread and lowland creosote bush scrub the slowest. March experienced the fastest rate of fire spread, while September experienced the slowest. The results of my study provide a tool for wildfire management through the use of geospatial technologies and, in particular, FARSITE fire modeling in SLM and Mexico.
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has never before been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, with a performance comparison in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing appropriate precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks on 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
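The BR algorithm itself is not available in standard numerical libraries, but the QR-based baseline it is compared against is. A minimal sketch of the shared workflow (reduce a stand-in state matrix to upper Hessenberg form, compute all eigenvalues, then screen for unstable modes), assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))  # stand-in for a power-system state matrix

H = hessenberg(A)                # orthogonal reduction to upper Hessenberg
eigs = np.linalg.eigvals(H)      # QR-based (LAPACK) eigenvalues of H

# Small-signal stability screening: any eigenvalue with positive real
# part indicates an unstable oscillatory or aperiodic mode.
unstable = eigs[eigs.real > 0]
print(np.sort_complex(eigs))
print("unstable modes:", unstable)
```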