935 results for MASS CLASSIFICATION SYSTEMS


Relevance:

30.00%

Publisher:

Abstract:

In the context of the International Society for Knowledge Organization, we often consider knowledge organization systems to comprise catalogues, thesauri, and bibliothecal classification schemes – schemes for library arrangement. In recent years we have added ontologies and folksonomies to our sphere of study. In all of these cases it seems we are concerned with improving access to information. We want a good system.

And much of the literature from the late 19th into the late 20th century took that as its goal – to analyze the world of knowledge and the structures of representing it as its objects of study; again, with the ethos of creating a good system. In most cases this meant we had to be correct in our assertions about the universe of knowledge and the relationships that obtain between its constituent parts. As a result, much of the literature of knowledge organization is prescriptive – instructing designers and professionals how to build or use the schemes correctly, that is, to maximize redundant success in accessing information.

In 2005, there was a turn in some of the knowledge organization literature. It has been called the descriptive turn, in contrast to the otherwise prescriptive efforts of researchers in KO. And it is the descriptive turn that makes me think of context, languages, and cultures in knowledge organization – the theme of this year’s conference.

Work in the descriptive turn questions the basic assumptions about what we want to do when we create, implement, maintain, and evaluate knowledge organization systems. Questioning these assumptions, researchers have examined a wider range of systems and the motivations behind system design. Online websites that allow users to curate their own collections are one such addition, for example Pinterest (cf. Feinberg, 2011). However, researchers have also looked back at other lineages of organizing to compare forms and functions: for example, encyclopedias, catalogues raisonnés, archival description, and the winter counts designed and used by Native Americans.

In the case of online curated collections, Melanie Feinberg has begun to examine what she calls the craft of curation. In this line of research, purpose, voice, and rhetorical stance surface as design considerations. In the case of Pinterest, for example, users are able and encouraged to create boards. The process of putting together these boards is an act of curation in contemporary terminology, and describing this craft is work that comes from the descriptive turn in KO.

In the second case, when researchers in the descriptive turn look back at older and varied examples of knowledge organization systems, we are looking for a full inventory of intent and inspiration for future design. Encyclopedias, catalogues raisonnés, archival description, and works of knowledge organization in other cultures provide a rich world for the descriptive turn, and researchers have availed themselves of it.

Relevance:

30.00%

Publisher:

Abstract:

What theoretical framework can help in building, maintaining, and evaluating networked knowledge organization resources? Specifically, what theoretical framework makes sense of the semantic prowess of ontologies and peer-to-peer systems, and by extension aids in their building, maintenance, and evaluation? I posit that a theoretical work that weds both formal and associative (structural and interpretive) aspects of knowledge organization systems provides that framework. Here I lay out the terms and the intellectual constructs that serve as the foundation for investigative work into experientialist classification theory, a theoretical framework of embodied, infrastructural, and reified knowledge organization. I build on the interpretive work of scholars in information studies, cognitive semantics, sociology, and science studies. With the terms and the framework in place, I then outline classification theory's critiques of classificatory structures. In order to address these critiques with an experientialist approach, an experientialist semantics is offered as a design commitment for an example: metadata in peer-to-peer network knowledge organization structures.

Relevance:

30.00%

Publisher:

Abstract:

Marine protected areas (MPAs) are a global conservation and management tool to enhance the resilience of linked social-ecological systems, with the aim of conserving biodiversity and providing ecosystem services for sustainable use. However, MPAs implemented worldwide include a large variety of zoning and management schemes, from single to multiple zoning and from no-take to multiple-use areas. The current IUCN categorisation of MPAs is based on management objectives, which often mismatch the actual regulations, causing strong uncertainty when evaluating the effectiveness of MPAs globally. A novel global classification system for MPAs based on the regulation of uses is presented, as an alternative to, or a complement of, the current IUCN system of categories. Scores for uses, weighted by their potential impact on biodiversity, were built; each zone within an MPA was scored, and an MPA index integrates the zone scores. This system classifies MPAs as well as each MPA zone individually, is globally applicable, and unambiguously discriminates the impacts of uses.
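As an illustration of the scoring scheme described above, here is a minimal sketch in Python; the impact weights, the allowed-use lists, and the area-weighted aggregation are assumptions for illustration, since the abstract does not give the actual scheme.

```python
# Illustrative sketch of a regulation-based MPA index.
# Impact weights and the area-weighted aggregation are assumptions;
# the abstract does not specify the actual scheme.

# Hypothetical impact weights per use (higher = more impact on biodiversity)
IMPACT_WEIGHTS = {
    "industrial_fishing": 10,
    "recreational_fishing": 5,
    "aquaculture": 4,
    "anchoring": 2,
    "scuba_diving": 1,
}

def zone_score(allowed_uses):
    """Score a zone from the uses its regulations allow."""
    return sum(IMPACT_WEIGHTS.get(use, 0) for use in allowed_uses)

def mpa_index(zones):
    """Aggregate zone scores into one MPA index, weighted by zone area."""
    total_area = sum(area for _, area in zones)
    return sum(zone_score(uses) * area for uses, area in zones) / total_area

# Example: a multiple-use MPA with a no-take core and a buffer zone
zones = [
    ([], 20.0),                                        # no-take zone, 20 km^2
    (["recreational_fishing", "scuba_diving"], 80.0),  # buffer zone, 80 km^2
]
print(mpa_index(zones))  # lower values = more restrictive MPA
```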

Relevance:

30.00%

Publisher:

Abstract:

Hydrogen is considered an appealing alternative to fossil fuels in the pursuit of sustainable, secure and prosperous growth in the UK and abroad. However, a persistent bottleneck remains in the effective storage of hydrogen for mobile applications, which must be resolved to facilitate a wide implementation of hydrogen fuel cells in the fossil-fuel-dependent transportation industry. To address this issue, new means of solid-state chemical hydrogen storage are proposed in this thesis, involving the coupling of LiH with three different organic amines: melamine, urea and dicyandiamide. In principle, thermodynamically favourable hydrogen release from these systems proceeds via the deprotonation of the protic N-H moieties by the hydridic metal hydride. At the same time, hydrogen kinetics is expected to be enhanced over heavier hydrides by incorporating lithium ions in the proposed binary hydrogen storage systems. Whilst the concept has been successfully demonstrated by the results obtained in this work, it was observed that optimising the ball-milling conditions is central to promoting hydrogen desorption in the proposed systems. The theoretical amount of 6.97 wt% of hydrogen (by dry mass) was released when heating a ball-milled mixture of LiH and melamine (6:1 stoichiometry) to 320 °C. It was observed that ball milling disrupts the intermolecular hydrogen-bonding network that exists in pristine melamine; this effect extends to a molecular-level electron redistribution, observed as a function of shifting IR bands. It was postulated that stable phases containing the triazine skeleton form during the first stages of dehydrogenation. Dehydrogenation of this system yields a solid product, Li2NCN, which has been rehydrogenated back to melamine via hydrolysis under weakly acidic conditions. On the other hand, the LiH and urea system (4:1 stoichiometry) desorbed approximately 5.8 wt% of hydrogen, out of the theoretical capacity of 8.78 wt% (dry mass), by 270 °C, accompanied by the release of undesirable ammonia and a trace amount of water. The thermal dehydrogenation proceeds via the formation of Li(HN(CO)NH2) at 104.5 °C, which then decomposes to LiOCN and unidentified phases containing C-N moieties by 230 °C. The final products are Li2NCN and Li2O (270 °C), with LiCN and Li2CO3 also detected under certain conditions. It was observed that ball milling can effectively suppress ammonia formation. Furthermore, results obtained from energetic ball-milling experiments indicate that the barrier to full dehydrogenation between LiH and urea is principally kinetic. Finally, the dehydrogenation reaction between LiH and dicyandiamide (4:1 stoichiometry) occurs through two distinct pathways depending on the ball-milling conditions. When ball milled at 450 RPM for 1 h, dehydrogenation proceeds alongside dicyandiamide condensation by 400 °C, whilst at a slower milling speed of 400 RPM for 6 h, decomposition occurs via a rapid gas desorption (H2 and NH3) at 85 °C, accompanied by sample foaming. The reactant dicyandiamide can be regenerated by hydrolysis of the product Li2NCN.
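The quoted theoretical capacities follow from simple mass bookkeeping. A quick check in Python, assuming complete dehydrogenation to the solid products reported in the abstract (6 LiH + C3H6N6 → 3 Li2NCN + 6 H2 and 4 LiH + CO(NH2)2 → Li2NCN + Li2O + 4 H2):

```python
# Check the theoretical gravimetric H2 capacities quoted in the abstract,
# assuming full dehydrogenation to the reported solid products.

M_LiH = 7.95         # g/mol
M_H2 = 2.016         # g/mol
M_melamine = 126.12  # g/mol, C3H6N6
M_urea = 60.06       # g/mol, CO(NH2)2

def capacity_wt_percent(n_LiH, m_amine, n_H2):
    """wt% H2 released per dry mass of the LiH + amine mixture."""
    dry_mass = n_LiH * M_LiH + m_amine
    return 100 * n_H2 * M_H2 / dry_mass

print(f"LiH:melamine 6:1 -> {capacity_wt_percent(6, M_melamine, 6):.2f} wt%")  # ~6.96 (quoted as 6.97)
print(f"LiH:urea     4:1 -> {capacity_wt_percent(4, M_urea, 4):.2f} wt%")      # ~8.78
```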

Relevance:

30.00%

Publisher:

Abstract:

Forest biomass has gained increasing importance in the world economy and in the evaluation of forest development and monitoring. It has been identified as a global strategic reserve, due to its applications in bioenergy and bioproduct development and to issues related to reducing greenhouse gas emissions. The estimation of above-ground biomass is frequently done with allometric functions per species using plot inventory data; an adequate sampling design and intensity for a given error threshold is required, and the estimation per unit area is done using an extrapolation method. This procedure is labour-demanding and costly. The main goal of this study is the development of allometric functions for the estimation of above-ground biomass with ground cover as the independent variable, for forest areas of holm oak (Quercus rotundifolia), cork oak (Quercus suber) and umbrella pine (Pinus pinea) in multiple-use systems. Ground cover per species was derived from the crown horizontal projection obtained by processing high-resolution satellite images – orthorectified, geometrically and atmospherically corrected – with a multi-resolution segmentation method and object-oriented classification. Forest inventory data were used to estimate plot above-ground biomass with published allometric functions at tree level. The developed functions were fitted for monospecies stands and for multispecies stands of Quercus rotundifolia and Quercus suber, and of Quercus suber and Pinus pinea. Stand composition was accounted for by adding dummy variables to distinguish monospecies from multispecies stands. The models showed a good performance; noteworthy is that the dummy variables, reflecting the differences between species, improved the models, and significant differences were found between above-ground biomass estimates from the functions with and without the dummy variables. An error threshold of 10% corresponds to stand areas of about 40 ha. This method enables evaluation over the whole area, not requiring extrapolation procedures, for the three species, which frequently occur in multispecies stands.
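As a sketch of the kind of model described (the abstract does not give the functional form), above-ground biomass can be regressed on ground cover with a composition dummy and an interaction term; the data and the linear form below are illustrative assumptions.

```python
import numpy as np

# Hypothetical plot data: ground cover (%) and above-ground biomass (Mg/ha).
cover = np.array([15.0, 30.0, 45.0, 60.0, 20.0, 40.0, 55.0, 70.0])
agb = np.array([12.0, 25.0, 36.0, 50.0, 14.0, 30.0, 40.0, 55.0])
multi = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # dummy: 1 = multispecies stand

# Design matrix: intercept, cover, dummy, and dummy x cover interaction,
# letting both intercept and slope differ between stand compositions.
X = np.column_stack([np.ones_like(cover), cover, multi, multi * cover])
beta, *_ = np.linalg.lstsq(X, agb, rcond=None)

b0, b1, d0, d1 = beta
print(f"monospecies:  AGB = {b0:.2f} + {b1:.2f} * cover")
print(f"multispecies: AGB = {b0 + d0:.2f} + {b1 + d1:.2f} * cover")
```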

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on a PhD thesis comprising the study and analysis of the performance of an onshore wind energy conversion system. First, mathematical models of a variable-speed wind turbine with pitch control are studied, followed by the study of different controller types – integer-order controllers, fractional-order controllers, fuzzy logic controllers, adaptive controllers and predictive controllers – and of a supervisor based on finite state machines. The controllers are included in the lower level of a two-level hierarchical structure whose objective is to control the electric output power around the rated power. The supervisor at the higher level is based on finite state machines whose objective is to analyse the operational states according to the wind speed. The studied mathematical models are integrated into computer simulations of the wind energy conversion system, and the numerical results obtained allow for the performance assessment of the system connected to the electric grid. The wind energy conversion system is composed of a variable-speed wind turbine, a mechanical transmission system described by a two-mass drive train, a gearbox, a doubly fed induction generator, and a two-level converter feeding the generator rotor.
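A minimal sketch of the supervisor idea follows; the states and the cut-in/rated/cut-out thresholds below are illustrative assumptions, not values from the thesis.

```python
# Minimal finite-state-machine supervisor for a wind turbine,
# switching operational state on wind speed. States and thresholds
# are illustrative assumptions.

CUT_IN, RATED, CUT_OUT = 4.0, 12.0, 25.0  # m/s, hypothetical thresholds

def next_state(wind_speed):
    """Map measured wind speed to an operational state."""
    if wind_speed < CUT_IN or wind_speed >= CUT_OUT:
        return "PARKED"        # too little or too much wind
    if wind_speed < RATED:
        return "PARTIAL_LOAD"  # track maximum power point
    return "FULL_LOAD"         # pitch control holds rated power

for v in [3.0, 8.0, 15.0, 26.0]:
    print(f"{v:5.1f} m/s -> {next_state(v)}")
```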

Relevance:

30.00%

Publisher:

Abstract:

Global warming and climate change have been among the most controversial topics since the industrial revolution. The main contributor to global warming is carbon dioxide (CO2), which increases the temperature by trapping heat in the atmosphere. Atmospheric CO2 concentration remained around 280 ppm for a long period before the industrial era, but has since increased dramatically to approximately 420 ppm. According to the Paris Agreement, the temperature increase must be kept below 2 °C, and preferably below 1.5 °C, to prevent reaching the tipping point of climate change. Keeping the increase below this range requires solutions that reduce CO2 emissions, such as low-carbon systems and a transition from fossil fuels to renewable energy sources (RES). This thesis is devoted to the assessment of low-carbon systems and the reduction of CO2 by using RES instead of fossil fuels. One of the most important aspects in defining the location and capacity of low-carbon systems is the estimation of CO2 mass. High-emission systems can be substituted by low-carbon ones; dredging is an example of a high-emission system, and its global CO2 emissions are relatively high and growing with the expansion of marine transport. Thus, an ejector system is investigated as an alternative to dredging in chapter 2. The transition from fossil fuels to RES also requires solutions to the RES storage problem; one solution is zero-emission fuels such as hydrogen. However, hydrogen production requires electricity, and electricity production emits a large amount of CO2. Therefore, the last three chapters are devoted to hydrogen generation via electrolysis – under current conditions and under scenarios of RES supply and of varying cell characteristics and stack materials – and to its delivery.

Relevance:

30.00%

Publisher:

Abstract:

The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of this data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, the characterization of real-world networks requires accounting for their temporal dimension, together with incorporating higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address these challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also growing concern about their responsible use; in particular, there has been increased emphasis on addressing the limitations of interpretability in graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex-systems domains. We initially focus on forecasting problems related to face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs. The proposed models are extensively evaluated in multiple experimental settings, and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
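To make the embedding-based pipeline concrete, here is a minimal sketch (not the thesis's models): nodes are embedded via the top eigenvectors of the adjacency matrix, and candidate links are scored by the dot product of their embeddings.

```python
import numpy as np

# Minimal link-prediction sketch: embed nodes via the top-k eigenvectors
# of the adjacency matrix, then score candidate links by dot product.

# Toy graph: a path (0-1-2) bridged to a triangle (3-4-5); the pair (0, 2)
# closes the path into a triangle and is the "missing" link to recover.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

k = 2
vals, vecs = np.linalg.eigh(A)                  # eigenvalues in ascending order
Z = vecs[:, -k:] * np.sqrt(np.abs(vals[-k:]))   # k-dim embedding per node

def link_score(i, j):
    """Higher dot product = more likely link under this embedding."""
    return Z[i] @ Z[j]

# The held-out within-community pair should outscore the cross pair.
print(f"score(0, 2) = {link_score(0, 2):.3f}")
print(f"score(0, 5) = {link_score(0, 5):.3f}")
```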

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a study of globular clusters (GCs), based on the analysis of Monte Carlo simulations of GCs, with the aim of defining new empirical parameters that are measurable from observations and able to trace the different phases of their dynamical evolution history. During their long-term dynamical evolution, due to mass segregation and dynamical friction, massive stars transfer kinetic energy to lower-mass objects, causing them to sink toward the cluster center. This continuous transfer of kinetic energy from the core to the outskirts triggers the runaway contraction of the core, known as "core collapse" (CC), followed by episodes of expansion and contraction called gravothermal oscillations. Such internal dynamical evolution corresponds to significant variations in the structure of the system. Determining the dynamical age of a cluster can be challenging, as it depends on various internal and external properties: the traditional classification of GCs as CC or post-CC systems relies on detecting a steep power-law cusp in the central density profile, which may not always be reliable due to post-CC oscillations or other processes. In this thesis, the normalized cumulative radial distribution (nCRD) within a fraction of the half-mass radius is analyzed, and three diagnostics (A5, P5, and S2.5) are defined. These diagnostics are sensitive to dynamical evolution and can distinguish pre-CC clusters from post-CC clusters. The analysis, performed using multiple simulations with different initial conditions, including varying binary fractions and the presence of dark remnants, showed that the time variations of the diagnostics follow distinct patterns depending on the binary fraction and on the retention or ejection of black holes. The analysis is then extended to a larger set of simulations matching the observed properties of Galactic GCs, and the parameters show the potential to distinguish the dynamical stages of observed clusters as well.
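A sketch of the underlying quantity, assuming equal-mass stars and synthetic radii; the precise definitions of A5, P5, and S2.5 are not given in the abstract, so only the nCRD itself is computed.

```python
import numpy as np

# Sketch of a normalized cumulative radial distribution (nCRD) for a
# cluster snapshot. The A5/P5/S2.5 diagnostics are not defined in the
# abstract; here we only build the nCRD itself.

rng = np.random.default_rng(0)
# Hypothetical projected radii in [0, 1]; exponent < 2 means more
# centrally concentrated than a uniform disc.
r = rng.power(1.5, size=10_000)

r_half = np.median(r)   # half-mass radius proxy (equal-mass stars)
r_max = 0.5 * r_half    # analyze within a fraction of r_half

grid = np.linspace(0, r_max, 200)
ncrd = np.searchsorted(np.sort(r), grid) / r.size  # fraction of stars within R

# A more centrally concentrated (dynamically older) cluster rises faster:
print(f"nCRD at 0.25 r_half: {np.interp(0.25 * r_half, grid, ncrd):.3f}")
```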

Relevance:

30.00%

Publisher:

Abstract:

The abundance of visual data and the push for robust AI are driving the need for automated visual sensemaking. Computer Vision (CV) faces growing demand for models that can discern not only what images "represent," but also what they "evoke." This is a demand for tools mimicking human perception at a high semantic level, categorizing images based on concepts like freedom, danger, or safety. However, automating this process is challenging due to entropy, scarcity, subjectivity, and ethical considerations. These challenges not only impact performance but also underscore the critical need for interpretability. This dissertation focuses on abstract-concept-based (AC) image classification, guided by three technical principles: situated grounding, performance enhancement, and interpretability. We introduce ART-stract, a novel dataset of cultural images annotated with ACs, serving as the foundation for a series of experiments across four key domains: assessing the effectiveness of the end-to-end DL paradigm, exploring cognitively inspired semantic intermediaries, incorporating cultural and commonsense aspects, and neuro-symbolic integration of sensory-perceptual data with cognitive-based knowledge. Our results demonstrate that integrating CV approaches with semantic technologies yields methods that surpass the current state of the art in AC image classification, outperforming the end-to-end deep vision paradigm. The results emphasize the role semantic technologies can play in developing systems that are both effective and interpretable, through the capturing, situating, and reasoning over knowledge related to visual data. Furthermore, this dissertation explores the complex interplay between technical and socio-technical factors. By merging technical expertise with an understanding of human and societal aspects, we advocate for responsible labeling and training practices in visual media. These insights and techniques not only advance efforts in CV and explainable artificial intelligence but also propel us toward an era of AI development that harmonizes technical prowess with deep awareness of its human and societal implications.
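As a minimal sketch of the end-to-end deep-learning baseline that the dissertation evaluates (the architecture, labels, and data below are illustrative stand-ins, not the ART-stract setup):

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal end-to-end baseline for abstract-concept classification:
# fine-tune a pretrained CNN on images labeled with abstract concepts.
# Labels and data are illustrative stand-ins, not the ART-stract setup.
concepts = ["freedom", "danger", "safety"]
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(concepts))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 RGB images (224x224) with random concept labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(concepts), (8,))

model.train()
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```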

Relevance:

30.00%

Publisher:

Abstract:

Embedded systems are increasingly integral to daily life, improving and facilitating the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task; additionally, ensuring platform security is important to avoid harm to individuals and assets. This study addresses challenges in contemporary embedded systems, focusing on platform optimization and security enforcement. The initial section delves into the application of machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster so as to minimize energy consumption, using static source-code analysis. Results demonstrate that automated platform configuration is viable, with a moderate performance trade-off when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a Siamese-network training framework, that enhance the classification performance of DeepLLVM, an advanced approach for task mapping; importantly, the proposed approaches are independent of the specific deep-learning model used. Finally, this research addresses issues concerning the binary exploitation of software running on modern embedded systems. It proposes an architecture to implement Control-Flow Integrity on embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications. The approach enhances the architecture of a modern RISC-V platform for autonomous vehicles by implementing a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
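A minimal sketch of the Siamese idea for device mapping follows; the feature dimensions, loss, and labels are illustrative assumptions, not the DeepLLVM setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a Siamese setup for heterogeneous device mapping: a shared
# encoder embeds static kernel features; a contrastive loss pulls together
# kernels whose best device is the same. Dimensions/labels are illustrative.

class Encoder(nn.Module):
    def __init__(self, in_dim=32, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    """same = 1 if both kernels map to the same device, else 0."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stand-in batch: pairs of static feature vectors plus a same-device flag.
x1, x2 = torch.randn(64, 32), torch.randn(64, 32)
same = torch.randint(0, 2, (64,)).float()

loss = contrastive_loss(encoder(x1), encoder(x2), same)
opt.zero_grad()
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.3f}")
```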

Relevance:

30.00%

Publisher:

Abstract:

Hand gesture recognition based on surface electromyography (sEMG) signals is a promising approach for the development of intuitive human-machine interfaces (HMIs) in domains such as robotics and prosthetics. The sEMG signal arises from the muscles' electrical activity and can thus be used to recognize hand gestures. Decoding sEMG signals into actual control signals is non-trivial; typically, control systems map sEMG patterns to a set of gestures using machine learning, failing to incorporate any physiological insight. This master's thesis aims to develop a bio-inspired hand gesture recognition system based on neuromuscular spike extraction rather than on simple pattern recognition. The system relies on a decomposition algorithm based on independent component analysis (ICA) that decomposes the sEMG signal into its constituent motor-unit spike trains, which are then forwarded to a machine learning classifier. Since ICA does not guarantee a consistent motor-unit ordering across different sessions, three approaches are proposed: two ordering criteria, based on firing rate and negative entropy, and a re-calibration approach that allows the decomposition model to retain information about previous sessions. Using a multilayer perceptron (MLP), the latter approach reaches an accuracy of up to 99.4% in a 1-subject, 1-degree-of-freedom scenario. Afterwards, the decomposition and classification pipeline for inference is parallelized and profiled on the PULP platform, achieving a latency below 50 ms and an energy consumption below 1 mJ. Both classification models tested (a support vector machine and a lightweight MLP) yielded an accuracy above 92% in a 1-subject, 5-class (4 gestures and rest) scenario. These results prove that the proposed system is suitable for real-time execution on embedded platforms and capable of matching the accuracy of state-of-the-art approaches, while also giving physiological insight into the neuromuscular spikes underlying the sEMG.
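A minimal sketch of the decompose-then-classify pipeline on synthetic data; FastICA stands in for the thesis's decomposition algorithm, and the features and labels are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

# Sketch of the decompose-then-classify pipeline on synthetic data:
# FastICA stands in for the sEMG decomposition; features derived from
# the recovered sources feed an MLP gesture classifier.

rng = np.random.default_rng(0)
n_channels, n_samples, n_sources = 8, 2000, 4

# Synthetic "spike train" sources mixed into multichannel "sEMG".
sources = (rng.random((n_samples, n_sources)) < 0.02).astype(float)
mixing = rng.normal(size=(n_sources, n_channels))
emg = sources @ mixing + 0.01 * rng.normal(size=(n_samples, n_channels))

# Decompose back into (unordered) source estimates.
ica = FastICA(n_components=n_sources, random_state=0)
est_sources = ica.fit_transform(emg)

# Toy features per window: mean absolute activity of each source.
windows = est_sources.reshape(20, 100, n_sources)  # 20 windows of 100 samples
features = np.abs(windows).mean(axis=1)
labels = rng.integers(0, 5, size=20)               # hypothetical gesture labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```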

Relevance:

30.00%

Publisher:

Abstract:

Within the classification of orbits in axisymmetric stellar systems, we present a new algorithm able to automatically classify orbits according to their nature. The algorithm applies the correlation-integral method to the surface of section (SoS) of the orbit: by fitting the cumulative distribution function built from the consequents in the SoS, we obtain the value of its logarithmic slope m, which is directly related to the orbit's nature – for slopes m ≈ 1 we expect the orbit to be regular, for slopes m ≈ 2 we expect it to be chaotic. This method gives a fast and reliable way to classify orbits and, furthermore, provides an analytical expression for the probability that an orbit is regular or chaotic given the logarithmic slope m of its correlation integral. Although the method works well statistically, the underlying algorithm can fail in some cases, misclassifying individual orbits under peculiar circumstances. The performance of the algorithm benefits from a rich sampling of the traces in the SoS, which can be obtained with long numerical integration of the orbits. We also note that the algorithm does not differentiate between the subtypes of regular orbits, resonantly trapped and untrapped; such a distinction would be a useful feature, which we leave for future work. Since the result of the analysis is a probability linked to a Gaussian distribution, by the very definition of the distribution some orbits, whatever their true nature, are classified as belonging to the opposite class and populate the probabilistic tails of the distribution. So while the method produces fair statistical results, it lacks absolute classification precision.
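A sketch of the correlation-integral slope estimate on toy surface-of-section points; the analytical slope-to-probability mapping is the paper's and is not reproduced here.

```python
import numpy as np

# Sketch: estimate the logarithmic slope m of the correlation integral
# C(r) ~ r^m from points (consequents) on a surface of section.
# Toy data: points on a closed curve (regular-like, so expect m ~ 1).

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 1000)
pts = np.column_stack([np.cos(t), 0.3 * np.sin(t)])  # invariant-curve stand-in

# Correlation integral: fraction of point pairs closer than r.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d = d[np.triu_indices_from(d, k=1)]

radii = np.logspace(-2, -0.5, 20)
C = np.array([(d < r).mean() for r in radii])

# Logarithmic slope from a log-log linear fit: m ~ 1 regular, m ~ 2 chaotic.
m, _ = np.polyfit(np.log(radii), np.log(C), 1)
print(f"estimated slope m = {m:.2f}")
```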

Relevance:

30.00%

Publisher:

Abstract:

The study of galaxies at high redshift plays a crucial role in understanding the mechanisms of galaxy formation and evolution. At redshifts just after the epoch of re-ionization (4 < z < 6), galaxies grow in mass, change their morphological type, and progressively become more obscured due to increased dust attenuation of the UV light. Therefore, determining the physical parameters of the dust is essential to tracing the history of the star formation rate (SFR). The main purpose of this thesis is to determine the spatial extent of the dust emission in high-redshift galaxies and to provide a lower limit on the dust temperature, in order to constrain the dust mass. This is achieved by studying 23 FIR-continuum-detected main-sequence galaxies at high redshift (4 < z < 6) from the ALMA Large Program to INvestigate CII at Early times (ALPINE) survey, 20 of which are analyzed as individual systems. Of these 20, 7 are spatially resolved; for each of the remaining 13, we provide an upper limit on the dust size. We find that the gas emission is more extended than the dust spatial scale, by a factor of 1.40 ± 0.29, while the latter appears to be larger than the stellar emission size. Moreover, we do not find any significant trend of dust size as a function of stellar mass or redshift. In addition, we provide a minimum dust temperature estimate for the 7 resolved sources, for which we find Tmin ∼ 16–19 K. We also derive dust masses for the resolved sources, log(Mdust/M⊙) ∼ 7–8.
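For orientation, the dust-mass scale can be sketched with the standard optically thin greybody estimate; every input below (flux, redshift, opacity normalization, temperature) is an illustrative assumption, not a measurement from the thesis.

```python
import numpy as np

# Order-of-magnitude dust-mass sketch with a standard optically thin
# greybody: M_dust = S_nu * d_L^2 / ((1+z) * kappa_nu * B_nu(T_dust)).
# All input values are illustrative assumptions.

h, k, c = 6.626e-27, 1.381e-16, 2.998e10  # cgs constants

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

z = 5.0
S_nu = 0.3e-26         # 0.3 mJy observed, in erg s^-1 cm^-2 Hz^-1
d_L = 47e3 * 3.086e24  # ~47 Gpc luminosity distance at z~5, in cm
nu_obs = 353e9         # ALMA band 7, Hz
nu_rest = nu_obs * (1 + z)

# Dust opacity kappa = kappa_0 (nu/nu_0)^beta, assumed normalization.
kappa = 0.45 * (nu_rest / 250e9) ** 1.8  # cm^2 g^-1

T_dust = 25.0  # K, assumed
M_dust = S_nu * d_L**2 / ((1 + z) * kappa * planck(nu_rest, T_dust))
print(f"log10(M_dust/Msun) ~ {np.log10(M_dust / 1.989e33):.1f}")
```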

Relevance:

20.00%

Publisher:

Abstract:

Intermittent fasting (IF) is an often-used intervention to decrease body mass. In male Sprague-Dawley rats, 24-hour cycles of IF result in mild caloric restriction, reduced body mass gain, and significant decreases in the efficiency of energy conversion. Here, we study the metabolic effects of IF in order to uncover the mechanisms involved in this lower energy-conversion efficiency. After 3 weeks, IF animals displayed overeating during fed periods and lower body mass, accompanied by alterations in the masses of energy-related tissues. The lower efficiency of energy use was not due to uncoupling of muscle mitochondria. Enhanced lipid oxidation was observed during fasting days, whereas fed days were accompanied by higher metabolic rates. Furthermore, an increased expression of the orexigenic neurotransmitters AGRP and NPY was found in the hypothalamus of IF animals, even on feeding days, which could explain the overeating pattern. Together, these effects provide a mechanistic explanation for the lower efficiency of energy conversion observed. Overall, we find that IF promotes changes in hypothalamic function that explain the differences in body mass and caloric intake.