908 results for Data Structure Operations
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of the to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and, lastly, the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. These collectively imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome and of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution can be delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment, and how taking risk as the focus of our analysis safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements.
To do so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and by exposing the resource and its associated applications to evaluation by the digital preservation community.
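As a rough illustration of the risk characterisation described above (likelihood combined with impact, associated activities and policies, and delegable responsibility), the following Python sketch models a single risk record. The field names and scoring rule are assumptions made for illustration only; they are not drawn from DRAMBORA or the PORRO ontology.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """Hypothetical record pairing a risk's manifestation with its mitigation context."""
    name: str
    likelihood: float          # probability of the negative outcome, 0.0-1.0
    impact: float              # severity of the outcome on an arbitrary 0.0-1.0 scale
    activities: list = field(default_factory=list)   # organisational activities affected
    policies: list = field(default_factory=list)     # policies that mitigate the risk
    owner: str = "unassigned"  # delegated responsibility for resolution

    @property
    def exposure(self) -> float:
        # Risk expressed as the combination of likelihood and impact.
        return self.likelihood * self.impact

bit_rot = Risk("media degradation", likelihood=0.3, impact=0.9,
               activities=["storage management"], policies=["fixity checking"],
               owner="repository administrator")
print(f"{bit_rot.name}: exposure {bit_rot.exposure:.2f}")
```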
Abstract:
In the presented thesis work, a meshfree method with distance fields is applied to create a novel computational approach which enables the inclusion of realistic geometric models of the microstructure and liberates Finite Element Analysis (FEA) from the dependence on, and limitations of, meshing of fine microstructural features such as splats and porosity. Manufacturing processes of ceramics produce materials with a complex porosity microstructure. The geometry of pores, their size and location substantially affect the macro-scale physical properties of the material. The complex structure and geometry of the pores severely limit the application of modern Finite Element Analysis methods because they require the construction of spatial grids (meshes) that conform to the geometric shape of the structure. As a result, there are virtually no effective tools available for predicting overall mechanical and thermal properties of porous materials based on their microstructure. The contribution of this thesis is the separate handling and control of geometric and physical computational models that are seamlessly combined at solution run time. Using the proposed approach we determine the effective thermal conductivity tensor of real porous ceramic materials featuring both isotropic and anisotropic thermal properties. This work involved the development and implementation of numerical algorithms, data structures, and software.
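As a loose illustration of the geometry handling described above, the sketch below represents a spherical pore implicitly with a distance field sampled on a regular grid (no conforming mesh) and then brackets the effective conductivity with simple mixture bounds. The pore shape, grid size and conductivity values are assumptions; the thesis' actual meshfree solution procedure is not reproduced here.

```python
import numpy as np

# Represent pore geometry implicitly with a signed distance field on a regular
# grid, then bound the effective conductivity with simple mixture rules.

n = 64
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Signed distance to a single spherical pore (negative inside the pore).
cx, cy, cz, pore_radius = 0.5, 0.5, 0.5, 0.2
dist = np.sqrt((X - cx)**2 + (Y - cy)**2 + (Z - cz)**2) - pore_radius

solid = dist >= 0.0                     # material where the distance field is non-negative
porosity = 1.0 - solid.mean()

k_solid, k_pore = 30.0, 0.026           # W/(m.K), assumed values for ceramic and air
k_voigt = (1 - porosity) * k_solid + porosity * k_pore          # parallel (upper) bound
k_reuss = 1.0 / ((1 - porosity) / k_solid + porosity / k_pore)  # series (lower) bound
print(f"porosity {porosity:.3f}, k between {k_reuss:.2f} and {k_voigt:.2f} W/(m.K)")
```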
Abstract:
User Quality of Experience (QoE) is a subjective entity and difficult to measure. One important aspect of it, User Experience (UX), corresponds to the sensory and emotional state of a user. For a user interacting through a User Interface (UI), precise information on how they are using the UI can contribute to understanding their UX, and thereby understanding their QoE. As well as a user’s use of the UI, such as clicking, scrolling, touching, or selecting, other real-time digital information about the user, such as from smart phone sensors (e.g. accelerometer, light level) and physiological sensors (e.g. heart rate, ECG, EEG), could contribute to understanding UX. Baran is a framework designed to capture, record, manage and analyse the User Digital Imprint (UDI), which is the data structure containing all user context information. Baran simplifies the process of collecting experimental information in Human-Computer Interaction (HCI) studies by recording comprehensive real-time data for any UI experiment and making the data available as a standard UDI data structure. This paper presents an overview of the Baran framework, and provides an example of its use to record user interaction and perform some basic analysis of the interaction.
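A minimal sketch of what a UDI-style container might look like follows; the class and field names are hypothetical and are not Baran's actual schema, but they illustrate the idea of one time-ordered record holding both UI events and sensor readings.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class UDIEvent:
    """One timestamped observation: a UI action or a sensor reading (hypothetical schema)."""
    timestamp: float
    source: str      # e.g. "ui", "accelerometer", "heart_rate"
    kind: str        # e.g. "click", "scroll", "sample"
    payload: dict

@dataclass
class UserDigitalImprint:
    """Container for all context information captured during one session."""
    user_id: str
    events: list = field(default_factory=list)

    def record(self, source, kind, **payload):
        self.events.append(UDIEvent(time(), source, kind, payload))

udi = UserDigitalImprint("participant-01")
udi.record("ui", "click", element="submit_button")
udi.record("accelerometer", "sample", x=0.01, y=0.02, z=9.81)
print(len(udi.events), "events captured")
```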
Abstract:
Conventional web search engines are centralised in that a single entity crawls and indexes the documents selected for future retrieval and controls the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine – from crawling and indexing documents, to query processing – across the network of users (or, peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval model that routes the user’s information need, formulated as a query, to distributed resources and merges the retrieved result lists into a final list. P2P-IR networks are one form of federated search in routing queries and merging results among participating peers. The query is propagated through disseminated nodes to hit the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (namely, the customer). However, query routing is considered one of the major challenges and a critical part of P2P-IR networks, as the relevant peers might be lost through low-quality peer selection while executing the query routing, which inevitably leads to less effective retrieval results. This motivates this thesis to study and propose query routing techniques to improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into similar semantic clusters, where each such semantic cluster is managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level as content representations gathered from cooperative peers to propose a query routing approach called the Inverted PeerCluster Index (IPI), which simulates the conventional inverted index of a centralised corpus to organise the statistics of peers’ terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of using conventional Information Retrieval models as peer selection approaches, where each peer can be considered a big document of documents. The experimental evaluation shows comparable and significant results, indicating that document retrieval methods are very effective for peer selection, which brings back the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results compared with state-of-the-art resource selection methods and competitive results relative to corresponding classification-based approaches.
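The sketch below illustrates the general idea behind an inverted index kept over peers rather than documents: each term maps to the peers whose content representations contain it, and a query is routed to the peers matching the most query terms. It is a simplified illustration using assumed toy data, not the IPI implementation evaluated in the thesis.

```python
from collections import defaultdict

# Toy content representations gathered from cooperative peers.
peer_terms = {
    "peer_a": {"football", "league", "score"},
    "peer_b": {"galaxy", "telescope", "orbit"},
    "peer_c": {"football", "stadium", "ticket"},
}

# Inverted index over peers: term -> peers whose representation contains it.
inverted = defaultdict(set)
for peer, terms in peer_terms.items():
    for term in terms:
        inverted[term].add(peer)

def route(query_terms, top_k=2):
    scores = defaultdict(int)
    for term in query_terms:
        for peer in inverted.get(term, ()):
            scores[peer] += 1            # count matching query terms per peer
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(route({"football", "score"}))      # -> ['peer_a', 'peer_c']
```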
Finally, I propose reputation-based query routing approaches that exploit the idea of providing feedback on a specific item in social community networks and managing it for future decision-making. The system monitors users’ behaviours when they click or download documents from the final ranked list as implicit feedback, and mines the given information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments to cover various scenarios, including noisy feedback information (i.e., providing positive feedback on non-relevant documents), to examine the robustness of reputation-based approaches. The empirical evaluation shows significant results in almost all measurement metrics, with an approximate improvement of more than 56% compared to baseline approaches. Thus, based on the results, if one were to choose one technique, reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
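The following sketch illustrates the reputation idea in miniature: implicit feedback events (clicks, downloads) increment a per-peer score that later ranks candidate peers for routing. The weights and update rule are assumptions, not the thesis' exact model.

```python
from collections import defaultdict

# Reputation structure built from implicit feedback: every click or download
# on a document returned by a peer nudges that peer's score upwards.
reputation = defaultdict(float)

def record_feedback(peer_id, action):
    weights = {"click": 1.0, "download": 2.0}   # assumed weights
    reputation[peer_id] += weights.get(action, 0.0)

def rank_peers(candidates, top_k=3):
    # Route future queries to the highest-reputation peers first.
    return sorted(candidates, key=lambda p: reputation[p], reverse=True)[:top_k]

record_feedback("peer_a", "click")
record_feedback("peer_b", "download")
record_feedback("peer_b", "click")
print(rank_peers(["peer_a", "peer_b", "peer_c"]))   # -> ['peer_b', 'peer_a', 'peer_c']
```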
Abstract:
The first mechanical Automaton concept was found in a Chinese text written in the 3rd century BC, while Computer Vision was born in the late 1960s. Therefore, visual perception applied to machines (i.e. Machine Vision) is a young and exciting alliance. When robots came in, the new field of Robotic Vision was born, and these terms began to be erroneously interchanged. In short, we can say that Machine Vision is an engineering domain, which concerns the industrial use of vision. Robotic Vision, instead, is a research field that tries to incorporate robotics aspects in computer vision algorithms. Visual Servoing, for example, is one of the problems that cannot be solved by computer vision alone. Accordingly, a large part of this work deals with boosting popular Computer Vision techniques by exploiting robotics: e.g. the use of kinematics to localize a vision sensor mounted as the robot end-effector. The remainder of this work is dedicated to the counterpart, i.e. the use of computer vision to solve real robotic problems like grasping objects or navigating while avoiding obstacles. A brief survey of the mapping data structures most widely used in robotics will be presented, along with SkiMap, a novel sparse data structure created both for robotic mapping and as a general-purpose 3D spatial index. Then, several approaches to implement Object Detection and Manipulation by exploiting the aforementioned mapping strategies will be proposed, along with a completely new Machine Teaching facility intended to simplify the training procedure of modern Deep Learning networks.
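SkiMap itself is organised as a tree of skip lists; the sketch below shows only the general purpose such a structure serves, namely a sparse 3D spatial index that stores data for occupied voxels addressed by integer grid coordinates. The resolution and API are illustrative assumptions, not SkiMap's actual interface.

```python
class SparseVoxelMap:
    """Toy sparse 3D map: only occupied voxels consume memory."""

    def __init__(self, resolution=0.05):
        self.resolution = resolution      # metres per voxel
        self.voxels = {}                  # (ix, iy, iz) -> occupancy count

    def _key(self, x, y, z):
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def integrate_point(self, x, y, z):
        # Accumulate evidence for the voxel containing this measured point.
        key = self._key(x, y, z)
        self.voxels[key] = self.voxels.get(key, 0) + 1

    def occupied(self, x, y, z, threshold=1):
        return self.voxels.get(self._key(x, y, z), 0) >= threshold

m = SparseVoxelMap()
m.integrate_point(1.02, 0.51, 0.33)
print(m.occupied(1.03, 0.52, 0.34))   # True: same voxel at 5 cm resolution
```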
Abstract:
A Digital Scholarly Edition is a conceptually and structurally sophisticated entity. Throughout the centuries, diverse methodologies have been employed to reconstruct a text transmitted through one or multiple sources, resulting in various edition types. With the advent of digital technology in philology, these practices have undergone a significant transformation, compelling scholars to reconsider their approach in light of the web. In the digital age, philologists are expected to possess (too) advanced technical skills to prepare interactive and enriched editions, even though, in most cases, only mechanical or documentary editions are published online. The Śivadharma Database is a web Content Management System (CMS) designed to facilitate the preparation, publication, and updating of Digital Scholarly Editions. By providing scholars with a user-friendly CRUD web application to reconstruct and annotate a text, it allows them to prepare their textus with additional components such as apparatus, notes, translations, citations, and parallels. This is made possible by leveraging an annotation system based on HTML and a graph data structure. This choice is made because the text entity is multidimensional and multifaceted, even if its sequential presentation constrains it. In particular, editions of South Asian texts of the Śivadharma corpus, the case study of this research, contain a series of phenomena that are difficult to manage formally, such as overlapping hierarchies. Hence, it becomes necessary to establish the data structure best suited to represent this complexity. In the Śivadharma Database, the textus is an HTML file that can be readily displayed. Textual fragments, annotated via an interface without requiring philologists to write code and saved in the backend, form the atomic units of multiple relationships organised in a graph database. This approach enables the formal representation of complex and overlapping textual phenomena, allowing for good annotation expressiveness with minimal effort to learn the relevant technologies during the editing workflow.
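As an illustration of the modelling idea, the sketch below treats annotated fragments as graph nodes and apparatus or translation entries as typed edges pointing at them, so overlapping annotations never have to fit a single hierarchy. The node and edge labels are hypothetical and do not reflect the actual Śivadharma Database schema; networkx merely stands in for the backend graph database.

```python
import networkx as nx

# Fragments and annotations as nodes; typed edges express their relationships.
g = nx.MultiDiGraph()
g.add_node("frag-001", text="śivadharmaṃ pravakṣyāmi", kind="textus")
g.add_node("app-001", text="pravakṣyāmi ] pravakṣyāmaḥ (hypothetical variant)", kind="apparatus")
g.add_node("tr-001", text="I shall teach the Śivadharma", kind="translation")

g.add_edge("app-001", "frag-001", relation="annotates")
g.add_edge("tr-001", "frag-001", relation="translates")

# Overlapping annotations are unproblematic: any number of edges may target
# the same fragment without requiring a single well-formed XML hierarchy.
for src, dst, data in g.in_edges("frag-001", data=True):
    print(f"{src} --{data['relation']}--> {dst}")
```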
Abstract:
The European Union has expanded significantly in recent years. Sustainable trade within the Union, leading to economic growth to the benefit of the ‘old’ and ‘new’ member states is thus extremely important. The road infrastructure is strategic and vital to such development since an uneven transport infrastructure, in terms of capacity and condition, has the potential to reinforce uneven development trends and hinder economic convergence of old and new member states. In the decades since their design and construction, loading conditions have significantly changed for many major highway infrastructure elements/networks owing primarily to increased freight volumes and vehicle sizes. This, coupled with the gradual deterioration of a significant number of highway structures due to their age, and the absence of a pan-European assessment framework, can be expected to affect the smooth functioning of the infrastructure in its as-built condition. Increased periods of reduced flow can be expected owing to planned and unplanned interventions for repair/rehabilitation. This paper reports the findings of a survey regarding the current status of the highway infrastructure elements in six countries within the European Union as reported by the owners/operators. The countries surveyed include a cross-section of ‘existing’ older countries and ‘new’ member states. The current situations for bridges, culverts, tunnels and retaining walls are reported, along with their potential replacement costs. The findings act as a departure point for further studies in support of a centralised and/or synchronised EU approach to infrastructure maintenance management. Information in the form presented in this paper is central to any future decision-making frameworks in terms of trade route choice and operations, monetary investment, optimised maintenance, management and rehabilitation of the built infrastructure and the economic integration of the newly joined member states.
Abstract:
Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for GDW. However, little effort has been focused on studying how spatial data redundancy affects SOLAP (Spatial On-Line Analytical Processing) query performance over GDW. In this paper, we investigate this issue. Firstly, we compare redundant and non-redundant GDW schemas and conclude that redundancy is related to high performance losses. We also analyze the issue of indexing, aiming at improving SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by R-tree and the star-join aided by GiST indicate that the SB-index significantly improves the elapsed time in query processing, from 25% up to 99%, with regard to SOLAP queries defined over the spatial predicates of intersection, enclosure and containment and applied to roll-up and drill-down operations. We also investigate the impact of the increase in data volume on performance. The increase did not impair the performance of the SB-index, which highly improved the elapsed time in query processing. Performance tests also show that the SB-index is far more compact than the star-join, requiring only a small fraction, at most 0.20%, of the volume. Moreover, we propose a specific enhancement of the SB-index to deal with spatial data redundancy. This enhancement improved performance from 80% to 91% for redundant GDW schemas.
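The sketch below illustrates the access pattern that a spatial index such as the SB-index enables: the spatial predicate is resolved first against a compact structure of dimension keys, and only the surviving keys are used to aggregate the fact table, avoiding a full star-join. Bounding boxes stand in for real geometries, and the data layout is an assumption rather than the published SB-index structure.

```python
# Spatial dimension summary: key -> bounding box (xmin, ymin, xmax, ymax).
city_bbox = {
    1: (0, 0, 10, 10),
    2: (40, 40, 50, 50),
    3: (5, 5, 15, 15),
}
# Fact table rows: (city_key, revenue).
sales = [(1, 120.0), (2, 75.0), (3, 40.0), (1, 60.0)]

def intersects(a, b):
    # Axis-aligned bounding-box intersection test.
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Resolve the spatial predicate against the compact structure first...
query_window = (8, 8, 22, 22)
matching_keys = {k for k, bbox in city_bbox.items() if intersects(bbox, query_window)}

# ...then roll up the fact table restricted to the pre-selected keys.
total = sum(rev for key, rev in sales if key in matching_keys)
print(matching_keys, total)          # {1, 3} 220.0
```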
Abstract:
The collection of spatial information to quantify changes to the state and condition of the environment is a fundamental component of conservation or sustainable utilization of tropical and subtropical forests. Age is an important structural attribute of old-growth forests influencing biological diversity in Australian eucalypt forests. Aerial photograph interpretation has traditionally been used for mapping the age and structure of forest stands. However, this method is subjective and is not able to accurately capture the fine to landscape scale variation necessary for ecological studies. Identification and mapping of fine to landscape scale vegetative structural attributes will allow the compilation of information associated with Montreal Process indicators 1b and 1d, which seek to determine linkages between age structure and the diversity and abundance of forest fauna populations. This project integrated measurements of structural attributes derived from a canopy-height elevation model with results from a geometrical-optical/spectral mixture analysis model to map forest age structure at a landscape scale. The availability of multiple-scale data allows the transfer of high-resolution attributes to landscape scale monitoring. Multispectral image data were obtained from a DMSV (Digital Multi-Spectral Video) sensor over St Mary's State Forest in Southeast Queensland, Australia. Local scene variance levels for different forest types calculated from the DMSV data were used to optimize the tree density and canopy size output in a geometric-optical model applied to a Landsat Thematic Mapper (TM) data set. Airborne laser scanner data obtained over the project area were used to calibrate a digital filter to extract tree heights from a digital elevation model that was derived from scanned colour stereopairs. The modelled estimates of tree height, crown size, and tree density were used to produce a decision-tree classification of forest successional stage at a landscape scale. The results obtained (72% accuracy), while limited in validation, demonstrate the potential for using the multi-scale methodology to provide spatial information for forestry policy objectives (i.e., monitoring forest age structure).
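As a purely hypothetical sketch of the final classification step, the snippet below trains a decision tree on invented height, crown-size and stem-density values to assign a successional stage; none of the numbers or class labels come from the St Mary's State Forest study.

```python
from sklearn.tree import DecisionTreeClassifier

# Features: [mean height (m), mean crown diameter (m), stems per hectare].
# All values are invented for illustration only.
X = [
    [8,  2.0, 900],   # regrowth
    [12, 3.0, 700],   # regrowth
    [22, 5.5, 350],   # mature
    [26, 6.5, 300],   # mature
    [32, 9.0, 150],   # old-growth
    [35, 10.5, 120],  # old-growth
]
y = ["regrowth", "regrowth", "mature", "mature", "old-growth", "old-growth"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[30, 8.5, 180]]))   # -> ['old-growth']
```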
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. Namely, in section 1.1 we overview the methodology of a Data Mining project and its main algorithms. In section 1.2 an introduction to proteins and their supporting file formats is outlined. This chapter is concluded with section 1.3, which defines the main problem we intend to address with this work: determining whether an amino acid is exposed or buried in a protein, in a discrete way (i.e. not continuous), for five exposure levels: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supported this work is presented. Namely, the process of loading data from the Protein Data Bank, DSSP and SCOP is described. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analyzed with statistical significance tests. Initially the classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at stake. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief revision of the literature about amino acid relative solvent accessibility. Then we overview the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A has the schema of the database that supported this thesis. Appendix B has a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis that allows the reconstruction of the present work.
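The sketch below illustrates the labelling and baseline steps described for the second chapter: relative solvent accessibility is discretised at one of the studied cut-offs (20% here), and a naive baseline predicts the majority class per amino acid type. The example values are invented for illustration.

```python
from collections import Counter, defaultdict

# (amino acid, relative solvent accessibility as a fraction of the maximum) -- invented values
observations = [
    ("ALA", 0.05), ("ALA", 0.10), ("ALA", 0.32),
    ("LYS", 0.55), ("LYS", 0.61), ("VAL", 0.08),
]

cutoff = 0.20   # one of the five exposure levels studied
labelled = [(aa, "exposed" if rsa >= cutoff else "buried") for aa, rsa in observations]

# Majority-class baseline per residue type.
counts = defaultdict(Counter)
for aa, label in labelled:
    counts[aa][label] += 1
baseline = {aa: c.most_common(1)[0][0] for aa, c in counts.items()}

print(baseline)   # {'ALA': 'buried', 'LYS': 'exposed', 'VAL': 'buried'}
```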
Abstract:
The conjugate margins system of the Gulf of Lion and West Sardinia (GLWS) represents a unique natural laboratory for addressing fundamental questions about rifting due to its landlocked situation, its youth, its thick sedimentary layers, including prominent palaeo-markers such as the MSC event, and the amount of available data and multidisciplinary studies. The main goals of the SARDINIA experiment were to (i) investigate the deep structure of the entire system within the two conjugate margins: the Gulf of Lion and West Sardinia, (ii) characterize the nature of the crust, and (iii) define the geometry of the basin and provide important constraints on its genesis. This paper presents the results of P-wave velocity modelling on three coincident near-vertical reflection multi-channel seismic (MCS) and wide-angle seismic profiles acquired in the Gulf of Lion, to a depth of 35 km. A companion paper [part II, Afilhado et al., 2015] addresses the results of two other SARDINIA profiles located on the eastern conjugate West Sardinian margin. Forward wide-angle modelling of both data sets confirms that the margin is characterised by three distinct domains following the onshore unthinned, 33 km-thick continental crust domain: Domain I is bounded by two necking zones, where the crust thins respectively from 30 to 20 and from 20 to 7 km over a width of about 170 km; the outermost necking is imprinted by the well-known T-reflector at its crustal base; Domain II is characterised by a 7 km-thick crust with anomalous velocities ranging from 6 to 7.5 km/s; it represents the transition between the thinned continental crust (Domain I) and a very thin (only 4-5 km) "atypical" oceanic crust (Domain III). In Domain II, the hypothesis of the presence of exhumed mantle is falsified by our results: this domain may likely consist of a thin exhumed lower continental crust overlying a heterogeneous, intruded lower layer. Moreover, despite the difference in their magnetic signatures, Domains II and III present very similar seismic velocity profiles, and we discuss the possibility of a connection between these two different domains.
Abstract:
Geophysical data acquired on the conjugate margins system of the Gulf of Lion and West Sardinia (GLWS) is unique in its ability to address fundamental questions about rifting (i.e. crustal thinning, the nature of the continent-ocean transition zone, the style of rifting and subsequent evolution, and the connection between deep and surface processes). While the Gulf of Lion (GoL) was the site of several deep seismic experiments, which occurred before the SARDINIA Experiment (ESP and ECORS Experiments in 1981 and 1988 respectively), the crustal structure of the West Sardinia margin remains unknown. This paper describes the first modelling of wide-angle and near-vertical reflection multi-channel seismic (MCS) profiles crossing the West Sardinia margin, in the Mediterranean Sea. The profiles were acquired, together with the exact conjugates of the profiles crossing the GoL, during the SARDINIA experiment in December 2006 with the French R/V L'Atalante. Forward wide-angle modelling of both data sets (wide-angle and multi-channel seismic) confirms that the margin is characterized by three distinct domains following the onshore unthinned, 26 km-thick continental crust: Domain V, where the crust thins from 26 to 6 km over a width of about 75 km; Domain IV, where the basement is characterized by high velocity gradients and lower crustal seismic velocities from 6.8 to 7.25 km/s, which are atypical for either crustal or upper mantle material; and Domain III, composed of "atypical" oceanic crust. The structure observed on the West Sardinian margin presents a distribution of seismic velocities that is symmetrical with those observed on the Gulf of Lion's side, except for the dimension of each domain and with respect to the initiation of seafloor spreading. This result does not support the hypothesis of a simple shear mechanism operating along a lithospheric detachment during the formation of the Liguro-Provencal basin.
Abstract:
Dissertation submitted for the Degree of Master in Engineering Physics
Abstract:
Almost half of Ireland’s commercial stocks face overexploitation. As traditional species decrease in abundance and become less profitable, the industry is increasingly turning to alternate species. Atlantic saury (Scomberesox saurus saurus (Walbaum)) has been identified as a potential species for exploitation. Very little information is available on its biology or population dynamics, especially for Irish waters. This thesis aims to obtain sound scientific data, which will help to ensure that a future Atlantic saury fishery can be sustainably managed. The research has produced valuable data, some of which contradicts previous studies. Growth of Atlantic saury measured using otolith microstructure is found to be more than twice that previously calculated from annual structures on scales and otoliths. This results in a significant reduction of the expected life span from five to about two years. Investigation of maturity stage at age indicates that Atlantic saury will reproduce for the first time at age one and will survive for one or at most two reproduction seasons. It is concluded that a future Irish fishery will target mostly fish prior to their first reproduction. Finally, the thesis gives some insights into the population structure of Atlantic saury, by analysis of otolith morphometrics. Significant differences are detected between Northeastern Atlantic and western Mediterranean Sea specimens of the 0+ age class (less than one year old). The implications of these results for the management of an emerging fishery are discussed.
Abstract:
Similarity-based operations, similarity join, similarity grouping, data integration