877 results for Acceleration data structure
Abstract:
In recent years, corporate reputation has gained the attention of many scholars in strategic management and related fields. There is a general consensus that higher corporate reputation is positively related to firm success or performance. However, the link is not always straightforward, which calls for researchers to dedicate their efforts to investigating the causes and effects of firm reputation and how it is related to performance. In this doctoral dissertation, innovation is suggested as a mediating variable in this relationship. Innovation is a critical factor for firm success and survival. Highly reputed firms are in a more advantageous position to attract critical resources for innovation such as human and financial capital. These firms face constant pressure from external stakeholders, e.g. the general public or customers, to achieve and remain at high levels of innovativeness. As a result, firms are in constant search, internally or externally, for new technologies that expand their knowledge base. Consequently, these firms engage in firm acquisitions. In the dissertation, the author assesses the effects of domestic versus international acquisitions as well as related versus unrelated acquisitions on the level of innovativeness and performance. Building upon an established measure of firm-level degree of internationalization (DOI), the dissertation proposes a more detailed and enhanced measure for the firm's DOI. This measure is modeled as an interaction effect between corporate reputation and resources for innovation. More specifically, firms with higher levels of internationalization will have access to resources for innovation, i.e. human and financial capital, at a global scale. Additionally, the distance between firms and higher education institutions, i.e. universities, is considered as another interaction effect for human capital attraction. The dissertation is built on two theoretical frameworks, the resource-based view of the firm and institutional theory. It studies 211 U.S. firms using a longitudinal panel data structure from 2006 to 2012. It utilizes a linear dynamic panel data estimation methodology to test its hypotheses. Results confirm the hypotheses proposed in the study.
Abstract:
With the main focus on safety, design of structures for vibration serviceability is often overlooked or mismanaged, resulting in some high-profile structures failing publicly to perform adequately under human dynamic loading due to walking, running or jumping. A standard tool for informing better design, proving fitness for purpose before entering service, and designing retrofits is modal testing, a procedure that typically involves acceleration measurements using an array of wired sensors and force generation using a mechanical shaker. A critical but often overlooked aspect is using input (force) to output (response) relationships to enable estimation of modal mass, which is a key parameter directly controlling vibration levels in service.
This paper describes the use of wireless inertial measurement units (IMUs), designed for biomechanics motion capture applications, for the modal testing of a 109 m footbridge. IMUs were first used for an output-only vibration survey to identify mode frequencies, shapes and damping ratios, then for simultaneous measurement of the body accelerations of a human subject jumping to excite specific vibration modes and of the bridge deck accelerations at the jumping location. Using the mode shapes and the vertical acceleration data from a suitable body landmark, scaled by body mass to provide jumping force data, it was possible to create frequency response functions and estimate modal masses.
The modal mass estimates for this bridge were checked against estimates obtained using an instrumented hammer and known mass distributions, showing consistency among the experimental estimates. Finally, the method was used in an applied research application on a short-span footbridge, where the logistical and operational simplicity afforded by the highly portable and easy-to-use IMUs proved extremely useful for an efficient evaluation of vibration serviceability, including estimation of modal masses.
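As an illustration of the input-to-output step described above, the following minimal sketch (not the authors' code; the file names, sampling rate, body mass and modal parameters are assumed for illustration) estimates an accelerance frequency response function from jumping-force and deck-acceleration records and reads a single-mode modal mass off the resonance peak.

```python
# Minimal sketch: H1 estimate of an accelerance FRF and a single-mode modal
# mass from jumping-force and deck-acceleration records. Assumes the two
# signals are synchronised and that the jumping force is approximated by body
# mass times the vertical acceleration of a body landmark.
import numpy as np
from scipy.signal import csd, welch

fs = 128.0                                     # sampling rate (Hz), assumed
body_mass = 75.0                               # jumper's mass (kg), assumed
a_body = np.load("body_vertical_accel.npy")    # hypothetical recording
a_deck = np.load("deck_vertical_accel.npy")    # hypothetical recording

force = body_mass * a_body                     # approximate jumping force (N)

# Cross-spectrum of force and response over the force auto-spectrum (H1).
f, S_ff = welch(force, fs=fs, nperseg=2048)
_, S_fa = csd(force, a_deck, fs=fs, nperseg=2048)
H = S_fa / S_ff                                # accelerance, (m/s^2)/N

# For a lightly damped single mode, |H(f_n)| ~ 1 / (2 * zeta * m), so the
# modal mass follows from the resonance peak and the damping ratio identified
# in the output-only survey.
f_n, zeta = 2.1, 0.006                         # assumed frequency and damping
peak = np.abs(H[np.argmin(np.abs(f - f_n))])
modal_mass = 1.0 / (2.0 * zeta * peak)
print(f"Estimated modal mass at {f_n} Hz: {modal_mass:.0f} kg")
```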
Abstract:
Building on previous research, the goal of this project was to identify significant influencing factors for the Iowa Department of Transportation (DOT) to consider in future updates of its Instructional Memorandum (I.M.) 3.213, which provides guidelines for determining the need for traffic barriers (guardrail and bridge rail) at secondary roadway bridges—specifically, factors that might be significant for the bridge rail rating system component of I.M. 3.213. A literature review was conducted of policies and guidelines in other states and, specifically, of studies related to traffic barrier safety countermeasures at bridges in several states. In addition, a safety impact study was conducted to evaluate possible non-driver-related behavior characteristics of crashes on secondary road structures in Iowa using road data, structure data, and crash data from 2004 to 2013. Statistical models (negative binomial regression) were used to determine which factors were significant in terms of crash volume and crash severity. The study found that crashes are somewhat more frequent on or at bridges possessing certain characteristics—traffic volume greater than 400 vehicles per day (vpd) (paved) or greater than 50 vpd (unpaved), bridge length greater than 150 ft (paved) or greater than 35 ft (unpaved), bridge width narrower than its approach (paved) or narrower than 20 ft (unpaved), and bridges older than 25 years (both paved and unpaved). No specific roadway or bridge characteristic was found to contribute to more serious crashes. The study also confirmed previous research findings that crashes with bridges on secondary roads are rare, low-severity events. Although the findings of the study support the need for appropriate use of bridge rails, it concludes that prescriptive guidelines for bridge rail use on secondary roads may not be necessary, given the limited crash expectancy and lack of differences in crash expectancy among the various combinations of explanatory characteristics.
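As a hedged illustration of the modelling approach mentioned above, the sketch below fits a negative binomial crash-frequency model with statsmodels; the data file, column names and exposure offset are hypothetical assumptions, not the study's actual variables.

```python
# Illustrative negative binomial crash-frequency model (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical bridge-level table: crash counts plus candidate predictors.
df = pd.read_csv("secondary_road_bridges.csv")

# Negative binomial GLM of crash counts on traffic volume, bridge length,
# relative width and age, with years of exposure as an offset.
model = smf.glm(
    "crashes ~ aadt + bridge_length_ft + width_ratio + bridge_age_yr",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),
    offset=np.log(df["exposure_years"]),
).fit()
print(model.summary())
```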
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
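The following minimal sketch (an illustration of the general idea, not the dissertation's system) shows how continuous neighborhood-driven aggregates can be maintained incrementally as structural changes and node events stream in.

```python
# Minimal sketch of continuous neighborhood-driven aggregation on a streaming
# graph: each node keeps a running aggregate (here, a count and a sum) of the
# values produced by its neighbors, updated incrementally as events arrive.
from collections import defaultdict

adjacency = defaultdict(set)             # node -> set of neighbors
agg = defaultdict(lambda: [0, 0.0])      # node -> [count, sum] over neighborhood

def add_edge(u, v):
    adjacency[u].add(v)
    adjacency[v].add(u)

def on_node_event(node, value):
    # Push the new value into the partial aggregates of all neighbors,
    # so neighborhood queries can be answered without recomputation.
    for nbr in adjacency[node]:
        agg[nbr][0] += 1
        agg[nbr][1] += value

def neighborhood_average(node):
    count, total = agg[node]
    return total / count if count else 0.0

# Usage: a small stream of structural changes and node activity.
add_edge("a", "b"); add_edge("a", "c")
on_node_event("b", 4.0); on_node_event("c", 2.0)
print(neighborhood_average("a"))         # -> 3.0
```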
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, to develop an appropriate methodology for risk management, to evaluate existing preservation evaluation approaches and metrics, to structure best-practice knowledge and, lastly, to demonstrate a range of tools that utilise our findings. We describe a mixed methodology that uses interviews and surveys, extensive content analysis, practical case studies, and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. We summarise the extent of the challenge facing the digital preservation community (and, by extension, users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of both the likelihood of a negative outcome and the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity, a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We continue by describing how the manifestation of risks typically spans an entire organisational environment, and how taking risk as the focus of our analysis safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, whether through the risks themselves or through associated system elements.
Doing so yields the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision-making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
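A heavily hedged sketch of the kind of linked-data statements such an ontology might capture is given below; the namespace, class and property names are illustrative assumptions and not the published PORRO vocabulary.

```python
# Illustrative linked-data statements relating a repository, an object and a
# risk; all names below are hypothetical, not the published PORRO terms.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

PORRO = Namespace("http://example.org/porro#")   # hypothetical namespace
g = Graph()

repo = PORRO.ExampleRepository
obj = PORRO.ExampleObject
risk = PORRO.FormatObsolescenceRisk

g.add((repo, RDF.type, PORRO.Repository))
g.add((obj, RDF.type, PORRO.DigitalObject))
g.add((risk, RDF.type, PORRO.Risk))

# Relate repository, object and risk characteristics, as the abstract describes.
g.add((repo, PORRO.holds, obj))
g.add((obj, PORRO.exposedTo, risk))
g.add((risk, PORRO.mitigatedBy, Literal("Migration to an open format")))

print(g.serialize(format="turtle"))
```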
Abstract:
In the presented thesis work, the meshfree method with distance fields is applied to create a novel computational approach that enables the inclusion of realistic geometric models of the microstructure and liberates Finite Element Analysis (FEA) from the dependence on, and limitations of, meshing fine microstructural features such as splats and porosity. Manufacturing processes for ceramics produce materials with a complex porosity microstructure. The geometry of the pores, their size and their location substantially affect the macro-scale physical properties of the material. The complex structure and geometry of the pores severely limit the application of modern Finite Element Analysis methods because these methods require the construction of spatial grids (meshes) that conform to the geometric shape of the structure. As a result, there are virtually no effective tools available for predicting the overall mechanical and thermal properties of porous materials based on their microstructure. A key contribution of this thesis is the separate handling and control of the geometric and physical computational models, which are seamlessly combined at solution run time. Using the proposed approach, we determine the effective thermal conductivity tensor of real porous ceramic materials featuring both isotropic and anisotropic thermal properties. This work involved the development and implementation of numerical algorithms, data structures, and software.
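As a hedged illustration of the distance-field idea, the sketch below describes a square plate with circular pores implicitly through signed distance functions combined with an R-function conjunction; the pore layout is made up and this is not the thesis code.

```python
# Minimal sketch: representing a porous microstructure without meshing, via
# signed distance functions combined with an R-function conjunction.
import numpy as np

def plate(x, y, half=1.0):
    # Approximate signed distance to a square plate boundary (positive inside).
    return np.minimum.reduce([half - np.abs(x), half - np.abs(y)])

def pore(x, y, cx, cy, r):
    # Signed distance from a circular pore boundary: positive outside the pore
    # (i.e. in the material), negative inside the pore.
    return np.hypot(x - cx, y - cy) - r

def r_conjunction(a, b):
    # R-function "and" (Rvachev conjunction): positive only where both
    # arguments are positive.
    return a + b - np.hypot(a, b)

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
pores = [(0.3, 0.2, 0.15), (-0.4, -0.3, 0.1)]   # hypothetical pore layout

omega = plate(x, y)
for cx, cy, r in pores:
    omega = r_conjunction(omega, pore(x, y, cx, cy, r))

# omega > 0 marks the solid material; omega == 0 approximates its boundary.
# In a meshfree method with distance fields, such a field is combined with the
# basis functions so boundary conditions are satisfied without a body-fitted mesh.
print("solid area fraction of the box:", np.mean(omega > 0))
```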
Abstract:
User Quality of Experience (QoE) is a subjective entity and difficult to measure. One important aspect of it, User Experience (UX), corresponds to the sensory and emotional state of a user. For a user interacting through a User Interface (UI), precise information on how they are using the UI can contribute to understanding their UX, and thereby understanding their QoE. As well as a user's interactions with the UI, such as clicking, scrolling, touching, or selecting, other real-time digital information about the user, such as data from smartphone sensors (e.g. accelerometer, light level) and physiological sensors (e.g. heart rate, ECG, EEG), could contribute to understanding UX. Baran is a framework that is designed to capture, record, manage and analyse the User Digital Imprint (UDI), which is the data structure containing all user context information. Baran simplifies the process of collecting experimental information in Human and Computer Interaction (HCI) studies, by recording comprehensive real-time data for any UI experiment, and making the data available as a standard UDI data structure. This paper presents an overview of the Baran framework, and provides an example of its use to record user interaction and perform some basic analysis of the interaction.
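A hypothetical sketch of what a UDI record might look like as a data structure is given below; the field names and classes are assumptions made for illustration, not the Baran framework's actual schema.

```python
# Hypothetical User Digital Imprint (UDI) record combining UI events and
# sensor samples for one experimental session.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class SensorSample:
    source: str              # e.g. "accelerometer", "heart_rate"
    timestamp: float         # seconds since the start of the session
    values: Dict[str, float]

@dataclass
class UIEvent:
    kind: str                # e.g. "click", "scroll", "touch"
    target: str              # UI element identifier
    timestamp: float

@dataclass
class UserDigitalImprint:
    session_id: str
    ui_events: List[UIEvent] = field(default_factory=list)
    sensor_samples: List[SensorSample] = field(default_factory=list)
    metadata: Dict[str, Any] = field(default_factory=dict)

# Usage: append events and samples as they stream in during an experiment.
udi = UserDigitalImprint(session_id="exp-001")
udi.ui_events.append(UIEvent("click", "submit_button", 12.4))
udi.sensor_samples.append(SensorSample("accelerometer", 12.4,
                                       {"x": 0.02, "y": -0.98, "z": 0.10}))
```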
Abstract:
Conventional web search engines are centralised in that a single entity crawls and indexes the documents selected for future retrieval, and controls the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine – from crawling and indexing documents, to query processing – across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval in which the user's information need, formulated as a query, is routed to distributed resources and the retrieved result lists are merged into a final list. P2P-IR networks are one form of federated search, routing queries and merging results among participating peers. The query is propagated through disseminated nodes to reach the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (or consumer). However, query routing is considered one of the major challenges and a critical part of P2P-IR networks: relevant peers might be lost through low-quality peer selection while executing the query routing, inevitably leading to less effective retrieval results. This motivates this thesis to study and propose query routing techniques to improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into similar semantic clusters, where each such semantic cluster is managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query routing approach called Inverted PeerCluster Index (IPI), which simulates the conventional inverted index of the centralised corpus to organise the statistics of peers' terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of using conventional Information Retrieval models as peer selection approaches, where each peer can be considered as a big document of documents. The experimental evaluation shows competitive and significant results and demonstrates that document retrieval methods are very effective for peer selection, which brings back the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results compared with state-of-the-art resource selection methods and competitive results with corresponding classification-based approaches.
Finally, I propose reputation-based query routing approaches that exploit the idea, familiar from social community networks, of providing feedback on a specific item and managing it for future decision-making. The system monitors users' behaviour when they click on or download documents from the final ranked list as implicit feedback, and mines this information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback information (i.e., positive feedback given on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results in almost all measurement metrics, with an approximate improvement of more than 56% over baseline approaches. Thus, based on the results, if one were to choose a single technique, reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
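As a hedged illustration of the reputation-based routing idea, the sketch below accumulates implicit click/download feedback into per-peer scores and ranks peers for routing; the structures, smoothing and parameter choices are assumptions, not the thesis implementation.

```python
# Minimal sketch: implicit feedback on returned documents is accumulated into
# a per-peer reputation score, which is then used to rank peers for routing.
from collections import defaultdict

# peer -> [positive_feedback, total_feedback]
reputation = defaultdict(lambda: [0, 0])

def record_feedback(peer_id, clicked_relevant):
    pos, total = reputation[peer_id]
    reputation[peer_id] = [pos + (1 if clicked_relevant else 0), total + 1]

def peer_score(peer_id, prior=0.5, smoothing=2.0):
    # Smoothed success rate, so peers with little feedback fall back to a prior.
    pos, total = reputation[peer_id]
    return (pos + smoothing * prior) / (total + smoothing)

def route_query(candidate_peers, k=3):
    # Forward the query to the k peers with the highest reputation scores.
    return sorted(candidate_peers, key=peer_score, reverse=True)[:k]

# Usage with hypothetical feedback events.
record_feedback("peer_A", True); record_feedback("peer_A", True)
record_feedback("peer_B", False)
print(route_query(["peer_A", "peer_B", "peer_C"], k=2))
```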
Abstract:
The first mechanical automaton concept is found in a Chinese text written in the 3rd century BC, while Computer Vision was born in the late 1960s. Visual perception applied to machines (i.e. Machine Vision) is therefore a young and exciting alliance. When robots came in, the new field of Robotic Vision was born, and these terms began to be erroneously interchanged. In short, we can say that Machine Vision is an engineering domain concerned with the industrial use of vision. Robotic Vision, instead, is a research field that tries to incorporate robotics aspects into computer vision algorithms. Visual Servoing, for example, is one of the problems that cannot be solved by computer vision alone. Accordingly, a large part of this work deals with boosting popular Computer Vision techniques by exploiting robotics: e.g. the use of kinematics to localize a vision sensor mounted as the robot end-effector. The remainder of this work is dedicated to the counterpart, i.e. the use of computer vision to solve real robotic problems such as grasping objects or navigating while avoiding obstacles. A brief survey of the mapping data structures most widely used in robotics is presented, along with SkiMap, a novel sparse data structure created both for robotic mapping and as a general-purpose 3D spatial index. Several approaches to implementing Object Detection and Manipulation that exploit the aforementioned mapping strategies are then proposed, along with a completely new Machine Teaching facility intended to simplify the training procedure of modern Deep Learning networks.
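The following sketch illustrates the general idea behind a sparse 3D spatial index for robotic mapping; it is inspired by, but does not reproduce, SkiMap, and the resolution, hit-count threshold and sample points are assumptions.

```python
# Minimal sparse voxel map: only voxels that have actually been observed are
# stored, keyed by their integer grid indices.
import math
from collections import defaultdict

class SparseVoxelMap:
    def __init__(self, resolution=0.05):
        self.resolution = resolution             # voxel edge length in metres
        self.voxels = defaultdict(int)           # (ix, iy, iz) -> hit count

    def _key(self, x, y, z):
        r = self.resolution
        return (math.floor(x / r), math.floor(y / r), math.floor(z / r))

    def integrate_point(self, x, y, z):
        # Called for every 3D point from a depth sensor or laser scan.
        self.voxels[self._key(x, y, z)] += 1

    def is_occupied(self, x, y, z, min_hits=3):
        return self.voxels.get(self._key(x, y, z), 0) >= min_hits

# Usage with a few hypothetical sensor points falling in the same voxel.
vmap = SparseVoxelMap(resolution=0.1)
for p in [(1.02, 0.53, 0.36), (1.04, 0.55, 0.31), (1.07, 0.52, 0.33)]:
    vmap.integrate_point(*p)
print(vmap.is_occupied(1.03, 0.54, 0.32))        # -> True
```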
Abstract:
The central aim of this dissertation is to introduce innovative methods, models, and tools to enhance the overall performance of supply chains responsible for handling perishable products. This concept of improved performance encompasses several critical dimensions, including enhanced efficiency in supply chain operations, product quality, safety, sustainability, waste generation minimization, and compliance with norms and regulations. The research is structured around three specific research questions that provide a solid foundation for delving into and narrowing down the array of potential solutions. These questions primarily concern enhancing the overall performance of distribution networks for perishable products and optimizing the package hierarchy, extending to unconventional packaging solutions. To address these research questions effectively, a well-defined research framework guides the approach. However, the dissertation adheres to an overarching methodological approach that comprises three fundamental aspects. The first aspect centers on the necessity of systematic data sampling and categorization, including identifying critical points within food supply chains. The data collected in this context must then be organized within a customized data structure designed to feed both cyber-physical and digital twins to quantify and analyze supply chain failures with a preventive perspective.
Abstract:
A Digital Scholarly Edition is a conceptually and structurally sophisticated entity. Throughout the centuries, diverse methodologies have been employed to reconstruct a text transmitted through one or multiple sources, resulting in various edition types. With the advent of digital technology in philology, these practices have undergone a significant transformation, compelling scholars to reconsider their approach in light of the web. In the digital age, philologists are expected to possess (too) advanced technical skills to prepare interactive and enriched editions, even though, in most cases, only mechanical or documentary editions are published online. The Śivadharma Database is a web Content Management System (CMS) designed to facilitate the preparation, publication, and updating of Digital Scholarly Editions. By providing scholars with a user-friendly CRUD web application for reconstructing and annotating a text, it enables them to prepare their textus with additional components such as apparatus, notes, translations, citations, and parallels. This is made possible by an annotation system based on HTML and a graph data structure. This choice is made because the text entity is multidimensional and multifaceted, even if its sequential presentation constrains it. In particular, editions of South Asian texts of the Śivadharma corpus, the case study of this research, contain a series of phenomena that are difficult to manage formally, such as overlapping hierarchies. Hence, it becomes necessary to establish the data structure best suited to represent this complexity. In the Śivadharma Database, the textus is an HTML file that can be readily displayed. Textual fragments, annotated via an interface without requiring philologists to write code and saved in the backend, form the atomic units of multiple relationships organised in a graph database. This approach enables the formal representation of complex and overlapping textual phenomena, allowing for good annotation expressiveness with minimal effort to learn the relevant technologies during the editing workflow.
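A hedged sketch of representing overlapping annotations as a graph is given below; the node kinds, property names and sample fragments are illustrative assumptions, not the Śivadharma Database schema.

```python
# Minimal sketch: text fragments are atomic nodes, and apparatus entries,
# translations or notes are nodes linked to whatever fragments they cover, so
# overlapping hierarchies become plain relationships rather than nested XML.
import networkx as nx

g = nx.MultiDiGraph()

# Atomic text fragments, in reading order.
fragments = ["śiva", "dharma", "uvāca"]
for i, text in enumerate(fragments):
    g.add_node(f"frag{i}", kind="fragment", text=text, order=i)

# Two annotations that overlap on frag1: an apparatus entry spanning
# frag0-frag1 and a translation note spanning frag1-frag2.
g.add_node("app1", kind="apparatus", reading="variant reading in MS B")
g.add_node("trans1", kind="translation", text="hypothetical translation")
for frag in ("frag0", "frag1"):
    g.add_edge("app1", frag, relation="annotates")
for frag in ("frag1", "frag2"):
    g.add_edge("trans1", frag, relation="annotates")

# Query: everything that annotates frag1, regardless of hierarchy.
print(list(g.predecessors("frag1")))     # -> ['app1', 'trans1']
```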
Abstract:
In this paper, we present multiband optical polarimetric observations of the very-high energy blazar PKS 2155-304 made simultaneously with a HESS/Fermi high-energy campaign in 2008, when the source was found to be in a low state. The intense daily coverage of the data set allowed us to study in detail the temporal evolution of the emission, and we found that the particle acceleration time-scales are decoupled from the changes in the polarimetric properties of the source. We present a model in which the optical polarimetric emission originates at the polarized mm-wave core and propose an explanation for the lack of correlation between the photometric and polarimetric fluxes. The optical emission is consistent with an inhomogeneous synchrotron source in which the large-scale field is locally organized by a shock in which particle acceleration takes place. Finally, we use these optical polarimetric observations of PKS 2155-304 at a low state to propose an origin for the quiescent gamma-ray flux of the object, in an attempt to provide clues for the source of its recently established persistent TeV emission.
Abstract:
The collection of spatial information to quantify changes to the state and condition of the environment is a fundamental component of conservation or sustainable utilization of tropical and subtropical forests. Age is an important structural attribute of old-growth forests influencing biological diversity in Australian eucalypt forests. Aerial photograph interpretation has traditionally been used for mapping the age and structure of forest stands. However, this method is subjective and is not able to accurately capture the fine- to landscape-scale variation necessary for ecological studies. Identification and mapping of fine- to landscape-scale vegetative structural attributes will allow the compilation of information associated with Montreal Process indicators 1b and 1d, which seek to determine linkages between age structure and the diversity and abundance of forest fauna populations. This project integrated measurements of structural attributes derived from a canopy-height elevation model with results from a geometrical-optical/spectral mixture analysis model to map forest age structure at a landscape scale. The availability of multiple-scale data allows the transfer of high-resolution attributes to landscape-scale monitoring. Multispectral image data were obtained from a DMSV (Digital Multi-Spectral Video) sensor over St Mary's State Forest in Southeast Queensland, Australia. Local scene variance levels for different forest types calculated from the DMSV data were used to optimize the tree density and canopy size output in a geometric-optical model applied to a Landsat Thematic Mapper (TM) data set. Airborne laser scanner data obtained over the project area were used to calibrate a digital filter to extract tree heights from a digital elevation model that was derived from scanned colour stereopairs. The modelled estimates of tree height, crown size, and tree density were used to produce a decision-tree classification of forest successional stage at a landscape scale. The results obtained (72% accuracy) were limited in validation, but demonstrate potential for using the multi-scale methodology to provide spatial information for forestry policy objectives (i.e., monitoring forest age structure).
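As a hedged illustration of the final classification step, the sketch below trains a decision tree on the three modelled structural attributes named above; the feature values and successional-stage labels are made up for illustration.

```python
# Illustrative decision-tree classification of forest successional stage from
# modelled tree height, crown size and stand density (hypothetical samples).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training samples: [mean tree height (m), crown diameter (m),
# stems per hectare] with successional-stage labels.
X = np.array([
    [8.0,  2.5, 900],
    [18.0, 4.0, 450],
    [32.0, 7.5, 120],
    [10.0, 3.0, 800],
    [30.0, 8.0, 150],
])
y = ["regrowth", "mature", "old-growth", "regrowth", "old-growth"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Classify a new landscape cell described by the same three attributes.
print(clf.predict([[22.0, 5.5, 300]]))
```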
Abstract:
The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. Namely, in section 1.1 we overview the methodology of a Data Mining project and its main algorithms. In section 1.2 an introduction to proteins and their supporting file formats is outlined. This chapter concludes with section 1.3, which defines the main problem we intend to address with this work: determining whether an amino acid is exposed or buried in a protein, in a discrete way (i.e. not continuous), for five exposure levels: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supported this work is presented. Namely, the process of loading data from the Protein Data Bank, DSSP and SCOP is described. Then an initial data exploration is performed and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analyzed with statistical significance tests. Initially, the classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at stake. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility. Then we summarise the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A has the schema of the database that supported this thesis. Appendix B has a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis, which allows the reconstruction of the present work.
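A minimal sketch of the discretisation step described in section 1.3 is given below; the column names and toy data are assumptions, not the thesis's actual tables.

```python
# Turn a residue's relative solvent accessibility (RSA) into a buried/exposed
# label for each of the five exposure thresholds: 2%, 10%, 20%, 25% and 30%.
import pandas as pd

THRESHOLDS = [0.02, 0.10, 0.20, 0.25, 0.30]

def label_exposure(rsa_table: pd.DataFrame) -> pd.DataFrame:
    # rsa_table is expected to contain one row per residue with an 'rsa'
    # column holding relative solvent accessibility in [0, 1].
    out = rsa_table.copy()
    for t in THRESHOLDS:
        out[f"exposed_{int(t * 100)}pct"] = (out["rsa"] >= t).astype(int)
    return out

# Usage with a toy table standing in for data derived from the PDB and DSSP.
toy = pd.DataFrame({"residue": ["ALA", "TRP", "SER"], "rsa": [0.05, 0.01, 0.40]})
print(label_exposure(toy))
```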
Abstract:
The conjugate margins system of the Gulf of Lion and West Sardinia (GLWS) represents a unique natural laboratory for addressing fundamental questions about rifting due to its landlocked situation, its youth, its thick sedimentary layers, including prominent palaeo-markers such as the MSC event, and the amount of available data and multidisciplinary studies. The main goals of the SARDINIA experiment were to (i) investigate the deep structure of the entire system within the two conjugate margins: the Gulf of Lion and West Sardinia, (ii) characterize the nature of the crust, and (iii) define the geometry of the basin and provide important constraints on its genesis. This paper presents the results of P-wave velocity modelling on three coincident near-vertical reflection multi-channel seismic (MCS) and wide-angle seismic profiles acquired in the Gulf of Lion, to a depth of 35 km. A companion paper [part II, Afilhado et al., 2015] addresses the results of two other SARDINIA profiles located on the eastern conjugate West Sardinian margin. Forward wide-angle modelling of both data sets confirms that the margin is characterised by three distinct domains following the onshore unthinned, 33 km-thick continental crust domain: Domain I is bounded by two necking zones, where the crust thins respectively from 30 to 20 and from 20 to 7 km over a width of about 170 km; the outermost necking is imprinted by the well-known T-reflector at its crustal base; Domain II is characterised by a 7 km-thick crust with anomalous velocities ranging from 6 to 7.5 km/s; it represents the transition between the thinned continental crust (Domain I) and a very thin (only 4-5 km) "atypical" oceanic crust (Domain III). In Domain II, the hypothesis of the presence of exhumed mantle is falsified by our results: this domain likely consists of a thin exhumed lower continental crust overlying a heterogeneous, intruded lower layer. Moreover, despite the difference in their magnetic signatures, Domains II and III present very similar seismic velocity profiles, and we discuss the possibility of a connection between these two different domains.