711 results for Localization real-world challenges
Abstract:
This paper challenges current practices in the use of digital media to communicate Australian Aboriginal knowledge practices in a learning context. It proposes that any digital representation of Aboriginal knowledge practices needs to examine the epistemology and ontology of those practices in order to design digital environments that effectively support and enable existing Aboriginal knowledge practices in the real world. Central to this is the need for any new digital representation of Aboriginal knowledge to resolve the conflict between database and narrative views of knowledge (L. Manovich, 2001), so as to provide a tool that complements rather than supplants direct experience of traditional knowledge practices (V. Hart, 2001). The paper concludes by reporting on the recent development of an advanced learning technology that addresses these requirements.
Abstract:
This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research activities in robotics have advanced the state of the art regarding the intervention capabilities of autonomous systems. The state of the art in fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, applied to ground environments (both indoor and outdoor), has now reached such a readiness level that it allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption in order to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field for the duration of the mission within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and exploited as observers of underwater fauna, seabeds, shipwrecks, and so on. On the other hand, underwater operations such as object recovery and equipment maintenance are still challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose and configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network bus. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be as limited as with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as energy consumption, heat dissipation and network capabilities have been evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated in a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
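As a rough illustration of the stereo-vision processing at the heart of such a platform, the following sketch computes a disparity map with OpenCV block matching; the image paths and matcher parameters are assumptions, and this generic pipeline only stands in for, rather than reproduces, the thesis's underwater perception stack.

```python
# Minimal stereo disparity sketch with OpenCV block matching.
# Image paths and parameters are illustrative assumptions.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; turbid water typically limits
# usable range, hence a modest disparity search window here.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)   # fixed-point disparity map

# Rescale to 8-bit for visual inspection.
cv2.imwrite("disparity.png", cv2.normalize(
    disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8"))
```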
Abstract:
The Politics of the New Germany takes a new approach to understanding politics in the post-unification Federal Republic. Assuming only elementary knowledge, it focuses on debates and issues in order to help students understand both the workings of Germany's key institutions and some of the key policy challenges facing German politicians. Written in a straightforward style by four experts, each of the chapters draws on a rich variety of real-world examples. Packed with boxed summaries of key points, a guide to further reading and a range of seminar questions for discussion at the end of each chapter, this book highlights both the challenges and opportunities facing policy-makers in such areas as foreign affairs, economic policy, immigration, identity politics and institutional reform. The book also takes a bird's-eye view of the big debates that have defined German politics over time, regardless of which party happens to be in power. It pinpoints three key themes that have characterised German politics over the last sixty years: reconciliation, consensus and transformation. Table of Contents: Introduction; 1. Germany and the Burden of History; 2. Germany's Post-War Development, 1945-1989; 3. Towards German Unity?; 4. A Blockaded System of Government?; 5. The Party System and Electoral Behaviour: The Path to Stable Instability?; 6. Economic Management: The End of the German Model?; 7. Citizenship and Demographics: A Country of Immigration?; 8. The Reform of the Welfare State?; 9. Germany and the European Union: A European Germany or a German Europe?; 10. Foreign and Security Policy: A New Role for the Twenty-First Century?; 11. Conclusion.
Abstract:
A major application of computers has been to control physical processes, in which the computer is embedded within some larger physical system and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular focus on applications to high-speed machinery. Specifically, the research aims to produce a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
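To make the Petri net side of this analysis concrete, here is a minimal sketch of standard Petri net firing rules, using a hypothetical two-step net; it illustrates the kind of formal firing semantics the thesis defines for SFC, not the thesis's own transformation method.

```python
# Minimal Petri net with standard firing rules: a transition is enabled
# when every input place holds at least one token; firing consumes one
# token per input place and produces one per output place.
# Hypothetical example net; not the thesis's SFC transformation itself.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two sequential steps, as in an SFC: step1 --t1--> step2
net = PetriNet({"step1": 1, "step2": 0})
net.add_transition("t1", inputs=["step1"], outputs=["step2"])
if net.enabled("t1"):
    net.fire("t1")
print(net.marking)   # {'step1': 0, 'step2': 1}
```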
Abstract:
The behaviour of control functions in safety-critical software systems is typically bounded to prevent the occurrence of known system-level hazards. These bounds are usually derived through safety analyses and can be implemented through necessary design features. However, the unpredictability of real-world problems can result in changes in the operating context that invalidate the behavioural bounds themselves, for example, unexpected hazardous operating contexts arising from failures or degradation. For highly complex problems it may be infeasible to determine, prior to deployment, the precise desired behavioural bounds of a function that addresses or minimises risk in hazardous operating contexts. This paper presents an overview of the safety challenges associated with such problems and how they might be addressed. A self-management framework is proposed that performs on-line risk management. The features of the framework are shown in the context of employing intelligent adaptive controllers operating within complex and highly dynamic problem domains such as gas-turbine aero engine control. Safety assurance arguments enabled by the framework, and necessary for certification, are also outlined.
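A minimal sketch of the underlying idea of behavioural bounding, with all names and numbers invented for illustration (this is not the paper's framework): a risk monitor tightens the bound applied to an adaptive controller's output as the assessed operating risk rises.

```python
# Hedged sketch of on-line bounding of an adaptive controller's output.
# All names and values (risk_level, BASE_LIMIT) are illustrative
# assumptions, not the paper's self-management framework.

BASE_LIMIT = 100.0   # nominal behavioural bound derived from safety analysis

def risk_adjusted_limit(risk_level: float) -> float:
    """Tighten the behavioural bound as assessed operating risk rises."""
    # risk_level in [0, 1]; at maximum risk the allowed command range halves.
    return BASE_LIMIT * (1.0 - 0.5 * min(max(risk_level, 0.0), 1.0))

def bounded_command(raw_command: float, risk_level: float) -> float:
    """Clamp the adaptive controller's raw output to the current bound."""
    limit = risk_adjusted_limit(risk_level)
    return max(-limit, min(limit, raw_command))

print(bounded_command(120.0, risk_level=0.0))  # 100.0 (nominal bound)
print(bounded_command(120.0, risk_level=1.0))  # 50.0 (degraded context)
```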
Abstract:
The Politics of the New Germany continues to provide the most comprehensive, authoritative and up-to-date textbook on contemporary German politics. The text takes a new approach to understanding politics in the post-unification Federal Republic. Assuming only elementary knowledge, it focuses on a series of the most important debates and issues in Germany today, with the aim of helping students understand both the workings of the country's key institutions and some of the most important policy challenges facing German politicians. For this second edition, the content has been comprehensively updated throughout, augmented by additional fact boxes and data, and features new material on: the grand coalition; the Lisbon treaty; the Constitutional Court; the financial crisis; the reform of social policy; and Afghanistan. Written in a straightforward style by three experts, each of the chapters draws on a rich variety of real-world examples. In doing so, it highlights both the challenges and opportunities facing policy-makers in such areas as foreign affairs, economic policy, immigration, identity politics and institutional reform. The book also takes a bird's-eye view of the big debates that have defined German politics over time, regardless of which political parties happened to be in power. It pinpoints three key themes that have characterised German politics over the last sixty years: reconciliation, consensus and transformation. The book is a comprehensive, yet highly accessible, overview of politics in 21st-century Germany and should be essential reading for students of politics and international relations, as well as of European and German studies.
Abstract:
Premium intraocular lenses (IOLs) aim to surgically correct astigmatism and presbyopia following cataract extraction, optimising vision and eliminating the need for cataract surgery in later years. It is usual to fully correct astigmatism and to provide visual correction for distance and near when prescribing spectacles and contact lenses; however, for correction with the lens implanted during cataract surgery, patients in the UK are required to purchase premium IOLs and pay surgery fees outside the National Health Service. The benefit of using toric IOLs was demonstrated both in standard visual tests and in real-world situations. The orientation of toric IOLs during implantation is critical, and the benefit of using conjunctival blood vessels for alignment was shown. The issue of the centration of IOLs relative to the pupil was also investigated, showing changes with the amount of dilation and on repeat dilation, which must be considered during surgery to optimise the visual performance of premium IOLs. Presbyopia is a global issue of growing importance as life expectancy increases, with no real long-term cure. Despite enhanced lifestyles, changes in diet and improved medical care, presbyopia still presents in modern life as a significant visual impairment. The onset of presbyopia was found to vary with risk factors including alcohol consumption, smoking, UV exposure and even weight, as well as age. A new technique to make the measurement of accommodation more objective and robust was explored, although the need for further design modifications was identified. Because most multifocal IOL designs suffer from dysphotopsia and a lack of intermediate vision, a trifocal IOL was developed and shown to minimise these problems. The current thesis therefore emphasises the challenges of premium IOL surgery and the need for refinement for optimum visual outcomes, in addition to outlining how premium IOLs may provide long-term and successful correction of astigmatism and presbyopia.
Abstract:
Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures and provide further insight into the data. The automatic labelling of such topics derived from social media, however, poses new challenges, since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches that depend on external knowledge sources become less applicable here, since relevant articles or concepts for the extracted topics may not exist in those sources. In this paper we propose to address the problem of automatic labelling of latent topics learned from Twitter as a summarisation problem. We introduce a framework which applies summarisation algorithms to generate topic labels. These algorithms are independent of external sources and rely only on the identification of dominant terms in documents related to the latent topic. We compare the performance of existing state-of-the-art summarisation algorithms. Our results suggest that summarisation algorithms generate topic labels that better capture event-related context than the top-n terms returned by LDA. © 2014 Association for Computational Linguistics.
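As a rough sketch of the dominant-term idea (not the specific summarisation algorithms the paper compares), a topic learned from tweets could be labelled by the most frequent content terms in its associated documents; the documents and stopword list below are illustrative.

```python
# Hedged sketch: label a latent topic by the dominant terms of its top
# documents, a crude stand-in for the summarisation algorithms the paper
# evaluates. Documents and stopword list are illustrative.
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "to", "and", "is", "on"}

def label_topic(topic_documents, label_length=3):
    """Return the most frequent non-stopword terms as the topic label."""
    counts = Counter(
        term
        for doc in topic_documents
        for term in doc.lower().split()
        if term not in STOPWORDS
    )
    return " ".join(term for term, _ in counts.most_common(label_length))

docs = [
    "earthquake hits the city centre",
    "rescue teams respond to earthquake damage",
    "city evacuated after earthquake",
]
print(label_topic(docs))   # e.g. "earthquake city hits"
```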
Abstract:
Radio-frequency identification (RFID) is a popular modern technology proven to deliver a range of value-added benefits that achieve system and operational efficiency as well as cost-effectiveness. The operational characteristics of RFID outperform barcodes in many respects. One of the main challenges for RFID adoption is proving its ability to improve competitiveness. In this paper we examine multiple real-world examples in which RFID technology has been demonstrated to provide significant benefits to industry competitiveness and to enhance the human experience in the service sector. The paper explores and surveys existing value-added applications of RFID systems in industry and the service sector, with particular focus on applications in retail, logistics, manufacturing, healthcare, leisure and the public sector. © 2012 AICIT.
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT. The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which has been typified by "pallet networks" operating on a hub-and-spoke philosophy. The current literature on LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations, each consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, the overall analysis falls short when it comes to the "spoke-terminal" of hub-and-spoke freight distribution systems and its capability for handling, over the "last mile" of the delivery, the diverse and discrete customer profiles of freight that multi-user LTL hub-and-spoke networks typically carry, in particular a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by profile type (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be achieved by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to better meet customer delivery requirements and adapt hub-deployed policies. The study also leverages key operator experience to highlight the main practical implementation challenges of integrating the observed simulation results into the real world. The study concludes that DES can be harnessed as an enabling device to develop a 'guide policy'. This policy needs to be flexible and should be applied in stages, taking into account growing retail exposure.
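A toy discrete-event sketch of the comparison, using SimPy with assumed service times and drop mixes (not the thesis's calibrated model), shows how separated retail and non-retail tours can be modelled against a combined tour:

```python
# Illustrative SimPy sketch of spoke-terminal delivery tours; service
# times and the retail/non-retail mix are assumed values, not the
# thesis's calibrated model.
import simpy

RETAIL_DROP = 40      # minutes per retail drop (booking-in delays assumed)
NON_RETAIL_DROP = 15  # minutes per non-retail drop

def tour(env, name, drops, results):
    for kind in drops:
        yield env.timeout(RETAIL_DROP if kind == "retail" else NON_RETAIL_DROP)
    results[name] = env.now  # record tour completion time

results = {}
# Combined tour interleaves both profiles on one vehicle.
env = simpy.Environment()
env.process(tour(env, "combined", ["retail", "non-retail"] * 4, results))
env.run()

# Dedicated tours run the two profiles on separate vehicles in parallel.
env = simpy.Environment()
env.process(tour(env, "retail-only", ["retail"] * 4, results))
env.process(tour(env, "non-retail-only", ["non-retail"] * 4, results))
env.run()
print(results)   # completion times in minutes for each tour
```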
Abstract:
Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to solve the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific DM software package is discussed, namely an analytical tool for analysing data and summarising the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be applied equally well in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
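As one concrete illustration of DM applied to SE data (an assumed example, not the specific tool the paper discusses), bug reports can be clustered by textual similarity:

```python
# Hedged sketch of one DM-for-SE task of the kind the paper surveys:
# grouping bug reports by textual similarity. The reports and cluster
# count are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "null pointer exception on login",
    "crash with null pointer in login form",
    "UI freezes when resizing window",
    "window resize causes interface freeze",
]
vectors = TfidfVectorizer().fit_transform(reports)   # TF-IDF term weights
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # similar reports share a cluster id, e.g. [0 0 1 1]
```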
Abstract:
One of the major challenges in measuring efficiency in terms of resources and outcomes is the assessment of the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied to time-series datasets, DEA models, by construction, form the reference set for inefficient units (the lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach, which captures both the spatial and the temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively, extracting for each unit only previous decision-making units (DMUs) as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed to a multi-objective mixed-integer linear programming model, which filters peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
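A minimal sketch of the first stage, under assumed single-input/single-output data, solves an input-oriented DEA linear program for each period while admitting only earlier periods as candidate peers:

```python
# Hedged sketch of the first stage only: input-oriented DEA efficiency
# of each period with peers restricted to its own past. Data values are
# illustrative; this is not the paper's full two-stage model.
import numpy as np
from scipy.optimize import linprog

# One unit observed over 4 periods: rows = periods, inputs X, outputs Y.
X = np.array([[10.0], [9.0], [9.5], [8.0]])
Y = np.array([[5.0], [5.5], [5.2], [6.2]])

def dea_score(t):
    """Efficiency of period t against peers drawn from periods 0..t."""
    peers = list(range(t + 1))            # only previous (and current) DMUs
    n = len(peers)
    c = np.zeros(n + 1); c[0] = 1.0       # minimise theta
    A, b = [], []
    for i in range(X.shape[1]):           # sum_j lambda_j * x_ji <= theta * x_ti
        A.append([-X[t, i]] + [X[j, i] for j in peers]); b.append(0.0)
    for r in range(Y.shape[1]):           # sum_j lambda_j * y_jr >= y_tr
        A.append([0.0] + [-Y[j, r] for j in peers]); b.append(-Y[t, r])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for t in range(4):
    print(t, round(dea_score(t), 3))      # period 2 scores below 1 vs its past
```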
Abstract:
This study was commissioned by one of Hungary's largest employers with the aim of working out BI solutions to make the company's operations more efficient. In this framework, the authors first surveyed the current state of HR data-mining research worldwide: which tools are available for predicting and monitoring employee promotion, and how results from social network analysis can be utilised in the field of enterprise security. When real business problems and resources meet the mainstream research of the scientific community, it is always fortunate and often fruitful. The authors are certain that the statements, conclusions and results published in this document will be beneficial for Foxconn, and for other companies, in the near future. Of course, the work is not done: new research perspectives are continually opening up, and a huge amount of information is accumulating in enterprises, waiting to be discovered and analysed. Moreover, the environment in which an enterprise operates is changing dynamically, so companies face new challenges and new types of business problems arise. The authors hope that their research experience will help decision-makers to solve real-world business problems in the future as well.
Abstract:
Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric form. Retrieving information constrained by both spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses and lets applications support composite location-aware searches; for example, in a real estate database: "Find the homes for sale nearest to my current location that have a backyard and whose prices are between $50,000 and $80,000". Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a non-spatial filter, or vice versa), which can incur large performance overheads. At the same time, the amount of geolocation data in databases has grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events; this growth poses data-ingestion challenges for practical GIS databases. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled in MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms on a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial, textual and numeric data are developed to that end. Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
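For reference, a minimal sketch of the baseline two-filter approach on the real estate example, with illustrative listings and the `rtree` package assumed, looks as follows; the dissertation's combined indexes aim to avoid exactly this separation of filters.

```python
# Hedged sketch of the baseline two-filter approach the dissertation
# improves on: a spatial filter (R-tree nearest search) followed by an
# aspatial filter on attributes. Listings are illustrative; requires
# the `rtree` package.
from rtree import index

homes = {
    1: {"loc": (25.76, -80.19), "price": 60000, "backyard": True},
    2: {"loc": (25.77, -80.20), "price": 95000, "backyard": True},
    3: {"loc": (25.80, -80.25), "price": 55000, "backyard": False},
}

idx = index.Index()
for hid, h in homes.items():
    x, y = h["loc"]
    idx.insert(hid, (x, y, x, y))   # points stored as zero-area boxes

me = (25.761, -80.191)
# Spatial filter: k nearest candidates from the R-tree ...
candidates = idx.nearest((me[0], me[1], me[0], me[1]), 3)
# ... then the aspatial filter on the attribute predicates.
hits = [h for h in candidates
        if homes[h]["backyard"] and 50000 <= homes[h]["price"] <= 80000]
print(hits)   # -> [1]
```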
Abstract:
Personalized recommender systems aim to assist users in retrieving and accessing interesting items by automatically acquiring user preferences from historical data and matching items with those preferences. In the last decade, recommendation services have gained great attention due to the problem of information overload. However, despite recent advances in personalization techniques, several critical issues in modern recommender systems have not been well studied. These issues include: (1) understanding the accessing patterns of users (i.e., how to effectively model users' accessing behaviors); (2) understanding the relations between users and other objects (i.e., how to comprehensively assess the complex correlations between users and entities in recommender systems); and (3) understanding the interest change of users (i.e., how to adaptively capture users' preference drift over time). To meet the needs of users in modern recommender systems, it is imperative to provide solutions that address the aforementioned issues and to apply those solutions to real-world applications. The major goal of this dissertation is to provide integrated recommendation approaches that tackle the challenges of the current generation of recommender systems. In particular, three user-oriented aspects of recommendation techniques were studied: understanding accessing patterns, understanding complex relations, and understanding temporal dynamics. To this end, we made three research contributions. First, we presented various personalized user profiling algorithms to capture the click behaviors of users at both coarse and fine granularities; second, we proposed graph-based recommendation models to describe the complex correlations in a recommender system; third, we studied temporal recommendation approaches that capture the preference changes of users by considering both long-term and short-term user profiles. In addition, a versatile recommendation framework was proposed, in which the proposed recommendation techniques are seamlessly integrated. Different evaluation criteria were implemented in this framework for evaluating recommendation techniques in real-world recommendation applications. In summary, the frequent changes of user interests and of the item repository lead to a series of user-centric challenges that are not well addressed in the current generation of recommender systems. My work proposed reasonable solutions to these challenges and provided insights on how to address them using a simple yet effective recommendation framework.
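As a small illustration of the temporal idea (an assumed recency-decay scheme, not the dissertation's actual profiling algorithms), a user profile can weight interactions by a half-life so that short-term interests outweigh stale ones:

```python
# Hedged sketch: weight a user's item interactions by recency so that
# short-term interest shifts outweigh stale clicks. The half-life and
# click events are assumed values.
import math
from collections import defaultdict

HALF_LIFE_DAYS = 7.0   # assumed half-life for interest decay

def decayed_profile(events):
    """events: (item, days_ago) pairs; returns item -> recency-decayed weight."""
    profile = defaultdict(float)
    for item, days_ago in events:
        # Each interaction contributes 2^(-age/half_life), so a week-old
        # click counts half as much as one made today.
        profile[item] += math.exp(-math.log(2) * days_ago / HALF_LIFE_DAYS)
    return dict(profile)

clicks = [("camera", 30), ("camera", 28), ("laptop", 2), ("laptop", 1)]
profile = decayed_profile(clicks)
# Two recent laptop clicks outweigh two month-old camera clicks.
print(sorted(profile.items(), key=lambda kv: -kv[1]))
```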