Abstract:
The following paper proposes a novel application of Skid-to-Turn maneuvers for fixed wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed wing UAVs, following the design of manned aircraft, traditionally employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Commonly overlooked is the effect these maneuvers have on downward-facing body-fixed sensors, which, as a result of bank, point away from the feature during turns. By adopting Skid-to-Turn maneuvers, the aircraft is able to change heading whilst maintaining wings-level flight, thus allowing body-fixed sensors to maintain a downward-facing orientation. Eliminating roll also helps to improve data quality, as sensors are no longer subjected to the swinging motion induced as they pivot about an axis perpendicular to their line of sight. Traditional tracking controllers, which take the indirect approach of capturing ground-based data by flying directly overhead, can also see the feature drift off center due to the steady-state pitch and roll required to stay on course. An Image Based Visual Servo controller is developed to address this issue, allowing features to be tracked directly within the image plane. The performance of the proposed controller is tested against that of a Bank-to-Turn tracking controller driven by GPS-derived cross track error, in a simulation environment developed to model the field of view of a body-fixed camera.
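As an illustration of the control idea only (not the paper's actual controller), the sketch below implements a toy image-based visual servo law for skid-to-turn line tracking: with roll held at zero, the yaw-rate command is driven by the tracked feature's lateral offset and angle in the image plane. The function name, gains and example measurements are assumptions made for illustration.

```python
import numpy as np

def ibvs_skid_to_turn(offset_px, angle_rad, k_offset=0.002, k_angle=0.8,
                      yaw_rate_limit=np.radians(15)):
    """Toy image-based visual servo law for skid-to-turn line tracking.

    offset_px : lateral offset of the tracked feature from the image centre
                (pixels, positive = feature right of centre).
    angle_rad : angle between the feature and the image vertical axis.
    Returns a commanded yaw rate (rad/s); roll is held at zero so the
    downward-facing camera stays pointed at the feature.
    """
    yaw_rate_cmd = k_offset * offset_px + k_angle * angle_rad
    return float(np.clip(yaw_rate_cmd, -yaw_rate_limit, yaw_rate_limit))

# Example: feature 40 px right of centre, tilted 5 degrees in the image.
print(ibvs_skid_to_turn(40.0, np.radians(5.0)))
```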
Abstract:
This study explores the relationship between new venture team composition and new venture persistence and performance over time. We examine team characteristics and new venture performance in a 5-year panel study of 202 new venture teams. Our study makes two contributions. First, we extend earlier research concerning homophily theories of the prevalence of homogeneous teams. Using structural event analysis, we demonstrate that team members’ start-up experience is important in this context. Second, we attempt to reconcile conflicting evidence concerning the influence of team homogeneity on performance by considering the element of time. We hypothesize that higher team homogeneity is positively related to short-term outcomes but is less effective in the longer term. Our results confirm a difference over time. We find that more homogeneous teams are less likely to be higher performing in the long term. However, we find no relationship between team homogeneity and short-term performance outcomes.
Abstract:
In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events, such as a fight, riot or emergency. Crowd related information can also provide important business intelligence such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, their need for hundreds or thousands of training frames per camera, and the requirement that each camera be trained separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions which are unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through the use of camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras for testing without further training. A pre-trained system could then be used as a ‘turn-key’ solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment which currently exist.
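A minimal sketch of the scene-invariance idea described above, assuming a simple local-feature pipeline: raw blob features are normalised by a calibration-derived pixels-per-square-metre scale before regression, so a model trained on one view can be applied to an unseen camera. The feature set, regressor and numbers are illustrative, not the chapter's implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def normalised_features(blob_areas_px, blob_perimeters_px, px_per_m2):
    """Scale raw local features by camera calibration so that the same
    physical crowd size yields similar feature values in any view."""
    areas_m2 = np.asarray(blob_areas_px) / px_per_m2
    perims_m = np.asarray(blob_perimeters_px) / np.sqrt(px_per_m2)
    return np.array([areas_m2.sum(), perims_m.sum(), len(areas_m2)])

# Train on calibrated views (normalised features -> ground-truth counts)...
X_train = np.array([normalised_features([5000, 3000], [400, 300], 900.0),
                    normalised_features([12000], [700], 900.0)])
y_train = np.array([4, 6])
model = LinearRegression().fit(X_train, y_train)

# ...then apply to a new, unseen camera using only its calibration.
x_new = normalised_features([8000, 2000, 1500], [500, 250, 200], 1400.0)
print(model.predict(x_new.reshape(1, -1)))
```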
Abstract:
The structures of two hydrated proton-transfer compounds of 4-piperidinecarboxamide (isonipecotamide) with the isomeric heteroaromatic carboxylic acids indole-2-carboxylic acid and indole-3-carboxylic acid, namely 4-carbamoylpiperidinium indole-2-carboxylate dihydrate (1) and 4-carbamoylpiperidinium indole-3-carboxylate hemihydrate (2), have been determined at 200 K. Crystals of both 1 and 2 are monoclinic, space groups P21/c and P2/c respectively, with Z = 4 in cells having dimensions a = 10.6811(4), b = 12.2017(4), c = 12.5456(5) Å, β = 96.000(4)° (1) and a = 15.5140(4), b = 10.2908(3), c = 9.7047(3) Å, β = 97.060(3)° (2). Hydrogen-bonding in 1 involves a primary cyclic interaction comprising complementary cation amide N-H…O(carboxyl) anion and anion hetero N-H…O(amide) cation hydrogen bonds [graph set R22(9)]. Secondary associations, which also involve the water molecules of solvation, give a two-dimensional network structure which includes weak water O-H…π interactions. In the three-dimensional hydrogen-bonded structure of 2, there are classic centrosymmetric cyclic head-to-head hydrogen-bonded amide-amide interactions [graph set R22(8)] as well as lateral cyclic amide-O linked amide-amide extensions [graph set R24(8)]. The anions and the water molecule, which lies on a twofold rotation axis, are involved in secondary extensions.
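Since both cells are monoclinic, their volumes follow the standard relation V = a·b·c·sin β. The snippet below simply evaluates this for the parameters quoted above; it is a routine crystallographic calculation, not a result taken from the paper.

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume (cubic angstroms) of a monoclinic cell:
    V = a * b * c * sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Cell parameters quoted in the abstract (angstroms, degrees).
print(monoclinic_volume(10.6811, 12.2017, 12.5456, 96.000))  # compound (1)
print(monoclinic_volume(15.5140, 10.2908, 9.7047, 97.060))   # compound (2)
```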
Abstract:
Resilient organised crime groups survive and prosper despite law enforcement activity, criminal competition and market forces. Corrupt police networks, like any other crime network, must contain resiliency characteristics if they are to continue operation and avoid being closed down through detection and arrest of their members. This paper examines the resilience of a large corrupt police network, namely The Joke which operated in the Australian state of Queensland for a number of decades. The paper uses social network analysis tools to determine the resilient characteristics of the network. This paper also assumes that these characteristics will be different to those of mainstream organised crime groups because the police network operates within an established policing agency rather than as an independent entity hiding within the broader community.
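A minimal sketch of the kind of social network analysis measures used to probe resilience: density, degree centrality, and whether the network fragments when a central member is removed (for example through arrest). The edge list below is a toy example, not data from The Joke network.

```python
import networkx as nx

# Toy network; the study analysed The Joke's membership, not this graph.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
              ("D", "E"), ("E", "F"), ("C", "F")])

print("density:", nx.density(G))
dc = nx.degree_centrality(G)
print("degree centrality:", dc)

# Resilience probe: remove the most central member and check whether the
# remaining network stays connected.
central = max(dc, key=dc.get)
H = G.copy()
H.remove_node(central)
print("still connected after removing", central, ":", nx.is_connected(H))
```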
Abstract:
The safety risk management process describes the systematic application of management policies, procedures and practices to the activities of communicating, consulting, establishing the context, and identifying, analysing, evaluating, treating, monitoring and reviewing risk. This process is undertaken to provide assurances that the risks of a particular unmanned aircraft system activity have been managed to an acceptable level. The safety risk management process and its outcomes form part of the documented safety case necessary to obtain approvals for unmanned aircraft system operations. It also guides the development of an organisation’s operations manual and is a primary component of an organisation’s safety management system. The aim of this chapter is to provide existing risk practitioners with a high-level introduction to some of the unique issues and challenges in the application of the safety risk management process to unmanned aircraft systems. The scope is limited to safety risks associated with the operation of unmanned aircraft in the civil airspace system and over inhabited areas. The structure of the chapter is based on the safety risk management process as defined by the international risk management standard ISO 31000:2009 and draws on aviation safety resources provided by the International Civil Aviation Organization, the Federal Aviation Administration and the U.S. Department of Defense. References to relevant aviation safety regulations, programs of research and fielded systems are also provided.
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space, (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise, and (3) that the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example may show that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
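For reference, a minimal sketch of the classic LSA pipeline that the abstract critiques: a term-document matrix compressed by rank-k truncated SVD (the l2-optimal approximation). The authors' proposed l1-based replacement algorithms are not reproduced here; the tiny matrix and the choice of k are illustrative.

```python
import numpy as np

# Tiny term-document matrix (rows = terms, columns = documents).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Classic LSA: rank-k truncated SVD as lossy (l2-optimal) compression.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation of A
doc_factors = np.diag(s[:k]) @ Vt[:k, :]      # documents in the latent space

print(np.round(A_k, 2))
print(np.round(doc_factors, 2))
```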
Abstract:
Intuitively, any ‘bag of words’ approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document’s initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur’s search engine substrate) the default query model was replaced by the stationary distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
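A minimal sketch of the central step, assuming a simple smoothed co-occurrence matrix: row-normalise term co-occurrence counts into a transition matrix and take its stationary distribution (here via power iteration) as the query or document model. The counts and the smoothing constant are illustrative, not the paper's setup.

```python
import numpy as np

def stationary_distribution(C, tol=1e-12, max_iter=10_000):
    """Stationary distribution of the Markov chain whose transition matrix
    is the row-normalised co-occurrence matrix C (assumed ergodic)."""
    P = C / C.sum(axis=1, keepdims=True)          # row-stochastic transitions
    pi = np.full(P.shape[0], 1.0 / P.shape[0])    # start from a uniform state
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

# Toy term co-occurrence counts (terms t0..t2); the small smoothing constant
# keeps every transition possible, which is what makes the chain ergodic.
C = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 5.0]]) + 0.1
print(stationary_distribution(C))
```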
Abstract:
Most one-round key exchange protocols provide only weak forward secrecy at best. Furthermore, one-round protocols with strong forward secrecy often break badly when faced with an adversary who can obtain ephemeral keys. We provide a characterisation of how strong forward secrecy can be achieved in one-round key exchange. Moreover, we show that protocols exist which provide strong forward secrecy and remain secure with weak forward secrecy even when the adversary is allowed to obtain ephemeral keys. We provide a compiler to achieve this for any existing secure protocol with weak forward secrecy.
Abstract:
Introducing engineering-based model-eliciting experiences in the elementary curriculum is a new and increasingly important domain of research by mathematics, science, technology, and engineering educators. Recent research has raised questions about the context of engineering problems that are meaningful, engaging, and inspiring for young students. In the present study an environmental engineering activity was implemented in two classes of 11-year-old students in Cyprus. The problem required students to develop a procedure for selecting among alternative countries from which to buy water. Students created a range of models that adequately solved the problem although not all models took into account all of the data provided. The models varied in the number of problem factors taken into consideration and also in the different approaches adopted in dealing with the problem factors. At least two groups of students integrated into their models the environmental aspect of the problem (energy consumption, water pollution) and further refined their models. Results indicate that engineering model-eliciting activities can be introduced effectively into the elementary curriculum, providing rich opportunities for students to deal with engineering contexts and to apply their learning in mathematics and science to solving real-world engineering problems.
Abstract:
On obstacle-cluttered construction sites where heavy equipment is in use, safety issues are of major concern. The main objective of this paper is to develop a framework with algorithms for obstacle avoidance and path planning based on real-time three-dimensional job site models to improve safety during equipment operation. These algorithms have the potential to prevent collisions between heavy equipment vehicles and other on-site objects. In this study, algorithms were developed for image data acquisition, real-time 3D spatial modeling, obstacle avoidance, and shortest path finding and were all integrated to construct a comprehensive collision-free path. Preliminary research results show that the proposed approach is feasible and has the potential to be used as an active safety feature for heavy equipment.
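A minimal sketch of the shortest-path component on a 2D occupancy grid using A* with 4-connectivity; the paper's real-time 3D site modelling and image acquisition stages are not reproduced, and the grid below is a toy example.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected.
    Returns a list of cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan distance
    tie = itertools.count()                                   # heap tie-breaker
    frontier = [(h(start), next(tie), 0, start, None)]
    parents, best_g = {}, {start: 0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in parents:                                   # already expanded
            continue
        parents[node] = parent
        if node == goal:                                      # rebuild the path
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, node))
    return None

# Toy job-site grid: 1s mark obstacles (e.g. stockpiles or other equipment).
site = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(site, (0, 0), (2, 0)))
```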
Abstract:
In the third year of the Link the Wiki track, the focus shifted to anchor-to-BEP (best entry point) link discovery. The participants were encouraged to utilize different technologies to resolve the issue of focused link discovery. Apart from the 2009 Wikipedia collection, the Te Ara collection was introduced for the first time in INEX. For the Link the Wiki tasks, 5000 file-to-file topics were randomly selected and 33 anchor-to-BEP topics were nominated by the participants. The Te Ara collection does not contain hyperlinks, and the task was to cross-link the entire collection. A GUI tool for self-verification of the linking results was distributed, which helped participants verify the location of the anchor and the BEP. The assessment tool and the evaluation tool were revised to improve efficiency. Submission runs were evaluated against the Wikipedia ground truth and a manual result set, respectively. Focus-based evaluation was undertaken using a new metric. Evaluation results are presented and link discovery approaches are described.
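A minimal sketch of a naive title-matching baseline for anchor-to-BEP link discovery, assuming orphan text and a title-to-page index: each occurrence of a known page title becomes a proposed anchor, with the best entry point defaulting to the start of the target page. This is illustrative only, not the track's tooling or any participant's system.

```python
import re

def propose_anchors(text, title_to_id):
    """Tiny title-matching baseline: every occurrence of a known page title
    becomes an anchor, and the proposed best entry point (BEP) is offset 0
    of the target page."""
    links = []
    for title, target_id in title_to_id.items():
        for m in re.finditer(r"\b" + re.escape(title) + r"\b", text, re.IGNORECASE):
            links.append({"anchor": (m.start(), m.end()),
                          "target": target_id,
                          "bep_offset": 0})
    return links

# Hypothetical title index and orphan page text.
corpus_titles = {"kiwi": "tearaw:12", "Treaty of Waitangi": "tearaw:87"}
page = "The kiwi is flightless; the Treaty of Waitangi was signed in 1840."
for link in propose_anchors(page, corpus_titles):
    print(link)
```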
Abstract:
This paper gives an overview of the INEX 2009 Ad Hoc Track. The main goals of the Ad Hoc Track were three-fold. The first goal was to investigate the impact of the collection scale and markup, by using a new collection that is again based on the Wikipedia but is over 4 times larger, with longer articles and additional semantic annotations. For this reason the Ad Hoc Track tasks stayed unchanged, and the Thorough Task of INEX 2002–2006 returned. The second goal was to study the impact of more verbose queries on retrieval effectiveness, by using the available markup as structural constraints—now using both the Wikipedia’s layout-based markup, as well as the enriched semantic markup—and by the use of phrases. The third goal was to compare different result granularities by allowing systems to retrieve XML elements, ranges of XML elements, or arbitrary passages of text. This investigates the value of the internal document structure (as provided by the XML markup) for retrieving relevant information. The INEX 2009 Ad Hoc Track featured four tasks. For the Thorough Task, a ranked list of results (elements or passages) by estimated relevance was needed. For the Focused Task, a ranked list of non-overlapping results (elements or passages) was needed. For the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came. For the Best in Context Task, a single starting point (element start tag or passage start) for each article was needed. We discuss the setup of the track and the results for the four tasks.
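To illustrate what the Focused Task's non-overlap requirement means in practice, here is a minimal sketch, assuming results are character ranges within articles: lower-ranked passages that overlap an already-kept passage from the same article are dropped. This is an illustration, not the track's official evaluation code.

```python
def focus(ranked_results):
    """Make a ranked list of (article_id, start, end, score) passages
    non-overlapping within each article, as the Focused Task requires:
    lower-ranked results that overlap an earlier kept result are dropped."""
    kept, taken = [], {}                  # taken: article_id -> kept ranges
    for article, start, end, score in ranked_results:
        spans = taken.setdefault(article, [])
        if all(end <= s or start >= e for s, e in spans):
            spans.append((start, end))
            kept.append((article, start, end, score))
    return kept

run = [("art1", 0, 120, 9.1), ("art1", 100, 200, 8.7),   # overlaps the first
       ("art1", 300, 380, 8.2), ("art2", 50, 90, 7.9)]
print(focus(run))
```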