269 results for Bear (Ship)
Abstract:
Free surface flow past a two-dimensional semi-infinite curved plate is considered, with emphasis given to solving for the shape of the resulting wave train that appears downstream on the surface of the fluid. This flow configuration can be interpreted as applying near the stern of a wide blunt ship. For steady flow in a fluid of finite depth, we apply the Wiener-Hopf technique to solve a linearised problem, valid for small perturbations of the uniform stream. Weakly nonlinear results found using a forced KdV equation are also presented, as are numerical solutions to the fully nonlinear problem, computed using a conformal mapping and a boundary integral technique. By considering different families of shapes for the semi-infinite plate, it is shown how the amplitude of the waves can be minimised. For plates that increase in height as a function of the direction of flow, reach a local maximum, and then point slightly downwards at the point at which the free surface detaches, it appears the downstream wave train can be eliminated entirely.
Abstract:
A number of learning problems can be cast as an Online Convex Game: on each round, a learner makes a prediction x from a convex set, the environment plays a loss function f, and the learner’s long-term goal is to minimize regret. Algorithms have been proposed by Zinkevich, when f is assumed to be convex, and Hazan et al., when f is assumed to be strongly convex, that have provably low regret. We consider these two settings and analyze such games from a minimax perspective, proving minimax strategies and lower bounds in each case. These results prove that the existing algorithms are essentially optimal.
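As an illustration of the convex setting studied above, Zinkevich's algorithm is projected online gradient descent with step size decaying like 1/√t, which attains O(√T) regret. A minimal sketch (the loss sequence, feasible-set radius, and step-size constant below are invented for illustration, not taken from the paper):

```python
import numpy as np

def online_gradient_descent(grad_fns, radius=1.0, dim=2):
    """Zinkevich-style projected online gradient descent (sketch).

    grad_fns: one gradient oracle per round.
    Plays points from the Euclidean ball of the given radius,
    using the step size eta_t = radius / sqrt(t).
    """
    x = np.zeros(dim)
    plays = []
    for t, grad in enumerate(grad_fns, start=1):
        plays.append(x.copy())
        eta = radius / np.sqrt(t)
        x = x - eta * grad(x)
        norm = np.linalg.norm(x)
        if norm > radius:           # project back onto the ball
            x = x * (radius / norm)
    return plays

# Toy run: every round's loss is f(x) = ||x - e1||^2 on the unit ball.
target = np.array([1.0, 0.0])
grads = [lambda x: 2 * (x - target) for _ in range(200)]
plays = online_gradient_descent(grads, radius=1.0, dim=2)
print(np.round(plays[-1], 2))   # final iterate approaches the target
```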
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d=VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n∙choose(n-1,≤d-1)/choose(n,≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d=1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to a O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
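The density bound quoted above, written out (with choose(n, ≤d) denoting the partial binomial sum):

```latex
\frac{n \binom{n-1}{\le d-1}}{\binom{n}{\le d}} \;<\; d,
\qquad \text{where } \binom{n}{\le d} \;=\; \sum_{i=0}^{d} \binom{n}{i} .
```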
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between O(√T) and O(log T). Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
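For the strongly convex endpoint of this interpolation, the step size η_t = 1/(Ht) (H the strong-convexity constant) yields O(log T) regret. A toy sketch of that regime; the loss sequence, grid for the comparator, and constants below are invented for illustration:

```python
import numpy as np

def sc_ogd_regret(T, H=2.0):
    """Online gradient descent with eta_t = 1/(H*t), the step size used
    for H-strongly convex losses (sketch). Losses are f_t(x) = (x - c_t)^2,
    which is 2-strongly convex, with random targets c_t in [-1, 1]."""
    rng = np.random.default_rng(0)
    c = rng.uniform(-1, 1, size=T)
    x, losses = 0.0, []
    for t in range(1, T + 1):
        losses.append((x - c[t - 1]) ** 2)
        x -= (1.0 / (H * t)) * 2 * (x - c[t - 1])   # gradient step
    # Regret against the best fixed point, approximated on a fine grid.
    best = min(np.sum((u - c) ** 2) for u in np.linspace(-1, 1, 201))
    return float(np.sum(losses) - best)

# Regret stays logarithmic in T rather than growing like sqrt(T).
print(sc_ogd_regret(1000), sc_ogd_regret(2000))
```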
Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the "ideal" algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
Abstract:
In the ocean science community, researchers have begun employing novel sensor platforms as integral pieces in oceanographic data collection, which have significantly advanced the study and prediction of complex and dynamic ocean phenomena. These innovative tools are able to provide scientists with data at unprecedented spatiotemporal resolutions. This paper focuses on the newly developed Wave Glider platform from Liquid Robotics. This vehicle produces forward motion by harvesting abundant natural energy from ocean waves, and provides a persistent ocean presence for detailed ocean observation. This study aims to determine a kinematic model for offline planning that accurately estimates the vehicle’s speed for a desired heading and set of environmental parameters. Given the significant wave height, ocean surface and subsurface currents, and wind speed and direction, we formulate a system identification problem that predicts the vehicle’s speed over a range of possible headings.
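System identification of this kind can be sketched as a regression from environmental features to observed speed. The feature set, coefficients, and noise model below are entirely hypothetical stand-ins, not the paper's actual Wave Glider model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical features: wave height, current speed, wind speed,
# and heading/wind alignment (cosine of the relative angle).
X = np.column_stack([
    rng.uniform(0.5, 3.0, n),            # significant wave height (m)
    rng.uniform(0.0, 0.5, n),            # surface current speed (m/s)
    rng.uniform(0.0, 10.0, n),           # wind speed (m/s)
    np.cos(rng.uniform(0, 2 * np.pi, n)) # heading/wind alignment
])
true_w = np.array([0.4, 0.3, 0.02, 0.15])           # invented coefficients
speed = X @ true_w + 0.2 + rng.normal(0, 0.05, n)   # noisy observed speed

# Ordinary least squares with an intercept column recovers the model.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, speed, rcond=None)
print(np.round(w, 2))   # fitted coefficients followed by the intercept
```

Once fitted, the model gives a cheap speed estimate for any candidate heading, which is what an offline planner needs.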
Abstract:
The OED reminds us as surely as Ovid that a labyrinth is a “structure consisting of a number of intercommunicating passages arranged in bewildering complexity, through which it is difficult or impossible to find one’s way without guidance”. Both Shaun Tan’s The Arrival (2006) and Matt Ottley’s Requiem for a Beast: A Work for Image, Word and Music (2007) mark a kind of labyrinthine watershed in Australian children’s literature. Deploying complex, intercommunicating logics of story and literacy, these books make high demands of their reader but also offer guidance for the successful navigation of their stories; for their protagonists as surely as for readers. That the shared logic of navigation in each book is literacy as a privileged form of meaning-making is not surprising in the sense that within “a culture deeply invested in myths of individualism and self-sufficiency, it is easy to see why literacy is glorified as an attribute of individual control and achievement” (Williams and Zenger 166). The extent to which these books might be read as exemplifying desired norms of contemporary Australian culture seems to be affirmed by the fact of Tan and Ottley winning the Australian “Picture Book of the Year” prize awarded by the Children’s Book Council of Australia in 2007 and 2008 respectively. However, taking its cue from Ottley’s explicit intertextual use of the myth of Theseus and from Tan’s visual rhetoric of lostness and displacement, this paper reads these texts’ engagement with tropes of “literacy” in order to consider the ways in which norms of gender and culture seemingly circulated within these texts might be undermined by constructions of “nation” itself as a labyrinth that can only partly be negotiated by a literate subject.
In doing so, I argue that these picture books, to varying degrees, reveal a perpetuation of the “literacy myth” (Graff 12) as a discourse of safety and agency but simultaneously bear traces of Ariadne’s story, wherein literacy alone is insufficient for safe navigation of the labyrinth of culture.
Abstract:
Women are substantially under-represented in the professoriate in Australia with a ratio of one female professor to every three male professors. This gender imbalance has been an ongoing concern with various affirmative action programs implemented in universities but to limited effect. Hence, there is a need to investigate the catalysts for and inhibitors to women’s ascent to the professoriate. This investigation focussed on women appointed to the professoriate between 2005, when a research quality assessment was first proposed, and 2008. Henceforth, these women are referred to as “New Women Professors”. The catalysts and inhibitors in these women’s careers were investigated through an electronic survey and focus group interviews. The survey was administered to new women professors (n=255) and new men professors (n=240) to enable a comparison of responses. However, only women participated in focus group discussions (n=21). An analysis of the survey and interview data revealed that the most critical catalysts for women’s advancement to the professoriate were equal employment opportunities and mentoring. Equal opportunity initiatives provided women with access to traditionally male-dominated forums. Mentoring gave women an insider perspective on the complexity of academia and the politics of the academy. The key inhibitors to women’s career advancement were negative discrimination, the culture of the boys’ club, the tension between personal and professional life, and isolation. Negative discrimination and the boys’ club are problematic because they favour men and marginalise women. The tension between personal and professional life is a particular concern for women who bear children and typically assume the major role in a family for child rearing. Isolation was a concern for both women and men with isolation appearing to increase after ascent to the professoriate. 
Knowledge of the significant catalysts and inhibitors provides a pragmatic way to orient universities towards redressing the gender balance in the professoriate.
Abstract:
Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBP-TOP). The overcomplete basis set is learnt from training data in which only normal items are observed. In the detection process, given a new observation, we compute the sparse coefficients using the Dantzig Selector algorithm, proposed in the compressed sensing literature. The reconstruction errors are then computed and used to detect abnormal items. Our approach can detect both local and global abnormal events. We evaluate our algorithm on the UCSD Abnormality Datasets for local anomaly detection, where it outperforms current state-of-the-art approaches, and we also obtain promising results for rapid escape detection using the PETS2009 dataset.
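The reconstruction-error idea can be sketched in a few lines. Note the simplifications: orthogonal matching pursuit stands in for the Dantzig Selector, and synthetic feature vectors stand in for LBP-TOP descriptors; the dictionary sizes are arbitrary:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over
    dictionary D (a simple stand-in for the Dantzig Selector)."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        cols = D[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
# "Normal" dictionary: 40 atoms all drawn from a 5-dimensional subspace.
basis = rng.normal(size=(50, 5))
D = basis @ rng.normal(size=(5, 40))
D /= np.linalg.norm(D, axis=0)

def recon_error(y, k=5):
    x = omp(D, y, k)
    return np.linalg.norm(y - D @ x) / np.linalg.norm(y)

normal  = basis @ rng.normal(size=5)   # lies in the normal subspace
anomaly = rng.normal(size=50)          # generic vector: poorly represented
print(recon_error(normal), recon_error(anomaly))
```

A threshold on the relative reconstruction error then separates normal observations (near-zero error) from abnormal ones (large error).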
Abstract:
The role of human rights in environmental governance is increasingly gaining attention. This is particularly the case in relation to the challenge of climate change, where there is growing recognition of a real threat to human rights. This chapter argues in favour of greater reference to human rights principles in environmental governance. It refers to the experiences of Torres Strait Islanders to demonstrate the impact of climate change on human rights, and the many benefits which can be gained from a greater consideration of human rights norms in the development of strategies to combat climate change. The chapter also argues that a human rights perspective can help address the underlying injustice of climate change: that it is the people who have contributed least to the problem who will bear the heaviest burden of its effects.
Abstract:
The increasing capability of mobile devices and social networks to gather contextual and social data has led to increased interest in context-aware computing for mobile applications. This paper explores ways of reconciling two different viewpoints of context, representational and interactional, that have arisen respectively from technical and social science perspectives on context-aware computing. Through a case study in agile ridesharing, the importance of dynamic context control, historical context and broader context is discussed. We build upon earlier work that has sought to address the divide by further explicating the problem in the mobile context and expanding on the design approaches.
Abstract:
The World Health Organisation has highlighted the urgent need to address the escalating global public health crisis associated with road trauma. Low-income and middle-income countries bear the brunt of this, and rapid increases in private vehicle ownership in these nations present new challenges to authorities, citizens, and researchers alike. Human factors play a major role in road safety. In China, human factors have been implicated in more than 90% of road crashes, with speeding identified as the primary cause (Wang, 2003). However, research investigating the factors that influence driving speeds in China is lacking (WHO, 2004). To help address this gap, we present qualitative findings from group interviews conducted with 35 Beijing car drivers in 2008. Some themes arising from data analysis showed strong similarities with findings from highly-motorised nations (e.g., UK, USA, and Australia) and include issues such as driver definitions of ‘speeding’ that appear to be aligned with legislative enforcement tolerances, factors relating to ease/difficulty of speed limit compliance, and the modifying influence of speed cameras. However, unique differences were evident, some of which, to our knowledge, are previously unreported in the research literature. Themes included issues relating to an expressed lack of understanding about why speed limits are necessary and a perceived lack of transparency in traffic law enforcement and use of associated revenue. The perception of an unfair system seemed related to issues such as differential treatment of certain drivers and the large amount of individual discretion available to traffic police when administering sanctions. Additionally, a wide range of strategies to overtly avoid detection for speeding and/or the associated sanctions were reported.
These strategies included the use of in-vehicle speed camera detectors, covering or removing vehicle licence number plates, and using personal networks of influential people to reduce or cancel a sanction. These findings have implications for traffic law, law enforcement, driver training, and public education in China. While not representative of all Beijing drivers, we believe that these research findings offer unique insights into driver behaviour in China.
Abstract:
This paper presents a comprehensive study to find the most efficient bitrate requirement for delivering mobile video that optimizes bandwidth while maintaining a good user viewing experience. In the study, forty participants were asked to choose the lowest quality video that would still provide a comfortable, long-term viewing experience, knowing that higher video quality is more expensive and bandwidth intensive. This paper proposes the lowest pleasing bitrates and corresponding encoding parameters for five different content types: cartoon, movie, music, news and sports. It also explores how the lowest pleasing quality is influenced by content type, image resolution, bitrate, and user gender, prior viewing experience, and preference. In addition, it analyzes the trajectory of users’ progression while selecting the lowest pleasing quality. The findings reveal that the lowest bitrate requirement for a pleasing viewing experience is much higher than that of the lowest acceptable quality. Users’ criteria for the lowest pleasing video quality are related to the video’s content features, as well as its usage purpose and the user’s personal preferences. These findings can give video providers guidance on what quality to offer to please mobile users.
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space, (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise, and (3) that the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example shows that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
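The SVD step that claim (3) refers to is a rank-k truncation of the term-document matrix. A toy sketch with an invented four-term, four-document matrix built from two obvious topics:

```python
import numpy as np

# Tiny term-document matrix (terms x docs); the counts are made up.
A = np.array([
    [2, 1, 0, 0],   # "ship"
    [1, 2, 0, 0],   # "boat"
    [0, 0, 2, 1],   # "regret"
    [0, 0, 1, 2],   # "convex"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k "semantic" approximation

# Document similarity in the reduced space: docs 0 and 1 share a topic,
# docs 0 and 2 do not.
docs = np.diag(s[:k]) @ Vt[:k, :]
def cos(i, j):
    return docs[:, i] @ docs[:, j] / (
        np.linalg.norm(docs[:, i]) * np.linalg.norm(docs[:, j]))
print(round(cos(0, 1), 2), round(cos(0, 2), 2))
```

In this tiny example the rank-2 factors do line up with the two topics; the article's point is that such alignment cannot be taken for granted as the document space grows.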
Abstract:
This paper presents a practical framework to synthesize multi-sensor navigation information for localization of a rotary-wing unmanned aerial vehicle (RUAV) and estimation of unknown ship positions when the RUAV approaches the landing deck. The estimation performance of the visual tracking sensor can also be improved through integrated navigation. Three different sensors (inertial navigation, Global Positioning System, and visual tracking sensor) are utilized complementarily to perform the navigation tasks for automatic landing. An extended Kalman filter (EKF) is developed to fuse data from the various navigation sensors and provide reliable navigation information. The performance of the fusion algorithm has been evaluated using real ship motion data. Simulation results suggest that the proposed method can be used to construct a practical navigation system for an RUAV-ship landing system.
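The measurement-fusion idea can be sketched in its simplest form: a 1-D linear Kalman filter combining a constant-velocity motion model with GPS-like position fixes. All noise parameters and the scenario below are invented for illustration; the paper's EKF handles the full nonlinear, multi-sensor problem:

```python
import numpy as np

# 1-D constant-velocity model: state = [position, velocity].
dt = 0.1
F = np.array([[1, dt], [0, 1]])   # state transition
H = np.array([[1.0, 0.0]])        # the sensor measures position only
Q = 1e-4 * np.eye(2)              # process noise (assumed)
R = np.array([[0.25]])            # measurement noise variance (assumed)

def kalman_step(x, P, z):
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
true_pos, true_vel = 0.0, 1.0
for _ in range(200):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0, 0.5)])   # noisy position fix
    x, P = kalman_step(x, P, z)
print(np.round(x, 2))   # estimated [position, velocity] near the truth
```

The filter recovers an unmeasured quantity (velocity) from noisy position fixes alone; an EKF generalises the same predict/update cycle to nonlinear models and multiple sensors.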