999 results for Conrad


Relevance:

20.00%

Publisher:

Abstract:

The cores and dredges described in this report were taken from the R/V Robert Conrad during Cruise 8 (November 1963 to August 1964) by the Lamont Geological Observatory, Columbia University. A total of 140 cores and dredges were recovered and are available at Lamont-Doherty Earth Observatory for sampling and study.

Relevance:

20.00%

Publisher:

Abstract:

The cores described in this report were taken during the R/V Robert Conrad Cruise 06 from May until June 1963 by the Lamont Geological Observatory, Columbia University. A total of 5 cores were recovered and are available at Lamont-Doherty Earth Observatory for sampling and study.

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this report is to describe and analyse all the outlets of the F&B department of the Conrad Algarve, a five-star luxury resort in the Algarve. Since F&B is present throughout the guest's entire stay, it is a department under constant observation and evaluation. The lack of qualification and training makes it difficult to satisfy the high expectations of a client who purchases a luxury experience, given the level of service actually delivered by each section. To a greater or lesser degree, this reality is felt in productivity, in motivation and in overall customer service. In outlets that rely heavily on temporary workers, the decline from the high quality standard is noticeable to guests. Thus, alongside the lack of training, staff turnover also becomes a real impediment to achieving a stable and consistent level of luxury service delivery.

Relevance:

10.00%

Publisher:

Abstract:

While extensive literature exists on knowledge-based urban development (KBUD) focusing on large metropolitan cities, there is a paucity of literature looking into similar developments in small regional towns. The major aim of the paper is to examine the nature and potential for building knowledge precincts in regional towns. Through a review of extant literature on knowledge precincts, five key value elements and principles for development are identified. These principles are then tested and applied to a case study of the small town of Cooroy in Noosa, Australia. The Cooroy Lower Mill Site and its surroundings are the designated location for what may be called a community-based creative knowledge precinct. The opportunities and challenges for setting up a creative knowledge precinct in Cooroy were examined. The study showed that there is a potential to develop Cooroy with the provision of cultural and learning facilities, partnerships with government, business and educational institutions, and networking with other creative and knowledge precincts in the region. However, there are also specific challenges relating to the development of a knowledge precinct within the regional town and these relate to critical mass, competition and governance.

Relevance:

10.00%

Publisher:

Abstract:

This architectural and urban design project was conducted as part of the Brisbane Airport Corporation's master-planning Atelier, run in conjunction with City Lab. This creation and innovation event brought together approximately 80 designers, associated professionals, and both local and state government representatives to research concepts for the future development and planning of the Brisbane airport site. The Team Delta research project explored the development of a new precinct cluster around the existing international terminal building, with a view to reinforcing the sense of place and arrival. The development zone explores options for developing a subtropical character through landscape elements such as open plazas, tourist attractions, links to existing adjacent waterways, and localised rapid transport options. The proposal tests the possibilities of developing a cultural hub in conjunction with transport infrastructure and the airport terminal(s).

Relevance:

10.00%

Publisher:

Abstract:

It hasn’t been a good year for media barons. Actually, it’s not been a great century. In 2007 Baron Conrad Black was sent to jail in the US for defrauding his shareholders. Silvio Berlusconi’s grip on the Italian media hasn’t prevented a steady flow of allegations of sleaze and scandal since 2009, which have reduced him to a global laughing stock. And since July 2011, we have seen the dizzying fall of Rupert Murdoch and his son, James, from their positions of unquestioned (and unquestionable) authority at the helm of the world’s most powerful media empire.

Relevance:

10.00%

Publisher:

Abstract:

The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty in comparison to directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear algebra-centered algorithms from R to C++ becomes straightforward. The algorithms retain their overall structure as well as readability, while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
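To make the conversion pattern concrete, here is a minimal sketch of a single Kalman filter predict/update step written with RcppArmadillo. It is an illustration only, not the benchmark code from the study; the function name, argument names and the use of inv_sympd for the gain computation are assumptions made here.

// kalman_step.cpp -- illustrative sketch; compile from R with
// Rcpp::sourceCpp("kalman_step.cpp") once RcppArmadillo is installed.
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// One predict/update step of a linear Kalman filter.
// x: state estimate, P: state covariance, z: observation,
// F: state transition, H: observation model, Q/R: noise covariances.
// [[Rcpp::export]]
Rcpp::List kalman_step(const arma::vec& x, const arma::mat& P,
                       const arma::vec& z, const arma::mat& F,
                       const arma::mat& H, const arma::mat& Q,
                       const arma::mat& R) {
  // Predict
  arma::vec x_pred = F * x;
  arma::mat P_pred = F * P * F.t() + Q;

  // Update
  arma::vec y = z - H * x_pred;                       // innovation
  arma::mat S = H * P_pred * H.t() + R;               // innovation covariance
  arma::mat K = P_pred * H.t() * arma::inv_sympd(S);  // Kalman gain

  arma::vec x_new = x_pred + K * y;
  arma::mat P_new = (arma::eye(x.n_elem, x.n_elem) - K * H) * P_pred;

  // Rcpp converts the Armadillo objects back to R matrices and vectors,
  // preserving the bidirectional link with the host R session.
  return Rcpp::List::create(Rcpp::Named("x") = x_new,
                            Rcpp::Named("P") = P_new);
}

From R, Rcpp::sourceCpp() exposes kalman_step() as an ordinary R function, so the surrounding filtering loop can remain in R while the linear algebra runs as compiled code.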

Relevance:

10.00%

Publisher:

Abstract:

Modelling video sequences by subspaces has recently shown promise for recognising human actions. Subspaces are able to accommodate the effects of various image variations and can capture the dynamic properties of actions. Subspaces form a non-Euclidean, curved Riemannian manifold known as a Grassmann manifold. Inference on manifold spaces is usually achieved by embedding the manifolds in higher-dimensional Euclidean spaces. In this paper, we instead propose to embed the Grassmann manifolds into reproducing kernel Hilbert spaces and then tackle the problem of discriminant analysis on such manifolds. To achieve efficient machinery, we propose graph-based local discriminant analysis that utilises within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, respectively. Experiments on the KTH, UCF Sports, and Ballet datasets show that the proposed approach obtains marked improvements in discrimination accuracy in comparison to several state-of-the-art methods, such as the kernel version of the affine hull image-set distance, tensor canonical correlation analysis, spatial-temporal words and the hierarchy of discriminative space-time neighbourhood features.
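For background, one commonly used way to realise such an embedding (not necessarily the exact formulation adopted in this paper) is the projection kernel between subspaces, combined with graph-weighted scatter matrices for the local discriminant step; a sketch under those assumptions, in LaTeX notation: each video is summarised by an orthonormal basis $X \in \mathbb{R}^{D \times p}$, i.e. a point on the Grassmann manifold $\mathcal{G}_{D,p}$, and the projection kernel
\[
  k(X, Y) = \lVert X^{\top} Y \rVert_F^{2}
\]
maps these points into a reproducing kernel Hilbert space. Writing $\phi_i$ for the feature-space image of the $i$-th subspace, graph-based local discriminant analysis can then be posed as a trace-ratio problem,
\[
  W^{*} = \arg\max_{W} \frac{\operatorname{tr}\!\left(W^{\top} S_b W\right)}{\operatorname{tr}\!\left(W^{\top} S_w W\right)},
  \qquad
  S_w = \sum_{i,j} G^{w}_{ij} (\phi_i - \phi_j)(\phi_i - \phi_j)^{\top},
  \qquad
  S_b = \sum_{i,j} G^{b}_{ij} (\phi_i - \phi_j)(\phi_i - \phi_j)^{\top},
\]
where $G^{w}$ and $G^{b}$ are the within-class and between-class similarity graphs that encode intra-class compactness and inter-class separability.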

Relevance:

10.00%

Publisher:

Abstract:

Background subtraction is a fundamental low-level processing task in numerous computer vision applications. The vast majority of algorithms process images on a pixel-by-pixel basis, where an independent decision is made for each pixel. A general limitation of such processing is that rich contextual information is not taken into account. We propose a block-based method capable of dealing with noise, illumination variations, and dynamic backgrounds, while still obtaining smooth contours of foreground objects. Specifically, image sequences are analyzed on an overlapping block-by-block basis. A low-dimensional texture descriptor obtained from each block is passed through an adaptive classifier cascade, where each stage handles a distinct problem. A probabilistic foreground mask generation approach then exploits block overlaps to integrate interim block-level decisions into final pixel-level foreground segmentation. Unlike many pixel-based methods, ad-hoc postprocessing of foreground masks is not required. Experiments on the difficult Wallflower and I2R datasets show that the proposed approach obtains on average better results (both qualitatively and quantitatively) than several prominent methods. We furthermore propose the use of tracking performance as an unbiased approach for assessing the practical usefulness of foreground segmentation methods, and show that the proposed approach leads to considerable improvements in tracking accuracy on the CAVIAR dataset.
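The integration step can be pictured as a voting scheme: every overlapping block contributes its interim decision to each pixel it covers, and the per-pixel vote ratio is thresholded. The sketch below illustrates only that stage, as a simplified stand-in for the probabilistic mask generation described above; the texture descriptor and classifier cascade are omitted, and the block size, step and threshold are arbitrary placeholders.

// overlap_vote.cpp -- illustrative sketch of turning block-level decisions
// into a pixel-level foreground mask via overlapping blocks.
#include <cstddef>
#include <vector>

// blockIsForeground holds the interim decisions for blocks of size
// bsize x bsize placed every `step` pixels (row-major block grid).
// Returns a binary mask of size rows x cols (row-major).
std::vector<unsigned char> overlapVote(const std::vector<bool>& blockIsForeground,
                                       std::size_t rows, std::size_t cols,
                                       std::size_t bsize, std::size_t step,
                                       double threshold = 0.5) {
  std::vector<int> votes(rows * cols, 0);     // foreground votes per pixel
  std::vector<int> coverage(rows * cols, 0);  // blocks covering each pixel

  const std::size_t blocksPerRow = (cols - bsize) / step + 1;
  const std::size_t blocksPerCol = (rows - bsize) / step + 1;

  for (std::size_t by = 0; by < blocksPerCol; ++by) {
    for (std::size_t bx = 0; bx < blocksPerRow; ++bx) {
      const bool fg = blockIsForeground[by * blocksPerRow + bx];
      for (std::size_t y = by * step; y < by * step + bsize; ++y) {
        for (std::size_t x = bx * step; x < bx * step + bsize; ++x) {
          ++coverage[y * cols + x];
          if (fg) ++votes[y * cols + x];
        }
      }
    }
  }

  // A pixel is labelled foreground when the fraction of covering blocks
  // that flagged it exceeds the threshold; uncovered pixels stay background.
  std::vector<unsigned char> mask(rows * cols, 0);
  for (std::size_t i = 0; i < rows * cols; ++i) {
    if (coverage[i] > 0 &&
        static_cast<double>(votes[i]) / coverage[i] > threshold) {
      mask[i] = 1;
    }
  }
  return mask;
}

Because neighbouring blocks overlap, a single noisy block decision is outvoted by its neighbours, which is what yields smoother foreground contours without ad-hoc postprocessing.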

Relevance:

10.00%

Publisher:

Abstract:

In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, a Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
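To make the patch-level encoding concrete, the l1-minimisation variant can be written out as follows; the dictionary, regularisation weight and pooling regions below are generic placeholders rather than the exact configuration used in the paper. For a vectorised local patch $x_i \in \mathbb{R}^{d}$ and a dictionary $D \in \mathbb{R}^{d \times K}$, the sparse code is
\[
  \alpha_i = \arg\min_{\alpha} \; \tfrac{1}{2}\lVert x_i - D\alpha \rVert_2^{2} + \lambda \lVert \alpha \rVert_1 .
\]
The codes of the $N_j$ patches falling in region $R_j$ are average-pooled,
\[
  r_j = \frac{1}{N_j} \sum_{i \in R_j} \alpha_i ,
\]
and the region descriptors are concatenated into the overall face descriptor $f = [\, r_1^{\top}, \dots, r_M^{\top} \,]^{\top}$. Verification then reduces to comparing the descriptors of the two faces, for example by thresholding a distance between them.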