606 results for Cylindres (enregistrements sonores) -- Catalogues
Abstract:
Title from spine.
Abstract:
We consider the statistical problem of catalogue matching from a machine learning perspective, with the goal of producing probabilistic outputs and using all available information. A framework is provided that unifies two existing approaches to producing probabilistic outputs in the literature: one based on combining distribution estimates and the other based on combining probabilistic classifiers. We apply both to the problem of matching the HI Parkes All Sky Survey radio catalogue, which has large positional uncertainties, to the much denser SuperCOSMOS catalogue, which has much smaller positional uncertainties. We demonstrate the utility of probabilistic outputs through a controllable completeness and efficiency trade-off and by identifying objects with a high probability of being rare. Finally, possible biasing effects in the outputs of these classifiers are highlighted and discussed.
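To make the completeness/efficiency trade-off concrete, here is a minimal sketch (not the paper's actual classifier) in which each candidate pair carries a match probability and a single threshold is swept; the synthetic scores and labels are illustrative assumptions only.

import numpy as np

# Synthetic match probabilities: ~10% of candidate pairs are true matches,
# and true matches receive higher (but overlapping) scores.
rng = np.random.default_rng(0)
n = 5000
is_match = rng.random(n) < 0.10
p = np.clip(np.where(is_match,
                     rng.normal(0.70, 0.15, n),
                     rng.normal(0.35, 0.15, n)), 0.0, 1.0)

def completeness_efficiency(p, is_match, threshold):
    # Completeness: fraction of true matches retained above the threshold.
    # Efficiency (purity): fraction of accepted pairs that are true matches.
    accepted = p >= threshold
    tp = int((accepted & is_match).sum())
    return tp / max(int(is_match.sum()), 1), tp / max(int(accepted.sum()), 1)

# Raising the threshold trades completeness for efficiency.
for t in (0.3, 0.5, 0.7):
    c, e = completeness_efficiency(p, is_match, t)
    print(f"threshold={t:.1f}  completeness={c:.2f}  efficiency={e:.2f}")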
Abstract:
An emerging issue in the field of astronomy is the integration, management and utilization of databases from around the world to facilitate scientific discovery. In this paper, we investigate the application of the machine learning techniques of support vector machines and neural networks to the problem of amalgamating catalogues of galaxies from two disparate data sources: radio and optical. Formulating this as a classification problem presents several challenges, including dealing with a highly unbalanced data set. Unlike the conventional approach to the problem (which is based on a likelihood ratio), machine learning does not require density estimation and is shown here to provide a significant improvement in performance. We also report experiments that explore the importance of the radio and optical data features for the matching problem.
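As a hedged illustration of one way to handle the unbalanced classification the abstract describes, the sketch below trains a class-weighted support vector machine on synthetic stand-in features (the real radio/optical features and data are not reproduced here):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Placeholder features standing in for radio/optical attributes
# (e.g. positional offsets, magnitudes); ~5% positives to mimic imbalance.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))
y = (rng.random(n) < 0.05).astype(int)
X[y == 1] += 1.5  # give true matches some separable signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" reweights errors inversely to class frequency,
# so the rare "match" class is not swamped by the majority class.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=2))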
Abstract:
This thesis presents two algorithms intended to improve the accuracy of direction-of-arrival estimation for sound sources and their echoes. The first algorithm, called the source elimination method, improves the direction-of-arrival estimation of echoes that are buried in noise. The second, called phase-focused Multiple Signal Classification, uses the phase information at each frequency to determine the direction of arrival of broadband sources. Combining the two algorithms makes it possible to localize echoes whose power is -17 dB relative to the main source, down to an echo-to-noise ratio of -15 dB. The thesis also presents experimental measurements that confirm the results obtained in simulations.
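For orientation, here is a minimal sketch of standard narrowband MUSIC on a uniform linear array; the thesis's source-elimination and phase-focused broadband variants are not public, so this shows only the baseline the work builds on, with all array parameters assumed:

import numpy as np

M, K = 8, 2                      # sensors, sources
d = 0.5                          # element spacing in wavelengths
true_deg = np.array([-20.0, 35.0])
rng = np.random.default_rng(2)

def steering(theta_deg):
    # Steering vectors of a uniform linear array for the given angles.
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

snapshots = 200
A = steering(true_deg)
S = rng.normal(size=(K, snapshots)) + 1j * rng.normal(size=(K, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots   # sample covariance
w, V = np.linalg.eigh(R)
En = V[:, :M - K]                # noise subspace (smallest eigenvalues)
grid = np.linspace(-90, 90, 721)
pseudo = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# Pick the K largest local maxima of the pseudospectrum as DOA estimates.
inner = pseudo[1:-1]
peaks = np.where((inner > pseudo[:-2]) & (inner > pseudo[2:]))[0] + 1
best = peaks[np.argsort(pseudo[peaks])[-K:]]
print("estimated DOAs (deg):", np.sort(grid[best]))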
Abstract:
Every year, music piracy worldwide causes several billion dollars in economic losses, job losses and lost worker earnings, as well as millions of dollars in lost tax revenue. Most music piracy is due to the rapid growth and ease of current technologies for copying, sharing, manipulating and distributing musical data [Domingo, 2015], [Siwek, 2007]. Audio watermarking has been proposed to protect authors' rights and to localize the instants at which an audio signal has been tampered with. In this thesis, we propose using the bio-inspired sparse spike-train representation (spikegram) to design a new method for localizing tampering in audio signals, a new copyright protection method, and finally a new perceptual attack, based on the spikegram, against audio watermarking systems. We first propose a technique for localizing tampering in audio signals, combining a modified spread spectrum (MSS) method with a sparse representation. We use an adapted perceptual matching pursuit technique (PMP [Hossein Najaf-Zadeh, 2008]) to generate a sparse representation (spikegram) of the input audio signal that is invariant to time shifts [E. C. Smith, 2006] and that takes into account masking phenomena as observed in hearing. An authentication code is embedded in the coefficients of the spikegram representation, which are then combined with the masking thresholds. The watermarked signal is resynthesized from the modified coefficients, and the resulting signal is transmitted to the decoder. At the decoder, to identify a tampered segment of the audio signal, the authentication codes of all intact segments are analyzed; if a code cannot be detected correctly, the corresponding segment is known to have been tampered with. We watermark according to the spread-spectrum principle (the MSS scheme) in order to obtain a high capacity in the number of embedded watermark bits. Even when the encoder and decoder are desynchronized, our method can still detect tampered parts. Compared with the state of the art, our approach has the lowest error rate in detecting tampered parts. We used the mean opinion score (MOS) test to measure the quality of the watermarked signals, and we evaluate the semi-fragile watermarking method by the bit error rate (the number of erroneous bits divided by the total number of embedded bits) under several attacks. The results confirm the superiority of our approach for localizing tampered parts in audio signals while preserving signal quality. We then propose a new technique for protecting audio signals, based on the spikegram representation and using two dictionaries (TDA, the Two-Dictionary Approach). The spikegram is used to encode the host signal with a dictionary of gammatone filters. For watermarking, two different dictionaries are used, selected according to the input bit to be embedded and the content of the signal.
Our approach finds the appropriate gammatones (called watermarking kernels) based on the value of the bit to be embedded, and embeds the watermark bits in the phase of the watermarking gammatones. Moreover, the TDA is shown to be error-free in the no-attack case. It is demonstrated that decorrelating the watermarking kernels enables the design of a highly robust audio watermarking method. Experiments showed the best robustness for the proposed method, compared with several recent techniques, when the watermarked signal is corrupted by MP3 compression at 32 kbit/s with a payload of 56.5 bps. We also studied the robustness of the watermark when the new USAC (Unified Speech and Audio Coding) codecs at 24 kbps are used; the payload is then between 5 and 15 bps. Finally, we use spikegrams to propose three new attack methods and compare them with recent attacks such as 32 kbps MP3 and 24 kbps USAC. These attacks comprise the PMP attack, the inaudible-noise attack and the sparse replacement attack. In the PMP attack, the watermarked signal is represented and resynthesized with a spikegram. In the inaudible-noise attack, inaudible noise is generated and added to the spikegram coefficients. In the sparse replacement attack, in each segment of the signal, the spectro-temporal features of the signal (the time spikes) are found using the spikegram, and similar time spikes are replaced with one another. To compare the effectiveness of the proposed attacks, we apply them against a spread-spectrum watermark decoder. It is shown that the sparse replacement attack reduces the normalized correlation of the spread-spectrum decoder by a larger factor than when the decoder is attacked by MP3 (32 kbps) or 24 kbps USAC compression.
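Since the PMP codec and spikegram dictionaries are not publicly available, the following is only a toy illustration of the spread-spectrum embedding and blind correlation detection principle the thesis builds on, applied to generic placeholder coefficients:

import numpy as np

rng = np.random.default_rng(3)
N = 1024
host = rng.normal(size=N)               # placeholder for spikegram coefficients
pn = rng.choice([-1.0, 1.0], size=N)    # pseudo-noise sequence shared with decoder
alpha = 0.1                             # embedding strength (inaudibility proxy)

def embed(coeffs, bit):
    # Add the PN sequence with a sign chosen by the watermark bit.
    return coeffs + alpha * (1.0 if bit else -1.0) * pn

def detect(coeffs_marked):
    # Blind correlation detector: the host acts as noise around +/- alpha.
    return int(np.dot(coeffs_marked, pn) / N > 0.0)

marked = embed(host, 1)
attacked = marked + rng.normal(scale=0.05, size=N)  # mild additive distortion
print("clean:", detect(marked), " after attack:", detect(attacked))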
Abstract:
Real-time audio/video streaming sites such as YouTube have become very popular. Downloading audio/video files consumes a significant amount of Internet bandwidth. Using low-bitrate codecs compresses the size of the transmitted files so that less bandwidth is consumed, at the cost of a reduction in the quality of what is transmitted. This loss of quality leads to perceptible defects in the files, known as compression artifacts. Applying a post-processing algorithm to the audio files could increase the perceived quality of the transmitted music by correcting certain artifacts at the receiver, without consuming additional bandwidth. To enhance the subjective quality of audio files, it is first necessary to determine which characteristics degrade perceptual quality. The goal of this project is therefore to develop an algorithm capable of locating and correcting, non-intrusively, an artifact caused by discontinuities and inconsistencies in the harmonics that degrades quality in audio signals compressed at low bitrates (8 to 12 kilobits per second).
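A crude stand-in for the kind of detector the project describes is to flag frames where the spectrum changes abruptly (spectral flux); the project's actual algorithm is not public, and the signal below is synthetic:

import numpy as np

def spectral_flux(x, frame=1024, hop=512):
    # Magnitude spectra of overlapping windowed frames, then the L2
    # difference between consecutive frames; large values suggest
    # harmonic discontinuities of the kind low-bitrate codecs introduce.
    frames = np.array([x[i:i + frame] * np.hanning(frame)
                       for i in range(0, len(x) - frame, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.sqrt((np.diff(mags, axis=0) ** 2).sum(axis=1))

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
x[fs // 2:] = np.sin(2 * np.pi * 470 * t[fs // 2:])  # abrupt harmonic jump
flux = spectral_flux(x)
print("most discontinuous frame transition:", int(np.argmax(flux)))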
Abstract:
Through media such as newspapers, letterbox flyers, corporate brochures and television, we are regularly confronted with descriptions of conventional (bricks 'n' mortar style) services. These representations vary in the terminology used, the depth of the description, the aspects of the service that are characterised, and their applicability to candidate service requestors. Existing service catalogues (such as the Yellow Pages) provide little relief for service requestors from the burdensome task of discovering, comparing and substituting services. Add to this environment the rapidly evolving area of web services, with its associated surfeit of standards, and the result is a considerably fragmented approach to the description of services. It leaves the reality of the Semantic Web somewhat clouded.

Let's consider service description briefly, before discussing our concerns with existing approaches. The act of describing is performed prior to advertising. This simple fact presents an interesting paradox: services cannot be described exactly before advertisement. This does not mean they cannot be described comprehensively. By "exactly", we mean that the context provided by a service requestor (and their service needs) will alter the description of the service that is presented to the discoverer. For example, a service provider who operates a cinema wants to describe the price of their service. Say the advertised price is $15; they also want to state that pensioner and student discounts of 50% are available. A customer (i.e. a service requestor) uses the cinema web site to purchase tickets online. They find the movie of their choice at a time that suits them. However, it is not until some context is provided by the requestor, for instance that they are a pensioner, that the exact price is determined. The same applies to a service requestor who purchases multiple tickets, perhaps on behalf of other people. The disconnect between when the service is described and when a requestor provides context introduces challenges to the description process. A service provider would be ill-advised to offer independent descriptions representing all the possible permutations of a single service; the descriptive effort would be prohibitive.
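The cinema example can be phrased as a tiny, purely hypothetical pricing function: the advertised description is static, while the exact price is computable only once the requestor supplies context:

def quote(base_price=15.0, *, concession=None, tickets=1):
    # 50% discount for pensioners and students, per the advertised terms.
    discount = 0.5 if concession in ("pensioner", "student") else 0.0
    return tickets * base_price * (1.0 - discount)

print(quote())                        # advertised price, no context: 15.0
print(quote(concession="pensioner"))  # context supplied at request time: 7.5
print(quote(tickets=3))               # multiple tickets: 45.0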
Abstract:
This article is an analysis and contextualisation of 'Super Vanitas', a video installation by Stephen Russell shown at Boxcopy ARI, Brisbane. It discusses the significance of the painting 'Death of Marat' (J.L. David, 1793) to the work and describes the methodological processes revealed in the work.
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information was released to the public, usually for specific purposes (e.g. train timetables, or the release of planning applications through websites). This situation is, however, changing rapidly. Regulatory frameworks, such as the freedom of information legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One result of this legislation and of changing attitudes towards open data has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (US), data.gov.uk (UK) and data.gov.au (Australia) at the federal government level, and datasf.org (San Francisco) and data.london.gov.uk (London) at the municipal level. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data. It is concerned with three essential questions. First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can data be accessed and collected?
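As a sketch of the kind of cross-querying an open API enables: several of the portals named above are (or have been) backed by CKAN, which exposes a JSON search API. The endpoint below and its continued availability are assumptions, not guarantees:

import requests

# CKAN-style dataset search; swap BASE for any CKAN-backed portal.
BASE = "https://data.gov.uk/api/3/action/package_search"
resp = requests.get(BASE, params={"q": "air quality", "rows": 5}, timeout=10)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])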
Abstract:
Research background: The general public is predominantly unaware of the complexities and skills involved in the fashion supply chain (design, manufacture and retail) of couture/bespoke garments. As cited in McMahon and Morley (2011), "While a high price tag is widely accepted as a necessary element of luxury products (Fionda & Moore, 2009), this must be accompanied by a story that gives the items intrinsic as well as extrinsic value (Keller, 2009)." Research question: Is it possible to simulate a couture fashion studio environment in a non-traditional public space in order to produce and promote the processes involved in couture designs, each with their own story and aligned to the aesthetic of six collaborating high-profile couture fashion retailers? Research contribution: The Couture Academy project allowed the team to curate the story behind the couture design and supply chain process. It was an experimental, curated, 'hot-house' fashion design project undertaken in real time to create one-off couture garments, inspired by key seasonal fashion trends as determined by leading Westfield retailers. The project was industry based, with Westfield Chermside as the launch pad for six QUT fashion students to experiment with design nuances aligned to renowned national fashion industry retailers: Cue, Dissh, Kitten D'Amour, Mombasa and Pink Mint. Industry mentors were assigned to each student designer in order to heighten the design challenge. The exhibition consisted of a pop-up couture workshop based at Westfield Chermside. A complete fashion studio (sewing machines, pattern-cutting tables and mannequins) was set up for a seven-day period in the foyer of the shopping centre, with the public watching as the design process unfolded in real time. The final design outcomes were paraded at the Southbank Precinct before a prominent industry and media panel, with the winner receiving a $2000 prize to fund a research trip to an international fashion capital of their choice. Research significance: This curated fashion project was funded by Westfield Group Australia. "It was the most successful season launch Westfield Chermside has ever had, both from an average volume for exposure perspective and in terms of the level of engagement with retailers and shoppers," said Laura Walls, Westfield Public Relations Consultant. Significant media coverage was generated, including three full pages of editorial in Brisbane's Sunday Mail with an estimated publicity value of $95,000, and public exposure through the live project/exhibition was estimated at 7,000 people over the seven days.
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2014 evaluation campaign, which consisted of three tracks. The Interactive Social Book Search Track investigated user information-seeking behavior when interacting with various sources of information, for realistic task scenarios, and how the user interface impacts search and the search experience. The Social Book Search Track investigated the relative value of authoritative metadata and user-generated content for search and recommendation, using a test collection with data from Amazon and LibraryThing, including user profiles and personal catalogues. The Tweet Contextualization Track investigated helping a user understand a tweet by providing a short background summary generated from relevant Wikipedia passages aggregated into a coherent whole. INEX 2014 was an exciting year for INEX, in which we ran our workshop as part of the CLEF labs for the third time. This paper gives an overview of all the INEX 2014 tracks, their aims and tasks, the test collections that were built, and the participants, and gives an initial analysis of the results.
Abstract:
Similar to most other creative industries, the evolution of the music industry is heavily shaped by media technologies. This was equally true in 1999, when the global recorded music industry had experienced two decades of continuous growth, largely driven by the rapid transition from vinyl records to Compact Discs. The transition encouraged avid music listeners to purchase much of their music collections all over again in order to listen to their favourite music with 'digital sound'. As a consequence of this successful product innovation, recorded music sales (by unit measure) more than doubled between the early 1980s and the end of the 1990s. It was against this backdrop that the first peer-to-peer file sharing service was developed and released to the mainstream music market in 1999 by the college student Shawn Fanning. The service was named Napster, and it marks the beginning of an era that is now a classic example of how an innovation can disrupt an entire industry and make large swathes of existing industry competences obsolete. File sharing services such as Napster, followed by a range of similar services in its path, reduced physical unit sales in the music industry to levels that had not been seen since the 1970s. The severe impact of the internet on physical sales shocked many music industry executives, who spent much of the 2000s vigorously trying to reverse the decline and make the disruptive technologies go away. In the end, they learned that their efforts were to no avail, and the impact on the music industry proved to be transformative, irreversible and, to many music industry professionals, devastating. Thousands of people lost their livelihoods, and large and small music companies folded or were forced into mergers or acquisitions. But as always during periods of disruption, the past 15 years have also been very innovative, spurring a plethora of new music business models. These new business models have mainly emerged outside the music industry, and the innovators have often been required to be both persuasive and persistent in order to gain acceptance from the risk-averse and cash-poor music industry establishment. Apple was one such change agent: in 2003 it became the first company to open up a functioning and legal market for online music. The iTunes Music Store was the first online retail outlet able to offer the music catalogues of all the major music companies; it used an entirely novel pricing model, and it allowed consumers to de-bundle the music album and buy only the songs they actually liked. Songs had previously been bundled by physical necessity as discs or cassettes, but with the iTunes Music Store, the institutionalized album bundle slowly started to fall apart. The consequences had an immediate impact on music retailing, and within just a few years many brick-and-mortar record stores were forced out of business in markets across the world. The transformation also had disruptive consequences beyond music retailing, redefining music companies' organizational structures, work processes and routines, as well as professional roles. The iTunes Music Store was in one sense a disruptive innovation, but it was at the same time relatively incremental, since the major labels' positions and power structures remained largely unscathed. The rights holders still controlled their intellectual properties, and the structures that governed the royalties paid per song sold were predictable, transparent and in line with established music industry practices.
Abstract:
The study is dedicated to the Russian poet and prose writer Anatolii Borisovich Mariengof (1897–1962). Mariengof, “the last dandy of the Republic”, was one of the leaders and main theoreticians of the Russian Imaginist poetic group. For his contemporaries, he was an Imaginist par excellence. His Imaginist principles, in theory and practice, are applied to the study of his first fictional novel, Cynics (1928), which served as an epilogue to his Imaginist period (1918–1928). The novel was not published in the Soviet Union until 1988. The method used in the study is a conceptual and literary-historical reading, making use of the contemporary semiotic understanding of cultural mechanisms and of intertextual analysis. Three main concepts are used throughout the study: dandy, montage and catachresis. In the first chapter, the history, practice and theory of Russian Imaginism are analyzed from the point of view of dandyism. The Imaginist theatricalisation of life is juxtaposed with a thematic analysis of their poetry, and Imaginist dandyism emerges as a catachrestic category in culture. The second chapter examines Imaginist poetic theory, discussed in the context of the montage principle that defined post-revolutionary culture in Soviet Russia. Imaginist montage can be divided into three main theoretical paradigms: S. Yesenin's “technical montage” (reminiscent of Dadaist collage), V. Shershenevich's “nominative montage” (catalogues of images) and Anatolii Mariengof's “catachrestic montage”. The final chapter deals with Mariengof's first fictional novel, Cynics. It begins with the complex publication history of the novel, as well as its relation to the Imaginist poetic principles and to the history of the poetic movement. Cynics is, essentially, an Imaginist montage novel: the fragmentary interplay of fictional and documentary material follows the Imaginist montage principle. The chapter concludes with a thematic analysis of the novel, concentrating on its description of the October Revolution.
Abstract:
The Bernard Bernstein collection documents the professional activities of Bernard Bernstein, a jeweler, metalsmith, writer, and teacher. The collection includes artifacts, correspondence, documents, manuscripts, printed materials, photographs, other visual materials, and sketches. The larger part of the collection comprises materials dealing with the artistic side of Bernard Bernstein. These materials are found throughout the collection and consist of artifacts produced during his schooling at City College (Series I: Artifacts), various jewelry designs produced by Bernard Bernstein for commercial use (Series III: Designs), certificates and awards (Series V: General), and materials pertaining to a number of shows and exhibits that Bernard Bernstein took part in (Series IV: Exhibitions and Art Catalogues). Other materials include documents pertaining to Bernard Bernstein's education and professional career as a teacher (Series II: City College of the City University of New York; Series V: General), and his articles in professional journals (Series VI: Printed Materials). In some cases materials are accompanied by Bernard Bernstein's notes explaining the significance and provenance of the documents.
Abstract:
Revolution at Home! Visual Changes in Everyday Life in Finland in the Late 1960s and Early 1970s

The purpose of my research was to investigate the visual changes in private homes in Finland during the 1960s and 1970s. The 1960s is often described as a turning point in Finnish life, a time when the society's previous agricultural orientation began to give way first to an industrial orientation and then, by the end of the 1970s, to a service orientation. My title refers to three elements of the transition period: the question of daily life; the timeframe; and the visual changes observable in private homes, which in retrospect signalled a kind of revolution in the social orientation. Those changes appeared not only in colours and designs but also in the forms and materials of household objects. My premise is that analysing interiors from a historical perspective can reveal valuable information about Finnish society and social attitudes, information that might easily escape attention otherwise. I have used the time-honoured method of collecting narratives: as far back as Aristotle, formulating narratives has been a means of gaining knowledge. By collecting and classifying narratives about the 1960s and 1970s, it is possible to gain new insight into these important decades. The archetypal 1960s narrative, involving student demonstrations and young people's efforts to improve society, is well known. Less well known is the narrative that relates the changes going on in daily life. The study focuses mainly on fabrics, porcelain ware and the use of plastics. Marimekko's style is especially important when following innovations in the 1950s, 1960s and 1970s. Porcelain production at the Arabia factory was another element that had a great influence on the look of Finnish homes and kitchens, and a further widespread phenomenon of the late 1960s and early 1970s was the use of plastics in many different forms. Further evidence was sought in Anttila department store mail-order catalogues, which displayed products marketed on a large scale, as well as in magazines such as Avotakka. The terminal point of the visual evolution is real homes, as seen in the questionnaire "Homemade". I have used the 800 pages of oral-history text that respondents of the Finnish Literature Society wrote about their first homes in the 1960s. I also used archival material on actual homes in Helsinki from the archives of the Helsinki City Museum. The basic story is the elite narrative, which was produced by students in the 1960s. My main narrative from the same period is the visual change in everyday life in the late 1960s and early 1970s. I have classified the main narrative of visual change into four subcategories: the narrative of national ideas, the narrative of a better standard of living, the narrative of objects in the culture of everyday life, and the narrative of changing colour and form.