874 results for Convergence of media
Abstract:
This paper presents ongoing research on integrating process automation and process management support in the context of media production. This has been addressed through a holistic software-engineering approach applied to media production modelling, to ensure design correctness, completeness and effectiveness. The focus of the research and development has been to enhance metadata management throughout the process, in a similar fashion to that achieved in Decision Support Systems (DSS), in order to facilitate well-grounded business decisions. The paper sets out the aims, objectives and methodology deployed, describes the solution in some detail, and presents preliminary conclusions and planned future work.
Abstract:
Dual Carrier Modulation (DCM) was chosen as the higher data rate modulation scheme for MB-OFDM (Multiband Orthogonal Frequency Division Multiplexing) in the UWB (Ultra-Wide Band) radio platform ECMA-368. ECMA-368 has been chosen as the physical implementation for high data rate Wireless USB (W-USB) and Bluetooth 3.0. In this paper, different demapping methods for the DCM demapper are presented, namely Soft Bit, Maximum Likelihood (ML) Soft Bit and Log Likelihood Ratio (LLR). Frequency diversity and Channel State Information (CSI) are further techniques used to enhance these demapping methods. The system performance of these DCM demapping methods, simulated in realistic multi-path environments, is presented and compared.
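To make the demapping idea concrete, the sketch below computes per-bit max-log LLRs for one DCM group, weighting the two tones by their channel estimates (CSI). The 4-PAM Gray map and the spreading matrix follow the usual ECMA-368 DCM form but are used here purely for illustration; the gains and noise level are invented for the example, and the exact bit-to-symbol ordering should be taken from the standard.

```python
import numpy as np
from itertools import product

# Assumed DCM-style spreading of two Gray-mapped 4-PAM values onto two tones.
# The matrix below follows the ECMA-368 DCM form, but is used here purely for
# illustration; consult the standard for the exact bit-to-symbol mapping.
PAM = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray-mapped 4-PAM
SCALE = 1.0 / np.sqrt(10.0)

def dcm_map(bits):
    """Map 4 bits to the symbol pair (y1, y2) carried on two separated tones."""
    x1 = PAM[tuple(bits[0:2])]
    x2 = PAM[tuple(bits[2:4])]
    y1 = SCALE * (2 * x1 + x2)
    y2 = SCALE * (x1 - 2 * x2)
    return y1, y2

def dcm_llr(r1, r2, h1, h2, noise_var):
    """Max-log LLRs for the 4 bits of one DCM group.

    r1, r2    : received samples on the two tones
    h1, h2    : channel estimates (CSI) for the two tones
    noise_var : noise variance, assumed equal on both tones
    """
    # Enumerate all 16 bit hypotheses and their CSI-weighted Euclidean metrics.
    metrics = {}
    for bits in product((0, 1), repeat=4):
        y1, y2 = dcm_map(bits)
        d = abs(r1 - h1 * y1) ** 2 + abs(r2 - h2 * y2) ** 2
        metrics[bits] = d / noise_var
    llrs = np.zeros(4)
    for i in range(4):
        m0 = min(m for b, m in metrics.items() if b[i] == 0)
        m1 = min(m for b, m in metrics.items() if b[i] == 1)
        llrs[i] = m1 - m0          # positive LLR favours bit = 0
    return llrs

# Example: send the group 1011 over two differently faded tones and demap it.
bits = (1, 0, 1, 1)
h1, h2 = 0.9, 0.4                  # frequency diversity: two different gains
y1, y2 = dcm_map(bits)
rng = np.random.default_rng(0)
r1 = h1 * y1 + 0.05 * rng.standard_normal()
r2 = h2 * y2 + 0.05 * rng.standard_normal()
print(dcm_llr(r1, r2, h1, h2, noise_var=0.05 ** 2))
```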
Abstract:
In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities, and a cavity domain, are solved to around 10 digits of accuracy with a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
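The representation of the scattered field by fundamental solutions, combined with a least-squares fit of the boundary condition, can be illustrated in a few lines. The sketch below applies the idea to the simplest possible geometry, a sound-soft unit disk hit by a plane wave; it does not include the corner-adapted basis functions that give the paper its exponential convergence, and the wavenumber, radii and point counts are illustrative choices rather than values from the paper.

```python
import numpy as np
from scipy.special import hankel1

# Minimal method-of-fundamental-solutions sketch: sound-soft scattering of a
# plane wave from the unit disk. The scattered field is represented by a
# combination of Helmholtz fundamental solutions placed on an interior circle,
# and the coefficients are found from a least-squares fit of the Dirichlet
# condition u_scat = -u_inc on the boundary.
k = 20.0                       # wavenumber (illustrative)
n_src, n_bdy = 80, 240         # source points and boundary collocation points

t_src = 2 * np.pi * np.arange(n_src) / n_src
t_bdy = 2 * np.pi * np.arange(n_bdy) / n_bdy
src = 0.7 * np.exp(1j * t_src)            # sources placed inside the scatterer
bdy = 1.0 * np.exp(1j * t_bdy)            # boundary of the unit disk

def phi(x, y):
    """Fundamental solution of the 2-D Helmholtz equation."""
    return 0.25j * hankel1(0, k * np.abs(x - y))

# Least-squares system: sum_j c_j * phi(x_i, y_j) = -u_inc(x_i) on the boundary.
A = phi(bdy[:, None], src[None, :])
u_inc = np.exp(1j * k * bdy.real)          # plane wave travelling in +x
c, *_ = np.linalg.lstsq(A, -u_inc, rcond=None)

# Residual of the boundary condition as a cheap accuracy check.
print("boundary residual:", np.linalg.norm(A @ c + u_inc) / np.sqrt(n_bdy))

# Total field at an exterior point, e.g. (2, 0):
x = 2.0 + 0.0j
u = np.exp(1j * k * x.real) + np.sum(c * phi(x, src))
print("total field at (2,0):", u)
```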
Abstract:
The academic discipline of television studies has been constituted by the claim that television is worth studying because it is popular. Yet this claim has also entailed a need to defend the subject against the triviality that is associated with the television medium because of its very popularity. This article analyses the many attempts in the later twentieth and twenty-first centuries to constitute critical discourses about television as a popular medium. It focuses on how the theoretical currents of Television Studies emerged and changed in the UK, where a disciplinary identity for the subject was founded by borrowing from related disciplines, yet argued for the specificity of the medium as an object of criticism. Eschewing technological determinism, moral pathologization and sterile debates about television's supposed effects, UK writers such as Raymond Williams addressed television as an aspect of culture. Television theory in Britain has been part of, and also separate from, the disciplinary fields of media theory, literary theory and film theory. It has focused its attention on institutions, audio-visual texts, genres, authors and viewers according to the ways that research problems and theoretical inadequacies have emerged over time. But a consistent feature has been the problem of moving from a descriptive discourse to an analytical and evaluative one, and from studies of specific texts, moments and locations of television to larger theories. By discussing some historically significant critical work about television, the article considers how academic work has constructed relationships between the different kinds of objects of study. The article argues that a fundamental tension between descriptive and politically activist discourses has confused academic writing about ›the popular‹. Television study in Britain arose not to supply graduate professionals to the television industry, nor to perfect the instrumental techniques of allied sectors such as advertising and marketing, but to analyse and critique the medium's aesthetic forms and to evaluate its role in culture. Since television cannot be made by ›the people‹, the empowerment that discourses of television theory and analysis aimed for was focused on disseminating the tools for critique. Recent developments in factual entertainment television (in Britain and elsewhere) have greatly increased the visibility of ›the people‹ in programmes, notably in docusoaps, game shows and other participative formats. This has led to renewed debates about whether such ›popular‹ programmes appropriately represent ›the people‹ and how factual entertainment that is often despised relates to genres hitherto considered to be of high quality, such as scripted drama and socially-engaged documentary television. A further aspect of this problem of evaluation is how television globalisation has been addressed, and the example that the issue has crystallised around most is the reality TV contest Big Brother. Television theory has been largely based on studying the texts, institutions and audiences of television in the Anglophone world, and thus in specific geographical contexts. The transnational contexts of popular television have been addressed as spaces of contestation, for example between Americanisation and national or regional identities. Commentators have been ambivalent about whether the discipline's role is to celebrate or critique television, and whether to do so within a national, regional or global context. 
In the discourses of the television industry, ›popular television‹ is a quantitative and comparative measure, and because of the overlap between the programming with the largest audiences and the scheduling of established programme types at the times of day when the largest audiences are available, it has a strong relationship with genre. The measurement of audiences and the design of schedules are carried out in predominantly national contexts, but the article refers to programmes like Big Brother that have been broadcast transnationally, and programmes that have been extensively exported, to consider in what ways they too might be called popular. Strands of work in television studies have at different times attempted to diagnose what is at stake in the most popular programme types, such as reality TV, situation comedy and drama series. This has centred on questions of how aesthetic quality might be discriminated in television programmes, and how quality relates to popularity. The interaction of the designations ›popular‹ and ›quality‹ is exemplified in the ways that critical discourse has addressed US drama series that have been widely exported around the world, and the article shows how the two critical terms are both distinct and interrelated. In this context and in the article as a whole, the aim is not to arrive at a definitive meaning for ›the popular‹ inasmuch as it designates programmes or indeed the medium of television itself. Instead the aim is to show how, in historically and geographically contingent ways, these terms and ideas have been dynamically adopted and contested in order to address a multiple and changing object of analysis.
Abstract:
The speed of convergence while training is an important consideration in the use of neural nets. The authors outline a new training algorithm which reduces both the number of iterations and training time required for convergence of multilayer perceptrons, compared to standard back-propagation and conjugate gradient descent algorithms.
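The abstract does not spell out the proposed algorithm, so the sketch below only shows the standard back-propagation baseline (full-batch gradient descent on a one-hidden-layer perceptron) against which such convergence improvements are measured; the network size, learning rate and XOR toy data are illustrative choices.

```python
import numpy as np

# Standard back-propagation baseline: one hidden layer, sigmoid activations,
# mean-squared-error loss, full-batch gradient descent on the XOR problem.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for it in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (mean squared error)
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print("final MSE:", float(np.mean(err ** 2)))
```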
Abstract:
This paper investigates the robustness of a hybrid analog/digital feedback active noise cancellation (ANC) headset system. Digital ANC systems based on the filtered-x least-mean-square (FXLMS) algorithm require accurate estimation of the secondary path to guarantee the stability and convergence of the algorithm. This poses a considerable challenge for ANC headset design because the secondary path may fluctuate dramatically, for example when the user adjusts the position of the ear-cup. In this paper, we analytically show that adding an analog feedback loop to the digital ANC system can effectively reduce the plant fluctuation, thus achieving a more robust system. The method for designing the analog controller is highlighted. A practical hybrid analog/digital feedback ANC headset has been built and used to conduct experiments, and the experimental results show that the hybrid headset system is more robust under large plant fluctuation and achieves satisfactory noise cancellation for both narrowband and broadband noises.
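For readers unfamiliar with the digital core of such systems, the sketch below shows a minimal filtered-x LMS loop. It is written in a feedforward form for brevity, whereas the paper's system is a hybrid analog/digital feedback design; the filter lengths, step size and synthetic primary/secondary paths are invented for the example.

```python
import numpy as np

# Minimal feedforward FXLMS sketch with a known secondary-path estimate S_hat.
rng = np.random.default_rng(0)
N = 20000
x = rng.standard_normal(N)                  # reference noise
P = np.array([0.0, 0.4, 0.25, 0.1])         # primary path (noise -> error mic)
S = np.array([0.0, 0.6, 0.3])               # true secondary path (speaker -> mic)
S_hat = S.copy()                            # secondary-path estimate used by FXLMS

L = 16                                      # adaptive filter length
w = np.zeros(L)
mu = 0.01                                   # step size
x_buf = np.zeros(L)                         # reference history for the filter
fx_buf = np.zeros(L)                        # filtered-reference history
y_buf = np.zeros(len(S))                    # anti-noise history through S
xs_buf = np.zeros(len(S_hat))               # reference history through S_hat
d = np.convolve(x, P)[:N]                   # disturbance at the error microphone

err = np.zeros(N)
for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                           # anti-noise sample
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] + S @ y_buf                    # residual at the error microphone
    # Filter the reference through the secondary-path estimate.
    xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[n]
    fx = S_hat @ xs_buf
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w -= mu * e * fx_buf                    # FXLMS coefficient update
    err[n] = e

print("residual power, first 1000 vs last 1000 samples:",
      float(np.mean(err[:1000] ** 2)), float(np.mean(err[-1000:] ** 2)))
```

If the true secondary path S drifts away from S_hat (as when the ear-cup is moved), the stability margin of this update shrinks, which is the plant-fluctuation problem the hybrid analog loop is designed to mitigate.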
Abstract:
A new boundary integral operator is introduced for the solution of the sound-soft acoustic scattering problem, i.e., for the exterior problem for the Helmholtz equation with Dirichlet boundary conditions. We prove that this integral operator is coercive in L2(Γ) (where Γ is the surface of the scatterer) for all Lipschitz star-shaped domains. Moreover, the coercivity is uniform in the wavenumber k = ω/c, where ω is the frequency and c is the speed of sound. The new boundary integral operator, which we call the “star-combined” potential operator, is a slight modification of the standard combined potential operator, and is shown to be as easy to implement as the standard one. Additionally, to the authors' knowledge, it is the only second-kind integral operator for which convergence of the Galerkin method in L2(Γ) is proved without smoothness assumptions on Γ except that it is Lipschitz. The coercivity of the star-combined operator implies frequency-explicit error bounds for the Galerkin method for any approximation space. In particular, these error estimates apply to several hybrid asymptotic-numerical methods developed recently that provide robust approximations in the high-frequency case. The proof of coercivity of the star-combined operator critically relies on an identity first introduced by Morawetz and Ludwig in 1968, supplemented further by more recent harmonic analysis techniques for Lipschitz domains.
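For reference, the step from coercivity to frequency-explicit Galerkin error bounds is the standard Céa/Lax–Milgram argument; the notation below is ours and the constants are not taken from the paper. If the operator $\mathcal{A}_k$ is coercive,
\[
  |\langle \mathcal{A}_k v, v \rangle_{L^2(\Gamma)}| \;\ge\; \alpha_k \, \|v\|_{L^2(\Gamma)}^2
  \qquad \text{for all } v \in L^2(\Gamma),
\]
then for any approximation space $V_N \subset L^2(\Gamma)$ the Galerkin solution $u_N$ exists, is unique, and is quasi-optimal:
\[
  \|u - u_N\|_{L^2(\Gamma)} \;\le\; \frac{\|\mathcal{A}_k\|_{L^2(\Gamma)\to L^2(\Gamma)}}{\alpha_k}\,
  \min_{v_N \in V_N} \|u - v_N\|_{L^2(\Gamma)}.
\]
Frequency-explicit bounds then follow whenever $\alpha_k$ and $\|\mathcal{A}_k\|$ are known explicitly as functions of the wavenumber $k$.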
Abstract:
Currently, most operational forecasting models use latitude-longitude grids, whose convergence of meridians towards the poles limits parallel scaling. Quasi-uniform grids might avoid this limitation. Thuburn et al. (JCP, 2009) and Ringler et al. (JCP, 2010) have developed a method for arbitrarily-structured, orthogonal C-grids (TRiSK), which has many of the desirable properties of the C-grid on latitude-longitude grids but which works on a variety of quasi-uniform grids. Here, five quasi-uniform, orthogonal grids of the sphere are investigated using TRiSK to solve the shallow-water equations. We demonstrate some of the advantages and disadvantages of the hexagonal and triangular icosahedra, a Voronoi-ised cubed sphere, a Voronoi-ised skipped latitude-longitude grid and a grid of kites, in comparison to a full latitude-longitude grid. We show that the hexagonal icosahedron gives the most accurate results for the least computational cost. All of the grids suffer from spurious computational modes; this is especially true of the kite grid, despite it having exactly twice as many velocity degrees of freedom as height degrees of freedom. However, the computational modes are easiest to control on the hexagonal icosahedron, since they consist of vorticity oscillations on the dual grid which can be controlled using a diffusive advection scheme for potential vorticity.
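For orientation, these are the rotating shallow-water equations in the vector-invariant form that TRiSK-type C-grid schemes discretize; the notation below is ours, not the paper's:
\[
  \frac{\partial \mathbf{u}}{\partial t} + (\zeta + f)\,\mathbf{k} \times \mathbf{u}
  + \nabla\!\left( g(h + b) + \tfrac{1}{2}|\mathbf{u}|^2 \right) = 0,
  \qquad
  \frac{\partial h}{\partial t} + \nabla \cdot (h\mathbf{u}) = 0,
\]
where $h$ is the fluid depth, $b$ the bottom topography, $f$ the Coriolis parameter and $\zeta = \mathbf{k}\cdot\nabla\times\mathbf{u}$ the relative vorticity. The potential vorticity $(\zeta + f)/h$ is the quantity whose dual-grid oscillations are controlled by the diffusive advection scheme mentioned above.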
Abstract:
This article draws upon Karen Lury's definitions of 'space' and 'place' in relation to the BBC children's programme Blue Peter (1958–present). Through an analysis of the Blue Peter studio over the past 53 years, Amanda Beauchamp highlights its evolution from a 'space' to a 'place' within the history of children's television. Her article considers how the Blue Peter studio's 'infinite nature' was achieved, alongside the role it played in creating the programme institution. She addresses the impact of major changes in the studio layout since 2005, when the studio went from being 'tardis-like' to a 'cosy cubbyhole'. Amanda concludes by questioning the impact that this change has had on programme identity and whether the 'place' that pre-2005 Blue Peter took 47 years to create has been compromised.
Abstract:
When we first encounter the narrator of Austerlitz, he is wandering around the unfamiliar town of Antwerp with, he tells us, “unsicheren Schritten” (1; 9). As well as reflecting the unfamiliarity of the locale, these “uncertain steps” evince a proud modesty characteristic of the classic Sebaldian narrator, a wanderer who discreetly relays the stories of the people and places he is privileged to encounter. Although Sebald does not use the phrase, steps of this sort, unpurposed yet unerring, are made with what is commonly known in German as somnambule Sicherheit: the legendary surefootedness of the sleepwalker. The convergence of sleepwalking and certainty in a single phrase poses an interesting challenge to one of the central tenets of the English-language canonization of Sebald, for his writing has been most highly valued for its ability to move the reader through apparent certainties towards a salutary uncertainty. But somnambule Sicherheit also presents the possibility that the current may be reversed, that narrative may move under cover of uncertainty towards certainty. That Sebald criticism has not been more troubled by this possibility is in no small part due to the fact that it tends to deploy the notion of sleepwalking with a minimum of reflection on its theoretical ramifications. To evoke some of the complexities of this matter, I first offer a brief cultural history of sleepwalking, as well as a brief account of the topic of uncertainty in Sebald criticism. Most of my argument, however, involves an extended comparative analysis of sleepwalking in Sebald's Austerlitz and Hermann Broch's 1933 trilogy The Sleepwalkers. Although these writers have not previously been the object of any sustained comparison, sleepwalking in Broch's novels illuminates much that is left implicit on the topic in Sebald's fiction and points toward some difficult questions regarding the role of aesthetics and agency in Sebald's work.
Abstract:
Climate simulations by 16 atmospheric general circulation models (AGCMs) are compared on an aqua-planet, a water-covered Earth with prescribed sea surface temperature varying only in latitude. The idealised configuration is designed to expose differences in the circulation simulated by different models. Basic features of the aqua-planet climate are characterised by comparison with Earth. The models display a wide range of behaviour. The balanced component of the tropospheric mean flow, and mid-latitude eddy covariances subject to budget constraints, vary relatively little among the models. In contrast, differences in damping in the dynamical core strongly influence transient eddy amplitudes. Historical uncertainty in modelled lower stratospheric temperatures persists in the Aqua-Planet Experiment (APE). Aspects of the circulation generated more directly by interactions between the resolved fluid dynamics and parameterized moist processes vary greatly. The tropical Hadley circulation forms either a single or double inter-tropical convergence zone (ITCZ) at the equator, with large variations in mean precipitation. The equatorial wave spectrum shows a wide range of precipitation intensity and propagation characteristics. Kelvin mode-like eastward propagation with remarkably constant phase speed dominates in most models. Westward propagation, less dispersive than the equatorial Rossby modes, dominates in a few models or occurs within an eastward propagating envelope in others. The mean structure of the ITCZ is related to precipitation variability, consistent with previous studies. The aqua-planet global energy balance is unknown, but the models produce a surprisingly large range of top-of-atmosphere global net flux, dominated by differences in shortwave reflection by clouds. A number of newly developed models, not optimised for Earth climate, contribute to this. Possible reasons for differences in the optimised models are discussed. The aqua-planet configuration is intended as one component of an experimental hierarchy used to evaluate AGCMs. This comparison does suggest that the range of model behaviour could be better understood and reduced in conjunction with Earth climate simulations. Controlled experimentation is required to explore individual model behaviour and investigate convergence of the aqua-planet climate with increasing resolution.
Abstract:
This paper extends and clarifies results of Steinsaltz and Evans [Trans. Amer. Math. Soc. 359 (2007) 1285–1324], which found conditions for convergence of a killed one-dimensional diffusion conditioned on survival, to a quasistationary distribution whose density is given by the principal eigenfunction of the generator. Under the assumption that the limit of the killing at infinity differs from the principal eigenvalue, we prove that convergence to quasistationarity occurs if and only if the principal eigenfunction is integrable. When the killing at ∞ is larger than the principal eigenvalue, the eigenfunction is always integrable. When the killing at ∞ is smaller, the eigenfunction is integrable only when the unkilled process is recurrent; otherwise, the process conditioned on survival converges to zero density on any bounded interval.
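In standard notation (ours, not the paper's), with $\tau$ the killing time, $\varphi$ the principal eigenfunction of the generator and the state space taken here to be $(0,\infty)$, convergence to quasistationarity means
\[
  \lim_{t \to \infty} \mathbb{P}_x\!\left( X_t \in A \mid \tau > t \right)
  \;=\; \frac{\int_A \varphi(y)\,dy}{\int_0^{\infty} \varphi(y)\,dy},
\]
and the integrability of $\varphi$ is precisely what makes the right-hand side a well-defined probability distribution, which is why it appears as the necessary and sufficient condition in the dichotomy described above.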
Abstract:
Data analysis based on station observations reveals that many meteorological variables averaged over the Tibetan Plateau (TP) are closely correlated, and their trends during the past decades are well correlated with the rainfall trend of the Asian summer monsoon. However, such correlation does not necessarily imply causality. Further diagnosis confirms the existence of a weakening trend in TP thermal forcing, characterized by weakened surface sensible heat flux in spring and summer during the past decades. This weakening trend is associated with decreasing summer precipitation over northern South Asia and North China and increasing precipitation over northwestern China, South China, and Korea. An atmospheric general circulation model, the HadAM3, is employed to elucidate the causality between the weakening TP forcing and the change in the Asian summer monsoon rainfall. Results demonstrate that a weakening in surface sensible heating over the TP results in reduced summer precipitation in the plateau region and a reduction in the associated latent heat release in summer. These changes in turn result in the weakening of the near-surface cyclonic circulation surrounding the plateau and the subtropical anticyclone over the subtropical western North Pacific, similar to the results obtained from the idealized TP experiment in Part I of this study. The southerly that normally dominates East Asia, ranging from the South China Sea to North China, weakens, resulting in a weaker equilibrated Sverdrup balance between positive vorticity generation and latent heat release. Consequently, the convergence of water vapor transport is confined to South China, forming a unique anomaly pattern in monsoon rainfall, the so-called “south wet and north dry.” Because the weakening trend in TP thermal forcing is associated with global warming, the present results provide an effective means for assessing projections of regional climate over Asia in the context of global warming.
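The Sverdrup vorticity balance invoked in the abstract can be written, in its standard form (notation ours), as
\[
  \beta v \;\approx\; f\,\frac{\partial w}{\partial z},
\]
where $\beta$ is the meridional gradient of the Coriolis parameter $f$, $v$ the meridional wind and $w$ the vertical velocity. Weaker latent-heat release reduces the vortex stretching on the right-hand side, and the balance then requires a weaker southerly $v$ over East Asia, which is the mechanism through which the weakened plateau heating confines the moisture convergence to South China.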