15 results for High definition picture
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The creation of OFDM-based Wireless Personal Area Networks (WPANs) has allowed the development of high bit-rate wireless communication devices suitable for streaming High Definition video between consumer products, as demonstrated by Wireless-USB and Wireless-HDMI. However, these devices need high-frequency clock rates, particularly in the OFDM, FFT and symbol-processing sections, resulting in high silicon cost and high electrical power consumption. The high clock rates also make hardware prototyping difficult, so verification becomes very important but costly. Acknowledging that electrical power in wireless consumer devices is more critical than the number of implemented logic gates, this paper presents a Double Data Rate (DDR) architecture for implementation inside an OFDM baseband codec in order to reduce the high-frequency clock rates by a factor of 2. The presented architecture has been implemented and tested for ECMA-368 (the Wireless-USB context), resulting in a maximum clock rate of 264 MHz, instead of the expected 528 MHz, anywhere on the baseband codec die.
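The clock-rate saving follows from the basic DDR idea of handling two samples per internal clock cycle. The short sketch below illustrates only that arithmetic; the 528 MHz sample rate is the ECMA-368 figure quoted above, while the function and variable names are illustrative and not part of the paper's design.

```python
# Minimal sketch (not the paper's implementation) of why a Double Data Rate
# datapath halves the internal clock needed to sustain a given sample rate.
# The 528 MHz figure is the ECMA-368 sampling rate quoted in the abstract;
# the names below are illustrative only.

SAMPLE_RATE_MHZ = 528          # ECMA-368 baseband sample rate
SAMPLES_PER_CYCLE_SDR = 1      # conventional single-data-rate datapath
SAMPLES_PER_CYCLE_DDR = 2      # DDR datapath handles two samples per clock cycle


def required_clock_mhz(sample_rate_mhz: float, samples_per_cycle: int) -> float:
    """Clock frequency needed to keep up with the incoming sample stream."""
    return sample_rate_mhz / samples_per_cycle


print(required_clock_mhz(SAMPLE_RATE_MHZ, SAMPLES_PER_CYCLE_SDR))  # 528.0 MHz
print(required_clock_mhz(SAMPLE_RATE_MHZ, SAMPLES_PER_CYCLE_DDR))  # 264.0 MHz
```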
Abstract:
The creation of OFDM-based Wireless Personal Area Networks (WPANs) has allowed high bit-rate wireless communication devices suitable for streaming High Definition video between consumer products, as demonstrated by Wireless-USB. However, these devices need high clock rates, particularly in the OFDM sections, resulting in high silicon cost and high electrical power consumption. Acknowledging that electrical power in wireless consumer devices is more critical than the number of implemented logic gates, this paper presents a Double Data Rate (DDR) architecture to reduce the OFDM input and output clock rate by a factor of 2. The architecture has been implemented and tested for Wireless-USB (ECMA-368), resulting in a maximum clock rate of 264 MHz, instead of 528 MHz, anywhere on the die.
Abstract:
This chapter explores the distinctive qualities of the Matt Smith era of Doctor Who, focusing on how dramatic emphases are connected with emphases on visual style, and how this depends on the programme's production methods and technologies. Doctor Who was first made in the 1960s era of live, studio-based, multi-camera television with monochrome pictures. However, as technical innovations such as colour filming, stereo sound, CGI and post-production effects technology have been routinely introduced into the programme, and now High Definition (HD) cameras, they have given Doctor Who’s creators new ways of making visually distinctive narratives. Indeed, it has been argued that since the 1980s television drama has become increasingly like cinema in its production methods and aesthetic aims. Viewers’ ability to watch the programme on high-specification TV sets, and to record and repeat episodes using digital media, also encourages attention to visual style in television as much as in cinema. The chapter evaluates how these new circumstances affect what Doctor Who has become and engages with arguments that visual style has been allowed to override characterisation and story in the current Doctor Who. The chapter refers to specific episodes, and frames the analysis with reference to earlier years in Doctor Who’s long history. For example, visual spectacle using green-screen and CGI can function as a set-piece (at the opening or ending of an episode) but can also work ‘invisibly’ to render a setting realistically. Shooting on location with HD cameras provides a rich and detailed image texture, but also highlights mistakes and especially problems of lighting. The reduction of Doctor Who’s budget has led to Steven Moffat’s episodes relying less on visual extravagance, connecting back both to Russell T. Davies’s concern to show off the BBC’s investment in the series and to British traditions of gritty and intimate social drama. Pressures to capitalise on Doctor Who as a branded product are the final aspect of the chapter’s analysis, where the role of Moffat as ‘showrunner’ links him to an American (rather than British) style of television production in which the preservation of format and brand values gives him unusual power over the look of the series.
Abstract:
Previous research has shown that listening to stories supports vocabulary growth in preschool and school-aged children, and that lexical entries for even very difficult or rare words can be established if these are defined when they are first introduced. However, little is known about the nature of the lexical representations children form for the words they encounter while listening to stories, or whether these are sufficiently robust to support the child’s own use of such ‘high-level’ vocabulary. This study explored these questions by administering multiple assessments of children’s knowledge of a set of newly acquired vocabulary. Four- and six-year-old children were introduced to nine difficult new words (including nouns, verbs and adjectives) through three exposures to a story read by their class teacher. The story included a definition of each new word at its first encounter. Learning of the target vocabulary was assessed by means of two tests of semantic understanding – a forced-choice picture-selection task and a definition production task – and a grammaticality judgment task, which asked children to choose between a syntactically appropriate and a syntactically inappropriate usage of the word. Children in both age groups selected the correct pictorial representation and provided an appropriate definition for the target words in all three word classes significantly more often than they did for a matched set of non-exposed control words. However, only the older group was able to identify the syntactically appropriate sentence frames in the grammaticality judgment task. Further analyses elucidate some of the components of the lexical representations children lay down when they hear difficult new vocabulary in stories, and how different tests of word knowledge might overlap in their assessment of these components.
Abstract:
Name agreement is the extent to which different people agree on a name for a particular picture. Previous studies have found that it takes longer to name low name agreement pictures than high name agreement pictures. To examine the effect of name agreement on the online process of picture naming, we compared event-related potentials (ERPs) recorded whilst 19 healthy, native English speakers silently named pictures that had either high or low name agreement. A series of ERP components was examined: P1 at approximately 120 ms from picture onset, N1 around 170 ms, P2 around 220 ms, N2 around 290 ms, and P3 around 400 ms. Additionally, a late time window from 800 to 900 ms was considered. Name agreement had an early effect, starting at P1 and possibly resulting from uncertainty about picture identity, and continuing into N2, possibly resulting from alternative names for pictures. These results support the idea that name agreement affects two consecutive processes: first, object recognition, and second, lexical selection and/or phonological encoding.
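The component comparisons described above amount to contrasting mean amplitudes between the high and low name agreement conditions within each time window. The sketch below illustrates that step only, under an assumed data layout, sampling rate and window edges; none of these details come from the study itself.

```python
# Illustrative sketch only: comparing mean ERP amplitude between high and low
# name agreement conditions within a component time window (e.g. P1, roughly
# 100-140 ms after picture onset). The array layout, sampling rate and window
# edges are assumptions, not details taken from the study.
import numpy as np

FS = 500  # assumed sampling rate (Hz); epochs are time-locked to picture onset


def mean_amplitude(epochs: np.ndarray, t_start_ms: float, t_end_ms: float) -> float:
    """epochs: (n_trials, n_samples) baseline-corrected EEG at one electrode.
    Returns the mean amplitude within the given post-onset window."""
    i0 = int(t_start_ms / 1000 * FS)
    i1 = int(t_end_ms / 1000 * FS)
    return float(epochs[:, i0:i1].mean())


# high_na and low_na would hold the epochs for the two conditions, e.g.:
# p1_effect = mean_amplitude(low_na, 100, 140) - mean_amplitude(high_na, 100, 140)
```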
Abstract:
The management of information in engineering organisations faces a particular challenge in the ever-increasing volume of information. It has been recognised that an effective methodology is required to evaluate information in order to avoid information overload and to retain the right information for reuse. Using a number of current tools and techniques which attempt to obtain ‘the value’ of information as a starting point, it is proposed that an assessment or filter mechanism for information needs to be developed. This paper addresses this issue firstly by briefly reviewing the information overload problem, the definition of value, and related research on the value of information in various areas. A “characteristic”-based framework for information evaluation is then introduced, using the key characteristics identified from related work as an example. A Bayesian Network diagram method is introduced to the framework to build the linkage between the characteristics and information value, in order to calculate the quality and value of information quantitatively. The training and verification process for the model is then described using 60 real engineering documents as a sample. The model gives reasonably accurate results; the differences between the model calculations and the training judgements are summarised and their potential causes discussed. Finally, several further issues, including challenges to the framework and the implementation of this evaluation method, are raised.
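As a rough illustration of how a characteristic-based network can yield a quantitative value score, the toy sketch below links two invented binary characteristics to a value probability through a hand-specified conditional table and marginalises over uncertain evidence. It is not the paper's model or its trained parameters.

```python
# Toy sketch of the idea only: two binary document characteristics feed a
# hand-specified conditional table giving the probability that the document is
# of high value, and uncertain evidence is marginalised out. The characteristics,
# probabilities and structure are invented for illustration and are not taken
# from the paper or its 60-document sample.

# P(high value | up_to_date, complete) -- illustrative conditional table
CPT_VALUE = {
    (True, True): 0.90,
    (True, False): 0.55,
    (False, True): 0.45,
    (False, False): 0.10,
}


def information_value(up_to_date: bool, complete: bool) -> float:
    """Value score for a document whose characteristics are known with certainty."""
    return CPT_VALUE[(up_to_date, complete)]


def expected_value(p_up_to_date: float, p_complete: float) -> float:
    """Marginalise over uncertain characteristics (assumed independent)."""
    total = 0.0
    for u in (True, False):
        for c in (True, False):
            weight = (p_up_to_date if u else 1 - p_up_to_date) * (
                p_complete if c else 1 - p_complete
            )
            total += weight * CPT_VALUE[(u, c)]
    return total


print(information_value(True, False))  # 0.55
print(expected_value(0.8, 0.6))        # weighted value under uncertain evidence
```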
Abstract:
The prevalence of the metabolic syndrome (MetS), CVD and type 2 diabetes (T2D) is known to be higher in populations from the Indian subcontinent than in the general UK population. While identification of this increased risk is crucial to allow effective treatment, there is controversy over the applicability of diagnostic criteria, and particularly measures of adiposity, in ethnic minorities. Diagnostic cut-offs for BMI and waist circumference have been derived largely from predominantly white Caucasian populations and have therefore proved inappropriate for, and not transferable to, Asian groups. Many Asian populations, particularly South Asians, have a higher total and central adiposity for a similar body weight compared with matched Caucasians, and greater CVD risk associated with a lower BMI. Although the causes of CVD and T2D are multi-factorial, diet is thought to make a substantial contribution to the development of these diseases. Low dietary intakes and tissue levels of long-chain (LC) n-3 PUFA in South Asian populations have been linked to high-risk abnormalities in the MetS. Conversely, increasing the dietary intake of LC n-3 PUFA in South Asians has proved an effective strategy for correcting such abnormalities as dyslipidaemia in the MetS. Appropriate diagnostic criteria that include a modified definition of adiposity must be in place to facilitate the early detection, and thus targeted treatment, of increased risk in ethnic minorities.
Abstract:
The usefulness of any simulation of atmospheric tracers using low-resolution winds relies on both the dominance of large spatial scales in the strain and a time dependence that results in a cascade of tracer scales. Here, a quantitative study of the accuracy of such tracer studies is made using the contour advection technique. It is shown that, although contour stretching rates are very insensitive to the spatial truncation of the wind field, the displacement errors in filament position are sensitive. A knowledge of displacement characteristics is essential if Lagrangian simulations are to be used for the inference of airmass origin. A quantitative lower estimate is obtained for the tracer scale factor (TSF): the ratio of the smallest resolved scale in the advecting wind field to the smallest “trustworthy” scale in the tracer field. For a baroclinic wave life cycle the TSF = 6.1 ± 0.3, while for the Northern Hemisphere wintertime lower stratosphere the TSF = 5.5 ± 0.5, when using the most stringent definition of the trustworthy scale. The similarity of the TSF for the two flows is striking, and an explanation is discussed in terms of the activity of potential vorticity (PV) filaments. Uncertainty in contour initialization is investigated for the stratospheric case. The effect of smoothing the initial contours is to introduce a spin-up time (2–3 days), after which wind-field truncation errors take over from initialization errors. It is also shown that false detail from the proliferation of fine-scale filaments limits the useful lifetime of such contour advection simulations to 3σ⁻¹ days, where σ is the filament thinning rate, unless filaments narrower than the trustworthy scale are removed by contour surgery. In addition, PV analysis error and diabatic effects are so strong that only PV filaments wider than 50 km are at all believable, even for very high-resolution winds. The minimum wind-field resolution required to accurately simulate filaments down to the erosion scale in the stratosphere (given an initial contour) is estimated, and the implications for the modeling of atmospheric chemistry are briefly discussed.
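The two quantitative relationships in the abstract, the TSF definition and the 3σ⁻¹ lifetime estimate, can be applied back-of-envelope as in the sketch below; the example wind resolution and thinning rate are illustrative inputs, not values from the paper.

```python
# Back-of-envelope sketch of the two relationships quoted in the abstract: the
# tracer scale factor (TSF) and the ~3/sigma useful-lifetime estimate. The
# example wind resolution and filament thinning rate are illustrative, not
# values from the paper.

TSF_STRATOSPHERE = 5.5  # TSF quoted for the NH wintertime lower stratosphere


def trustworthy_tracer_scale_km(smallest_resolved_wind_scale_km: float,
                                tsf: float = TSF_STRATOSPHERE) -> float:
    """TSF = (smallest resolved wind scale) / (smallest trustworthy tracer scale)."""
    return smallest_resolved_wind_scale_km / tsf


def useful_lifetime_days(thinning_rate_per_day: float) -> float:
    """Lifetime before false fine-scale detail dominates, roughly 3 / sigma."""
    return 3.0 / thinning_rate_per_day


print(trustworthy_tracer_scale_km(500.0))  # ~91 km for a 500 km wind resolution
print(useful_lifetime_days(0.5))           # 6 days for sigma = 0.5 per day
```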
Abstract:
Illustrations are an integral part of many dictionaries, but the selection, placing and sizing of illustrations is often highly conservative, and can appear to reflect the editorial concerns and technological constraints of previous eras. We might start with the question ‘why not illustrate?’, especially when we consider the ability of an illustration to simplify the definition of technical terms. How do illustrations affect the reader’s view of a dictionary as objective, and how do they reinforce the pedagogic aims of the dictionary? By their graphic nature, illustrations stand out from the field of text against which they are set, and they can immediately indicate to the reader the level of seriousness or popularity of the book’s approach, or the age range for which it is intended. Illustrations are also expensive to create and can add to printing costs, so it is not surprising that there is much direct and indirect copying from dictionary to dictionary, and simple re-use. This article surveys developments in the illustration of dictionaries, considering the difference between distributing individual illustrations through the text of the dictionary and grouping them into larger synoptic illustrations; the graphic style of illustrations; and the role of illustrations in ‘feature-led’ dictionary marketing.
Abstract:
Interannual anomalies in vertical profiles of stratospheric ozone, in both equatorial and extratropical regions, have been shown to exhibit a strong seasonal persistence, namely, extended temporal autocorrelations during certain times of the calendar year. Here we investigate the relationship between the seasonal persistence of equatorial and extratropical ozone anomalies using the SAGE-corrected SBUV data set, which provides a long-term ozone profile time series. In the regions of the stratosphere where ozone is under purely dynamical or purely photochemical control, the seasonal persistence of equatorial and extratropical ozone anomalies arises from distinct mechanisms but preserves an anticorrelation between tropical and extratropical anomalies established during the winter period. In the 16–10 hPa layer, where ozone is controlled by both dynamical and photochemical processes, equatorial ozone anomalies exhibit a completely different behavior from ozone anomalies above and below in terms of variability, seasonal persistence, and especially the relationship between equatorial and extratropical ozone. Cross-latitude-time correlations show that, for the 16–10 hPa layer, Northern Hemisphere (NH) extratropical ozone anomalies show the same variability as equatorial ozone anomalies but lagged by 3–6 months. High correlation coefficients are observed during the time frame of seasonal persistence of ozone anomalies, which is June–December for equatorial ozone and shifts by approximately 3–6 months when going from the equatorial region to the NH extratropics. Thus, in the transition zone between dynamical and photochemical control, equatorial ozone anomalies established in boreal summer/autumn are mirrored by NH extratropical ozone anomalies with a time lag similar to transport time scales. Equatorial ozone anomalies established in boreal winter/spring are likewise correlated with ozone anomalies in the Southern Hemisphere (SH) extratropics with a time lag comparable to transport time scales, similar to what is seen in the NH. However, the correlations between equatorial and SH extratropical ozone in the 16–10 hPa layer are weak.
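The lagged relationship described above is essentially a cross-correlation of monthly anomaly series at lags of several months. The sketch below shows one plausible way to compute it; the variable names and lag range are assumptions, not the study's actual processing.

```python
# Minimal sketch of the lagged cross-correlation described in the abstract:
# correlating equatorial ozone anomalies with NH extratropical anomalies at
# lags of several months. Array names and the lag range are assumptions for
# illustration, not the study's actual processing chain.
import numpy as np


def lagged_correlation(equatorial: np.ndarray, extratropical: np.ndarray,
                       lag_months: int) -> float:
    """Pearson correlation with the extratropical monthly-anomaly series lagging
    the equatorial series by lag_months."""
    if lag_months > 0:
        x, y = equatorial[:-lag_months], extratropical[lag_months:]
    else:
        x, y = equatorial, extratropical
    return float(np.corrcoef(x, y)[0, 1])


# eq_anom and nh_anom would be deseasonalised monthly anomaly series, e.g.:
# best_lag = max(range(0, 13), key=lambda L: lagged_correlation(eq_anom, nh_anom, L))
```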
Abstract:
The asymmetries in the convective flows, current systems, and particle precipitation in the high-latitude dayside ionosphere which are related to the equatorial-plane components of the interplanetary magnetic field (IMF) are discussed in relation to the results of several recent observational studies. It is argued that all of the effects reported to date which are ascribed to the y component of the IMF can be understood, at least qualitatively, in terms of a simple theoretical picture in which the effects result from the stresses exerted on the magnetosphere consequent on the interconnection of terrestrial and interplanetary fields. In particular, relaxation under the action of these stresses allows, in effect, a partial penetration of the IMF into the magnetospheric cavity, such that the sense of the expected asymmetry effects on closed field lines can be understood, to zeroth order, in terms of the “dipole plus uniform field” model. In response to IMF By, the dayside cusp should be displaced in longitude about noon in the same sense as By in the northern hemisphere, and in the opposite sense to By in the southern hemisphere, while simultaneously the auroral oval as a whole should be shifted in the dawn-dusk direction in the opposite sense with respect to By. These expected displacements are found to be consistent with recently published observations. Similar considerations lead to the suggestion that the auroral oval may also undergo displacements in the noon-midnight direction which are associated with the x component of the IMF. We show that a previously published study of the position of the auroral oval contains strong initial evidence for the existence of this effect. However, recent results on variations in the latitude of the cusp are more ambiguous. This topic therefore requires further study before definitive conclusions can be drawn.
Abstract:
Location is of paramount importance within the retail sector, yet the definition of locational obsolescence remains overlooked, despite significant concerns over the viability of parts of this complex sector. This paper reviews the existing literature and, through this, explores retail locational obsolescence, including the multi-spatial nature of its driving forces, which range from the global economy, local markets and submarkets to individual property-specific factors, and, crucially, the need to disentangle locational obsolescence from other important concepts, such as depreciation and functional obsolescence, with which it is often mistakenly conflated. Through this, a conceptual model, definition and diagnostic criteria are presented to guide future studies, policy development and the allocation of resources. Importantly, three stages are presented to enable the operationalization of the model, essential to future academic and industry studies as well as the ongoing development of policy in this economically important, complex and contentious area.