983 results for Number representation format
Abstract:
There has been considerable interest in the climate impact of trends in stratospheric water vapor (SWV). However, the representation of the radiative properties of water vapor under stratospheric conditions remains poorly constrained across different radiation codes. This study examines the sensitivity of a detailed line-by-line (LBL) code, a Malkmus narrow-band model and two broadband GCM radiation codes to a uniform perturbation in SWV in the longwave spectral region. The choice of sampling rate in wave number space (Δν) in the LBL code is shown to be important for calculations of the instantaneous change in heating rate (ΔQ) and the instantaneous longwave radiative forcing (ΔFtrop). ΔQ varies by up to 50% for values of Δν spanning 5 orders of magnitude, and ΔFtrop varies by up to 10%. In the three less detailed codes, ΔQ differs by up to 45% at 100 hPa and 50% at 1 hPa compared with an LBL calculation. This causes differences of up to 70% in the equilibrium fixed dynamical heating temperature change due to the SWV perturbation. The stratosphere-adjusted radiative forcing differs by up to 96% across the less detailed codes. The results highlight an important source of uncertainty in quantifying and modeling the links between SWV trends and climate.
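The Δν sensitivity reported above is, at its core, a spectral-sampling (numerical quadrature) effect: integrating a finely structured absorption spectrum on progressively coarser wavenumber grids changes the integrated result. The sketch below is only a generic illustration of that effect; the spectrum, grid bounds and sampling intervals are invented for the example and are not taken from the study.

```python
import numpy as np

# Hypothetical high-resolution "spectrum" standing in for an LBL integrand;
# the functional form and grid are invented purely to illustrate sampling sensitivity.
base_step = 1e-3                                     # cm^-1
nu = np.arange(200.0, 1400.0, base_step)             # wavenumber grid
spectrum = 1.0 + 0.5 * np.sin(3.0 * nu) ** 8         # narrow "lines" on a smooth background

def integrate(dnu):
    """Trapezoidal integral of the spectrum sampled every dnu cm^-1."""
    stride = max(1, int(round(dnu / base_step)))
    x, y = nu[::stride], spectrum[::stride]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

reference = integrate(base_step)
for dnu in (1e-3, 1e-2, 1e-1, 1.0, 10.0):            # five orders of magnitude in sampling
    dev = 100.0 * (integrate(dnu) / reference - 1.0)
    print(f"sampling interval {dnu:6g} cm^-1 -> deviation from finest grid: {dev:+.3f}%")
```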
Abstract:
Alverata: a typeface design for Europe. This typeface is a response to the extraordinarily diverse forms of letters of the Latin alphabet in manuscripts and inscriptions in the Romanesque period (c. 1000–1200). While the Romanesque did provide inspiration for architectural lettering in the nineteenth century, these letterforms have not until now been systematically considered and redrawn as a working typeface. The defining characteristic of the Romanesque letterform is variety: within an individual inscription or written text, letters such as A, C, E and G might appear with different forms at each appearance. Some of these forms relate to earlier Roman inscriptional forms and are therefore familiar to us, but others are highly geometric and resemble insular and uncial forms. The research underlying the typeface involved the collection of a large number of references for lettering of this period, from library research and direct on-site investigation. This investigation traced the wide dispersal of the Romanesque lettering tradition across the whole of Europe. The variety of letter widths and weights encountered, as well as variant shapes for individual letters, offered both direct models and stylistic inspiration for the characters and for the width and weight variants of the typeface. The ability of the OpenType format to handle multiple stylistic variants of any one character has been exploited to reflect the multiplicity of forms available to stonecutters and scribes of the period. To make a typeface that functions in a contemporary environment, a lower case has been added, and formal and informal variants are supported. The pan-European nature of the Romanesque design tradition has inspired a pan-European approach to the character set of the typeface, allowing for text composition in all European languages, and the typeface has been extended into Greek and Cyrillic, so that the broadest representation of European languages can be achieved.
Abstract:
This chapter looks into the gap between presentational realism and the representation of physical experience in Werner Herzog's work so as to retrieve the indexical trace – or the absolute materiality of death. To that end, it draws links between Herzog and other directors aligned with realism in its various forms, including surrealism. In particular, it focuses on François Truffaut and Glauber Rocha, representing respectively the Nouvelle Vague and the Cinema Novo, whose works had a decisive influence on Herzog’s aesthetic choices, to the point of originating distinct phases of his output. The analyses, though restricted to a small number of films, intend to re-evaluate Herzog’s position within, and contribution to, film history.
Abstract:
A number of recent studies demonstrate that bilinguals with languages that differ in grammatical and lexical categories may shift their cognitive representation of those categories towards that of monolingual speakers of their second language. The current paper extended that investigation to the domain of colour in Greek–English bilinguals with different levels of bilingualism, and English monolinguals. Greek differentiates the blue region of colour space into a darker shade called ble and a lighter shade called ghalazio. Results showed a semantic shift of category prototypes with level of bilingualism and acculturation, while the way bilinguals judged the perceptual similarity between within- and cross-category stimulus pairs depended strongly on the availability of the relevant colour terms in semantic memory, and the amount of time spent in the L2-speaking country. These results suggest that cognition is tightly linked to semantic memory for specific linguistic categories, and to cultural immersion in the L2-speaking country.
Abstract:
Recent research shows that speakers of languages with obligatory plural marking (English) preferentially categorize objects based on common shape, whereas speakers of nonplural-marking classifier languages (Yucatec and Japanese) preferentially categorize objects based on common material. The current study extends that investigation to the domain of bilingualism. Japanese and English monolinguals, and Japanese–English bilinguals were asked to match novel objects based on either common shape or color. Results showed that English monolinguals selected shape significantly more than Japanese monolinguals, whereas the bilinguals shifted their cognitive preferences as a function of their second language proficiency. The implications of these findings for conceptual representation and cognitive processing in bilinguals are discussed.
Abstract:
Svalgaard (2014) has recently pointed out that the calibration of the Helsinki magnetic observatory’s H component variometer was probably in error in published data for the years 1866–1874.5, and that this makes the interdiurnal variation index based on daily means, IDV(1d) (Lockwood et al., 2013a), and the interplanetary magnetic field strength derived from it (Lockwood et al., 2013b), too low around the peak of solar cycle 11. We use data from the modern Nurmijarvi station, relatively close to the site of the original Helsinki Observatory, to confirm a 30% underestimation in this interval, and hence our results are fully consistent with the correction derived by Svalgaard. We show that the best method for recalibration uses the Helsinki Ak(H) and aa indices and is accurate to ±10%. This makes it preferable to recalibration using either the sunspot number or the diurnal range of geomagnetic activity, which we find to be accurate to ±20%. In the case of Helsinki data during cycle 11, the two recalibration methods produce very similar corrections, which are here confirmed using newly digitised data from the nearby St Petersburg observatory and also using declination data from Helsinki. However, we show that the IDV index is, compared to later years, too similar to the sunspot number before 1872, revealing that the independence of the two data series has been lost; either because the geomagnetic data used to compile IDV have been corrected using sunspot numbers, or vice versa, or both. We present corrected data sequences for both the IDV(1d) index and the reconstructed IMF (interplanetary magnetic field). We also analyse the relationship between the derived near-Earth IMF and the sunspot number and point out the relevance of the prior history of solar activity, in addition to the contemporaneous value, to estimating any “floor” value of the near-Earth interplanetary field.
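As a rough illustration of the overlap-based recalibration idea (scaling an under-calibrated variometer series so that it matches a trusted reference over a common interval, standing in here for the Ak(H)/aa-based method), the sketch below fits a single least-squares scale factor between two daily-mean series. The numbers are synthetic placeholders, not the Helsinki, Nurmijarvi or aa data.

```python
import numpy as np

def recalibration_factor(suspect, reference):
    """Least-squares scale factor k minimising ||k * suspect - reference||
    over an overlap interval (both series sampled on the same days)."""
    suspect = np.asarray(suspect, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.dot(suspect, reference) / np.dot(suspect, suspect))

# Invented daily means purely for illustration: a series that is ~30% too low.
rng = np.random.default_rng(0)
reference = np.array([12.0, 15.5, 11.2, 18.3, 14.7, 16.1, 13.4])
suspect = reference / 1.30 + rng.normal(0.0, 0.2, reference.size)

k = recalibration_factor(suspect, reference)
corrected = k * suspect
print(f"estimated correction factor: {k:.2f}")   # close to 1.30 for this synthetic case
```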
Abstract:
Model intercomparisons have identified important deficits in the representation of the stable boundary layer by turbulence parametrizations used in current weather and climate models. However, detrimental impacts of more realistic schemes on the large-scale flow have hindered progress in this area. Here we implement a total turbulent energy (TTE) scheme into the climate model ECHAM6. The TTE scheme considers the effects of Earth’s rotation and static stability on the turbulence length scale. In contrast to the previously used turbulence scheme, the TTE scheme also implicitly represents entrainment flux in a dry convective boundary layer. Reducing the previously exaggerated surface drag in stable boundary layers indeed causes an increase in southern hemispheric zonal winds and large-scale pressure gradients beyond observed values. These biases can be largely removed by increasing the parametrized orographic drag. Reducing the neutral-limit turbulent Prandtl number warms and moistens low-latitude boundary layers and acts to reduce longstanding radiation biases in the stratocumulus regions, the Southern Ocean and the equatorial cold tongue that are common to many climate models.
Abstract:
This paper proposes an efficient pattern extraction algorithm that can be applied to melodic sequences represented as strings of abstract intervallic symbols; the melodic representation introduces special “binary don’t care” symbols for intervals that may belong to two partially overlapping intervallic categories. As a special case, the well-established “step–leap” representation is examined. In the step–leap representation, each melodic diatonic interval is classified as a step (±s), a leap (±l) or a unison (u). Binary don’t care symbols are used to represent the possible overlap between the various abstract categories, e.g. * = s or l, and # = -s or -l. We propose an O(n+d(n-d)+z)-time algorithm for computing all maximal-pairs in a given sequence x=x[1..n], where x contains d occurrences of binary don’t cares and z is the number of reported maximal-pairs.
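A minimal sketch of the step–leap encoding described above, in Python. The category boundaries (which interval sizes count as steps, leaps, or the ambiguous case mapped to a binary don't-care symbol) are assumptions made for illustration only; the paper's own boundaries may differ.

```python
def step_leap_symbol(diatonic_interval):
    """Map a signed diatonic interval (in scale steps) to a step-leap symbol.

    Assumed categories (illustrative only):
      0       -> 'u'          (unison)
      +/-1    -> '+s' / '-s'  (step)
      +/-2    -> '*'  / '#'   (binary don't care: could be step or leap)
      larger  -> '+l' / '-l'  (leap)
    """
    if diatonic_interval == 0:
        return "u"
    sign = "+" if diatonic_interval > 0 else "-"
    size = abs(diatonic_interval)
    if size == 1:
        return sign + "s"
    if size == 2:
        return "*" if sign == "+" else "#"   # overlapping step/leap category
    return sign + "l"

melody_intervals = [1, 1, -2, 4, 0, -1, -3]
print("".join(step_leap_symbol(i) for i in melody_intervals))
# -> '+s+s#+lu-s-l' under the assumed category boundaries
```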
Abstract:
Due to advances in the manufacturing process of orthopedic prostheses, the need for better-quality shape-reading techniques (i.e. with less uncertainty) for the residual limb of amputees has become a challenge. Overcoming these problems means being able to obtain accurate geometric information about the limb and, consequently, better manufacturing processes for both transfemoral and transtibial prosthetic sockets. The key point is to customize these readings so as to be as faithful as possible to the real profile of each patient. Within this context, two prototype versions (α and β) of a 3D mechanical scanner for reading residual limb shape, based on reverse engineering techniques, were first designed. Prototype β is an improved version of prototype α, although it still works in analogue mode. Both prototypes are capable of producing a CAD representation of the limb via appropriate graphical sheets and were conceived to work purely by mechanical means. The first results were encouraging, achieving a large decrease in measurement uncertainty compared to traditional methods, which are very inaccurate and outdated; it is not unusual to see such archaic methods in use, exploring the limb's shape with ordinary household measuring tapes. Although prototype β improved the readings, it still required someone to transfer the plotted points (i.e. those marked on disk-shaped graphical sheets) to an academic CAD software package called OrtoCAD. This transfer is performed by manual typing, which is time consuming and offers very limited reliability. Furthermore, the number of coordinates obtained from the purely mechanical system is limited by the subdivisions of the graphical sheet (it records a point every 10 degrees with a resolution of one millimeter). These drawbacks were overcome in the second release of prototype β, in which an electronic variation of the reading-table components was developed, now capable of performing an automatic reading (i.e. with no human intervention, in digital mode). An interface software (i.e. a driver) was built to facilitate data transfer. Much better results were obtained, meaning a lower degree of uncertainty (it records a point every 2 degrees with a resolution of 1/10 mm). Additionally, an algorithm was proposed to convert the CAD geometry used by OrtoCAD to an appropriate format, enabling the use of rapid prototyping equipment and aiming at future automation of the manufacturing process of prosthetic sockets.
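To illustrate the kind of geometry conversion mentioned above, the sketch below turns per-slice polar readings (one radius every 2 degrees at successive heights along the limb, matching the resolution quoted above) into a 3D point cloud that downstream CAD or rapid-prototyping tools could triangulate. The function is an illustrative reconstruction under those assumptions, not the OrtoCAD conversion algorithm itself.

```python
import math

def slices_to_point_cloud(slices, angle_step_deg=2.0):
    """Convert per-slice polar radii into (x, y, z) points.

    `slices` is a list of (z_height_mm, radii_mm) pairs, where `radii_mm`
    holds one radius per angular position, sampled every `angle_step_deg`.
    """
    points = []
    for z, radii in slices:
        for k, r in enumerate(radii):
            theta = math.radians(k * angle_step_deg)
            points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

# Tiny synthetic example: two circular cross-sections 10 mm apart.
demo = [(0.0, [50.0] * 180), (10.0, [48.0] * 180)]   # 180 samples = one every 2 degrees
cloud = slices_to_point_cloud(demo)
print(len(cloud), "points; first point:", cloud[0])
```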
Abstract:
Economic dispatch (ED) problems have recently been solved by artificial neural network approaches. Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. The ability of neural networks to realize complex non-linear functions makes them attractive for system optimization. All ED models solved by neural approaches described in the literature fail to represent the transmission system. Therefore, such procedures may calculate dispatch policies which do not take into account important active power constraints. Another drawback pointed out in the literature is that some of the neural approaches fail to converge efficiently toward feasible equilibrium points. A modified Hopfield approach designed to solve ED problems with transmission system representation is presented in this paper. The transmission system is represented through linear load flow equations and constraints on active power flows. The internal parameters of such modified Hopfield networks are computed using the valid-subspace technique. These parameters guarantee the network's convergence to feasible equilibrium points, which represent the solution of the ED problem. Simulation results and a sensitivity analysis involving the IEEE 14-bus test system are presented to illustrate the efficiency of the proposed approach. (C) 2004 Elsevier Ltd. All rights reserved.
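A minimal sketch of the valid-subspace idea behind such a modified Hopfield network: linear equality constraints define a feasible affine subspace, and the network state is projected back onto it after every descent step toward lower generation cost. The cost coefficients, generator limits and the single power-balance constraint below are invented for illustration; the paper's formulation additionally includes linear load flow equations and line-flow limits.

```python
import numpy as np

# Illustrative quadratic cost  f(p) = sum(a_i p_i^2 + b_i p_i)  for 3 generators.
a = np.array([0.010, 0.012, 0.008])
b = np.array([2.0, 1.8, 2.2])
p_min, p_max = np.array([20.0, 20.0, 30.0]), np.array([150.0, 120.0, 180.0])

# Equality constraint A p = d (total generation equals a demand of 300 MW).
A = np.ones((1, 3))
d = np.array([300.0])

# Valid-subspace projection: p <- T p + s keeps the state on {p : A p = d}.
pseudo = A.T @ np.linalg.inv(A @ A.T)
T = np.eye(3) - pseudo @ A
s = (pseudo @ d).ravel()

p = np.full(3, 100.0)             # arbitrary starting state
for _ in range(2000):
    grad = 2.0 * a * p + b        # gradient of the generation cost
    p = p - 0.5 * grad            # Hopfield-like descent step
    p = T @ p + s                 # project back onto the valid subspace
    p = np.clip(p, p_min, p_max)  # enforce generator limits

print("dispatch:", np.round(p, 2), " total:", round(float(p.sum()), 2))
```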
Abstract:
The analysis of large amounts of data is better performed by humans when the data are represented in a graphical format. Therefore, a new research area called Visual Data Mining is being developed, endeavoring to use the number-crunching power of computers to prepare data for visualization, allied to the ability of humans to interpret data presented graphically. This work presents the results of applying a visual data mining tool, called FastMapDB, to detect the behavioral pattern exhibited by a dataset of clinical information about hemoglobinopathies known as thalassemia. FastMapDB is a visual data mining tool that takes tabular data stored in a relational database, such as dates, numbers and text, and, by considering them as points in a multidimensional space, maps them to a three-dimensional space. The intuitive three-dimensional representation of objects enables a data analyst to see the behavior of the characteristics of abnormal forms of hemoglobin, highlighting the differences when compared to data from a group without alteration.
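The mapping to three dimensions described above follows the spirit of the FastMap projection: pick a pair of distant pivot objects, project every object onto the line joining them using the cosine law, and repeat on the residual distances for each further axis. The sketch below is a generic FastMap implementation on numeric feature vectors, not FastMapDB's actual code.

```python
import numpy as np

def fastmap(X, k=3, seed=0):
    """Project rows of X (n objects x m numeric attributes) to k dimensions."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    coords = np.zeros((n, k))
    rng = np.random.default_rng(seed)

    def dist2(i, j, axis):
        # Squared distance in the residual space after `axis` projections.
        d2 = np.sum((X[i] - X[j]) ** 2) - np.sum((coords[i, :axis] - coords[j, :axis]) ** 2)
        return max(d2, 0.0)

    for axis in range(k):
        # Heuristic pivot choice: start anywhere, then jump to far-away objects.
        a = int(rng.integers(n))
        b = max(range(n), key=lambda j: dist2(a, j, axis))
        a = max(range(n), key=lambda j: dist2(b, j, axis))
        dab2 = dist2(a, b, axis)
        if dab2 == 0.0:
            break  # all residual distances vanished; remaining axes stay zero
        for i in range(n):
            # Cosine law: coordinate of object i on the line through the pivots.
            coords[i, axis] = (dist2(a, i, axis) + dab2 - dist2(b, i, axis)) / (2.0 * np.sqrt(dab2))
    return coords

# Tiny synthetic example: six objects with five numeric attributes mapped to 3-D.
X = np.random.default_rng(1).normal(size=(6, 5))
print(np.round(fastmap(X), 3))
```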
Abstract:
In this work we implement the spontaneous breaking of lepton number in version II of the 3-3-1 models and study its phenomenological consequences. The main result of this work is that our majoron is invisible even though it belongs to a triplet representation of the 3-3-1 symmetry.
Abstract:
In this Letter we show that, assuming (a) that the only left-handed neutral fermions are the active neutrinos, (b) that B - L is a gauge symmetry, and (c) that the L assignment is restricted to integer values, anomaly cancellation implies that at least three right-handed neutrinos must be added to the minimal representation content of the electroweak standard model. However, two types of models arise: (i) the usual one, in which each of the three identical right-handed neutrinos has total lepton number L = 1; (ii) another one, in which two of them carry L = 4 while the third carries L = -5. (C) 2009 Elsevier B.V. All rights reserved.
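A quick arithmetic check of the two L assignments mentioned above: for the remaining B - L anomalies to cancel, the right-handed neutrinos must reproduce the same linear and cubic lepton-number sums as three copies with L = 1, i.e. sum L = 3 and sum L^3 = 3, which both assignments do. The snippet only verifies that arithmetic; it is not a full anomaly computation.

```python
# Both assignments give the same linear and cubic sums, so either set of
# right-handed neutrinos cancels the remaining B-L anomalies.
for label, charges in (("L = 1, 1, 1", (1, 1, 1)), ("L = 4, 4, -5", (4, 4, -5))):
    linear = sum(charges)
    cubic = sum(c ** 3 for c in charges)
    print(f"{label}:  sum L = {linear},  sum L^3 = {cubic}")
# Output: both lines report  sum L = 3,  sum L^3 = 3
```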
Abstract:
Comparative genomic hybridization (CGH) was used to identify chromosomal imbalances in 19 samples of squamous cell carcinoma of the head and neck (HNSCC). The chromosome arms most often over-represented were 3q (48%), 8q (42%), and 7p (32%); in many cases, these changes were observed at high copy number. Other commonly over-represented sites were 1q, 2q, 6p, 6q, and 18q. The most frequently under-represented segments were 3p and 22q. Loss of heterozygosity of two polymorphic microsatellite loci from chromosome 22 was observed in two tongue tumors, in agreement with the CGH analysis. Gains of 1q and 2q material were detected in patients exhibiting a clinical history of recurrence and/or metastasis followed by terminal disease. This association suggests that gain of 1q and 2q may be a new marker of head and neck tumors with a refractory clinical response. (C) 2000 Elsevier B.V. All rights reserved.