986 results for Number sense
Abstract:
In an essay, "The Books of Last Things", Delia Falconer discusses the emergence of a new genre in publishing - microhistories. She cites a number of recent titles in non-fiction and fiction - Longitude, Cod, Tulips, Pushkin's Button, Nathaniel's Nutmeg, Zarafa, The Surgeon of Crowthorne, The Potato, The Perfect Storm. Delia Falconer observes of this tradition: "One has the sense, reading these books, of a surprising weight, of pleasant shock. In part, it is because we are looking at things which are generally present around us, but modestly out of sight and mind - historical nitty gritty like cod, potatoes, longitudinal clocks - which the authors have thrust suddenly, like a Biblical visitation of frogs or locusts, in our face. Things like spice and buttons and clocks are generally seen to enable history on the large scale, but are not often viewed as its worthy subjects. And by the same grand logic of history, more unusual phenomena like cabinets of curiosities or glass-making or farm lore or sailors' knots are simply odd blips on its radar screen, interesting footnotes. These new books, microhistories, reverse the usual order of history, which argues from the general to the particular, in order to prove its inevitable progress. They start from the footnotes. But by reversing the process, and walking through the back door of history, you don't necessarily end up at the front of the same house." Delia Falconer speculates about the reasons for the popularity of microhistories. She concludes: "I would like to think that reading them is not simply an exercise in nostalgia, but a challenge to the present". In Mauve, Simon Garfield provides a new way of thinking and writing about the history of intellectual property. Instead of providing a grand historical narrative of intellectual property, he tells the story of a particular invention, and its exploitation. Simon Garfield relates how English chemist William Perkin accidentally discovered a way to mass-produce the colour mauve in a factory. Working on a treatment for malaria in his London home laboratory, Perkin failed to produce artificial quinine. Instead he created a dark oily sludge that turned silk a beautiful light purple. The colour was unique and became the most desirable shade in the fashion houses of Paris and London. ... The book Mauve will have a number of contemporary resonances for intellectual property lawyers and academics. Simon Garfield emphasizes the difficulties inherent in commercialising an invention and managing intellectual property. He investigates the uneasy collaboration between industry and science. Simon Garfield suggests that complaints about the efficacy of patent offices are perennial. He also highlights the problems faced by courts and law-makers in accommodating new technologies within the logic of patent law. In his elegant microhistory of the colour mauve, Simon Garfield confirms the conclusion of Brad Sherman and Lionel Bently that many aspects of modern intellectual property law can only be understood through an understanding of the past: "The image of intellectual property law that developed during the 19th century and the narrative of identity which this engendered played and continue to play an important role in the way we think about and understand intellectual property law".
Abstract:
This research investigated the use of DNA fingerprinting to characterise the bacterium Streptococcus pneumoniae (pneumococcus), and hence gain insight into the development of new vaccines or antibiotics. Different bacterial DNA fingerprinting methods were studied, and a novel method was developed and validated that characterises the different cell coatings that pneumococci produce. This method was used to study the epidemiology of pneumococci in Queensland before and after the introduction of the current pneumococcal vaccine. The study demonstrated that pneumococcal disease is highly prevalent in children under four years, that the bacterium can 'switch' its cell coating to evade the vaccine, and that some DNA fingerprinting methods are more discriminatory than others. This has an impact on understanding which strains are more prone to cause invasive disease. These research findings have been published in high-impact, internationally refereed journals.
Abstract:
Objective We examined whether exposure to a greater number of fruits, vegetables, and noncore foods (ie, nutrient poor and high in saturated fats, added sugars, or added salt) at age 14 months was related to children’s preference for and intake of these foods as well as maternal-reported food fussiness and measured child weight status at age 3.7 years. Methods This study reports secondary analyses of longitudinal data from mothers and children (n=340) participating in the NOURISH randomized controlled trial. Exposure was quantified as the number of food items (n=55) tried by a child from specified lists at age 14 months. At age 3.7 years, food preferences, intake patterns, and fussiness (also at age 14 months) were assessed using established questionnaires completed by mothers. Child weight and length/height were measured by study staff at both age points. Multivariable linear regression models were tested to predict food preferences, intake patterns, fussy eating, and body mass index z score at age 3.7 years, adjusting for a range of maternal and child covariates. Results Having tried a greater number of vegetables, fruits, and noncore foods at age 14 months predicted corresponding preferences and higher intakes at age 3.7 years but did not predict child body mass index z score. Adjusting for fussiness at age 14 months, having tried more vegetables at age 14 months was associated with lower fussiness at age 3.7 years. Conclusions These prospective analyses support the hypothesis that early taste and texture experiences influence subsequent food preferences and acceptance. These findings indicate that introducing a variety of fruits and vegetables and limiting noncore food exposure from an early age are important strategies for improving later diet quality.
Early mathematical learning: Number processing skills and executive function at 5 and 8 years of age
Abstract:
This research investigated differences and associations in performance on number processing and executive function tasks for children attending primary school in a large Australian metropolitan city. In a cross-sectional study, 25 children in the first full-time year of school (Prep; mean age = 5.5 years) and 21 children in Year 3 (mean age = 8.5 years) completed three number processing tasks and three executive function tasks. Year 3 children consistently outperformed the Prep year children on measures of accuracy and reaction time on the tasks of number comparison, calculation, shifting, and inhibition, but not on number line estimation. The components of executive function (shifting, inhibition, and working memory) showed different patterns of correlation with performance on number processing tasks across the early years of school. Findings could be used to enhance teachers’ understanding of the role of the cognitive processes employed by children in numeracy learning, and so inform teachers’ classroom practices.
Abstract:
Pt/TiO2 sensitized by the inexpensive organic ortho-dihydroxyl-9,10-anthraquinone dyes, such as Alizarin and Alizarin Red, achieved a TON of approximately 10 000 (TOF > 250 h⁻¹ for the first ten hours) during >80 hours of visible light irradiation (>420 nm) for photocatalytic hydrogen evolution when triethanolamine was used as the sacrificial donor. The stability and activity enhancements can be attributed to the two highly serviceable redox reactions involving the 9,10-dicarbonyl and ortho-dihydroxyl groups of the anthracene ring, respectively.
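As a rough consistency check on the reported figures (the split of turnovers over time is an inference for illustration, not reported data), the average turnover rate implied by the cumulative TON is well below the initial TOF, consistent with the activity tapering off over the >80 hour run:

    \text{TOF}_{\text{avg}} \approx \frac{\text{TON}}{t} \approx \frac{10\,000}{80\ \text{h}} \approx 125\ \text{h}^{-1},
    \qquad
    \text{TOF}_{\text{first 10 h}} > 250\ \text{h}^{-1} \;\Rightarrow\; > 2500\ \text{turnovers by } t = 10\ \text{h}.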
Abstract:
Flow patterns and aerodynamic characteristics behind three side-by-side square cylinders have been found to depend on the unequal gap spacings (g1 = s1/d and g2 = s2/d) between the three cylinders and the Reynolds number (Re), studied here using the Lattice Boltzmann method. The effect of the Reynolds number on the flow behind the three cylinders is numerically studied for 75 ≤ Re ≤ 175 and chosen unequal gap spacings (g1, g2) = (1.5, 1), (3, 4) and (7, 6). We also investigate the effect of g2 while keeping g1 fixed for Re = 150. It is found that the Reynolds number has a strong effect on the flow at the small unequal gap spacing (g1, g2) = (1.5, 1.0). It is also found that the secondary cylinder interaction frequency contributes significantly at unequal gap spacings for all chosen Reynolds numbers. It is observed that at the intermediate unequal gap spacing (g1, g2) = (3, 4) the primary vortex shedding frequency plays a major role and the effect of the secondary cylinder interaction frequencies almost disappears. Some vortices merge near the exit and, as a result, a small modulation is found in the drag and lift coefficients. This means that increasing the Reynolds number and the unequal gap spacing weakens the wake interaction between the cylinders. At the large unequal gap spacing (g1, g2) = (7, 6) the flow is fully periodic and no small modulation is found in the drag and lift coefficient signals. It is found that the jet flows at unequal gap spacings strongly influence the wake interaction as the Reynolds number varies. These unequal gap spacings separate the wake patterns for different Reynolds numbers into flip-flopping, in-phase and anti-phase modulated synchronized, and in-phase and anti-phase synchronized. It is also observed that in the case of equal gap spacing between the cylinders the effect of gap spacing is stronger than that of the Reynolds number. On the other hand, in the case of unequal gap spacing between the cylinders the wake patterns depend strongly on both the unequal gap spacing and the Reynolds number. The vorticity contour visualization, time history analysis of the drag and lift coefficients, power spectrum analysis of the lift coefficient and force statistics are systematically discussed for all chosen unequal gap spacings and Reynolds numbers to fully understand this valuable and practical problem.
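The primary vortex shedding and secondary cylinder interaction frequencies referred to above are normally read off a power spectrum of the lift coefficient. Below is a minimal sketch of that post-processing step only, assuming a uniformly sampled lift-coefficient series; the time step, the synthetic signal and the nondimensionalisation (d = U = 1) are illustrative stand-ins, not output from the paper's Lattice Boltzmann solver.

    # Sketch: dominant frequency of a lift-coefficient signal via its power spectrum.
    # cl and dt are assumed inputs (e.g. sampled from a flow solver), not the paper's data.
    import numpy as np

    def shedding_frequency(cl, dt):
        """Return the frequency of the largest peak in the one-sided power spectrum."""
        cl = np.asarray(cl) - np.mean(cl)           # remove the mean before the FFT
        power = np.abs(np.fft.rfft(cl)) ** 2        # one-sided power spectrum
        freqs = np.fft.rfftfreq(len(cl), d=dt)
        return freqs[np.argmax(power[1:]) + 1]      # skip the zero-frequency bin

    # Synthetic example standing in for a lift-coefficient history
    dt = 0.05
    t = np.arange(0.0, 400.0, dt)
    cl = 0.8 * np.sin(2 * np.pi * 0.16 * t) + 0.05 * np.random.randn(t.size)
    f_primary = shedding_frequency(cl, dt)          # ~0.16; equals the Strouhal number when d = U = 1
    print(f_primary)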
Abstract:
A computed tomography number to relative electron density (CT-RED) calibration is performed when commissioning a radiotherapy CT scanner by imaging a calibration phantom with inserts of specified RED and recording the CT number displayed. In this work, CT-RED calibrations were generated using several commercially available phantoms to observe the effect of phantom geometry on conversion to electron density and, ultimately, the dose calculation in a treatment planning system. With an anthropomorphic phantom as the gold standard, the CT number of a material was found to depend strongly on the amount and type of scattering material surrounding the volume of interest, with the largest variation observed for the highest density material tested, cortical bone. Cortical bone gave a maximum CT number difference of 1,110 when a cylindrical insert of diameter 28 mm scanned free in air was compared to that in the form of a 30 × 30 cm² slab. The effect of using each CT-RED calibration on the planned dose to a patient was quantified using a commercially available treatment planning system. When all calibrations were compared to the anthropomorphic calibration, the largest percentage dose difference was 4.2 %, which occurred when the CT-RED calibration curve was acquired with the heterogeneity inserts removed from the phantom and scanned free in air. The maximum dose difference observed between two dedicated CT-RED phantoms was ±2.1 %. A phantom that is to be used for CT-RED calibrations must have sufficient water-equivalent scattering material surrounding the heterogeneous objects that are to be used for calibration.
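In a treatment planning system the CT-RED calibration is applied as a lookup curve from CT number to relative electron density. The sketch below shows that conversion by piecewise-linear interpolation; the calibration points are illustrative placeholders, not values measured with any of the phantoms in this study.

    # Sketch: convert CT numbers (HU) to relative electron density via a piecewise-linear
    # CT-RED calibration curve. Calibration points are illustrative, not measured data.
    import numpy as np

    ct_numbers  = np.array([-1000, -700,  -90,    0,   50,  300, 1200])   # HU of inserts (assumed)
    relative_ed = np.array([ 0.00, 0.29, 0.95, 1.00, 1.05, 1.28, 1.69])   # corresponding REDs (assumed)

    def ct_to_red(hu):
        """Interpolate relative electron density for the given CT number(s)."""
        return np.interp(hu, ct_numbers, relative_ed)

    print(ct_to_red([-1000, 40, 800]))   # e.g. air, soft-tissue and bone-like voxel values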
Abstract:
Copy number variations (CNVs) as described in the healthy population are purported to contribute significantly to genetic heterogeneity. Recent studies have described CNVs using lymphoblastoid cell lines or by applying specifically developed algorithms to interrogate previously described data. However, the full extent of CNVs remains unclear. Using a high-density SNP array, we have undertaken a comprehensive investigation of chromosome 18 for CNV discovery and characterisation of their distribution and association with chromosome architecture. We identified 399 CNVs, of which losses represent 98%, 58% are less than 2.5 kb in size and 71% are intergenic. Intronic deletions account for the majority of copy number changes with gene involvement. Furthermore, one-third of CNVs do not have putative breakpoints within repetitive sequences. We conclude that replicative processes, mediated either by repetitive elements or microhomology, account for the majority of CNVs in the healthy population. Genomic instability involving the formation of a non-B structure is demonstrated in one region.
Abstract:
Identity crime is argued to be one of the most significant crime problems of today. This paper examines identity crime through the attitudes and practices of a group of seniors in Queensland, Australia. It examines their own actions towards the protection of their personal data in response to a fraudulent email request. Applying the concept of a prudential citizen (one who is responsible for self-regulating their behaviour to maintain the integrity of their identity), it will be argued that seniors often expose identity information through their actions. However, this is demonstrated to be the result of flawed assumptions and misguided beliefs about the perceived risk and likelihood of identity crime, rather than a deliberate act. This paper concludes that, to protect seniors from identity crime, greater awareness of appropriate risk-management strategies towards disclosure of their personal details is required to reduce their inadvertent exposure to identity crime.
Abstract:
In order to understand the role of translational modes in orientational relaxation in dense dipolar liquids, we have carried out a computer "experiment" in which a random dipolar lattice was generated by quenching only the translational motion of the molecules of an equilibrated dipolar liquid. The lattice so generated was orientationally disordered and positionally random. A detailed study of orientational relaxation in this random dipolar lattice revealed interesting differences from that of the corresponding dipolar liquid. In particular, we found that the relaxation of the collective orientational correlation functions at intermediate wave numbers was markedly slower at long times for the random lattice than for the liquid. This verified the important role of the translational modes in this regime, as predicted recently by molecular theories. The single-particle orientational correlation functions of the random lattice also decayed significantly more slowly at long times, compared to those of the dipolar liquid.
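The single-particle orientational correlation function discussed here is conventionally the dipole autocorrelation C1(t) = <u_i(0)·u_i(t)>, averaged over molecules and time origins. The following is a minimal sketch of computing it from a stored trajectory of dipole unit vectors; the array shape and the synthetic data are assumptions for illustration, not the simulation described above.

    # Sketch: first-rank single-particle orientational correlation function
    # C1(t) = < u_i(0) . u_i(t) >, averaged over molecules and time origins.
    # The trajectory array u (frames x molecules x 3 unit vectors) is an assumed input.
    import numpy as np

    def c1(u, max_lag):
        """u: array of shape (n_frames, n_mol, 3) of dipole unit vectors."""
        n_frames = u.shape[0]
        out = np.empty(max_lag)
        for lag in range(max_lag):
            dots = np.einsum('fmi,fmi->fm', u[:n_frames - lag], u[lag:])
            out[lag] = dots.mean()               # average over time origins and molecules
        return out

    # Tiny synthetic example: uncorrelated random orientations give C1 that drops toward zero
    rng = np.random.default_rng(0)
    u = rng.normal(size=(200, 50, 3))
    u /= np.linalg.norm(u, axis=-1, keepdims=True)
    print(c1(u, 5))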
Abstract:
In this paper, a new high-precision focused word sense disambiguation (WSD) approach is proposed, which not only attempts to identify the proper sense for a word but also provides a probabilistic evaluation of the identification confidence at the same time. A novel Instance Knowledge Network (IKN) is built to generate and maintain semantic knowledge at the word, type synonym set and instance levels. Related algorithms based on graph matching are developed to train IKN with probabilistic knowledge and to use IKN for probabilistic word sense disambiguation. Based on the Senseval-3 all-words task, we run extensive experiments to show the performance enhancements in different precision ranges and the rationality of probability-based automatic confidence evaluation of disambiguation. We combine our WSD algorithm with the five best WSD algorithms in the Senseval-3 all-words task. The results show that the combined algorithms all outperform the corresponding original algorithms.
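The abstract does not spell out how the algorithms are combined; purely as an illustration of one simple way to combine sense predictions that carry confidence scores, a confidence-weighted vote could look like the sketch below (sense labels and scores are hypothetical, not the paper's combination rule).

    # Illustrative confidence-weighted vote over per-algorithm WSD predictions.
    # This is NOT the paper's combination rule, which the abstract does not specify;
    # sense labels and confidence values below are hypothetical.
    from collections import defaultdict

    def combine(predictions):
        """predictions: list of (sense_label, confidence) pairs, one per WSD algorithm."""
        scores = defaultdict(float)
        for sense, confidence in predictions:
            scores[sense] += confidence          # accumulate confidence per candidate sense
        return max(scores, key=scores.get)       # sense with the highest combined score

    print(combine([("bank/river", 0.70), ("bank/finance", 0.60), ("bank/river", 0.55)]))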
Abstract:
Doppler weather radars with fast scanning rates must estimate spectral moments based on a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios are considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low SNR and/or high spectrum-width situations. Several variants of a vector pulse-pair processor are postulated and an algorithm is developed for the resolution of phase angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
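The pulse-pair processing referred to here is conventionally the lag-one autocovariance estimator, v = -(λ / 4πT) · arg R(T). Below is a minimal sketch of that estimator applied to a short complex series (16 samples, as in the paper); the wavelength, pulse repetition time and synthetic echo samples are illustrative assumptions rather than the paper's vector pulse-pair variants.

    # Sketch: conventional pulse-pair mean Doppler velocity estimate from complex echo
    # samples, v = -(lambda / (4*pi*T)) * arg(R(T)), with R(T) the lag-one autocovariance.
    # Radar wavelength, pulse repetition time and the 16-sample series are illustrative.
    import numpy as np

    def pulse_pair_velocity(z, wavelength, prt):
        """z: complex echo samples from one range gate; returns mean radial velocity in m/s."""
        r1 = np.mean(np.conj(z[:-1]) * z[1:])        # lag-one autocovariance estimate
        return -wavelength / (4.0 * np.pi * prt) * np.angle(r1)

    wavelength, prt = 0.10, 1.0e-3                   # 10 cm radar, 1 ms pulse repetition time
    v_true = 7.5                                     # m/s, inside the +/- 25 m/s Nyquist interval
    n = np.arange(16)
    z = np.exp(-1j * 4 * np.pi * v_true * prt * n / wavelength) \
        + 0.1 * (np.random.randn(16) + 1j * np.random.randn(16))
    print(pulse_pair_velocity(z, wavelength, prt))   # close to v_true at this SNR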
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, achieved by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts also are excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, a simulator which would enable at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
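A minimal sketch in the spirit of the simulator described above: independent random number sequences are band-limited, scaled to give the chosen power in the delta, alpha and beta bands, and summed into a 'stationary EEG' record. Butterworth band-pass filters stand in for Zetterberg's rational transfer functions, and the sampling rate, band edges and gains are assumptions rather than the paper's parameters.

    # Sketch: filtered, scaled and summed random number sequences as a stand-in for the
    # described EEG simulator. Butterworth band-pass filters approximate the rational
    # transfer functions; sampling rate, band edges and gains are assumed values.
    import numpy as np
    from scipy.signal import butter, lfilter

    fs, duration = 128, 25                       # Hz, seconds (25 s record as in the abstract)
    rng = np.random.default_rng(1)

    bands = {"delta": (0.5, 4.0, 2.0),           # (low Hz, high Hz, relative amplitude)
             "alpha": (8.0, 13.0, 1.0),
             "beta":  (13.0, 30.0, 0.5)}

    eeg = np.zeros(fs * duration)
    for low, high, gain in bands.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        noise = rng.standard_normal(eeg.size)    # independent random number sequence per band
        eeg += gain * lfilter(b, a, noise)       # filtered, scaled band component

    print(eeg[:5])                               # first few samples of the simulated record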