976 results for HIGH-FIELD
Abstract:
The genetic structure of rice tungro bacilliform virus (RTBV) populations within and between growing sites was analyzed in a collection of natural field isolates from different rice varieties grown in eight tungro-endemic sites of the Philippines. Total DNA extracts from 345 isolates were digested with the EcoRV restriction enzyme and hybridized with a full-length probe of RTBV, a procedure shown in preliminary experiments to be capable of revealing high levels of polymorphism in RTBV field isolates. In the total population, 17 distinct EcoRV-based genome profiles (genotypes) were identified and used as indicators for virus diversity. Distinct sets of genotypes occurred in Isabela and North Cotabato provinces, suggesting geographic isolation of the virus populations. However, among the sites in each province, there were few significant differences in the genotype compositions of virus populations. The number of genotypes detected at a site varied from two to nine, with a few genotypes dominating. In general, the isolates at a site persisted from season to season, indicating genetic stability of the local virus population. Over the sampling time, IRRI rice varieties, which have green leafhopper resistance genes, supported virus populations similar to those supported by other varieties, indicating that the host variety exerted no apparent selection pressure. Insect transmission experiments on selected RTBV field isolates showed that dramatic shifts in genotype and phenotype distributions can occur in response to host/environmental shifts.
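Per-site genotype tallies of the kind described here lend themselves to simple diversity summaries (genotype richness and a dominance-sensitive index). The Python sketch below is purely illustrative: the site names and counts are hypothetical, not data from the study.

```python
from collections import Counter

def simpson_diversity(genotype_counts):
    """Simpson's diversity index 1 - sum(p_i^2) for a list of per-genotype counts."""
    n = sum(genotype_counts)
    return 1.0 - sum((c / n) ** 2 for c in genotype_counts)

# Hypothetical per-site genotype tallies (labels G1..G17 stand in for EcoRV profiles)
sites = {
    "site_A": Counter({"G1": 30, "G2": 10, "G5": 3}),
    "site_B": Counter({"G1": 25, "G3": 12, "G7": 6, "G9": 2}),
}

for name, counts in sites.items():
    richness = len(counts)  # number of distinct genotypes observed at the site
    diversity = simpson_diversity(list(counts.values()))
    print(f"{name}: richness={richness}, Simpson diversity={diversity:.2f}")
```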
Abstract:
Vector field visualisation is one of the classic sub-fields of scientific data visualisation. The need for effective visualisation of flow data arises in many scientific domains, ranging from medical sciences to aerodynamics. Though there has been much research on the topic, the question of how to communicate flow information effectively in real, practical situations is still largely an unsolved problem. This is particularly true for complex 3D flows. In this presentation we give a brief introduction and background to vector field visualisation and comment on the effectiveness of the most common solutions. We will then give some examples of current development on texture-based techniques, and give practical examples of their use in CFD research and hydrodynamic applications.
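As background for readers unfamiliar with texture-based techniques, line integral convolution (LIC) is one of the most widely used of them: a random noise texture is smeared along streamlines of the vector field so that coherent streaks reveal the flow structure. The following is a minimal, illustrative 2D Python sketch, not the implementation discussed in the presentation.

```python
import numpy as np

def lic(vx, vy, noise, length=15):
    """Minimal line integral convolution: average a noise texture along streamlines."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            total, count = noise[i, j], 1
            # trace forwards and backwards along the local flow direction
            for sign in (+1.0, -1.0):
                x, y = float(j), float(i)
                for _ in range(length):
                    u, v = vx[int(y), int(x)], vy[int(y), int(x)]
                    norm = np.hypot(u, v)
                    if norm < 1e-9:
                        break
                    x += sign * u / norm
                    y += sign * v / norm
                    if not (0 <= x < w and 0 <= y < h):
                        break
                    total += noise[int(y), int(x)]
                    count += 1
            out[i, j] = total / count
    return out

# toy circular flow on a 128x128 grid
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
img = lic(-yy, xx, np.random.rand(128, 128))
```

Real implementations use sub-pixel integrators, fast-LIC variants or GPU texture advection; the sketch only conveys the principle.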
Abstract:
Purpose: To examine the relationship between visual impairment and functional status in a community-dwelling sample of older adults with glaucoma. Methods: This study included 74 community-dwelling older adults with open-angle glaucoma (aged 74 ± 6 years). Assessment of central vision included high-contrast visual acuity and Pelli-Robson contrast sensitivity. Binocular integrated visual fields were derived from merged monocular Humphrey Field Analyser visual field plots. Functional status outcome measures included physical performance tests (6-min walk test, timed up and go test and lower limb strength), a physical activity questionnaire (Physical Activity Scale for the Elderly) and an overall functional status score. Correlation and linear regression analyses, adjusting for age and gender, examined the association between visual impairment and functional status outcomes. Results: Greater levels of visual impairment were significantly associated with lower levels of functional status among community-dwelling older adults with glaucoma, independent of age and gender. Specifically, lower levels of visual function were associated with slower timed up and go performance, weaker lower limb strength, lower self-reported physical activity, and lower overall functional status scores. Of the components of vision examined, the inferior visual field and contrast factors were the strongest predictors of these functional outcomes, whereas the superior visual field factor was not related to functional status. Conclusions: Greater visual impairment, particularly in the inferior visual field and loss of contrast sensitivity, was associated with poorer functional status among older adults with glaucoma. The findings of this study highlight the potential links between visual impairment and the onset of functional decline. Interventions which promote physical activity among older adults with glaucoma may assist in preventing functional decline, frailty and falls, and improve overall health and well-being.
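For concreteness, the age- and gender-adjusted association analyses described above correspond to an ordinary linear regression with the vision measure and the covariates as predictors. A minimal sketch using statsmodels follows; the column names and the synthetic stand-in data are assumptions for illustration, not the study's dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names (TUG time, inferior field loss, age,
# gender) are hypothetical labels, not the actual study variables.
rng = np.random.default_rng(0)
n = 74
df = pd.DataFrame({
    "inferior_field_loss_db": rng.normal(10, 4, n),
    "age": rng.normal(74, 6, n),
    "gender": rng.choice(["F", "M"], n),
})
df["tug_time"] = 6 + 0.2 * df["inferior_field_loss_db"] + 0.05 * df["age"] + rng.normal(0, 1, n)

# Functional outcome regressed on the vision measure, adjusted for age and gender
model = smf.ols("tug_time ~ inferior_field_loss_db + age + C(gender)", data=df).fit()
print(model.summary())
```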
Abstract:
BACKGROUND: There has been some difficulty getting standard laboratory rats to voluntarily consume large amounts of ethanol without the use of initiation procedures. It has previously been shown that standard laboratory rats will voluntarily consume high levels of ethanol if given intermittent access to 20% ethanol in a 2-bottle-choice setting [Wise, Psychopharmacologia 29 (1973), 203]. In this study, we have further characterized this drinking model. METHODS: Ethanol-naïve Long-Evans rats were given intermittent access to 20% ethanol (three 24-hour sessions per week). No sucrose fading was needed and water was always available ad libitum. Ethanol consumption, preference, and long-term drinking behaviors were investigated. Furthermore, to pharmacologically validate the intermittent-access 20% ethanol drinking paradigm, the efficacy of acamprosate and naltrexone in decreasing ethanol consumption was compared with that in groups given continuous access to 10 or 20% ethanol, respectively. Additionally, ethanol consumption was investigated in Wistar and out-bred alcohol-preferring (P) rats following intermittent access to 20% ethanol. RESULTS: The intermittent-access 20% ethanol 2-bottle-choice drinking paradigm led standard laboratory rats to escalate their ethanol intake over the first 5 to 6 drinking sessions, reaching stable baseline consumption of high amounts of ethanol (Long-Evans: 5.1 ± 0.6; Wistar: 5.8 ± 0.8 g/kg/24 h). Furthermore, the cycles of excessive drinking and abstinence led to an increase in ethanol preference and increased efficacy of both acamprosate and naltrexone in Long-Evans rats. P rats initiated drinking at a higher level than both Long-Evans and Wistar rats using the intermittent-access 20% ethanol paradigm and showed a trend toward a further escalation in ethanol intake over time (mean ethanol intake: 6.3 ± 0.8 g/kg/24 h). CONCLUSION: Standard laboratory rats will voluntarily consume ethanol using the intermittent-access 20% ethanol drinking paradigm without the use of any initiation procedures. This model promises to be a valuable tool in the alcohol research field.
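For orientation, intake in g/kg/24 h is derived from the volume of 20% (v/v) ethanol solution consumed, the density of ethanol (about 0.789 g/ml) and the animal's body weight. A minimal sketch with illustrative numbers, not data from the study:

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # g/ml at room temperature

def ethanol_intake_g_per_kg(volume_ml, ethanol_fraction, body_weight_kg):
    """Grams of pure ethanol consumed per kg body weight for one 24-h session."""
    grams_ethanol = volume_ml * ethanol_fraction * ETHANOL_DENSITY_G_PER_ML
    return grams_ethanol / body_weight_kg

# e.g. a 0.45 kg rat drinking 15 ml of 20% (v/v) ethanol over 24 h
print(ethanol_intake_g_per_kg(15.0, 0.20, 0.45))  # ~5.3 g/kg/24 h
```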
Abstract:
PURPOSE: To examine the visual predictors of falls and injurious falls among older adults with glaucoma. METHODS: Prospective falls data were collected for 71 community-dwelling adults with primary open-angle glaucoma, mean age 73.9 ± 5.7 years, for one year using monthly falls diaries. Baseline assessment of central visual function included high-contrast visual acuity and Pelli-Robson contrast sensitivity. Binocular integrated visual fields were derived from monocular Humphrey Field Analyser plots. Rate ratios (RR) for falls and injurious falls with 95% confidence intervals (CIs) were based on negative binomial regression models. RESULTS: During the one year follow-up, 31 (44%) participants experienced at least one fall and 22 (31%) experienced falls that resulted in an injury. Greater visual impairment was associated with increased falls rate, independent of age and gender. In a multivariate model, more extensive field loss in the inferior region was associated with higher rate of falls (RR 1.57, 95%CI 1.06, 2.32) and falls with injury (RR 1.80, 95%CI 1.12, 2.98), adjusted for all other vision measures and potential confounding factors. Visual acuity, contrast sensitivity, and superior field loss were not associated with the rate of falls; topical beta-blocker use was also not associated with increased falls risk. CONCLUSIONS: Falls are common among older adults with glaucoma and occur more frequently in those with greater visual impairment, particularly in the inferior field region. This finding highlights the importance of the inferior visual field region in falls risk and assists in identifying older adults with glaucoma at risk of future falls, for whom potential interventions should be targeted. KEY WORDS: glaucoma, visual field, visual impairment, falls, injury
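The falls rate ratios reported above are the exponentiated coefficients of a negative binomial regression. A minimal sketch of that type of model using statsmodels follows; the variable names and synthetic data are assumptions for illustration (not the authors' code), and the dispersion parameter is held fixed rather than estimated in this simplified form.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data; column names are hypothetical.
rng = np.random.default_rng(1)
n = 71
df = pd.DataFrame({
    "inferior_field_loss": rng.normal(8, 3, n),
    "age": rng.normal(74, 6, n),
    "male": rng.integers(0, 2, n),
    "followup_days": np.full(n, 365),
})
df["n_falls"] = rng.poisson(np.exp(-2.0 + 0.1 * df["inferior_field_loss"]))

# Negative binomial GLM for fall counts, with follow-up time as exposure
X = sm.add_constant(df[["inferior_field_loss", "age", "male"]])
model = sm.GLM(df["n_falls"], X,
               family=sm.families.NegativeBinomial(),
               exposure=df["followup_days"]).fit()

# Exponentiated coefficients are rate ratios; conf_int() gives the 95% CIs
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```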
Abstract:
Barreto-Lynn-Scott (BLS) curves are a stand-out candidate for implementing high-security pairings. This paper shows that particular choices of the pairing-friendly search parameter give rise to four subfamilies of BLS curves, all of which offer highly efficient and implementation-friendly pairing instantiations. Curves from these particular subfamilies are defined over prime fields that support very efficient towering options for the full extension field. The coefficients for a specific curve and its correct twist are automatically determined without any computational effort. The choice of an extremely sparse search parameter is immediately reflected by a highly efficient optimal ate Miller loop and final exponentiation. As a resource for implementors, we give a list with examples of implementation-friendly BLS curves through several high-security levels.
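For context (not specific to the four subfamilies identified in the paper), BLS curves are parameterised by polynomials in the search parameter x; for the familiar embedding degree k = 12 case the standard parameterisation is

\[
\begin{aligned}
r(x) &= x^4 - x^2 + 1,\\
p(x) &= \tfrac{(x-1)^2\, r(x)}{3} + x,\\
t(x) &= x + 1,
\end{aligned}
\qquad E/\mathbb{F}_p:\; y^2 = x^3 + b,
\]

so a very sparse (low Hamming weight) choice of x directly shortens the optimal ate Miller loop and simplifies the final exponentiation. Higher-security BLS families (e.g. k = 24) follow the same pattern with higher-degree polynomials.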
Abstract:
Demands for delivering high instantaneous power in a compressed form (pulse shape) have widely increased during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) from an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to the decomposition process, there have recently been requests for pulsed power in other areas such as the combination of molecules (i.e. fusion, material joining), radiation generation (i.e. electron beams, lasers, and radar), explosions (i.e. concrete recycling), and wastewater, exhaust gas, and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids); in some cases, to form the plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas that need a flexible pulse modulator to vary pulse features in order to investigate the influence of these variations on the application. In addition, there is the need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators which utilise different accumulation techniques, including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as spark gaps and hydrogen thyratrons) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably low rise times. However, they also suffer from serious drawbacks such as their low efficiency, reliability and repetition rate, and their short life span. Being bulky, heavy and expensive are other disadvantages associated with these devices. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which, in some cases, cannot satisfy the application’s requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators have been considered as the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives. • To innovate and develop novel solid-state based topologies for pulsed power generation • To improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structure and control algorithms. The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. As a result of this study, it appears that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies have been undertaken. A variable current source was considered to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values were produced at the output. Thereby, based on the evidence acquired from this examination, switching-transient effects on pulse rise time were ruled out. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in common paths of units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled in simplified form and simulation studies were carried out with these simplified models. Experimental assessments were also executed on the implemented hardware, and the results verified the initial analysis. Reports on the details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated gate bipolar transistors) instead of magnetic/gas switching devices. Resistive insulators previously used in their structures are substituted by diode rectifiers to adjust MGs for proper voltage sharing.
However, despite utilizing solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Considering a number of charging techniques, the resonance phenomenon is adopted in a proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment at which the current conducted through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. Insulation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of required fast switching devices in both designs is reduced to half of the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved; this leads to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all relevant analysis and approaches for these topologies.
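The voltage doubling and reduced switching loss mentioned above follow from the textbook behaviour of ideal LC resonant charging (a standard idealisation, not a result taken from the thesis): charging a capacitor C from a DC source V_in through an inductor L, starting from rest, gives

\[
v_C(t) = V_{\mathrm{in}}\bigl(1 - \cos\omega_0 t\bigr), \qquad
i(t) = V_{\mathrm{in}}\sqrt{\tfrac{C}{L}}\,\sin\omega_0 t, \qquad
\omega_0 = \frac{1}{\sqrt{LC}},
\]

so the capacitor voltage peaks at 2V_in at t = π√(LC), exactly when the charging current passes through zero; triggering the switches at that instant is what keeps the switching losses low.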
Abstract:
Diminished student interest in science, technology, engineering and mathematics (STEM) is recognised by educators, researchers and public policy makers as a concerning global trend. Inviting stakeholders like scientists and industry specialists to discuss their work is one means schools use to facilitate student engagement in the sciences. However, these visits generally comprise one-off sessions with minimal relevance to students’ particular and ongoing learning needs. This case study investigated coteaching and cogenerative dialoguing with parents in teaching a Year-8 multidisciplinary unit with science and technology foci. Two parents cotaught alongside the resident teacher and researcher over eight months. This paper concentrates on one parent, a medical scientist by profession. Data sources included video and audio recordings of cogenerative dialogues and classroom interactions, student work samples and journal entries. Data were interrogated using the sociological constructs of fields and capitals and the dialectic of structure|agency. The findings reveal how (a) the parent’s science and technology knowledge was tailored to the students’ needs initially and continually and (b) student-generated data indicated enhanced engagement in science and technology. The research speaks to schools and governments about enhancing STEM education by furthering collaborative relationships with relevant stakeholders.
Abstract:
For many people, a relatively large proportion of daily exposure to a multitude of pollutants may occur inside an automobile. A key determinant of exposure is the amount of outdoor air entering the cabin (i.e. air change or flow rate). We have quantified this parameter in six passenger vehicles ranging in age from 18 years to <1 year, at three vehicle speeds and under four different ventilation settings. Average infiltration into the cabin with all operable air entry pathways closed was between 1 and 33.1 air changes per hour (ACH) at a vehicle speed of 60 km/h, and between 2.6 and 47.3 ACH at 110 km/h, with these results representing the most (2005 Volkswagen Golf) and least air-tight (1989 Mazda 121) vehicles, respectively. Average infiltration into stationary vehicles parked outdoors varied between ~0 and 1.4 ACH and was moderately related to wind speed. Measurements were also performed under an air recirculation setting with low fan speed, while airflow rate measurements were conducted under two non-recirculating ventilation settings with low and high fan speeds. The windows were closed in all cases, and over 200 measurements were performed. The results can be applied to estimate pollutant exposure inside vehicles.
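Air change rates of this kind are commonly estimated from the exponential decay of a tracer gas inside the cabin. The sketch below shows that calculation under that assumption; the method details and numbers are illustrative, not necessarily those used in the study.

```python
import math

def air_changes_per_hour(c0, c1, ambient, elapsed_minutes):
    """Estimate ACH from tracer-gas decay: C(t) - C_amb = (C0 - C_amb) * exp(-ACH * t)."""
    hours = elapsed_minutes / 60.0
    return math.log((c0 - ambient) / (c1 - ambient)) / hours

# e.g. cabin CO2 falling from 3000 ppm to 1200 ppm (ambient 400 ppm) in 10 minutes
print(air_changes_per_hour(3000.0, 1200.0, 400.0, 10.0))  # ~7.1 ACH
```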
Abstract:
Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data. This paper serves to illustrate the usefulness of these models in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, treated depth, the basis for the regression model, as measured with error, and fitted CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and the fixed terms, which involve an errors-in-measurement model, treat depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
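For reference, the conditional autoregressive prior underlying these models specifies each spatial effect given its neighbours. With binary neighbourhood weights, n_i neighbours of site i, and spatial variance σ², the conditional distribution is typically written

\[
\phi_i \mid \phi_{-i} \;\sim\; \mathcal{N}\!\left( \frac{\rho}{n_i}\sum_{j \sim i}\phi_j,\; \frac{\sigma^2}{n_i} \right),
\]

with ρ = 1 corresponding to the intrinsic CAR implemented by WinBUGS's car.normal distribution; in the layered CAR model a separate spatial variance component of this form is fitted at each depth.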
Abstract:
The underlying objective of this study was to develop a novel approach to evaluate the potential for commercialisation of a new technology. More specifically, this study examined the 'ex-ante' evaluation of the technology transfer process. For this purpose, a technology originating from the high technology sector was used. The technology relates to the application of software for the detection of weak signals from space, which is an established method of signal processing in the field of radio astronomy. This technology has the potential to be used in commercial and industrial areas other than astronomy, such as detecting water leakages in pipes. Its applicability to detecting water leakage was chosen owing to several problems with detection in the industry as well as the impact it can have on saving water in the environment. This study, therefore, will demonstrate the importance of interdisciplinary technology transfer. The study employed both technical and business evaluation methods including laboratory experiments and the Delphi technique to address the research questions. There are several findings from this study. Firstly, scientific experiments were conducted and these resulted in a proof of concept stage of the chosen technology. Secondly, validation as well as refinement of criteria from the literature that can be used for 'ex-ante' evaluation of technology transfer has been undertaken. Additionally, after testing the chosen technology's overall transfer potential using the modified set of criteria, it was found that the technology is still in its early stages and will require further development for it to be commercialised. Furthermore, a final evaluation framework was developed encompassing all the criteria found to be important. This framework can help in assessing the overall readiness of the technology for transfer as well as in recommending a viable mechanism for commercialisation. On the whole, the commercial potential of the chosen technology was tested through expert opinion, thereby focusing on the impact of a new technology and the feasibility of alternate applications and potential future applications.
Abstract:
A simple phenomenological model for the relationship between structure and composition of the high-Tc cuprates is presented. The model is based on two simple crystal chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Significant HTSC properties can be explained consistently without any additional assumptions, while retaining valuable insight for geometric interpretation. Combining these two chemical principles with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d8 and d9) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through the application of Hund’s rules. An indication of this model’s capacity to predict physical properties for HTSC is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.
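For orientation, the standard crystal-field picture behind this argument is the splitting of the five d orbitals in regular octahedral coordination,

\[
\underbrace{d_{xy},\, d_{xz},\, d_{yz}}_{t_{2g}\ (\text{lower})}
\quad\text{and}\quad
\underbrace{d_{z^2},\, d_{x^2-y^2}}_{e_g\ (\text{upper})},
\qquad \Delta_{\mathrm{oct}} = 10\,Dq .
\]

A tetragonal elongation of the octahedron (the Jahn-Teller-type distortion favoured by d9 configurations such as Cu2+) splits the e_g pair further, lowering d_{z^2} and raising d_{x^2-y^2}, so these are the d-levels that diverge most strongly with deformation from regular octahedral coordination.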
Abstract:
Positive and negative small ions, aerosol ion and number concentration and dc electric fields were monitored at an overhead high-voltage power line site. We show that the emission of corona ions was not spatially uniform along the lines and occurred from discrete components such as a particular set of spacers. Maximum ion concentrations and atmospheric dc electric fields were observed at a point 20 m downwind of the lines. It was estimated that less than 7% of the total number of aerosol particles was charged. The electrical parameters decreased steadily with further downwind distance but remained significantly higher than background.
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to the users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involves evaluation of the performances of both RTK and PPP techniques using: i) a state-of-the-art dual frequency GPS receiver; and ii) a low-cost single frequency GNSS receiver. Additionally, this research evaluates the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments were designed and conducted for each research task. Firstly, the performances of the RTK and PPP techniques were evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 meters in static tests and 0.2 to 0.3 meters in kinematic tests, while PPP reported 0.5 to 1.5 meters in static and 1 to 1.8 meters in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. The professional grade (dual frequency) and mass-market grade (single frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis has shown that mass-market grade receivers provide good solution continuity, although the overall positioning accuracy is worse than that of the professional grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results have shown that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format, compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network.
The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission are 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remains at the same level. Additionally, due to the nature of UDP transmission, it was also found that 0.17% of UDP packets were lost during the kinematic tests, but this loss does not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests have also shown that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 meters. The results showed that the position accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
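The RMS precision figures quoted for the static and kinematic tests are of the kind computed from per-epoch east/north residuals against a reference trajectory. A minimal sketch of that computation follows; the variable names and numbers are illustrative, not taken from the study's processing.

```python
import numpy as np

def horizontal_rms(east_err_m, north_err_m):
    """RMS of horizontal position errors (metres) from per-epoch east/north residuals."""
    east = np.asarray(east_err_m)
    north = np.asarray(north_err_m)
    return np.sqrt(np.mean(east**2 + north**2))

# e.g. residuals of an RTK solution against a reference trajectory
east = np.array([0.05, -0.08, 0.12, -0.03])
north = np.array([-0.07, 0.04, -0.10, 0.06])
print(f"horizontal RMS = {horizontal_rms(east, north):.2f} m")  # ~0.11 m
```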