Abstract:

In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. Concrete properties are thought to be highly influenced by the type of water used and its proportion in the mix, yet little is known about the effects of different, alternative water sources on concrete mix design. The identification of the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is therefore becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate, and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, and investigates the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. Complete recycling of washout water has been considered for concrete mixing plants because of the great benefit in reducing waste disposal costs and in environmental conservation. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program at three representative sites across South East Queensland, chosen to represent a cross-section of plant recycling methods from most effective to least effective. The washout water samples collected from each site were analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998. These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH, and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. The results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the Multi-Criteria Decision Making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided washout water suitable for the manufacture of concrete. These results were compared with a series of alternative water sources, including treated effluent, sea water and dam water, which were subjected to the same testing parameters as the reference set. Analysis of these results found that, despite having higher levels of both organic and inorganic constituents, these waters complied with the parameter thresholds given in ASTM C913-08. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.
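As a rough illustration of the chemometric ranking step described above, the following sketch orders a handful of hypothetical water samples by their score on the first principal component. The sample names, measured parameters and values are invented for illustration and are not data from the study, which applied PCA, SIMCA, PROMETHEE and GAIA to the full 10-week data set.

    import numpy as np

    # Hypothetical measurements: rows = samples, columns = pH, alkalinity (mg/L), TDS (%)
    samples = ["tap water", "site A", "site B", "site C"]
    X = np.array([
        [7.2,   60.0, 0.05],
        [11.8, 310.0, 1.90],
        [12.3, 450.0, 4.80],
        [12.6, 520.0, 6.50],
    ])

    # Standardise each parameter, then project onto the first principal component.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Xs, full_matrices=False)
    pc1_axis = vt[0] if vt[0, 2] > 0 else -vt[0]   # orient so higher TDS gives a higher score
    scores = Xs @ pc1_axis

    # Rank samples from cleanest (lowest score) to least clean (highest score).
    for score, name in sorted(zip(scores, samples)):
        print(f"{name}: PC1 score = {score:+.2f}")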

Abstract:

People suffering from pain due to osteoarthritic or rheumatoid changes in the joints are still waiting for a better treatment. Although some studies have achieved success in repairing small cartilage defects, there is no widely accepted method for complete repair of osteochondral defects. Joint replacements have also not yet succeeded in replacing natural cartilage without complications. Therefore, there is room for a new medical approach that outperforms currently used methods. The aim of this study is to show the potential of a tissue engineering approach for regeneration of osteochondral defects. A critical review of currently used methods for treatment of osteochondral defects is also provided. In this study, two kinds of hybrid scaffolds developed in Hutmacher's group have been analysed. The first biphasic scaffold consists of fibrin and PCL: the fibrin serves as the cartilage phase while the porous PCL scaffold acts as the subchondral phase. The second system comprises PCL and PCL-TCP. The scaffolds were fabricated via fused deposition modeling, a rapid prototyping technique. Bone marrow-derived mesenchymal cells were isolated from New Zealand White rabbits, cultured in vitro and seeded into the scaffolds. Bone regeneration in the subchondral phases was quantified via micro-CT analysis, and the results demonstrated the potential of the porous PCL and PCL-TCP scaffolds in promoting bone healing. Fibrin was found to be lacking in this respect as it degrades rapidly. The porous PCL scaffold, on the other hand, degrades slowly and hence provides effective mechanical support. This study shows that, in the field of cartilage repair or replacement, tissue engineering may have a big impact in the future. In vivo bone and cartilage engineering, combining a novel composite biphasic scaffold technology with MSCs, has shown high potential for knee defect regeneration in animal models. However, the clinical application of tissue engineering requires further research to address several problems, such as scaffold design, cellular delivery and implantation strategies.

Abstract:

Matrix metalloproteinases (MMPs) play a key role in osteoarthritis (OA) development. The aim of the present study was to investigate whether the cross-talk between subchondral bone osteoblasts (SBOs) and articular cartilage chondrocytes (ACCs) in OA alters the expression and regulation of MMPs, and to test the potential involvement of the mitogen-activated protein kinase (MAPK) signalling pathway in this process.

Abstract:

Identification of hot spots, also known as sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and that additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors' knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks for sites that require further analysis. This work demonstrates the four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
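The idea of a ranking-consistency check can be illustrated with a toy calculation. The sketch below is not one of the paper's four tests; it simply computes, for simulated two-period crash counts, the share of sites ranked in the top 10% in one period that are ranked there again in the next, with all site risks and counts invented for illustration.

    import numpy as np

    def top_rank_consistency(counts_1, counts_2, top_frac=0.10):
        """Fraction of sites in the top `top_frac` by crash count in period 1
        that appear in the top `top_frac` again in period 2 (illustrative only)."""
        n_top = max(1, int(len(counts_1) * top_frac))
        top_1 = set(np.argsort(counts_1)[::-1][:n_top])
        top_2 = set(np.argsort(counts_2)[::-1][:n_top])
        return len(top_1 & top_2) / n_top

    rng = np.random.default_rng(0)
    latent_risk = rng.gamma(shape=2.0, scale=1.5, size=500)   # unobserved site risk
    period_1 = rng.poisson(latent_risk)                       # observed crashes, period 1
    period_2 = rng.poisson(latent_risk)                       # observed crashes, period 2

    print(f"top-10% ranking consistency: {top_rank_consistency(period_1, period_2):.2f}")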

Abstract:

A substantial body of research has focused on understanding the relationships of socio-demographics, land-use characteristics, and mode-specific attributes with travel mode choice and time-use patterns. Residential and commercial densities, inter-mixing of land uses, and route directness interact with transportation performance characteristics to influence accessibility to destinations as well as time spent traveling and engaging in activities. This study uniquely examines the activity durations undertaken for out-of-home subsistence, maintenance, and discretionary activities. Also examined are total tour durations (summing all activity categories within a tour). Cross-sectional activity data are obtained from a household activity travel survey of the Atlanta Metropolitan Region. Time durations allocated to weekdays and weekends are compared. The censoring and the endogeneity between activity categories and within individuals are captured using multiple-equation Tobit models. The analysis and modeling reveal that land-use characteristics such as net residential density and the number of commercial parcels within a kilometer of a residence are associated with differences in weekday and weekend time-use allocations. Household type and structure are significant predictors across the three activity categories, but not for overall travel times. Tour characteristics such as time of day and the primary travel mode of the tour also affect travelers' out-of-home activity-tour time-use patterns.
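As a hedged sketch of how a censored (Tobit) duration model can be estimated, the code below fits a single-equation Tobit by maximum likelihood to simulated activity durations that are left-censored at zero. The single covariate, parameter values and use of SciPy are assumptions for illustration; the study itself estimated multiple-equation Tobit models on the survey data.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    # Simulated daily discretionary-activity durations (minutes); people who
    # undertake no such activity are observed at zero (left-censoring).
    n = 2000
    x = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
    beta_true, sigma_true = np.array([20.0, 30.0]), 40.0
    y = np.clip(x @ beta_true + rng.normal(scale=sigma_true, size=n), 0.0, None)

    def neg_loglik(params):
        beta, sigma = params[:-1], np.exp(params[-1])
        mu = x @ beta
        ll_pos = norm.logpdf(y, loc=mu, scale=sigma)   # uncensored observations
        ll_zero = norm.logcdf(-mu / sigma)             # observations censored at zero
        return -np.where(y > 0, ll_pos, ll_zero).sum()

    start = np.array([1.0, 1.0, np.log(y[y > 0].std())])
    res = minimize(neg_loglik, start, method="BFGS")
    beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
    print("estimated beta:", beta_hat.round(2), " sigma:", round(sigma_hat, 2))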

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process consists of Bernoulli trials with unequal, independent event probabilities, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how the crash process gives rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
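A minimal version of the kind of simulation described above is sketched below, under assumed (invented) exposures and risks: each segment's crash count arises from Poisson trials, i.e. many Bernoulli trials with small, unequal probabilities, and the observed share of zero-count segments is compared with what a single Poisson distribution fitted to the overall mean would predict.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical road segments: each vehicle passage is an independent Bernoulli
    # trial with a small, segment-specific crash probability (Poisson trials).
    n_segments = 5000
    exposure = rng.integers(50, 500, size=n_segments)                 # vehicles observed (low exposure)
    p_crash = np.minimum(rng.gamma(0.5, 4e-4, size=n_segments), 1.0)  # heterogeneous crash risk
    crashes = rng.binomial(exposure, p_crash)

    # Compare the observed share of zero-crash segments with the zero probability
    # implied by a single Poisson distribution having the same overall mean.
    observed_zeros = (crashes == 0).mean()
    poisson_zeros = np.exp(-crashes.mean())
    print(f"observed zero fraction : {observed_zeros:.3f}")
    print(f"Poisson-implied zeros  : {poisson_zeros:.3f}")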

Abstract:

Purpose. To investigate the effect of various presbyopic vision corrections on nighttime driving performance on a closed-road driving circuit. Methods. Participants were 11 presbyopes (mean age, 57.3 ± 5.8 years), with a mean best sphere distance refractive error of R +0.23 ± 1.53 DS and L +0.20 ± 1.50 DS, whose only experience of wearing a presbyopic vision correction was reading spectacles. The study involved a repeated-measures design in which a participant's nighttime driving performance was assessed on a closed-road circuit while wearing each of four power-matched vision corrections. These included single-vision distance lenses (SV), progressive-addition spectacle lenses (PAL), monovision contact lenses (MV), and multifocal contact lenses (MTF CL), worn in a randomized order. Measures included low-contrast road hazard detection and avoidance, road sign and near target recognition, lane-keeping, driving time, and legibility distance for street signs. Eye movement data (fixation duration and number of fixations) were also recorded. Results. Street sign legibility distances were shorter when wearing MV and MTF CL than SV and PAL (P < 0.001), and participants drove more slowly with MTF CL than with PAL (P = 0.048). Wearing SV resulted in more errors (P < 0.001) and in more (P = 0.002) and longer (P < 0.001) fixations when responding to near targets. Fixation duration was also longer when viewing distant signs with MTF CL than with PAL (P = 0.031). Conclusions. Presbyopic vision corrections worn by naive, unadapted wearers affected nighttime driving. Overall, the spectacle corrections (PAL and SV) performed well for distance driving tasks, but SV negatively affected viewing near dashboard targets. MTF CL resulted in the shortest legibility distance for street signs and longer fixation times.

Abstract:

This paper describes a thorough thermal study of a fleet of DC traction motors which were found to suffer from overheating after 3 years of full operation. Overheating of these traction motors is attributed partly to the higher-than-expected number of starts and stops between train terminals. Another probable cause of overheating is the design of the traction motor and/or its control strategy. According to the motor manufacturer, a current shunt is permanently connected across the motor field winding. Hence, some of the armature current is bypassed into the current shunt, and the motor then runs above its rated speed in the field-weakening mode. In this study, a finite difference model has been developed to simulate the temperature profile at different parts inside the traction motor. In order to validate the simulation results, a vehicle loaded with drums of water was used to simulate experimentally the full payload of a light rail vehicle. The authors report that the simulation results agree reasonably well with the experimental data, and that the armature of the traction motor is likely to run cooler if its field shunt is disconnected at low speeds.
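A minimal sketch of an explicit finite-difference thermal model is given below, assuming a 1-D slab with uniform internal heat generation and surfaces held at ambient temperature. The material properties, loss density and geometry are illustrative guesses, not values from the study.

    import numpy as np

    alpha = 1.1e-5                 # thermal diffusivity, m^2/s (assumed)
    k = 45.0                       # thermal conductivity, W/(m K) (assumed)
    q_gen = 5.0e6                  # volumetric loss density, W/m^3 (assumed copper/iron losses)
    length, nx = 0.05, 51          # 50 mm slab discretised into 51 nodes
    dx = length / (nx - 1)
    dt = 0.4 * dx**2 / alpha       # satisfies the explicit stability limit (alpha*dt/dx^2 <= 0.5)

    T = np.full(nx, 40.0)          # initial temperature, deg C (ambient)
    for _ in range(int(3600 / dt)):                        # simulate one hour of operation
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2       # second spatial derivative
        T[1:-1] += dt * (alpha * lap + alpha * q_gen / k)  # conduction + internal generation
        T[0] = T[-1] = 40.0                                # surfaces held at ambient (crude cooling)

    print(f"peak temperature after 1 h: {T.max():.1f} deg C")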

Abstract:

Background: There is no global definition of a heatwave because local acclimatisation and adaptation influence the impact of extreme heat. Even at a local level there can be multiple heatwave definitions, based on varying temperature levels or time periods. We investigated the relationship between heatwaves and health outcomes using ten different heatwave definitions in Brisbane, Australia.

Methodology/Principal Findings: We used daily data on climate, air pollution, and emergency hospital admissions in Brisbane between January 1996 and December 2005; and mortality between January 1996 and November 2004. Case-crossover analyses were used to assess the relationship between each of the ten heatwave definitions and health outcomes. During heatwaves there was a statistically significant increase in emergency hospital admissions for all ten definitions, with odds ratios ranging from 1.03 to 1.18. A statistically significant increase in the odds ratios of mortality was also found for eight definitions. The size of the heat-related impact varied between definitions.

Conclusions/Significance: Even a small change in the heatwave definition had an appreciable effect on the estimated health impact. It is important to identify an appropriate definition of heatwave locally and to understand its health effects in order to develop appropriate public health intervention strategies to prevent and mitigate the impact of heatwaves.
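To illustrate how the choice of definition changes what counts as a heatwave, the sketch below flags heatwave days in a synthetic daily maximum-temperature series under two assumed threshold-and-duration definitions; these thresholds and the data are illustrative and are not the ten definitions used in the study.

    import numpy as np

    def heatwave_days(tmax, threshold, min_run):
        """Flag days belonging to runs of at least `min_run` consecutive days
        with maximum temperature at or above `threshold`."""
        hot = tmax >= threshold
        flags = np.zeros(len(tmax), dtype=bool)
        run_start = None
        for i, is_hot in enumerate(hot):
            if is_hot and run_start is None:
                run_start = i
            if (not is_hot or i == len(hot) - 1) and run_start is not None:
                end = i + 1 if is_hot else i
                if end - run_start >= min_run:
                    flags[run_start:end] = True
                run_start = None
        return flags

    rng = np.random.default_rng(7)
    days = 90
    tmax = 31 + 5 * np.sin(np.linspace(0, 3 * np.pi, days)) + rng.normal(0, 1.5, days)

    # Two of many possible definitions; they flag different numbers of days.
    for threshold, min_run in [(35, 2), (37, 3)]:
        n = int(heatwave_days(tmax, threshold, min_run).sum())
        print(f">= {threshold} C for >= {min_run} consecutive days: {n} heatwave days")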

Abstract:

This study investigated preservice teachers' perceptions of teaching and sustaining gifted and talented (GAT) students while developing, modifying and implementing activities to cater for the diverse learner. Participants (n = 22) were surveyed at the end of a gifted and talented education program on their perceptions of differentiating the curriculum to meet student needs. SPSS analysis of the five-point Likert scale data indicated that these preservice teachers agreed or strongly agreed they had developed skills in curriculum planning (91%) with well-designed activities (96%), and lesson preparation skills (96%). They also claimed they were enthusiastic about teaching (91%) and understood school practices and policies (96%). However, only 46% agreed they had knowledge of syllabus documents, and 50% claimed an ability to provide written feedback on students' learning. Furthermore, nearly two-thirds suggested they had acquired educational language from the syllabus and effective student management strategies. Preservice teachers require more direction on how to cater for diversity and begin creating sustainable societies by building knowledge from direct GAT experiences. Diagnostic surveys associated with university coursework can be used to determine areas for further, specific preservice teacher development in GAT education. Preservice teachers need to create opportunities for students to realise their potential by providing cognitive challenges through a differentiated curriculum. Differentiation requires modification of four primary areas of curriculum development (Maker, 1975): content (what we teach), process (how we teach), product (what we expect the students to do or show) and learning environment (where we teach/our class culture). Ashman and Elkins (2009) and Glasson (2008) emphasise the need for preservice teachers, teachers and other professionals to be able to identify what GAT students know and how they learn in relation to effective teaching. Glasson (2008) recommends that educators keep up to date with practices in pedagogy, support, monitoring and profiling of GAT students to create an environment conducive to achievement. Oral feedback is one method of communicating to learners about their progress, but it has advantages and disadvantages for some students. Oral feedback provides immediate information to the student on progress and performance (Ashman & Elkins, 2009); however, preservice teachers must have a clear understanding of key concepts to assist the GAT student. Implementing teaching strategies to engage, innovate and extend students is valuable to the preservice teacher in focusing on GAT student learning in the classroom (Killen, 2007). Practical teaching strategies (Harris & Hemming, 2008; Tomlinson et al., 1994) facilitate diverse ways of assisting GAT students to achieve learning outcomes. Such strategies include activities to enhance creativity, co-operative learning and problem solving (Chessman, 2005; NSW Department of Education and Training, 2004; Taylor & Milton, 2006), helping GAT students develop a sense of identity, belonging and self-esteem on the way to becoming autonomous learners. Preservice teachers need to understand that GAT students learn in a different way and therefore should be assessed differently. Assessment can take diverse forms that allow students to demonstrate their competence and their understanding of the material in a way that highlights their natural abilities (Glasson, 2008; Mack, 2008).
Preservice teachers are often unprepared to assess students' understanding, but this may be overcome with teacher education training promoting effective communication and collaboration in the classroom, including the provision of a variety of assessment strategies to improve teaching and learning (Callahan et al., 2003; Tomlinson et al., 1994). It is also critical that preservice teachers have enthusiasm for teaching, to demonstrate inclusion, involvement and the excitement to communicate to GAT students in the learning process (Baum, 2002). Evaluating and reflecting on teaching practices must be part of a preservice teacher's repertoire for GAT education. Evaluating teaching practices can assist in further enhancing student learning (Mayer, 2008). Evaluation gauges the success or otherwise of specific activities and of teaching in general (Mayer, 2008), and ensures that preservice teachers and teachers are well prepared and maintain their commitment to their students and the community. Long and Harris (1999) advocate that reflective practices assist teachers in creating improvements in educational practices. Reflective practices help preservice teachers and teachers to improve their ability to pursue improved learning outcomes and professional growth (Long & Harris, 1999).

Context: This study is set at a small regional campus of a large university in Queensland. As a way to address departmental policies and the need to prepare preservice teachers for engaging a diverse range of learners (see Queensland College of Teachers, Professional Standards for Teachers, 2006), preservice teachers at this campus completed four elective units within their Bachelor of Education (primary) degree. The electives include: 1. Middle years students and schools; 2. Teaching strategies for engaging learners; 3. Teaching students with learning difficulties; and 4. Middle-years curriculum, pedagogy and assessment. In the university-based component of this unit, preservice teachers engaged in learning about middle years students and schools, and gained knowledge of government policies pertaining to GAT students. Further explored within this unit was the importance of: collaboration between teachers, parents/carers and school personnel in supporting middle years GAT students; incorporating challenging learning experiences that promote higher order thinking and problem-solving skills; real-world learning experiences for students; and the alignment and design of curriculum, pedagogy and assessment that is relevant to the students' development, interests and needs. The participants were third-year Bachelor of Education (primary) preservice teachers who were completing an elective unit as part of their middle years of schooling studies, with a focus on GAT students. They were each assigned one student from a local school. In the six subsequent ninety-minute weekly lessons, the preservice teachers were responsible for designing learning activities that would engage and extend the GAT students. Furthermore, preservice teachers made decisions about suitable pedagogical approaches and designed the assessment task to align with the curriculum and the developmental needs of their middle years GAT student. This research aims to describe preservice teachers' perceptions of their education for teaching gifted and talented students.

Abstract:

This session is titled TRANSFORM! Opportunities and Challenges of Digital Content for Creative Economy. Some of the key concepts for this session include: 1. City/Economy, 2. Creativity, 3. Digital content, and 4. Transformation. All of us would agree that these terms describe pertinent characteristics of the contemporary world, the epithet of which is the 'network era.' I was thinking about what I would like to discuss here and what you, leading experts in divergent fields, would be interested to hear about. As the keynote for this session and as one of the first speakers for the entire conference, I see my role as an initiator for imagination, the wilder the better, posing questions rather than answers. Also, given the session title Transform!, I wish to change this slightly to Transforming People, Place, and Technology: Towards Re-creative City, in an attempt to take us away a little from the usual image depicted by the given topic. Instead, I intend to sketch a more holistic picture by reflecting on and extrapolating the four key concepts from the urban informatics point of view. To do so, I use 'city' as the primary guiding concept for my talk rather than the probably more expected 'digital media' or 'creative economy.' You may wonder what I mean by re-creative city. I will explain this in time by looking at the key concepts from these four respective angles: 1. Living city, 2. Creative city, 3. Re-creative city, and 4. Opportunities and Challenges, to arrive at a speculative yet probable image of the city that we may aspire to transform our current cities into. First, let us start by considering the 'living city.'

Abstract:

The role of particular third sector organisations, Social Clubs, in supporting gambling through the use of EGMs in venues presents a difficult social issue. Social Clubs gain revenue from gambling activities, but also contribute to social well-being through the provision of services to communities. The revenues derived from gambling in specific geographic locales have been seen by government as a way to increase economic development, particularly in deprived areas. However, there are also concerns about the accessibility of low-income citizens to Electronic Gaming Machines (EGMs) and the high overall level of gambling in these deprived areas. We argue that social capital can be viewed as a guard against the deleterious effects of unconstrained EGM gambling in communities. However, it is contended that social capital may also be destroyed by gambling activity if commercial business actors are able to use EGMs without community obligations to service provision. This paper examines access to gambling through EGMs and its relationship to social capital, and the consequent effect on community resilience, via an Australian case study. The results highlight the potential two-way relationship between gambling and volunteering: volunteering (and social capital more generally) may help protect against the problems of gambling, but volunteering as an activity may also be damaged by increased gambling activity. This suggests that, regardless of the direction of causation, it is necessary to build up social capital via volunteering and other social capital activities in areas where EGMs are concentrated. The study concludes that Social Clubs using EGMs to derive funds are uniquely positioned within the community to develop programs that foster social capital creation and build community resilience in deprived areas.

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
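The oscillation-counting temperature estimate described above reduces to a short calculation: one full intensity oscillation corresponds to a 2π change in the retardation 2πLΔn/λ, so the temperature change per oscillation is λ/(L·|d(Δn)/dT|). The wavelength, crystal length, thermo-optic coefficient and oscillation count below are assumptions chosen only to illustrate the arithmetic, not values taken from the thesis.

    # Illustrative estimate of temperature change from counted intensity oscillations.
    wavelength = 633e-9            # m, assumed HeNe probe beam
    crystal_length = 10e-3         # m, assumed path length through the LiNbO3 crystal
    d_birefringence_dT = 4.0e-5    # 1/K, assumed |d(delta n)/dT|

    # One oscillation <=> the optical path difference L*delta_n changes by one wavelength.
    dT_per_oscillation = wavelength / (crystal_length * d_birefringence_dT)

    n_oscillations = 12            # e.g. counted from the detector trace
    print(f"temperature change per oscillation: {dT_per_oscillation:.2f} K")
    print(f"estimated total change: {n_oscillations * dT_per_oscillation:.1f} K")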
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.

Abstract:

Since its initial proposal in 1998, alkaline hydrothermal processing has rapidly become an established technology for the production of titanate nanostructures. This simple, highly reproducible process has gained a strong research following since its conception. However, complete understanding and elucidation of nanostructure phase and formation have not yet been achieved. Without fully understanding phase, formation, and other important competing effects of the synthesis parameters on the final structure, the maximum potential of these nanostructures cannot be obtained. Therefore this study examined the influence of synthesis parameters on the formation of titanate nanostructures produced by alkaline hydrothermal treatment. The parameters included alkaline concentration, hydrothermal temperature, the precursor material's crystallite size, and the phase of the titanium dioxide precursor (TiO2, or titania). The nanostructures' phase and morphology were analysed using X-ray diffraction (XRD), Raman spectroscopy and transmission electron microscopy. X-ray photoelectron spectroscopy (XPS), dynamic light scattering (non-invasive backscattering), nitrogen sorption, and Rietveld analysis were used for phase determination, particle sizing, surface area determination, and establishing phase concentrations, respectively. This project rigorously examined the effect of alkaline concentration and hydrothermal temperature on three commercially sourced and two self-prepared TiO2 powders. These precursors consisted of pure- or mixed-phase anatase and rutile polymorphs, and were selected to cover a range of phase concentrations and crystallite sizes. Typically, these precursors were treated with 5–10 M sodium hydroxide (NaOH) solutions at temperatures between 100–220 °C. Both nanotube and nanoribbon morphologies could be produced depending on the combination of these hydrothermal conditions. Both titania and titanate phases are composed of TiO6 units which are assembled in different combinations. The arrangement of these atoms affects the binding energy between the Ti–O bonds. Raman spectroscopy and XPS were therefore employed in a preliminary study of phase determination for these materials. The change from titania to titanate binding energies was investigated, and the transformation of the titania precursor into nanotubes and titanate nanoribbons was directly observed by these methods. Evaluation of the Raman and XPS results indicated a strengthening in the binding energies of both the Ti (2p3/2) and O (1s) bands, which correlated with an increase in strength and a decrease in resolution of the characteristic nanotube doublet observed between 320 and 220 cm-1 in the Raman spectra of these products. The effect of phase and crystallite size on nanotube formation was examined over a series of temperatures (100–200 °C in 20 °C increments) at a set alkaline concentration (7.5 M NaOH). These parameters were investigated by employing both pure- and mixed-phase precursors of anatase and rutile. This study indicated that both the crystallite size and phase affect nanotube formation, with rutile requiring a greater driving force (essentially 'harsher' hydrothermal conditions) than anatase to form nanotubes, while larger crystallite forms of the precursor also appeared to impede nanotube formation slightly. These parameters were further examined in later studies.
The influence of alkaline concentration and hydrothermal temperature was systematically examined for the transformation of Degussa P25 into nanotubes and nanoribbons, and exact conditions for nanostructure synthesis were determined. Correlation of these data sets resulted in the construction of a morphological phase diagram, which is an effective reference for nanostructure formation. This morphological phase diagram effectively provides a 'recipe book' for the formation of titanate nanostructures. Morphological phase diagrams were also constructed for larger, near phase-pure anatase and rutile precursors, to further investigate the influence of hydrothermal reaction parameters on the formation of titanate nanotubes and nanoribbons. The effects of alkaline concentration, hydrothermal temperature, and crystallite phase and size are observed when the three morphological phase diagrams are compared. Through the analysis of these results it was determined that alkaline concentration and hydrothermal temperature affect nanotube and nanoribbon formation independently through a complex relationship, where nanotubes are primarily affected by temperature, whilst nanoribbons are strongly influenced by alkaline concentration. Crystallite size and phase also affected nanostructure formation. Smaller precursor crystallites formed nanostructures at reduced hydrothermal temperature, and rutile displayed a slower rate of precursor consumption compared to anatase, with incomplete conversion observed for most hydrothermal conditions. The incomplete conversion of rutile into nanotubes was examined in detail in the final study. This study selectively examined the kinetics of precursor dissolution in order to understand why rutile converted incompletely. This was achieved by selecting a single hydrothermal condition (9 M NaOH, 160 °C) under which nanotubes are known to form from both anatase and rutile, and quenching the synthesis after 2, 4, 8, 16 and 32 hours. The influence of precursor phase on nanostructure formation was explicitly determined to be due to different dissolution kinetics: anatase exhibited zero-order dissolution and rutile second-order. This difference in kinetic order cannot be simply explained by the variation in crystallite size, as the inherent surface areas of the two precursors were determined to have first-order relationships with time. Therefore, the crystallite size (and inherent surface area) does not affect the overall kinetic order of dissolution; rather, it determines the rate of reaction. Finally, nanostructure formation was found to be controlled by the availability of dissolved titanium (Ti4+) species in solution, which is mediated by the dissolution kinetics of the precursor.
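The contrast between the two dissolution behaviours follows directly from the integrated rate laws: zero-order dissolution gives C = C0 - k*t, whereas second-order dissolution gives 1/C = 1/C0 + k*t. The sketch below compares the two over the quench times used in the study; the rate constants are invented purely for illustration and are not fitted values from the thesis.

    import numpy as np

    t = np.array([0, 2, 4, 8, 16, 32], dtype=float)   # hours, matching the quench series
    c0 = 1.0                                          # normalised precursor concentration

    k0 = 0.03                                         # assumed zero-order rate, fraction/h
    c_zero_order = np.clip(c0 - k0 * t, 0.0, None)    # C = C0 - k*t (anatase-like behaviour)

    k2 = 0.08                                         # assumed second-order rate, 1/(fraction h)
    c_second_order = c0 / (1.0 + k2 * c0 * t)         # C = C0 / (1 + k*C0*t) (rutile-like)

    for ti, cz, cs in zip(t, c_zero_order, c_second_order):
        print(f"t = {ti:4.0f} h   zero-order: {cz:.2f}   second-order: {cs:.2f}")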

Abstract:

Tobacco yellow dwarf virus (TbYDV, family Geminiviridae, genus Mastrevirus) is an economically important pathogen causing summer death and yellow dwarf disease in bean (Phaseolus vulgaris L.) and tobacco (Nicotiana tabacum L.), respectively. Prior to the commencement of this project, little was known about the epidemiology of TbYDV, its vector and its host-plant range. As a result, disease control strategies have been restricted to regular, poorly timed insecticide applications which are largely ineffective, environmentally hazardous and expensive. In an effort to address this problem, this PhD project was carried out to better understand the epidemiology of TbYDV, to identify its host-plants and vectors, and to characterise the population dynamics and feeding physiology of the main insect vector and other possible vectors. The host-plants and possible leafhopper vectors of TbYDV were assessed over three consecutive growing seasons at seven field sites in the Ovens Valley, Northeastern Victoria, on commercial tobacco and bean growing properties. Leafhoppers and plants were collected and tested for the presence of TbYDV by PCR. Using sweep nets, twenty-three leafhopper species were identified at the seven sites, with Orosius orientalis the predominant leafhopper. Of the 23 leafhopper species screened for TbYDV, only Orosius orientalis and Anzygina zealandica tested positive. Forty-two different plant species were also identified at the seven sites and tested. Of these, TbYDV was only detected in four dicotyledonous species, Amaranthus retroflexus, Phaseolus vulgaris, Nicotiana tabacum and Raphanus raphanistrum. Using a quadrat survey, the temporal distribution and diversity of vegetation at four of the field sites were monitored in order to assess the presence of, and changes in, potential host-plants for the leafhopper vector(s) and the virus. These surveys showed that plant composition and the climatic conditions at each site were the major influences on vector numbers, virus presence and the subsequent occurrence of tobacco yellow dwarf and bean summer death diseases. Forty-two plant species were identified from all sites, and it was found that the sites with the lowest incidence of disease had the highest proportion of monocotyledonous plants, which are non-hosts for both the vector and the virus. In contrast, the sites with the highest disease incidence had more host-plant species for both vector and virus, and experienced higher temperatures and less rainfall. It is likely that these climatic conditions forced the leafhopper to move into the irrigated commercial tobacco and bean crops, resulting in disease. In an attempt to understand leafhopper species diversity and abundance in and around the field borders of commercially grown tobacco crops, leafhoppers were collected from four field sites using three different sampling techniques, namely pan traps, sticky traps and sweep nets. Over 51,000 leafhopper samples were collected, comprising 57 species from 11 subfamilies and 19 tribes. Twenty-three leafhopper species were recorded for the first time in Victoria, in addition to several economically important pest species of crops other than tobacco and bean. The highest number and greatest diversity of leafhoppers were collected in yellow pan traps, followed by sticky traps and sweep nets. Orosius orientalis was the most abundant leafhopper collected from all sites, with the greatest numbers of this leafhopper also caught using the yellow pan traps.
Using the three sampling methods mentioned above, the seasonal distribution and population dynamics of O. orientalis were studied at four field sites over three successive growing seasons. The population dynamics of the leafhopper were characterised by trimodal peaks of activity, occurring in the spring and summer months. Although O. orientalis was present in large numbers early in the growing season (September-October), TbYDV was only detected in these leafhoppers between late November and the end of January. The peak in the detection of TbYDV in O. orientalis correlated with the observation of disease symptoms in tobacco and bean, and was also associated with warmer temperatures and lower rainfall. To understand the feeding requirements of Orosius orientalis and to enable screening of potential control agents, a chemically defined artificial diet (designated PT-07) and feeding system were developed. This novel diet formulation allowed O. orientalis to survive for up to 46 days, including complete development from first instar through to adulthood. The effect of three selected plant-derived proteins, cowpea trypsin inhibitor (CpTi), Galanthus nivalis agglutinin (GNA) and wheat germ agglutinin (WGA), on leafhopper survival and development was assessed. Both GNA and WGA were shown to reduce leafhopper survival and development significantly when incorporated at a 0.1% (w/v) concentration. In contrast, CpTi at the same concentration did not exhibit significant antimetabolic properties. Based on these results, GNA and WGA are potentially useful antimetabolic agents for expression in genetically modified crops to improve the management of O. orientalis, TbYDV and the other pathogens it vectors. Finally, an electrical penetration graph (EPG) was used to study the feeding behaviour of O. orientalis to provide insights into TbYDV acquisition and transmission. Waveforms representing different feeding activities were acquired by EPG from adult O. orientalis feeding on two plant species, Phaseolus vulgaris and Nicotiana tabacum, and on a simple sucrose-based artificial diet. Five waveforms (designated O1-O5) were observed when O. orientalis fed on P. vulgaris, while only four (O1-O4) and three (O1-O3) waveforms were observed during feeding on N. tabacum and the artificial diet, respectively. The mean duration of each waveform and the waveform types differed markedly depending on the food source. This is the first detailed study of the tritrophic interactions between TbYDV, its leafhopper vector O. orientalis, and their host-plants. The results of this research provide important fundamental information which can be used to develop more effective control strategies not only for O. orientalis, but also for TbYDV and the other pathogens vectored by this leafhopper.