Abstract:
Rapid advances in educational and information communications technology (ICT) have encouraged some educators to move beyond traditional face-to-face and distance education correspondence modes toward a rich, technology-mediated e-learning environment. Ready access to multimedia at the desktop has provided the opportunity for educators to develop flexible, engaging and interactive learning resources incorporating multimedia and hypermedia. However, despite this opportunity, the adoption and integration of educational technologies by academics across the tertiary sector has typically been slow. This paper presents the findings of a qualitative study that investigated factors influencing the manner in which academics adopt and integrate educational technology and ICT. The research was conducted at a regional Australian university, the University of Southern Queensland (USQ), and focused on the development of e-learning environments. These e-learning environments include a range of multimodal learning objects and multiple representations of content that seek to cater for different learning styles and modal preferences, increase interaction, improve learning outcomes, provide a more inclusive and equitable curriculum and more closely mirror the on-campus learning experience. The focus of this paper is primarily on the barriers or inhibitors academics reported in the study, including institutional barriers, individual inhibitors and pedagogical concerns. Strategies for addressing these obstacles are presented, and implications and recommendations for educational institutions are discussed.
Abstract:
The novel manuscript Girl in the Shadows tells the story of two teenage girls whose friendship, safety and sanity are pushed to the limits when an unexplained phenomenon invades their lives. Sixteen-year-old Tash has everything a teenage girl could want: good looks, brains and freedom from her busy parents. But when she looks into her mirror, a stranger’s face stares back at her. Her best friend Mal believes it’s an evil spirit and enters the world of the supernatural to find answers. But spell books and ouija boards cannot fix a problem that comes from deep within the soul. It will take a journey to the edge of madness for Tash to face the truth inside her heart and see the evil that lurks in her home. And Mal’s love and courage to pull her back into life. The exegesis examines resilience and coping strategies in adolescence, in particular, the relationship of trauma to brain development in children and teenagers. It draws on recent discoveries in neuroscience and psychology to provide a framework to examine the role of coping strategies in building resilience. Within this broader context, it analyses two works of contemporary young adult fiction, Freaky Green Eyes by Joyce Carol Oates and Sonya Hartnett’s Surrender, their use of the split persona as a coping mechanism within young adult fiction and the potential of young adult literature as a tool to help build resilience in teen readers.
Abstract:
Services in the form of business services or IT-enabled (Web) Services have become a corporate asset of high interest in striving towards the agile organisation. However, while the design and management of a single service is widely studied and well understood, little is known about how a set of services can be managed. This gap motivated this paper, in which we explore the concept of Service Portfolio Management. In particular, we propose a Service Portfolio Management Framework that explicates service portfolio goals, tasks, governance issues, methods and enablers. The Service Portfolio Management Framework is based upon a thorough analysis and consolidation of existing, well-established portfolio management approaches. From an academic point of view, the Service Portfolio Management Framework can be positioned as an extension of portfolio management conceptualisations in the area of service management. Based on the framework, possible directions for future research are provided. From a practical point of view, the Service Portfolio Management Framework provides an organisation with a novel approach to managing its emerging service portfolios.
Abstract:
Information System (IS) success may be the most arguable and important dependent variable in the IS field. The purpose of the present study is to address IS success by empirically assessing and comparing DeLone and McLean's (1992) and Gable et al.'s (2008) models of IS success in the context of Australian universities. The two models have some commonalities and several important distinctions. Both models integrate and interrelate multiple dimensions of IS success. Hence, it would be useful to compare the models to see which is superior, as it is not clear how IS researchers should respond to this controversy.
Abstract:
In response to a range of contextual drivers, the worldwide adoption of ERP Systems in Higher Education Institutions (HEIs) has increased substantially over the past decade. Though this demand continues to grow, with HEIs now a main target market for ERP vendors, little has been published on the topic. This paper reports a sub-study of a larger research effort that aims to contribute to understanding the phenomenon of ERP adoption and evaluation in HEIs in the Australasian region. It presents a descriptive case study conducted at Queensland University of Technology (QUT) in Australia, with emphasis on challenges with ERP adoption. The case study provides rich contextual details about ERP system selection, customisation, integration and evaluation, and insights into the role of consultants in the HE sector. Through this analysis, the paper (a) provides evidence of the dearth of ERP literature pertaining to the HE sector; (b) yields insights into differentiating factors in the HE sector that warrant specific research attention; and (c) offers evidence of how key ERP decisions such as systems selection, customisation, integration, evaluation, and consultant engagement are influenced by the specificities of the HE sector.
Abstract:
Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval still remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of the semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to address these problems, taking advantage of both computer science and information science approaches. We first discuss the requirements from a number of perspectives: users, content providers, content managers and technical systems. We then present an overview of our system architecture and describe various techniques which underlie the major components of the system. These include: automatic object category detection; user-driven tagging; metadata transformation and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating some existing collections, analyse the difficulties encountered and propose ways to address these problems.
Abstract:
In a competitive environment, companies continuously innovate to offer superior services at lower costs. ‘Shared services’ have been extensively adopted in practice as one means for improving organisational performance. Shared services is considered most appropriate for support functions, and is widely adopted in Human Resource Management, Finance and Accounting; more recently it has been employed across the Information Systems function. IS applications and infrastructure are an important enabler and driver of shared services in all functional areas. As computer-based corporate information systems have become de facto, and the internet pervasive and increasingly the backbone of administrative systems, the technical impediments to sharing have come down dramatically. As this trend continues, CIOs and IT professionals will need a deeper understanding of the shared services phenomenon and its implications. The advent of shared services has consequential implications for the IS academic discipline. Yet, archival analysis of the IS academic literature reveals that shared services, though mentioned in more than 100 articles, has received little in-depth attention. This paper is the first attempt to investigate and report on the current status of shared services in the IS literature. The paper presents a detailed review of literature from the main IS journals and conferences, with findings evidencing a lack of focus, and definitions and objectives lacking conceptual rigour. The paper concludes with a tentative operational definition, a list of perceived main objectives of shared services, and an agenda for related future research.
Abstract:
Although comparison phakometry has been used by a number of studies to measure posterior corneal shape, these studies have not calculated the size of the posterior corneal zones of reflection they assessed. This paper develops paraxial equations for calculating posterior corneal zones of reflection, based on standard keratometry equations and equivalent mirror theory. For targets used in previous studies, posterior corneal reflection zone sizes were calculated using paraxial equations and using exact ray tracing, assuming spherical and aspheric corneal surfaces. Paraxial methods and exact ray tracing methods give similar estimates for reflection zone sizes less than 2 mm, but for larger zone sizes ray tracing methods should be used.
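The equivalent-mirror idea above can be illustrated with a minimal paraxial sketch that approximates the reflection zone by the paraxial image size formed at the posterior cornea. All numbers here (target size, target distance, equivalent-mirror radius) are hypothetical placeholders, not values from the paper:

```python
def reflection_zone_size(target_size_mm, target_distance_mm, mirror_radius_mm):
    """Paraxial image size formed by a convex mirror standing in for the
    posterior cornea (equivalent mirror theory). Sign convention: distances
    in front of the mirror are positive; a convex mirror has f = -R/2."""
    f = -mirror_radius_mm / 2.0
    d_o = target_distance_mm
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # mirror equation: 1/d_o + 1/d_i = 1/f
    m = -d_i / d_o                       # lateral magnification
    return abs(m) * target_size_mm       # approximate reflection zone size

# Hypothetical geometry: 50 mm target at 100 mm, equivalent-mirror radius 5.9 mm
zone_mm = reflection_zone_size(50.0, 100.0, 5.9)
```

For this illustrative geometry the computed zone is well under 2 mm, the regime in which the abstract notes that paraxial and exact ray-tracing estimates agree.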
Abstract:
Background: Incidence and mortality from skin cancers, including melanoma, are highest among men 50 years or older. Thorough skin self-examination may be beneficial in improving skin cancer outcomes.--------- Objectives: To develop and conduct a randomized controlled trial of a video-based intervention to improve skin self-examination behavior among men 50 years or older.--------- Methods: Pilot work ascertained appropriate targeting of the 12-minute intervention video towards men 50 years or older. Overall, 968 men were recruited and 929 completed the baseline telephone assessment. Baseline analysis assessed randomization balance and the demographic, skin cancer risk and attitudinal factors associated with conducting a whole-body skin self-examination or receiving a whole-body clinical skin examination by a doctor during the past 12 months.--------- Results: Randomization resulted in well-balanced intervention and control groups. Overall, 13% of men reported conducting a thorough skin self-examination using a mirror or the help of another person to check difficult-to-see areas, while 39% reported having received a whole-body skin examination by a doctor within the past 12 months. Confidence in finding time for, and receiving advice or instructions by a doctor to perform, a skin self-examination were among the factors associated with thorough skin self-examination at baseline.---------- Conclusions: Men 50 years or older can successfully be recruited to a video-based intervention trial with the aim of reducing their burden of skin cancer. Randomization by a computer-generated randomization list resulted in good balance between the control and intervention groups, and baseline analysis determined factors associated with skin cancer early detection behavior at baseline.
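Permuted-block randomization is one common way to generate the kind of well-balanced computer-generated allocation list reported above. The trial's exact procedure is not described beyond "computer generated randomization list", so the block size, seed and arm labels below are assumptions for illustration:

```python
import random

def block_randomisation(n_participants, block_size=4, seed=2024):
    """Permuted-block randomisation list for a two-arm trial. Equal numbers
    of each arm within every block keep the groups balanced as recruitment
    proceeds. Illustrative sketch only; not the trial's actual procedure."""
    assert block_size % 2 == 0
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)               # random order within each block
        allocation.extend(block)
    return allocation[:n_participants]

# 929 men completed the baseline assessment in the trial above
allocation = block_randomisation(929)
```

Because every full block is balanced, the two arms can differ by at most a fraction of one block regardless of when recruitment stops.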
Abstract:
Transition metal oxides are functional materials that have advanced applications in many areas, because of their diverse properties (optical, electrical, magnetic, etc.), hardness, thermal stability and chemical resistance. Novel applications of the nanostructures of these oxides are attracting significant interest as new synthesis methods are developed and new structures are reported. Hydrothermal synthesis is an effective process for preparing various delicate structures of metal oxides on scales from a few to tens of nanometres, specifically the highly dispersed intermediate structures that are hardly obtained through pyro-synthesis. In this thesis, a range of new metal oxide (stable and metastable titanate, niobate) nanostructures, namely nanotubes and nanofibres, were synthesised via a hydrothermal process. Further structure modifications were conducted, and potential applications in catalysis, photocatalysis, adsorption and the construction of ceramic membranes were studied. The morphology evolution during the hydrothermal reaction between Nb2O5 particles and concentrated NaOH was monitored. The study demonstrates that by optimising the reaction parameters (temperature, amount of reactants), one can obtain a variety of nanostructured solids, from intermediate-phase niobate bars and fibres to stable-phase cubes. Trititanate (Na2Ti3O7) nanofibres and nanotubes were obtained by the hydrothermal reaction between TiO2 powders or a titanium compound (e.g. TiOSO4·xH2O) and concentrated NaOH solution by controlling the reaction temperature and NaOH concentration. The trititanate possesses a layered structure, and the Na ions that exist between the negatively charged titanate layers are exchangeable with other metal ions or H+ ions. The ion exchange has a crucial influence on the phase transition of the exchanged products.
The exchange of the sodium ions in the titanate with H+ ions yields protonated titanate (H-titanate), and subsequent phase transformation of the H-titanate enables various TiO2 structures with retained morphology. H-titanate, either nanofibres or nanotubes, can be converted to pure TiO2(B), pure anatase, or mixed TiO2(B) and anatase phases by controlled calcination and by a two-step process of acid treatment and subsequent calcination. Meanwhile, controlled calcination of the sodium titanate yields new titanate structures (metastable titanate with formula Na1.5H0.5Ti3O7, with retained fibril morphology) that can be used for removal of radioactive ions and heavy metal ions from water. The structures and morphologies of the metal oxides were characterised by advanced techniques. Titania nanofibres of mixed anatase and TiO2(B) phases, pure anatase and pure TiO2(B) were obtained by calcining H-titanate nanofibres at different temperatures between 300 and 700 °C. The fibril morphology was retained after calcination, which is suitable for transmission electron microscopy (TEM) analysis. It has been found by TEM analysis that in the mixed-phase structure the interfaces between the anatase and TiO2(B) phases are not random contacts between the engaged crystals of the two phases, but form from well-matched lattice planes of the two phases. For instance, (101) planes in anatase and (101) planes in TiO2(B) are similar in d-spacing (~0.18 nm), and they join together to form a stable interface. The interfaces between the two phases act as a one-way valve that permits the transfer of photogenerated charge from anatase to TiO2(B). This reduces the recombination of photogenerated electrons and holes in anatase, enhancing the activity for photocatalytic oxidation.
Therefore, the mixed-phase nanofibres exhibited higher photocatalytic activity for degradation of sulforhodamine B (SRB) dye under ultraviolet (UV) light than the nanofibres of either pure phase alone, or mechanical mixtures (which have no interfaces) of the two pure-phase nanofibres with a similar phase composition. This verifies the theory that the difference between the conduction band edges of the two phases may result in charge transfer from one phase to the other, which effectively separates the photogenerated charges and thus facilitates the redox reactions involving these charges. Such an interface structure facilitates charge transfer across the interfaces. The knowledge acquired in this study is important not only for the design of efficient TiO2 photocatalysts but also for understanding the photocatalysis process. Moreover, the fibril titania photocatalysts are of great advantage when they are separated from a liquid for reuse by filtration, sedimentation or centrifugation, compared to nanoparticles of the same scale. The surface structure of TiO2 also plays a significant role in catalysis and photocatalysis. Four types of large-surface-area TiO2 nanotubes with different phase compositions (labelled NTA, NTBA, NTMA and NTM) were synthesised by calcination and acid treatment of the H-titanate nanotubes. Using in situ FTIR emission spectroscopy (IES), the desorption and re-adsorption of surface OH-groups on the oxide surface can be tracked. In this work, the surface OH-group regeneration ability of the TiO2 nanotubes was investigated. The abilities of the four samples are distinctly different, in the order NTA > NTBA > NTMA > NTM.
The same order was observed for the catalytic activity when the samples served as photocatalysts for the decomposition of the synthetic dye SRB under UV light, as supports of gold (Au) catalysts (where gold particles were loaded by a colloid-based method) for photodecomposition of formaldehyde under visible light, and for catalytic oxidation of CO at low temperatures. Therefore, the ability of TiO2 nanotubes to generate surface OH-groups is an indicator of catalytic activity. The reason behind the correlation is that the oxygen vacancies at bridging O2- sites of the TiO2 surface can generate surface OH-groups, and these groups facilitate adsorption and activation of O2 molecules, which is the key step of the oxidation reactions. The structure of the oxygen vacancies at bridging O2- sites is proposed. Also, a new mechanism for the photocatalytic formaldehyde decomposition with the Au-TiO2 catalysts is proposed: the visible light absorbed by the gold nanoparticles, due to the surface plasmon resonance effect, induces transition of the 6sp electrons of gold to high energy levels. These energetic electrons can migrate to the conduction band of TiO2 and are seized by oxygen molecules. Meanwhile, the gold nanoparticles capture electrons from the formaldehyde molecules adsorbed on them because of gold's high electronegativity. O2 adsorbed on the surface of the TiO2 supports is the major electron acceptor. The more O2 adsorbed, the higher the oxidation activity the photocatalyst will exhibit. The last part of this thesis demonstrates two innovative applications of the titanate nanostructures. Firstly, trititanate and metastable titanate (Na1.5H0.5Ti3O7) nanofibres are used as intelligent absorbents for removal of radioactive cations and heavy metal ions, utilizing their ion-exchange ability, deformable layered structure and fibril morphology.
Environmental contamination with radioactive ions and heavy metal ions can pose a serious threat to the health of a large part of the population. Treatment of the wastes is needed to produce a waste product suitable for long-term storage and disposal. The ion-exchange ability of the layered titanate structure permitted adsorption of bivalent toxic cations (Sr2+, Ra2+, Pb2+) from aqueous solution. More importantly, the adsorption is irreversible, due to the deformation of the structure induced by the strong interaction between the adsorbed bivalent cations and the negatively charged TiO6 octahedra, and results in permanent entrapment of the toxic bivalent cations in the fibres so that the toxic ions can be safely deposited. Compared to conventional clay and zeolite sorbents, the fibril absorbents are of great advantage as they can be readily dispersed into and separated from a liquid. Secondly, new-generation membranes were constructed by using large titanate and small γ-alumina nanofibres as intermediate and top layers, respectively, on a porous alumina substrate via a spin-coating process. Compared to conventional ceramic membranes constructed from spherical particles, the ceramic membrane constructed from the fibres permits high flux because of the large porosity of its separation layers. The voids in the separation layer determine the selectivity and flux of a separation membrane. When the sizes of the voids are similar (which means a similar selectivity of the separation layer), the flux passing through the membrane increases with the volume of the voids, which are the filtration passages. For the ideal and simplest texture, a mesh constructed with nanofibres 10 nm thick and having a uniform pore size of 60 nm, the porosity is greater than 73.5%. In contrast, the porosity of a separation layer that possesses the same pore size but is constructed with metal oxide spherical particles, as in conventional ceramic membranes, is 36% or less.
The membrane constructed from titanate nanofibres and a layer of randomly oriented alumina nanofibres was able to filter out 96.8% of latex spheres of 60 nm size, while maintaining a high flux of between 600 and 900 L m–2 h–1, more than 15 times that of the conventional membrane reported in the most recent study.
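The porosity contrast quoted above follows from simple geometry. A quick arithmetic check, written in Python for convenience, assuming the idealised square-mesh texture the thesis describes (10 nm fibres, uniform 60 nm pores) and random close packing for the sphere-based layer:

```python
# Back-of-envelope check of the porosity figures quoted above, assuming an
# idealised square mesh of fibres 10 nm thick with a uniform pore size of 60 nm.
fibre = 10.0            # nm, fibre thickness
pore = 60.0             # nm, pore opening
pitch = fibre + pore    # nm, repeat unit of the mesh (70 nm)

# The open area of one repeat cell is the 60 nm x 60 nm pore.
mesh_porosity = (pore / pitch) ** 2       # (60/70)^2, about 0.735

# Random close packing of equal spheres fills about 64% of space,
# leaving about 36% void, the figure cited for particle-packed layers.
sphere_porosity = 1.0 - 0.64
```

The open fraction of one 70 nm repeat cell is (60/70)^2 ≈ 73.5%, matching the figure in the text; the roughly two-fold difference in open volume versus sphere-packed layers is what underpins the reported flux advantage.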
Abstract:
The over-representation of novice drivers in crashes is alarming. Research indicates that one in five drivers crashes within their first year of driving. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers. Currently, there is a need to develop a comprehensive driver evaluation system that benefits from the advances in Driver Assistance Systems. Since driving is dependent on fuzzy inputs from the driver (i.e. approximate distance calculation from the other vehicles, approximate assumption of the other vehicle's speed), it is necessary that the evaluation system is based on criteria and rules that handle the uncertain and fuzzy characteristics of driving. This paper presents a system that evaluates the data stream acquired from multiple in-vehicle sensors (acquired from the Driver Vehicle Environment, DVE) using fuzzy rules and classifies the driving manoeuvres (i.e. overtake, lane change and turn) as low risk or high risk. The fuzzy rules use parameters such as following distance, frequency of mirror checks, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvre to assess risk. The fuzzy rules to estimate risk were designed after analysing the selected driving manoeuvres performed by driver trainers. This paper focuses mainly on the difference in gaze pattern between experienced and novice drivers during the selected manoeuvres. Using this system, trainers of novice drivers would be able to empirically evaluate and give feedback to novice drivers regarding their driving behaviour.
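A fuzzy rule of the kind described can be sketched in a few lines. The membership functions, thresholds and the single rule below are illustrative guesses for two of the named parameters (following distance and mirror-check frequency), not the rules the paper derived from driver-trainer data:

```python
def falling(x, full, zero):
    """Shoulder membership function: 1 at or below `full`,
    decreasing linearly to 0 at `zero`."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def manoeuvre_risk(following_distance_m, mirror_checks_per_min):
    """Classify a manoeuvre as low or high risk from two of the parameters
    named above. Membership shapes and cut-offs are hypothetical."""
    too_close = falling(following_distance_m, 5.0, 15.0)   # "following too closely"
    few_checks = falling(mirror_checks_per_min, 1.0, 4.0)  # "rarely checks mirrors"
    # Rule: IF distance is too close AND mirror checks are few THEN risk is high
    high = min(too_close, few_checks)                      # Mamdani AND realised as min
    return "high risk" if high > 0.5 else "low risk"

risky = manoeuvre_risk(4.0, 1.0)    # tailgating, one mirror check per minute
safe = manoeuvre_risk(30.0, 6.0)    # large gap, frequent mirror checks
```

A full system would aggregate many such rules over the sensor stream and defuzzify the result, but the min-based rule firing shown here is the core mechanism.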
Abstract:
The notion of pedagogy for anyone in the teaching profession is innocuous. The term itself is steeped in history, but the details of the practice can be elusive. What does it mean for an academic to be embracing pedagogy? The problem is not limited to academics; most teachers baulk at the introduction of a pedagogic agenda and resist attempts to have them reflect on their classroom teaching practice, wherever that classroom might be constituted. This paper explores the application of a pedagogic model (Education Queensland, 2001) which was developed in the context of primary and secondary teaching and was part of a schooling agenda to improve pedagogy. As a teacher educator I introduced the model to classroom teachers (Hill, 2002) using an Appreciative Inquiry (Cooperrider and Srivastva, 1987) model, and at the same time applied the model to my own pedagogy as an academic. Despite its being instigated as a model for classroom teachers, I found through my own practitioner investigation that the model was useful for exploring my own pedagogy as a university academic (Hill, 2007, 2008). Cooperrider, D.L. and Srivastva, S. (1987) Appreciative inquiry in organisational life, in Passmore, W. and Woodman, R. (Eds) Research in Organisational Changes and Development (Vol. 1). Greenwich, CT: JAI Press, pp. 129-69. Education Queensland (2001) School Reform Longitudinal Study (QSRLS), Brisbane: Queensland Government. Hill, G. (2002, December) Reflecting on professional practice with a cracked mirror: Productive Pedagogy experiences. Australian Association for Research in Education Conference, Brisbane, Australia. Hill, G. (2007) Making the assessment criteria explicit through writing feedback: A pedagogical approach to developing academic writing. International Journal of Pedagogies and Learning, 3(1), 59-66. Hill, G. (2008) Supervising practice based research. Studies in Learning, Evaluation, Innovation and Development, 5(4), 78-87.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
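The generalized Gaussian model of wavelet-coefficient statistics mentioned above can be sketched briefly. The thesis fits the shape parameter via a least-squares formulation; the simpler moment-ratio inversion below (Mallat-style, solved by bisection) is a stand-in for illustration, not the thesis's actual estimator:

```python
import math
import random

def ggd_moment_ratio(beta):
    """E|x| / sqrt(E x^2) for a zero-mean generalized Gaussian with shape
    parameter beta (beta = 2 is Gaussian, beta = 1 is Laplacian)."""
    return math.gamma(2.0 / beta) / math.sqrt(math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_shape(samples, lo=0.2, hi=10.0):
    """Recover the shape parameter by inverting the moment ratio with
    bisection; the ratio is monotonically increasing in beta, so a simple
    bracketing search suffices."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_moment_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: Gaussian samples (true beta = 2) should give an estimate near 2
rng = random.Random(0)
beta_hat = estimate_shape([rng.gauss(0.0, 1.0) for _ in range(20000)])
```

Sharply peaked subband histograms yield estimates well below 2, which is what motivates quantizers tailored to nonuniform, heavy-tailed coefficient distributions.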
Abstract:
The legal power to declare war has traditionally been a part of a prerogative to be exercised solely on advice that passed from the King to the Governor-General no later than 1942. In 2003, the Governor-General was not involved in the decision by the Prime Minister and Cabinet to commit Australian troops to the invasion of Iraq. The authors explore the alternative legal means by which Australia can go to war - means the government in fact used in 2003 - and the constitutional basis of those means. While the prerogative power can be regulated and/or devolved by legislation, and just possibly by practice, there does not seem to be a sound legal basis to assert that the power has been devolved to any other person. It appears that in 2003 the Defence Minister used his legal powers under the Defence Act 1903 (Cth) (as amended in 1975) to give instructions to the service head(s). A powerful argument could be made that the relevant sections of the Defence Act were not intended to be used for the decision to go to war, and that such instructions are for peacetime or in bello decisions. If so, the power to make war remains within the prerogative to be exercised on advice. Interviews with the then Governor-General indicate that Prime Minister Howard had planned to take the matter to the Federal Executive Council 'for noting', but did not do so after the Governor-General sought the views of the then Attorney-General about relevant issues of international law. The exchange raises many issues, but those of interest concern the kinds of questions the Governor-General could and should ask about proposed international action and whether they in any way mirror the assurances that are uncontroversially required for domestic action.
In 2003, the Governor-General's scrutiny was the only independent scrutiny available, because the legality of the decision to go to war was not a matter that could be determined in the High Court, and the federal government had taken action in March 2002 that effectively prevented the matter from coming before the International Court of Justice.