322 results for Conventional methods
Abstract:
The construction industry is a crucial component of the Hong Kong economy, and the safety and efficiency of workers are two of its main concerns. The current approach to training workers relies primarily on instilling practice and experience in conventional teacher-apprentice settings on and off site. Both have their limitations, however: on-site training is very inefficient and interferes with progress on site, while off-site training provides little opportunity to develop the practical skills and awareness needed through hands-on experience. A more effective way is to train workers in safety awareness and efficient working practices using novel information technologies. This paper describes an innovative prototype system, the Proactive Construction Management System (PCMS), designed to train precast installation workers to be highly productive while remaining fully aware of the hazards involved. PCMS uses Chirp-Spread-Spectrum-based (CSS) real-time location technology and Unity3D-based data visualisation technology to track construction resources (people, equipment, materials, etc.) and provide real-time feedback and post-event visualisation analysis in a training environment. A trial of a precast facade installation on a real site demonstrates the benefits gained by PCMS in comparison with equivalent training using conventional methods. It is concluded that, although the study is based on specific industrial conditions found in Hong Kong construction projects, PCMS may well attract wider interest and use in future.
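The abstract does not publish PCMS internals, but as a hypothetical sketch of the kind of real-time feedback a CSS-based location system could drive, a proximity check over tracked positions might look like the following; the threshold, names and coordinates are all invented for illustration.

```python
import math

# Hypothetical sketch only: illustrates a real-time proximity alert over
# tracked (x, y) location fixes. Threshold and data are assumptions, not
# values from the paper.
DANGER_RADIUS_M = 5.0  # assumed alert threshold

def distance_m(a, b):
    """Euclidean distance between two (x, y) location fixes in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def proximity_alerts(workers, hazards):
    """Yield (worker_id, hazard_id) pairs closer than the danger radius."""
    for wid, wpos in workers.items():
        for hid, hpos in hazards.items():
            if distance_m(wpos, hpos) < DANGER_RADIUS_M:
                yield wid, hid

workers = {"worker_1": (10.0, 4.0), "worker_2": (2.0, 1.5)}
hazards = {"suspended_facade_panel": (11.0, 6.5)}
for wid, hid in proximity_alerts(workers, hazards):
    print(f"ALERT: {wid} within {DANGER_RADIUS_M} m of {hid}")
```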
Abstract:
Self-authored video, where participants are in control of the creation of their own footage, is a means of creating innovative design material and including all members of a family in design activities. This paper describes our adaptation of this process, called Self-Authored Video Interviews (SAVIs), which we created and prototyped to better understand how families engage with situated technology in the home. We find the methodology produces unique insights into family dynamics in the home, uncovering assumptions and tensions unlikely to be discovered using more conventional methods. The paper outlines a number of challenges and opportunities associated with the methodology, specifically how to maximise the value of the insights gathered by appealing to children to champion the cause, and how to counter perceptions of the lingering presence of researchers.
Abstract:
The extended recruitment season for short-lived species such as prawns biases the estimation of growth parameters from length-frequency data when conventional methods are used. We propose a simple method for overcoming this bias given a time series of length-frequency data. The difficulties arising from extended recruitment are eliminated by predicting the growth of the succeeding samples and the length increments of the recruits in previous samples. This method requires that some maximum size at recruitment can be specified. The advantages of this multiple length-frequency method are: it is simple to use; it requires only three parameters; no specific distributions need to be assumed; and the actual seasonal recruitment pattern does not have to be specified. We illustrate the new method with length-frequency data on the tiger prawn Penaeus esculentus from the north-western Gulf of Carpentaria, Australia.
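As an illustrative sketch (not the authors' exact algorithm), the projection step, predicting how lengths in one sample will have grown by the next sampling occasion so that small individuals appearing later can be treated as new recruits, can be written with a von Bertalanffy curve; the parameter values and sample data below are hypothetical.

```python
import numpy as np

# Illustrative only: parameters and samples are hypothetical, not the
# paper's estimates for P. esculentus.
L_INF, K = 45.0, 0.05        # assumed asymptotic length and growth coefficient
MAX_RECRUIT_SIZE = 20.0      # assumed maximum size at recruitment

def project_lengths(lengths, dt):
    """Von Bertalanffy projection: L(t+dt) = Linf - (Linf - L(t)) * exp(-K*dt)."""
    lengths = np.asarray(lengths, dtype=float)
    return L_INF - (L_INF - lengths) * np.exp(-K * dt)

sample_t0 = np.array([12.0, 18.0, 25.0, 30.0])     # lengths at time t0
predicted_t1 = project_lengths(sample_t0, dt=4.0)  # expected lengths 4 time-units later
observed_t1 = np.array([14.0, 15.5, 21.0, 27.0, 31.5])
candidate_recruits = observed_t1[observed_t1 <= MAX_RECRUIT_SIZE]
print(predicted_t1.round(1), candidate_recruits)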
Abstract:
Organochlorine pesticides (OCPs) are ubiquitous environmental contaminants with adverse impacts on aquatic biota, wildlife and human health even at low concentrations. However, conventional methods for their determination in river sediments are resource intensive. This paper presents a rapid and reliable approach for the detection of OCPs. Accelerated Solvent Extraction (ASE) with in-cell silica gel clean-up, followed by triple quadrupole gas chromatography-tandem mass spectrometry (GC-MS/MS), was used to recover OCPs from sediment samples. Variables such as temperature, solvent ratio, adsorbent mass and extraction cycles were evaluated and optimised for the extraction. With the exception of Aldrin, which was unaffected by any of the variables evaluated, the recovery of OCPs from sediment samples was largely influenced by solvent ratio and adsorbent mass and, to some extent, by the number of cycles and the temperature. The optimised conditions for OCP extraction from sediment with good recoveries were determined to be 4 cycles, 4.5 g of silica gel, 105 °C, and a 4:3 v/v DCM:hexane mixture. With the exception of two compounds (α-BHC and Aldrin), whose recoveries were low (59.73% and 47.66%, respectively), the recoveries of the other pesticides were in the range 85.35-117.97% with precision < 10% RSD. The method developed significantly reduces sample preparation time, solvent consumption and matrix interference, and is highly sensitive and selective.
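As a worked note on the two statistics quoted above, percent recovery of a spiked analyte and precision expressed as % RSD, here is a minimal calculation; the replicate values are hypothetical, not the study's raw data.

```python
import statistics

# Hypothetical replicates, for illustration of the quoted metrics only.
spiked_ng = 100.0                        # amount spiked into the sediment
measured_ng = [88.1, 92.4, 90.2, 86.9]   # hypothetical replicate results

recoveries = [100.0 * m / spiked_ng for m in measured_ng]
mean_recovery = statistics.mean(recoveries)
rsd = 100.0 * statistics.stdev(recoveries) / mean_recovery  # % RSD

print(f"mean recovery = {mean_recovery:.2f} %, RSD = {rsd:.2f} %")
```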
Abstract:
There is growing interest in autonomously collecting or manipulating objects in remote or unknown environments, such as mountains, gullies, bushland, or rough terrain. Conventional methods using manned or remotely controlled aircraft have several limitations. Small Unmanned Aerial Vehicles (UAVs) used in parallel with robotic manipulators could overcome some of these limitations. By enabling the autonomous exploration of naturally hazardous environments, or of areas that are biologically, chemically, or radioactively contaminated, it is possible to collect samples and data from such environments without directly exposing personnel to the risks involved. This paper covers the design, integration, and initial testing of a framework for an outdoor mobile manipulation UAV. The framework is designed to allow further integration and testing of complex control theories, with the capability to operate outdoors in unknown environments. The results obtained act as a reference for the effectiveness of the integrated sensors and the low-level control methods used in the preliminary testing, and identify the key technologies needed for the development of an outdoor-capable system.
Abstract:
Rapid growth in the global population requires expansion of the building stock, which in turn drives increased energy demand. This demand varies over time and between buildings, yet conventional methods can only provide mean energy levels per zone and are unable to capture this inhomogeneity, which is important for conserving energy. An additional challenge is that some attempts to conserve energy, for example by lowering ventilation rates, have been shown to exacerbate another problem: unacceptable indoor air quality (IAQ). The rise of sensing technology over the past decade has shown potential to address both issues simultaneously by providing high-resolution spatio-temporal data with which to systematically analyse energy demand and consumption, as well as the impacts on IAQ of measures taken to control energy consumption. However, challenges remain in the development of affordable services for data analysis, the deployment of large-scale real-time sensing networks, and responding through Building Energy Management Systems. This article presents the fundamental drivers behind the rise of sensing technology for the management of energy and IAQ in urban built environments, highlights major challenges for its large-scale deployment, and identifies the research gaps that should be closed by future investigations.
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher 'technophobia' most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys' school in urban Australia. The study employed an 'explanatory' two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were constructed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students' engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students' engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students' accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) 'cool/uncool', (ii) 'dominant staff/compliant student', and (iii) 'digital learning/academic performance'. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students' perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being 'cool' (or at least 'not uncool'), (ii) being sufficiently 'compliant', and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. For these individuals the logic is 'both-and' rather than 'either-or': a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a 'third way' of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students' lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
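For readers unfamiliar with CART, here is a minimal, hypothetical sketch of the modelling step described in the quantitative phase above; the feature names mirror the measurement constructs, but the data and labels are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical illustration of CART on invented data; not the thesis's
# questionnaire data or fitted models.
FEATURES = ["learning_goals", "cognitive_playfulness",
            "peer_support", "perceived_usefulness"]
X = [
    [4.2, 3.8, 4.5, 4.0],
    [2.1, 2.5, 1.8, 2.0],
    [3.9, 4.1, 4.2, 3.7],
    [1.5, 2.0, 1.2, 1.8],
]
y = ["frequent", "low", "frequent", "low"]  # hypothetical usage labels

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=FEATURES))
```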
Abstract:
Aim: To measure the influence of spherical intraocular lens implantation and conventional myopic laser in situ keratomileusis on peripheral ocular aberrations. Setting: Visual & Ophthalmic Optics Laboratory, School of Optometry and Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia. Methods: Peripheral aberrations were measured using a modified commercial Hartmann-Shack aberrometer across 42° x 32° of the central visual field in 6 subjects after spherical intraocular lens (IOL) implantation and in 6 subjects after conventional laser in situ keratomileusis (LASIK) for myopia. The results were compared with those of age-matched emmetropic and myopic control groups. Results: The IOL group showed a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, and greater rates of change of higher-order root-mean-square (RMS) aberrations and total RMS aberrations across the visual field than its emmetropic control group; however, coma trends were similar for the two groups. The LASIK group had a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, the opposite trend in coma across the field, and greater higher-order RMS and total RMS aberrations than its myopic control group. Conclusion: Spherical IOL implantation and conventional myopic LASIK increase peripheral ocular aberrations. Both cause a considerable increase in spherical aberration across the visual field, and LASIK reverses the sign of the rate of change in coma across the field relative to that of the other groups. Keywords: refractive surgery, LASIK, IOL implantation, aberrations, peripheral aberrations
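As background for the RMS metrics used above: with Zernike coefficients expressed in an orthonormal (OSA/ANSI) basis, total and higher-order RMS wavefront errors reduce to root-sum-squares of the coefficients. A minimal sketch with hypothetical coefficient values:

```python
import math

# Hypothetical Zernike coefficients (micrometres), for illustration only.
coeffs_um = {
    "defocus": 0.35, "astigmatism": 0.12,    # second order
    "coma_v": 0.08, "coma_h": 0.05,          # third order (higher order)
    "spherical_aberration": 0.10,            # fourth order (higher order)
}
HIGHER_ORDER = ("coma_v", "coma_h", "spherical_aberration")

# RMS is the root-sum-square of orthonormal coefficients.
total_rms = math.sqrt(sum(c ** 2 for c in coeffs_um.values()))
ho_rms = math.sqrt(sum(coeffs_um[k] ** 2 for k in HIGHER_ORDER))
print(f"total RMS = {total_rms:.3f} um, higher-order RMS = {ho_rms:.3f} um")
```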
Abstract:
The paper compares three different methods of inclusion of current phasor measurements by phasor measurement units (PMUs) in the conventional power system state estimator. For each of the three methods, comprehensive formulation of the hybrid state estimator in the presence of conventional and PMU measurements is presented. The performance of the state estimator in the presence of conventional measurements and optimally placed PMUs is evaluated in terms of convergence characteristics and estimator accuracy. Test results on the IEEE 14-bus and IEEE 300-bus systems are analyzed to determine the best possible method of inclusion of PMU current phasor measurements.
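As a toy sketch of the hybrid idea (not any of the paper's three specific formulations), conventional and PMU measurement rows can be stacked into one weighted least-squares problem, with the more accurate PMU rows given smaller error variances; all matrices below are invented values.

```python
import numpy as np

# Toy hybrid WLS state estimation sketch; all values are illustrative.
H_conv = np.array([[1.0, 0.0],
                   [0.5, 1.0]])          # conventional measurement Jacobian (toy)
H_pmu = np.array([[1.0, 0.0],
                  [0.0, 1.0]])           # PMU phasor rows, linear in the state
H = np.vstack([H_conv, H_pmu])

sigma = np.array([0.02, 0.02, 0.005, 0.005])  # PMUs assumed more accurate
W = np.diag(1.0 / sigma ** 2)                 # weights = inverse error variances

z = np.array([1.01, 0.97, 1.00, -0.05])  # stacked measurement vector (toy)
x = np.zeros(2)                          # state, e.g. voltage magnitude and angle

for _ in range(5):                       # Gauss-Newton iterations on h(x) = Hx
    residual = z - H @ x
    x = x + np.linalg.solve(H.T @ W @ H, H.T @ W @ residual)

print("estimated state:", x.round(4))
```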
Abstract:
Background: Late-stage ovarian cancer is essentially incurable, primarily due to late diagnosis and its inherent heterogeneity. Single-agent treatments are inadequate and generally lead to severe side effects at therapeutic doses. It is crucial to develop clinically relevant novel combination regimens involving synergistic modalities that target a wider repertoire of cells and lead to lowered individual doses. Stemming from this premise, this is the first report of two- and three-way synergies between adenovirus-mediated purine nucleoside phosphorylase-based gene-directed enzyme prodrug therapy (PNP-GDEPT), docetaxel and/or carboplatin in multidrug-resistant ovarian cancer cells. Methods: The effects of PNP-GDEPT on different cellular processes were determined using shotgun proteomics analyses. In vitro cell growth inhibition in differentially treated drug-resistant human ovarian cancer cell lines was established using a cell-viability assay. The extent of synergy, additivity, or antagonism between treatments was evaluated using CalcuSyn statistical analyses. The involvement of apoptosis, and of the implicated proteins, in the effects of the different treatments was established using flow-cytometry-based detection of M30 (an early marker of apoptosis), cell cycle analyses, and western blot analyses. Results: Efficacy of the trimodal treatment was significantly greater than that achieved with bimodal or individual treatments, with potential for a 10-50 fold dose reduction relative to that required for individual treatments. Of note was the marked enhancement in apoptosis that specifically accompanied the combinations that included PNP-GDEPT and accordingly correlated with a shift in the expression of anti- and pro-apoptotic proteins. The PNP-GDEPT-mediated enhancement of apoptosis was reinforced by cell cycle analyses. Proteomic analyses of PNP-GDEPT-treated cells indicated a downregulation of proteins involved in oncogenesis or cancer drug resistance, with accompanying upregulation of apoptotic and tumour-suppressor proteins. Conclusion: Inclusion of PNP-GDEPT in regular chemotherapy regimens can lead to significant enhancement of cancer cell susceptibility to the combined treatment. Overall, these data will underpin the development of regimens that can benefit patients with late-stage ovarian cancer, leading to significantly improved efficacy and increased quality of life.
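For context, CalcuSyn-style synergy calls rest on the Chou-Talalay combination index (CI); a worked sketch with illustrative doses (not the study's data):

```python
# Worked sketch of the Chou-Talalay combination index; doses are illustrative.
def combination_index(d1, d2, dx1, dx2):
    """CI = d1/Dx1 + d2/Dx2 for a two-drug combination at a given effect level.

    d1, d2   : doses of each drug used in combination to reach effect x
    dx1, dx2 : doses of each drug alone needed for the same effect x
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / dx1 + d2 / dx2

# Hypothetical example: each agent at one tenth of its single-agent dose.
ci = combination_index(d1=1.0, d2=0.5, dx1=10.0, dx2=5.0)
print(f"CI = {ci:.2f}")  # 0.20 -> synergy consistent with ~10-fold dose reduction
```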
Abstract:
BACKGROUND & AIMS: Metabolomics is the comprehensive analysis of low-molecular-weight endogenous metabolites in a biological sample. It can map perturbations in the early biochemical changes of disease and hence provides an opportunity to develop predictive biomarkers that offer valuable insights into disease mechanisms. The aim of this study was to elucidate the changes in endogenous metabolites and to phenotype the metabolic profile of d-galactosamine (GalN)-induced acute hepatitis in rats by UPLC-ESI MS. METHODS: The systemic biochemical actions of GalN administration (ip, 400 mg/kg) were investigated in male Wistar rats using conventional clinical chemistry, liver histopathology and metabolomic analysis of urine by UPLC-ESI MS. Urine was collected pre-dose (-24 to 0 h) and 0-24, 24-48, 48-72 and 72-96 h post-dose. Mass spectra of the urine were analysed visually and in conjunction with multivariate data analysis. RESULTS: The results demonstrated a time-dependent biochemical effect of GalN dosing on the levels of a range of low-molecular-weight metabolites in urine, which correlated with the developing phase of GalN-induced acute hepatitis. Urinary excretion of beta-hydroxybutanoic acid and citric acid decreased following GalN dosing, whereas excretion of glycocholic acid, indole-3-acetic acid, sphinganine, N-acetyl-L-phenylalanine, cholic acid and creatinine increased, suggesting that several key metabolic pathways, such as energy metabolism, lipid metabolism and amino acid metabolism, were perturbed by GalN. CONCLUSION: This metabolomic investigation demonstrates that this robust non-invasive tool offers insight into the metabolic states of diseases.
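As a minimal sketch of the multivariate step (the abstract does not name the exact method), an unsupervised projection of binned MS features can separate pre-dose from post-dose urine samples; the data below are random stand-ins, not the study's spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-in data: 5 pre-dose and 5 post-dose samples over 50 m/z bins.
rng = np.random.default_rng(0)
predose = rng.normal(0.0, 1.0, size=(5, 50))
postdose = rng.normal(0.8, 1.0, size=(5, 50))   # shifted metabolite intensities
X = np.vstack([predose, postdose])

# Project onto the first two principal components and inspect the grouping.
scores = PCA(n_components=2).fit_transform(X)
for label, row in zip(["pre"] * 5 + ["post"] * 5, scores):
    print(label, row.round(2))
```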
Abstract:
Fire incidents in buildings are common, so the fire safety design of framed structures is imperative, especially for unprotected or partly protected bare steel frames. However, software for structural fire analysis is not widely available, and performance-based structural fire design is therefore best pursued with user-friendly, conventional nonlinear computer analysis programs, so that engineers do not need to acquire new structural analysis software for structural fire analysis and design. The tool should be capable of simulating different fire scenarios and the associated detrimental effects efficiently, including second-order P-Δ and P-δ effects and material yielding. Moreover, the nonlinear behaviour of large-scale structures becomes complicated under fire, and its simulation relies on an efficient and effective numerical analysis to cope with the intricate nonlinear effects due to fire. To this end, the present fire study utilises the second-order elastic/plastic analysis software NIDA to predict the structural behaviour of bare steel framed structures at elevated temperatures. The study considers thermal expansion and material degradation due to heating. Degradation of material strength with increasing temperature is included through a set of temperature-stress-strain curves based mainly on BS5950 Part 8, which implicitly allows for creep deformation. The finite element stiffness formulation of beam-column elements is derived from the fifth-order PEP element, which facilitates computer modelling with one member per element. The Newton-Raphson method is used in the nonlinear solution procedure to trace the nonlinear equilibrium path at specified elevated temperatures. Several numerical and experimental verifications of framed structures are presented and compared against solutions in the literature. The proposed method permits engineers to adopt performance-based structural fire analysis and design using typical second-order nonlinear structural analysis software.
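As an illustration of the material-degradation step, temperature-dependent strength reduction can be implemented as an interpolated lookup; the factors below are indicative of codified steel curves in general, not the exact BS5950 Part 8 tables.

```python
import numpy as np

# Indicative strength reduction factors vs temperature (not BS5950 Part 8 values).
TEMP_C = np.array([20.0, 400.0, 500.0, 600.0, 700.0, 800.0])
REDUCTION = np.array([1.00, 1.00, 0.78, 0.47, 0.23, 0.11])

def effective_yield(fy_ambient_mpa, temp_c):
    """Effective yield strength at temperature via a linearly interpolated factor."""
    return fy_ambient_mpa * np.interp(temp_c, TEMP_C, REDUCTION)

for t in (200.0, 550.0, 650.0):
    print(f"{t:5.0f} C: fy = {effective_yield(355.0, t):6.1f} MPa")
```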
Abstract:
Plant food materials are in very high demand in the consumer market; improved food products and efficient processing techniques are therefore being researched concurrently in food engineering. In this context, numerical modelling and simulation techniques have great potential to reveal the fundamentals of the underlying mechanisms involved. However, numerical modelling of plant food materials during drying is quite challenging, mainly due to the complexity of the multiphase microstructure of the material, which undergoes excessive deformations during drying. In this regard, conventional grid-based modelling techniques have limited applicability because of their fundamental reliance on an inflexible grid. As a result, meshfree methods have recently been developed which offer a more adaptable approach to problem domains of this nature, owing to their fundamental grid-free advantages. In this work, a recently developed meshfree two-dimensional plant tissue model is used for a comparative study of microscale morphological changes in several food materials during drying. The model uses Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) to represent the fluid and solid phases of the cellular structure. Simulations are conducted on apple, potato, carrot and grape tissues, and the results are qualitatively and quantitatively compared and related to experimental findings obtained from the literature. The study revealed that cellular deformations are highly sensitive to cell dimensions, cell wall physical and mechanical properties, middle lamella properties and turgor pressure. In particular, the meshfree model is well capable of simulating critically dried tissues at low moisture content and turgor pressure, which lead to cell wall wrinkling. The findings further highlight the potential applicability of the meshfree approach to model large deformations of the plant tissue microstructure during drying, providing a distinct advantage over state-of-the-art grid-based approaches.
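For orientation, one SPH building block used in models of this kind is the smoothing kernel that weights neighbouring fluid particles (the tissue model couples SPH for the cell fluid with DEM for the walls). A minimal 2D cubic-spline sketch follows; the particle data are hypothetical.

```python
import math

# 2D cubic-spline SPH kernel; particle masses and distances are hypothetical.
def cubic_spline_w(r, h):
    """2D cubic spline kernel W(r, h), normalisation 10 / (7 * pi * h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

# SPH density estimate at a particle: rho_i = sum_j m_j * W(r_ij, h)
masses = [1.0, 1.0, 1.0]
distances = [0.0, 0.4, 0.9]   # neighbour distances, in units of h = 1.0
rho = sum(m * cubic_spline_w(r, 1.0) for m, r in zip(masses, distances))
print(f"estimated density: {rho:.4f}")
```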