311 results for Research Subject Categories::TECHNOLOGY::Civil engineering and architecture::Other civil engineering and architecture


Abstract:

Reliability analysis is crucial to reducing unexpected downtime and severe failures of engineering assets while meeting ever-tightening maintenance budgets. Hazard-based reliability methods are of particular interest because hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution accurately represents the population concerned, and that the effects of covariates on hazards take an assumed form. These two assumptions may be difficult to satisfy and can therefore compromise the effectiveness of hazard models in application. To address this issue, this research develops a non-linear hazard modelling approach using neural networks (NNs), resulting in neural network hazard models (NNHMs), to overcome the limitations that these two assumptions impose on statistical models. As failure prevention efforts succeed, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in practice, due to inconsistent measuring frequencies across multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications, so this research investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates were studied to investigate their performance and their effects on reliability analysis results.
Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs were extended to handle incomplete covariates as an integral part of the model. The extended NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical incomplete-covariate handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation under the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values at future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill demonstrates that this new multi-step reliability analysis procedure generates more accurate results.
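The core idea of an NNHM, replacing an assumed baseline distribution and covariate link with a learned non-linear mapping from age and condition data to hazard, can be sketched as follows. This is a minimal illustration, not the thesis's model: the architecture, the softplus output unit and the vibration covariate are invented for the example, and the weights are random rather than trained.

```python
import numpy as np

# Minimal sketch of a neural-network hazard model: a small MLP maps asset
# age plus a condition covariate to a non-negative hazard rate, and
# reliability follows from the discretised cumulative hazard.
rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden):
    # Illustrative random weights; a real NNHM would be trained on data.
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, 1)),
            "b2": np.zeros(1)}

def hazard(params, x):
    """Hazard h(t, z): softplus output keeps the rate strictly positive."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    out = h @ params["W2"] + params["b2"]
    return np.log1p(np.exp(out)).ravel()

def reliability(params, ages, covariates, dt=1.0):
    """R(t) = exp(-sum of h*dt): discretised cumulative-hazard relation."""
    x = np.column_stack([ages, covariates])
    H = np.cumsum(hazard(params, x) * dt)
    return np.exp(-H)

params = init_mlp(n_in=2, n_hidden=8)
ages = np.arange(1, 11, dtype=float)   # 10 inspection steps
vib = np.linspace(0.1, 1.0, 10)        # hypothetical vibration covariate
R = reliability(params, ages, vib)
# Reliability is a valid, monotonically decreasing survival curve
assert R.shape == (10,) and np.all(np.diff(R) <= 0) and np.all((0 < R) & (R <= 1))
```

Because the network is free to learn any smooth relationship, no baseline distribution family or proportional-covariate form needs to be assumed, which is the point of the approach described above.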

Abstract:

Practice-led journalism research techniques were used in this study to produce a "first draft of history" recording the human experience of survivors and rescuers during the January 2011 flash flood disaster in Toowoomba and the Lockyer Valley in Queensland, Australia. The study aimed to discover what can be learnt from engaging in journalistic reporting of natural disasters. This exegesis demonstrates that journalism can be both a creative practice and a research methodology. About 120 survivors, rescuers and family members of victims participated in extended interviews about what happened to them and how they survived. Their stories are the basis for the two creative outputs of the study: a radio documentary and a non-fiction book, which document how and why people died, survived, or were rescued. Listeners and readers are taken "into the flood", where they feel anxious for those in peril, relieved when people are saved, and devastated when babies, children and adults are swept away to their deaths. Reporting on the human experience of the floods exposed several significant elements of journalistic reportage of disasters. The first related to the vital role that online social media played during the disaster for individuals, citizen reporters, journalists and emergency services organisations. Online social media offer reporters powerful new tools for both gathering and disseminating news. The second related to the performance of journalists in covering events involving traumatic experiences. Journalists are often required to cover trauma and are often among the first responders to disasters. This study found that almost all of the disaster survivors who were approached were willing to talk in detail about their traumatic experiences. A finding of this project is that journalists who interview trauma survivors can develop techniques for improving their ability to interview people who have experienced traumatic events.
These include being flexible with interview timing and location; empowering interviewees to understand that they do not have to answer every question they are asked; providing emotional security for interviewees; and being committed to accuracy. Survivors may exhibit posttraumatic stress symptoms, but some also exhibit and report posttraumatic growth. The willingness of a high proportion of the flood survivors to participate made it possible to document a relatively unstudied question in the literature on journalism and trauma: when and why disaster survivors will want to speak to reporters. The study sheds light on the reasons why a group of traumatised people chose to speak about their experiences. Their reasons fell into six categories: lessons need to be learned from the disaster; a desire for the public to know what had happened; a sense of duty to ensure warning systems and disaster responses are improved in future; personal recovery; the financial disinterest of reporters in listening to survivors; and the timing of the request for an interview. Feedback on the creative-practice component of this thesis, the book and radio documentary, shows that these issues are not purely matters of ethics. By following appropriate protocols, it is possible to produce stories that engender strong audience responses, such as that the program was "amazing and deeply emotional" and "community storytelling at its most important". Participants reported that the experience of the interview process was "healing" and that the creative outcome resulted in "a very precious record of an afternoon of tragedy and triumph and the bitter-sweetness of survival".

Abstract:

The Community Service-learning Lab (the Lab) was initiated as a university-wide service-learning experience at an Australian university. The Lab engages students, academics and key community organisations in interdisciplinary action research projects to support student learning and to explore complex, ongoing problems nominated by the community partners. The current study uses feedback from the first offering of the Lab and focuses on exploring student experiences of the service-learning project within an action research framework. Student reflections on this experience revealed some positive outcomes of the Lab, such as an appreciation for positive, strengths-based change. These outcomes are corroborated by reflections collected from community partners and academics. The students also identified challenges in balancing the requirements of assessment with their goal of serving the community partners' needs. This feedback has provided vital information for the academic team, highlighting the difficulty of balancing the agenda of the academic framework with the desire to give students authentic experiences.

Abstract:

During the last several decades, the quality of natural resources and their services has been significantly degraded by increased urban populations combined with the sprawl of settlements, the development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to preserve the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of their populations for present and future generations, as well as to maintain the sustainability of their ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One important strategic approach to planning sustainable cities is 'ecological planning'. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p. 4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Ecological planning is therefore a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy- and decision-makers improve their actions towards sustainable urban development.
Several methods are used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most common tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied to assessing local- and micro-level sustainability, and that no benchmark values exist for most of the indicators due to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) supports this by stating that "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison produces biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need to develop an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that uses indicators to collect data, designates threshold values or ranges, performs a comparative sustainability assessment via indices at the micro-level, and aggregates these assessment findings to the local level. Through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results that inform local planning, conservation and development decision-making so as to secure sustainable ecosystems and urban futures.
To advance research in this area, this study investigated the environmental impacts of an existing urban context by using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, the study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the 'Micro-level Urban-ecosystem Sustainability IndeX' (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local- and micro-level sustainability reporting guidance to inform policy-making on environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select indicators. Then, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results assessed the sustainability performance of current urban settings with reference to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design and (6) efficiency. For each category, a set of core indicators was assigned, intended to: (1) benchmark the current situation, strengths and weaknesses; (2) evaluate the efficiency of implemented plans; and (3) measure progress towards sustainable development.
While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best-practice development solutions. These strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management, in order to preserve the Earth's water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management, in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality through pollution prevention regulations and policies, in order to promote high-quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility through better local services and walkable neighbourhoods, in order to promote safe environments and healthy communities;
• Designing the urban environment in a climate-responsive way, in order to increase the efficient use of solar energy and provide thermal comfort; and
• Using renewable resources through creating efficient communities, in order to provide long-term management of natural resources for the sustainability of future generations.
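As a rough illustration of how an indicator-based composite index of this kind aggregates parcel-level measurements into category scores and a single value, consider the following sketch. The six category names follow the abstract, but every indicator, threshold and reading below is hypothetical; the actual MUSIX indicators, thresholds and weighting scheme are not reproduced here.

```python
# Sketch of composite-index aggregation: normalise each indicator against
# assumed worst/best thresholds, average within categories, then average
# category scores into one index. All numbers are invented for illustration.

CATEGORIES = ["hydrology", "ecology", "pollution", "location", "design", "efficiency"]

def normalise(value, worst, best):
    """Linear min-max score in [0, 1]; the worst/best order encodes direction."""
    score = (value - worst) / (best - worst)
    return min(1.0, max(0.0, score))

def composite_index(readings, thresholds):
    """readings/thresholds: {category: {indicator: ...}} -> (index, per-category)."""
    cat_scores = {}
    for cat in CATEGORIES:
        scores = [normalise(readings[cat][ind], *thresholds[cat][ind])
                  for ind in readings[cat]]
        cat_scores[cat] = sum(scores) / len(scores)
    return sum(cat_scores.values()) / len(cat_scores), cat_scores

# Hypothetical parcel with one illustrative indicator per category
thresholds = {
    "hydrology":  {"runoff_coeff":   (0.9, 0.1)},   # lower runoff is better
    "ecology":    {"canopy_pct":     (0.0, 40.0)},
    "pollution":  {"pm10":           (50.0, 10.0)},
    "location":   {"walk_dist_m":    (1600.0, 200.0)},
    "design":     {"solar_orient":   (0.0, 1.0)},
    "efficiency": {"renewables_pct": (0.0, 100.0)},
}
readings = {
    "hydrology":  {"runoff_coeff": 0.5},
    "ecology":    {"canopy_pct": 20.0},
    "pollution":  {"pm10": 30.0},
    "location":   {"walk_dist_m": 900.0},
    "design":     {"solar_orient": 0.5},
    "efficiency": {"renewables_pct": 50.0},
}
index, cats = composite_index(readings, thresholds)
assert abs(index - 0.5) < 1e-9  # every indicator sits exactly mid-range here
```

The parcel-versus-neighbourhood distinction in the abstract corresponds to reading the per-category scores for detail and the single index for the aggregate picture.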

Abstract:

The nature and characteristics of how learners learn today are changing. As technology use in learning and teaching continues to grow, its integration to facilitate deep learning and critical thinking becomes a primary consideration. The implications for learner use, implementation strategies, the design of integration frameworks and the evaluation of their effectiveness in learning environments cannot be overlooked. This study specifically examined the impact that technology-enhanced learning environments have on different learners' critical thinking in relation to eductive ability, technological self-efficacy, and approaches to learning and motivation in collaborative groups. These were explored within an instructional design framework called CoLeCTTE (collaborative learning and critical thinking in technology-enhanced environments), which was proposed, revised and used across three cases. The investigation was restricted to three key questions: (1) Do learner skill bases (learning approach and eductive ability) influence critical thinking within the proposed CoLeCTTE framework? If so, how? (2) Do learning technologies influence the facilitation of deep learning and critical thinking within the proposed CoLeCTTE framework? If so, how? (3) How might learning be designed to facilitate the acquisition of deep learning and critical thinking within a technology-enabled collaborative environment? The rationale, assumptions and method of research for using a mixed-method, naturalistic case study approach are discussed, and three cases are explored and analysed. The study was conducted at the tertiary level (undergraduate and postgraduate), where participants were engaged in critical technical discourse within their own disciplines. Group behaviour was observed and coded, attributes or skill bases were measured, and participants were interviewed to acquire deeper insights into their experiences.
A progressive case study approach was used, allowing case investigation to be implemented in a "ladder-like" manner. Cases 1 and 2 used the proposed CoLeCTTE framework, with more in-depth analysis conducted for Case 2, resulting in a revision of the framework. Case 3 used the revised CoLeCTTE framework and was analysed in depth; the findings led to the final version of the framework. In Cases 1, 2 and 3, content analysis of group work was conducted to determine critical thinking performance. The researcher thus used three small groups in which the learner skill bases of eductive ability, technological self-efficacy, and approaches to learning and motivation were measured. Participants in Cases 2 and 3 were interviewed, and observations provided more in-depth analysis. The main outcome of this study is an analysis of the nature of critical thinking within collaborative groups and technology-enhanced environments, positioned in a theoretical instructional design framework called CoLeCTTE. The findings revealed the importance of the Achieving Motive dimension of a student's learning approach, and showed how direct intervention and strategies can positively influence critical thinking performance. The findings also identified factors that can adversely affect critical thinking performance, including poor learning skills, frustration, stress and poor self-confidence, prioritising other demands over learning, and inadequate appropriation of group roles and tasks. These findings are set out as instructional design guidelines for the judicious integration of learning technologies into higher education learning and teaching practice to support deep learning and critical thinking in collaborative groups. The guidelines are presented in two key areas: technology and tools; and activity design, monitoring, control and feedback.

Abstract:

Osteocytes are the most abundant cells in human bone tissue. Due to their unique morphology and location, osteocytes are thought to act as regulators in the bone remodelling process and are believed to play an important role in astronauts' bone mass loss after long-term space missions. There is increasing evidence that an osteocyte's functions are highly affected by its morphology. However, changes in osteocyte morphology under an altered gravity environment are still not well documented. Several in vitro studies have recently been conducted to investigate the morphological response of osteocytes to the microgravity environment, in which osteocytes were cultured on a two-dimensional flat surface for at least 24 hours before the microgravity experiments. Morphology changes of osteocytes in microgravity were then studied by comparing the cell area to that of 1g control cells. However, osteocytes found in vivo have a more three-dimensional morphology, and both the cell body and the dendritic processes are sensitive to mechanical loading. Round osteocytes have a less stiff cytoskeleton and are more sensitive to mechanical stimulation than cells with a flat morphology. Thus, the relatively flat and spread shape of isolated osteocytes in 2D culture may greatly hamper their sensitivity to mechanical stimuli, and the lack of knowledge of osteocyte morphological characteristics in culture may lead to subjective and incomplete conclusions about how altered gravity affects osteocyte morphology. In this work, empirical models were developed to quantitatively predict the changes of morphology of an osteocyte cell line (MLO-Y4) in culture, and the response to hyper-gravity stimulation of osteocytes that are relatively round in shape was also investigated.
The morphology changes of MLO-Y4 cells in culture were quantified by measuring cell area and three dimensionless shape features, aspect ratio, circularity and solidity, using widely accepted image analysis software (ImageJ). MLO-Y4 cells were cultured at low density (5×10³ per well) and the changes in morphology were recorded over 10 hours. Based on the data obtained from the image analysis, empirical models were developed using non-linear regression. The developed empirical models accurately predict the morphology of MLO-Y4 cells for different culture times and can therefore be used as a reference model for analysing MLO-Y4 cell morphology changes in various biological and mechanical studies, as necessary. The morphological response of MLO-Y4 cells with a relatively round morphology to a hyper-gravity environment was investigated using a centrifuge. After 2 hours of culture, MLO-Y4 cells were exposed to 20g for 30 minutes. Changes in the morphology of MLO-Y4 cells were quantitatively analysed by measuring the average cell area and the dimensionless shape factors (aspect ratio, solidity and circularity). In this study, no significant morphology changes were detected in MLO-Y4 cells under the hyper-gravity environment (20g for 30 minutes) compared with 1g control cells.
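The dimensionless shape factors named above have standard geometric definitions, which the sketch below computes for a polygonal cell outline: circularity is 4πA/P² (1 for a perfect circle) and solidity is the ratio of the outline's area to that of its convex hull (1 for a convex shape). The polygon-based setup is only an illustration; ImageJ derives the same factors from segmented cell images.

```python
import math

# Shape factors for a cell outline approximated as a polygon of (x, y) vertices.

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

def perimeter(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def convex_hull(pts):
    """Andrew's monotone-chain convex hull."""
    pts = sorted(set(pts))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def shape_factors(pts):
    a, p = polygon_area(pts), perimeter(pts)
    return {"circularity": 4 * math.pi * a / p ** 2,          # 1 for a circle
            "solidity": a / polygon_area(convex_hull(pts))}   # 1 if convex

# A near-circular outline: circularity approaches 1, solidity is exactly 1
circle = [(math.cos(2 * math.pi * k / 64), math.sin(2 * math.pi * k / 64))
          for k in range(64)]
f = shape_factors(circle)
assert 0.99 < f["circularity"] <= 1.0
assert abs(f["solidity"] - 1.0) < 1e-9
```

A flat, spread cell with ruffled edges would score well below 1 on both factors, which is what makes these measures useful for distinguishing round from flat morphologies.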

Abstract:

Recently there has been significant interest among researchers and practitioners in the use of Bluetooth as a complementary source of transport data. However, the literature offers limited understanding of the Bluetooth MAC Scanner (BMS) based data acquisition process and of the properties of the data being collected. This paper first provides insight into the BMS data acquisition process. Thereafter, it presents interesting findings from analysis of real BMS data from both motorway and arterial networks in Brisbane, Australia. The knowledge gained helps researchers and practitioners understand the BMS data being collected, which is vital to the development of management and control algorithms that use the data.

Abstract:

Bluetooth technology is being increasingly used to track vehicles throughout their trips, within urban networks and across freeway stretches. One important opportunity offered by this type of data is the measurement of Origin-Destination patterns, emerging from the aggregation and clustering of individual trips. In order to obtain accurate estimates, however, a number of issues need to be addressed through data filtering and correction techniques. These issues mainly stem from the patterns of Bluetooth usage amongst drivers and from the physical properties of the Bluetooth sensors themselves. First, not all cars are equipped with discoverable Bluetooth devices, and Bluetooth-enabled vehicles may belong to a few small socio-economic groups of users. Second, the Bluetooth datasets include data from various transport modes, such as pedestrians, bicycles, cars, taxis, buses and trains. Third, the Bluetooth sensors may fail to detect all of the nearby Bluetooth-enabled vehicles. As a consequence, the exact journey of some vehicles may become a latent pattern that needs to be extracted from the data. Finally, sensors in close proximity to each other may have overlapping detection areas, making the task of retrieving the correct travelled path even more challenging. The aim of this paper is twofold. We first give a comprehensive overview of the aforementioned issues. We then propose a methodology that can be followed to cleanse, correct and aggregate Bluetooth data. We postulate that the methods introduced in this paper are the first crucial steps that need to be followed in order to compute accurate Origin-Destination matrices in urban road networks.
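A minimal sketch of the kind of cleansing and aggregation pipeline this points to might look as follows: collapse repeated detections of one MAC address at a scanner into a single pass, drop traversals whose implied speed suggests a non-car mode, and count first-to-next scanner movements into an OD matrix. The station layout, thresholds and detection records are all invented for illustration; the paper's actual filtering and correction methodology is more involved.

```python
from collections import defaultdict

def collapse_passes(records, gap_s=300):
    """records: (mac, station, t) tuples -> one event per pass.
    Detections of the same MAC at the same station within gap_s seconds
    are treated as one pass (a device lingering in the detection zone)."""
    last, events = {}, []
    for mac, station, t in sorted(records, key=lambda r: r[2]):
        key = (mac, station)
        if key not in last or t - last[key] > gap_s:
            events.append((mac, station, t))
        last[key] = t
    return events

def od_matrix(events, dist_m, min_kmh=15, max_gap_s=3600):
    """Pair consecutive detections per MAC; keep only plausible car speeds."""
    by_mac = defaultdict(list)
    for mac, station, t in events:
        by_mac[mac].append((station, t))
    od = defaultdict(int)
    for trips in by_mac.values():
        for (s1, t1), (s2, t2) in zip(trips, trips[1:]):
            dt = t2 - t1
            if s1 == s2 or dt == 0 or dt > max_gap_s:
                continue
            speed_kmh = dist_m[(s1, s2)] / dt * 3.6
            if speed_kmh >= min_kmh:   # filters out walking/cycling speeds
                od[(s1, s2)] += 1
    return dict(od)

# Invented example: two scanners 2 km apart
dist_m = {("A", "B"): 2000, ("B", "A"): 2000}
records = [("m1", "A", 0), ("m1", "A", 20),   # duplicate detection at A
           ("m1", "B", 180),                  # 2 km in 3 min -> 40 km/h, kept
           ("m2", "A", 0), ("m2", "B", 1800)] # 4 km/h -> pedestrian, dropped
od = od_matrix(collapse_passes(records), dist_m)
assert od == {("A", "B"): 1}
```

Speed-based mode filtering is only one of the corrections the paper discusses; handling missed detections and overlapping detection zones requires additional inference over the latent path.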

Abstract:

This research has successfully applied super-resolution and multiple modality fusion techniques to address the major challenges of human identification at a distance using face and iris. The outcome of the research is useful for security applications.

Abstract:

Lean strategies have been developed to eliminate or reduce manufacturing waste and thus improve operational efficiency in manufacturing processes. However, implementing lean strategies requires a large amount of resources and, in practice, manufacturers encounter difficulties in selecting appropriate lean strategies within their resource constraints. No systematic methodology is currently available for selecting appropriate lean strategies within a manufacturer's resource constraints. In the lean transformation process, it is also critical to measure the current and desired leanness levels in order to clearly evaluate lean implementation efforts. Despite the fact that many lean strategies are utilized to reduce or eliminate manufacturing waste, little effort has been directed towards properly assessing the leanness of manufacturing organizations. In practice, a single metric or a specific group of metrics (whether qualitative or quantitative) will only partially measure overall leanness. Existing leanness assessment methodologies do not offer a comprehensive evaluation method that integrates both quantitative and qualitative lean measures into a single quantitative value for measuring the overall leanness of an organization. This research aims to develop mathematical models and a systematic methodology for selecting appropriate lean strategies and evaluating leanness levels in manufacturing organizations. Mathematical models were formulated and a methodology was developed for selecting appropriate lean strategies, within manufacturers' limited available resources, to reduce their identified wastes. A leanness assessment model was developed using fuzzy concepts to assess the leanness level and to recommend an optimum leanness value for a manufacturing organization. In the proposed leanness assessment model, both quantitative and qualitative input factors are taken into account.
Based on programs developed in MATLAB and C#, a decision support tool (DST) was built for decision makers to select lean strategies and evaluate the leanness value based on the proposed models and methodology, and hence sustain lean implementation efforts. A case study was conducted to demonstrate the effectiveness of the proposed models and methodology. The case study results suggested that, of the 10 wastes identified, the case organization (ABC Limited) is able to improve a maximum of six wastes at the selected workstation within its resource limitations. The selected wastes are: unnecessary motion, setup time, unnecessary transportation, inappropriate processing, work-in-process inventory and raw material inventory. The suggested lean strategies are: 5S, Just-In-Time, the Kanban System, the Visual Management System (VMS), Cellular Manufacturing, Standard Work Process using methods-time measurement (MTM), and Single Minute Exchange of Die (SMED). Of the suggested lean strategies, the impact of 5S was demonstrated by measuring the leanness level in two different situations at ABC. MTM was then suggested as a standard work process for further improvement of the current leanness value. The initial status of the organization showed a leanness value of 0.12. By applying 5S, the leanness level improved significantly to 0.19, and simulation of MTM as a standard work method showed that the leanness value could be improved to 0.31. The optimum leanness value for ABC was calculated to be 0.64. These leanness values provide a quantitative indication of the impact of improvement initiatives on the overall leanness of the case organization. Sensitivity analysis and a t-test were also performed to validate the proposed model. This research advances the current knowledge base by developing mathematical models and methodologies to overcome the lean strategy selection and leanness assessment problems.
By selecting appropriate lean strategies, a manufacturer can better prioritize implementation efforts and resources to maximize the benefits of implementing lean strategies in their organization. The leanness index is used to evaluate an organization's current leanness state (before lean implementation) against the state after lean implementation, and to establish a benchmark (the optimum leanness state). Hence, this research provides a continuous improvement tool for a lean manufacturing organization.
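To illustrate the general idea of folding quantitative and qualitative lean measures into a single leanness value in [0, 1], here is a simplified sketch using linear membership degrees and a weighted average. The metrics, weights and scores are invented, and the thesis's actual fuzzy-logic model (with its membership functions and optimum-leanness recommendation) is not reproduced.

```python
# Sketch: each measure is mapped to a membership degree in [0, 1], then
# combined by a weighted average into one leanness value. All metric names,
# weights and readings below are hypothetical.

def degree(x, worst, best):
    """Linear membership in [0, 1]; the worst/best order encodes direction."""
    d = (x - worst) / (best - worst)
    return min(1.0, max(0.0, d))

def leanness(metrics):
    """metrics: list of (weight, membership degree) -> weighted leanness."""
    total_w = sum(w for w, _ in metrics)
    return sum(w * m for w, m in metrics) / total_w

# Quantitative measures pass through a membership function; a qualitative
# survey score (1-5 Likert) is rescaled to [0, 1] directly.
setup_time_min = 35.0      # quantitative: assumed target 10 min, worst 60 min
wip_turns = 6.0            # quantitative: assumed target 12 turns, worst 0
operator_rating = 4.0      # qualitative Likert score from a lean audit

metrics = [
    (0.4, degree(setup_time_min, 60, 10)),  # lower setup time is leaner
    (0.4, degree(wip_turns, 0, 12)),        # higher inventory turns are leaner
    (0.2, (operator_rating - 1) / 4),       # Likert 1-5 rescaled to [0, 1]
]
value = leanness(metrics)
assert abs(value - 0.55) < 1e-9  # 0.4*0.5 + 0.4*0.5 + 0.2*0.75
```

Measuring the same metrics before and after an intervention (as the case study does with 5S and MTM) turns the index into a before/after comparison against the benchmark optimum.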

Abstract:

High-speed broadband internet access is widely recognised as a catalyst to social and economic development. However, the provision of broadband Internet services with the existing solutions to rural population, scattered over an extensive geographical area, remains both an economic and technical challenge. As a feasible solution, the Commonwealth Scientific and Industrial Research Organization (CSIRO) proposed a highly spectrally efficient, innovative and cost-effective fixed wireless broadband access technology, which uses analogue TV frequency spectrum and Multi-User MIMO (MUMIMO) technology with Orthogonal-Frequency-Division-Multiplexing (OFDM). MIMO systems have emerged as a promising solution for the increasing demand of higher data rates, better quality of service, and higher network capacity. However, the performance of MIMO systems can be significantly affected by different types of propagation environments e.g., indoor, outdoor urban, or outdoor rural and operating frequencies. For instance, large spectral efficiencies associated with MIMO systems, which assume a rich scattering environment in urban environments, may not be valid for all propagation environments, such as outdoor rural environments, due to the presence of less scatterer densities. Since this is the first time a MU-MIMO-OFDM fixed broadband wireless access solution is deployed in a rural environment, questions from both theoretical and practical standpoints arise; For example, what capacity gains are available for the proposed solution under realistic rural propagation conditions?. Currently, no comprehensive channel measurement and capacity analysis results are available for MU-MIMO-OFDM fixed broadband wireless access systems which employ large scale multiple antennas at the Access Point (AP) and analogue TV frequency spectrum in rural environments. 
Moreover, according to the literature, no deterministic MU-MIMO channel models exist that characterise rural wireless channels by accounting for terrain effects. This thesis fills these knowledge gaps with channel measurements, channel modelling and a comprehensive capacity analysis for MU-MIMO-OFDM fixed wireless broadband access systems in rural environments. For the first time, channel measurements were conducted in a rural farmland near Smithton, Tasmania using CSIRO's broadband wireless access solution. A novel deterministic MU-MIMO-OFDM channel model, which can be used for accurate performance prediction of rural MU-MIMO channels with dominant Line-of-Sight (LoS) paths, was developed in this research. Results show that the proposed solution can achieve 43.7 bits/s/Hz at a Signal-to-Noise Ratio (SNR) of 20 dB in rural environments. Based on the channel measurement results, this thesis verifies that the deterministic channel model accurately predicts channel capacity in rural environments, with a Root Mean Square (RMS) error of 0.18 bits/s/Hz. Moreover, this study presents a comprehensive capacity analysis of rural MU-MIMO-OFDM channels using experimental, simulated and theoretical models. Based on the validated deterministic model, channel capacity and the effects of capacity variation with different user distribution angles (θ) around the AP were further analysed. For instance, at SNR = 20 dB, the capacity increases from 15.5 bits/s/Hz to 43.7 bits/s/Hz as θ increases from 10° to 360°. Strategies to mitigate these capacity degradation effects, based on a suitable user grouping method, are also presented. Outcomes of this thesis have already been used by CSIRO scientists to determine optimum user distribution angles around the AP, and are of great significance for researchers and MU-MIMO-OFDM system developers seeking to understand the advantages and potential capacity gains of MU-MIMO systems in rural environments.
The results of this study are also useful for further improving the performance of MU-MIMO-OFDM systems in rural environments. Ultimately, this knowledge contribution will be useful in delivering efficient, cost-effective high-speed wireless broadband systems that are tailor-made for rural environments, thus improving the quality of life and economic prosperity of rural populations.
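The spectral-efficiency figures quoted above come from measured and modelled channels. As a sketch of the underlying calculation, the Shannon capacity of a MIMO channel (channel unknown at the transmitter, equal power per antenna) is C = log2 det(I + (SNR/Nt) H Hᴴ). The antenna counts, SNR, and i.i.d. Rayleigh channel below are illustrative assumptions, not the CSIRO system's parameters:

```python
import numpy as np

def mimo_capacity_bits_per_hz(H, snr_linear):
    """Shannon capacity (bits/s/Hz) of a MIMO channel with equal power
    per transmit antenna: C = log2 det(I + (SNR/Nt) * H @ H^H)."""
    nr, nt = H.shape
    gram = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    # determinant of a Hermitian positive-definite matrix is real and positive
    return float(np.real(np.log2(np.linalg.det(gram))))

# Illustrative 4x4 i.i.d. Rayleigh channel at SNR = 20 dB (assumed values)
rng = np.random.default_rng(7)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
capacity = mimo_capacity_bits_per_hz(H, snr_linear=10 ** (20 / 10))
```

For the identity channel this reduces to Nt·log2(1 + SNR/Nt), which is a convenient sanity check; rich-scattering random channels approach this well-conditioned case, while rural LoS-dominated channels generally fall below it.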

Relevância:

100.00%

Publicador:

Resumo:

Development and application of inorganic adsorbent materials have been continuously investigated due to their variability and versatility. This Master's thesis has expanded knowledge in the field of adsorption, targeting radioactive iodine waste and proteins using modified inorganic materials. Industrial treatment of radioactive waste and the safe disposal of nuclear waste are constant concerns around the world as applications of radioactive materials develop. To address these problems, laminar titanate with a large surface area (143 m2 g−1) was synthesised from inorganic titanium compounds by hydrothermal reactions at 433 K. Ag2O nanocrystals with particle sizes ranging from 5–30 nm were anchored on the titanate lamina surface, which is crystallographically similar to the corresponding Ag2O crystal faces. The deposited Ag2O nanocrystals and the titanate substrate could therefore join at these surfaces to form a coherent interface. Such coherence between the two phases reduces the overall energy by minimising surface energy and holds the Ag2O nanocrystals firmly on the outer surface of the titanate structure. The composite was then applied as an efficient adsorbent to remove radioactive iodine from water (one gram of adsorbent can capture up to 3.4 mmol of I− anions), and it can be recovered easily for safe disposal. The structural changes of the titanate lamina and the composite adsorbent were characterised via various techniques. The isotherm and kinetics of iodine adsorption, competitive adsorption and column adsorption were studied to determine the iodine removal abilities of the adsorbent. The adsorbent exhibited excellent trapping ability towards iodine in a fixed-bed column despite the presence of competing ions. Hence, Ag2O-deposited titanate lamina could serve as an effective adsorbent for removing iodine from radioactive waste.
Surface hydroxyl groups of inorganic materials are widely exploited for modification, and inorganic materials can likewise be modified for biomolecule adsorption. Specifically, γ-Al2O3 nanofibre material is converted by calcination from a boehmite precursor, which is synthesised by surfactant-directed hydrothermal reactions. These γ-Al2O3 nanofibres possess a large surface area (243 m2 g-1), good stability under extreme chemical conditions, good mechanical strength and rich surface hydroxyl groups, making them ideal candidates for industrial separation columns. The fibrous morphology of the adsorbent also guarantees facile recovery from aqueous solution by both centrifugation and sedimentation. By chemically bonding dye molecules to the surface, the charge properties of γ-Al2O3 are altered with the aim of selectively capturing lysozyme from chicken egg white solution. The highest lysozyme adsorption capacity obtained was around 600 mg/g, and the lysozyme proportion was elevated from around 5% to 69% in chicken egg white solution. Adsorption tests at different solution pH showed that electrostatic forces played the key role in the good selectivity and high adsorption rate of the surface-modified γ-Al2O3 nanofibre adsorbents. Overall, surface-modified fibrous γ-Al2O3 could potentially be applied as an efficient adsorbent for capturing various biomolecules.
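Isotherm studies of the kind described above are commonly summarised with a Langmuir model for monolayer adsorption. The sketch below fits the linearized Langmuir form to synthetic data; the 3.4 mmol/g maximum uptake is the figure from the iodine study, while the affinity constant K and the concentration range are hypothetical:

```python
import numpy as np

def langmuir(c, q_max, k):
    """Langmuir isotherm: uptake q at equilibrium concentration c,
    q = q_max * K * c / (1 + K * c)."""
    return q_max * k * c / (1.0 + k * c)

def fit_langmuir(c, q):
    """Estimate (q_max, K) from the linearized form c/q = c/q_max + 1/(q_max*K)."""
    slope, intercept = np.polyfit(c, c / q, 1)
    return 1.0 / slope, slope / intercept

# Synthetic data: 3.4 mmol/g capacity (from the abstract), K = 2.0 assumed
c = np.linspace(0.1, 10.0, 20)
q = langmuir(c, q_max=3.4, k=2.0)
q_max_est, k_est = fit_langmuir(c, q)
```

With real measurements, the quality of the linearized fit (or a nonlinear least-squares fit) indicates whether monolayer Langmuir behaviour is a reasonable description of the adsorbent.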

Relevância:

100.00%

Publicador:

Resumo:

Electrostatic discharges have been identified as the most likely cause in a number of fire and explosion incidents with unexplained ignitions. The lack of data and suitable models for this ignition mechanism leaves a void in analyses that attempt to quantify the importance of static electricity as a credible ignition mechanism. A quantifiable hazard analysis of the risk of ignition by static discharge cannot, therefore, be fully carried out with our current understanding of this phenomenon. The study of electrostatics has been ongoing for a long time; however, it was not until the widespread use of electronics that research was directed at protecting electronic devices from electrostatic discharges. Current experimental models of electrostatic discharge, developed for the intrinsic safety of electronics, are inadequate for ignition analysis and are typically not supported by theoretical analysis. A preliminary low voltage simulation and experiment were designed to investigate the characteristics of energy dissipation and provided a basis for a high voltage investigation. At low voltage, the discharge energy represents about 10% of the initial capacitive energy available, and the energy is dissipated within 10 ns of the initial discharge. The potential difference is greatest at the initial breakdown, when the largest amount of the energy is dissipated. The discharge pathway is then established and minimal further energy is dissipated, as energy dissipation becomes strongly influenced by other components and stray resistance in the discharge circuit. The initial low voltage simulation work established the importance of the energy dissipation and the characteristics of the discharge. After the preliminary low voltage work was completed, a high voltage discharge experiment was designed and fabricated.
Voltage and current measurements were recorded on the discharge circuit, allowing the discharge characteristic to be captured and the energy dissipated in the discharge circuit to be calculated. Discharge energy calculations are consistent with the low voltage work, with about 30–40% of the total initial capacitive energy being dissipated in the resulting high voltage arc. After the system was characterised and its operation validated, high voltage ignition energy measurements were conducted on n-pentane evaporating in a 250 cm3 chamber. A series of ignition experiments was conducted to determine the minimum ignition energy of n-pentane. The data from the ignition work were analysed with standard statistical regression methods for tests that return binary (yes/no) data and found to be in agreement with recent publications. The research demonstrates that energy dissipation depends heavily on the circuit configuration, especially the discharge circuit's capacitance and resistance. The analysis established a discharge profile for the discharges studied and validates the application of this methodology to further research into different materials and atmospheres, by systematically examining the discharge profiles of test materials with various parameters (e.g., capacitance, inductance and resistance). Such systematic experiments on spark discharge characteristics will also clarify how energy is dissipated in an electrostatic discharge, enabling a better understanding of the ignition characteristics of materials in terms of discharge energy and its dissipation.
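The energy bookkeeping above (stored capacitive energy versus energy dissipated in the discharge path) can be sketched as follows. The R and C values are illustrative, not the thesis's circuit parameters, and the dissipated energy is computed by trapezoidal integration of sampled v(t)·i(t) traces, as one would do with recorded oscilloscope data:

```python
import numpy as np

def stored_energy(capacitance, v0):
    """Initial energy on the capacitor: E = 1/2 * C * V0^2."""
    return 0.5 * capacitance * v0 ** 2

def dissipated_energy(t, v, i):
    """Energy dissipated in the discharge path: E = integral of v(t)*i(t) dt,
    computed with the trapezoidal rule on sampled traces."""
    p = v * i
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# Illustrative RC discharge: C = 100 pF charged to 10 kV through R = 1 kOhm
C, V0, R = 100e-12, 10e3, 1e3
tau = R * C
t = np.linspace(0.0, 20 * tau, 20001)
v = V0 * np.exp(-t / tau)   # voltage across the resistor
i = v / R                   # current through the resistor
```

In this idealised single-resistor circuit all of the stored energy ends up in R, so the integral recovers ½CV0²; in a real discharge circuit the ratio of the two quantities is exactly the 10% (low voltage) or 30–40% (high voltage arc) figure reported above.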

Relevância:

100.00%

Publicador:

Resumo:

This study was undertaken to examine the influence that a set of Professional Development (PD) initiatives had on faculty use of Moodle, a well-known Course Management System. The context of the study was a private language university just outside Tokyo, Japan. Specifically, the study aimed to identify how closely the PD initiatives adhered to professional development best practice criteria; how faculty members perceived the PD initiatives; what impact the PD initiatives had on faculty use of Moodle; and what other variables may have influenced faculty in their use of Moodle. The study utilised a mixed methods approach. Participants were 42 teachers who worked at the university in the academic year 2008/9. Data were collected through an online survey, semi-structured face-to-face interviews, post-workshop surveys, and a collection of textual artefacts. The online survey consisted of 115 items, factored into 10 constructs. The quantitative data were analysed in SPSS using descriptive statistics, Spearman's rank-order correlation tests and a Kruskal-Wallis test. The qualitative data were used to develop and expand findings and ideas. The results indicated that the PD initiatives adhered closely to technology-related professional development best practice criteria. Further, results from the online survey, post-workshop surveys and follow-up face-to-face interviews indicated that while the implemented PD initiatives were positively perceived by faculty, they did not have the anticipated impact on faculty use of Moodle. Other variables, such as perceptions of Moodle and institutional issues, had a considerable influence on Moodle use.
The findings of the study further strengthened the idea that the five variables Everett Rogers lists in his Diffusion of Innovations (DOI) model (perceived attributes of an innovation, type of innovation decision, communication channels, nature of the social system, and extent of change agents' promotion efforts) most influence the adoption of an innovation. However, the results also indicated that some of the variables in Rogers' DOI model seem to have more influence than others, particularly the perceived attributes of an innovation. In addition, the findings could serve to inform universities that have Course Management Systems (CMSs), such as Moodle, about how to utilise them most efficiently and effectively. The findings could also help inform universities about how to help faculty members acquire the skills necessary to incorporate CMSs into curricula and teaching practice. A limitation of this study was the use of a non-randomised sample, which limits the generalisability of the findings beyond this particular Japanese context.
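The Spearman rank-order correlation used in the quantitative analysis is simply the Pearson correlation applied to ranks, which makes it suitable for ordinal survey constructs. The sketch below implements it directly (assuming no tied values); the variable names and sample data are invented for illustration, not taken from the study:

```python
import numpy as np

def ordinal_ranks(x):
    """Ranks 1..n for the values in x (assumes no ties)."""
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    return float(np.corrcoef(ordinal_ranks(x), ordinal_ranks(y))[0, 1])

# Invented example: hypothetical workshop attendance vs. self-reported Moodle use
attendance = np.array([1, 3, 2, 5, 4, 6])
moodle_use = np.array([2.0, 3.5, 3.0, 5.0, 4.0, 5.5])
rho = spearman_rho(attendance, moodle_use)
```

In practice a library routine that also handles ties and reports a p-value (such as SPSS's procedure, as used in the study) would be preferred; this sketch only shows the core computation.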

Relevância:

100.00%

Publicador:

Resumo:

Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills, and by an agile ability to apply this knowledge when interacting with, and having an impact on, the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: what can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; and (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data.
Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, the data were viewed during the final stage of analysis through the lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: it is transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. The themes that emerged provided evidence of four concepts with the characteristics of threshold concepts. The first three were: information environment (the total information environment is perceived and understood); information structures (content, index structures, and retrieval algorithms are understood); and information vocabularies (fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools). The fourth threshold concept was concept fusion, the integration of the other three threshold concepts, further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (the dancing property), and a profound ontological shift (identity as searcher). In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core, which also integrates the traits and praxes elicited from the study, attributes long recognized in LIS research as present in professional searchers.
The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.