Abstract:
Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage, followed by appropriate retrofitting, helps prevent structural failure, reduces the money spent on maintenance or replacement, and ensures that the structure operates safely and efficiently throughout its intended life. Although visual inspection and other techniques, such as vibration-based methods, are available for the SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option whose use is increasing. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (no external energy supply is needed, as the energy from the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of its attractive features. Despite these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of the recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and the velocities of the AE signals recorded by a number of sensors.
Complications arise, however, because AE waves can travel through a structure in a number of different modes with different velocities and frequencies. Hence, to locate a source accurately, it is necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of the modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating signals to identify their sources is very important. This work developed a model that uses signal processing tools such as cross-correlation, magnitude squared coherence and the energy distribution in different frequency bands, as well as modal analysis (comparing the amplitudes of the identified modes), to accurately differentiate signals from different simulated AE sources. Tools to quantify the severity of damage sources are highly desirable in practical applications. Although various damage quantification methods have been proposed for the AE technique, none has achieved universal acceptance or been shown to be suitable for all situations. The b-value analysis, which involves the study of the distribution of the amplitudes of AE signals, and its modified form (known as the improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel.
These were found to give encouraging results in the analysis of laboratory data, thereby extending the possibility of their use for real-life structures. By addressing these primary issues, this thesis has helped improve the effectiveness of the AE technique for the structural health monitoring of civil infrastructure such as bridges.
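As a hedged illustration of the b-value analysis mentioned above (not the thesis's own implementation), the sketch below fits the slope b of the cumulative amplitude distribution log10 N(>=A) = a - b * (A/20), the AE analogue of the Gutenberg-Richter relation. The hit amplitudes are synthetic, invented for the example.

```python
import math

# Hedged sketch of b-value analysis: least-squares fit of the slope b in
# log10 N(>=A) = a - b * (A / 20), where A is the hit amplitude in dB and
# N(>=A) is the cumulative count of hits at or above that amplitude.
def b_value(amplitudes_db):
    """Least-squares b-value estimate from AE hit amplitudes in dB."""
    hits = sorted(amplitudes_db)
    xs, ys = [], []
    for a in sorted(set(hits)):
        n_ge = sum(1 for h in hits if h >= a)   # cumulative count N(>= a)
        xs.append(a / 20.0)                     # dB -> magnitude scale
        ys.append(math.log10(n_ge))
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return -sxy / sxx                           # fitted slope is -b

# Synthetic hits that follow b = 1 exactly:
# 90 hits at 20 dB, 9 at 40 dB, 1 at 60 dB
hits = [20.0] * 90 + [40.0] * 9 + [60.0]
print(round(b_value(hits), 3))  # → 1.0
```

A falling b-value over time is commonly read as a shift from many small emissions to fewer large ones, i.e. growing damage severity.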
Abstract:
Many substation applications require accurate time-stamping. The performance of systems such as the Network Time Protocol (NTP), IRIG-B and one pulse per second (1-PPS) has been sufficient to date. However, new applications, including the IEC 61850-9-2 process bus and phasor measurement, require an accuracy of one microsecond or better. Furthermore, process bus applications are taking time synchronisation out into high-voltage switchyards, where cable lengths may have an impact on timing accuracy. IEEE Std 1588, the Precision Time Protocol (PTP), is the method preferred by the smart grid standardisation roadmaps (from both the IEC and the US National Institute of Standards and Technology) for achieving this higher level of performance, and it integrates well into Ethernet-based substation automation systems. Significant benefits of PTP include automatic path-length compensation, support for redundant time sources and the cabling efficiency of a shared network. This paper benchmarks the performance of established IRIG-B and 1-PPS synchronisation methods over a range of path lengths representative of a transmission substation. The performance of PTP using the same distribution system is then evaluated and compared with the existing methods to determine whether its performance justifies the additional complexity. Experimental results show that a PTP timing system maintains the synchronising performance of 1-PPS and IRIG-B timing systems when using the same fibre optic cables, and further meets the needs of process buses in large substations.
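The automatic path-length compensation mentioned above rests on the standard IEEE 1588 offset/delay computation from a Sync / Delay_Req timestamp exchange. The sketch below shows that arithmetic only; the nanosecond values are invented for illustration, and a symmetric path (the standard PTP assumption) is assumed.

```python
# Hedged sketch of the IEEE 1588 offset/delay computation behind PTP's
# automatic path-length compensation. t1..t4 are the four timestamps of a
# Sync / Delay_Req exchange, all in the same units (here, nanoseconds).
def ptp_offset_delay(t1, t2, t3, t4):
    """Slave-clock offset from the master and one-way path delay."""
    ms = t2 - t1                 # master-to-slave interval (includes +offset)
    sm = t4 - t3                 # slave-to-master interval (includes -offset)
    offset = (ms - sm) / 2       # symmetric path assumed
    delay = (ms + sm) / 2
    return offset, delay

# Slave running 50 ns ahead over a 200 ns path:
# Sync sent at t1=1000, received at t2=1000+200+50;
# Delay_Req sent at t3=2000, received at t4=2000+200-50.
print(ptp_offset_delay(1000, 1250, 2000, 2150))  # → (50.0, 200.0)
```

Because the one-way delay is measured rather than configured, different cable lengths to each clock are compensated automatically.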
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good-quality products and services is a key success factor for organizations and even governments. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and they are now applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. Some of the work in the healthcare area appears to have evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for knowledge transfer and future development in each sector. Meanwhile, the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitates decision making and cost-effectiveness analyses. This research therefore investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The suitability of clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in the available datasets and data flow; a data-capturing algorithm using Bayesian decision-making methods is then developed to determine an economical sample size for statistical analyses within the quality improvement cycle.
Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from the industrial context are adapted to monitor the radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring the binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, a Bayesian approach is proposed as a new framework for estimating the change point following a control chart signal. This estimate aims to facilitate root-cause analysis within the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach yields highly informative estimates of the change point parameters, since the results are based on probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated. The benefits of change point investigation are revisited and promoted in the monitoring of hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared with a priori known causes, detected by control charts in monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance of processes in which the pre-intervention characteristics of patients affect the outcomes.
In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, together with the empirical results, strongly recommend the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle, in healthcare surveillance as well as in industrial and business contexts. The advantages of the proposed Bayesian framework and estimators are further enhanced when the probability quantification, flexibility and generalizability of the developed models are considered, and these advantages may also be extended to the industrial and business domains where quality monitoring was initially developed.
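To make the change-point idea concrete, here is a deliberately simplified sketch for a step change in a Poisson rate. Unlike the hierarchical MCMC estimators developed in the work above, it treats the pre- and post-change rates as known, so the posterior over the change time reduces to a normalised likelihood under a uniform prior; the counts and rates are synthetic, not the hospital data analysed in the thesis.

```python
import math

# Toy Bayesian change-point posterior for a step change in a Poisson rate:
# with rates lam1 (before) and lam2 (after) treated as known, the posterior
# over the change index tau is the normalised likelihood (uniform prior).
def change_point_posterior(counts, lam1, lam2):
    """P(tau = k | counts): the new rate lam2 takes effect at index k."""
    n = len(counts)
    def loglik(k):
        ll = 0.0
        for i, c in enumerate(counts):
            lam = lam1 if i < k else lam2
            ll += c * math.log(lam) - lam - math.lgamma(c + 1)  # Poisson log-pmf
        return ll
    lls = [loglik(k) for k in range(n + 1)]
    m = max(lls)
    weights = [math.exp(l - m) for l in lls]   # stabilised exponentiation
    total = sum(weights)
    return [w / total for w in weights]

counts = [3, 4, 2, 5, 3, 9, 11, 8, 10, 12]    # synthetic: rate jumps at index 5
post = change_point_posterior(counts, 3.5, 10.0)
print(max(range(len(post)), key=post.__getitem__))  # → 5
```

The full posterior distribution, not just its mode, is what makes the Bayesian estimate "highly informative": it quantifies how sharply the data pin down the change time.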
Abstract:
Plasma-enhanced chemical vapour deposition silicon nitride thin films are widely used as structural materials in microelectromechanical system devices because the mechanical properties of these films can be tailored by adjusting the deposition conditions. However, accurate measurement of the mechanical properties, such as hardness, of films with nanometric thicknesses is challenging. In the present study, the hardness of silicon nitride films deposited on silicon substrates under different deposition conditions was characterised using nanoindentation and nanoscratch deconvolution methods. The hardness values obtained from the two methods were compared, and the effect of the substrate on the measured results was discussed.
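For readers unfamiliar with nanoindentation, the basic hardness relation it rests on is H = P_max / A_c. The sketch below uses the ideal Berkovich area function A_c = 24.5 h_c^2 (the Oliver-Pharr approximation); the load and contact depth are illustrative numbers, not measurements from this study.

```python
# Hedged sketch of the basic nanoindentation hardness relation
# H = P_max / A_c, with the ideal Berkovich projected contact area
# A_c = 24.5 * h_c**2 (Oliver-Pharr approximation, no tip-rounding
# correction). Inputs below are invented for illustration.
def hardness_gpa(p_max_mn, h_c_nm):
    """Hardness in GPa from peak load (mN) and contact depth (nm)."""
    a_c_nm2 = 24.5 * h_c_nm ** 2   # ideal Berkovich projected contact area
    # unit conversion: mN -> N (1e-3), nm^2 -> m^2 (1e-18), Pa -> GPa (1e9)
    return (p_max_mn * 1e-3) / (a_c_nm2 * 1e-18) / 1e9

# e.g. a 5 mN peak load at 100 nm contact depth
print(round(hardness_gpa(5.0, 100.0), 2))  # → 20.41
```

The substrate effect discussed in the abstract arises because, at contact depths comparable to the film thickness, the measured H blends film and substrate response; deconvolution methods attempt to separate the two.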
Abstract:
Problems involving the solution of advection-diffusion-reaction equations on domains and subdomains whose growth affects and is affected by these equations, commonly arise in developmental biology. Here, a mathematical framework for these situations, together with methods for obtaining spatio-temporal solutions and steady states of models built from this framework, is presented. The framework and methods are applied to a recently published model of epidermal skin substitutes. Despite the use of Eulerian schemes, excellent agreement is obtained between the numerical spatio-temporal, numerical steady state, and analytical solutions of the model.
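As a minimal illustration of the kind of Eulerian scheme mentioned above, the sketch below takes one explicit finite-difference step for a one-dimensional advection-diffusion-reaction equation u_t = D u_xx - v u_x + r u on a fixed domain; the domain growth that is the focus of the framework is deliberately omitted, and all parameter values are illustrative.

```python
# Toy Eulerian step for u_t = D u_xx - v u_x + r u on a FIXED 1-D grid:
# forward Euler in time, central differences for diffusion, first-order
# upwind for advection (valid for v >= 0). Domain growth is omitted.
def step(u, dx, dt, D, v, r):
    """Return u advanced by one time step; Dirichlet boundaries held fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        diff = D * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2
        adv = -v * (u[i] - u[i - 1]) / dx
        new[i] = u[i] + dt * (diff + adv + r * u[i])
    return new

# Pure diffusion of a unit spike (stability requires D * dt / dx**2 <= 0.5)
u = step([0.0, 0.0, 1.0, 0.0, 0.0], dx=1.0, dt=0.1, D=1.0, v=0.0, r=0.0)
```

With no advection or reaction, the spike spreads symmetrically and total mass is conserved on the interior, which is a quick sanity check for such schemes.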
Abstract:
Background: Gender differences in cycling are well documented. However, most analyses of gender differences make broad comparisons, with few studies modeling male and female cycling patterns separately for recreational and transport cycling. This modeling is important in order to improve efforts to promote cycling to women and men in countries, like Australia, with low rates of transport cycling. The main aim of this study was to examine gender differences in cycling patterns and in motivators of and constraints to cycling, separately for recreational and transport cycling. Methods: Adult members of a Queensland, Australia, community bicycling organization completed an online survey about their cycling patterns; cycling purposes; and personal, social and perceived environmental motivators and constraints (47% response rate). Closed and open-ended questions were included. Using the quantitative data, multivariable linear, logistic and ordinal regression models were used to examine associations between gender and cycling patterns, motivators and constraints. The qualitative data were thematically analysed to expand upon the quantitative findings. Results: In this sample of 1862 bicyclists, men were more likely than women to cycle for recreation and for transport, and they cycled for longer. Most transport cycling was for commuting, with men more likely than women to commute by bicycle. Men were more likely to cycle on-road, and women off-road. However, most men and women preferred not to cycle on-road without designated bicycle lanes, and the qualitative data indicated a strong preference among both men and women for bicycle-only off-road paths. Both genders reported personal factors (related to health and enjoyment) as motivators for cycling, although women were more likely to agree that other personal, social and environmental factors were also motivating.
The main constraints for both genders and both cycling purposes were perceived environmental factors related to traffic conditions, motorist aggression and safety. Women, however, reported more constraints and were more likely to report other environmental and personal factors as constraints. Conclusion: The differences found in men's and women's cycling patterns, motivators and constraints should be considered in efforts to promote cycling, particularly efforts to increase cycling for transport.
Abstract:
The Texas Department of Transportation (TxDOT) is concerned about the widening gap between pavement preservation needs and available funding. The TxDOT Austin District Pavement Engineer (DPE) has therefore investigated methods for strategically allocating available pavement funding to potential projects that improve the overall performance of the District and Texas highway systems. The primary objective of the study presented in this paper is to develop a network-level project screening and ranking method that supports the development of the Austin District's 4-year pavement management plan. The study developed candidate project selection and ranking algorithms that evaluate the pavement condition of each candidate project using data contained in the Pavement Management Information System (PMIS) database and incorporate insights from Austin District pavement experts, and it implemented the developed method and its supporting algorithms. This process previously required weeks to complete but now takes about 10 minutes, including data preparation and running the analysis algorithm, which enables the Austin DPE to devote more time and resources to conducting field visits and to performing project-level evaluation and testing of candidate projects. The case study results showed that the proposed method assisted the DPE in evaluating and prioritizing projects and in allocating funds to the right projects at the right time.
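As a purely hypothetical sketch of the kind of network-level screening described above, the code below scores each candidate section from PMIS-style condition data with a weighted sum and ranks in descending order of need. The field names, weights and values are invented for illustration; they are not TxDOT's actual criteria.

```python
# Hypothetical network-level screening: rank candidate pavement sections by
# a weighted sum of condition indices (higher score = greater need). All
# fields, weights and values are illustrative, not actual PMIS criteria.
def rank_candidates(sections, weights):
    """Return sections sorted by weighted condition score, worst first."""
    def score(s):
        return sum(w * s[key] for key, w in weights.items())
    return sorted(sections, key=score, reverse=True)

sections = [
    {"id": "A", "distress": 60, "ride": 40, "traffic": 80},
    {"id": "B", "distress": 90, "ride": 70, "traffic": 50},
    {"id": "C", "distress": 30, "ride": 20, "traffic": 90},
]
weights = {"distress": 0.5, "ride": 0.3, "traffic": 0.2}
print([s["id"] for s in rank_candidates(sections, weights)])  # → ['B', 'A', 'C']
```

In practice such a screening pass only produces a short list; the DPE's field visits and project-level testing then decide the final program, as the abstract notes.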
Abstract:
Background: Mesenchymal stromal cells (MSC) with properties similar to those of bone marrow-derived mesenchymal stromal cells (BM-MSC) have recently been grown from the limbus of the human cornea. We contribute to this novel area of research by evaluating methods for culturing human limbal MSC (L-MSC). Methods: Four basic strategies were compared: serum-supplemented medium (10% foetal bovine serum; FBS); standard serum-free medium supplemented with B-27, epidermal growth factor and fibroblast growth factor 2; and two commercial serum-free media, Defined Keratinocyte Serum Free Medium (Invitrogen) and MesenCult-XF (Stem Cell Technologies). The phenotype of the resulting cultures was examined using photography, flow cytometry (for CD34, CD45, CD73, CD90, CD105, CD141 and CD271), immunocytochemistry (α-sma), differentiation assays (osteogenesis, adipogenesis and chondrogenesis), and co-culture experiments with human limbal epithelial (HLE) cells. Results: While all techniques supported the establishment of cultures to varying degrees, sustained growth and serial propagation were achieved only in 10% FBS medium or MesenCult-XF medium. Cultures established in 10% FBS medium were 70-80% CD34-/CD45-/CD90+/CD73+/CD105+, approximately 25% α-sma+, and displayed multipotency. Cultures established in MesenCult-XF were >95% CD34-/CD45-/CD90+/CD73+/CD105+ and 40% CD141+, rarely expressed α-sma, and displayed multipotency. L-MSC supported the growth of HLE cells, with the largest epithelial islands observed in the presence of MesenCult-XF-grown L-MSC. All HLE cultures supported by L-MSC widely expressed the progenitor cell marker ∆Np63 along with the corneal differentiation marker cytokeratin 3. Conclusions: We conclude that MesenCult-XF is a superior culture system for L-MSC, but further studies are required to explore the significance of CD141 expression in these cells.
Abstract:
Dengue virus is the most significant human viral pathogen spread by the bite of an infected mosquito. With no vaccine or antiviral therapy currently available, disease prevention relies largely on surveillance and mosquito control. Preventing the onset of dengue outbreaks and managing vectors effectively would be considerably enhanced by surveillance of dengue virus prevalence in natural mosquito populations. However, current approaches to identifying the virus in field-caught mosquitoes rely on relatively slow and labor-intensive techniques, such as virus isolation or RT-PCR, which require specialized facilities and personnel. Here, a rapid and portable method for detecting dengue virus-infected mosquitoes is described. Using a hand-held, battery-operated homogenizer and a dengue diagnostic rapid strip, the viral protein NS1 was detected as a marker of dengue virus infection. The method can be performed in the field in less than 30 min, requires no downstream processing, and is able to detect a single infected mosquito in a pool of at least 50 uninfected mosquitoes. It allows rapid, real-time monitoring of the presence of dengue virus in mosquito populations and could be a useful addition to effective monitoring and vector control responses.
Abstract:
BACKGROUND: Demineralized freeze-dried bone allografts (DFDBAs) have been proposed as a useful adjunct in periodontal therapy to induce periodontal regeneration through the induction of new bone formation. The presence of bone morphogenetic proteins (BMPs) within the demineralized matrix has been proposed as a possible mechanism through which DFDBA may exert its biologic effect. However, in recent years, the predictability of results using DFDBA has been variable, which has led to its use being questioned. One reason for the variability in tissue response may be differences in the processing of DFDBA, which may lead to a loss of activity of any bioactive substances within the DFDBA matrix. Therefore, the purpose of this investigation was to determine whether there are detectable levels of bone morphogenetic proteins in commercial DFDBA preparations. METHODS: A single preparation of DFDBA was obtained from each of three commercial sources. Each preparation was studied in triplicate. Proteins within the DFDBA samples were first extracted with 4 M guanidinium HCl for seven days at 40 degrees Celsius, and the residue was further extracted with 4 M guanidinium HCl/EDTA for seven days at 40 degrees Celsius. Two anti-human BMP-2 and -4 antibodies were used to detect the presence of BMPs in the extracts. RESULTS: Neither BMP-2 nor BMP-4 was detected in any of the extracts. When recombinant human BMP-2 and -4 were added throughout the DFDBA extraction process, not only were the intact proteins detected, but smaller molecular weight fragments were also noted in the extract. CONCLUSIONS: These results indicate that none of the DFDBA samples tested had detectable amounts of BMP-2 or -4. In addition, an unknown substance present in the DFDBA may be responsible for the degradation of whatever BMPs might be present.
Abstract:
In the context of ambiguity resolution (AR) for Global Navigation Satellite Systems (GNSS), decorrelation of the entries of the ambiguity vector, the integer ambiguity search, and ambiguity validation are the three standard procedures for solving integer least-squares problems. This paper contributes to AR in three respects. Firstly, the orthogonality defect is introduced as a new measure of the performance of ambiguity decorrelation methods and is compared with the decorrelation number and the condition number, which are currently used as criteria for measuring the correlation of the ambiguity variance-covariance matrix. Numerically, the orthogonality defect correlates slightly better with decorrelation impact and computational efficiency than the condition number does. Secondly, the paper examines how the decorrelation number, the condition number, the orthogonality defect and the size of the ambiguity search space relate to the number of ambiguity search candidates and search nodes. The size of the ambiguity search space can be estimated properly if the ambiguity matrix is well decorrelated, and it is shown to be a significant parameter in the ambiguity search process. Thirdly, a new ambiguity resolution scheme is proposed to improve ambiguity search efficiency by controlling the size of the ambiguity search space. The new AR scheme combines the LAMBDA search and validation procedures, which results in a much smaller search space and higher computational efficiency while retaining the same AR validation outcomes. In fact, the new scheme can deal with the case where there is only one candidate, whereas the existing search methods require at least two candidates. If there is more than one candidate, the scheme reverts to the usual ratio-test procedure.
Experimental results indicate that this combined method can indeed improve ambiguity search efficiency for both single and dual constellations, showing its potential for processing high-dimension integer parameters in multi-GNSS environments.
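The orthogonality defect introduced above has a simple definition: delta(B) equals the product of the column norms of B divided by |det B|, and equals 1 exactly when the columns are orthogonal. The sketch below computes it for toy 2x2 matrices, which are illustrative only and not real ambiguity variance-covariance matrices.

```python
import math

# Hedged sketch of the orthogonality defect as a decorrelation measure:
# delta(B) = (product of column norms) / |det B|; delta = 1 iff the
# columns of B are orthogonal, and larger values mean stronger correlation.
def orthogonality_defect(b):
    """Orthogonality defect of a small square matrix given as a list of rows."""
    n = len(b)
    cols = [[b[i][j] for i in range(n)] for j in range(n)]
    prod = 1.0
    for c in cols:
        prod *= math.sqrt(sum(x * x for x in c))
    return prod / abs(_det(b))

def _det(m):
    """Determinant by Laplace expansion along the first row (toy sizes only)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * _det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

print(orthogonality_defect([[1.0, 0.0], [0.0, 2.0]]))            # → 1.0
print(round(orthogonality_defect([[1.0, 1.0], [0.0, 1.0]]), 3))  # → 1.414
```

A decorrelation (reduction) step aims to drive this quantity toward 1, which is why it can serve as a performance measure for decorrelation methods.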
Abstract:
This paper describes observational research and verbal protocol methods, how these methods are applied and integrated within different contexts, and how they complement each other. The first case study focuses on nurses' interactions while bandaging patients' lower legs. To maintain research rigor, a triangulation approach was applied that links observations of current procedures, a 'talk-aloud' protocol during interaction, and a retrospective protocol. Maps of interactions demonstrated that some nurses bandage more intuitively than others: nurses who bandage intuitively assemble long sequences of bandaging actions, while nurses who bandage less intuitively 'focus-shift' between bandaging actions. Thus, different levels of expertise were identified. The second case study consists of two laboratory experiments. It focuses on analysing and comparing software and product design teams and how they approached a design problem, and it is based on observational and verbal data analysis. The coding scheme applied evolved during the analysis of each team's activity and is identical for all teams. The structure of the knowledge captured from the analysis of the design teams' maps of interaction is identified. The significance of this work lies in its methodological approach: the maps of interaction are instrumental in understanding the activities and interactions of the people observed. By examining the maps of interaction, it is possible to draw conclusions about interactions, the structure of the knowledge captured, and levels of expertise. This research approach is transferable to other design domains, and designers will be able to transfer the interaction-map outcomes to the systems and services they design.
Abstract:
A recent production of Nicholson’s Shadowlands at the Brisbane Powerhouse could have included two advertising lines: “Outspoken American-Jewish poet meets conservative British Oxford scholar” and “Emotive American Method trained actor meets contained British trained actor.” While the fusion of acting methodologies in intercultural acting has been discussed at length, little discussion has focussed on the juxtaposition of diverse acting styles in production in mainstream theatre. This paper explores how the permutation of American Method acting and a more traditional British conservatory acting in Crossbow’s August 2010 production of Shadowlands worked to add extra layers of meaning to the performance text. This sometimes inimical relationship between two acting styles had its beginnings in the rehearsal room and continued onstage. Audience reception to the play in post-performance discussions revealed the audience’s acute awareness of the transatlantic cultural tensions on stage. On one occasion, this resulted in a heated debate on cultural expression, continuing well after the event, during which audience members became co-performers in the cultural discourses of the play.
Abstract:
Airports are vital sources of income for countries and cities. However, airports are often understood from a management perspective rather than a passenger perspective. As passengers are vital customers of airports, a passenger perspective can provide a novel approach to understanding and improving the airport experience. This paper focuses on the study of passenger experiences at airports. The research builds on the authors' recent investigations of passenger discretionary activities in airports, which have provided a new perspective on understanding the airport experience. The research reported in this paper involves field studies at three Australian airports. Seventy-one people with impending travel were recruited to take part in the field study. Data collection methods included video-recorded observation and post-travel interviews. Observations were coded and a list of the activities performed was developed. These activities were then classified into an activity taxonomy according to each activity's location and context. The study demonstrates that passengers perform a wide range of activities as they navigate through the airport. The emerging activity taxonomy consists of eight categories: (i) processing, (ii) preparatory, (iii) consumptive, (iv) social, (v) entertainment, (vi) passive, (vii) queuing and (viii) moving. The research provides a novel perspective for understanding the experience of passengers at international airports and has been applied in airports to improve passenger processing and reduce waiting times. The significance of the taxonomy lies in its potential application to airport terminal design and in how it can be utilised to understand and improve the passenger experience.
Abstract:
The purpose of this paper is to report on a methods research project investigating the evaluation of diverse teaching practice in Higher Education. The research method is a single-site case study of an Australian university, with data collected through published documents, surveys, interviews and focus groups. The project provides evidence of the wide variety of evaluation practice and diverse teaching practice across the university. This breadth identifies the need for greater flexibility in evaluation processes, tools and support to assist teaching staff in evaluating their diverse teaching practice. The employment opportunities for academics benchmark the university nationally and position the case study in the field. Finally, the findings reaffirm the institution's responsibility for services that support teaching staff in an ongoing manner.