188 results for Diagnostic techniques and procedures
Abstract:
Survey-based health research is booming, driven by increased health spending in OECD countries and growing interest in ageing. A general characteristic of survey-based health research is its diversity. Different studies are based on different health questions in different datasets; they use different statistical techniques; they differ in whether they approach health from an ordinal or cardinal perspective; and they differ in whether they measure short-term or long-term effects. The question in this paper is simple: do these differences matter for the findings? We investigate the effects of lifestyle choices (drinking, smoking, exercise) and income on six measures of health in the US Health and Retirement Study (HRS) between 1992 and 2002: (1) self-assessed general health status, (2) problems with undertaking daily tasks and chores, (3) mental health indicators, (4) BMI, (5) the presence of serious long-term health conditions, and (6) mortality. We compare ordinal models with cardinal models; we compare models with fixed effects to models without fixed effects; and we compare short-term effects to long-term effects. We find considerable variation in the impact of different determinants on our chosen health outcome measures; it matters whether ordinality or cardinality is assumed; estimates that account for fixed effects differ substantially from those that do not; and short-run and long-run effects differ greatly. All this implies that health is an even more complicated notion than hitherto thought, defying generalization from one measure to another or from one methodology to another.
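As a hedged illustration of why the fixed-effects choice discussed above matters, the following sketch contrasts a pooled (cardinal) OLS slope with a within (fixed-effects) slope on simulated panel data. The variable names, data-generating process, and parameters are illustrative assumptions, not the paper's actual HRS specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_waves = 500, 6                    # illustrative panel dimensions
alpha = rng.normal(0.0, 1.0, n_people)        # unobserved individual health endowment
income = alpha[:, None] + rng.normal(0.0, 1.0, (n_people, n_waves))   # income correlated with endowment
health = 0.3 * income + alpha[:, None] + rng.normal(0.0, 1.0, (n_people, n_waves))

# pooled (cardinal) OLS slope: biased because income is correlated with alpha
b_pooled = np.polyfit(income.ravel(), health.ravel(), 1)[0]

# fixed-effects (within) slope: demean each person's series first, removing alpha
inc_d = income - income.mean(axis=1, keepdims=True)
hlt_d = health - health.mean(axis=1, keepdims=True)
b_fe = np.polyfit(inc_d.ravel(), hlt_d.ravel(), 1)[0]

print(f"pooled OLS: {b_pooled:.2f}   fixed effects: {b_fe:.2f}   true effect: 0.30")
```

On this simulated panel the pooled slope comes out near 0.8 while the within estimate recovers the true 0.3, which mirrors the finding that accounting for fixed effects can change estimated effects substantially.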
Abstract:
In this study, cell sheets comprising multilayered porcine bone marrow stromal cells (BMSC) were assembled with fully interconnected scaffolds made from medical-grade polycaprolactone–calcium phosphate (mPCL–CaP) for the engineering of structural and functional bone grafts. The BMSC sheets were harvested from culture flasks and wrapped around pre-seeded composite scaffolds. The layered cell sheets integrated well with the scaffold/cell construct and remained viable, with mineralized nodules visible both inside and outside the scaffold for up to 8 weeks of culture. Cells within the constructs underwent classical in vitro osteogenic differentiation, with the associated elevation of alkaline phosphatase activity and bone-related protein expression. In vivo, two sets of cell-sheet-scaffold/cell constructs were transplanted under the skin of nude rats. The first set of constructs (5 × 5 × 4 mm³) was assembled with BMSC sheets and cultured for 8 weeks before implantation. The second set of constructs (10 × 10 × 4 mm³) was implanted immediately after assembly with BMSC sheets, with no further in vitro culture. For both groups, neocortical and well-vascularised cancellous bone formed within the constructs, with up to 40% bone volume. Histological and immunohistochemical examination revealed that the new bone tissue formed from the pool of seeded BMSC and that bone formation followed a predominantly endochondral pathway, with woven bone matrix subsequently maturing into fully mineralized compact bone exhibiting the histological markers of native bone. These findings demonstrate that large bone tissues similar to native bone can be regenerated using BMSC sheet techniques in conjunction with composite scaffolds whose structures are optimized from a mechanical, nutrient transport and vascularization perspective.
Abstract:
Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Practical applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem: locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census transforms, were compared. Both transforms were found to improve the reliability of matching in the presence of radiometric distortion, a significant result since radiometric distortion commonly arises in practice. They also have low computational complexity, making them amenable to fast hardware implementation. Therefore, matching algorithms using these transforms became the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation yielded a constraint that must be satisfied for a correct match, termed the rank constraint. The theoretical derivation of this constraint contrasts with existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint was proposed. This algorithm was tested on a number of stereo pairs and in all cases consistently produced an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas, including the use of an image pyramid for match prediction and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results showed that the new algorithm is able to remove a large proportion of invalid matches and improve match accuracy.
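For readers unfamiliar with the two non-parametric transforms compared above, the following is a minimal sketch of the standard rank and census transforms as applied to a greyscale image before area-based matching. The window size and bit packing are illustrative assumptions, and the sketch does not reproduce the thesis's rank constraint or hybrid algorithm.

```python
import numpy as np

def rank_transform(img, win=5):
    """Rank transform: replace each pixel by the number of pixels in its
    win x win neighbourhood whose intensity is less than the centre pixel."""
    r = win // 2
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])
    return out

def census_transform(img, win=5):
    """Census transform: encode, as a bit string, which neighbourhood pixels
    are darker than the centre pixel (packed here into a 64-bit integer,
    so windows up to 8 x 8 fit)."""
    r = win // 2
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            code = 0
            for bit in (patch < img[y, x]).ravel():
                code = (code << 1) | int(bit)
            out[y, x] = code
    return out

# After transforming both images, area-based matching typically compares
# windows using SAD on the rank image or Hamming distance on the census codes,
# e.g. bin(int(a) ^ int(b)).count("1") for two census codes a and b.
```

Because both transforms depend only on the ordering of intensities within a window, not their absolute values, they are insensitive to the radiometric distortion mentioned above.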
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals, a topic that plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals, the approach of time-frequency analysis is considered; this is Part II. In Part I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized only approximately, using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL to FSK demodulation is also considered. The idea of replacing the HT with a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behavior of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of first- and second-order TDTLs in additive Gaussian noise has been examined. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be used. For the IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main concern of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain, in which multicomponent signals can be identified by multiple energy peaks.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only (one-dimensional) rather than time-lag (two-dimensional) kernels, and hence the class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
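As a minimal illustration of the non-parametric, time-frequency approach to IF estimation described in Part II, the sketch below tracks the energy peak of a time-frequency representation over time. A plain spectrogram is used as a stand-in for the adaptive T-class distributions, and the test signal and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                   # sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
# linear FM test signal: phase 2*pi*(50t + 100t^2), so the true IF is 50 + 200t Hz
x = np.cos(2 * np.pi * (50.0 * t + 100.0 * t ** 2))

# time-frequency representation of the signal
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=120)

# non-parametric IF estimate: frequency of the energy peak in each time slice
if_est = f[np.argmax(Sxx, axis=0)]
print(if_est[:5], if_est[-5:])   # should rise from near 50 Hz towards 250 Hz
```

For a multicomponent signal a single argmax per time slice is not enough; one would instead pick several local peaks per slice, which is where the resolution and artifact-reduction properties of the quadratic TFDs discussed above become important.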
Abstract:
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases with aging equipment and can have serious consequences for the continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time; on the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation and to consider multiple levels of component failure. In this thesis a new model associated with aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the reliability of bulk supply loads and distribution network consumers over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is also developed that supports early planning decisions about the replacement of non-repairable aging components, in order to maintain system reliability at an economically acceptable level. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
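As a generic, hedged sketch of the kind of calculation involved (the life distribution, parameter values, and function names below are illustrative assumptions, not the specific model developed in the thesis), the following contrasts the steady-state unavailability due to repairable random failures with the conditional probability of a non-repairable aging failure over a planning horizon, using a Weibull life model:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the thesis)
lam = 0.02             # random (repairable) failure rate, failures/year
mttr_hours = 48.0      # mean time to repair for random failures
beta, eta = 3.5, 60.0  # Weibull shape and scale (years) for aging failures
age_now = 35.0         # present age of the component, years

def random_failure_unavailability(lam, mttr_hours):
    """Steady-state unavailability from repairable random failures."""
    mttr_years = mttr_hours / 8760.0
    return lam * mttr_years / (1.0 + lam * mttr_years)

def aging_failure_prob(age_now, horizon, beta, eta):
    """Conditional probability of a non-repairable aging failure within
    `horizon` years, given survival to `age_now`, under a Weibull life model."""
    surv = lambda t: np.exp(-(t / eta) ** beta)
    return 1.0 - surv(age_now + horizon) / surv(age_now)

for years_ahead in (1, 5, 10):
    p_age = aging_failure_prob(age_now, years_ahead, beta, eta)
    u_rand = random_failure_unavailability(lam, mttr_hours)
    print(f"{years_ahead:2d} yr horizon: aging failure prob = {p_age:.3f}, "
          f"random-failure unavailability = {u_rand:.5f}")
```

The split is visible in the output: the random-failure unavailability stays constant, while the aging-failure probability grows rapidly with the planning horizon for an old component, which is what motivates early replacement decisions for non-repairable aging equipment.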
Abstract:
Nowadays, business process management is an important approach for managing organizations from an operational perspective. As a consequence, it is common to see organizations develop collections of hundreds or even thousands of business process models. Such large collections of process models bring new challenges and provide new opportunities, as the knowledge that they encapsulate needs to be properly managed. Therefore, a variety of techniques for managing large collections of business process models is being developed. The goal of this paper is to provide an overview of the management techniques that currently exist, as well as the open research challenges that they pose.
Abstract:
The overall aim of this project was to contribute to existing knowledge regarding methods for measuring the characteristics of airborne nanoparticles and controlling occupational exposure to them, and to gather data on nanoparticle emission and transport in various workplaces. The scope of this study involved investigating the characteristics and behaviour of particles arising from the operation of six nanotechnology processes, subdivided into nine processes for measurement purposes. It did not include toxicological evaluation of the aerosols and, therefore, no direct conclusions were drawn regarding the health effects of exposure to these particles. Our research included real-time measurement of sub- and supermicrometre particle number and mass concentrations, count median diameter, and alveolar deposited surface area using condensation particle counters, an optical particle counter, a DustTrak photometer, a scanning mobility particle sizer, and a nanoparticle surface area monitor, respectively. Off-line particle analysis included scanning and transmission electron microscopy, energy-dispersive x-ray spectrometry, and thermal-optical analysis of elemental carbon. Sources of both fibrous and non-fibrous particles were included.
Abstract:
CTAC2012 was the 16th biennial Computational Techniques and Applications Conference, held at Queensland University of Technology from 23 to 26 September 2012. The ANZIAM Special Interest Group in Computational Techniques and Applications is responsible for the CTAC meetings, the first of which was held in 1981.
Abstract:
Introduction: Undergraduate students studying the Bachelor of Radiation Therapy at Queensland University of Technology (QUT) attend clinical placements at a number of department sites across Queensland. To ensure that the curriculum prepares students for the most common treatments and current techniques in use in these departments, a curriculum matching exercise was performed. Methods: A cross-sectional census was performed on a pre-determined "Snapshot" date in 2012. This was undertaken by the clinical education staff in each department, who used a standardized proforma to count the number of patients as well as prescription, equipment, and technique data for a list of tumour site categories. This information was combined into aggregate anonymized data. Results: All 12 Queensland radiation therapy clinical sites participated in the Snapshot data collection exercise to produce a comprehensive overview of clinical practice on the chosen day. A total of 59 different tumour sites were treated on the chosen day; as expected, the most common treatment sites were prostate and breast, together comprising 46% of patients treated. Data analysis also indicated that use of intensity-modulated radiotherapy (IMRT) is relatively high, with 19.6% of patients receiving IMRT treatment on the chosen day. Both IMRT and image-guided radiotherapy (IGRT) indications matched recommendations from the evidence. Conclusion: The Snapshot method proved to be a feasible and efficient method of gathering useful data.
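A minimal sketch of the kind of aggregation the Snapshot census implies is shown below; the rows, column names, and counts are invented placeholders, not the actual Queensland data.

```python
import pandas as pd

# hypothetical per-site rows transcribed from the standardized proforma
rows = [
    ("Site A", "Prostate", "IMRT",  14),
    ("Site A", "Breast",   "3DCRT", 11),
    ("Site B", "Prostate", "3DCRT",  9),
    ("Site B", "Lung",     "IMRT",   4),
]
df = pd.DataFrame(rows, columns=["site", "tumour_site", "technique", "patients"])

# aggregate anonymized totals across all participating sites
total = df["patients"].sum()
by_tumour = df.groupby("tumour_site")["patients"].sum().sort_values(ascending=False)
imrt_share = df.loc[df["technique"] == "IMRT", "patients"].sum() / total

print(by_tumour)
print(f"IMRT share of patients treated: {imrt_share:.1%}")
```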
Abstract:
With the explosive growth of resources available through the Internet, information mismatching and overload have become a severe concern for users. Web users are commonly overwhelmed by huge volumes of information and are faced with the challenge of finding the most relevant and reliable information in a timely manner. Personalised information gathering and recommender systems represent state-of-the-art tools for efficient selection of the most relevant and reliable information resources, and interest in such systems has increased dramatically over the last few years. However, web personalization has not yet been well exploited: difficulties arise, from both a technological and a social perspective, in selecting resources through recommender systems. Aiming to promote high-quality research to overcome these challenges, this paper provides a comprehensive survey of recent work and achievements in the areas of personalised web information gathering and recommender systems. The survey covers concept-based techniques exploited in personalised information gathering and recommender systems.
Abstract:
In Bermingham v Priest [2002] QSC 057 Jones J considered the position of persons seeking to claim damages where the Motor Accident Insurance Act 1994 applies prior to its amendment by the Motor Accident Insurance Amendment Act 2000, and where proceedings are brought close to the expiration of the statutory limitation period.
Abstract:
In JLG Industries Inc v Teetree Pty Ltd [2002] QDC 031 the court considered the costs implications of an offer to settle made by the plaintiff under the UCPR, where the element of compromise involved only acceptance of the amount of the claim without interest.
Abstract:
The decision of Chesterman J in Cross v Queensland Rugby Football Union Ltd [2001] QSC 173 (Supreme Court of Queensland, No 3426 of 1997, 30 May 2001) opens up possibilities for delivering interrogatories, particularly interrogatories relating to an opponent's version of events.