188 results for Diagnostic techniques and procedures
Abstract:
In an age where digital innovation knows no boundaries, research in the area of brain-computer interfaces and other neural interface devices goes where none have gone before. The possibilities are endless and, as dreams become reality, the implications of these developments should be considered. Some of these new devices have been created to correct or minimise the effects of disease or injury, so the paper discusses some of the current research and development in the area, including neuroprosthetics. To assist researchers and academics in identifying some of the legal and ethical issues that might arise from research and development of neural interface devices, using both non-invasive techniques and invasive procedures, the paper discusses a number of recent observations of authors in the field. The issue of enhancing human attributes by incorporating these new devices is also considered. Such enhancement may be regarded as freeing the mind from the constraints of the body, but there are legal and moral issues that researchers and academics would be well advised to contemplate as these new devices are developed and used. While different fact situations surround each of these new devices, and those that are yet to come, consideration of the legal and ethical landscape may assist researchers and academics in dealing effectively with matters that arise in these times of transition. Lawyers could seek to facilitate the resolution of legal disputes arising in this area of research and development within the existing judicial and legislative frameworks. Whether these frameworks will suffice, or will need to change in order to enable effective resolution, is a broader question to be explored.
Abstract:
Concrete is commonly used as a primary construction material for tall buildings. Load-bearing components such as columns and walls in concrete buildings are subjected to instantaneous and long-term axial shortening caused by the time-dependent effects of shrinkage, creep and elastic deformation. Reinforcing steel content, variable concrete modulus, the volume-to-surface-area ratio of the elements and environmental conditions govern axial shortening. The impact of differential axial shortening among columns and core shear walls escalates with increasing building height. Differential axial shortening of gravity-loaded elements in geometrically complex and irregular buildings results in permanent distortion and deflection of the structural frame, which has a significant impact on building envelopes, building services, secondary systems and the lifetime serviceability and performance of a building. Existing numerical methods commonly used in design to quantify axial shortening are mainly based on elastic analytical techniques and are therefore unable to capture the complexity of non-linear time-dependent effects. Ambient measurements of axial shortening using vibrating-wire, external mechanical strain, and electronic strain gauges are available to verify values pre-estimated at the design stage. Permanently embedding these gauges in, or fixing them to the surface of, concrete components for continuous measurement during and after construction, with adequate protection, is uneconomical, inconvenient and unreliable; such methods are therefore rarely, if ever, used in actual building construction practice. This research project has developed a rigorous numerical procedure that encompasses linear and non-linear time-dependent phenomena for prediction of axial shortening of reinforced concrete structural components at the design stage.
This procedure takes into consideration (i) the construction sequence, (ii) time-varying values of Young's modulus of reinforced concrete and (iii) creep and shrinkage models that account for variability resulting from environmental effects. The capabilities of the procedure are illustrated through examples. In order to update predictions of axial shortening during the construction and service stages of the building, this research has also developed a vibration-based procedure using ambient measurements. This procedure takes into consideration the changes in the vibration characteristics of the structure during and after construction. Its application is illustrated through numerical examples, which also highlight its features. The vibration-based procedure can also be used as a tool to assess the structural health and performance of key structural components in the building during construction and service life.
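The interplay between loading age and a time-varying concrete modulus can be illustrated with a minimal sketch. The strength-gain curve below is an ACI 209-style form, and the section size, storey height and load are illustrative assumptions, not values from this research:

```python
import math

def youngs_modulus(t_days, E28=30e9, a=4.0, b=0.85):
    """Illustrative time-varying concrete modulus (Pa), using an
    ACI 209-style gain curve normalised to the 28-day value E28."""
    return E28 * math.sqrt(t_days / (a + b * t_days))

def elastic_shortening(P, L, A, t_days):
    """Instantaneous axial shortening delta = P*L / (A * E(t))."""
    return P * L / (A * youngs_modulus(t_days))

# A 4 m storey column with a 0.5 m^2 section under a 2 MN gravity load,
# loaded at 7 days versus 28 days after casting.
early = elastic_shortening(2e6, 4.0, 0.5, 7)
late = elastic_shortening(2e6, 4.0, 0.5, 28)
```

Younger concrete is softer, so the same load produces more shortening; a full design-stage procedure would add the creep and shrinkage components on top of this elastic term.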
Abstract:
Starting with the incident now known as the Cow’s Head Protest, this article traces and unpacks the events, techniques, and conditions surrounding the representation of ethno-religious minorities in Malaysia. The author suggests that the Malaysian Indians’ struggle to correct the dominant reading of their community as an impoverished and humbled underclass is a disruption of the dominant cultural order in Malaysia. It is also among the key events to have set in motion a set of dynamics—the visual turn—introduced by new media into the politics of ethno-communal representation in Malaysia. Believing that this situation requires urgent examination, the author attempts to outline the problematics of the task.
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
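The Cobb angle, the standard severity metric for curves such as those discussed in this chapter, reduces to simple trigonometry once the two most-tilted vertebral endplate lines have been identified, whether manually or by automated image processing. A minimal sketch, with hypothetical endplate slopes:

```python
import math

def cobb_angle(upper_slope, lower_slope):
    """Cobb angle in degrees from the slopes (rise/run in image
    coordinates) of the most-tilted upper and lower endplates."""
    a_upper = math.degrees(math.atan(upper_slope))
    a_lower = math.degrees(math.atan(lower_slope))
    return abs(a_upper - a_lower)

# Hypothetical slopes measured from a coronal radiograph
angle = cobb_angle(0.36, -0.45)
```

A semi-automated pipeline would supply the slopes from detected endplate landmarks; the manual-measurement errors the chapter discusses enter exactly at that slope-estimation step.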
Abstract:
Bioinformatics involves analyses of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools for dealing with this type of information and has therefore played an important role in the analysis of biological data. In this thesis, we aim to develop some new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine the FCM with the empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, preferably to remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, and silhouette values were used to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have a tighter association with their clusters. Furthermore, we found that estimation of the fuzzy parameter m, which is a difficult step, can to some extent be avoided by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., from patients and from healthy people.
We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach was applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked genes of diabetes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature, and an additional gene was further identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we proposed a novel method which combines fuzzy clustering and interaction probability to identify overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
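The fuzzy C-means step at the core of FCM-EMD is a standard algorithm and can be sketched as follows. This is a minimal NumPy implementation of plain FCM (alternating centre and membership updates), without the EMD denoising or the type-2 extensions described above; the two-blob data are synthetic:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal FCM: returns (cluster centres, membership matrix U),
    where U[i, k] is the degree to which point i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated synthetic "expression profiles": memberships
# should become near-crisp.
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
```

In the FCM-EMD setting, `X` would hold the denoised expression profiles, and the silhouette values mentioned above would then be computed from the hardened labels `U.argmax(axis=1)`.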
Abstract:
Sound Thinking provides techniques and approaches to critically listen, think, talk and write about music you hear or make. It provides tips on making music and it encourages regular and deep thinking about music activities, which helps build a musical dialog that leads to deeper understanding.
Abstract:
At the international level, the higher education sector is currently being subjected to increased calls for public accountability and to the current move by the OECD to rank universities based on the quality of their teaching and learning outcomes. At the national level, Australian universities and their teaching staff face numerous challenges, including financial restrictions, increasing student numbers and the reality of an increasingly diverse student population. The Australian higher education response to these competing policy and accreditation demands focuses on precise, explicit systems and procedures which are inflexible and conservative, and which ignore the fact that assessment is the single biggest influence on how students approach their learning. Because the quality of student learning outcomes is seriously neglected, assessment tasks often fail to engage students or to reflect the tasks students will face in the world of practice. Innovative assessment design, which includes new paradigms of student engagement and learning and pedagogically based technologies, has the capacity to provide some measure of relief from these internal and external tensions by significantly enhancing the learning experience for an increasingly time-poor population of students. That is, the assessment process has the ability to deliver program objectives and active learning through a knowledge transfer process which increases student participation and engagement. This social constructivist view highlights the importance of peer review in assisting students to participate and collaborate as equal members of a community of scholars with both their peers and academic staff members. By increasing the student’s desire to learn, peer review leads to more confident, independent and reflective learners who also become more skilled at making independent judgements of their own and others' work.
Within this context, in Case Study One of this project, a summative, peer-assessed, weekly assessment task was introduced in the first “serious” accounting subject offered as part of an undergraduate degree. The positive outcomes achieved included: student failure rates declined by 15%; tutorial participation increased fourfold; tutorial engagement increased sixfold; and there was a 100% student-based approval rating for the retention of the assessment task. However, in stark contrast to the positive student response, staff issues related to the loss of research time associated with the administration of the peer-review process threatened its survival. This paper contributes to the core conference topics of new trends and experiences in undergraduate assessment education, and to innovative online learning and teaching practices, by elaborating the Case Study Two “solution” generated for this dilemma. At the heart of the resolution is an e-Learning peer-review process conducted in conjunction with the University of Melbourne, which seeks both to create a virtual sense of belonging and to meet academic learning objectives efficiently and effectively with minimum staff involvement. In outlining the significant level of success achieved, student-based qualitative and quantitative data are highlighted, along with staff views, in a comparative analysis of the advantages and disadvantages to both students and staff of the staff-led peer-review process versus its online counterpart.
Abstract:
As civil infrastructure such as bridges ages, there is a concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for structural health monitoring (SHM) of bridges. Acoustic emission is one such technique, with the potential to predict failure. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequently analysing the recorded signals, which convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and bolted joints, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active cracks, helping to prioritise maintenance work by focusing on active rather than dormant cracks. In spite of being a promising tool, some challenges still exist in the successful application of the AE technique. One is the generation of large amounts of data during testing; hence effective data analysis and management are necessary, especially for long-term monitoring. Complications also arise because a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of the damage level by appropriate analysis of the data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods and will be discussed and applied to the analysis of laboratory experimental data in this paper.
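As a hedged illustration of the b-value analysis mentioned above: in AE monitoring the b-value is commonly estimated as the (negated) slope of the log10 cumulative hit count against signal amplitude in dB, in the spirit of the Gutenberg-Richter relation. The amplitude data below are synthetic, and the simple least-squares fit is one of several estimators in use:

```python
import numpy as np

def ae_b_value(amplitudes_db):
    """b-value of an AE hit population: slope of log10(cumulative count
    of hits with amplitude >= A) versus A_dB/20, via least squares."""
    A = np.sort(np.asarray(amplitudes_db, dtype=float))
    # Survival count: number of hits with amplitude >= each sorted A
    N = np.arange(len(A), 0, -1)
    slope, _ = np.polyfit(A / 20.0, np.log10(N), 1)
    return -slope

# Synthetic hits: exponentially distributed amplitudes above a 40 dB threshold
rng = np.random.default_rng(0)
amps = 40.0 + rng.exponential(scale=10.0, size=2000)
b = ae_b_value(amps)
```

In practice, a falling b-value over successive monitoring windows is read as a shift from many small events to fewer large ones, i.e. macro-crack growth.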
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. The continuing growth of shipping traffic, in both number and size of vessels, is likely to result in an increased number of traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgments. However, they are hampered by several shortcomings: the randomness and rarity of collision occurrences yield too few collision counts for sound statistical analysis, collision causation is insufficiently explained, and the approach to safety is reactive. A promising alternative that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT in overcoming those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.
Abstract:
Studies of international youth justice, punishment and control are in their infancy, but the issues of globalisation, transnationalisation, policy transfer and localisation are gradually being addressed. There also appears to be a growing demand in policy and pressure group circles in the UK to learn more about other jurisdictions in order to emulate ‘best practice’ and avoid the worst excesses of punitive populism. However, existing comparative work in this area rarely ventures much beyond country-specific descriptions of historical development, powers and procedures. Statistical comparisons – predominantly of custody rates – are becoming more sophisticated but remain beset with problems of partial and inaccurate data collection. The extent to which different countries do things differently, and how and why such difference is maintained, remains relatively unexcavated territory. This article suggests a conceptually comparative framework in which degrees of international, national and local convergence and divergence can begin to be revealed and assessed.
Abstract:
A major challenge in modern photonics and nano-optics is the diffraction limit of light, which does not allow field localisation into regions with dimensions smaller than half the wavelength. Localisation of light into nanoscale regions (beyond its diffraction limit) has applications ranging from the design of optical sensors and measurement techniques with resolutions as high as a few nanometres, to the effective delivery of optical energy into targeted nanoscale regions such as quantum dots, nano-electronic and nano-optical devices. This field has become a major research direction over the last decade. The use of strongly localised surface plasmons in metallic nanostructures is one of the most promising approaches to overcoming this problem. Therefore, the aim of this thesis is to investigate the linear and non-linear propagation of surface plasmons in metallic nanostructures. The thesis focuses on two main areas of plasmonic research: plasmon nanofocusing and plasmon nanoguiding. Plasmon nanofocusing – The main aim of plasmon nanofocusing research is to focus plasmon energy into nanoscale regions using metallic nanostructures and at the same time achieve strong local field enhancement. Various structures have been proposed and analysed for nanofocusing purposes, such as sharp metal wedges, tapered metal films on dielectric substrates, tapered metal rods, and dielectric V-grooves in metals. However, a number of important practical issues related to nanofocusing in these structures remain unclear. Therefore, one of the main aims of this thesis is to address two of the most important of these issues: the coupling efficiency and the heating effects of surface plasmons in metallic nanostructures. The method of analysis developed throughout this thesis is a general treatment that can be applied to a diversity of nanofocusing structures, with results shown here for the specific case of sharp metal wedges.
Based on the geometrical optics approximation, it is demonstrated that the coupling efficiency from plasmons generated with a metal grating into the nanofocused symmetric or quasi-symmetric modes may vary between ~50% and ~100%, depending on the structural parameters. Optimal conditions for nanofocusing, with a view to minimising coupling and dissipative losses, are also determined and discussed. It is shown that the temperature near the tip of a metal wedge heated by nanosecond plasmonic pulses can increase by several hundred degrees Celsius. This temperature increase is expected to lead to nonlinear effects, self-influence of the focused plasmon, and ultimately self-destruction of the metal tip. This thesis also investigates a different type of nanofocusing structure, consisting of a tapered high-index dielectric layer resting on a metal surface. It is shown that the nanofocusing mechanism in this structure is somewhat different from that in the other structures considered thus far. For example, the surface plasmon experiences significant back-reflection and mode transformation at a cut-off thickness. In addition, the reflected plasmon shows negative refraction properties that have not been observed in the other nanofocusing structures considered to date. Plasmon nanoguiding – Guiding surface plasmons using metallic nanostructures is important for the development of highly integrated optical components and circuits, which are expected to have superior performance compared to their electronic-based counterparts. A number of different plasmonic waveguides have been considered over the last decade, including the recently proposed gap and trench plasmon waveguides, which have proven difficult to fabricate.
Therefore, this thesis will propose and analyse four different modified gap and trench plasmon waveguides that are expected to be easier to fabricate, and at the same time acquire improved propagation characteristics of the guided mode. In particular, it is demonstrated that the guided modes are significantly screened by the extended metal at the bottom of the structure. This is important for the design of highly integrated optics as it provides the opportunity to place two waveguides close together without significant cross-talk. This thesis also investigates the use of plasmonic nanowires to construct a Fabry-Pérot resonator/interferometer. It is shown that the resonance effect can be achieved with the appropriate resonator length and gap width. Typical quality factors of the Fabry-Pérot cavity are determined and explained in terms of radiative and dissipative losses. The possibility of using a nanowire resonator for the design of plasmonic filters with close to ~100% transmission is also demonstrated. It is expected that the results obtained in this thesis will play a vital role in the development of high resolution near field microscopy and spectroscopy, new measurement techniques and devices for single molecule detection, highly integrated optical devices, and nanobiotechnology devices for diagnostics of living cells.
Abstract:
Mesenchymal stem/stromal cells (MSC) are rapidly becoming a leading candidate for use in tissue regeneration, with the first generation of therapies being approved for use in orthopaedic repair applications. Capturing the full potential of MSC will likely require the development of novel in vitro culture techniques and devices. Herein we describe the development of a straightforward surface modification of an existing commercial product to enable the efficient study of three-dimensional (3D) human bone marrow-derived MSC osteogenic differentiation. Hundreds of 3D microaggregates, of either 42 or 168 cells each, were cultured in osteogenic induction medium and their differentiation was compared with that occurring in traditional two-dimensional (2D) monolayer cultures. Osteogenic gene expression and matrix composition were significantly enhanced in the 3D microaggregate cultures. Additionally, BMP-2 gene expression was significantly up-regulated in 3D cultures at days 3 and 7, by approximately 25- and 30-fold, respectively. The difference in BMP-2 gene expression between 2D and 3D cultures was negligible in the more mature day 14 osteogenic cultures. These data support the notion that BMP-2 autocrine signalling is up-regulated in 3D MSC cultures, enhancing osteogenic differentiation. This study provides both mechanistic insight into MSC differentiation and a platform for the efficient generation of microtissue units for further investigation or use in tissue engineering applications.
Abstract:
Effective, statistically robust sampling and surveillance strategies form an integral component of large agricultural industries such as the grains industry. Intensive in-storage sampling is essential for pest detection, Integrated Pest Management (IPM), determining grain quality and satisfying importing nations’ biosecurity requirements, while surveillance over broad geographic regions ensures that biosecurity risks can be excluded, monitored, eradicated or contained within an area. In the grains industry, a number of qualitative and quantitative methodologies for surveillance and in-storage sampling have been considered. Primarily, research has focussed on developing statistical methodologies for in-storage sampling strategies concentrating on the detection of pest insects within a grain bulk; however, the need for effective and statistically defensible surveillance strategies has also been recognised. Interestingly, although surveillance and in-storage sampling have typically been considered independently, many techniques and concepts are common to the two fields of research. This review considers the development of statistically based in-storage sampling and surveillance strategies and identifies methods that may be useful for both. We discuss the utility of new quantitative and qualitative approaches, such as Bayesian statistics and fault trees, alongside more traditional probabilistic methods, and show how these methods may be used in both surveillance and in-storage sampling systems.
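One of the traditional probabilistic methods alluded to above is the simple binomial detection model: if each sampled unit is independently infested with prevalence p, the chance of detecting at least one infested unit in n samples is 1 - (1 - p)^n, from which a required sample size falls out directly. The prevalence and confidence values below are arbitrary examples:

```python
import math

def detection_probability(p, n):
    """Probability that at least one of n independent samples is
    infested, given per-sample infestation prevalence p."""
    return 1.0 - (1.0 - p) ** n

def samples_required(p, confidence=0.95):
    """Smallest n achieving the desired detection confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# How many samples to detect a 1% prevalence with 95% confidence?
n = samples_required(0.01, 0.95)
```

Bayesian extensions of this model replace the fixed prevalence p with a prior distribution, which is one way the review's "new quantitative approaches" refine the classical calculation.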
Abstract:
Objective: Radiation safety principles dictate that imaging procedures should minimise the radiation risks involved, without compromising diagnostic performance. This study aims to define a core set of views that maximises clinical information yield for minimum radiation risk. Angiographers would supplement these views as clinically indicated. Methods: An algorithm was developed to combine published data detailing the quality of information derived for the major coronary artery segments through the use of a common set of views in angiography with data relating to the dose–area product and scatter radiation associated with these views. Results: The optimum view set for the left coronary system comprised four views: left anterior oblique (LAO) with cranial (Cr) tilt, shallow right anterior oblique (AP-RAO) with caudal (Ca) tilt, RAO with Ca tilt and AP-RAO with Cr tilt. For the right coronary system three views were identified: LAO with Cr tilt, RAO and AP-RAO with Cr tilt. An alternative left coronary view set including a left lateral achieved minimally superior efficiency (,5%), but with an ,8% higher radiation dose to the patient and 40% higher cardiologist dose. Conclusion: This algorithm identifies a core set of angiographic views that optimises the information yield and minimises radiation risk. This basic data set would be supplemented by additional clinically determined views selected by the angiographer for each case. The decision to use additional views for diagnostic angiography and interventions would be assisted by referencing a table of relative radiation doses for the views being considered.
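The spirit of such an algorithm, ranking candidate views by information yield per unit radiation dose, can be sketched as follows. The view names echo those in the abstract, but the numeric scores and doses are hypothetical placeholders, not the published segment-quality or dose-area-product data:

```python
def select_views(views, max_views=4):
    """Greedy sketch: rank views by information yield per unit dose
    and keep the top max_views."""
    ranked = sorted(views, key=lambda v: v["info"] / v["dose"], reverse=True)
    return [v["name"] for v in ranked[:max_views]]

# Hypothetical (info yield, relative dose) pairs for left-system views
views = [
    {"name": "LAO cranial",    "info": 0.90, "dose": 1.2},
    {"name": "AP-RAO caudal",  "info": 0.80, "dose": 0.9},
    {"name": "RAO caudal",     "info": 0.70, "dose": 1.0},
    {"name": "AP-RAO cranial", "info": 0.60, "dose": 0.8},
    {"name": "left lateral",   "info": 0.65, "dose": 1.6},
]
core_set = select_views(views, max_views=4)
```

With these invented numbers the high-dose left lateral is excluded, mirroring the trade-off the study reports; the published algorithm combines segment-level information quality with measured dose data rather than a single scalar per view.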
Abstract:
Members of the Calliphoridae (blowflies) are significant for medical and veterinary management, due to the ability of some species to consume living flesh as larvae, and for forensic investigations, due to the ability of others to develop in corpses. Because of the difficulty of accurately identifying larval blowflies to species, there is a need for DNA-based diagnostics for this family; however, the widely used DNA-barcoding marker, cox1, has been shown to fail for several groups within the family. Additionally, many phylogenetic relationships within the Calliphoridae are still unresolved, particularly at deeper levels. Sequencing whole mt genomes has been demonstrated to be an effective method both for identifying the most informative diagnostic markers and for resolving phylogenetic relationships. Twenty-seven complete, or nearly complete, mt genomes were sequenced, representing 13 species, seven genera and four calliphorid subfamilies, plus a member of the related family Tachinidae. PCR and sequencing primers developed for sequencing one calliphorid species could be reused to sequence related species within the same superfamily, with success rates ranging from 61% to 100%, demonstrating the speed and efficiency with which an mt genome dataset can be assembled. Comparison of molecular divergences for each of the 13 protein-coding genes and two ribosomal RNA genes, at a range of taxonomic scales, identified novel targets for development as diagnostic markers, which were 117–200% more variable than the markers previously used in calliphorids. Phylogenetic analysis of whole mt genome sequences resulted in much stronger support for family- and subfamily-level relationships. The Calliphoridae are polyphyletic, with the Polleninae more closely related to the Tachinidae, and the Sarcophagidae are the sister group of the remaining calliphorids.
Within the Calliphoridae, there was strong support for the monophyly of the Chrysomyinae and Luciliinae and for the sister-grouping of the Luciliinae with the Calliphorinae. Relationships within Chrysomya were not well resolved. Whole mt genome data supported the previously demonstrated paraphyly of Lucilia cuprina with respect to L. sericata and allowed us to conclude that it is due to hybrid introgression prior to the last common ancestor of modern L. sericata populations, rather than to recent hybridisation, nuclear pseudogenes or incomplete lineage sorting.
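The molecular-divergence comparison across the protein-coding and rRNA genes rests on a simple per-gene quantity; the uncorrected pairwise p-distance, one common choice, can be sketched as follows (the aligned fragments are invented for illustration, not calliphorid sequence):

```python
def p_distance(seq1, seq2):
    """Uncorrected pairwise divergence: the proportion of differing
    sites between two aligned sequences, skipping gapped positions."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

# Hypothetical aligned fragments from a candidate mt marker in two taxa
d = p_distance("ATGCGATACGT", "ATGCGTTACGA")
```

Computing such distances gene by gene at several taxonomic depths, and taking ratios against the cox1 baseline, is the kind of comparison behind the "117–200% more variable" figure reported above.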