942 results for high-level synthesis


Relevance: 80.00%

Abstract:

Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore represent a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions.

Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, by using individual infrastructure assets and their related projects as the basis of the network analysis process.

It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
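
The two-stage objective weighting step lends itself to a brief illustration. The sketch below is a hypothetical example of deriving normalised weights from a paired-comparison matrix (an AHP-style geometric-mean approximation), not the thesis's actual procedure; the objective names and scale values are invented.

```python
import numpy as np

# Hypothetical paired-comparison matrix: entry [i][j] > 1 means
# objective i is preferred over objective j (AHP-style 1-9 scale).
comparisons = np.array([
    [1.0, 3.0, 5.0],    # safety compared against (safety, cost, amenity)
    [1/3, 1.0, 2.0],    # cost
    [1/5, 1/2, 1.0],    # amenity
])

# Approximate the priority weights via the geometric-mean method.
geo_means = comparisons.prod(axis=1) ** (1.0 / comparisons.shape[1])
weights = geo_means / geo_means.sum()

for name, w in zip(["safety", "cost", "amenity"], weights):
    print(f"{name}: {w:.3f}")
```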

Relevance: 80.00%

Abstract:

This research investigated the impact of Education Queensland's employment policy and practices for beginning secondary teachers appointed on temporary engagement. The context was the public secondary school sector within the state of Queensland, Australia. The study was set within a context of the changing nature of work from full-time permanent employment towards casual, fixed-term contracts, temporary and part-time employment, a trend reflected in the employment patterns for teachers within Australia. Two broad categories of literature relating to the research problem of this thesis were reviewed, namely the beginning teacher and permanency or tenure. The focus in the research literature on beginning teachers was the professional experiences of teachers within the classroom and school. There was a paucity of research that considered the working and industrial conditions of temporary employment for beginning teachers or the personal and professional implications of this form of employment. The review of the context and literature was conceptualised as a Beginning Temporary Teacher Theoretical Framework which served to inform the study. Using a qualitative case study methodology, the research techniques employed for the thesis were semi-structured interview and document analysis. A simultaneously conducted research project in which the researcher participated entitled 'Winning the Lottery? Beginning Teachers on Temporary Engagement' foregrounded this thesis in terms of refining the research question, contributing to the literature and in the selection of the participants. For this case study the perspectives of four distinct yet inter-related categories of professionals were sought. These included four beginning secondary teachers, three school administrators, a Senior Personnel Officer with Education Queensland, and a representative from the Queensland Teachers' Union. The research findings indicated that none of the beginning teachers or other professionals viewed starting a career in teaching on temporary engagement as the ideal. The negative features identified were the differential treatment received and the high level of uncertainty associated with temporary employment. Differential treatment tended to indicate 'less' entitlements, in terms of access to induction and professional development, recreational and sick leave, acceptance by and expectations of other colleagues, and avenues of redress in grievance cases. Moreover, interviews indicated a high level of uncertainty in terms of starting within the teaching profession, commencing at a new school, and a regular income. In addition, frequent changes in schools and/or cohorts of students exacerbated levels of uncertainty. The beginning teachers reported significantly decreased motivation, self-esteem and sense of belonging, and increased stress levels. There was an even more marked negative impact on those beginning teachers who had experienced a higher number of temporary engagements and schools in their first year of teaching. Conversely, strong staff support and a reasonable length of time in the one school improved the quality of the beginning teachers' experiences. The overall impact of being on temporary engagement resulted in delayed permanent position appointments, decreased commitment to particular schools and to Education Queensland as the employing authority, and for two of the beginning teachers, it produced a desire to seek alternative employment. 
The implementation of Education Queensland's policies relating to working conditions and entitlements for these temporary beginning teachers at the school level was revealed to be less than satisfactory. There was a tendency towards 'just-in-time' management of the beginning teacher on temporary engagement. The beginning teachers received 'less-than-messages' about access to and use of departmental documentation, support through induction and professional development, and their transition from temporary to permanent employment. To ensure a more systematic, supportive and inclusive process for managing the temporary beginning teacher, a conceptual framework entitled 'Continuums of Tension' was developed. The four continuums included permanent employment - temporary employment; system perspective - individual perspective; teaching as a profession - teaching as a job; and the permanent beginning teacher - university graduate. The general principles of the human resource policies of Education Queensland were based on a commitment to permanent employment, a system's perspective, viewing teaching as a profession and a homogeneous group of permanent beginning teachers. Contrasting with this, the beginning teacher on temporary engagement tended to operate from the position of temporary employment and a perspective that was individually based. Their priorities therefore included the 'occupational' aspects of being a temporary teacher striving to become permanent. Thus there existed a tension or contradiction between the general principles of human resource policies within Education Queensland and the employment experiences of beginning teachers on temporary engagement. The study proposed three actions for resolution to address the aforementioned tensions. The actions included: (a) the effective provision and targeted communication of information; (b) support, induction and professional development; and (c) a coordinated approach between Education Queensland, the Queensland Teachers' Union, the universities and the beginning teacher. These actions are further refined to include: (a) an induction kit to support the individual through the pre-employment to permanent employee phases, (b) an extrapolation of the roles and responsibilities of Education Queensland personnel charged with supporting the beginning temporary teacher, and (c) a series of recommendations to effect a coordinated approach amongst the key stakeholders. The theoretical and conceptual frameworks have provided a means of addressing the identified needs of the beginning teacher on temporary engagement. As such, this study has contributed to the research literature on teacher employment and professionalism and aims to provide the beginning temporary teacher with managed professional and occupational support.

Relevance: 80.00%

Abstract:

Keyword Spotting is the task of detecting keywords of interest within continuous speech. The applications of this technology range from call centre dialogue systems to covert speech surveillance devices. Keyword spotting is particularly well suited to data mining tasks such as real-time keyword monitoring and unrestricted vocabulary audio document indexing. However, to date, many keyword spotting approaches have suffered from poor detection rates, high false alarm rates, or slow execution times, thus reducing their commercial viability. This work investigates the application of keyword spotting to data mining tasks. The thesis makes a number of major contributions to the field of keyword spotting. The first major contribution is the development of a novel keyword verification method named Cohort Word Verification. This method combines high-level linguistic information with cohort-based verification techniques to obtain dramatic improvements in verification performance, in particular for the problematic short-duration target word class. The second major contribution is the development of a novel audio document indexing technique named Dynamic Match Lattice Spotting. This technique augments lattice-based audio indexing principles with dynamic sequence matching techniques to provide robustness to erroneous lattice realisations. The resulting algorithm obtains significant improvement in detection rate over lattice-based audio document indexing while still maintaining extremely fast search speeds. The third major contribution is the study of multiple verifier fusion for the task of keyword verification. The reported experiments demonstrate that substantial improvements in verification performance can be obtained through the fusion of multiple keyword verifiers. The research focuses on combinations of speech background model based verifiers and cohort word verifiers. The final major contribution is a comprehensive study of the effects of limited training data for keyword spotting. This study is performed with consideration as to how these effects impact the immediate development and deployment of speech technologies for non-English languages.
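
As a rough illustration of cohort-style verification (a generic log-likelihood-ratio formulation, not necessarily the Cohort Word Verification method itself), a putative keyword occurrence can be scored against a set of competing cohort models; all scores below are hypothetical:

```python
import math

def cohort_llr(target_score: float, cohort_scores: list[float]) -> float:
    """Generic cohort-style verification score: the target model's
    log-likelihood minus the log of the average cohort likelihood.
    A putative keyword hit is accepted when the ratio exceeds a
    tuned threshold."""
    avg_cohort = sum(math.exp(s) for s in cohort_scores) / len(cohort_scores)
    return target_score - math.log(avg_cohort)

# Hypothetical log-likelihoods for one putative keyword occurrence.
llr = cohort_llr(target_score=-42.0, cohort_scores=[-48.5, -47.2, -50.1])
print("accept" if llr > 3.0 else "reject", f"(LLR = {llr:.2f})")
```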

Relevance: 80.00%

Abstract:

The aim of this paper is to explore a new approach to obtaining better traffic demand (Origin-Destination, OD matrices) for dense urban networks. A review of existing methods, from static to dynamic OD matrix estimation, identified two deficiencies in current approaches: insufficient traffic assignment detail for complex urban networks, and weaknesses in the dynamic approaches. To improve the global process of traffic demand estimation, this paper focuses on a new methodology to determine dynamic OD matrices for urban areas characterized by complex route choice situations and a high level of traffic control. An iterative bi-level approach is used: the lower-level (traffic assignment) problem dynamically determines the utilisation of the network by vehicles, using heuristic data from a mesoscopic traffic simulator, while the upper-level (matrix adjustment) problem estimates the OD matrix using a Kalman filtering optimization technique. In this way, a fully dynamic and continuous estimation of the final OD matrix can be obtained. First results of the proposed approach are presented, together with remarks.
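
The upper-level matrix adjustment can be illustrated with a generic linear Kalman-filter step; the sketch below is a toy formulation with invented matrices, not the paper's actual model. In the paper's scheme, the assignment matrix H would come from the mesoscopic lower-level simulator.

```python
import numpy as np

def kalman_od_update(x, P, z, H, R, Q):
    """One generic Kalman-filter step for OD demand adjustment.
    x: current OD flow estimate (vector), P: its covariance,
    z: observed link counts, H: assignment matrix mapping OD flows
    to link counts, R: measurement noise covariance, Q: process
    noise covariance (demand modelled as a random walk)."""
    P = P + Q                                  # predict
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update with link counts
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical toy problem: 2 OD pairs observed on 3 links.
x = np.array([100.0, 80.0]); P = np.eye(2) * 25.0
H = np.array([[1.0, 0.0], [0.4, 0.6], [0.0, 1.0]])
z = np.array([110.0, 95.0, 70.0])
x, P = kalman_od_update(x, P, z, H, np.eye(3) * 16.0, np.eye(2) * 4.0)
print(np.round(x, 1))
```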

Relevance: 80.00%

Abstract:

Tracking/remote monitoring systems using GNSS are a proven method to enhance the safety and security of personnel and vehicles carrying precious or hazardous cargo. While GNSS tracking appears to mitigate some of these threats, if not adequately secured it can be a double-edged sword, allowing adversaries to obtain sensitive shipment and vehicle position data to better coordinate their attacks, and giving a false sense of security to monitoring centers. Tracking systems must be designed with the ability to monitor route compliance and thwart attacks ranging from low-level attacks such as the cutting of antenna cables to medium- and high-level attacks involving radio jamming and signal/data-level simulation, especially where the goods transported have a potentially high value to terrorists. This paper discusses the use of GNSS in critical tracking applications, addressing the mitigation of GNSS security issues, augmentation systems and communication systems in order to provide highly robust and survivable tracking systems.
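
Route-compliance monitoring of the kind mentioned above can be as simple as flagging fixes that stray from a planned corridor. The sketch below is a hypothetical, coarse waypoint-based check (a real system would measure distance to route segments and act on jamming/spoofing indicators as well):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def off_route(position, route, corridor_m=500.0):
    """Flag a GNSS fix whose distance to every route waypoint
    exceeds the allowed corridor (coarse waypoint-based check)."""
    return all(haversine_m(*position, *wp) > corridor_m for wp in route)

route = [(-27.47, 153.02), (-27.50, 153.00), (-27.56, 152.97)]  # hypothetical
print(off_route((-27.48, 153.10), route))  # True: fix lies outside the corridor
```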

Relevance: 80.00%

Abstract:

Queensland University of Technology (QUT) completed an Australian National Data Service (ANDS) funded "Seeding the Commons Project" to contribute metadata to Research Data Australia. The project employed two Research Data Librarians from October 2009 through to July 2010. Technical support for the project was provided by QUT's High Performance Computing and Research Support Specialists.

The project identified and described QUT's category 1 (ARC/NHMRC) research datasets. Metadata for the research datasets was stored in QUT's Research Data Repository (Architecta Mediaflux). Metadata suitable for inclusion in Research Data Australia was made available to the Australian Research Data Commons (ARDC) in RIF-CS format.

Several workflows and processes were developed during the project. 195 data interviews took place in connection with 424 separate research activities, resulting in the identification of 492 datasets.

The project had a high level of technical support from QUT's High Performance Computing and Research Support Specialists, who developed the Research Data Librarian interface to the data repository that enabled manual entry of interview data and dataset metadata, and the creation of relationships between repository objects. The Research Data Librarians mapped the QUT metadata repository fields to RIF-CS, and an application was created by the HPC and Research Support Specialists to generate RIF-CS files for harvest by the ARDC.

This poster focuses on the workflows and processes established for the project, including:

• Interview processes and instruments
• Data ingest from existing systems (including mapping to RIF-CS)
• Data entry and the Data Librarian interface to Mediaflux
• Verification processes
• Mapping and creation of RIF-CS for the ARDC
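
For readers unfamiliar with RIF-CS, the sketch below shows roughly what generating a minimal collection record looks like; the element names follow the RIF-CS schema, but the record is abridged and the key, group and source values are invented:

```python
import xml.etree.ElementTree as ET

RIF = "http://ands.org.au/standards/rif-cs/registryObjects"
ET.register_namespace("", RIF)

def dataset_to_rifcs(key: str, group: str, source: str, title: str) -> bytes:
    """Build an abridged RIF-CS collection record for one dataset.
    A production record would carry many more elements (identifiers,
    descriptions, related objects, and so on)."""
    root = ET.Element(f"{{{RIF}}}registryObjects")
    obj = ET.SubElement(root, f"{{{RIF}}}registryObject", group=group)
    ET.SubElement(obj, f"{{{RIF}}}key").text = key
    ET.SubElement(obj, f"{{{RIF}}}originatingSource").text = source
    coll = ET.SubElement(obj, f"{{{RIF}}}collection", type="dataset")
    name = ET.SubElement(coll, f"{{{RIF}}}name", type="primary")
    ET.SubElement(name, f"{{{RIF}}}namePart").text = title
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

print(dataset_to_rifcs("hypothetical.example/dataset/1", "Example University",
                       "https://example.edu/repository", "Example dataset").decode())
```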

Relevance: 80.00%

Abstract:

Students with learning disabilities (LD) often experience significant feelings of loneliness. There is some evidence to suggest that these feelings of loneliness may be related to social difficulties that are linked to their learning disability. Adolescents experience more loneliness than any other age group, primarily because this is a time of identity formation and self-evaluation. Therefore, adolescents with learning disabilities are highly likely to experience the negative feelings of loneliness. Many areas of educational research have highlighted the impact of negative feelings on learning. This raises the question: are adolescents with learning disabilities doubly disadvantaged in regard to their learning? That is, if their learning experience is already problematic, does loneliness exacerbate these learning difficulties? This thesis reveals the findings of a doctoral project which examined this complicated relationship between loneliness and classroom participation using a social cognitive framework. In this multiple case-study design, narratives were constructed using classroom observations and interviews conducted with 4 adolescent students (2 girls and 2 boys, from years 9-12) who were identified as likely to be experiencing learning disabilities. Discussion is provided on the method used to identify students with learning disabilities and the related controversy of using disability labels. A key aspect of the design was that it allowed the students to relate their school experiences and have their stories told. The design included an ethnographic element in its focus on the interactions of the students within the school as a culture, and elements of narrative inquiry were used, particularly in reporting the results. The narratives revealed that all participants experienced problematic social networks. Further, an alarmingly high level of bullying was discovered. Participants reported that when they were feeling rejected or were missing a valued other they had little cognitive energy for learning and did not want to be in school. Absenteeism amongst the group was high, but this was also true for the rest of the school population. A number of relationships emerged from the narratives using social cognitive theory. These relationships highlighted the impact of cognitive, behavioural and environmental factors in the school experience of lonely students with learning disabilities. This approach reflects the social model of disability that frames the research.

Relevance: 80.00%

Abstract:

With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means to carry out performance prediction and system behaviour characterisation.

When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system, and it is more appropriate to regard those applications as primarily mechanised calculations rather than simulations.

However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system.

The software development techniques available nowadays allow the evolution of such simulation models. Advanced software design not only greatly enhances the applicability of the simulators; it also encourages maintainability and modularity for easy understanding and further development, and portability across hardware platforms. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is, in particular, given to models for train movement, power supply systems and traction drives. These models have been successfully used to enable various 'what-if' issues to be resolved effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
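
As a minimal illustration of the train movement computation such simulators perform, the sketch below steps a point-mass model with invented parameters; it is far simpler than the models the paper reviews:

```python
def simulate_run(distance_m: float, v_limit: float, accel: float,
                 brake: float, dt: float = 0.5):
    """Minimal point-mass train movement simulation: accelerate to the
    speed limit, cruise, then brake so as to stop at `distance_m`.
    Returns the run time in seconds. All parameters are illustrative."""
    t, x, v = 0.0, 0.0, 0.0
    while x < distance_m:
        braking_distance = v * v / (2.0 * brake)
        if distance_m - x <= braking_distance:
            a = -brake            # must brake now to stop at the target
        elif v < v_limit:
            a = accel             # accelerate toward the speed limit
        else:
            a = 0.0               # cruise at the limit
        v = max(0.0, min(v + a * dt, v_limit))
        x += v * dt
        t += dt
    return t

print(f"run time: {simulate_run(2000.0, 22.0, 1.0, 1.0):.0f} s")
```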

Relevance: 80.00%

Abstract:

During the last decade many cities have sought to promote creativity by encouraging creative industries as drivers for economic and spatial growth. Among the creative industries, the film industry plays an important role in establishing a high level of success in the economic and spatial development of cities by fostering endogenous creativeness, attracting exogenous talent, and contributing to the formation of the places that creative cities require. The paper aims to scrutinize the role of creative industries in general, and the film industry in particular, in place making, spatial development, tourism, and the formation of creative cities, their clustering and locational decisions. This paper investigates the positive effects of the film industry on tourism, such as incubating creativity potential, increasing place recognition through the locations of movies filmed and film festivals hosted, attracting visitors, and establishing interaction among visitors, places and their cultures. The paper presents the preliminary findings of two case studies, from Beyoglu, Istanbul and Soho, London, examines the relation between creativity, tourism, culture and the film industry, and discusses their effects on place making and tourism.

Relevance: 80.00%

Abstract:

Several studies have developed metrics for software quality attributes of object-oriented designs such as reusability and functionality. However, metrics which measure the quality attribute of information security have received little attention. Moreover, existing security metrics measure either the system from a high level (i.e. the whole system’s level) or from a low level (i.e. the program code’s level). These approaches make it hard and expensive to discover and fix vulnerabilities caused by software design errors. In this work, we focus on the design of an object-oriented application and define a number of information security metrics derivable from a program’s design artifacts. These metrics allow software designers to discover and fix security vulnerabilities at an early stage, and help compare the potential security of various alternative designs. In particular, we present security metrics based on composition, coupling, extensibility, inheritance, and the design size of a given object-oriented, multi-class program from the point of view of potential information flow.
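
As a hypothetical illustration of a design-level security metric in this spirit (not one of the paper's actual definitions), one might measure what fraction of security-critical attributes a class design exposes publicly:

```python
from dataclasses import dataclass, field

@dataclass
class ClassDesign:
    """Toy design-artifact model: attribute names mapped to
    (is_classified, is_public) flags, as read off a class diagram."""
    name: str
    attributes: dict = field(default_factory=dict)

def classified_exposure(design: ClassDesign) -> float:
    """Hypothetical design-level metric: fraction of security-critical
    ('classified') attributes that the design exposes publicly.
    Lower is better; comparable across alternative designs."""
    classified = [flags for flags in design.attributes.values() if flags[0]]
    if not classified:
        return 0.0
    return sum(1 for _, public in classified if public) / len(classified)

account = ClassDesign("Account", {
    "balance":  (True, False),   # classified, kept private
    "pin":      (True, True),    # classified but publicly exposed
    "nickname": (False, True),   # not security-relevant
})
print(f"classified attribute exposure: {classified_exposure(account):.2f}")
```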

Relevance: 80.00%

Abstract:

The role of particular third sector organisations, Social Clubs, in supporting gambling through the use of electronic gaming machines (EGMs) in venues presents a difficult social issue. Social Clubs gain revenue from gambling activities, but also contribute to social well-being through the provision of services to communities. The revenues derived from gambling in specific geographic locales have been seen by government as a way to increase economic development, particularly in deprived areas. However, there are also concerns about the accessibility of low-income citizens to EGMs and the high level of gambling overall in these deprived areas. We argue that social capital can be viewed as a guard against the deleterious effects of unconstrained EGM gambling in communities. However, it is contended that social capital may also be destroyed by gambling activity if commercial business actors are able to use EGMs without community obligations to service provision. This paper examines access to gambling through EGMs and its relationship to social capital and the consequent effect on community resilience, via an Australian case study. The results highlight the potential two-way relationship between gambling and volunteering, such that volunteering (and social capital more generally) may help protect against problems of gambling, but also that volunteering as an activity may be damaged by increased gambling activity. This suggests that, regardless of the direction of causation, it is necessary to build up social capital via volunteering and other social capital activities in areas where EGMs are concentrated. The study concludes that Social Clubs using EGMs to derive funds are uniquely positioned within the community to develop programs that foster social capital creation and build community resilience in deprived areas.

Relevance: 80.00%

Abstract:

Given the current migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management, this project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements which test bed implementations should meet. A high-level design for smart grid test beds is proposed and an initial implementation performed, based on the proposed design, using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly for developing, implementing and testing network security capabilities such as intrusion detection and network forensics.

The project proposes and develops an architecture for the emulation of some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter. Further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components was limited, it does provide a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols.

The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed according to the defined criteria. The results of the experiments are collated and presented, and conclusions drawn from the results to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
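
As a rough illustration of the kind of Modbus exchange described (not the project's code), the sketch below builds and parses a raw Modbus/TCP 'Read Holding Registers' request using only the Python standard library; the host, port and register layout are invented:

```python
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes (TCP may deliver frames in pieces)."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        data += chunk
    return data

def read_holding_registers(host, port, unit, start, count, tid=1):
    """One Modbus/TCP 'Read Holding Registers' (function 0x03) exchange.
    Frame = MBAP header (transaction id, protocol id 0, remaining
    length, unit id) followed by the PDU (function, start, count)."""
    pdu = struct.pack(">BHH", 0x03, start, count)
    mbap = struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        _, _, length, _ = struct.unpack(">HHHB", _recv_exact(sock, 7))
        body = _recv_exact(sock, length - 1)
    if body[0] & 0x80:                        # exception response
        raise IOError(f"Modbus exception code {body[1]}")
    byte_count = body[1]
    return list(struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count]))

# Hypothetical emulated smart meter listening on localhost:5020.
# print(read_holding_registers("127.0.0.1", 5020, unit=1, start=0, count=4))
```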

Relevance: 80.00%

Abstract:

Prostate cancer is the second most common cause of cancer-related deaths in Western males. Current diagnostic, prognostic and treatment approaches are not ideal, and advanced metastatic prostate cancer is incurable. There is an urgent need for improved adjunctive therapies and markers for this disease. GPCRs are likely to play a significant role in the initiation and progression of prostate cancer. Over the last decade, it has emerged that G protein coupled receptors (GPCRs) are likely to function as homodimers and heterodimers. Heterodimerisation between GPCRs can result in the formation of novel pharmacological receptors with altered functional outcomes, and a number of GPCR heterodimers have been implicated in the pathogenesis of human disease. Importantly, novel GPCR heterodimers represent potential new targets for the development of more specific therapeutic drugs.

Ghrelin is a 28 amino acid peptide hormone which has a unique n-octanoic acid post-translational modification. Ghrelin has a number of important physiological roles, including roles in appetite regulation and the stimulation of growth hormone release. The ghrelin receptor is the growth hormone secretagogue receptor type 1a, GHS-R1a, a seven transmembrane domain GPCR, and GHS-R1b is a C-terminally truncated isoform of the ghrelin receptor, consisting of five transmembrane domains. Growing evidence suggests that ghrelin and the ghrelin receptor isoforms, GHS-R1a and GHS-R1b, may have a role in the progression of a number of cancers, including prostate cancer. Previous studies by our research group have shown that the truncated ghrelin receptor isoform, GHS-R1b, is not expressed in normal prostate; however, it is expressed in prostate cancer. The altered expression of this truncated isoform may reflect a difference between a normal and cancerous state.

A number of mutant GPCRs have been shown to regulate the function of their corresponding wild-type receptors. Therefore, we investigated the potential role of interactions between GHS-R1a and GHS-R1b, which are co-expressed in prostate cancer, and aimed to investigate the function of this potentially new pharmacological receptor.

In 2005, obestatin, a 23 amino acid C-terminally amidated peptide derived from preproghrelin, was identified and described as opposing the stimulating effects of ghrelin on appetite and food intake. GPR39, an orphan GPCR which is closely related to the ghrelin receptor, was identified as the endogenous receptor for obestatin. Recently, however, the ability of obestatin to oppose the effects of ghrelin on appetite and food intake has been questioned, and furthermore, it appears that GPR39 may in fact not be the obestatin receptor. The role of GPR39 in the prostate is of interest, however, as it is a zinc receptor. Zinc has a unique role in the biology of the prostate, where it is normally accumulated at high levels, and zinc accumulation is altered in the development of prostate malignancy. Ghrelin and zinc have important roles in prostate cancer, and dimerisation of their receptors may have novel roles in malignant prostate cells. The aim of the current study, therefore, was to demonstrate the formation of GHS-R1a/GHS-R1b and GHS-R1a/GPR39 heterodimers and to investigate potential functions of these heterodimers in prostate cancer cell lines. To demonstrate dimerisation we first employed a classical co-immunoprecipitation technique.
Using cells co-overexpressing FLAG- and Myc-tagged GHS-R1a, GHS-R1b and GPR39, we were able to co-immunoprecipitate these receptors. Significantly, however, the receptors formed high molecular weight aggregates. A number of questions have been raised over the propensity of GPCRs to aggregate during co-immunoprecipitation as a result of their hydrophobic nature, and this may be misinterpreted as receptor dimerisation. As we observed significant receptor aggregation in this study, we used additional methods to confirm the specificity of these putative GPCR interactions. We used two different resonance energy transfer (RET) methods, bioluminescence resonance energy transfer (BRET) and fluorescence resonance energy transfer (FRET), to investigate interactions between the ghrelin receptor isoforms and GPR39. RET is the transfer of energy from a donor fluorophore to an acceptor fluorophore when they are in close proximity, and RET methods are, therefore, applicable to the observation of specific protein-protein interactions. Extensive studies using the second generation bioluminescence resonance energy transfer (BRET2) technology were performed; however, a number of technical limitations were observed. The substrate used during BRET2 studies, coelenterazine 400a, has a low quantum yield and rapid signal decay. This study highlighted the requirement for the expression of donor and acceptor tagged receptors at high levels so that a BRET ratio can be determined. After performing a number of BRET2 experimental controls, our BRET2 data did not fit the predicted results for a specific interaction between these receptors. The interactions that we observed may in fact represent 'bystander BRET' resulting from high levels of expression forcing the donor and acceptor into close proximity. Our FRET studies employed two different FRET techniques, acceptor photobleaching FRET and sensitised emission FRET measured by flow cytometry. We were unable to observe any significant FRET, or FRET values that were likely to result from specific receptor dimerisation between GHS-R1a, GHS-R1b and GPR39. While we were unable to conclusively demonstrate direct dimerisation between GHS-R1a, GHS-R1b and GPR39 using several methods, our findings do not exclude the possibility that these receptors interact. We therefore aimed to investigate whether co-expression of combinations of these receptors had functional effects in prostate cancer cells. It has previously been demonstrated that ghrelin stimulates cell proliferation in prostate cancer cell lines through ERK1/2 activation, and that GPR39 can stimulate ERK1/2 signalling in response to zinc treatments. Additionally, both GHS-R1a and GPR39 display a high level of constitutive signalling, and these constitutively active receptors can attenuate apoptosis when overexpressed individually in some cell types. We therefore investigated ERK1/2 and AKT signalling and cell survival in prostate cancer cells, and the potential modulation of these functions by dimerisation between GHS-R1a, GHS-R1b and GPR39. Expression of these receptors in the PC-3 prostate cancer cell line, either alone or in combination, did not alter constitutive ERK1/2 or AKT signalling, basal apoptosis or tunicamycin-stimulated apoptosis, compared to controls. In summary, the potential interactions between the ghrelin receptor isoforms, GHS-R1a and GHS-R1b, and the related zinc receptor, GPR39, and the potential for functional outcomes in prostate cancer were investigated using a number of independent methods.
We did not definitively demonstrate the formation of these dimers using a number of state of the art methods to directly demonstrate receptor-receptor interactions. We investigated a number of potential functions of GPR39 and GHS-R1a in the prostate and did not observe altered function in response to co-expression of these receptors. The technical questions raised by this study highlight the requirement for the application of extensive controls when using current methods for the demonstration of GPCR dimerisation. Similar findings in this field reflect the current controversy surrounding the investigation of GPCR dimerisation. Although GHS-R1a/GHS-R1b or GHS-R1a/GPR39 heterodimerisation was not clearly demonstrated, this study provides a basis for future investigations of these receptors in prostate cancer. Additionally, the results presented in this study and growing evidence in the literature highlight the requirement for an extensive understanding of the experimental method and the performance of a range of controls to avoid the spurious interpretation of data gained from artificial expression systems. The future development of more robust techniques for investigating GPCR dimerisation is clearly required and will enable us to elucidate whether GHS-R1a, GHS-R1b and GPR39 form physiologically relevant dimers.
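
For context on the BRET2 ratio discussed above, a commonly used corrected form subtracts the acceptor/donor emission ratio of a donor-only control; the sketch below is a generic illustration with invented plate-reader counts, not the thesis's exact calculation:

```python
def corrected_bret_ratio(acceptor_em: float, donor_em: float,
                         ctrl_acceptor_em: float, ctrl_donor_em: float) -> float:
    """Corrected BRET ratio: acceptor/donor emission for cells
    co-expressing both fusion proteins, minus the same ratio for a
    donor-only control (removes bleed-through and background)."""
    return acceptor_em / donor_em - ctrl_acceptor_em / ctrl_donor_em

# Hypothetical plate-reader counts (acceptor ~515 nm; Rluc donor with
# coelenterazine 400a ~410 nm). A specific interaction shows a ratio
# above the bystander background across a donor/acceptor titration.
print(f"{corrected_bret_ratio(5200, 41000, 3100, 39000):.3f}")
```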

Relevance: 80.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting.

The resistivity of stationary blood is isotropic as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with the minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography.

Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
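
For context, the parallel-conductor calculation referred to above is commonly written in the Kubicek form below (a standard formulation assuming constant blood resistivity, the very assumption this thesis examines; the thesis's exact equation is not given in the abstract):

```latex
% Kubicek-style stroke volume estimate from thoracic impedance:
%   SV    stroke volume (ml)
%   rho   resistivity of blood (ohm.cm), assumed constant
%   L     distance between sensing electrodes (cm)
%   Z_0   baseline thoracic impedance (ohm)
%   LVET  left-ventricular ejection time (s)
SV = \rho \left( \frac{L}{Z_0} \right)^{2}
     \left( \frac{dZ}{dt} \right)_{\max} \cdot \mathrm{LVET}
```
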
The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98, experimental, and r² = 0.94, theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius; this has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood.

The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for the screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
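
A decay constant such as the τ reported above is typically extracted by fitting an exponential to the deceleration-phase impedance. The sketch below is a generic illustration with synthetic data, not the thesis's analysis pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, z_inf, dz, tau):
    """Single-exponential relaxation of impedance toward its
    steady value after peak flow: Z(t) = z_inf + dz * exp(-t / tau)."""
    return z_inf + dz * np.exp(-t / tau)

# Hypothetical deceleration-phase samples (time in s, impedance in ohm).
t = np.linspace(0.0, 0.4, 80)
z = decay_model(t, 25.0, -0.6, 0.12) + np.random.normal(0.0, 0.01, t.size)

(z_inf, dz, tau), _ = curve_fit(decay_model, t, z, p0=(25.0, -0.5, 0.1))
print(f"fitted time decay constant: tau = {tau:.3f} s")
```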

Relevance: 80.00%

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows the users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? how can the semantic unit be linked to high-level image knowledge? how can the contextual information be stored and utilized for image annotation?

In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. The Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below.

1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework.

2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts.

3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts.

4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies.
Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene types more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
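
As a toy stand-in for the probabilistic scene inference described above (invented concepts and probabilities, and a naive-Bayes model rather than the thesis's graph model):

```python
import math

# Hypothetical scene configuration: P(object present | scene), as might
# be learned from fully annotated images, plus priors over scene types.
p_obj_given_scene = {
    "beach":  {"sky": 0.9, "sand": 0.8, "water": 0.7, "building": 0.1},
    "street": {"sky": 0.6, "sand": 0.05, "water": 0.1, "building": 0.9},
}
scene_prior = {"beach": 0.5, "street": 0.5}

def infer_scene(objects: set) -> str:
    """Naive-Bayes scene inference from detected mid-level concepts:
    argmax over scenes of log P(scene) + summed log-likelihoods."""
    def score(scene):
        probs = p_obj_given_scene[scene]
        s = math.log(scene_prior[scene])
        for obj, p in probs.items():
            s += math.log(p if obj in objects else 1.0 - p)
        return s
    return max(p_obj_given_scene, key=score)

print(infer_scene({"sky", "sand", "water"}))  # -> beach
```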