99 results for "lack of catalytic mechanism"
Abstract:
Establishing a nationwide Electronic Health Record system has become a primary objective for many countries around the world, including Australia, in order to improve the quality of healthcare while at the same time decreasing its cost. Doing so will require federating the large number of patient data repositories currently in use throughout the country. However, implementation of EHR systems is being hindered by several obstacles, among them concerns about data privacy and trustworthiness. Current IT solutions fail to satisfy patients’ privacy desires and do not provide a trustworthiness measure for medical data. This thesis starts with the observation that existing EHR system proposals suffer from five serious shortcomings that affect patients’ privacy and safety, and medical practitioners’ trust in EHR data: accuracy and privacy concerns over linking patients’ existing medical records; the inability of patients to have control over who accesses their private data; the inability to protect against inferences about patients’ sensitive data; the lack of a mechanism for evaluating the trustworthiness of medical data; and the failure of current healthcare workflow processes to capture and enforce patients’ privacy desires. Following an action research method, this thesis addresses the above shortcomings by firstly proposing an architecture for linking electronic medical records in an accurate and private way, where patients are given control over what information can be revealed about them. This is accomplished by extending the structure and protocols introduced in federated identity management to link a patient’s EHR to his existing medical records by using pseudonym identifiers. Secondly, a privacy-aware access control model is developed to satisfy patients’ privacy requirements.
The model is developed by integrating three standard access control models in a way that gives patients access control over their private data and ensures that legitimate uses of EHRs are not hindered. Thirdly, a probabilistic approach for detecting and restricting inference channels resulting from publicly-available medical data is developed to guard against indirect accesses to a patient’s private data. This approach is based upon a Bayesian network and the causal probabilistic relations that exist between medical data fields. The resulting definitions and algorithms show how an inference channel can be detected and restricted to satisfy patients’ expressed privacy goals. Fourthly, a medical data trustworthiness assessment model is developed to evaluate the quality of medical data by assessing the trustworthiness of its sources (e.g. a healthcare provider or medical practitioner). In this model, Beta and Dirichlet reputation systems are used to collect reputation scores about medical data sources and these are used to compute the trustworthiness of medical data via subjective logic. Finally, an extension is made to healthcare workflow management processes to capture and enforce patients’ privacy policies. This is accomplished by developing a conceptual model that introduces new workflow notions to make the workflow management system aware of a patient’s privacy requirements. These extensions are then implemented in the YAWL workflow management system.
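The mapping from Beta reputation scores to a trustworthiness value via subjective logic can be sketched as follows. This is a minimal illustration of the standard Beta/subjective-logic correspondence rather than the thesis's actual model; the function name and the default base rate are assumptions:

```python
def beta_opinion(r, s, base_rate=0.5):
    """Map r positive and s negative ratings about a medical data
    source to a subjective-logic opinion (belief, disbelief,
    uncertainty) and its probability expectation."""
    denom = r + s + 2.0            # +2 comes from the uniform Beta(1, 1) prior
    belief = r / denom
    disbelief = s / denom
    uncertainty = 2.0 / denom      # shrinks as evidence accumulates
    expectation = belief + base_rate * uncertainty
    return belief, disbelief, uncertainty, expectation
```

For example, eight positive and two negative reports about a provider yield an expectation of 0.75, matching the mean of the Beta(9, 3) distribution, while the uncertainty component makes explicit how little or much evidence backs that score.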
Abstract:
Inorganic nano-graphene hybrid materials that are strongly coupled via chemical bonding usually present superior electrochemical performance. However, how the chemical bond forms and the synergistic catalytic mechanism remain fundamental questions. In this study, the chemical bonding of the MoS2 nanolayer supported on vacancy-mediated graphene and the hydrogen evolution reaction of this nanocatalyst system were investigated. An obvious reduction of the metallic state of the MoS2 nanolayer is noticed as electrons are transferred to form a strong contact with the reduced graphene support. The missing metallic state associated with the unsaturated atoms at the peripheral sites in turn modifies the hydrogen evolution activity. The easiest evolution path is from the Mo edge sites, with the presence of the graphene resulting in a decrease in the energy barrier from 0.17 to 0.11 eV. Evolution of H2 from the S edge becomes more difficult due to an increase in the energy barrier from 0.43 to 0.84 eV. The clarification of the chemical bonding and catalytic mechanisms for hydrogen evolution using this strongly coupled MoS2/graphene nanocatalyst provides a valuable source of reference and motivation for further investigation of improved hydrogen evolution using chemically active nanocoupled systems.
Abstract:
Principal Topic: Entrepreneurship is key to employment, innovation and growth (Acs & Mueller, 2008), and as such has been the subject of tremendous research in both the economic and management literatures since Solow (1957), Schumpeter (1934, 1943), and Penrose (1959). The presence of entrepreneurs in the economy is a key factor in the success or failure of countries to grow (Audretsch and Thurik, 2001; Dejardin, 2001). Further studies focus on the conditions for the existence of entrepreneurship; the influential factors invoked are historical, cultural, social, institutional, or purely economic (North, 1997; Thurik 1996 & 1999). Of particular interest, beyond the reasons behind the existence of entrepreneurship, are the factors behind entrepreneurial survival and good "performance". Using cross-country firm data analysis, La Porta & Schleifer (2008) confirm that informal micro-businesses provide on average half of all economic activity in developing countries. They find that these are utterly unproductive compared to formal firms, and conclude that the informal sector serves as a social security net "keep[ing] millions of people alive, but disappearing over time" (abstract). Robison (1986) and Hill (1996, 1997) posit that the Indonesian government under Suharto always pointed to the lack of indigenous entrepreneurship, thereby motivating the nationalisation of all industries. Furthermore, the same literature also points to the fact that small businesses were mostly left out of development programmes because they were assumed to be less productive and to have less productivity potential than larger ones. Vial (2008) challenges this view and shows that small firms represented about 70% of firms and 12% of total output, but contributed 25% of total factor productivity growth on average over the period 1975-94 in the industrial sector (Table 10, p.316). ---------- Methodology/Key Propositions: A review of the empirical literature points to several under-researched questions.
Firstly, we assess whether there is evidence of small family-business entrepreneurship in Indonesia. Secondly, we examine and present the characteristics of these enterprises, along with the size of the sector and its dynamics. Thirdly, we study whether these enterprises underperform compared to the larger-scale industrial sector, as is suggested in the literature. We reconsider performance measurements for micro family-owned businesses. We suggest that, besides productivity measures, performance could be appraised both by the survival probability of the firm and by the amount of household asset formation. We compare micro family-owned and larger industrial firms' survival probabilities after the 1997 crisis and their capital productivity, then compare the household assets of families involved in business with those of families who are not. Finally, we examine human and social capital as moderators of enterprises' performance. In particular, we assess whether a higher level of education and community participation have an effect on the likelihood of running a family business, and whether they have an impact on households' asset levels. We use the IFLS database compiled and published by the RAND Corporation. The data form a rich panel dataset of communities, households, and individuals in four waves: 1993, 1997, 2000, 2007. We focus on the 1997 and 2000 waves in order to investigate entrepreneurship behaviours in turbulent times, i.e. the 1997 Asian crisis. We use aggregate individual data, and focus on household data in order to study micro family-owned businesses. IFLS data cover roughly 7,600 households in 1997 and over 10,000 households in 2000, with about 95% of 1997 households re-interviewed in 2000. Households were interviewed in 13 of the 27 provinces as defined before 2001. Those 13 provinces were targeted because they account for 83% of the population. A full description of the data is provided in Frankenberg and Thomas (2000), and Strauss et al. (2004).
We deflate all monetary values in Rupiah with the World Development Indicators Consumer Price Index, base 100 in 2000. ---------- Results and Implications: We find that in Indonesia entrepreneurship is widespread, and two thirds of households hold one or several family businesses. In rural areas in 2000, 75% of households ran one or several businesses. The proportion of households holding both a farm and a non-farm business is higher in rural areas, underlining the reliance of rural households on self-employment, especially after the crisis. Those businesses come in various sizes, from very small to larger ones. The median business production value represents less than the annual national minimum wage. Figures show that at least 75% of farm businesses produce less than the annual minimum wage, while a greater share of non-farm businesses reach the minimum wage. However, this is only one part of the story, as production is not the only "output" or effect of the business. We show that the survival rate of those businesses ranges between 70 and 82% after the 1997 crisis, which contrasts with the 67% survival rate for the formal industrial sector (Ter Wengel & Rodriguez, 2006). Micro family-owned businesses may be relatively small in terms of production, but they also provide stability in times of crisis. For those businesses that report business asset figures, we show that capital productivity is fairly high, with rates that are ten times higher for non-farm businesses. Results show that households running a business have larger family assets, and that households are better off in urban areas. We run a panel logit model in order to test the effect of human and social capital on the existence of businesses among households. We find that non-farm businesses are more likely to appear in households with higher human and social capital situated in urban areas.
Farm businesses are more likely to appear in lower human capital and rural contexts, while still being supported by community participation. The estimation of our panel data model confirms that households are more likely to have higher family assets if situated in urban areas; the higher the education level, the larger the assets; and running a business increases the likelihood of having larger assets. This is especially true for non-farm businesses, which have a clearly larger and more significant effect on assets than farm businesses. Finally, social capital in the form of community participation also has a positive effect on assets. Those results confirm the existence of a strong entrepreneurship culture among Indonesian households. Investigating survival rates also shows that those businesses are quite stable, even in the face of a violent crisis such as that of 1997, and as a result can provide a safety net. Finally, considering household assets (the returns of the business to the household) rather than profit or productivity (the returns of the business to itself) shows that households running a business are better off. While we demonstrate that human and social capital are key to business existence, survival and performance, those results open avenues for further research regarding the factors that could hamper the growth of those businesses in terms of output and employment.
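The deflation step described in the methodology (converting nominal Rupiah values to real terms with a CPI indexed to 100 in 2000) is simple arithmetic; a minimal sketch, with an assumed function name:

```python
def deflate(nominal_rupiah, cpi, base=100.0):
    """Convert a nominal value to real terms, given the CPI for its
    year on an index where the base year (here 2000) equals 100."""
    return nominal_rupiah * base / cpi
```

For instance, a nominal value of 150 observed in a year with CPI 120 corresponds to 125 in constant year-2000 Rupiah.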
Abstract:
Contractual relationships in the construction industry have become increasingly strained in recent years, resulting in the use of the judicial system for the settlement of contractual disagreements. Why is this so? Anecdotal evidence suggests that the lack of capacity amongst owners and contractors to carry out a contract using a good-practice approach during the construction of a project contributes to the occurrence of conflicts, losses, deficient contractual relationships and poor performance of the construction work. Recognizing that current forms of contract in use today perpetuate a legacy of construction problems, we are conducting explanatory research to examine whether the widely publicized benefits of the New Engineering Contract (NEC) could be realized in the Australian construction industry. This paper outlines a research agenda that will help shed light on how contract forms can be used as a mechanism to ensure construction projects are delivered successfully whilst also meeting the goals of multiple stakeholders. Understanding the Critical Success Factors (CSFs), commonly used construction contracts and the NEC system can help us address some of these issues. However, there are gaps in the validation of the benefits of the NEC and its link with project success. We identify some of these gaps and propose a methodology by which to gain insights into this phenomenon. Keywords: Project Success, Construction Contracting, New Engineering Contract (NEC)
Abstract:
This thesis is the result of an investigation of a Queensland example of curriculum reform based on outcomes, a type of reform common to many parts of the world during the last decade. The purpose of the investigation was to determine the impact of outcomes on teacher perspectives of professional practice. The focus was chosen to permit investigation not only of changes in behaviour resulting from the reform but also of teachers' attitudes and beliefs developed during implementation. The study is based on qualitative methodology, chosen because of its suitability for the investigation of attitudes and perspectives. The study exploits the researcher's opportunities for prolonged, direct contact with groups of teachers through the selection of an over-arching ethnography approach, an approach designed to capture the holistic nature of the reform and to contextualise the data within a broad perspective. The selection of grounded theory as a basis for data analysis reflects the open nature of this inquiry and demonstrates the study's constructivist assumptions about the production of knowledge. The study also constitutes a multi-site case study by virtue of the choice of three individual school sites as objects to be studied and to form the basis of the report. Three primary school sites administered by Brisbane Catholic Education were chosen as the focus of data collection. Data were collected from three school sites as teachers engaged in the first year of implementation of Student Performance Standards, the Queensland version of English outcomes based on the current English syllabus. Teachers' experience of outcomes-driven curriculum reform was studied by means of group interviews conducted at individual school sites over a period of fourteen months, researcher observations and the collection of artefacts such as report cards. Analysis of data followed grounded theory guidelines based on a system of coding. 
Though classification systems were not generated prior to data analysis, the labelling of categories called on standard, non-idiosyncratic terminology and analytic frames and concepts from existing literature wherever practicable in order to permit possible comparisons with other related research. Data from school sites were examined individually and then combined to determine teacher understandings of the reform, changes that have been made to practice and teacher responses to these changes in terms of their perspectives of professionalism. Teachers in the study understood the reform as primarily an accountability mechanism. Though teachers demonstrated some acceptance of the intentions of the reform, their responses to its conceptualisation, supporting documentation and implications for changing work practices were generally characterised by reduced confidence, anger and frustration. Though the impact of outcomes-based curriculum reform must be interpreted through the inter-relationships of a broad range of elements which comprise teachers' work and their attitudes towards their work, it is proposed that the substantive findings of the study can be understood in terms of four broad themes. First, when the conceptual design of outcomes did not serve teachers' accountability requirements and outcomes were perceived to be expressed in unfamiliar technical language, most teachers in the study lost faith in the value of the reform and lost confidence in their own abilities to understand or implement it. Second, this reduction of confidence was intensified when the scope of outcomes was outside the scope of the teachers' existing curriculum and assessment planning and teachers were confronted with the necessity to include aspects of syllabuses or school programs which they had previously omitted because of a lack of understanding or appreciation. The corollary was that outcomes promoted greater syllabus fidelity when frameworks were closely aligned. 
Third, other benefits the teachers associated with outcomes included the development of whole-school curriculum resources and greater opportunity for teacher collaboration, particularly among schools. The teachers, however, considered a wide range of factors when determining the overall impact of the reform, and perceived a number of them in terms of the costs of implementation. These included the emergence of ethical dilemmas concerning relationships with students, colleagues and parents; reduced individual autonomy, particularly with regard to the selection of valued curriculum content; and intensification of workload with the capacity to erode the relationships with students which teachers strongly associated with the rewards of their profession. Finally, in banding together at the school level to resist aspects of implementation, some teachers showed growing awareness of a collective authority capable of being exercised in response to top-down reform. These findings imply that Student Performance Standards require review and additional implementation resourcing to support teachers through times of reduced confidence in their own abilities. Outcomes proved an effective means of high-fidelity syllabus implementation and, provided they are expressed in an accessible way and aligned with syllabus frameworks and terminology, should be considered for inclusion in future syllabuses across a range of learning areas. The study also identifies a range of unintended consequences of outcomes-based curriculum and acknowledges the complexity of relationships among all the aspects of teachers' work. It also notes that the impact of reform on teacher perspectives of professional practice may alter teacher-teacher and school-system relationships in ways that have the potential to influence the effectiveness of future curriculum reform.
Abstract:
Thomas Young (1773-1829) carried out major pioneering work in many different subjects. In 1800 he gave the Bakerian Lecture of the Royal Society on the topic of the “mechanism of the eye”: this was published in the following year (Young, 1801). Young used his own design of optometer to measure refraction and accommodation, and discovered his own astigmatism. He considered the different possible origins of accommodation and confirmed that it was due to change in shape of the lens rather than to change in shape of the cornea or an increase in axial length. However, the paper also dealt with many other aspects of visual and ophthalmic optics, such as biometric parameters, peripheral refraction, longitudinal chromatic aberration, depth-of-focus and instrument myopia. These aspects of the paper have previously received little attention. We now give detailed consideration to these and other less-familiar features of Young’s work and conclude that his studies remain relevant to many of the topics which currently engage visual scientists.
Abstract:
The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system which can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance. As long as the target image remains within the depth of focus in the image space, the eye will still perceive the image as being clear. A large DOF is especially important for presbyopic patients with partial or complete loss of accommodation (presbyopia), since this helps them to obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z_4^0 and Z_6^0). The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system have been widely adopted, through which the retinal image size can be kept constant. In such measurements, the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, where the eye's accommodative response is continuously monitored. Generally, the DOF measured by subjective methods is slightly larger than that measured objectively. In recent years, methods have also been developed to estimate DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality calculated from the IQMs to a certain level of the possible maximum value.
In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four different clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z_4^0) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOL) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly deteriorating the image quality at the same time. The predicted DOF was also affected by the IQMs and the threshold level adopted. We then investigated the influence of the chosen threshold level of the IQMs on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and we used the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, as the IQM to estimate the DOF. The results allowed comparison of the subjective DOF with the estimated DOF and determination of a threshold level for DOF estimation. A significant correlation was found between the subject's estimated threshold level for the estimated DOF and HOA RMS (Pearson's r = 0.88, p < 0.001). This linear correlation can be used to estimate the threshold level for each individual subject, subsequently leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study was conducted to investigate the DOF of keratoconic subjects.
Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes. This population of subjects provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, and the results were compared to those from a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes. However, there was not a strong correlation between the large amount of HOA RMS and DOF in keratoconic eyes. Among all HOA terms, spherical aberration was found to be the only HOA that helped to significantly increase the DOF in the studied keratoconic subjects. Through the first three studies, a comprehensive understanding of DOF and its association with the HOAs in the human eye was achieved. An adaptive optics (AO) system was then designed and constructed. The system was capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under the influence of different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z_4^0 and Z_6^0. Systematic introduction of targeted amounts of both Z_4^0 and Z_6^0 was found to significantly improve the DOF of healthy subjects. The use of wavefront combinations of Z_4^0 and Z_6^0 with opposite signs can further expand the DOF, compared to using Z_4^0 or Z_6^0 alone. The optimal wavefront combinations to expand the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF.
In the experiment, the optimal combinations of Z_4^0 and Z_6^0 were found to provide a better balance of DOF expansion and relatively smaller decreases in VA. Therefore, the optimal combinations of Z_4^0 and Z_6^0 provide a more efficient method to expand the DOF than Z_4^0 or Z_6^0 alone. This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations. More aberrated eyes generally have a larger DOF. The association between DOF and the natural HOAs in normal subjects can be quantified, which allows the estimation of DOF directly from the ocular wavefront aberration. Among the Zernike HOA terms, spherical aberrations (Z_4^0 and Z_6^0) were found to improve the DOF. Certain combinations of Z_4^0 and Z_6^0 provide a more effective method to expand the DOF than using Z_4^0 or Z_6^0 alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
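The IQM-based through-focus idea above (DOF as the range of defocus over which a normalized image-quality metric stays above a threshold fraction of its peak) can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual algorithm; the 50% default threshold and the function name are assumptions:

```python
import numpy as np

def estimate_dof(defocus, metric, threshold=0.5):
    """Return the width of the contiguous defocus interval around the
    metric's peak where the normalized metric stays >= threshold."""
    m = np.asarray(metric, dtype=float)
    m = m / m.max()                       # normalize to the peak value
    peak = int(np.argmax(m))
    lo = peak
    while lo > 0 and m[lo - 1] >= threshold:
        lo -= 1                           # walk left while above threshold
    hi = peak
    while hi < len(m) - 1 and m[hi + 1] >= threshold:
        hi += 1                           # walk right while above threshold
    return defocus[hi] - defocus[lo]
```

For a Gaussian-shaped through-focus curve, the 50% criterion recovers the full width at half maximum, and raising the threshold shrinks the estimated DOF, mirroring the threshold sensitivity discussed above.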
Abstract:
Stigmergy is a biological term used when discussing insect or swarm behaviour, and describes a model in which communication occurs indirectly through the environment rather than directly between agents. This phenomenon is demonstrated in the behaviour of ants and their food-gathering process when following pheromone trails, or similarly termites and their mound-building process. What is interesting about this mechanism is that highly organized societies are achieved without any apparent management structure. Stigmergic behaviour is implicit in the Web, where the volume of users provides self-organization and self-contextualization of content in sites which facilitate collaboration. However, the majority of content is generated by a minority of the Web participants. A significant contribution from this research would be to create a model of Web stigmergy, identifying virtual pheromones and their importance in the collaborative process. This paper explores how exploiting stigmergy has the potential of providing a valuable mechanism for identifying and analyzing online user behaviour, recording actionable knowledge otherwise lost in the existing web interaction dynamics. Ultimately this might assist in building better collaborative Web sites.
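The pheromone mechanism described above (trails evaporate unless reinforced by use) can be sketched in a few lines. The update rule and evaporation rate here are generic textbook assumptions, not a model from this paper:

```python
def update_trails(trails, deposits, evaporation=0.1):
    """One stigmergic step: every trail's pheromone level decays,
    and agents deposit pheromone on the paths they actually used."""
    return {path: (1.0 - evaporation) * level + deposits.get(path, 0.0)
            for path, level in trails.items()}
```

Repeating the update with deposits concentrated on one path makes that path dominate while unused paths fade, which is how organized collective behaviour emerges without any central management structure.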
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance, and this is the main motivation for the work described in this thesis. In this study the tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes in the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, and in this area a metric of the TFSQ is calculated. Initially, two metrics based on Gabor-filter and Gaussian-gradient techniques were used to quantify the consistency of the pattern’s local orientation as a metric of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, with considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
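The model-order point can be illustrated numerically (a toy example on a synthetic, TFSQ-like signal; the signal shape, noise level and candidate orders are all assumptions, not the thesis's data):

```python
import numpy as np

# Synthetic stand-in for a TFSQ time series: smooth damped oscillation + noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)                   # time in the inter-blink interval
truth = 0.5 + 0.3 * np.exp(-t / 4.0) * np.sin(t)  # underlying signal
true_deriv = np.gradient(truth, t)
y = truth + rng.normal(0.0, 0.01, t.size)

def derivative_rmse(order):
    """RMS error of the fitted polynomial's derivative vs the true derivative."""
    coeffs = np.polyfit(t, y, order)
    fitted_deriv = np.polyval(np.polyder(coeffs), t)
    return float(np.sqrt(np.mean((fitted_deriv - true_deriv) ** 2)))

# Too low an order underfits the derivative; too high an order chases noise.
errors = {k: derivative_rmse(k) for k in (2, 4, 6, 8, 10)}
best_order = min(errors, key=errors.get)
```

The point of checking the *derivative* error, not just the fit residual, is that timing parameters are extracted from slopes, so an order that fits the signal well can still misrepresent its derivative.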
Abstract:
In humans, more than 30,000 chimeric transcripts originating from 23,686 genes have been identified. The mechanisms and association of chimeric transcripts arising from chromosomal rearrangements with cancer are well established, but much remains unknown regarding the biogenesis and importance of other chimeric transcripts that arise from nongenomic alterations. Recently, a SLC45A3–ELK4 chimera has been shown to be androgen-regulated and overexpressed in metastatic or high-grade prostate tumors relative to local prostate cancers. Here, we characterize the expression of a KLK4 cis sense–antisense chimeric transcript and show other examples in prostate cancer. Using non-protein-coding microarray analyses, we initially identified an androgen-regulated antisense transcript within the 3′ untranslated region of the KLK4 gene in LNCaP cells. The KLK4 cis-NAT was validated by strand-specific linker-mediated RT-PCR and Northern blotting. Characterization of the KLK4 cis-NAT by 5′ and 3′ rapid amplification of cDNA ends (RACE) revealed that this transcript forms multiple fusions with the KLK4 sense transcript. The lack of KLK4 antisense promoter activity in reporter assays suggests that these transcripts are unlikely to arise from a trans-splicing mechanism. 5′ RACE and analyses of deep sequencing data from LNCaP cells treated with or without androgens revealed six high-confidence sense–antisense chimeras, of which three were supported by the cDNA databases. In this study, we have shown complex gene expression at the KLK4 locus that might be a hallmark of cis sense–antisense chimeric transcription.
Abstract:
Background: Huntingtin, the HD gene-encoded protein mutated by polyglutamine expansion in Huntington's disease, is required in extraembryonic tissues for proper gastrulation, implicating its activities in nutrition or patterning of the developing embryo. To test these possibilities, we have used whole-mount in situ hybridization to examine embryonic patterning and morphogenesis in homozygous Hdhex4/5 huntingtin-deficient embryos.
Results: In the absence of huntingtin, expression of nutritive genes appears normal, but E7.0–7.5 embryos exhibit a unique combination of patterning defects. Notable are a shortened primitive streak, absence of a proper node and diminished production of anterior streak derivatives. Reduced Wnt3a, Tbx6 and Dll1 expression signifies decreased paraxial mesoderm, while reduced Otx2 expression and lack of headfolds denote a failure of head development. In addition, genes initially broadly expressed are not properly restricted to the posterior, as evidenced by the ectopic expression of Nodal, Fgf8 and Gsc in the epiblast and T (Brachyury) and Evx1 in proximal mesoderm derivatives. Despite impaired posterior restriction and anterior streak deficits, overall anterior/posterior polarity is established: a single primitive streak forms, and marker expression shows that the anterior epiblast and anterior visceral endoderm (AVE) are specified.
Conclusion: Huntingtin is essential in the early patterning of the embryo for formation of the anterior region of the primitive streak, and for down-regulation of a subset of dynamic growth and transcription factor genes. These findings provide fundamental starting points for identifying the novel cellular and molecular activities of huntingtin in the extraembryonic tissues that govern normal anterior streak development. This knowledge may prove important for understanding the mechanism by which the dominant polyglutamine expansion in huntingtin determines the loss of neurons in Huntington's disease.
Abstract:
Humankind has been dealing with all kinds of disasters since the dawn of time. The risk and impact of disasters producing mass casualties worldwide are increasing, due partly to global warming and partly to population growth, increased population density and an aging population. China, as a country with a large population, vast territory, and complex climatic and geographical conditions, has been plagued by all kinds of disasters. Disaster health management has traditionally been a relatively arcane discipline within public health. However, SARS, Avian Influenza, earthquakes and floods, along with the need to be better prepared for the Olympic Games in China, have brought disasters, their management and their potential for large-scale health consequences on populations to the attention of the public, the government and the international community alike. As a result, significant improvements were made to the disaster management policy framework, along with changes to systems and structures to incorporate an improved disaster management focus. This involved upgrading the Centres for Disease Control and Prevention (CDC) throughout China to monitor and better control the health consequences, particularly of infectious disease outbreaks. However, as seen in the Southern China Snow Storm and the Wenchuan Earthquake in 2008, there remains a lack of integrated disaster management and efficient medical rescue, which has been costly for China in both economic and health terms. In the context of a very large and complex country, there is a need to better understand whether these changes have resulted in effective management of the health impacts of such incidents. To date, the health consequences of disasters, particularly in China, have not been a major focus of study. The main aim of this study is to analyse and evaluate disaster health management policy in China and, in particular, its ability to effectively manage the health consequences of disasters.
Flooding has been selected for this study as it is a common and significant disaster type in China and throughout the world. This information is then used to guide a conceptual understanding of the health consequences of floods. A secondary aim of the study is to compare disaster health management in China and Australia, as these countries differ in their length of experience in having a formalised policy response. The final aim of the study is to determine the extent to which Walt and Gilson's (1994) model of policy explains how disaster management policy in China was developed and implemented, from SARS in 2003 to the present day. This study has utilised a case study methodology. A document analysis and literature search of Chinese and English sources was undertaken to analyse and produce a chronology of disaster health management policy in China. Additionally, three detailed case studies of flood health management in China were undertaken, along with three case studies in Australia, in order to examine the policy response and any health consequences stemming from the floods. A total of 30 key international disaster health management experts were surveyed to identify fundamental elements and principles of a successful policy framework for disaster health management. Key policy ingredients were identified from the literature, the case studies and the survey of experts. Walt and Gilson's (1994) policy model, which focuses on the actors, content, context and process of policy, was found to be a useful model for analysing disaster health management policy development and implementation in China. This thesis is divided into four parts. Part 1 is a brief overview of the issues and context to set the scene. Part 2 examines the conceptual and operational context, including the international literature, government documents and the operational environment for disaster health management in China. Part 3 examines primary sources of information to inform the analysis.
This involves two key studies:
• A comparative analysis of the management of floods in China and Australia
• A survey of international experts in the field of disaster management, so as to inform the evaluation of the policy framework in existence in China and the criteria upon which the expression of that policy could be evaluated
Part 4 describes the key outcomes of this research, which include:
• A conceptual framework for describing the health consequences of floods
• A conceptual framework for disaster health management
• An evaluation of the disaster health management policy and its implementation in China.
The research outcomes clearly identified that the most significant improvements are to be derived from improvements in the generic management of disasters, rather than the health aspects alone. Thus, the key findings and recommendations tend to focus on generic issues. The key findings of this research include the following:
• The health consequences of floods may be described in terms of time as ‘immediate’, ‘medium term’ and ‘long term’ and also in relation to causation as ‘direct’ and ‘indirect’ consequences of the flood. These two aspects form a matrix which in turn guides management responses.
• Disaster health management in China requires a more comprehensive response throughout the cycle of prevention, preparedness, response and recovery, but it also requires a more concentrated effort on policy implementation to ensure the translation of the policy framework into effective incident management.
• The policy framework in China is largely of international standard with a sound legislative base. In addition, the development of the Centres for Disease Control and Prevention has provided the basis for a systematic approach to health consequence management. However, the key weaknesses in the current system include:
  o The lack of a key central structure to provide the infrastructure with vital support for policy development, implementation and evaluation.
  o The lack of well-prepared local response teams similar to the local government based volunteer groups in Australia.
• The system lacks structures to coordinate government action at the local level. The result of this is a poorly coordinated local response and a lack of clarity regarding the point at which escalation of the response to higher levels of government is advisable. These weaknesses result in higher levels of risk and negative health impacts.
The key recommendations arising from this study are:
1. Disaster health management policy in China should be enhanced by incorporating disaster management considerations into policy development, and by requiring a disaster management risk analysis and disaster management impact statement for development proposals.
2. China should transform existing organisations to establish a central organisation similar to the Federal Emergency Management Agency (FEMA) in the USA or Emergency Management Australia (EMA). This organisation would be responsible for leading nationwide preparedness through planning, standards development, education and incident evaluation, and for providing operational support to national and local government bodies in the event of a major incident.
3. China should review national and local plans to ensure consistency in planning, and to emphasise the advantages of the integrated planning process.
4. China should enhance community resilience through community education and the development of local volunteer organisations. A national strategy should set direction and standards for education and training, and require system testing through exercises. Other initiatives may include the development of a local volunteer capability, with appropriate training, to assist professional response agencies such as police and fire services in a major incident. An existing organisation such as the Communist Party may be an appropriate structure to provide this response in a cost-effective manner.
5. China should continue the development of professional emergency services, particularly ambulance services, to ensure an effective infrastructure is in place to support the emergency response in disasters.
6. Funding for disaster health management should be enhanced, not only from government but also from other sources such as donations and insurance. A more transparent mechanism is necessary to ensure the funding is disseminated according to the needs of the people affected.
7. Emphasis should be placed on prevention and preparedness, especially effective disaster warnings.
8. China should develop local disaster health management infrastructure, utilising existing resources wherever possible. Strategies for enhancing local infrastructure could include the identification of local resources (including military resources) which could be made available to support disaster responses, together with operational procedures to access those resources.
Implementation of these recommendations should better position China to reduce the significant health consequences experienced each year from major incidents such as floods, and to provide an increased level of confidence to the community about the country's capacity to manage such events.
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. 
This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. 
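Why positional averaging can hide a cooperative substrate is easy to see with a toy calculation (invented rate numbers for two subsites only; purely illustrative of the PS-SCL versus SML contrast described above, not measured data):

```python
import numpy as np

# Toy cleavage-rate table for two subsites (P2, P1), three residues each.
# The (A, K) pair is cooperative: far faster than either residue's
# positional average would suggest.
residues = ["A", "F", "K"]
rates = np.array([
    # P1:  A     F     K
    [0.10, 0.20, 2.00],  # P2 = A  (A-K is the cooperative outlier)
    [0.90, 1.20, 0.80],  # P2 = F
    [0.80, 1.10, 0.70],  # P2 = K
])

# PS-SCL-style readout: fix one position, average over the unfixed one.
p2_profile = rates.mean(axis=1)   # fix P2, randomise P1
p1_profile = rates.mean(axis=0)   # fix P1, randomise P2
psscl_pick = (residues[int(p2_profile.argmax())],
              residues[int(p1_profile.argmax())])

# SML-style readout: every individual sequence measured on its own.
i, j = np.unravel_index(int(rates.argmax()), rates.shape)
sml_pick = (residues[i], residues[j])
```

Averaging over the unfixed position dilutes the single cooperative (A, K) sequence, so the positional profiles point to a different, slower substrate than direct per-sequence measurement does — the PS-SCL readout is blind to the intermolecular cooperativity that the SML screen detects.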
These SML libraries were designed to include all possible sequence combinations of the residues suggested to be preferred by a protease according to the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to that of aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro.
Moreover, SFTI-FCQR and paclitaxel synergistically reduced the growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop, and thereby prevent the engineered inhibitor from reaching its full potential. An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher number and frequency of formation of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets, and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases.
The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
Abstract:
Biomarker analysis has been implemented in sports research in an attempt to monitor the effects of exertion and fatigue in athletes. This study proposed that while such biomarkers may be useful for monitoring injury risk in workers, proteomic approaches might also be utilised to identify novel exertion or injury markers. We found that urinary urea and cortisol levels were significantly elevated in mining workers following a 12-hour overnight shift. These levels failed to return to baseline over 24 h in the more active maintenance crew compared to truck drivers (operators), suggesting a lack of recovery between shifts. Use of a SELDI-TOF MS approach to detect novel exertion or injury markers revealed a spectral feature associated with workers in both work categories who were engaged in higher levels of physical activity. This feature was identified as the LG3 peptide, a C-terminal fragment of the anti-angiogenic/anti-tumourigenic protein endorepellin. This finding suggests that the urinary LG3 peptide may be a biomarker of physical activity. It is also possible that the activity-mediated release of LG3/endorepellin into the circulation may represent a biological mechanism for the known inverse association between physical activity and cancer risk and survival.
Abstract:
Over the last decade, ionic liquids (ILs) have been used for the dissolution and derivatization of isolated cellulose. This ability of ILs is now sought for application in the selective dissolution of cellulose from lignocellulosic biomass, for the manufacture of cellulosic ethanol. However, there are significant knowledge gaps in the understanding of the chemistry of the interaction of biomass and ILs. While imidazolium ILs have been used successfully to dissolve both isolated crystalline cellulose and components of lignocellulosic biomass, phosphonium ILs have not been sufficiently explored for use in the dissolution of lignocellulosic biomass. This thesis reports on the study of the chemistry of sugarcane bagasse with phosphonium ILs. Qualitative and quantitative measurements of biomass components dissolved in the phosphonium ionic liquids trihexyltetradecylphosphonium chloride ([P66614]Cl) and tributylmethylphosphonium methylsulphate ([P4441]MeSO4) are obtained using attenuated total reflectance Fourier transform infrared (FTIR) spectroscopy. Absorption bands related to cellulose, hemicellulose and lignin dissolution, monitored in situ in biomass–IL mixtures, indicate lignin dissolution in both ILs and some holocellulose dissolution in the hydrophilic [P4441]MeSO4. The kinetics of lignin dissolution reported here indicate that while dissolution in the hydrophobic IL [P66614]Cl appears to follow an accepted mechanism of acid-catalysed β-aryl ether cleavage, dissolution in the hydrophilic IL [P4441]MeSO4 does not appear to follow this mechanism and may not be followed by condensation reactions (initiated by reactive ketones). The quantitative measurement of lignin dissolution in phosphonium ILs, based on absorbance at 1510 cm-1, has demonstrated utility and greater precision than the conventional Klason lignin method. The cleavage of lignin β-aryl ether bonds in sugarcane bagasse by the ionic liquid [P66614]Cl is also examined in the presence of catalytic amounts of mineral acid (ca. 0.4 %). The delignification process of bagasse is studied over a range of temperatures (120 °C to 150 °C) by monitoring the production of β-ketones (indicative of cleavage of β-aryl ethers) using FTIR spectroscopy and by compositional analysis of the undissolved fractions. Maximum delignification is obtained at 150 °C, with 52 % of the original lignin content of bagasse removed. No delignification is observed in the absence of acid, which suggests that the reaction is acid-catalysed, with the IL solubilising the lignin fragments. The rate of delignification was significantly higher at 150 °C, suggesting that crossing the glass transition temperature of lignin affords greater freedom of rotation about the propanoid carbon–carbon bonds and leads to increased cleavage of β-aryl ethers. A probable mechanism for the delignification of bagasse with the phosphonium IL is proposed. All polymeric components of bagasse, a lignocellulosic biomass, dissolve in the hydrophilic ionic liquid (IL) tributylmethylphosphonium methylsulfate ([P4441]MeSO4), with and without a catalytic amount of acid (H2SO4, ca. 0.4 %). The presence of acid significantly increases the extent of dissolution of bagasse in [P4441]MeSO4 (by ca. 2.5 times under the conditions used here). The dissolved fractions can be partially recovered by the addition of an antisolvent (water) and are significantly enriched in lignin. Unlike acid-catalysed dissolution in the hydrophobic IL trihexyltetradecylphosphonium chloride ([P66614]Cl), there is little evidence of cleavage of the β-aryl ether bonds of lignin dissolving in [P4441]MeSO4 (with and without acid), although this mechanism may play some role in the acid-catalysed dissolution. XRD of the undissolved fractions suggests that the IL may selectively dissolve the amorphous cellulose component, leaving behind crystalline material.