Abstract:

The studies in the thesis were derived from a program of research focused on centre-based child care in Australia. The studies constituted an ecological analysis as they examined proximal and distal factors which have the potential to affect children's developmental opportunities (Bronfenbrenner, 1979). The project was conducted in thirty-two child care centres located in south-east Queensland. Participants in the research included staff members at the centres, families using the centres and their children. The first study described the personal and professional characteristics of one hundred and forty-four child care workers, as well as their job satisfaction and job commitment. Factors impinging on the stability of care afforded to children were examined, specifically child care workers' intentions to leave their current position and actual staff turnover at a twelve-month follow-up. This is an exosystem analysis (Bronfenbrenner & Crouter, 1983), as it examined the world of work for carers: a setting not directly involving the developing child, but one which has implications for children's experiences. Staff job satisfaction was focused on working with children and other adults, including parents and colleagues. Involvement with children was reported as being the most rewarding aspect of the work. This intrinsic satisfaction was enough to sustain caregivers' efforts to maintain their employment in child care programs. It was found that, while improving working conditions may help to reduce turnover, it is likely that moderate turnover rates will remain, as child care staff work in relatively small centres and leave in order to improve career prospects. Departure from a child care job appeared to be as much about improving career opportunities or changing personal circumstances as it was about poor wages and working conditions. In the second study, factors that influence maternal satisfaction with child care arrangements were examined.
The focus included examination of the nature and qualities of parental interaction with staff. This was a mesosystem analysis (Bronfenbrenner & Crouter, 1983), as it considered the links between family and child care settings. Two hundred and twenty-two questionnaires were returned from mothers whose children were enrolled in the participating centres. It was found that maternal satisfaction with child care encompassed the domains of child-centred and parent-centred satisfaction. The nature and range of responses in the quantitative and qualitative data indicated that these parents were genuinely satisfied with their children's care. In the prediction of maternal satisfaction with child care, single parents, mothers with high role satisfaction, and mothers who were satisfied with the frequency of staff contact and degree of supportive communication had higher levels of satisfaction with their child care arrangements. The third study described the structural and process variations within child care programs and examined program differences for compliance with regulations and differences by profit status of the centre, as a microsystem analysis (Bronfenbrenner, 1979). Observations were made in eighty-three programs which served children from two to five years. The results of the study affirmed beliefs that nonprofit centres are superior in the quality of care provided, although this was not to a level which meant that the care in for-profit centres was inadequate. Regulation of structural features of child care programs, per se, did not guarantee higher quality child care as measured by global or process indicators. The final study represented an integration of a range of influences in child care and family settings which may impact on development. Features of child care programs which predict children's social and cognitive development, while taking into account child and family characteristics, were identified. 
Results were consistent with other research findings which show that child and family characteristics and child care quality predict children's development. Child care quality was more important to the prediction of social development, while family factors appeared to be more predictive of cognitive/language development. An influential variable predictive of development was the period of time which the child had been in the centre. This highlighted the importance of the stability of child care arrangements. Child care quality features which had most influence were global ratings of the qualities of the program environment. However, results need to be interpreted cautiously as the explained variance in the predictive models developed was low. The results of these studies are discussed in terms of the implications for practice and future research. Considerations for an expanded view of ecological approaches to child care research are outlined. Issues discussed include the need to generate child care research which is relevant to social policy development, the implications of market driven policies for child care services, professionalism and professionalisation of child care work, and the need to reconceptualise child care research when the goal is to develop greater theoretical understanding about child care environments and developmental processes.

Abstract:

The focus of this study is the celebration of Eucharist in Catholic primary schools within the Archdiocese of Brisbane. The context of the contemporary Australian Catholic primary school embodies certain 'problematical realities' in relation to the time-honoured way in which school Eucharistic rituals have been celebrated. These contemporary realities raise a number of issues that impact on school celebrations of Eucharist. The purpose of this study is to explore administrators' differing conceptions of school Eucharistic rituals in an attempt to investigate some of these issues and assist members of individual school communities as they strive to make celebrations of Eucharist appropriate and meaningful for the group gathered. Phenomenography is essentially a study of variation: it attempts to map the 'whole' phenomenon under investigation by describing on equal terms all conceptions of the phenomenon and establishing an ordered relationship among them. The purpose of this study and the nature of the research question necessitate an approach that allows the identification and description of the different ways in which administrators experience school Eucharistic rituals; accordingly, the phenomenographic research approach was adopted. Members of the Administration Team, namely the principal, the APRE (Assistant to the Principal Religious Education) and, in larger primary schools, the APA (Assistant to the Principal Administration), share responsibility for leading change in Catholic primary schools in the Archdiocese of Brisbane. In practice, however, principals delegate the role of leading the development of the school's religion program and providing leadership in the religious life of the school community to the APRE (Brisbane Catholic Education, 1997).
Informants in this study are nineteen APREs from a variety of Catholic primary schools in the Archdiocese of Brisbane. These APREs come from schools across the archdiocese, rather than from within one particular region. Several significant findings resulted from this study. Firstly, the data show that there are significant differences in how APREs experience school Eucharistic rituals, although the number of these qualitatively different conceptions is quite limited. The study identifies and describes six distinct yet related conceptions of school Eucharistic rituals. The logical relationship among these conceptions (the outcome space) is presented in the form of a diagram with accompanying explication. The variation among the conceptions is best understood and described in terms of three dimensions of the role of Eucharist in the Catholic primary school and is represented on the model of the outcome space. Individual transcripts suggest that individual APREs tend to emphasise some conceptions more than others. It is the contention of the present study that change in the practice of school Eucharistic rituals is unlikely to occur until all of a school community's conceptions are brought out into the open and articulated. As leaders of change, APREs need to be alerted to their own biases and become aware of alternative ways of conceiving school Eucharistic ritual. It is proposed that the different categories of description and dimensions, represented by the model of the outcome space, can be used to help in the process of articulating a school community's conceptions of Eucharist, with the APRE as facilitator of this process. As a result, the school community develops a better understanding of why their particular school does what it does in relation to school Eucharistic rituals.

Abstract:

This research used the Queensland Police Service, Australia, as a major case study. Information on principles, techniques and processes used, and the reasons for the recording, storing and release of audit information for evidentiary purposes, is reported. It is shown that law enforcement agencies have a two-fold interest in, and legal obligation pertaining to, audit trails. The first interest relates to the situation where audit trails are actually used by criminals in the commission of crime, and the second to where audit trails are generated by the information systems used by the police themselves in support of the recording and investigation of crime. Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. It is shown that, of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion on the ability and/or willingness of courts to fully challenge, assess or value audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what may be considered an environment in which a computer system is operating properly and, secondly, what aspects of education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment. Analysis was undertaken to determine whether audit and control of information in a high security environment, such as law enforcement, could be judged as having improved, or not, in the transition from manual to electronic processes.
Information collection, control of processing and audit in manual processes used by the Queensland Police Service, Australia, in the period 1940 to 1980 were assessed against current electronic systems essentially introduced to policing in the decades of the 1980s and 1990s. Results show that electronic systems do provide for faster communications, with centrally controlled and updated information readily available for use by large numbers of users connected across significant geographical distances. However, it is clearly evident that the price paid for this is a lack of ability and/or reluctance to provide improved audit and control processes. To compare the information systems audit and control arrangements of the Queensland Police Service with other government departments or agencies, an Australia-wide survey was conducted. Results of the survey were contrasted with the results of a survey conducted four years earlier by the Australian Commonwealth Privacy Commission, which showed that security in relation to the recording of activity against access to information held on Australian government computer systems had been poor and a cause for concern. However, within this four-year period there is evidence to suggest that government organisations are increasingly more inclined to generate audit trails. An attack on the overall security of audit trails in computer operating systems was initiated to further investigate findings reported in relation to the government systems survey, which showed that information systems audit trails in Microsoft Corporation's Windows operating system environments are relied on quite heavily. An audit of the security for audit trails generated, stored and managed in the Microsoft Windows 2000 operating system environment was undertaken and compared and contrasted with similar audit trail schemes in the UNIX and Linux operating systems.
Strength of passwords and exploitation of any security problems in access control were targeted using software tools that are freely available in the public domain. Results showed that such security for the Windows 2000 system is seriously flawed and that the integrity of audit trails stored within these environments cannot be relied upon. A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession is then proposed. This is achieved by examining the current rules and guidelines related to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge used by the medical profession and forensic science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. However, unlike some discipline areas, this analysis has clearly identified two distinct aspects of the matter which appear particularly relevant to IT. These two areas are: expertise gained through the application of IT to information needs in a particular public or private enterprise; and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
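None of the operating-system audit schemes discussed above makes stored records tamper-evident by construction. As an illustrative aside (not part of the thesis, and with hypothetical field names), one widely used mitigation is to chain each audit record to its predecessor with a keyed MAC, so that altering any stored entry invalidates every later one:

```python
import hashlib
import hmac
import json

def append_record(log, record, key):
    """Append an audit record, chaining it to the previous entry's MAC."""
    prev_mac = log[-1]["mac"] if log else "genesis"
    payload = json.dumps(record, sort_keys=True) + prev_mac
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "mac": mac})

def verify_log(log, key):
    """Recompute the whole chain; return False if any entry was altered."""
    prev_mac = "genesis"
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev_mac
        if hmac.new(key, payload.encode(), hashlib.sha256).hexdigest() != entry["mac"]:
            return False
        prev_mac = entry["mac"]
    return True

key = b"audit-signing-key"  # in practice, held by a separate audit authority
log = []
append_record(log, {"user": "op1", "action": "query", "target": "person/123"}, key)
append_record(log, {"user": "op1", "action": "update", "target": "person/123"}, key)
assert verify_log(log, key)
log[0]["record"]["action"] = "none"  # tampering with an early entry...
assert not verify_log(log, key)      # ...is detected on verification
```

The key point is that an attacker with write access to the log alone cannot rewrite history without the signing key, which is the property the thesis finds lacking in the audited systems.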

Abstract:

In order to effect permanent closure in burns patients suffering from full thickness wounds, replacement of their skin via split thickness autografting is essential. Dermal substitutes in conjunction with widely meshed split thickness autografts (+/- cultured keratinocytes) reduce scarring at the donor and recipient sites of burns patients by reducing demand for autologous skin (both surface area and thickness), without compromising dermal delivery at the wound face. Tissue engineered products such as Integra consist of a dermal template which is rapidly remodelled to form a neodermis, at which time the temporary silicone outer layer is removed and replaced with autologous split thickness skin. Whilst provision of a thick tissue engineered dermis at full thickness burn sites reduces scarring, it is hampered by delays in vascularisation, which result in clinical failure. The ultimate success of any skin graft product is dependent upon a number of basic factors including adherence and haemostasis; in the case of viable tissue grafts, success is ultimately dependent upon restoration of a normal blood supply, and hence this study. Ultimately, the goal of this research is to improve the therapeutic properties of tissue replacements through impregnation with growth factors aimed at stimulating migration and proliferation of microvascular endothelial cells into the donor tissue post grafting. For the purpose of my Masters, the aim was to evaluate the responsiveness of a dermal microvascular endothelial cell line to growth factors and haemostatic factors in the presence of the glycoprotein vitronectin. Vitronectin formed the backbone of my hypothesis and research due to its association with both epithelial and, more specifically, endothelial migration and proliferation. In early work, a platform technology referred to as VitroGro (Tissue Therapies Ltd), which comprises vitronectin-bound BP5/IGF-1, aided keratinocyte proliferation.
I hypothesised that this result would translate to another epithelium, the endothelium; however, VitroGro had no effect on endothelial proliferation or migration. Vitronectin increases the presence of Fibroblast Growth Factor (FGF) and Vascular Endothelial Growth Factor (VEGF) receptors, enhancing cell responsiveness to their respective ligands. So, although Human Microvascular Endothelial Cell line 1 (HMEC-1) VEGF receptor expression is generally low, it was hypothesised that exposure to vitronectin would up-regulate this receptor. HMEC-1 migration, but not proliferation, was enhanced by vitronectin-bound VEGF, as well as vitronectin-bound Epidermal Growth Factor (EGF), both of which could be used to stimulate microvascular endothelial cell migration for the purpose of transplantation. In addition to vitronectin's synergy with various growth factors, it has also been shown to play a role in haemostasis. Vitronectin binds thrombin-antithrombin III (TAT) to form a trimeric complex that takes on many of the attributes of vitronectin, such as heparin affinity, which results in its adherence to endothelium via heparan sulfate proteoglycans (HSP), followed by unaltered transcytosis through the endothelium, and ultimately its removal from the circulation. This has been documented as a mechanism designed to remove thrombin from the circulation. Equally, it could be argued that it is a mechanism for delivering vitronectin to the matrix. My results show that matrix-bound vitronectin dramatically alters the effect that conformationally altered antithrombin III (cATIII) has on the proliferation of microvascular endothelial cells: cATIII stimulates HMEC-1 proliferation in the presence of matrix-bound vitronectin, as opposed to inhibiting proliferation in its absence.
Binding vitronectin to tissues and organs prior to transplant, in the presence of cATIII, will have a profound effect on microvascular infiltration of the graft, by preventing occlusion of existing vessels whilst stimulating migration and proliferation of endothelium within the tissue.

Abstract:

Prostate cancer is an important male health issue. The strategies used to diagnose and treat prostate cancer underscore the cell and molecular interactions that promote disease progression. Prostate cancer is histologically defined by increasingly undifferentiated tumour cells and therapeutically targeted by androgen ablation. Even as the normal glandular architecture of the adult prostate is lost, prostate cancer cells remain dependent on the androgen receptor (AR) for growth and survival. This project focused on androgen-regulated gene expression, altered cellular differentiation, and the nexus between these two concepts. The AR controls prostate development, homeostasis and cancer progression by regulating the expression of downstream genes. Kallikrein-related serine peptidases are prominent transcriptional targets of AR in the adult prostate. Kallikrein 3 (KLK3), which is commonly referred to as prostate-specific antigen, is the current serum biomarker for prostate cancer. Other kallikreins are potential adjunct biomarkers. As secreted proteases, kallikreins act through enzyme cascades that may modulate the prostate cancer microenvironment. Both as a panel of biomarkers and cascade of proteases, the roles of kallikreins are interconnected. Yet the expression and regulation of different kallikreins in prostate cancer has not been compared. In this study, a spectrum of prostate cell lines was used to evaluate the expression profile of all 15 members of the kallikrein family. A cluster of genes was co-ordinately expressed in androgen-responsive cell lines. This group of kallikreins included KLK2, 3, 4 and 15, which are located adjacent to one another at the centromeric end of the kallikrein locus. KLK14 was also of interest, because it was ubiquitously expressed among the prostate cell lines. Immunohistochemistry showed that these five kallikreins are co-expressed in benign and malignant prostate tissue.
The androgen-regulated expression of KLK2 and KLK3 is well-characterised, but has not been compared with other kallikreins. Therefore, KLK2, 3, 4, 14 and 15 expression were all measured in time course and dose response experiments with androgens, AR-antagonist treatments, hormone deprivation experiments and cells transfected with AR siRNA. Collectively, these experiments demonstrated that prostatic kallikreins are specifically and directly regulated by the AR. The data also revealed that kallikrein genes are differentially regulated by androgens; KLK2 and KLK3 were strongly up-regulated, KLK4 and KLK15 were modestly up-regulated, and KLK14 was repressed. Notably, KLK14 is located at the telomeric end of the kallikrein locus, far away from the centromeric cluster of kallikreins that are stimulated by androgens. These results show that the expression of KLK2, 3, 4, 14 and 15 is maintained in prostate cancer, but that these genes exhibit different responses to androgens. This makes the kallikrein locus an ideal model to investigate AR signalling. The increasingly dedifferentiated phenotype of aggressive prostate cancer cells is accompanied by the re-expression of signalling molecules that are usually expressed during embryogenesis and foetal tissue development. The Wnt pathway is one developmental cascade that is reactivated in prostate cancer. The canonical Wnt cascade regulates the intracellular levels of β-catenin, a potent transcriptional co-activator of T-cell factor (TCF) transcription factors. Notably, β-catenin can also bind to the AR and synergistically stimulate androgen-mediated gene expression. This is at the expense of typical Wnt/TCF target genes, because the AR:β-catenin and TCF:β-catenin interactions are mutually exclusive. The effect of β-catenin on kallikrein expression was examined to further investigate the role of β-catenin in prostate cancer.
Stable knockdown of β-catenin in LNCaP prostate cancer cells attenuated the androgen-regulated expression of KLK2, 3, 4 and 15, but not KLK14. To test whether KLK14 is instead a TCF:β-catenin target gene, the endogenous levels of β-catenin were increased by inhibiting its degradation. Although KLK14 expression was up-regulated by these treatments, siRNA knockdown of β-catenin demonstrated that this effect was independent of β-catenin. These results show that β-catenin is required for maximal expression of KLK2, 3, 4 and 15, but not KLK14. Developmental cells and tumour cells express a similar repertoire of signalling molecules, which means that these different cell types are responsive to one another. Previous reports have shown that stem cells and foetal tissues can reprogram aggressive cancer cells to less aggressive phenotypes by restoring the balance to developmental signalling pathways that are highly dysregulated in cancer. To investigate this phenomenon in prostate cancer, DU145 and PC-3 prostate cancer cells were cultured on matrices pre-conditioned with human embryonic stem cells (hESCs). Soft agar assays showed that prostate cancer cells exposed to hESC conditioned matrices had reduced clonogenicity compared with cells harvested from control matrices. A recent study demonstrated that this effect was partially due to hESC-derived Lefty, an antagonist of Nodal. A member of the transforming growth factor β (TGFβ) superfamily, Nodal regulates embryogenesis and is re-expressed in cancer. The role of Nodal in prostate cancer has not previously been reported. Therefore, the expression and function of the Nodal signalling pathway in prostate cancer was investigated. Western blots confirmed that Nodal is expressed in DU145 and PC-3 cells. Immunohistochemistry revealed greater expression of Nodal in malignant versus benign glands. Notably, the Nodal inhibitor, Lefty, was not expressed at the mRNA level in any prostate cell lines tested.
The Nodal signalling pathway is functionally active in prostate cancer cells. Recombinant Nodal treatments triggered downstream phosphorylation of Smad2 in DU145 and LNCaP cells, and stably-transfected Nodal increased the clonogenicity of LNCaP cells. Nodal was also found to modulate AR signalling. Nodal reduced the activity of an androgen-regulated KLK3 promoter construct in luciferase assays and attenuated the endogenous expression of AR target genes including prostatic kallikreins. These results demonstrate that Nodal is a novel example of a developmental signalling molecule that is re-expressed in prostate cancer and may have a functional role in prostate cancer progression. In summary, this project clarifies the role of androgens and changing cellular differentiation in prostate cancer by characterising the expression and function of the downstream genes encoding kallikrein-related serine proteases and Nodal. Furthermore, this study emphasises the similarities between prostate cancer and early development, and the crosstalk between developmental signalling pathways and the AR axis. The outcomes of this project also affirm the utility of the kallikrein locus as a model system to monitor tumour progression and the phenotype of prostate cancer cells.

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. 
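The core sampling step on which any such photon-transport code is built can be sketched briefly. The snippet below is an illustrative toy, not the thesis code: it estimates the fraction of photons crossing a uniform slab without interacting, by sampling exponentially distributed free paths, and checks the estimate against the Beer–Lambert prediction (the attenuation coefficient and thickness are assumed values):

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=100_000, rng=random):
    """Monte Carlo estimate of the fraction of photons that cross a slab
    without interacting, using exponentially distributed free paths."""
    survived = 0
    for _ in range(n_photons):
        # Distance to the first interaction: s = -ln(U) / mu, U ~ Uniform(0, 1)
        s = -math.log(rng.random()) / mu
        if s > thickness:
            survived += 1
    return survived / n_photons

mu = 0.5   # linear attenuation coefficient, 1/cm (illustrative value)
t = 2.0    # slab thickness, cm (illustrative value)
estimate = transmitted_fraction(mu, t)
analytic = math.exp(-mu * t)  # Beer-Lambert prediction
assert abs(estimate - analytic) < 0.01
```

A full transport code of the kind described above layers interaction sampling (photoelectric absorption, coherent and incoherent scatter, with or without electron binding corrections) on top of this free-path step.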
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:

1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral, and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.

The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
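The two-component decomposition that standard DEXA relies on reduces to a 2×2 linear solve: the log-attenuation measured at two beam energies is expressed in terms of the areal densities of bone mineral and soft tissue. The sketch below is illustrative only; the mass attenuation coefficients are invented round numbers, not measured data:

```python
def dexa_decompose(L_low, L_high, mu_bone, mu_soft):
    """Solve L_E = mu_bone[E] * d_bone + mu_soft[E] * d_soft for the two
    areal densities d_bone, d_soft (g/cm^2), given log-attenuations
    L_low, L_high at the low and high beam energies."""
    (b1, b2), (s1, s2) = mu_bone, mu_soft
    det = b1 * s2 - b2 * s1          # must be nonzero: energies distinguish tissues
    d_bone = (L_low * s2 - L_high * s1) / det
    d_soft = (b1 * L_high - b2 * L_low) / det
    return d_bone, d_soft

# Illustrative mass attenuation coefficients (cm^2/g) at (low, high) energy
mu_bone = (0.60, 0.30)
mu_soft = (0.25, 0.20)

# Forward-model a phantom with 1.2 g/cm^2 bone mineral and 20 g/cm^2 soft tissue
d_bone, d_soft = 1.2, 20.0
L_low = mu_bone[0] * d_bone + mu_soft[0] * d_soft
L_high = mu_bone[1] * d_bone + mu_soft[1] * d_soft

est = dexa_decompose(L_low, L_high, mu_bone, mu_soft)
assert abs(est[0] - d_bone) < 1e-9 and abs(est[1] - d_soft) < 1e-9
```

The DPA(+) extension discussed above adds a third equation, the measured path length, allowing the same kind of solve to separate fat, lean soft tissue and bone mineral at the cost of the precision penalty the models quantify.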

Abstract:

Continuum mechanics provides a mathematical framework for modelling the physical stresses experienced by a material. Recent studies show that physical stresses play an important role in a wide variety of biological processes, including dermal wound healing, soft tissue growth and morphogenesis. Thus, continuum mechanics is a useful mathematical tool for modelling a range of biological phenomena. Unfortunately, classical continuum mechanics is of limited use in biomechanical problems. As cells refashion the fibres that make up a soft tissue, they sometimes alter the tissue's fundamental mechanical structure. Advanced mathematical techniques are needed in order to accurately describe this sort of biological `plasticity'. A number of such techniques have been proposed by previous researchers. However, models that incorporate biological plasticity tend to be very complicated. Furthermore, these models are often difficult to apply and/or interpret, making them of limited practical use. One alternative approach is to ignore biological plasticity and use classical continuum mechanics. For example, most mechanochemical models of dermal wound healing assume that the skin behaves as a linear viscoelastic solid. Our analysis indicates that this assumption leads to physically unrealistic results. In this thesis we present a novel and practical approach to modelling biological plasticity. Our principal aim is to combine the simplicity of classical linear models with the sophistication of plasticity theory. To achieve this, we perform a careful mathematical analysis of the concept of a `zero stress state'. This leads us to a formal definition of strain that is appropriate for materials that undergo internal remodelling. Next, we consider the evolution of the zero stress state over time. We develop a novel theory of `morphoelasticity' that can be used to describe how the zero stress state changes in response to growth and remodelling.
Importantly, our work yields an intuitive and internally consistent way of modelling anisotropic growth. Furthermore, we are able to use our theory of morphoelasticity to develop evolution equations for elastic strain. We also present some applications of our theory. For example, we show that morphoelasticity can be used to obtain a constitutive law for a Maxwell viscoelastic fluid that is valid at large deformation gradients. Similarly, we analyse a morphoelastic model of the stress-dependent growth of a tumour spheroid. This work leads to the prediction that a tumour spheroid will always be in a state of radial compression and circumferential tension. Finally, we conclude by presenting a novel mechanochemical model of dermal wound healing that takes into account the plasticity of the healing skin.
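The evolving zero stress state can be pictured via the standard morphoelastic device of decomposing the deformation gradient; the sketch below shows the common textbook form of this idea, which is not necessarily the thesis's exact formulation:

```latex
% Standard morphoelastic decomposition (illustrative of the idea only,
% not necessarily the thesis's exact formulation).
\[
  \mathbf{F} \;=\; \mathbf{F}_e \, \mathbf{F}_g ,
\]
% F   : total deformation gradient (reference -> current configuration)
% F_g : growth/remodelling part, carrying the body to its evolving
%       zero stress state
% F_e : elastic part; only F_e generates stress via the constitutive law
```

Growth and remodelling then enter through an evolution law for the growth part, while the elastic part alone determines the stress.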


While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type 2 functionals. Subsequently, this class of functionals (with inessential changes of definition) was studied by Townsend, who calls this class POLY, and by Kapron and Cook, who call the same class basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, thus giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second order arithmetic with second order variables ranging over functions from N to N which suitably characterises basic feasible functionals, and show that it is a useful tool for investigating their properties. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use nonfeasible functions.
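As a purely informal illustration of the setting (not the oracle Turing machine definition of basic feasibility), a type 2 functional takes a type 1 function as an argument, so its cost depends on both its numeric input and the growth of the function argument; `bounded_max` below is a hypothetical example of such a functional:

```python
# Informal illustration of a type-2 functional: it takes a type-1
# function f (an "oracle") and a number x, and returns a number.
# This conveys only the flavour of the setting; basic feasible
# functionals are actually defined via oracle Turing machines with
# second-order polynomial time bounds, not by this code.

def bounded_max(f, x):
    """A type-2 functional: the maximum of f over inputs 0..x.
    Its running time depends on x and on how costly f is to query."""
    return max(f(i) for i in range(x + 1))
```

The feasibility question for such objects is what the second-order polynomial bounds in the Kapron and Cook characterisation make precise.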


In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty, and interpret 'desirable' as a reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds, for hypotheses and for data, is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.


The stylized facts that motivate this thesis include the diversity in growth patterns observed across countries during the process of economic development, and the divergence over time in income distributions both within and across countries. This thesis constructs a dynamic general equilibrium model in which technology adoption is costly and agents are heterogeneous in their initial holdings of resources. Given the household's resource level, this study examines how adoption costs influence the evolution of household income over time and the timing of the transition to more productive technologies. The analytical results of the model characterize three growth outcomes associated with the technology adoption process, depending on productivity differences between the technologies; these are appropriately labeled poverty trap, dual economy and balanced growth. The model is then capable of explaining the observed diversity in growth patterns across countries, as well as the divergence of incomes over time. Numerical simulations of the model furthermore illustrate features of this transition. They suggest that differences in adoption costs account for the timing of households' decisions to switch technology, which leads to a disparity in incomes across households during the technology adoption process. Since this determines the timing of complete adoption of the technology within a country, the implications for cross-country income differences are obvious. Moreover, the timing of technology adoption appears to affect households' growth patterns, which differ across income groups. The findings also show that, in the presence of costs associated with the adoption of more productive technologies, inequalities of income and wealth may increase over time, tending to delay the convergence in income levels. 
Initial levels of inequality in resources also have an impact on the date of complete adoption of more productive technologies. The issue of increasing income inequality in the process of technology adoption opens up another direction for research: specifically, increasing inequality implies that distributive conflicts may emerge during the transitional process, with political-economy consequences. The model is therefore extended to include such issues. Without any political considerations, taxes would lead to a reduction in inequality and convergence of incomes across agents. However, this process is delayed if politico-economic influences are taken into account. Moreover, the political outcome is suboptimal, essentially because there is resistance to the complete adoption of the advanced technology.
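The adoption mechanism described above can be caricatured in a few lines: households grow under the old technology until they can pay a one-off adoption cost, so richer households switch earlier and incomes diverge during the transition. All names and parameter values below are illustrative assumptions, not the thesis's model:

```python
# Toy sketch of costly technology adoption with heterogeneous initial
# wealth. Parameter values are illustrative, not the thesis's.

A_OLD, A_NEW = 1.02, 1.05   # gross growth factor under each technology
COST = 5.0                  # one-off cost of adopting the new technology

def wealth_path(w0, periods=60):
    """Simulate one household: it adopts the more productive
    technology as soon as it can afford the adoption cost.
    Returns final wealth and whether adoption ever occurred."""
    w, adopted = w0, False
    for _ in range(periods):
        if not adopted and w >= COST:
            w -= COST
            adopted = True
        w *= A_NEW if adopted else A_OLD
    return w, adopted

# A rich household adopts immediately and pulls ahead; a poor one may
# never afford the cost within the horizon (a poverty-trap outcome).
rich = wealth_path(6.0)
poor = wealth_path(1.0)
```

Even this caricature reproduces the qualitative pattern in the abstract: adoption timing depends on initial resources, and inequality widens during the transition.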


The joints of a humanoid robot experience disturbances of markedly different magnitudes during the course of a walking gait. Consequently, simple feedback control techniques track desired joint trajectories poorly. This paper explores the addition of a control system inspired by the architecture of the cerebellum to improve system response. This system learns to compensate for the changes in load that occur during a cycle of motion. The joint compensation scheme, called Trajectory Error Learning, augments the existing feedback control loop on a humanoid robot. Results from tests on the GuRoo platform show an improvement in system response when the system is augmented with the cerebellar compensator.
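The GuRoo implementation details are not reproduced here, but the general cerebellar idea, a feedback loop augmented by a feedforward term learned per gait phase so that repeating disturbances are gradually absorbed, can be sketched as follows (the gains, bin count and update rule are hypothetical assumptions, not the paper's scheme):

```python
# Hypothetical sketch of a feedback loop augmented with a learned
# per-phase feedforward compensator, in the spirit of cerebellar
# compensation. Gains, bin count and update rule are illustrative.

N_BINS = 20          # discretisation of one gait cycle into phase bins
KP = 5.0             # simple proportional feedback gain
LEARNING_RATE = 0.1

feedforward = [0.0] * N_BINS   # learned compensation, one entry per bin

def control(phase_bin, error):
    """Feedback torque plus the learned feedforward term. The feedback
    error also trains the feedforward table, so over repeated gait
    cycles the learned term absorbs the repeating load and the
    residual feedback error shrinks."""
    u = KP * error + feedforward[phase_bin]
    feedforward[phase_bin] += LEARNING_RATE * KP * error
    return u
```

Because a walking gait is periodic, the same phase bin sees a similar load on every cycle, which is what makes this kind of phase-indexed learning effective.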


Vigilance declines when people are exposed to highly predictable and uneventful tasks. Monotonous tasks provide little cognitive and motor stimulation and contribute to human errors. This paper aims to model and detect vigilance decline in real time through participants' reaction times during a monotonous task. A lab-based experiment adapting the Sustained Attention to Response Task (SART) is conducted to quantify the effect of monotony on overall performance. Relevant parameters are then used to build a model that detects hypovigilance throughout the experiment. The accuracy of different mathematical models is compared for detecting lapses in vigilance in real time, minute by minute, during the task. We show that monotonous tasks can lead to an average decline in performance of 45%. Furthermore, vigilance modelling enables the detection of vigilance decline through reaction times with an accuracy of 72% and a 29% false alarm rate. Bayesian models are identified as better at detecting lapses in vigilance than Neural Networks and Generalised Linear Mixed Models. This modelling could be used as a framework to detect vigilance decline in any human performing monotonous tasks.
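As a minimal caricature of reaction-time-based hypovigilance detection (not the study's Bayesian or Neural Network models, and with a hypothetical window size and threshold), one can flag a lapse when the recent mean reaction time drifts well above a participant's baseline:

```python
# Minimal sketch of detecting a vigilance lapse from reaction times:
# flag a lapse when the mean RT over a sliding window exceeds a
# threshold relative to the participant's baseline. The window size
# and threshold factor are hypothetical, not the study's parameters.
from collections import deque

class VigilanceMonitor:
    def __init__(self, baseline_rt, window=10, factor=1.5):
        self.baseline = baseline_rt
        self.window = deque(maxlen=window)   # recent reaction times
        self.factor = factor

    def observe(self, rt):
        """Record one reaction time (seconds); return True if the
        recent average suggests a lapse in vigilance."""
        self.window.append(rt)
        mean_rt = sum(self.window) / len(self.window)
        return mean_rt > self.factor * self.baseline

m = VigilanceMonitor(baseline_rt=0.3)
alerts = [m.observe(rt) for rt in [0.31, 0.29, 0.62, 0.70, 0.75]]
```

The study's probabilistic models play the role of this threshold rule, trading off detection accuracy against the false alarm rate reported above.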


Monotony has been identified as a contributing factor to road crashes. Drivers' ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks, such as driving on Australian rural roads, many of which are monotonous by nature. Highway design in particular attempts to reduce the driver's task to a merely lane-keeping one. Such a task provides little stimulation and is monotonous, thus affecting the driver's attention, which is no longer directed towards the road. Inattention contributes to crashes, especially for professional drivers. Monotony has been studied mainly from the endogenous perspective (for instance through sleep deprivation) without taking into account the influence of the task itself (repetitiveness) or the surrounding environment. The aim and novelty of this thesis is to develop a methodology (mathematical framework) able to predict driver lapses of vigilance under monotonous environments in real time, using endogenous and exogenous data collected from the driver, the vehicle and the environment. Existing approaches have tended to neglect the specificity of task monotony, leaving the question of the existence of a monotonous state unanswered. Furthermore, the issue of detecting vigilance decrement before it occurs (prediction) has not been investigated in the literature, let alone in real time. A multidisciplinary approach is necessary to explain how vigilance evolves in monotonous conditions; such an approach needs to draw on psychology, physiology, road safety, computer science and mathematics. The systemic approach proposed in this study is unique in its predictive dimension and allows us to define, in real time, the impacts of monotony on the driver's ability to drive. The methodology is based on mathematical models relating data available in vehicles to the vigilance state of the driver during a monotonous driving task in various environments. 
The model integrates different data measuring the driver's endogenous and exogenous factors (related to the driver, the vehicle and the surrounding environment). Electroencephalography (EEG) is used to measure driver vigilance since it has been shown to be the most reliable, real-time methodology for assessing vigilance level. A variety of mathematical models could provide a framework for prediction; to find the most accurate, a collection of mathematical models was trained in this thesis and the most reliable identified. The methodology developed in this research is first applied to a theoretically sound measure of sustained attention, the Sustained Attention to Response Task (SART), as adapted by Michael (2010) and Michael and Meuter (2006, 2007). This experiment induced impairments due to monotony during a vigilance task. Analyses performed in this thesis confirm and extend findings from Michael (2010) that monotony leads to an important vigilance impairment independent of fatigue. This thesis is also the first to show that monotony changes the dynamics of vigilance evolution and tends to create a monotonous state characterised by reduced vigilance. Personality traits such as being a low sensation seeker can mitigate this vigilance decrement. It is also evident that lapses in vigilance can be predicted accurately with Bayesian modelling and Neural Networks. This framework was then applied to the driving task by designing a simulated monotonous driving task. The design of such a task requires multidisciplinary knowledge and involved the psychologist Rebecca Michael. Monotony was varied through both road design and road environment variables. This experiment demonstrated that road monotony can lead to driving impairment; in particular, monotonous road scenery was shown to have more impact than monotonous road design. 
Next, this study identified a variety of surrogate measures that are correlated with vigilance levels obtained from the EEG, and showed that vigilance states can be predicted from these surrogate measures. This means that vigilance decrement can be detected in a car without the use of an EEG device. Amongst the different mathematical models tested in this thesis, only Neural Networks predicted the vigilance levels accurately. The results of both experiments provide valuable information about the methodology for predicting vigilance decrement. The problem is complex and requires modelling that can adapt to high inter-individual differences. Only Neural Networks proved accurate in both studies, suggesting that these models are the most likely to be accurate when used on real roads or in further research on vigilance modelling. This research provides a better understanding of the driving task under monotonous conditions. Results demonstrate that mathematical modelling can be used to determine the driver's vigilance state using surrogate measures identified during this study. This research has opened up avenues for future research and could result in the development of an in-vehicle device predicting driver vigilance decrement. Such a device could contribute to a reduction in crashes and therefore improve road safety.


Spatial organization of Ge islands, grown by physical vapor deposition on prepatterned Si(001) substrates, has been investigated. The substrates were patterned prior to Ge deposition by nanoindentation. Characterization of the Ge dots is performed by atomic force microscopy and scanning electron microscopy. The nanoindents act as trapping sites, allowing ripening of Ge islands at those locations during subsequent deposition and diffusion of Ge on the surface. The results show that island ordering is intrinsically linked to nucleation and growth at the indented sites and strongly depends on the pattern parameters.


Lateral gene transfer (LGT) from prokaryotes to microbial eukaryotes is usually detected by chance through genome-sequencing projects. Here, we explore a different, hypothesis-driven approach. We show that the fitness advantage associated with the transferred gene, typically invoked only in retrospect, can be used to design a functional screen capable of identifying postulated LGT cases. We hypothesized that beta-glucuronidase (gus) genes may be prone to LGT from bacteria to fungi (thought to lack gus) because this would enable fungi to utilize glucuronides in vertebrate urine as a carbon source. Using an enrichment procedure based on a glucose-releasing glucuronide analog (cellobiouronic acid), we isolated two gus(+) ascomycete fungi from soils (Penicillium canescens and Scopulariopsis sp.). A phylogenetic analysis suggested that their gus genes, as well as the gus genes identified in genomic sequences of the ascomycetes Aspergillus nidulans and Gibberella zeae, had been introgressed laterally from high-GC gram(+) bacteria. Two such bacteria (Arthrobacter spp.), isolated together with the gus(+) fungi, appeared to be the descendants of a bacterial donor organism from which gus had been transferred to fungi. This scenario was independently supported by similar substrate affinities of the encoded beta-glucuronidases, the absence of introns from fungal gus genes, and the similarity between the signal peptide-encoding 5' extensions of some fungal gus genes and the Arthrobacter sequences upstream of gus. Differences in the sequences of the fungal 5' extensions suggested at least two separate introgression events after the divergence of the two main Euascomycete classes. We suggest that deposition of glucuronides on soils as a result of the colonization of land by vertebrates may have favored LGT of gus from bacteria to fungi in soils.