940 results for principle


Relevance:

10.00%

Publisher:

Abstract:

Background During a global influenza pandemic, the vaccine requirements of developing countries can surpass their supply capabilities, if these exist at all, compelling them to rely on developed countries for stocks that may not be available in time. There is thus a need for developing countries in general to produce their own pandemic and possibly seasonal influenza vaccines. Here we describe the development of a plant-based platform for producing influenza vaccines locally, in South Africa. Plant-produced influenza vaccine candidates are quicker to develop and potentially cheaper than egg-produced influenza vaccines, and their production can be rapidly upscaled. In this study, we investigated the feasibility of producing a vaccine to the highly pathogenic avian influenza A subtype H5N1 virus, the most generally virulent influenza virus identified to date. Two variants of the haemagglutinin (HA) surface glycoprotein gene were synthesised for optimum expression in plants: these were the full-length HA gene (H5) and a truncated form lacking the transmembrane domain (H5tr). The genes were cloned into a panel of Agrobacterium tumefaciens binary plant expression vectors in order to test HA accumulation in different cell compartments. The constructs were transiently expressed in tobacco by means of agroinfiltration. Stable transgenic tobacco plants were also generated to provide seed for stable storage of the material as a pre-pandemic strategy. Results For both transient and transgenic expression systems the highest accumulation of full-length H5 protein occurred in the apoplastic spaces, while the highest accumulation of H5tr was in the endoplasmic reticulum. The H5 proteins were produced at relatively high concentrations in both systems. Following partial purification, haemagglutination and haemagglutination inhibition tests indicated that the conformation of the plant-produced HA variants was correct and the proteins were functional. 
The immunisation of chickens and mice with the candidate vaccines elicited HA-specific antibody responses. Conclusions By synthesising two versions of a single gene, we produced, through both transient and transgenic expression in plants, two variants of a highly pathogenic avian influenza virus HA protein with potential as vaccines. This is a proof of principle of the potential of plant-produced influenza vaccines as a feasible pandemic response strategy for South Africa and other developing countries.


We used in vivo (biological), in silico (computational structure prediction), and in vitro (model sequence folding) analyses of single-stranded DNA sequences to show that nucleic acid folding conservation is the selective principle behind a high-frequency single-nucleotide reversion observed in a three-nucleotide mutated motif of the Maize streak virus replication associated protein (Rep) gene. In silico and in vitro studies showed that the three-nucleotide mutation adversely affected Rep nucleic acid folding, and that the single-nucleotide reversion [C(601)A] restored wild-type-like folding. In vivo support came from infecting maize with mutant viruses: those with Rep genes containing nucleotide changes predicted to restore a wild-type-like fold [A(601)/G(601)] preferentially accumulated over those predicted to fold differently [C(601)/T(601)], which frequently reverted to A(601) and displaced the original population. We propose that the selection of native nucleic acid folding is an epigenetic effect, which might have broad implications in the evolution of plants and their viruses.


A fundamental prerequisite of population health research is the ability to establish an accurate denominator. This in turn requires that every individual in the study population is counted. However, this seemingly simple principle has become a point of conflict between researchers whose aim is to produce evidence of disparities in population health outcomes and governments whose policies promote (intentionally or not) the inequalities that are the underlying causes of health disparities. Research into the health of asylum seekers is a case in point. There is a growing body of evidence documenting the adverse effects of recent changes in asylum-seeking legislation, including mandatory detention. However, much of this evidence has been dismissed by some governments as unsound, biased and unscientific because, it is argued, it is derived from small samples or from case studies. Yet it is the policies of governments that are the key barrier to the conduct of rigorous population health research on asylum seekers. In this paper, the authors discuss the challenges of counting asylum seekers and the limitations of data reported in some industrialized countries. They argue that the lack of accurate statistical data on asylum seekers has been an effective neo-conservative strategy for erasing the health inequalities of this vulnerable population, indeed a strategy that renders this population invisible. They describe some alternative strategies that researchers may use to obtain denominator data on hard-to-reach populations such as asylum seekers.


It is widely recognised that exposure to air pollutants contributes to pulmonary dysfunction as well as a range of neurological and vascular disorders. The rapid increase of worldwide carbon emissions continues to compromise environmental sustainability whilst contributing to premature death. Moreover, the harms caused by air pollution have a more pernicious reach, as a major source of climate change and ‘natural disasters’, which reportedly kill millions of people each year (World Health Organization, 2012). The opening quotations tell a story of the UK government's complacency towards the devastation of toxic and contaminating air emissions. The above headlines greeted the British public earlier this year after its government was taken to the Court of Appeal for an appalling air pollution record that continues to cause the premature deaths of 30,000 British people each year, at a health cost estimated at £20 billion per annum. This, combined with pending legal proceedings brought by the European Commission against the UK government for air pollution violations, points to a Cameron government that prioritises hot air and profit margins over human lives. The UK's legal air pollution regimes are an industry-dominated process that relies on negotiation and partnership between regulators and polluters. The entire model seeks to assist business compliance rather than punish corporate offenders. There is no language of ‘crime’ in relation to UK air pollution violations but rather a discourse of ‘exceedence’ (Walters, 2010). It is a regulatory system premised not on the ‘polluter pays’ principle but on the ‘polluter profit’ principle.


The civil liability provisions relating to the assessment of damages for past and future economic loss have abrogated the common law principle of full compensation by imposing restrictions on the damages award, most commonly by a “three times average weekly earnings” cap. This consideration of the impact of those provisions is informed by a case study of the Supreme Court of Victoria Court of Appeal decision, Tuohey v Freemasons Hospital (Tuohey), which addressed the construction and arithmetic operation of the Victorian cap for high income earners. While conclusions as to the operation of the cap outside of Victoria can be drawn from Tuohey, a number of issues await judicial determination. These issues, which include the impact of the damages caps on the calculation of damages for economic loss in the circumstances of fluctuating income; vicissitudes; contributory negligence; claims per quod servitium amisit; and claims by dependants, are identified and potential resolutions discussed.
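
The arithmetic of a “three times average weekly earnings” cap can be sketched as follows. This is an illustrative simplification, not the statutory text or the Tuohey construction; the function name and dollar figures are assumptions for the example.

```python
# Illustrative sketch: a weekly economic-loss award limited to 3x average
# weekly earnings (AWE). Figures are hypothetical, not statutory values.

def capped_weekly_loss(gross_weekly_loss: float, avg_weekly_earnings: float) -> float:
    """Return the recoverable weekly loss after applying the 3x AWE cap."""
    cap = 3 * avg_weekly_earnings
    return min(gross_weekly_loss, cap)

# A high earner losing $6,000/week where AWE is $1,500 recovers only $4,500/week;
# a claimant below the cap recovers the full loss.
print(capped_weekly_loss(6000.0, 1500.0))  # 4500.0
print(capped_weekly_loss(3000.0, 1500.0))  # 3000.0
```

The open questions noted above (fluctuating income, vicissitudes, contributory negligence) concern which figure enters this calculation and in what order the deductions are applied.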


The UN Convention on the Rights of Persons with Disabilities (CRPD) promotes equal and full participation by children in education. Equity of educational access for all students, including students with disability, free from discrimination, is the first stated national goal of Australian education (MCEETYA 2008). Australian federal disability discrimination law, the Disability Discrimination Act 1992 (DDA), follows the Convention, with the federal Disability Standards for Education 2005 (DSE) enacting specific requirements for education. This article discusses equity of processes for inclusion of students with disability in Australian educational accountability testing, including international tests in which many countries participate. The conclusion drawn is that equitable inclusion of students with disability in current Australian educational accountability testing is not occurring from a social perspective and is not in principle compliant with law. However, given the reluctance of courts to intervene in education matters and the uncertainty of an outcome in any court consideration, the discussion shows that equitable inclusion in accountability systems is available through policy change rather than expensive, and possibly unsuccessful, legal challenges.


Filtration using granular media such as quarried sand, anthracite and granular activated carbon is a well-known technique used in both water and wastewater treatment. A relatively new prefiltration method called pebble matrix filtration (PMF) technology has proved effective in treating high-turbidity water during the heavy rain periods that occur in many parts of the world. Sand and pebbles are the principal filter media used in the PMF laboratory and pilot field trials conducted in the UK, Papua New Guinea and Serbia. However, during the first full-scale trials at a water treatment plant in Sri Lanka in 2008, problems were encountered in sourcing pebbles of the required uniform size and shape due to cost, scarcity and Government regulations on pebble dredging. As an alternative to pebbles, hand-made clay pebbles (balls) were fired in a kiln and their performance was evaluated for the sustainability of the PMF system. Clay balls within a filter bed are subjected to stresses due to self-weight and overburden; it is therefore important that they can withstand these stresses in water-saturated conditions. In this paper, experimentally determined physical properties of hand-made clay balls are described, including compression failure load (uniaxial compressive strength) and theoretical tensile strength at failure. Hand-made clay balls fired at kiln temperatures between 875 °C and 960 °C gave failure loads of between 3.0 kN and 7.1 kN. In another test, clay balls fired at 1250 °C gave a failure load of 35.0 kN, compared with natural Scottish cobbles with an average failure load of 29.5 kN. The experimentally obtained uniaxial compressive strength of the clay balls is presented in terms of their tensile yield stress. Based on the effective stress principle in soil mechanics, a method for estimating the maximum theoretical load on clay balls used as filter media is proposed and compared with the experimental failure loads.
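
The effective stress principle invoked above can be sketched numerically. This is a minimal illustration of σ′ = σ − u in a saturated bed, not the paper's calibrated method; the unit weight, bed depth, ball diameter, and the load model (effective stress times projected ball area) are all assumptions for the example.

```python
# Hedged sketch of estimating the load on a clay ball in a fully saturated
# filter bed using the effective stress principle (sigma' = sigma - u).
# All parameter values are illustrative, not the paper's measured data.
import math

GAMMA_W = 9.81  # unit weight of water, kN/m^3

def effective_stress(depth_m: float, gamma_sat: float = 18.0) -> float:
    """Effective vertical stress (kPa) at a given depth in a saturated bed."""
    total_stress = gamma_sat * depth_m   # sigma = gamma_sat * z
    pore_pressure = GAMMA_W * depth_m    # u = gamma_w * z
    return total_stress - pore_pressure  # sigma' = sigma - u

def ball_load_kN(depth_m: float, ball_diameter_m: float = 0.05) -> float:
    """Assumed load model: effective stress times a ball's projected area."""
    area = math.pi * (ball_diameter_m / 2) ** 2  # m^2
    return effective_stress(depth_m) * area      # kPa * m^2 = kN

# e.g. a 50 mm ball 1.5 m deep in a bed with gamma_sat = 18 kN/m^3
print(round(effective_stress(1.5), 3))  # 12.285 kPa
print(round(ball_load_kN(1.5), 4))
```

Comparing such a theoretical load against the measured failure loads (3.0 kN to 7.1 kN) is the kind of check the proposed method enables.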


In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
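
The greedy, confidence-based scheduling described in the abstract can be sketched as follows. This is an illustrative reading, not WebPut's actual implementation: the scoring function, the dependency bonus, and all names are assumptions standing in for the real query-formulation and IE machinery.

```python
# Illustrative sketch of confidence-based greedy imputation scheduling:
# at each step, impute the missing value whose best candidate query
# currently has the highest confidence. Already-imputed values can raise
# the confidence of later queries, which is why ordering matters.

def greedy_schedule(missing, confidence):
    """missing: iterable of cell ids; confidence: fn(cell, imputed) -> (score, query)."""
    imputed = {}
    order = []
    remaining = set(missing)
    while remaining:
        # choose the cell whose most effective imputation query scores highest now
        best = max(remaining, key=lambda c: confidence(c, imputed)[0])
        _, query = confidence(best, imputed)
        imputed[best] = query  # stand-in for issuing the query and extracting a value
        order.append(best)
        remaining.remove(best)
    return order

# Toy scoring: cell "b" becomes easier to impute once its row-mate "a" is filled.
deps = {"b": "a"}
def conf(cell, done):
    base = {"a": 0.9, "b": 0.4, "c": 0.6}[cell]
    bonus = 0.3 if deps.get(cell) in done else 0.0
    return base + bonus, f"query-for-{cell}"

print(greedy_schedule({"a", "b", "c"}, conf))  # ['a', 'b', 'c']
```

With static scores the order would be a, c, b; the dependency bonus moves b ahead of c, which is the effect the iterative scheduling exploits.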


Five Canadian high school Chemistry classes in one school, taught by three different teachers, studied the concepts of dynamic chemical equilibria and Le Chatelier’s Principle. Some students received traditional teacher-led explanations of the concept first and used an interactive scientific visualisation second, while others worked with the visualisation first and received the teacher-led explanation second. Students completed a test of their conceptual understanding of the relevant concepts prior to instruction, after the first instructional session and at the end of instruction. Data on students’ academic achievement (highest, middle or lowest third of the class on the mid-term exam) and gender were also collected to explore the relationship between these factors, conceptual development and instructional sequencing. Results show, within this context at least, that teaching sequence is not important in terms of students’ conceptual learning gains.


This article suggests that the issue of proportionality in anti-doping sanctions has been inconsistently dealt with by the Court of Arbitration for Sport (CAS). Given CAS’s pre-eminent role in interpreting and applying the World Anti-Doping Code under the anti-doping policies of its signatories, an inconsistent approach to the application of the proportionality principle will cause difficulties for domestic anti-doping tribunals seeking guidance as to the appropriateness of their doping sanctions.


This chapter explores the objectives, principles and methods of climate law. The United Nations Framework Convention on Climate Change (UNFCCC) lays the foundations of the international regime by setting out its ultimate objective in Article 2, the key principles in Article 3, and the methods of the regime in Article 4. The ultimate objective of the regime – to avoid dangerous anthropogenic interference – is examined, and assessments of the Intergovernmental Panel on Climate Change (IPCC) are considered when seeking to understand the definition of this concept. The international environmental principles of state sovereignty and responsibility, preventative action, cooperation, sustainable development, precaution, polluter pays, and common but differentiated responsibility are then examined and their incorporation within the international climate regime instruments evaluated. This is followed by an examination of the methods used by the mitigation and adaptation regimes in seeking to achieve the objective of the UNFCCC. Methods of the mitigation regime include domestic implementation of policies, setting of standards and targets, allocation of rights, use of flexibility mechanisms, and reporting. While it is noted that the methods of the adaptation regime are still evolving, they include measures such as impact assessments, national adaptation plans and the provision of funding.


A Delay Tolerant Network (DTN) is one where nodes can be highly mobile, with long message delay times forming dynamic and fragmented networks. Traditional centralised network security is difficult to implement in such a network; distributed security solutions are therefore more desirable in DTN implementations. Establishing effective trust in distributed systems with no centralised Public Key Infrastructure (PKI), such as in the Pretty Good Privacy (PGP) scheme, usually requires human intervention. Our aim is to build and compare different decentralised trust systems for implementation in autonomous DTN systems. In this paper, we utilise a key distribution model based on the Web of Trust principle, and employ a simple leverage-of-common-friends trust system to establish initial trust in autonomous DTNs. We compare this system with two other methods of autonomously establishing initial trust by introducing a malicious node and measuring the distribution of malicious and fake keys. Our results show that the new trust system not only mitigates the distribution of malicious and fake keys by 40% at the end of the simulation, but also improves key distribution between nodes. This paper contributes a comparison of three decentralised trust systems that can be employed in autonomous DTN systems.
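
A leverage-of-common-friends rule of the kind described can be sketched as follows. This is a minimal illustration under assumed data structures, not the paper's protocol: the threshold value, node names, and the voucher lookup are all hypothetical.

```python
# Minimal sketch of bootstrapping trust from common friends in a
# decentralised DTN: a stranger's key is accepted only if enough
# already-trusted nodes vouch for it. Threshold and data are assumptions.

def accept_key(stranger, trusted, vouchers, threshold=2):
    """trusted: set of trusted node ids; vouchers: fn(node) -> set of ids vouching for it."""
    common_friends = trusted & vouchers(stranger)  # trusted nodes who endorse the stranger
    return len(common_friends) >= threshold

trusted = {"A", "B", "C"}
endorsements = {"X": {"A", "B", "M"}, "Y": {"M", "N"}}
lookup = lambda n: endorsements.get(n, set())

print(accept_key("X", trusted, lookup))  # True: trusted A and B both vouch for X
print(accept_key("Y", trusted, lookup))  # False: no trusted node vouches for Y
```

A malicious node like M in the example cannot push a fake key on its own, which is the mitigation effect measured in the simulation.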


What are the information practices of teen content creators? In the United States over two thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience the information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual artwork, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data was gathered through semi-structured interviews and observation of teens' digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data was collected until a substantive theory was constructed and no new information emerged from data collection. The theory that was constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. In describing the five information practices there are three necessary descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities that occur in the categories of gathering, thinking and creating.
The experiences of information and information actions intersect in the information practices, which are situated within the specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another and this is represented in a graphic model and explanation.


In this panel, we showcase approaches to teaching for creativity in disciplines of the Media, Entertainment and Creative Arts School and the School of Design within the Creative Industries Faculty (CIF) at QUT. The Faculty is enormously diverse, with 4,000 students enrolled across a total of 20 disciplines. Creativity is a unifying concept in CIF, both as a graduate attribute, and as a key pedagogic principle. We take as our point of departure the assertion that it is not sufficient to assume that students of tertiary courses in creative disciplines are ‘naturally’ creative. Rather, teachers in higher education must embrace their roles as facilitators of development and learning for the creative workforce, including working to build creative capacity (Howkins, 2009). In so doing, we move away from Renaissance notions of creativity as an individual genius, a disposition or attribute which cannot be learned, towards a 21st century conceptualisation of creativity as highly collaborative, rhizomatic, and able to be developed through educational experiences (see, for instance, Robinson, 2006; Craft, 2001; McWilliam & Dawson, 2008). It has always been important for practitioners of the arts and design to be creative. Under the national innovation agenda (Bradley et al, 2008) and creative industries policy (e.g., Department for Culture, Media and Sport, 2008; Office for the Arts, 2011), creativity has been identified as a key determinant of economic growth, and thus developing students’ creativity has now become core higher education business across all fields. Even within the arts and design, professionals are challenged to be creative in new ways, for new purposes, in different contexts, and using new digital tools and platforms. Teachers in creative disciplines may have much to offer to the rest of the higher education sector, in terms of designing and modelling innovative and best practice pedagogies for the development of student creative capability. 
Information and Communication Technologies such as mobile learning, game-based learning, collaborative online learning tools and immersive learning environments offer new avenues for creative learning, although analogue approaches may also have much to offer, and should not be discarded out of hand. Each panelist will present a case study of their own approach to teaching for creativity, and will address the following questions with respect to their case: 1. What conceptual view of creativity does the case reflect? 2. What pedagogical approaches are used, and why were these chosen? What are the roles of innovative learning approaches, including ICTs, if any? 3. How is creativity measured or assessed? How do students demonstrate creativity? We seek to identify commonalities and contrasts between and among the pedagogic case studies, and to answer the question: what can we learn about teaching creatively and teaching for creativity from CIF best practice?


The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed from the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise three types of asset health information (including failure event data (i.e. 
observed and/or suspended), condition data, and operating environment data) in a model that would give more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. 
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators could be nil in EHM, the condition indicators are always present, because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industry applications, due to sparse failure event data for assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. 
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including: extension of the new parameter estimation method to time-dependent covariate effects and missing data; application of EHM to both repairable and non-repairable systems using field data; and a decision support model linked to the estimated reliability results.
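
The structure described for the semi-parametric EHM can be sketched numerically: a Weibull baseline hazard in time, reformed by a condition indicator, multiplied by an exponential term in the operating environment indicators. This is an illustrative reading of the abstract, not the thesis's fitted model; the log-linear condition term and every parameter value below are assumptions.

```python
# Hedged sketch of a semi-parametric EHM-style hazard:
#   h(t, z_c, z_e) = h0(t, z_c) * exp(gamma . z_e)
# where the baseline h0 is Weibull in time and updated by a condition
# indicator z_c (e.g. vibration level), and the environment covariates z_e
# (e.g. load) accelerate or decelerate failure. Functional forms and
# parameters are illustrative assumptions.
import math

def ehm_hazard(t, condition, environment,
               beta=1.8, eta=1000.0, alpha=0.05, gamma=(0.3,)):
    # Weibull baseline in time, reformed by the condition indicator
    # through an assumed log-linear term exp(alpha * z_c).
    h0 = (beta / eta) * (t / eta) ** (beta - 1) * math.exp(alpha * condition)
    # Environment indicators scale the hazard up (accelerators) or down.
    return h0 * math.exp(sum(g * z for g, z in zip(gamma, environment)))

h_base = ehm_hazard(500.0, condition=0.0, environment=(0.0,))
h_degraded = ehm_hazard(500.0, condition=10.0, environment=(1.0,))
print(h_degraded > h_base)  # True: degradation and load both raise the hazard
```

Note that, unlike PHM, the condition indicator here enters the baseline itself rather than the proportional covariate term, which is one way to read the abstract's claim that the proportionality assumption is avoided.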