779 results for anorexie restrictive


Relevance: 10.00%

Abstract:

The introduction of the Australian curriculum, the use of standardised testing (e.g. NAPLAN) and the My School website are couched in a context of accountability. This circumstance has stimulated and in some cases renewed a range of boundaries in Australian Education. The consequences that arise from standardised testing have accentuated the boundaries produced by social reproduction in education which has led to an increase in the numbers of students disengaging from mainstream education and applying for enrolment at the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN). Boundaries are created for many young people who are denied access to credentials and certification as a result of being excluded from or in some way disengaging from standardised education and testing. Young people who participate at the EREAFLCN arrive with a variety of forms of cultural capital that are not valued in current education and employment fields. This is not to say that these young people’s different forms of cultural capital have no value, but rather that such funds of knowledge, repertoires and cultural capital are not valued by the majority of powerful agents in educational and employment fields. How then can the qualitative value of traditionally unorthodox - yet often intricate, ingenious, and astute - versions of cultural capital evident in the habitus of many young people be made to count, be recognised, be valuated? Can a process of educational assessment be a field of capital exchange and a space which breaches boundaries through a valuating process? This paper reports on the development of an innovative approach to assessment in an alternative education institution designed for the re-engagement of ‘at risk’ youth who have left formal schooling. A case study approach has been used to document the engagement of six young people, with an educational approach described as assessment for learning as a field of exchange across two sites in the EREAFLCN. In order to capture the broad range of students’ cultural and social capital, an electronic portfolio system (EPS) is under trial. The model draws on categories from sociological models of capital and reconceptualises the eportfolio as a sociocultural zone of learning and development. Results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials. In this way restrictive boundaries can be breached and a more equitable outcome achieved for many young Australians.

Relevance: 10.00%

Abstract:

The introduction of the Australian curriculum, the use of standardised testing (e.g. NAPLAN) and the My School website have stimulated and in some cases renewed a range of boundaries for young people in Australian Education. Standardised testing has accentuated social reproduction in education with an increase in the numbers of students disengaging from mainstream education and applying for enrolment at the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN). Many young people are denied access to credentials and certification as they become excluded from standardised education and testing. The creativity and skills of marginalised youth are often evidence of general capabilities and yet do not appear to be recognised in mainstream educational institutions when standardised approaches are adopted. Young people who participate at the EREAFLCN arrive with a variety of forms of cultural capital, frequently utilising general capabilities, which are not able to be valued in current education and employment fields. This is not to say that these young people’s different forms of cultural capital have no value, but rather that such funds of knowledge, repertoires and cultural capital are not valued by the majority of powerful agents in educational and employment fields. How then can the inherent value of traditionally unorthodox - yet often intricate, ingenious, and astute - versions of cultural capital evident in the habitus of many young people be made to count, be recognised, be valuated? Can a process of educational assessment be a field of capital exchange and a space which crosses boundaries through a valuating process? This paper reports on the development of an innovative approach to assessment in an alternative education institution designed for the re-engagement of ‘at risk’ youth who have left formal schooling. A case study approach has been used to document the engagement of six young people, with an educational approach described as assessment for learning as a field of exchange across two sites in the EREAFLCN. In order to capture the broad range of students’ cultural and social capital, an electronic portfolio system (EPS) is under trial. The model draws on categories from sociological models of capital and reconceptualises the eportfolio as a sociocultural zone of learning and development. Results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials. In this way restrictive boundaries can be breached and a more equitable outcome achieved for many young Australians.

Relevance: 10.00%

Abstract:

The Australian Democrats have recently proposed federal legislation which requires consideration of open source software when making decisions about public agency procurement contracts. A similar legislative proposal has been made in South Australia. The Financial Management and Accountability (Anti Restrictive Software Practices) Amendment Bill 2003 (Cwth) aims to redress concerns that “a small number of software manufacturers have a disproportionate and restrictive hold on the supply, use and development of software”...

Relevance: 10.00%

Abstract:

The estimation of phylogenetic divergence times from sequence data is an important component of many molecular evolutionary studies. There is now a general appreciation that the procedure of divergence dating is considerably more complex than that initially described in the 1960s by Zuckerkandl and Pauling (1962, 1965). In particular, there has been much critical attention toward the assumption of a global molecular clock, resulting in the development of increasingly sophisticated techniques for inferring divergence times from sequence data. In response to the documentation of widespread departures from clocklike behavior, a variety of local- and relaxed-clock methods have been proposed and implemented. Local-clock methods permit different molecular clocks in different parts of the phylogenetic tree, thereby retaining the advantages of the classical molecular clock while casting off the restrictive assumption of a single, global rate of substitution (Rambaut and Bromham 1998; Yoder and Yang 2000).
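
To make the clock distinction concrete, here is a minimal, hypothetical sketch (the lineage names, rates, and branch length are illustrative assumptions, not values from any of the cited studies) of how a single global substitution rate versus lineage-specific local rates changes both expected branch lengths and the divergence times inferred from them.

```python
# Minimal sketch (not any published method): contrast a global molecular
# clock with a local clock on a toy tree. A branch length in expected
# substitutions/site equals rate * time; a local clock lets designated
# lineages evolve at their own rate.

GLOBAL_RATE = 1e-9          # substitutions/site/year, assumed for illustration

# hypothetical branches: (name, duration in years, lineage-specific rate)
branches = [
    ("fast_lineage", 25e6, 2e-9),        # clade granted its own, faster clock
    ("background_lineage", 25e6, 1e-9),  # clade evolving at the background rate
]

for name, years, local_rate in branches:
    global_len = GLOBAL_RATE * years
    local_len = local_rate * years
    print(f"{name}: global-clock length={global_len:.4f}, "
          f"local-clock length={local_len:.4f} subst/site")

# Inverting the relationship gives a divergence-time estimate, t = length/rate,
# so a misspecified global rate biases the inferred date.
observed_length = 0.05                     # subst/site on the fast branch
t_global = observed_length / GLOBAL_RATE   # 5.0e7 years (biased upward)
t_local = observed_length / 2e-9           # 2.5e7 years
print(f"time under global clock: {t_global:.3e} yr; "
      f"under local clock: {t_local:.3e} yr")
```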

Relevance: 10.00%

Abstract:

There is still no comprehensive information strategy governing access to and reuse of public sector information, applying on a nationwide basis, across all levels of government - local, state and federal - in Australia. This is the case both for public sector materials generally and for spatial data in particular. Nevertheless, the last five years have seen some significant developments in information policy and practice, the result of which has been a considerable lessening of the barriers that previously acted to impede the accessibility and reusability of a great deal of spatial and other material held by public sector agencies. Much of the impetus for change has come from the spatial community, which has for many years been a proponent of the view “that government held information, and in particular spatial information, will play an absolutely critical role in increasing the innovative capacity of this nation.” However, the potential of government spatial data to contribute to innovation will remain unfulfilled without reform of policies on access and reuse as well as the pervasive practices of public sector data custodians who have relied on government copyright to justify the imposition of restrictive conditions on its use.

Relevance: 10.00%

Abstract:

Pre-packaged administrations have been prevalent in the UK for years. However, Australia's voluntary administration regime has been more restrictive of the practice. This article analyses the evolution of UK pre-packs, why they are not prevalent in Australia and the challenges for UK and Australian lawmakers in striking the right balance with pre-packs in their respective administration regimes. The article proposes a mechanism that might make ‘connected-party’ pre-pack business sales work more fairly for stakeholders — that is, by obligating a connected-party purchaser to make a future-income contribution in favour of the insolvent company whose business has been ‘rescued’ by a pre-packaged sale in administration.

Relevance: 10.00%

Abstract:

The average structure (C1̄) of a volcanic plagioclase megacryst with composition An48, from the Hogarth Ranges, Australia, has been determined using three-dimensional, single-crystal neutron and X-ray diffraction data. Least-squares refinements, incorporating anisotropic thermal motion of all atoms and an extinction correction, resulted in weighted R factors (based on intensities) of 0.076 and 0.056, respectively, for the neutron and X-ray data. Very weak e reflections could be detected in long-exposure X-ray and electron diffraction photographs of this crystal, but the refined average structure is believed to be unaffected by the presence of such a weak superstructure. The ratio of the scattering power of Na to that of Ca is different for X-ray and neutron radiation, and this radiation dependence of scattering power has been used to determine the distribution of Na and Ca over a split-atom M site (two sites designated M' and M'') in this An48 plagioclase. Relative peak-height ratios M'/M'', revealed in difference Fourier sections calculated from neutron and X-ray data, formed the basis for the cation-distribution analysis. As neutron and X-ray data sets were directly compared in this analysis, it was important that systematic bias between refined neutron and X-ray positional parameters could be demonstrated to be absent. In summary, with an M-site model constrained only by the electron-microprobe-determined bulk composition of the crystal, the following values were obtained for the M-site occupancies: Na(M') = 0.29(7), Na(M'') = 0.23(7), Ca(M') = 0.15(4), and Ca(M'') = 0.33(4). These results indicate that restrictive assumptions about M sites, on which previous plagioclase refinements have been based, are not applicable to this An48 and possibly not to the entire compositional range. T-site ordering determined by (T-O) bond-length variation - t1o = 0.51(1), t1m = t2o = t2m = 0.32(1) - is weak, as might be expected from the volcanic origin of this megacryst.
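
As a rough illustration of why two radiation types resolve the split-site occupancies, the following sketch sets up four linear constraints (two mass balances and one M'/M'' peak-ratio equation per radiation) and solves for the four occupancies. The scattering powers are standard textbook values; the bulk composition and the peak-height ratios are assumptions chosen for illustration, and the real analysis of difference-Fourier peak heights is considerably more involved.

```python
import numpy as np

# Toy reconstruction of the two-radiation occupancy idea: the Na:Ca
# scattering-power ratio differs between neutrons (coherent scattering
# lengths) and X-rays (~electron counts), so two independent M'/M''
# peak ratios plus the bulk composition fix the four site occupancies.
b_Na, b_Ca = 3.63, 4.70   # neutron scattering lengths, fm
f_Na, f_Ca = 10.0, 18.0   # X-ray scattering power ~ electrons (Na+, Ca2+)

Na_tot, Ca_tot = 0.52, 0.48   # bulk composition (assumed, ~An48)
r_n, r_x = 0.737, 0.680       # illustrative M'/M'' peak-height ratios

# Unknowns: [Na_M', Na_M'', Ca_M', Ca_M'']
A = np.array([
    [1.0,  1.0,          0.0,  0.0],          # Na mass balance
    [0.0,  0.0,          1.0,  1.0],          # Ca mass balance
    [b_Na, -r_n * b_Na,  b_Ca, -r_n * b_Ca],  # neutron M'/M'' ratio
    [f_Na, -r_x * f_Na,  f_Ca, -r_x * f_Ca],  # X-ray M'/M'' ratio
])
rhs = np.array([Na_tot, Ca_tot, 0.0, 0.0])
occ = np.linalg.solve(A, rhs)
print(dict(zip(["Na_M'", "Na_M''", "Ca_M'", "Ca_M''"], occ.round(2))))
# -> approximately 0.29, 0.23, 0.15, 0.33 for these illustrative inputs,
#    i.e. the order of the occupancies reported in the abstract
```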

Relevance: 10.00%

Abstract:

Amidst a proliferation of bestseller books, blockbuster films, television documentaries and sensational news reports, public awareness campaigns have claimed their place in a growing chorus of concern about the crime of human trafficking. These campaigns aim to capture the public’s support in efforts to eliminate a ‘modern slave trade’ in which individuals seeking a better life are transported across borders and forced into exploitative labour conditions. Because primary campaign materials (posters, print ads, billboards) typically allow for only a single image and minimal text, it is unlikely that these awareness campaigns can accurately convey the complexity of the trafficking problem. This chapter explores how the depictions of trafficking victims in awareness campaigns can exclude those who do not fit a restrictive narrative mould. Nils Christie’s pivotal work on the construction of society’s ideal victim is the lens through which this chapter examines the literal ‘poster child’ of the anti-trafficking movement.

Relevance: 10.00%

Abstract:

Twenty-first century learners operate in organic, immersive environments. A pedagogy of student-centred learning is not a recipe for rooms. A contemporary learning environment is like a landscape that grows, morphs, and responds to the pressures of the context and micro-culture. There is no single adaptable solution, nor a suite of off-the-shelf answers; propositions must be customisable and infinitely variable. They must be indeterminate and changeable, based on the creation of learning places, not restrictive or constraining spaces. A sustainable solution will be un-fixed, responsive to the life cycle of the components and materials, and able to be manipulated by the users; it will create and construct its own history. Learning occurs as formal education with situational knowledge structures, but also as informal learning, active learning, blended learning, social learning, incidental learning, and unintended learning. These are not spatial concepts but socio-cultural patterns of discovery. Individual learning requirements must run free and need to be accommodated as the learner sees fit. The spatial solution must accommodate and enable a full array of learning situations. It is a system, not an object. There are three major components:

1. The determinate landscape: an in-situ concrete 'plate' that is permanent. It predates the other components of the system and remains as a remnant/imprint/fossil after the other components of the system have been relocated. It is a functional learning landscape in its own right, enabling a variety of experiences and activities.

2. The indeterminate landscape: a kit of pre-fabricated 2-D panels assembled in a unique manner at each site to suit the client and context, manufactured to the principles of design-for-disassembly. A symbiotic, barnacle-like system that attaches itself to the existing infrastructure through the determinate landscape, which acts as a fast-growth rhizome. A carapace of protective panels, infinitely variable to create enclosed, semi-enclosed, and open learning places.

3. The stations: pre-fabricated packages of highly-serviced space connected through the determinate landscape. There are four main types of stations: wet-room learning centres, dry-room learning centres, ablutions, and low-impact building services. Entirely customised at the factory and delivered to site, the stations can be retro-fitted to suit a new context during relocation.

Principles of design for disassembly:

Material principles
• use recycled and recyclable materials
• minimise the number of types of materials
• no toxic materials
• use lightweight materials
• avoid secondary finishes
• provide identification of material types

Component principles
• minimise/standardise the number of types of components
• use mechanical, not chemical, connections
• design for use of common tools and equipment
• provide easy access to all components
• make component size suit the means of handling
• provide built-in means of handling
• design to realistic tolerances
• use a minimum number of connectors and a minimum number of types

System principles
• design for durability and repeated use
• use prefabrication and mass production
• provide spare components on site
• sustain all assembly and material information

Relevance: 10.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the principles of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also describes the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nought in EHM, condition indicators are always present, because they are observed and measured as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industry applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
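
As a generic illustration of the ingredients described above (this is not the thesis's actual EHM or its parameter estimates), the following sketch combines a Weibull baseline hazard whose scale is updated by a condition indicator with an exponential covariate link over operating environment indicators; all parameter values and indicator names are assumptions.

```python
import numpy as np

# A minimal, generic covariate-based hazard sketch: a Weibull baseline whose
# scale shrinks as a condition indicator (response variable) grows, multiplied
# by an exponential link over operating environment indicators (explanatory
# variables), in the style of proportional-hazards covariate models.

def hazard(t, condition, env, beta=2.0, eta0=1000.0, a=0.5, gamma=(0.3,)):
    """All parameter values are illustrative assumptions.

    t         : operating time (hours)
    condition : normalized degradation indicator in [0, 1] (e.g. vibration)
    env       : tuple of operating-environment covariates (e.g. load)
    """
    eta = eta0 * (1.0 - a * condition)          # degradation shrinks the scale
    baseline = (beta / eta) * (t / eta) ** (beta - 1.0)  # Weibull hazard
    accel = np.exp(np.dot(gamma, env))          # environment accelerates/decelerates
    return baseline * accel

def reliability(t, condition, env, n=2000):
    """R(t) = exp(-integral of hazard from 0 to t), trapezoidal approximation.
    The condition history is held fixed here purely for illustration."""
    ts = np.linspace(1e-6, t, n)
    cumulative_hazard = np.trapz([hazard(s, condition, env) for s in ts], ts)
    return np.exp(-cumulative_hazard)

print(f"h(500 h) = {hazard(500.0, condition=0.4, env=(1.2,)):.2e} per hour")
print(f"R(500 h) = {reliability(500.0, condition=0.4, env=(1.2,)):.3f}")
```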

Relevance: 10.00%

Abstract:

Olivier Corten’s The Law Against War is a comprehensive, meticulously-researched study of contemporary international law governing the use of armed force in international relations. As a translated and updated version of a 2008 book published in French, it offers valuable insights into the positivist methodology that underpins much of the European scholarship of international law. Corten undertakes a rigorous analysis of state practice from 1945 onwards, with a view to clarifying the current meaning and scope of international law’s prohibition on the use of force. His central argument is that the majority of states remain attached to a strict interpretation of this rule. For Corten, state practice indicates that the doctrines of anticipatory self-defence, pre-emptive force and humanitarian intervention have no basis in contemporary international law. His overall position accords with a traditional, restrictive view of the circumstances in which states are permitted to use force...

Relevance: 10.00%

Abstract:

A body of research in conversation analysis has identified a range of structurally-provided positions in which sources of trouble in talk-in-interaction can be addressed using repair. These practices are contained within what Schegloff (1992) calls the repair space. In this paper, I examine a rare instance in which a source of trouble is not resolved within the repair space and comes to be addressed outside of it. The practice by which this occurs is a post-completion account; that is, an account that is produced after the possible completion of the sequence containing a source of trouble. Unlike fourth position repair, the final repair position available within the repair space, this account is not made in preparation for a revised response to the trouble-source turn. Its more restrictive aim, rather, is to circumvent an ongoing difference between the parties involved. I argue that because the trouble is addressed in this manner, and in this particular position, the repair space can be considered as being limited to the sequence in which a source of trouble originates.

Relevance: 10.00%

Abstract:

Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, from payment systems to assisting the lives of elderly or disabled people. Security threats for these devices become increasingly dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level. Therefore, third-party developers have the opportunity to develop kernel-based low-level security tools, which is not normal for smartphone platforms. Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of "open" Linux, in comparison to former proprietary platforms which have very restrictive SDKs and corresponding APIs. Symbian OS, for example, holding the greatest market share among all smartphone OSs, was closing critical APIs to common developers and introduced application certification. This was done since this OS was the main target for smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. In this work, we present our current results in analyzing the security of Android smartphones with a focus on its Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution in this work is three-fold. First, we analyze the Android framework and the Linux kernel to check security functionalities. We survey well-accepted security mechanisms and tools which can increase device security. We provide descriptions of how to adopt these security tools on the Android kernel, and provide their overhead analysis in terms of resource usage. As open smartphones are released and may increase their market share similarly to Symbian, they may attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature and intrusion detection methods in the Android environment. We focus on monitoring events on the kernel; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment. Our third contribution involves initial results of our malware detection mechanism based on static function call analysis. We identified approximately 105 Executable and Linking Format (ELF) executables installed on the Linux side of Android. We performed a statistical analysis of the function calls used by these applications. The results of the analysis can be compared to newly installed applications for detecting significant differences. Additionally, certain function calls indicate malicious activity. Therefore, we present a simple decision tree for deciding the suspiciousness of the corresponding application. Our results present a first step towards detecting malicious applications on Android-based devices.
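
To illustrate the flavour of such a static function-call analysis (a hypothetical sketch, not the authors' tool: the suspicious-call set, thresholds, and baseline binaries are illustrative assumptions), one can list the undefined dynamic symbols of ELF executables with the standard binutils command nm -D, compare their frequencies against a baseline of known-good binaries, and apply a simple decision rule.

```python
import subprocess
from collections import Counter

# Sketch of static function-call analysis over ELF binaries: extract imported
# library calls, build a per-call frequency baseline from known-good binaries,
# then flag a new binary via a tiny hand-made decision rule.

SUSPICIOUS = {"ptrace", "execve", "dlopen", "fork"}  # hypothetical indicator set

def dynamic_calls(path):
    """Undefined (imported) dynamic symbols of an ELF binary, via `nm -D`."""
    out = subprocess.run(["nm", "-D", path], capture_output=True, text=True)
    calls = set()
    for line in out.stdout.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-2] == "U":   # 'U' marks undefined symbols
            calls.add(parts[-1])
    return calls

def is_suspicious(path, baseline, rare_cutoff=1, max_rare=5):
    """Toy decision tree: flag known-suspicious APIs, else too many rare calls."""
    calls = dynamic_calls(path)
    if calls & SUSPICIOUS:
        return True
    rare = [c for c in calls if baseline[c] <= rare_cutoff]
    return len(rare) > max_rare

# Usage sketch: baseline built from known-good system binaries, then one test.
baseline = Counter()
for good in ["/bin/ls", "/bin/cat"]:
    baseline.update(dynamic_calls(good))
print(is_suspicious("/bin/ls", baseline))
```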

Relevance: 10.00%

Abstract:

Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, e.g. for payment systems or assisting the lives of elderly or disabled people. Security threats for these devices become more and more dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level, and where third-party developers for the first time have the opportunity to develop kernel-based low-level security tools. Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of "open" Linux, in comparison to former proprietary platforms which have very restrictive SDKs and corresponding APIs. Symbian OS, holding the greatest market share among all smartphone OSs, was even closing critical APIs to common developers and introduced application certification. This was done since this OS was the main target for smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. Since signature-based approaches mainly detect known malware, anomaly-based approaches can be a valuable addition to these systems. They are based on mathematical algorithms processing data that describe the state of a certain device. For gaining this data, a monitoring client is needed that has to extract usable information (features) from the monitored system. Our approach follows a dual system for analyzing these features. On the one hand, functionality for on-device, light-weight detection is provided. But since most algorithms are resource-exhaustive, remote feature analysis is provided on the other hand. Having this dual system enables event-based detection that can react to the current detection need. In our ongoing research we aim to investigate the feasibility of light-weight on-device detection for certain occasions. On other occasions, whenever significant changes are detected on the device, the system can trigger remote detection with heavy-weight algorithms for better detection results. In the absence of the server, or as a supplementary approach, we also consider a collaborative scenario. Here, mobile devices sharing a common objective are enabled by a collaboration module to share information, such as intrusion detection data and results. This is based on an ad-hoc network mode that can be provided by the WiFi or Bluetooth adapter nearly every smartphone possesses.
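
A minimal sketch of the dual detection idea described above (hypothetical feature names, training window, and threshold; not the authors' monitoring client) could look as follows: a cheap per-feature z-score runs on the device, and only a significant deviation triggers the heavy-weight remote analysis.

```python
import statistics

# On-device light-weight anomaly scoring with a remote-analysis trigger:
# learn per-feature mean/stdev from a window of normal behavior, then flag
# samples whose maximum z-score exceeds a threshold.

class LightweightDetector:
    def __init__(self, history, threshold=3.0):
        # per-feature (mean, stdev) learned from assumed-normal samples
        self.stats = {f: (statistics.mean(v), statistics.stdev(v))
                      for f, v in history.items()}
        self.threshold = threshold

    def score(self, sample):
        """Max absolute z-score across features: cheap enough for on-device use."""
        return max(abs(sample[f] - m) / (s or 1.0)
                   for f, (m, s) in self.stats.items())

    def check(self, sample):
        if self.score(sample) > self.threshold:
            return "trigger_remote_analysis"   # ship features to the server
        return "ok"

# Hypothetical monitored features and a clearly anomalous sample.
history = {"sms_sent_per_min": [0, 1, 0, 2, 1],
           "bytes_tx_per_min": [5e3, 8e3, 6e3, 7e3, 9e3],
           "cpu_load": [0.2, 0.3, 0.25, 0.35, 0.3]}
detector = LightweightDetector(history)
print(detector.check({"sms_sent_per_min": 40,
                      "bytes_tx_per_min": 2e6,
                      "cpu_load": 0.9}))   # -> "trigger_remote_analysis"
```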

Relevance: 10.00%

Abstract:

Programmed cell death is characterized by a cascade of tightly controlled events that culminate in the orchestrated death of the cell. In multicellular organisms, autophagy and apoptosis are recognized as two principal means by which these genetically determined cell deaths occur. During plant-microbe interactions, cell death programs can mediate both resistant and susceptible events. Via oxalic acid (OA), the necrotrophic phytopathogen Sclerotinia sclerotiorum hijacks host pathways and induces cell death in host plant tissue, resulting in hallmark apoptotic features in a time- and dose-dependent manner. OA-deficient mutants are non-pathogenic and trigger a restricted cell death phenotype in the host that unexpectedly exhibits markers associated with the plant hypersensitive response, including callose deposition and a pronounced oxidative burst, suggesting the plant can recognize and, in this case, respond defensively. The details of this plant-directed restrictive cell death associated with OA-deficient mutants are the focus of this work. Using a combination of electron and fluorescence microscopy, chemical effectors and reverse genetics, we show that this restricted cell death is autophagic. Inhibition of autophagy rescued the non-pathogenic mutant phenotype. These findings indicate that autophagy is a defense response in this necrotrophic fungus/plant interaction and suggest a novel function associated with OA; namely, the suppression of autophagy. These data suggest that not all cell deaths are equivalent: though programmed cell death occurs in both situations, the outcome is predicated on who is in control of the cell death machinery. Based on our data, we suggest that it is not cell death per se that dictates the outcome of certain plant-microbe interactions, but the manner by which cell death occurs that is crucial.