Abstract:
Balboni identifies her interest as the processes of official disclosure and the path taken to civil litigation by survivors of child sexual abuse by Roman Catholic clergy. The empirical data on which this work is based come from in-depth, face-to-face interviews with 22 survivors of clergy sexual abuse who have pursued litigation and 13 of their advocates. Balboni provides a space for survivors' accounts of the 'why' behind their decision making, and of the impact of civil litigation on their lives, to be heard, discussed and contextualized with both clarity and sensitivity. She acknowledges the breadth and depth of survivor responses, and the perspectives of their legal advocates, employing defiance theory, symbolic interaction and other points of analysis to capture the journey of survivors towards litigation and beyond. Balboni's work is deeply poignant in its recognition of survivors' voices, the complex transformative capacity of litigation, the effects of community forming amongst survivors and the complex nature of the 'empowerment' survivors obtain through civil litigation. Acknowledging that, for many survivors, litigation becomes a means of identity change and truth telling, Balboni admits that 'these survivors helped me understand that litigation is more about voice than monetary settlement' (p. 149). This work is not deeply analytical or theoretically rich, but it privileges the voices of survivors and their advocates with sufficient frameworks to contextualize and explain participants' perspectives and experiences.
Abstract:
The higher education sector is undergoing a number of significant changes, the implications of which have yet to emerge. One such change is the increasing reliance by higher education providers on the revenue generated by full-fee-paying international students to fund their operating expenses. The report by the Victorian Ombudsman, Investigation into how Universities Deal with International Students ('Victorian Ombudsman's Report'), tabled in the Victorian Parliament on 27 October 2011, provides evidence that Australian higher education providers may be failing to meet their legal obligations to international students. The Victorian Ombudsman's Report is the result of an investigation into four Victorian universities teaching international students, with a focus on accounting and nursing schools. The report contains evidence that the universities were admitting students with scores below, or at the lower end of, the International English Language Testing System ('IELTS') score considered acceptable. Alternatively, they were relying on their own language-testing admission standards rather than on an independent test such as IELTS. While the universities provided English language support services for their international students after they had been admitted, the Ombudsman was concerned that the universities 'have not dedicated sufficient resources to meet the level of need amongst international students'.
Abstract:
Resistance to chemotherapy and metastases are the major causes of breast cancer-related mortality. Moreover, cancer stem cells (CSC) play critical roles in cancer progression and treatment resistance. Previously, it was found that CSC-like cells can be generated by aberrant activation of epithelial–mesenchymal transition (EMT), thereby making anti-EMT strategies a novel therapeutic option for the treatment of aggressive breast cancers. Here, we report that the transcription factor FOXC2, which is induced in response to multiple EMT signaling pathways and elevated in stem cell-enriched fractions, is a critical determinant of mesenchymal and stem cell properties in cells induced to undergo EMT and in CSC-enriched breast cancer cell lines. More specifically, attenuation of FOXC2 expression using lentiviral short hairpin RNA led to inhibition of the mesenchymal phenotype and the associated invasive and stem cell properties, including reduced mammosphere-forming ability and tumor initiation. In contrast, overexpression of FOXC2 was sufficient to induce CSC properties and spontaneous metastasis in transformed human mammary epithelial cells. Furthermore, a FOXC2-induced gene expression signature was enriched in the claudin-low/basal B breast tumor subtype, which features EMT and CSC characteristics. Having identified PDGFR-β as regulated by FOXC2, we show that the U.S. Food and Drug Administration-approved PDGFR inhibitor sunitinib targets FOXC2-expressing tumor cells, leading to reduced CSC and metastatic properties. Thus, FOXC2 or its associated gene expression program may provide an effective target for anti-EMT-based therapies for the treatment of claudin-low/basal B breast tumors or other EMT-/CSC-enriched tumors.
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially with regard to hardware resource requirements for given problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable-quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and the results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of the hardware resource requirements for memory and data flow operations.
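To make the small-population approach concrete, the following is a minimal sketch of a GA for the TSP with tournament selection, order crossover, and swap mutation; the operators, parameter values, and random city coordinates are illustrative assumptions, not the authors' hardware design.

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour over (x, y) city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in p1[a:b]]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def tsp_ga(cities, pop_size=16, generations=5000, mutation_rate=0.2):
    """A deliberately small population run for many generations."""
    n = len(cities)
    fitness = lambda t: tour_length(t, cities)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]                                # elitism: keep best so far
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)   # tournament selection
            p2 = min(random.sample(pop, 3), key=fitness)
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:            # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=fitness)
    return best, fitness(best)

# Example run on random coordinates (the paper's benchmarks use 48 and 532 cities):
cities = [(random.random(), random.random()) for _ in range(48)]
tour, length = tsp_ga(cities)
```

A small fixed population like this is attractive in hardware because population memory and the selection/crossover data paths dominate FPGA resource usage.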
Abstract:
The advanced programmatic risk analysis and management model (APRAM) is one of the recently developed methods that can be used for risk analysis and management purposes, considering schedule, cost, and quality risks simultaneously. However, this model considers only those failure risks that occur over the design and construction phases of a project's life cycle. While this can be sufficient for projects whose costs during the operating life are much smaller than the construction budget, the model should be modified for infrastructure projects, whose costs over the operating life cycle are significant. In this paper, a modified APRAM is proposed that can consider potential risks occurring over the entire life cycle of the project, including technical and managerial failure risks. The modified model can therefore be used as an efficient decision-support tool for construction managers in the housing industry, in which various alternatives might be technically available. The modified method is demonstrated using a real building project, and this demonstration shows that it can be employed efficiently by construction managers. The Delphi method was applied to identify the failure events and their associated probabilities. The results show that although the initial cost of a cold-formed steel structural system is higher than that of a conventional construction system, the former's failure cost is much lower than the latter's.
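The reported trade-off (higher initial cost but much lower failure cost for cold-formed steel) reduces to a comparison of expected life-cycle costs. A minimal sketch of that comparison follows; all probabilities and cost figures are hypothetical placeholders, not values from the study.

```python
# Expected life-cycle cost = initial cost + sum over failure events of p_i * c_i.
# All figures below are hypothetical placeholders, not values from the study.

def expected_life_cycle_cost(initial_cost, failure_events):
    """failure_events: (probability, consequence cost) pairs, e.g. as
    elicited with the Delphi method."""
    return initial_cost + sum(p * c for p, c in failure_events)

cold_formed_steel = expected_life_cycle_cost(
    initial_cost=1_200_000,
    failure_events=[(0.02, 500_000), (0.01, 2_000_000)])    # -> 1,230,000

conventional = expected_life_cycle_cost(
    initial_cost=1_000_000,
    failure_events=[(0.08, 3_000_000), (0.10, 800_000)])    # -> 1,320,000

# Despite its higher initial cost, the alternative with the lower expected
# failure cost can win once the whole life cycle is taken into account.
```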
Abstract:
This collaborative project by Daniel Mafe and Andrew Brown, one of a number they have been involved in together, conjoins painting and digital sound into a single, large-scale, immersive exhibition/installation. The work as a whole acts as an interstitial point between contrasting approaches to abstraction: the visual and aural, the digital and analogue are pushed into an alliance, and each works to alter perceptions of the other. For example, the paintings no longer mutely sit on the wall to be stared into. The sound seemingly emanating from each work shifts the viewer's typical visual perception and engages their aural sensibilities. This seems to make one more aware of the objects as objects – the surface of each piece is brought into scrutiny – and immerses the viewer more viscerally within the exhibition. Similarly, the sonic experience is focused and concentrated spatially by each painted piece even as the exhibition is dispersed throughout the space. The sounds and images are similar in each locale but not identical; though they may seem the same on casual interaction, closer attention quickly shows this is not the case. In preparing this exhibition, each artist has had to shift their mode of making to accommodate the other's contribution. This was mainly done by a process of emptying, whereby each was called upon to do less to the works they were making and to iterate the works toward a shared conception, blurring notions of individual imagination while maintaining material authorship. Emptying was necessary to enable sufficient porosity, where each medium allowed the other entry to its previously gated domain. The paintings are simple and subtle to allow the odd sonic textures a chance to work on the viewer's engagement with them. The sound remains both abstract, using noise-like textures, and at a low volume, allowing the audience's attention to wander back and forth between aspects of the works.
Abstract:
Background & aims: One aim of the Australasian Nutrition Care Day Survey was to determine the nutritional status and dietary intake of acute care hospital patients. Methods: Dietitians from 56 hospitals in Australia and New Zealand completed a 24-h survey of nutritional status and dietary intake of adult hospitalised patients. Nutritional risk was evaluated using the Malnutrition Screening Tool. Participants ‘at risk’ underwent nutritional assessment using Subjective Global Assessment. Based on the International Classification of Diseases (Australian modification), participants were also deemed malnourished if their body mass index was <18.5 kg/m2. Dietitians recorded participants’ dietary intake at each main meal and snacks as 0%, 25%, 50%, 75%, or 100% of that offered. Results: 3122 patients (mean age: 64.6 ± 18 years) participated in the study. Forty-one percent of the participants were “at risk” of malnutrition. Overall malnutrition prevalence was 32%. Fifty-five percent of malnourished participants and 35% of well-nourished participants consumed ≤50% of the food during the 24-h audit. “Not hungry” was the most common reason for not consuming everything offered during the audit. Conclusion: Malnutrition and sub-optimal food intake is prevalent in acute care patients across hospitals in Australia and New Zealand and warrants appropriate interventions.
Abstract:
In this panel, we showcase approaches to teaching for creativity in disciplines of the Media, Entertainment and Creative Arts School and the School of Design within the Creative Industries Faculty (CIF) at QUT. The Faculty is enormously diverse, with 4,000 students enrolled across a total of 20 disciplines. Creativity is a unifying concept in CIF, both as a graduate attribute and as a key pedagogic principle. We take as our point of departure the assertion that it is not sufficient to assume that students of tertiary courses in creative disciplines are 'naturally' creative. Rather, teachers in higher education must embrace their roles as facilitators of development and learning for the creative workforce, including working to build creative capacity (Howkins, 2009). In so doing, we move away from Renaissance notions of creativity as an individual genius, a disposition or attribute which cannot be learned, towards a 21st century conceptualisation of creativity as highly collaborative, rhizomatic, and able to be developed through educational experiences (see, for instance, Robinson, 2006; Craft, 2001; McWilliam & Dawson, 2008). It has always been important for practitioners of the arts and design to be creative. Under the national innovation agenda (Bradley et al., 2008) and creative industries policy (e.g., Department for Culture, Media and Sport, 2008; Office for the Arts, 2011), creativity has been identified as a key determinant of economic growth, and thus developing students' creativity has now become core higher education business across all fields. Even within the arts and design, professionals are challenged to be creative in new ways, for new purposes, in different contexts, and using new digital tools and platforms. Teachers in creative disciplines may have much to offer to the rest of the higher education sector in terms of designing and modelling innovative and best-practice pedagogies for the development of student creative capability. Information and Communication Technologies such as mobile learning, game-based learning, collaborative online learning tools and immersive learning environments offer new avenues for creative learning, although analogue approaches may also have much to offer, and should not be discarded out of hand. Each panelist will present a case study of their own approach to teaching for creativity, and will address the following questions with respect to their case: 1. What conceptual view of creativity does the case reflect? 2. What pedagogical approaches are used, and why were these chosen? What are the roles of innovative learning approaches, including ICTs, if any? 3. How is creativity measured or assessed? How do students demonstrate creativity? We seek to identify commonalities and contrasts between and among the pedagogic case studies, and to answer the question: what can we learn about teaching creatively and teaching for creativity from CIF best practice?
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at future times is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed on the theory underlying the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach to addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nought in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of the semi-parametric EHM of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision-support model linked to the estimated reliability results.
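As a rough illustration of the model structure described above, the sketch below combines a Weibull baseline hazard updated by a condition indicator with an exponential link over operating environment covariates, and integrates the hazard to obtain reliability. The functional forms and parameter values are assumptions for illustration only; they do not reproduce the thesis's estimation procedure.

```python
import numpy as np

def ehm_style_hazard(t, condition, environment, beta=2.0, eta=1000.0,
                     alpha=0.5, gamma=(0.3,)):
    """Illustrative EHM-style hazard: the baseline depends on both time and
    the condition indicator, while operating environment covariates
    accelerate or decelerate it multiplicatively."""
    baseline = (beta / eta) * (t / eta) ** (beta - 1)   # Weibull hazard in time
    baseline *= np.exp(alpha * condition)               # condition updates baseline
    covariate = np.exp(np.dot(gamma, environment))      # environment link function
    return baseline * covariate

def reliability(times, conditions, environments):
    """R(t) = exp(-cumulative hazard), via trapezoidal integration."""
    h = np.array([ehm_style_hazard(t, c, e)
                  for t, c, e in zip(times, conditions, environments)])
    cum_hazard = np.concatenate(
        ([0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(times))))
    return np.exp(-cum_hazard)

times = np.linspace(1, 800, 50)
conditions = np.linspace(0.1, 2.0, 50)   # e.g. a rising vibration level
environments = [(1.0,)] * 50             # e.g. a constant load factor
R = reliability(times, conditions, environments)
```

Note how, unlike a plain PHM, the condition indicator here reshapes the baseline hazard itself rather than only scaling it, which is the distinction the abstract draws between response-type and explanatory-type covariates.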
Abstract:
Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often used by malicious actors to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Recent incidents prove that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Namely, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provide recommendations in the form of patching strategies, topology changes, and configuration updates.
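As a rough sketch of the multi-step attack identification idea, the code below models hosts, their reachability, and placeholder vulnerabilities, then enumerates attack paths from an entry point with a breadth-first search. A real framework of this kind would be populated from public vulnerability databases (e.g. NVD); the topology and CVE identifiers here are purely illustrative.

```python
from collections import deque

# Hypothetical network model: which hosts each host can reach, and which
# (placeholder) vulnerabilities grant a foothold on each host. A real system
# would populate these from scan results and a public vulnerability database.
reachable = {"web": ["app"], "app": ["db"], "db": []}
exploitable = {"web": ["CVE-XXXX-0001"],
               "app": ["CVE-XXXX-0002"],
               "db":  ["CVE-XXXX-0003"]}

def attack_paths(entry, target):
    """Enumerate multi-step attack paths from entry to target via BFS,
    extending a path only through hosts with an exploitable vulnerability."""
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        host = path[-1]
        if host == target:
            paths.append(path)
            continue
        for nxt in reachable.get(host, []):
            if exploitable.get(nxt) and nxt not in path:   # avoid cycles
                queue.append(path + [nxt])
    return paths

for p in attack_paths("web", "db"):
    print(" -> ".join(p))   # e.g. web -> app -> db
```

Each discovered path then suggests a countermeasure of the kinds the paper names: patch a vulnerability on the path, remove a reachability edge (topology change), or tighten a host's configuration.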
Abstract:
Grouping users in social networks is an important process that improves matching and recommendation activities in social networks. Data mining clustering methods can be used to group the users in social networks. However, existing general-purpose clustering algorithms perform poorly on social network data due to the special nature of users' data in social networks. One main reason is the constraints that need to be considered when grouping users in social networks. Another is the need to capture a large amount of information about users, which imposes computational complexity on an algorithm. In this paper, we propose a scalable and effective constraint-based clustering algorithm based on a global similarity measure that takes into consideration the users' constraints and their importance in social networks. Each constraint's importance is calculated based on the occurrence of that constraint in the dataset. The performance of the algorithm is demonstrated on a dataset obtained from an online dating website using internal and external evaluation measures. Results show that the proposed algorithm increases the accuracy of matching users in social networks by 10% in comparison to other algorithms.
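A minimal sketch of a constraint-weighted global similarity of the kind described, where each constraint's weight is derived from how often it occurs in the dataset. The attribute names and the exact weighting rule are illustrative assumptions, not the paper's measure.

```python
from collections import Counter

def constraint_weights(users, constraints):
    """Weight each constraint by its relative frequency of occurrence in the
    dataset (an illustrative importance rule in the spirit of the paper)."""
    counts = Counter(c for u in users for c in constraints
                     if u.get(c) is not None)
    total = sum(counts.values())
    return {c: counts[c] / total for c in constraints}

def global_similarity(u1, u2, weights):
    """Weighted fraction of constraints on which two users agree.
    Weights sum to 1, so the result lies in [0, 1]."""
    return sum(w for c, w in weights.items()
               if u1.get(c) is not None and u1.get(c) == u2.get(c))

# Toy dating-profile records (hypothetical attributes):
users = [{"age_band": "25-34", "location": "Brisbane", "smoker": "no"},
         {"age_band": "25-34", "location": "Sydney",   "smoker": "no"},
         {"age_band": "45-54", "location": "Brisbane", "smoker": None}]
w = constraint_weights(users, ["age_band", "location", "smoker"])
print(global_similarity(users[0], users[1], w))   # agree on age_band and smoker
```

Frequently occurring constraints dominate the similarity, which is one simple way to encode "importance based on occurrence" without a per-pair recomputation that would hurt scalability.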
A qualitative think aloud study of the early Neo-Piagetian stages of reasoning in novice programmers
Abstract:
Recent research indicates that some of the difficulties faced by novice programmers are manifested very early in their learning. In this paper, we present data from think aloud studies that demonstrate the nature of those difficulties. In the think alouds, novices were required to complete short programming tasks which involved either hand executing ("tracing") a short piece of code, or writing a single sentence describing the purpose of the code. We interpret our think aloud data within a neo-Piagetian framework, demonstrating that some novices reason at the sensorimotor and preoperational stages, not at the higher concrete operational stage at which most instruction is implicitly targeted.
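For concreteness, here is a hypothetical task in the style the study describes (not one of its actual instruments): novices either trace the code by hand to report the final variable values, or summarise its purpose in a single sentence.

```python
# Tracing task: after this code runs, what are the final values of a and b?
a = 5
b = 3
temp = a
a = b
b = temp

# Explain-in-plain-English task: describe the purpose of the three lines
# above in one sentence (e.g. "they swap the values of a and b").
```

Reasoning at the preoperational stage typically lets a novice trace such code line by line, while the one-sentence summary requires the relational reasoning characteristic of the concrete operational stage.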
Abstract:
Discourses of public education reform, like that exemplified within the Queensland Government's future vision document, Queensland State Education-2010 (QSE-2010), position schooling as a panacea to pervasive social instability and a means to achieve a new consensus. However, in unravelling the many conflicting statements that conjoin to form education policy and inform related literature (Ball, 1993), it becomes clear that education reform discourse is polyvalent (Foucault, 1977). Alongside visionary statements that speak of public education as a vehicle for social justice are the (re)visionary, or those reflecting neoliberal individualism and a conservative politics. In this paper, it is argued that the latter coagulate to form strategic discursive practices which work to (re)secure dominant relations of power. Further, discussion of the characteristics needed by the "ideal" future citizen of Queensland reflects efforts to 'tame change through the making of the child' (Popkewitz, 2004, p. 201). The casualties of this (re)vision, and of the refusal to investigate the pathologies of "traditional" schooling, are the children who, for whatever reason, do not conform to the norm of the desired school child as an "ideal" citizen-in-the-making and who become relegated to alternative educational settings.
Abstract:
This article focuses on problem solving activities in a first grade classroom in a typical small community and school in Indiana. But the teacher and the activities in this class were not at all typical of what goes on in most comparable classrooms, and the issues that will be addressed are relevant and important for students from kindergarten through college. Can children really solve problems that involve concepts (or skills) that they have not yet been taught? Can children really create important mathematical concepts on their own – without a lot of guidance from teachers? What is the relationship between problem solving abilities and the mastery of skills that are widely regarded as being "prerequisites" to such tasks? Can primary school children (whose toolkits of skills are limited) engage productively in authentic simulations of "real life" problem solving situations? Can three-person teams of primary school children really work together collaboratively, and remain intensely engaged, on problem solving activities that require more than an hour to complete? Are the kinds of learning and problem solving experiences that are recommended (for example) in the USA's Common Core State Curriculum Standards really representative of the kind that even young children encounter beyond school in the 21st century? … This article offers an existence proof showing why our answers to these questions are: Yes. Yes. Yes. Yes. Yes. Yes. And: No. … Even though the evidence we present is only intended to demonstrate what's possible, not what's likely to occur under any circumstances, there is no reason to expect that what our children accomplished could not be accomplished by average-ability children in other schools and classrooms.
Abstract:
The assembly of retroviruses such as HIV-1 is driven by oligomerization of their major structural protein, Gag. Gag is a multidomain polyprotein including three conserved folded domains: MA (matrix), CA (capsid) and NC (nucleocapsid)(1). Assembly of an infectious virion proceeds in two stages(2). In the first stage, Gag oligomerization into a hexameric protein lattice leads to the formation of an incomplete, roughly spherical protein shell that buds through the plasma membrane of the infected cell to release an enveloped immature virus particle. In the second stage, cleavage of Gag by the viral protease leads to rearrangement of the particle interior, converting the non-infectious immature virus particle into a mature infectious virion. The immature Gag shell acts as the pivotal intermediate in assembly and is a potential target for anti-retroviral drugs both in inhibiting virus assembly and in disrupting virus maturation(3). However, detailed structural information on the immature Gag shell has not previously been available. For this reason it is unclear what protein conformations and interfaces mediate the interactions between domains and therefore the assembly of retrovirus particles, and what structural transitions are associated with retrovirus maturation. Here we solve the structure of the immature retroviral Gag shell from Mason-Pfizer monkey virus by combining cryo-electron microscopy and tomography. The 8-angstrom resolution structure permits the derivation of a pseudo-atomic model of CA in the immature retrovirus, which defines the protein interfaces mediating retrovirus assembly. We show that transition of an immature retrovirus into its mature infectious form involves marked rotations and translations of CA domains, that the roles of the amino-terminal and carboxy-terminal domains of CA in assembling the immature and mature hexameric lattices are exchanged, and that the CA interactions that stabilize the immature and mature viruses are almost completely distinct.