62 results for AL-2004-1
Abstract:
One of the hallmarks of cancer is the ability to activate invasion and metastasis (Hanahan et al., 2011). Cancer morbidity and mortality are largely related to the spread of the primary, localised tumour to adjacent and distant sites (Pantel et al., 2004). Predicting metastatic disease at the time of diagnosis is thus crucial for appropriate management and treatment decisions, which in turn requires a better understanding of the metastatic process. There are common events that occur during metastasis: dissociation from the primary tumour mass, reorganisation/remodelling of the extracellular matrix, cell migration, recognition and traversal of endothelial cells and the vascular circulation, and lodgement and proliferation within ectopic stroma (Wells, 2006). One of the key initial events is the increased capability of cancer cells to move, escaping normal physiological control. The cellular cytoskeleton plays an important role in cancer cell motility, and active cytoskeletal rearrangement can result in metastatic disease. This active change in cytoskeletal dynamics remodels the plasma membrane and shifts the balance between cellular adhesion and motility, which in turn determines cancer cell movement. Members of the tetraspanin family play important roles in the regulation of cancer cell migration and cancer-endothelial cell interactions, which are critical for cancer invasion and metastasis. Their involvement in cytoskeletal dynamics, cancer metastasis and potential clinical applications is discussed in this review. In particular, the tetraspanin member CD151 is highlighted for its major role in cancer invasion and metastasis.
Abstract:
Since the 1980s, calls for the further criminalisation of organisational conduct causing harm to workers, the public and the environment have intensified in Australia, Canada, and England and Wales. One focal point of this movement has been the criminal law's response to organisations (and their personnel) failing to comply with occupational health and safety ('OHS') standards, particularly when physical harm (death and serious injury) has resulted from those breaches. Some governments have responded with proposals to enable manslaughter prosecutions to be initiated 'more effectively' against organisations causing the deaths of workers or, in some cases, members of the public (Archibald et al, 2004; Haines and Hall, 2004; Hall et al, 2004; Tombs and Whyte, 2003). In Australia, governments have also increased monetary penalties for regulatory OHS offences, a few have introduced other contemporary organisational sanctions, and some have pursued OHS prosecutions more vigorously and with larger fines.
Abstract:
A one-size-fits-all approach dominates alcohol programs in school settings (Botvin et al., 2007), which may limit program effectiveness (Snyder et al., 2004). Programs tailored to meet the needs and wants of adolescent groups may be more effective. Limited attention has been directed towards employing a full segmentation process. Where segmentation has been examined, the focus has remained on socio-demographic characteristics and, more recently, psychographic variables (Mathijssen et al., 2012). The current study aimed to identify whether the addition of behaviour could be used to identify segments. Variables included attitudes towards binge drinking (α = 0.86), behavioural intentions (α = 0.97), perceived behavioural control (PBC), injunctive norms (α = 0.94), descriptive norms (α = 0.87), knowledge, and reported behaviour. Data were collected from five schools, n = 625 (32.96% girls). Two-step cluster analysis of the sample (n = 625) produced a silhouette measure of cohesion and separation of 0.4. The intention measure and whether students reported previously consuming alcohol were the most distinguishing characteristics, with predictor importance scores of 1.0. A four-segment solution emerged. The first segment (“Male abstainers”, 37.2%) featured the highest knowledge score (M: 5.9) along with the lowest-risk drinking attitudes and intentions to drink excessively. Segment 2 (“At-risk drinkers”, 11.2%) was characterised by high-risk attitudes and high-risk drinking intentions; injunctive (M: 4.1) and descriptive norms (M: 4.9) may indicate a social environment where drinking is the norm. Segment 3 (“Female abstainers”, 25.9%) represents young girls who have the lowest-risk attitudes and low intentions to drink excessively. The fourth and final segment (“Moderate drinkers”, 25.7%; boys = 67.4%) all report previously drinking alcohol, yet their attitudes and intentions towards excessive alcohol consumption are lower than those of other segments. Segmentation focuses on identifying groups of individuals who share similar characteristics. The current study illustrates the importance of including reported behaviour, in addition to psychographic and demographic characteristics, to identify unique groups and inform intervention planning and design. Key messages: The principle of segmentation has received limited attention in the context of school-based alcohol education programs. This research identified four segments amongst 14-16 year old high school students, each of which can be targeted with a unique, tailored program to meet the needs and wants of the target audience.
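To make the segmentation workflow concrete, the sketch below clusters survey respondents on the psychographic scales plus the reported-behaviour flag and reports a cohesion/separation diagnostic and segment profiles. It is a minimal illustration only: the column names are hypothetical, and k-means with a silhouette score stands in for the two-step cluster analysis used in the study.

```python
# Minimal sketch of behaviour-inclusive segmentation (hypothetical column names).
# K-means plus a silhouette score is used here as a stand-in for two-step clustering.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def segment_students(df: pd.DataFrame, n_segments: int = 4, seed: int = 0):
    # Psychographic scales plus the behavioural indicator (ever consumed alcohol).
    features = ["attitude", "intention", "pbc", "injunctive_norm",
                "descriptive_norm", "knowledge", "ever_drank"]
    X = StandardScaler().fit_transform(df[features])

    model = KMeans(n_clusters=n_segments, n_init=10, random_state=seed)
    labels = model.fit_predict(X)

    # Cohesion/separation diagnostic analogous to the reported silhouette measure.
    sil = silhouette_score(X, labels)

    # Profile each segment by its mean on every input variable.
    profiles = df.assign(segment=labels).groupby("segment")[features].mean()
    return labels, sil, profiles
```

Comparing segment profiles on the behavioural flag alongside attitudes and intentions is what distinguishes this approach from purely socio-demographic or psychographic segmentation.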
Abstract:
Decision-making for conservation is conducted within the margins of limited funding. Furthermore, to allocate these scarce resources we make assumptions about the relationship between management impact and expenditure. The structure of these relationships, however, is rarely known with certainty. We present a summary of work investigating the impact of model uncertainty on robust decision-making in conservation and how this is affected by available conservation funding. We show that achieving robustness in conservation decisions can require a triage approach, and emphasize the need for managers to consider triage not as surrendering but as rational decision-making to ensure species persistence in light of the urgency of conservation problems, uncertainty, and the poor state of conservation funding. We illustrate this theory with a specific application: the allocation of funding to reduce poaching impact on the Sumatran tiger Panthera tigris sumatrae in Kerinci Seblat National Park, Indonesia. To conserve our environment, conservation managers must make decisions in the face of substantial uncertainty. Further, they must deal with the fact that limited budgets and temporal constraints have led to a lack of knowledge about the systems we are trying to preserve and about the benefits of the actions we have available (Balmford & Cowling 2006). Given this paucity of decision-informing data, there is a considerable need to assess the impact of uncertainty on the benefit of management options (Regan et al. 2005). Although models of management impact can improve decision-making (e.g. Tenhumberg et al. 2004), they typically rely on assumptions around which there is substantial uncertainty. Ignoring this 'model uncertainty' can lead to inferior decision-making (Regan et al. 2005) and, potentially, the loss of the species we are trying to protect. Current methods used in ecology allow model uncertainty to be incorporated into the model selection process (Burnham & Anderson 2002; Link & Barker 2006), but they do not enable decision-makers to assess how this uncertainty would change a decision. This is the basis of information-gap decision theory (info-gap): finding the strategies most robust to model uncertainty (Ben-Haim 2006). Info-gap has permitted conservation biology to make the leap from recognizing uncertainty to explicitly incorporating severe uncertainty into decision-making. In this paper we present a summary of McDonald-Madden et al. (2008a), who use an info-gap framework to address the impact of uncertainty in the functional representations of biological systems on conservation decision-making. Furthermore, we highlight the importance of two key elements limiting conservation decision-making, funding and knowledge, and how they interact to influence the best management strategy for a threatened species. Copyright © ASCE 2011.
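As a concrete illustration of the info-gap idea (not the model of McDonald-Madden et al.), the sketch below computes the robustness of a budget allocation: the largest horizon of fractional error around a nominal impact-expenditure curve under which the worst-case outcome still meets a critical performance requirement. The benefit-curve form, the uncertainty model and all parameter values are illustrative assumptions.

```python
# Hedged sketch of an info-gap robustness calculation (illustrative only).
# Uncertainty is a fractional-error envelope around a nominal impact-expenditure
# curve; robustness is the largest horizon alpha at which the worst-case outcome
# still satisfies a critical performance requirement.
import numpy as np

def nominal_impact(spend: float, k: float = 0.002) -> float:
    """Nominal (best-estimate) reduction in poaching impact for a given spend."""
    return 1.0 - np.exp(-k * spend)  # saturating benefit curve (assumed form)

def worst_case_impact(spend: float, alpha: float) -> float:
    """Worst outcome within a fractional-error horizon alpha around the nominal model."""
    return max(0.0, (1.0 - alpha) * nominal_impact(spend))

def robustness(spend: float, required_impact: float, alpha_grid=None) -> float:
    """Largest uncertainty horizon alpha at which the requirement is still met."""
    if alpha_grid is None:
        alpha_grid = np.linspace(0.0, 1.0, 1001)
    feasible = [a for a in alpha_grid if worst_case_impact(spend, a) >= required_impact]
    return max(feasible) if feasible else 0.0

# Compare two budget allocations: the more robust choice tolerates more model error.
for budget in (500.0, 1500.0):
    print(budget, robustness(budget, required_impact=0.5))
```

Ranking management options by robustness rather than by nominal performance is the key shift that info-gap brings to decisions made under severe uncertainty.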
Abstract:
Almost 10 years ago, Pullin and Knight (2001) called for an “effectiveness revolution in conservation” to be enabled by the systematic evaluation of evidence for conservation decision making. Drawing from the model used in clinical medicine, they outlined the concept of “evidence-based conservation”, in which existing information, or evidence, from relevant and rigorous research is compiled and analyzed in a systematic manner to inform conservation actions (Cochrane 1972). The promise of evidence-based conservation has generated significant interest; 25 systematic reviews have been completed since 2004 and dozens are underway (Collaboration for Environmental Evidence 2010). However, we argue that an “effectiveness revolution” (Pullin & Knight 2001) in conservation will not be possible unless mechanisms are devised for incorporating the growing evidence base into decision frameworks. For conservation professionals to accomplish the missions of their organizations, they must demonstrate that their actions actually achieve objectives (Pullin & Knight 2009). Systematic evaluation provides a framework for objectively evaluating the effectiveness of actions. To leverage the benefit of these evaluations, we need resource-allocation systems that are responsive to their outcomes. The allocation of conservation resources is often the product of institutional priorities or reliance on intuition (Sutherland et al. 2004; Pullin & Knight 2005; Cook et al. 2010). We highlight the NICE technology-appraisal process because it provides an example of the formal integration of systematic evidence evaluation with the provision of guidance for action. The transparent process, which clearly delineates the costs and benefits of each alternative action, could also provide the public with new insight into the environmental effects of different decisions. This insight could stimulate a wider discussion about investment in conservation by demonstrating how changes in funding might affect the probability of achieving conservation objectives. ©2010 Society for Conservation Biology
Abstract:
Polybrominated diphenyl ethers (PBDEs) are considered a cost-effective and efficient way to reduce flammability and thereby reduce the harm caused by fires. PBDEs are incorporated into a variety of manufactured products and are found worldwide in biological and environmental samples (e.g. Hites et al. 2004). Unlike for other persistent organic pollutants, there are limited data on PBDE concentrations by age and/or other population-specific factors. Some studies have shown no variation in adult serum PBDE concentrations with age (e.g. Mazdai et al., 2003; Meironyte Guvenius et al., 2003), while Petreas et al. (2003) and Schecter et al. (2005) found results suggestive of an age trend in adult data, although no statistically significant correlation was found. There are also limited data on the levels of PBDEs in infants and young children. Fangström et al. (2005) showed that PBDE concentrations in seven-year-olds did not differ from adult concentrations, while Thomsen et al. (2002, 2005) found the concentration of PBDEs in pooled blood serum samples from a 0-4 years age group to be higher than in other age groups (4 to >60 years). In addition, in a study of a family of four in the U.S., concentrations were found to be greatest in the 18-month-old infant, followed by the 5-year-old child, then the mother and father (Fischer et al., 2006). The objectives of this study were to assess age, gender and regional trends in PBDE concentrations in a representative sample of the Australian population.
Abstract:
The contemporary methodology for growth models of organisms is based on continuous trajectories and thus hinders the modelling of stepwise growth in crustacean populations. Growth models for fish normally assume a continuous function, but a different type of model is needed for crustacean growth. Crustaceans must moult in order to grow, so their growth is a discontinuous process due to the periodic shedding of the exoskeleton at moulting. This stepwise growth through the moulting process makes growth estimation more complex. Stochastic approaches can be used to model discontinuous growth, or what are commonly known as "jumps" (Figure 1). However, a stochastic growth model must be constrained so that only positive jumps occur. In view of this, we introduce a subordinator, a special case of a Lévy process. A subordinator is a non-decreasing Lévy process, and it assists in modelling crustacean growth, giving a better understanding of individual variability and of stochasticity in moulting periods and increments. We develop parameter estimation methods and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Due to the presence of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed at moulting, exact age determination of a lobster is challenging. However, the growth parameters for the moult process can be estimated from tank data through: (i) inter-moult periods, and (ii) moult increments. We derive a joint density made up of two functions: one for moult increments and the other for the time intervals between moults. We claim these functions are conditionally independent given pre-moult length and the inter-moult period; by this Markov (conditional independence) property, the parameters of each function can be estimated separately. Subsequently, we combine the two functions through a Monte Carlo method, from which we can obtain a population mean for crustacean growth (e.g. the red curve in Figure 1).
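To illustrate the idea of subordinator-driven growth, the sketch below simulates stepwise growth as a compound-Poisson process: inter-moult periods are exponential and each moult adds a positive, gamma-distributed length increment, and a Monte Carlo average over simulated individuals yields a population mean growth curve. This is a generic illustration under assumed distributions and parameter values, not the fitted model for Panulirus ornatus.

```python
# Hedged sketch of stepwise (jump) growth via a compound-Poisson subordinator:
# moults arrive as a Poisson process and each moult adds a positive (gamma)
# length increment. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def simulate_growth(t_max: float, rate: float = 4.0, shape: float = 2.0,
                    scale: float = 1.5, l0: float = 30.0):
    """Return (moult_times, lengths) for one individual over [0, t_max] years."""
    times, lengths = [0.0], [l0]
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)          # inter-moult period (exponential)
        if t > t_max:
            break
        increment = rng.gamma(shape, scale)        # positive moult increment
        times.append(t)
        lengths.append(lengths[-1] + increment)
    return np.array(times), np.array(lengths)

def population_mean_length(t_grid, n_sim: int = 5000, **kwargs):
    """Monte Carlo estimate of mean length-at-time across simulated individuals."""
    means = np.zeros_like(t_grid, dtype=float)
    for _ in range(n_sim):
        times, lengths = simulate_growth(t_max=t_grid[-1], **kwargs)
        # Step-function interpolation: length is constant between moults.
        idx = np.searchsorted(times, t_grid, side="right") - 1
        means += lengths[idx]
    return means / n_sim

t_grid = np.linspace(0.0, 3.0, 50)
print(population_mean_length(t_grid)[-1])   # mean length after 3 years
```

Because the simulated increments are strictly positive, every sample path is non-decreasing, which is exactly the property that motivates using a subordinator rather than a general Lévy process.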
Abstract:
The thermal decomposition of natural ammonium oxalate, known as oxammite, has been studied using a combination of high-resolution thermogravimetry coupled to an evolved-gas mass spectrometer and Raman spectroscopy coupled to a thermal stage. Three mass loss steps were found at 57, 175 and 188°C, attributed to dehydration, ammonia evolution and carbon dioxide evolution, respectively. Raman spectroscopy shows two bands at 3235 and 3030 cm-1 attributed to the OH stretching vibrations and three bands at 2995, 2900 and 2879 cm-1 attributed to the NH vibrational modes. The thermal degradation of oxammite may be followed by the loss of intensity of these bands: no intensity remains in the OH stretching bands at 100°C, and the NH stretching bands show no intensity at 200°C. Multiple CO symmetric stretching bands are observed at 1473, 1454, 1447 and 1431 cm-1, suggesting that the mineral oxammite is composed of a mixture of chemicals including ammonium oxalate dihydrate, ammonium oxalate monohydrate and anhydrous ammonium oxalate.
Abstract:
Over the years, public health in relation to Australian Aboriginal people has involved many individuals and groups, including health professionals, governments, politicians, special interest groups and corporate organisations. From the commencement of colonisation until the 1980s, public health relating to Aboriginal and Torres Strait Islander people was not necessarily conducted in the best interests of Aboriginal and Torres Strait Islander people, but rather in the interests of the non-Aboriginal population. The attention that was paid focussed more generally on reproduction and on issues of prostitution, exploitation, abuse and venereal diseases (Kidd, 1997). Since the late 1980s there has been a shift in the broader public health agenda (see Baum, 1998), along with a shift in public health in relation to Aboriginal and Torres Strait Islander people (NHMRC, 2003). This has been coupled with increasing calls to develop appropriate tertiary curriculum and to educate, train, and employ more Aboriginal and Torres Strait Islander and non-Aboriginal people in public health (Anderson et al., 2004; Genat, 2007; PHERP, 2008a, 2008b). Aboriginal and Torres Strait Islander people have been engaged in public health in ways that place them in a position to influence the public health agenda (Anderson 2004; 2008; Anderson et al., 2004; NATSIHC, 2003). There have been numerous projects, programs and strategies that have sought to develop the Aboriginal and Torres Strait Islander public health workforce (AHMAC, 2002; Oldenburg et al., 2005; SCATSIH, 2002). In recent times the Aboriginal community controlled health sector has joined forces with other peak bodies and governments to find solutions and strategies to improve the health outcomes of Aboriginal and Torres Strait Islander peoples (NACCHO & Oxfam, 2007). This case study chapter will not address these broader activities. Instead it will explore the activities and roles of staff within the Public Health and Research Unit (PHRU) at the Victorian Aboriginal Community Controlled Health Organisation (VACCHO). It will focus on their experiences with education institutions, their work in public health, and their thoughts on gaps and on where improvements can be made in public health, research and education. What will be demonstrated is the diversity of their education qualifications and experience. What will also be reflected is how people work within public health on a daily basis to enact change for equity in health and to contribute to the improvement of future health outcomes of the Victorian Aboriginal community.
Abstract:
In Queensland, Australia, ultraviolet (UV) radiation levels are high (greater than UV Index 3) almost all year round. Although ambient UV is about three times higher in summer than in winter, Queensland residents receive approximately equal personal doses of UV radiation in these seasons (Neale et al., 2010). Sun protection messages throughout the year are thus essential (Montague et al., 2001); they need to reach all segments of the population and should incorporate guidelines for the maintenance of adequate vitamin D levels. Knowledge is an essential requirement for people to make health-conscious decisions. Unprompted knowledge commonly requires a higher level of awareness or recency of acquisition than prompted recall (Waller et al., 2004). This paper therefore reports further on data from a 2008 population-based, cross-sectional telephone survey conducted in Queensland, Australia (2,001 participants; response rate = 45%) (Youl et al., 2009). The aim of this research was to establish the level of, and factors predicting, unprompted and prompted knowledge about health and vitamin D.
Abstract:
Introduction: There are many low intensity (LI) cognitive behavioural therapy (CBT) solutions to the problem of limited service access. In this chapter, we discuss a relatively low-technology approach to access using standard postal services: CBT by mail, or M-CBT. Bibliotherapies, including M-CBT, teach key concepts and self-management techniques, together with screening tools and forms to structure home practice. M-CBT differs from other bibliotherapies by segmenting interventions and mailing them at regular intervals. Most involve participants returning copies of monitoring forms or completed handouts. Therapist feedback is provided, often in personal letters that accompany the printed materials. Participants may also be given access to telephone or email support. M-CBT clearly fulfills the criteria for an LI CBT intervention (see Bennett-Levy et al., Chapter 1, for a definition of LI interventions). Once written, these interventions involve little therapist time and rely heavily on self-management. However, content and overall treatment duration need not be compromised: long-term interventions with multiple components can be delivered via this method, provided their content can be communicated in letters and engagement is maintained.
Abstract:
Many people with severe mental illness (SMI), such as schizophrenia, whose psychotic symptoms are effectively managed continue to experience significant functional problems. This chapter argues that low intensity (LI) cognitive behaviour therapy (CBT; e.g. for depression, anxiety, or other issues) is applicable to these clients, and that LI CBT can be consistent with long-term case management. However, adjustments to LI CBT strategies are often necessary, and the boundaries between LI CBT and high intensity (HI) CBT (with more extensive practitioner contact and complexity) may become blurred. Our focus is on LI CBT's self-management emphasis, its restricted content and segment length, and its potential use after limited training. In addition to exploring these issues, the chapter draws on the authors' Collaborative Recovery (CR; Oades et al. 2005) and 'Start Over and Survive' programs (Kavanagh et al. 2004) as examples. Evidence for the effectiveness of LI CBT with severe mental illness is often embedded within multicomponent programs. For example, goal setting and therapeutic homework are common components of such programs, but they can also be used as discrete LI CBT interventions. A review of 40 randomised controlled trials involving recipients with schizophrenia or other severe mental illnesses has identified key components of illness management programs (Mueser et al. 2002). However, it is relatively rare for specific components of these complex interventions to be assessed in isolation. Given these constraints, the evidence for specific LI CBT interventions with severe mental illness is relatively limited.
Abstract:
It is predicted that, with increased life expectancy in the developed world, there will be a greater demand for synthetic materials to repair or regenerate lost, injured or diseased bone (Hench & Thompson 2010). There are still few synthetic materials with true bone inductivity, which limits their application for bone regeneration, especially in large-size bone defects. To solve this problem, growth factors such as bone morphogenetic proteins (BMPs) have been incorporated into synthetic materials in order to stimulate de novo bone formation in the center of large-size bone defects. The greatest obstacle with this approach is the rapid diffusion of the protein from the carrier material, leading to a precipitous loss of bioactivity; the result is often insufficient local induction or failure of bone regeneration (Wei et al. 2007). It is critical that the protein is loaded into the carrier material under conditions which maintain its bioactivity (van de Manakker et al. 2009). For this reason, the efficient loading and controlled release of a protein from a synthetic material has remained a significant challenge. The use of microspheres as protein/drug carriers has received considerable attention in recent years (Lee et al. 2010; Pareta & Edirisinghe 2006; Wu & Zreiqat 2010). Compared with macroporous block scaffolds, the chief advantages of microspheres are their superior protein-delivery properties and their ability to fill bone defects with irregular and complex shapes and sizes. Upon implantation, the microspheres easily conform to the irregular implant site, and the interstices between the particles provide space for both tissue and vascular ingrowth, which are important for effective and functional bone regeneration (Hsu et al. 1999). Alginates are natural polysaccharides, and their production does not carry the implicit risk of contamination with allo- or xeno-proteins or viruses (Xie et al. 2010). Because alginate is generally cytocompatible, it has been used extensively in medicine, including cell therapy and tissue engineering applications (Tampieri et al. 2005; Xie et al. 2010; Xu et al. 2007). Calcium cross-linked alginate hydrogel is considered a promising material as a delivery matrix for drugs and proteins, since its gel microspheres form readily in aqueous solutions at room temperature, eliminating the need for harsh organic solvents and thereby maintaining the bioactivity of proteins during loading into the microspheres (Jay & Saltzman 2009; Kikuchi et al. 1999). In addition, calcium cross-linked alginate hydrogel is degradable under physiological conditions (Kibat et al. 1990; Park et al. 1993), which makes alginate stand out as an attractive candidate material for protein delivery and bone regeneration (Hosoya et al. 2004; Matsuno et al. 2008; Turco et al. 2009). However, the major disadvantages of alginate microspheres are their low loading efficiency and the rapid release of proteins due to the mesh-like network of the gel (Halder et al. 2005). Previous studies have shown that a core-shell structure in drug/protein carriers can overcome the issues of limited loading efficiency and rapid release of drug or protein (Chang et al. 2010; Molvinger et al. 2004; Soppimath et al. 2007). We therefore hypothesized that introducing a core-shell structure into alginate microspheres could overcome the shortcomings of pure alginate. Calcium silicate (CS) has been tested as a biodegradable biomaterial for bone tissue regeneration.
CS is capable of inducing bone-like apatite formation in simulated body fluid (SBF), and its apatite-formation rate in SBF is faster than that of Bioglass® and A-W glass-ceramics (De Aza et al. 2000; Siriphannon et al. 2002). Titanium alloys plasma-spray coated with CS have excellent in vivo bioactivity (Xue et al. 2005), and porous CS scaffolds show enhanced in vivo bone formation compared with porous β-tricalcium phosphate ceramics (Xu et al. 2008). In light of the many advantages of this material, we decided to prepare CS/alginate composite microspheres by combining a CS shell with an alginate core, in order to improve their protein-delivery and mineralization properties for potential protein delivery and bone repair applications.