Abstract:
Objectives: Recovery is an emerging movement in mental health. Evidence for recovery-based approaches is not well developed, and approaches to implement recovery-oriented services are not well articulated. The collaborative recovery model (CRM) is presented as a model that assists clinicians to use evidence-based skills with consumers, in a manner consistent with the recovery movement. A current 5-year multisite Australian study to evaluate the effectiveness of the CRM is briefly described. Conclusion: The collaborative recovery model puts into practice several aspects of policy regarding recovery-oriented services, using evidence-based practices to assist individuals who have chronic or recurring mental disorders (CRMD). It is argued that this model provides an integrative framework combining (i) evidence-based practice; (ii) manageable and modularized competencies relevant to case management and psychosocial rehabilitation contexts; and (iii) recognition of the subjective experiences of consumers.
Abstract:
While Business Process Management (BPM) is an established discipline, the increased adoption of BPM technology in recent years has introduced new challenges. One challenge concerns dealing with process model complexity in order to improve the understanding of a process model by stakeholders and process analysts. Features for dealing with this complexity can be classified into two categories: (1) those that are solely concerned with the appearance of the model, and (2) those that in essence change the structure of the model. In this paper we focus on the former category and present a collection of patterns that generalize and conceptualize various existing features. The paper concludes with a detailed analysis of the degree of support for these patterns in a number of state-of-the-art languages and language implementations.
Abstract:
Ghrelin is a gut-brain peptide hormone that induces appetite, stimulates the release of growth hormone, and has recently been shown to ameliorate inflammation. Recent studies have suggested that ghrelin may play a potential role in inflammation-related diseases such as inflammatory bowel diseases (IBD). A previous study with ghrelin in the TNBS mouse model of colitis demonstrated that ghrelin treatment decreased the clinical severity of colitis and inflammation and prevented the recurrence of disease. Ghrelin may be acting at the immunological and epithelial level, as the ghrelin receptor (GHSR) is expressed by immune cells and intestinal epithelial cells. The current project investigated the effect of ghrelin in a different mouse model of colitis using dextran sodium sulphate (DSS), a luminal toxin. Two molecular weight forms of DSS (5 kDa and 40 kDa) were used, as they give differing effects. Ghrelin treatment significantly improved clinical colitis scores (p=0.012) in the C57BL/6 mouse strain with colitis induced by 2% DSS (5 kDa). Treatment with ghrelin suppressed colitis in the proximal colon, as indicated by reduced cumulative histopathology scores (p=0.03). Whilst there was a trend toward reduced scores in the mid and distal colon in these mice, this did not reach significance. Ghrelin did not affect histopathology scores in the 40 kDa model. There was no significant effect on the number of regulatory T cells or on TNF-α secretion from cultured lymph node cells from these mice. The discovery of C-terminal ghrelin peptides, for example obestatin and the peptide derived from exon-4-deleted proghrelin (the Δ4 preproghrelin peptide), has raised questions regarding their potential role in biological functions. The current project investigated the effect of the Δ4 peptide in the DSS model of colitis; however, no significant suppression of colitis was observed. In vitro epithelial wound-healing assays were also undertaken to determine the effect of ghrelin on intestinal epithelial cell migration. Ghrelin did not significantly improve wound healing in these assays. In conclusion, ghrelin treatment displays a mild anti-inflammatory effect in the 5 kDa DSS model. The potential mechanisms behind this effect and the disparity between these results and those published previously will be discussed.
Abstract:
Over the years, people have often held the hypothesis that negative feedback should be very useful for largely improving the performance of information filtering systems; however, effective models supporting this hypothesis have not been obtained. This paper proposes an effective model that uses negative relevance feedback, based on a pattern-mining approach, to improve extracted features. This study focuses on two main issues in using negative relevance feedback: the selection of constructive negative examples to reduce the space of negative examples, and the revision of existing features based on the selected negative examples. The former selects some offender documents, where offender documents are negative documents that are most likely to be classified in the positive group. The latter groups the extracted features into three categories (positive specific, general, and negative specific) so that their weights can easily be updated. An iterative algorithm is also proposed to implement this approach on the RCV1 data collection, and substantial experiments show that the proposed approach achieves encouraging performance.
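To make the two steps concrete, here is a minimal sketch (not the authors' implementation) of the negative-feedback idea as described: select "offender" documents among the negatives, partition terms into positive-specific, general, and negative-specific groups, and revise their weights accordingly. All names and the promotion/demotion factor are illustrative assumptions.

```python
# Illustrative sketch of offender selection and three-group weight revision.
# Documents are represented as iterables of terms; weights map term -> float.

def score(doc_terms, weights):
    """Rank a document by the summed weights of its terms."""
    return sum(weights.get(t, 0.0) for t in doc_terms)

def select_offenders(negative_docs, weights, k=5):
    """Offenders: negative documents most likely to be classified positive."""
    return sorted(negative_docs, key=lambda d: score(d, weights), reverse=True)[:k]

def revise_weights(weights, positive_docs, offenders, penalty=0.5):
    """Promote positive-specific terms, demote negative-specific ones."""
    pos_terms = set().union(*map(set, positive_docs))
    neg_terms = set().union(*map(set, offenders))
    revised = dict(weights)
    for t in list(revised):
        if t in pos_terms and t not in neg_terms:   # positive-specific group
            revised[t] *= 1.0 + penalty             # promote
        elif t in pos_terms and t in neg_terms:     # general group
            pass                                    # leave unchanged
        elif t in neg_terms:                        # negative-specific group
            revised[t] *= 1.0 - penalty             # demote
    return revised
```

In an iterative use of these helpers, each round would re-select offenders against the revised weights, mirroring the iterative algorithm the abstract mentions.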
Abstract:
We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
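As a schematic illustration of the generic "KEM to key exchange" shape the abstract refers to, the sketch below has each party encapsulate to the other's long-term public key in a single round and derive the session key from both shared secrets. The KEM here is a deliberately insecure stand-in that only demonstrates the message flow; the paper's actual designs and security arguments are not reproduced.

```python
# Toy one-round exchange built generically from a KEM interface.
# ToyKEM is NOT secure; it exists only so the flow below runs end to end.
import hashlib, os

class ToyKEM:
    """Stand-in for any KEM with keygen/encap/decap; real ones differ."""
    def keygen(self):
        sk = os.urandom(32)
        pk = hashlib.sha256(sk).digest()
        return pk, sk
    def encap(self, pk):
        shared = os.urandom(32)
        ct = bytes(a ^ b for a, b in zip(shared, pk))  # toy "encryption"
        return ct, shared
    def decap(self, sk, ct):
        pk = hashlib.sha256(sk).digest()
        return bytes(a ^ b for a, b in zip(ct, pk))

kem = ToyKEM()
pk_a, sk_a = kem.keygen()
pk_b, sk_b = kem.keygen()

ct_to_b, k1 = kem.encap(pk_b)   # A -> B, sent in the single round
ct_to_a, k2 = kem.encap(pk_a)   # B -> A, sent in the same round

# Both sides derive the session key from the two encapsulated secrets.
key_a = hashlib.sha256(k1 + kem.decap(sk_a, ct_to_a)).digest()
key_b = hashlib.sha256(kem.decap(sk_b, ct_to_b) + k2).digest()
assert key_a == key_b
```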
Groundwater flow model of the Logan River alluvial aquifer system, Josephville, South East Queensland
Abstract:
The study focuses on an alluvial plain situated within a large meander of the Logan River at Josephville, near Beaudesert, which supports a factory that processes gelatine. The plant draws water from on-site bores, as well as the Logan River, for its production processes and produces approximately 1.5 ML per day (Douglas Partners, 2004) of waste water containing high levels of dissolved ions. At present a series of treatment ponds are used to aerate the waste water, reducing the level of organic matter; the water is then used to irrigate grazing land around the site. Within the study the hydrogeology is investigated, a conceptual groundwater model is produced, and a numerical groundwater flow model is developed from this. On the site are several bores that access groundwater, plus a network of monitoring bores. Assessment of drilling logs shows the area is formed from a mixture of poorly sorted Quaternary alluvial sediments, with a laterally continuous aquifer comprising coarse sands and fine gravels that is in contact with the river. This aquifer occurs at a depth of between 11 and 15 metres and is overlain by a heterogeneous mixture of silts, sands and clays. The study investigates the degree of interaction between the river and the groundwater within the fluvially derived sediments, for reasons of both environmental monitoring and sustainability of the potential local groundwater resource. A conceptual hydrogeological model of the site proposes two hydrostratigraphic units: a basal aquifer of coarse-grained materials overlain by a thick semi-confining unit of finer materials. From this, a two-layer groundwater flow model and hydraulic conductivity distribution were developed, based on bore monitoring and rainfall data, using MODFLOW (McDonald and Harbaugh, 1988) and PEST (Doherty, 2004) within the GMS 6.5 software (EMSI, 2008). A second model was also considered, with the alluvium represented as a single hydrogeological unit. Both models were calibrated to steady-state conditions, and sensitivity analyses of the parameters demonstrated that both models are very stable for changes in the range of ±10% for all parameters, and still reasonably stable for changes of up to ±20%, with RMS errors in the model always less than 10%. The preferred two-layer model was found to give the more realistic representation of the site: water level variations and the numerical modelling showed that the basal layer of coarse sands and fine gravels is hydraulically connected to the river, while the upper layer, comprising a poorly sorted mixture of silt-rich clays and sands of very low permeability, limits infiltration from the surface to the lower layer. The paucity of historical data has limited the numerical modelling to a steady-state model based on groundwater levels during a drought period, and forecasts for varying hydrological conditions (e.g. short-term as well as prolonged dry and wet conditions) cannot reasonably be made from such a model. If future modelling is to be undertaken, it will be necessary to establish a regular program of groundwater monitoring and maintain a long-term database of water levels, to enable a transient model to be developed at a later stage. This will require a valid monitoring network to be designed, with additional bores for adequate coverage of the hydrogeological conditions at the Josephville site. Further investigations would also be enhanced by pump testing to investigate the hydrogeological properties of the aquifer.
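The study built its model in GMS; as a rough illustration of the same two-layer conceptualisation (a semi-confining silt/clay layer over a coarse basal aquifer connected to the river), the sketch below assembles a minimal steady-state MODFLOW model with the open-source flopy package. Every grid dimension, conductivity, recharge rate and boundary value here is an invented placeholder, not a calibrated site value.

```python
# Minimal two-layer steady-state MODFLOW sketch via flopy (placeholder values).
import flopy

m = flopy.modflow.Modflow("josephville_sketch", exe_name="mf2005")

# Layer 1: silts/clays (semi-confining); layer 2: coarse sands and gravels.
dis = flopy.modflow.ModflowDis(m, nlay=2, nrow=40, ncol=40,
                               delr=25.0, delc=25.0,
                               top=20.0, botm=[9.0, 5.0])  # aquifer ~11-15 m deep
bas = flopy.modflow.ModflowBas(m, ibound=1, strt=15.0)
lpf = flopy.modflow.ModflowLpf(m, hk=[0.05, 20.0],         # low K over high K
                               vka=[0.005, 2.0])
rch = flopy.modflow.ModflowRch(m, rech=1.0e-4)             # rainfall recharge

# River boundary in the basal layer only: [lay, row, col, stage, cond, rbot].
riv_cells = [[1, r, 0, 14.0, 50.0, 12.0] for r in range(40)]
riv = flopy.modflow.ModflowRiv(m, stress_period_data={0: riv_cells})

pcg = flopy.modflow.ModflowPcg(m)                          # solver
oc = flopy.modflow.ModflowOc(m)                            # output control
m.write_input()   # m.run_model() additionally needs an MF2005 executable
```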
Abstract:
Objective: The emergency medical system (EMS) can be defined as a comprehensive, coordinated and integrated system of care for patients suffering acute illness and injury. The aim of the present paper is to describe the evolution of the Queensland Emergency Medical System (QEMS) and to recommend a strategic national approach to EMS development. Methods: Following the formation of the Queensland Ambulance Service in 1991, a state EMS committee was formed. This committee led the development and approval of the cross-portfolio QEMS policy framework, which has resulted in dynamic policy development, system monitoring and evaluation. This framework is led by the Queensland Emergency Medical Services Advisory Committee. Results: There has been considerable progress in the development of all aspects of the EMS in Queensland. These developments have derived from the improved coordination and leadership that QEMS provides and have resulted in widespread satisfaction among both patients and stakeholders. Conclusions: The strategic approach outlined in the present paper offers a model for EMS arrangements throughout Australia. We propose that the Council of Australian Governments should require each state and territory to maintain an EMS committee. These state EMS committees should have a broad portfolio of responsibilities. They should provide leadership and direction for the development of the EMS and ensure coordination and quality of outcomes. A national EMS committee with broad representation and broad scope should be established to coordinate the national development of Australia's EMS.
An indexing model for sustainable urban environmental management: the case of Gold Coast, Australia
Abstract:
Improving urban ecosystems and the quality of life of citizens has become a central issue in the global effort to create sustainable built environments. As human beings, our lives depend completely on the sustainability of nature, and we need to protect and manage natural resources more sustainably in order to sustain our existence. As a result of population growth and rapid urbanisation, the increasing demand for productivity depletes and degrades natural resources. The increasing activity and rapid development require ever more resources, and ecological planning therefore becomes an essential vehicle for preserving scarce natural resources. This paper aims to identify the interaction between urban ecosystems and human activities in the context of urban sustainability, and explores the degrading environmental impacts of this interaction and the necessity and benefits of using sustainability indicators as a tool in sustainable urban environmental management. Additionally, the paper introduces an environmental sustainability indexing model (ASSURE) as an innovative approach to evaluating the environmental conditions of the built environment.
Abstract:
In the age of climate change and rapid urbanisation, stormwater management and water sensitive urban design have become important issues for urban policy makers. This paper reports the initial findings of a research study that develops an indexing model for assessing stormwater quality in the Gold Coast.
Abstract:
The broad definition of sustainable development at the early stage of its introduction caused confusion and hesitation among local authorities and planning professionals, the main difficulty being how to employ loosely defined principles of sustainable development in setting policies and goals. How this theory/rhetoric-practice gap could be filled is the theme of this study. The triple bottom line, one of the sustainability accounting approaches widely employed by governmental organisations, will be examined, along with its applicability to sustainable urban development policies. By incorporating triple bottom line considerations with environmental impact assessment techniques, the framework of a GIS-based decision support system that helps decision-makers select policy options according to their economic, environmental and social impacts will be introduced. In order to embrace sustainable urban development policy considerations, the relationship between urban form, travel patterns and socio-economic attributes should be clarified. This clarification, combined with the other inputs to the decision support system, will picture the holistic state of the urban setting in terms of sustainability. In this study, a grid-based indexing methodology will be employed to visualise the degree of compatibility of selected scenarios with the designated sustainable urban future. In addition, this tool will provide valuable knowledge about the spatial dimension of sustainable development. It will also give fine detail about the possible impacts of urban development proposals by employing disaggregated spatial data analysis (e.g. land use, transportation, urban services, population density, pollution, etc.). The visualisation capacity of this tool will help decision-makers and other stakeholders compare and select alternatives for future urban development.
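As a hedged sketch of the grid-based indexing idea, the snippet below combines per-cell economic, environmental and social indicator layers (the triple bottom line) into a single normalised index, then thresholds it to flag cells compatible with a target future. The indicator layers, weights and threshold are all illustrative assumptions, not values from the study.

```python
# Grid-based triple-bottom-line index over synthetic indicator layers.
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 20, 20
# One layer per TBL dimension, e.g. derived from land-use/transport data.
economic      = rng.random((rows, cols))
environmental = rng.random((rows, cols))
social        = rng.random((rows, cols))

def normalise(layer):
    """Min-max rescale a layer to [0, 1] so the dimensions are comparable."""
    return (layer - layer.min()) / (layer.max() - layer.min())

weights = {"economic": 0.3, "environmental": 0.4, "social": 0.3}  # assumed
index = (weights["economic"]      * normalise(economic)
       + weights["environmental"] * normalise(environmental)
       + weights["social"]        * normalise(social))

# Cells scoring above a threshold count as "compatible" with the target future.
compatible = index >= 0.6
print(f"{compatible.mean():.0%} of grid cells meet the threshold")
```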
Abstract:
We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine the uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics, and a polynomial-based whole-lens model. The models were used to describe the in vitro shape for a data set of twenty human lenses with ages 7–82 years. The two-conic-surface model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for the data set, indicating that they can most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surface model is unable to describe the lens equatorial region). Considerable differences were found between some models regarding estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities across the five models considered. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, with the asphericities usually being positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty in selecting an appropriate model for crystalline lens shape.
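To illustrate one ingredient of this kind of analysis, the sketch below fits a single conic surface, with sag z(y) = y² / (R (1 + √(1 − (1+Q) y²/R²))) for vertex radius R and asphericity Q, and bootstraps over the data points to estimate uncertainty. It is a simplification on synthetic data: the paper fits orthogonal distances and whole-lens models, whereas this uses ordinary (vertical-residual) least squares.

```python
# Conic fit with bootstrap confidence intervals (synthetic data, assumed setup).
import numpy as np
from scipy.optimize import least_squares

def conic_sag(y, R, Q):
    """Sag of a conic with vertex radius R and asphericity Q."""
    return y**2 / (R * (1 + np.sqrt(1 - (1 + Q) * y**2 / R**2)))

rng = np.random.default_rng(1)
y = np.linspace(-3.0, 3.0, 60)                       # mm, 6 mm zone
z = conic_sag(y, R=10.0, Q=-0.5) + rng.normal(0, 0.005, y.size)

def residuals(params, y, z):
    return conic_sag(y, *params) - z

fit = least_squares(residuals, x0=[8.0, 0.0], args=(y, z))
R_hat, Q_hat = fit.x

# Bootstrap: refit on resampled points to estimate parameter uncertainty.
boot = []
for _ in range(500):
    idx = rng.integers(0, y.size, y.size)
    boot.append(least_squares(residuals, x0=fit.x, args=(y[idx], z[idx])).x)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"R = {R_hat:.2f} mm (95% CI {lo[0]:.2f}-{hi[0]:.2f})")
print(f"Q = {Q_hat:.2f}    (95% CI {lo[1]:.2f}-{hi[1]:.2f})")
```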
Abstract:
The selection criteria for contractor pre-qualification are characterized by the co-existence of both quantitative and qualitative data. The qualitative data are non-linear, uncertain and imprecise. An ideal decision support system for contractor pre-qualification should be able to handle both quantitative and qualitative data, and to map the complicated non-linear relationships among the selection criteria, such that rational and consistent decisions can be made. In this research paper, an artificial neural network model was developed to assist public clients in identifying suitable contractors for tendering. The pre-qualification criteria (variables) were identified for the model. One hundred and twelve real pre-qualification cases were collected from civil engineering projects in Hong Kong, and eighty-eight hypothetical pre-qualification cases were also generated according to the "if-then" rules used by professionals in the pre-qualification process; the results of the analysis fully comply with the current practice of public developers in Hong Kong. Each pre-qualification case consisted of input ratings for candidate contractors' attributes and their corresponding pre-qualification decisions. The training of the neural network model was accomplished using a purpose-developed program, in which a conjugate gradient descent algorithm was incorporated to improve the learning performance of the network. Cross-validation was applied to estimate the generalization errors based on re-sampling of training pairs. The case studies show that the artificial neural network model is suitable for mapping the complicated non-linear relationship between contractors' attributes and the corresponding pre-qualification (disqualification) decisions, and it can be concluded that the model is an ideal alternative for performing the contractor pre-qualification task.
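A minimal sketch of this kind of model follows: a small feed-forward network mapping contractor attribute ratings to a prequalify/disqualify decision, evaluated with cross-validation. The data and the toy "if-then" labelling rule are synthetic assumptions, and note that scikit-learn's MLP trains with Adam or L-BFGS rather than the conjugate gradient descent algorithm used in the paper.

```python
# Feed-forward classifier for pre-qualification on synthetic attribute ratings.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_cases, n_criteria = 200, 8            # e.g. finance, experience, safety...
X = rng.uniform(1, 5, size=(n_cases, n_criteria))   # attribute ratings 1-5

# Toy "if-then" rule standing in for expert judgement: prequalify when the
# mean rating is high and no single criterion falls below a floor.
y = ((X.mean(axis=1) > 3.0) & (X.min(axis=1) > 1.5)).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # re-sampled generalisation estimate
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```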
Abstract:
Intuitively, any "bag of words" approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural-language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stable distribution of the query. Modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with, or better than, those of more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
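The core computation can be sketched as follows: build a term co-occurrence Markov chain from a text, smooth it so the chain is ergodic, and take its stationary distribution (found here by power iteration) as the language model in place of raw term frequencies. The damping constant and the adjacent-word co-occurrence window are illustrative choices, not the paper's exact construction.

```python
# Stationary distribution of a smoothed term co-occurrence Markov chain.
import numpy as np

def stationary_term_model(tokens, damping=0.85, tol=1e-10):
    vocab = sorted(set(tokens))
    ix = {t: i for i, t in enumerate(vocab)}
    n = len(vocab)
    C = np.zeros((n, n))
    for a, b in zip(tokens, tokens[1:]):          # adjacent co-occurrence
        C[ix[a], ix[b]] += 1
        C[ix[b], ix[a]] += 1
    rows = C.sum(axis=1, keepdims=True)
    # Row-normalise; rows with no co-occurrences fall back to uniform.
    P = np.divide(C, rows, out=np.full_like(C, 1.0 / n), where=rows > 0)
    P = damping * P + (1 - damping) / n           # smoothing makes P ergodic
    pi = np.full(n, 1.0 / n)
    while True:                                    # power iteration
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return dict(zip(vocab, nxt))
        pi = nxt

model = stationary_term_model("the cat sat on the mat the cat".split())
print(model)   # stationary probabilities stand in for raw term frequencies
```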