Abstract:
Ghrelin is a gut-brain peptide hormone that induces appetite, stimulates the release of growth hormone, and has recently been shown to ameliorate inflammation. Recent studies have suggested that ghrelin may play a potential role in inflammation-related diseases such as inflammatory bowel disease (IBD). A previous study of ghrelin in the TNBS mouse model of colitis demonstrated that ghrelin treatment decreased the clinical severity of colitis and inflammation and prevented the recurrence of disease. Ghrelin may act at both the immunological and the epithelial level, as the ghrelin receptor (GHSR) is expressed by immune cells and intestinal epithelial cells. The current project investigated the effect of ghrelin in a different mouse model of colitis induced by dextran sodium sulphate (DSS), a luminal toxin. Two molecular weight forms of DSS (5 kDa and 40 kDa) were used as they give differing effects. Ghrelin treatment significantly improved clinical colitis scores (p=0.012) in the C57BL/6 mouse strain with colitis induced by 2% DSS (5 kDa). Treatment with ghrelin suppressed colitis in the proximal colon, as indicated by reduced cumulative histopathology scores (p=0.03). Whilst there was a trend toward reduced scores in the mid and distal colon in these mice, this did not reach significance. Ghrelin did not affect histopathology scores in the 40 kDa model. There was no significant effect on the number of regulatory T cells or on TNF-α secretion from cultured lymph node cells from these mice. The discovery of C-terminal ghrelin peptides, for example obestatin and the peptide derived from exon 4-deleted proghrelin (Δ4 preproghrelin peptide), has raised questions regarding their potential role in biological functions. The current project investigated the effect of the Δ4 peptide in the DSS model of colitis; however, no significant suppression of colitis was observed.
In vitro epithelial wound healing assays were also undertaken to determine the effect of ghrelin on intestinal epithelial cell migration. Ghrelin did not significantly improve wound healing in these assays. In conclusion, ghrelin treatment displays a mild anti-inflammatory effect in the 5kDa DSS model. The potential mechanisms behind this effect and the disparity between these results and those published previously will be discussed.
Abstract:
It has long been hypothesised that negative feedback should be very useful for improving the performance of information filtering systems; however, effective models supporting this hypothesis have been lacking. This paper proposes an effective model that uses negative relevance feedback, based on a pattern mining approach, to improve extracted features. The study focuses on two main issues in using negative relevance feedback: the selection of constructive negative examples to reduce the space of negative examples, and the revision of existing features based on the selected negative examples. The former selects offender documents, i.e. negative documents that are most likely to be classified in the positive group. The latter partitions the extracted features into three groups (positive specific, general, and negative specific) so that their weights can be updated easily. An iterative algorithm is also proposed to implement this approach on the RCV1 data collection, and substantial experiments show that the proposed approach achieves encouraging performance.
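The offender selection and three-way feature grouping described above can be sketched as follows. The function names, the scoring rule, and the boost/penalty factors are illustrative assumptions, not the paper's actual formulation:

```python
def select_offenders(negative_docs, score, k=3):
    """Rank negative documents by the current relevance score and keep
    the top-k: the negatives most likely to be classified as positive
    ("offender" documents)."""
    return sorted(negative_docs, key=score, reverse=True)[:k]

def revise_weights(weights, positive_docs, offenders, penalty=0.5, boost=1.5):
    """Partition features into positive specific, general and negative
    specific groups, then boost, keep, or penalise their weights."""
    pos_terms = set(t for d in positive_docs for t in d)
    neg_terms = set(t for d in offenders for t in d)
    revised = {}
    for term, w in weights.items():
        if term in pos_terms and term not in neg_terms:
            revised[term] = w * boost      # positive specific
        elif term in pos_terms and term in neg_terms:
            revised[term] = w              # general
        else:
            revised[term] = w * penalty    # negative specific
    return revised
```

Iterating these two steps (re-score, re-select offenders, re-weight) gives the shape of the iterative algorithm the paper describes.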
Abstract:
We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
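The generic use of KEMs for one-round key exchange can be sketched as below. The "KEM" here is a toy built from hashing, purely to illustrate the message flow; it is not secure, and the protocols analysed in the paper instantiate encapsulation/decapsulation with provably secure KEM constructions:

```python
import hashlib
import os

# Toy KEM interface for illustration only -- NOT real cryptography.
def kem_keygen():
    sk = os.urandom(16)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def kem_encaps(pk):
    """Return (ciphertext, shared secret) encapsulated to pk."""
    ct = os.urandom(16)
    return ct, hashlib.sha256(pk + ct).digest()

def kem_decaps(sk, ct):
    """Recover the shared secret from a ciphertext."""
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ct).digest()

def session_key(ss_to_a, ss_to_b, id_a, id_b):
    """Both parties derive the session key from the two KEM secrets
    and the party identities."""
    return hashlib.sha256(ss_to_a + ss_to_b + id_a + id_b).digest()
```

In one round, each party encapsulates to the other's long-term public key and sends the resulting ciphertext; both sides then compute the same session key from the two shared secrets.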
Groundwater flow model of the Logan River alluvial aquifer system, Josephville, South East Queensland
Abstract:
The study focuses on an alluvial plain situated within a large meander of the Logan River at Josephville, near Beaudesert, which supports a factory that processes gelatine. The plant draws water for its production processes from on-site bores as well as the Logan River, and produces approximately 1.5 ML per day of waste water containing high levels of dissolved ions (Douglas Partners, 2004). At present, a series of treatment ponds is used to aerate the waste water, reducing the level of organic matter; the water is then used to irrigate grazing land around the site. Within the study, the hydrogeology is investigated, a conceptual groundwater model is produced, and a numerical groundwater flow model is developed from this. On the site are several bores that access groundwater, plus a network of monitoring bores. Assessment of drilling logs shows the area is formed from a mixture of poorly sorted Quaternary alluvial sediments, with a laterally continuous aquifer composed of coarse sands and fine gravels that is in contact with the river. This aquifer occurs at a depth of between 11 and 15 metres and is overlain by a heterogeneous mixture of silts, sands and clays. The study investigates the degree of interaction between the river and the groundwater within the fluvially derived sediments, for reasons of both environmental monitoring and sustainability of the potential local groundwater resource. A conceptual hydrogeological model of the site proposes two hydrostratigraphic units: a basal aquifer of coarse-grained materials overlain by a thick semi-confining unit of finer materials. From this, a two-layer groundwater flow model and hydraulic conductivity distribution were developed, based on bore monitoring and rainfall data, using MODFLOW (McDonald and Harbaugh, 1988) and PEST (Doherty, 2004) within the GMS 6.5 software (EMSI, 2008). A second model was also considered, with the alluvium represented as a single hydrogeological unit.
Both models were calibrated to steady state conditions, and sensitivity analyses of the parameters demonstrated that both models are very stable for changes in the range of ±10% for all parameters, and still reasonably stable for changes up to ±20%, with RMS errors in the model always less than 10%. The preferred two-layer model was found to give the more realistic representation of the site: water level variations and the numerical modelling showed that the basal layer of coarse sands and fine gravels is hydraulically connected to the river, and that the upper layer, comprising a poorly sorted mixture of silt-rich clays and sands of very low permeability, limits infiltration from the surface to the lower layer. The paucity of historical data has limited the numerical modelling to a steady state model based on groundwater levels during a drought period, and forecasts for varying hydrological conditions (e.g. short-term as well as prolonged dry and wet conditions) cannot reasonably be made from such a model. If future modelling is to be undertaken, it is necessary to establish a regular program of groundwater monitoring and maintain a long-term database of water levels to enable a transient model to be developed at a later stage. This will require a valid monitoring network to be designed, with additional bores required for adequate coverage of the hydrogeological conditions at the Josephville site. Further investigations would also be enhanced by pump testing to investigate the hydrogeological properties of the aquifer.
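The ±10%/±20% parameter sensitivity check described above can be sketched generically. Here `run_model` is a hypothetical stand-in for a calibrated model run (e.g. a MODFLOW simulation) returning simulated heads at the monitoring bores; the names are illustrative assumptions:

```python
import math

def rms_error(simulated, observed):
    """Root-mean-square misfit between simulated and observed heads."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

def sensitivity(run_model, params, observed, fractions=(-0.2, -0.1, 0.1, 0.2)):
    """Perturb each calibrated parameter in turn by the given fractions
    and record the resulting RMS error against observed heads."""
    results = {}
    for name in params:
        for f in fractions:
            trial = dict(params)
            trial[name] = params[name] * (1 + f)
            results[(name, f)] = rms_error(run_model(trial), observed)
    return results
```

A model is "stable" in the sense used above if none of the perturbed runs pushes the RMS error past the acceptance threshold (here, 10%).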
Abstract:
Objective: The emergency medical system (EMS) can be defined as a comprehensive, coordinated and integrated system of care for patients suffering acute illness and injury. The aim of the present paper is to describe the evolution of the Queensland Emergency Medical System (QEMS) and to recommend a strategic national approach to EMS development. Methods: Following the formation of the Queensland Ambulance Service in 1991, a state EMS committee was established. This committee led the development and approval of the cross-portfolio QEMS policy framework, which has resulted in dynamic policy development, system monitoring and evaluation. This framework is led by the Queensland Emergency Medical Services Advisory Committee. Results: There has been considerable progress in the development of all aspects of the EMS in Queensland. These developments have derived from the improved coordination and leadership that QEMS provides and have resulted in widespread satisfaction among both patients and stakeholders. Conclusions: The strategic approach outlined in the present paper offers a model for EMS arrangements throughout Australia. We propose that the Council of Australian Governments should require each state and territory to maintain an EMS committee. These state EMS committees should have a broad portfolio of responsibilities. They should provide leadership and direction for the development of the EMS and ensure coordination and quality of outcomes. A national EMS committee with broad representation and broad scope should be established to coordinate the national development of Australia's EMS.
An indexing model for sustainable urban environmental management: the case of Gold Coast, Australia
Abstract:
Improving urban ecosystems and the quality of life of citizens has become a central issue in the global effort to create sustainable built environments. As human beings, our lives depend entirely on the sustainability of nature, and we need to protect and manage natural resources more sustainably in order to sustain our existence. As a result of population growth and rapid urbanisation, the increasing demand for productivity depletes and degrades natural resources. This increasing activity and rapid development require ever more resources, and therefore ecological planning becomes an essential vehicle for preserving scarce natural resources. This paper aims to identify the interaction between urban ecosystems and human activities in the context of urban sustainability, and explores the degrading environmental impacts of this interaction and the necessity and benefits of using sustainability indicators as a tool in sustainable urban environmental management. Additionally, the paper introduces an environmental sustainability indexing model (ASSURE) as an innovative approach to evaluating the environmental conditions of the built environment.
Abstract:
In the age of climate change and rapid urbanisation, stormwater management and water sensitive urban design have become important issues for urban policy makers. This paper reports the initial findings of a research study that develops an indexing model for assessing stormwater quality in the Gold Coast.
Abstract:
The broad definition of sustainable development at the early stage of its introduction caused confusion and hesitation among local authorities and planning professionals. The main difficulty lies in employing loosely defined principles of sustainable development when setting policies and goals. How this theory/rhetoric-practice gap can be filled is the theme of this study. Triple bottom line, one of the sustainability accounting approaches most widely employed by governmental organisations, and the applicability of this approach to sustainable urban development policies will be examined. By incorporating triple bottom line considerations into environmental impact assessment techniques, a framework for a GIS-based decision support system that helps decision-makers select policy options according to their economic, environmental and social impacts will be introduced. In order to embrace sustainable urban development policy considerations, the relationship between urban form, travel patterns and socio-economic attributes should be clarified. This clarification, associated with other input decision support systems, will picture the holistic state of the urban setting in terms of sustainability. In this study, a grid-based indexing methodology will be employed to visualise the degree of compatibility of selected scenarios with the designated sustainable urban future. In addition, this tool will provide valuable knowledge about the spatial dimension of sustainable development. It will also give fine detail about the possible impacts of urban development proposals by employing disaggregated spatial data analysis (e.g. land use, transportation, urban services, population density, pollution, etc.). The visualisation capacity of this tool will help decision-makers and other stakeholders compare and select alternatives for future urban development.
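The grid-based indexing step can be sketched as a weighted aggregation of normalised indicator values per grid cell. The indicator names and weights below are hypothetical, and the handling of "negative" indicators (where a higher raw value is worse) is omitted for brevity:

```python
def grid_index(cells, weights):
    """Composite sustainability score per grid cell: min-max normalise
    each indicator across the grid, then take a weighted sum.
    `cells` is a list of dicts mapping indicator name -> raw value."""
    names = list(weights)
    lo = {n: min(c[n] for c in cells) for n in names}
    hi = {n: max(c[n] for c in cells) for n in names}
    scores = []
    for c in cells:
        s = 0.0
        for n in names:
            span = hi[n] - lo[n]
            s += weights[n] * ((c[n] - lo[n]) / span if span else 0.0)
        scores.append(s)
    return scores
```

The resulting per-cell scores are what a grid-based visualisation would colour to show the degree of compatibility of each scenario with the designated sustainable urban future.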
Abstract:
The selection criteria for contractor pre-qualification are characterised by the co-existence of both quantitative and qualitative data. The qualitative data are non-linear, uncertain and imprecise. An ideal decision support system for contractor pre-qualification should have the ability to handle both quantitative and qualitative data, and to map the complicated non-linear relationships among the selection criteria, such that rational and consistent decisions can be made. In this research paper, an artificial neural network model was developed to assist public clients in identifying suitable contractors for tendering. The pre-qualification criteria (variables) were identified for the model. One hundred and twelve real pre-qualification cases were collected from civil engineering projects in Hong Kong, and eighty-eight hypothetical pre-qualification cases were also generated according to the “If-then” rules used by professionals in the pre-qualification process. Each pre-qualification case consisted of input ratings for candidate contractors’ attributes and their corresponding pre-qualification decisions. The training of the neural network model was accomplished using the developed program, in which a conjugate gradient descent algorithm was incorporated to improve the learning performance of the network. Cross-validation was applied to estimate the generalisation errors based on re-sampling of the training pairs. The results of the analysis comply fully with current practice among public developers in Hong Kong. The case studies show that the artificial neural network model is suitable for mapping the complicated non-linear relationship between contractors’ attributes and their corresponding pre-qualification (or disqualification) decisions. The artificial neural network model can therefore be considered an ideal alternative for performing the contractor pre-qualification task.
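A minimal sketch of such a feed-forward model is shown below in Python/NumPy. It uses plain gradient descent on a cross-entropy loss rather than the conjugate gradient algorithm of the study, and the attribute data it would be trained on are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=4, lr=0.5, epochs=3000):
    """One-hidden-layer network mapping attribute ratings X to a binary
    pre-qualification decision y, trained by full-batch gradient descent."""
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)       # hidden activations
        out = sigmoid(h @ W2 + b2)     # qualification probability
        d_out = out - y                # cross-entropy + sigmoid gradient
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

The hidden layer is what lets the model capture the non-linear attribute-to-decision mapping that a linear scoring rule cannot.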
Abstract:
Intuitively, any `bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate), the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
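The core computation (turning a term co-occurrence matrix into an ergodic Markov chain and taking its stationary distribution as the language model) can be sketched as follows. The uniform-jump smoothing parameter `alpha` is an assumption used here to guarantee ergodicity, not necessarily the paper's construction:

```python
import numpy as np

def stationary_distribution(cooc, alpha=0.1, tol=1e-12):
    """Row-normalise a term co-occurrence matrix into a Markov chain,
    mix in a uniform jump (alpha) so the chain is ergodic, and
    power-iterate to its unique stationary distribution."""
    cooc = np.asarray(cooc, dtype=float)
    n = cooc.shape[0]
    rows = cooc.sum(axis=1, keepdims=True)
    # Terms with no co-occurrences get a uniform transition row.
    P = np.where(rows > 0, cooc / np.where(rows == 0, 1, rows), 1.0 / n)
    P = (1 - alpha) * P + alpha / n          # ergodic smoothing
    pi = np.full(n, 1.0 / n)
    while True:
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
```

Because the smoothed chain is ergodic, the iteration converges to the same distribution regardless of the starting vector, which is exactly the property the paper exploits when replacing the initial query/document distributions.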
Abstract:
To date, most theories of business models have theorised value capture by assuming that appropriability regimes are exogenous and that the firm faces a unique, ideal-typical appropriability regime. This has led theoretical contributions to focus on governance structures to minimise transaction costs, to downplay the interdependencies between value capture and value creation, and to ignore revenue generation strategies. We propose a reconceptualisation of business models' value capture mechanisms that relies on assumptions of endogeneity and multiplicity of appropriability regimes. This new approach to business model construction highlights the interdependencies and trade-offs between value creation and value capture offered by different types and combinations of appropriability regimes. The theory is illustrated by the analysis of three cases of open source software business models.
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operational downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing these challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and to estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are presented as the training target and training input, respectively, to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input.
Although the proposed concept may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model against four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model but neglecting suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with a useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecast. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds promise for increased asset availability, maintenance cost-effectiveness, operational safety and, ultimately, organisational competitiveness.
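The suspension-aware survival estimation at the heart of the approach can be sketched with the textbook Kaplan-Meier estimator, which already distinguishes failures from suspensions (censored units); note this is the standard estimator, not the adapted variant the thesis develops:

```python
def kaplan_meier(times, failed):
    """Kaplan-Meier survival estimate. `times` holds failure or
    suspension times; `failed[i]` is True for a failure, False for a
    suspension (censored unit, which leaves the at-risk set without
    counting as a death). Returns (time, survival probability) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += failed[order[i]]    # True counts as 1
            removed += 1
            i += 1
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve
```

In the proposed model, survival probabilities of this kind serve as the neural network's training targets, with the corresponding condition histories as training inputs.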
Abstract:
Objective: In an effort to examine the declining oral health of Australian dental patients, the Health Belief Model (HBM) was used to understand the beliefs underlying brushing and flossing self-care. The HBM states that perceptions of the severity of, and susceptibility to, inaction, together with an estimate of the barriers to and benefits of behavioural performance, influence people’s health behaviours. Self-efficacy, confidence in one’s ability to perform oral self-care, was also examined. Methods: In dental waiting rooms, a community sample (N = 92) of dental patients completed a questionnaire assessing HBM variables and self-efficacy, as well as their performance of the oral hygiene behaviours of brushing and flossing. Results: Only partial support was found for the HBM, with barriers emerging as the sole HBM factor influencing brushing and flossing behaviours. Self-efficacy also significantly predicted both oral hygiene behaviours. Conclusion: Support was found for the control factors, specifically consideration of barriers and self-efficacy, in understanding dental patients’ oral hygiene decisions. Practice implications: Dental professionals should encourage patients’ self-confidence to brush and floss at recommended levels and discuss strategies to combat barriers to performance, rather than emphasising the risks of inaction or the benefits of oral self-care.