867 results for Expected satiety


Relevance: 10.00%

Abstract:

This is a comprehensive study of human kidney proximal tubular epithelial cells (PTEC), which are known to respond to and mediate the pathological processes of a range of kidney diseases. It identifies various molecules expressed by PTEC and shows how these molecules participate in down-regulating the inflammatory process, thereby highlighting their clinical potential for treating various kidney diseases. In the disease state, PTEC gain the ability to regulate the immune cell responses present within the interstitium. This down-regulation is a complex interaction of contact-dependent and contact-independent mechanisms involving various immuno-regulatory molecules, including PD-L1, sHLA-G and IDO. The overall outcome of this down-regulation is suppressed dendritic cell (DC) maturation, decreased numbers of antibody-producing B cells and reduced T cell responses. In a clinical setting, these effects are expected to dampen ongoing inflammation, preventing damage to kidney tissue.

Relevance: 10.00%

Abstract:

Traffic is one of the prominent sources of polycyclic aromatic hydrocarbons (PAHs), and road surfaces are the most critical platform for stormwater pollution. The build-up of pollutants on road surfaces was the focus of this research study. The study found that PAH build-up on road surfaces primarily originates from traffic activities, specifically gasoline-powered vehicles. Other sources such as diesel vehicles, industrial oil combustion and incineration were also found to contribute to the PAH build-up. Additionally, the study explored the linkages between PAH concentrations and traffic characteristics such as traffic volume, vehicle mix and traffic flow. While traffic congestion was found to be positively correlated with 5-ring and 6-ring PAHs in road build-up, it was negatively correlated with 3-ring and 4-ring PAHs. The absence of a positive correlation between 3-ring and 4-ring PAHs and traffic parameters is attributed to the propensity of these relatively volatile PAHs to undergo re-suspension and evaporation. The outcomes of this study are expected to contribute to effective transport and land use planning for the prevention of PAH pollution in the urban environment.

Relevance: 10.00%

Abstract:

The philosophical promise of community development to “resource and empower people so that they can collectively control their own destinies” (Kenny 1996:104) is no doubt alluring to Indigenous Australia. Given the historical and contemporary experiences of colonial control and surveillance of Aboriginal bodies, alongside the continuing experiences of socio-economic disadvantage, community development reaffirms the aspirational goal of Indigenous Australians for self-determination. Self-determination as a national policy agenda for Indigenous Australians emerged in the 1970s and saw the establishment of a wide range of Aboriginal community-controlled services (Tsey et al 2012). Sullivan (2010:4) argues that the Aboriginal community controlled service sector during this time has, and continues to be, instrumental to advancing the plight of Indigenous Australians both materially and politically. Yet community development and self-determination remain highly problematic and contested in how they manifest in Indigenous social policy agendas and in practice (Hollinsworth 1996; Martin 2003; McCausland 2005; Moreton-Robinson 2009). Moreton-Robinson (2009:68) argues that a central theme underpinning these tensions is a reading of Indigeneity in which Aboriginal and Torres Strait Islander people, behaviours, cultures, and communities are pathologised as “dysfunctional” thus enabling assertions that Indigenous people are incapable of managing their own affairs. This discourse distracts us from the “strategies and tactics of patriarchal white sovereignty” that inhibit the “state’s earlier policy of self-determination” (Moreton-Robinson 2009:68). 
We acknowledge the irony of community development espoused by Ramirez (1990) above, that the least resourced are expected to be most resourceful; however, we wish to interrogate the processes that inhibit Indigenous participation in and control of our own affairs rather than further interrogate Aboriginal minds as uneducated, incapable and/or impaired...

Relevance: 10.00%

Abstract:

In 2012, the Australian Council of Deans of Education (ACDE), through the Queensland University of Technology, led a MATSITI project focusing on issues related to the retention, support and graduation of Aboriginal and Torres Strait Islander teachers in initial Teacher Education programs across Australia. While some of the barriers that impact on the graduation of Aboriginal and Torres Strait Islander teachers are well known, this was the first large-scale Australian study to look at the issues nationally and in depth. Thirty-four Teacher Education programs across the country were audited, meetings were held in each state, both Aboriginal and Torres Strait Islander and non-Indigenous Faculty were consulted, and approximately 70 Aboriginal and Torres Strait Islander pre-service teachers were interviewed. This paper reports on the outcomes of that project, including the evidence that while recruitment into Teacher Education has, in some sites, reached parity, retention rates are well below what is expected across the nation. The paper focuses both on the quantitative data and, even more significantly, on the voices of the pre-service teachers themselves, offering insights into the ways forward. As a result of this study, Deans and Heads of School of Teacher Education programs across the country have developed Action Plans alongside their university's Indigenous Higher Education Centres to improve support and retention of Aboriginal and Torres Strait Islander teachers.

Relevance: 10.00%

Abstract:

Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. 
This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
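The strategy comparison above can be illustrated with a toy discrete-time model. Everything below — the `simulate` function, its parameter values, and the noise model — is an invented sketch, not the study's calibrated Lotka-Volterra system; the upper-trigger harvest is implemented as culling predators back to the trigger whenever they exceed it.

```python
import random

def simulate(trigger=None, steps=200, seed=1):
    """Toy discrete-time stochastic Lotka-Volterra system with logistic
    prey growth. If `trigger` is set, predators exceeding it are culled
    back down to the trigger (upper-trigger harvest). All parameter
    values are illustrative."""
    rng = random.Random(seed)
    prey, pred = 100.0, 20.0
    min_prey = prey
    for _ in range(steps):
        noise = rng.gauss(1.0, 0.05)          # environmental stochasticity
        d_prey = 0.3 * prey * (1 - prey / 500) - 0.02 * prey * pred
        d_pred = 0.002 * prey * pred - 0.1 * pred
        prey = max(prey + d_prey * noise, 0.0)
        pred = max(pred + d_pred * noise, 0.0)
        if trigger is not None and pred > trigger:
            pred = trigger                    # remove only the excess
        min_prey = min(min_prey, prey)
    return prey, pred, min_prey
```

Tracking `min_prey` mirrors the study's metric: strategies are compared by the expected minimum prey population size they deliver per amount spent.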

Relevance: 10.00%

Abstract:

There is a concern that high densities of elephants in southern Africa could lead to the overall reduction of other forms of biodiversity. We present a grid-based model of elephant-savanna dynamics, which differs from previous elephant-vegetation models by accounting for woody plant demographics, tree-grass interactions, stochastic environmental variables (fire and rainfall), and spatial contagion of fire and tree recruitment. The model projects changes in height structure and spatial pattern of trees over periods of centuries. The vegetation component of the model produces long-term tree-grass coexistence, and the emergent fire frequencies match those reported for southern African savannas. Including elephants in the savanna model had the expected effect of reducing woody plant cover, mainly via increased adult tree mortality, although at an elephant density of 1.0 elephant/km2, woody plants still persisted for over a century. We tested three different scenarios in addition to our default assumptions. (1) Reducing mortality of adult trees after elephant use, mimicking a more browsing-tolerant tree species, mitigated the detrimental effect of elephants on the woody population. (2) Coupling germination success (increased seedling recruitment) to elephant browsing further increased tree persistence, and (3) a faster growing woody component allowed some woody plant persistence for at least a century at a density of 3 elephants/km2. Quantitative models of the kind presented here provide a valuable tool for exploring the consequences of management decisions involving the manipulation of elephant population densities. © 2005 by the Ecological Society of America.
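A stripped-down, non-spatial caricature of the elephant effect can make the density dependence concrete. The function below ignores fire, rainfall, height structure, and spatial contagion from the full grid-based model; all rates and the 2000-tree carrying capacity are invented for illustration. Elephants simply add to annual adult mortality in proportion to their density.

```python
def tree_trajectory(elephant_density, years=100, trees=1000.0,
                    recruit=0.05, base_mort=0.03, eleph_mort=0.02):
    """Adult tree count under annual logistic recruitment and mortality,
    where elephants add `eleph_mort` extra mortality per animal/km^2.
    Deterministic toy model, far simpler than the grid-based model."""
    history = [trees]
    for _ in range(years):
        mortality = base_mort + eleph_mort * elephant_density
        trees = max(trees + recruit * trees * (1 - trees / 2000.0)
                    - mortality * trees, 0.0)
        history.append(trees)
    return history
```

Even this caricature reproduces the qualitative pattern reported above: at 1.0 elephant/km² the woody population declines slowly but persists beyond a century, while at 3 elephants/km² it is driven much lower over the same horizon.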

Relevance: 10.00%

Abstract:

The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or on setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of absent surveys required to minimize the net expected cost. Given detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution using stochastic dynamic programming. Application of the approach to the eradication programme of Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter. © 2006 Blackwell Publishing Ltd/CNRS.

Relevance: 10.00%

Abstract:

It is becoming increasingly popular to consider species interactions when managing ecological foodwebs. Such an approach is useful in determining how management can affect multiple species, with either beneficial or detrimental consequences. Identifying such actions is particularly valuable in the context of conservation decision making as funding is severely limited. This paper outlines a new approach that simplifies the resource allocation problem in a two species system for a range of species interactions: independent, mutualism, predator-prey, and competitive exclusion. We assume that both species are endangered and we do not account for decisions over time. We find that optimal funding allocation is to the conservation of the species with the highest marginal gain in expected probability of survival and that, across all except mutualist interaction types, optimal conservation funding allocation differs between species. Loss in efficiency from ignoring species interactions was most severe in predator-prey systems. The funding problem we address, where an ecosystem includes multiple threatened species, will only become more commonplace as increasing numbers of species worldwide become threatened. © 2011 Elsevier B.V.
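The marginal-gain allocation idea can be sketched as a greedy split of a fixed budget between two independent species. The `allocate` function and the survival-probability curves below are illustrative assumptions, not the paper's model: each budget increment goes to whichever species currently offers the larger marginal gain in survival probability.

```python
import math

def allocate(budget, survival_a, survival_b, step=1.0):
    """Greedy sketch: give each budget increment to whichever species has
    the higher marginal gain in survival probability. The survival curves
    are caller-supplied functions of money spent."""
    spent_a = spent_b = 0.0
    while spent_a + spent_b + step <= budget:
        gain_a = survival_a(spent_a + step) - survival_a(spent_a)
        gain_b = survival_b(spent_b + step) - survival_b(spent_b)
        if gain_a >= gain_b:
            spent_a += step
        else:
            spent_b += step
    return spent_a, spent_b

# Illustrative diminishing-returns curves: species A responds strongly
# to funding, species B only weakly.
species_a = lambda x: 1 - math.exp(-0.10 * x)
species_b = lambda x: 1 - math.exp(-0.02 * x)
```

With these curves and a budget of 10, every increment goes to species A, whose marginal gain stays higher throughout — matching the finding above that optimal funding concentrates on the species with the highest marginal gain.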

Relevance: 10.00%

Abstract:

Threatened species often exist in a small number of isolated subpopulations. Given limitations on conservation spending, managers must choose from strategies that range from managing just one subpopulation and risking all other subpopulations to managing all subpopulations equally and poorly, thereby risking the loss of all subpopulations. We took an economic approach to this problem in an effort to discover a simple rule of thumb for optimally allocating conservation effort among subpopulations. This rule was derived by maximizing the expected number of extant subpopulations remaining given n subpopulations are actually managed. We also derived a spatiotemporally optimized strategy through stochastic dynamic programming. The rule of thumb suggested that more subpopulations should be managed if the budget increases or if the cost of reducing local extinction probabilities decreases. The rule performed well against the exact optimal strategy that was the result of the stochastic dynamic program and much better than other simple strategies (e.g., always manage one extant subpopulation or half of the remaining subpopulations). We applied our approach to the allocation of funds in 2 contrasting case studies: reduction of poaching of Sumatran tigers (Panthera tigris sumatrae) and habitat acquisition for San Joaquin kit foxes (Vulpes macrotis mutica). For our estimated annual budget for Sumatran tiger management, the mean time to extinction was about 32 years. For our estimated annual management budget for kit foxes in the San Joaquin Valley, the mean time to extinction was approximately 24 years. Our framework allows managers to deal with the important question of how to allocate scarce conservation resources among subpopulations of any threatened species. © 2008 Society for Conservation Biology.
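A toy version of such a rule of thumb: assume each managed subpopulation pays a fixed setup cost and its extinction probability decays exponentially with the remaining per-subpopulation spend (both assumptions, and every number below, are invented for illustration, not the paper's model). Maximizing the expected number of extant subpopulations over n then reproduces the qualitative rule that a larger budget favours managing more subpopulations.

```python
import math

def expected_extant(n, total, budget, base_ext=0.8, fixed=1.5, k=0.5):
    """Expected number of surviving subpopulations when n of `total` are
    managed under an equal budget split. Each managed subpopulation pays
    a fixed setup cost; its extinction probability decays exponentially
    with the remaining spend. Unmanaged ones keep the base rate."""
    per = budget / n - fixed
    ext = base_ext * math.exp(-k * per) if per > 0 else base_ext
    return n * (1 - ext) + (total - n) * (1 - base_ext)

def best_n(total, budget, **kwargs):
    """Number of subpopulations to manage maximizing expected survivors."""
    return max(range(1, total + 1),
               key=lambda n: expected_extant(n, total, budget, **kwargs))
```

With these made-up numbers, a budget of 10 favours managing 3 of 5 subpopulations, while doubling the budget to 20 favours managing all 5.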

Relevance: 10.00%

Abstract:

We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum. In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearized notion of loss in this case. We expect that this new way of aggregating lists will find many ranking applications.
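For the disjoint-lists special case, the best combined list in hindsight can be computed with a small dynamic program over segment sizes: precompute, for each base list, how many requests its top-s segment covers, then distribute the total budget N across lists. This sketch covers only the hindsight-optimal computation (not the randomized online algorithm), and all names are hypothetical.

```python
from collections import Counter

def best_combined_misses(base_lists, requests, N):
    """Hindsight-optimal combined list for disjoint base lists: pick a
    top segment from each list, sizes summing to N, maximizing covered
    requests. Returns the resulting number of misses."""
    counts = Counter(requests)
    # prefix[i][s]: requests covered by the top-s segment of list i
    prefix = []
    for lst in base_lists:
        row = [0]
        for item in lst:
            row.append(row[-1] + counts[item])
        prefix.append(row)
    NEG = float("-inf")
    dp = [0] + [NEG] * N          # dp[b]: best coverage using budget b
    for row in prefix:
        new = [NEG] * (N + 1)
        for b in range(N + 1):
            if dp[b] == NEG:
                continue
            for s in range(min(len(row) - 1, N - b) + 1):
                new[b + s] = max(new[b + s], dp[b] + row[s])
        dp = new
    return len(requests) - dp[N]
```

For base lists `['a','b']` and `['c','d']` with N = 2 and the request sequence a, c, a, d, the best combined list takes the top item of each base list and misses only the single request for d.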

Relevance: 10.00%

Abstract:

PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought after so that simulations are set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists in having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of the dynamic agent composition is given in this paper, as well as details about its implementation within MODAM (MODular Agent-based Model), a software framework which is applied to the planning of the electricity distribution network. Illustrations of the implementation of the dynamic agent composition are consequently given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built for the users, the developers, as well as for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix-and-match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program.
Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also made easier by the ability to quickly set up alternative simulations.
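A minimal sketch of the composition idea, with entirely hypothetical component names (this is not MODAM's actual API): an agent is assembled at runtime from atomic behaviour components, so the model is extended by writing new components rather than modifying existing agent code.

```python
class Component:
    """Atomic unit of behaviour; agents are assembled from these at runtime."""
    def step(self, agent):
        raise NotImplementedError

class Consumer(Component):
    def __init__(self, demand):
        self.demand = demand
    def step(self, agent):
        agent.state["load"] = agent.state.get("load", 0.0) + self.demand

class SolarPanel(Component):
    def __init__(self, output):
        self.output = output
    def step(self, agent):
        agent.state["load"] = agent.state.get("load", 0.0) - self.output

class Agent:
    """No subclassing: behaviour comes entirely from composed components,
    so the model grows by adding components, not by editing agent code."""
    def __init__(self, *components):
        self.components = list(components)
        self.state = {}
    def step(self):
        for component in self.components:
            component.step(self)

house = Agent(Consumer(5.0), SolarPanel(2.0))   # composed at runtime
house.step()                                    # net load = demand - generation
```

Adding, say, a battery to the model would mean writing one new `Component` subclass and passing an instance to `Agent(...)` — no previously written code needs to change, which is the extensibility claim made above.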

Relevance: 10.00%

Abstract:

Linear assets are engineering infrastructure, such as pipelines, railway lines, and electricity cables, which span long distances and can be divided into different segments. Optimal management of such assets is critical for asset owners as they normally involve significant capital investment. Currently, Time Based Preventive Maintenance (TBPM) strategies are commonly used in industry to improve the reliability of such assets, as they are easy to implement compared with reliability or risk-based preventive maintenance strategies. Linear assets are normally of large scale and thus their preventive maintenance is costly. Their owners and maintainers are always seeking to optimize their TBPM outcomes in terms of minimizing total expected costs over a long term involving multiple maintenance cycles. These costs include repair costs, preventive maintenance costs, and production losses. A TBPM strategy defines when Preventive Maintenance (PM) starts, how frequently the PM is conducted and which segments of a linear asset are operated on in each PM action. A number of factors such as required minimal mission time, customer satisfaction, human resources, and acceptable risk levels need to be considered when planning such a strategy. However, in current practice, TBPM decisions are often made based on decision makers’ expertise or industrial historical practice, and lack a systematic analysis of the effects of these factors. To address this issue, here we investigate the characteristics of TBPM of linear assets, and develop an effective multiple criteria decision making approach for determining an optimal TBPM strategy. We develop a recursive optimization equation which makes it possible to evaluate the effect of different maintenance options for linear assets, such as the best partitioning of the asset into segments and the maintenance cost per segment.
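As an illustration of evaluating TBPM options, the toy function below compares candidate PM intervals for a single segment by total expected cost over a horizon, assuming a Weibull-style power-law failure intensity that each PM resets (a minimal-repair approximation). All names and parameter values are invented for illustration, not the paper's recursive formulation.

```python
def tbpm_total_cost(interval, horizon, pm_cost, repair_cost,
                    eta=50.0, beta=2.0):
    """Total expected cost of PM every `interval` time units over
    `horizon`, assuming a Weibull power-law failure intensity
    (scale eta, shape beta) reset by each PM, with failures
    handled by minimal repair."""
    cycles = horizon // interval
    expected_failures_per_cycle = (interval / eta) ** beta
    return cycles * (pm_cost + expected_failures_per_cycle * repair_cost)

def best_interval(candidates, **kwargs):
    """Candidate PM interval with the lowest total expected cost."""
    return min(candidates, key=lambda i: tbpm_total_cost(i, **kwargs))
```

With these illustrative numbers (horizon 600, PM cost 10, repair cost 100), an interval of 15 time units minimizes total expected cost among the candidates: short intervals pile up PM costs, long ones pile up repair costs, which is exactly the trade-off a TBPM strategy must balance.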

Relevance: 10.00%

Abstract:

Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate on two distinct problem types the behaviour of the classic and parallel implementations.
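A classic genetic algorithm is easy to sketch; the per-individual fitness evaluation is the part a parallel implementation typically fans out across workers, since individuals are scored independently. The OneMax objective (maximize the number of 1-bits) and all parameters below are illustrative assumptions, not the problems studied in the paper.

```python
import random

def ga_onemax(n_bits=20, pop_size=30, generations=60, seed=0):
    """Minimal classic GA maximizing the number of 1-bits (OneMax).
    In a parallel variant, the fitness evaluation of each individual
    is the natural unit of work to distribute across processes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)        # fitness = number of ones
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n_bits)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = children
    return max(sum(individual) for individual in pop)
```

Fitness here is trivial; when evaluation dominates the runtime, as in realistic data analysis problems, scoring the population via a process pool (e.g. `multiprocessing.Pool.map`) is the textbook parallelisation — with the caveat raised above that the speed-up is hard to estimate a priori.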

Relevance: 10.00%

Abstract:

Do the political values of the general public form a coherent system? What might be the source of coherence? We view political values as expressions, in the political domain, of more basic personal values. Basic personal values (e.g., security, achievement, benevolence, hedonism) are organized on a circular continuum that reflects their conflicting and compatible motivations. We theorize that this circular motivational structure also gives coherence to political values. We assess this theorizing with data from 15 countries, using eight core political values (e.g., free enterprise, law and order) and ten basic personal values. We specify the underlying basic values expected to promote or oppose each political value. We offer different hypotheses for the 12 non-communist and three post-communist countries studied, where the political context suggests different meanings of a basic or political value. Correlation and regression analyses support almost all hypotheses. Moreover, basic values account for substantially more variance in political values than age, gender, education, and income. Multidimensional scaling analyses demonstrate graphically how the circular motivational continuum of basic personal values structures relations among core political values. This study strengthens the assumption that individual differences in basic personal values play a critical role in political thought.

Relevance: 10.00%

Abstract:

Essentialism is an ontological belief that there exists an underlying essence to a category. This article advances and tests in three studies the hypothesis that communication about a social category, and expected or actual mutual validation, promotes essentialism about a social category. In Study 1, people who wrote communications about a social category to their ingroup audiences essentialized it more strongly than those who simply memorized it. In Study 2, communicators whose messages about a novel social category were more elaborately discussed with a confederate showed a stronger tendency to essentialize it. In Study 3, communicators who elaborately talked about a social category with a naive conversant also essentialized the social category. A meta-analysis of the results supported the hypothesis that communication promotes essentialism. Although essentialism has been discussed primarily in perceptual and cognitive domains, the role of social processes as its antecedent deserves greater attention.