Abstract:
Prostate cancer metastasis is reliant on the reciprocal interactions between cancer cells and the bone niche/micro-environment. The production of suitable matrices to study metastasis, carcinogenesis and, in particular, the prostate cancer/bone micro-environment interaction has been limited to specific protein matrices or matrix secreted by immortalised cell lines that may have undergone transformation processes altering signaling pathways and modifying gene or receptor expression. We hypothesize that matrices produced by primary human osteoblasts are a suitable means to develop an in vitro model system for bone metastasis research mimicking in vivo conditions. We have used a decellularized matrix secreted from primary human osteoblasts as a model for prostate cancer function in the bone micro-environment. We show that this collagen I-rich matrix is of fibrillar appearance, highly mineralized, and contains proteins, such as osteocalcin, osteonectin and osteopontin, and growth factors characteristic of bone extracellular matrix (ECM). LNCaP and PC3 cells grown on this matrix adhere strongly, proliferate, and express markers consistent with a loss of epithelial phenotype. Moreover, growth of these cells on the matrix is accompanied by the induction of genes associated with attachment, migration, increased invasive potential, Ca2+ signaling and osteolysis. In summary, we show that growth of prostate cancer cells on matrices produced by primary human osteoblasts mimics key features of prostate cancer bone metastases and thus is a suitable model system to study the tumor/bone micro-environment interaction in this disease.
Abstract:
This paper investigates the Cooroy Mill community precinct (Sunshine Coast, Queensland) as a case study, seeking to understand the way local dynamics interplay and work with community strengths to build a governance model of best fit. As we move to an age of ubiquitous computing and creative economies, the definition of public place and its governance take on new dimensions, which, while often drawing on models of the past, will need to acknowledge and adapt to the direction of the future. This paper considers a newly developed community precinct that has been built on three key principles: to foster creative expression with new media, to establish a knowledge economy in a regional area, and to subscribe to principles of community engagement. The study involved qualitative interviews with key stakeholders and a review of common practice models of governance along a spectrum from community control to state control. The paper concludes with a call for governance structures that are locally situated and tailored, inclusive, engaging, dynamic and flexible in order to build community capacity, encourage creativity, and build knowledge economies within emerging digital media cityscapes.
Abstract:
The osteogenic potential of human adipose-derived precursor cells seeded on medical-grade polycaprolactone-tricalcium phosphate scaffolds was investigated in this in vivo study. Three study groups were investigated: (1) induced—stimulated with osteogenic factors only after seeding into the scaffold; (2) preinduced—induced for 2 weeks before seeding into scaffolds; and (3) uninduced—cells without any induction. For all groups, scaffolds were implanted subcutaneously into the dorsum of athymic rats. The scaffold/cell constructs were harvested at the end of 6 or 12 weeks and analyzed for osteogenesis. Gross morphological examination using scanning electron microscopy indicated good integration of host tissue with scaffold/cell constructs and extensive tissue infiltration into the scaffold interior. Alizarin Red histology and immunostaining showed a heightened level of mineralization and an increase in osteonectin, osteopontin, and collagen type I protein expression in both the induced and preinduced groups compared with the uninduced group. However, no significant differences were observed in these indicators when compared between the induced and preinduced groups.
Abstract:
Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, due to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection evaluation of intelligent HVAC control systems for commercial intelligent buildings. To achieve this, systematic research activities were conducted to first develop, test and refine the general conceptual model using consecutive surveys; then, to convert the developed conceptual framework into a practical model; and, finally, to evaluate the effectiveness of the model by means of expert validation. The results of the surveys show that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach to evaluate candidate intelligent HVAC control systems against the critical selection criteria (CSC), but it also suggests a benchmark for the selection of one control system candidate against another.
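As a loose illustration of how benchmarking candidates against such criteria can work, the following toy Python sketch ranks two hypothetical control systems by a weighted sum over the four top surveyed criteria. The weights and scores are invented for illustration; the paper derives its criteria and weightings from surveys and expert validation.

    # Toy weighted-score ranking of hypothetical HVAC control system
    # candidates; weights and scores are invented, not the paper's CSC data.
    criteria_weights = {
        "total energy use": 0.35,
        "system reliability and stability": 0.30,
        "operating and maintenance costs": 0.20,
        "control of indoor humidity and temperature": 0.15,
    }

    candidates = {
        "system A": {"total energy use": 8, "system reliability and stability": 7,
                     "operating and maintenance costs": 6,
                     "control of indoor humidity and temperature": 9},
        "system B": {"total energy use": 6, "system reliability and stability": 9,
                     "operating and maintenance costs": 8,
                     "control of indoor humidity and temperature": 7},
    }

    for name, scores in candidates.items():
        total = sum(w * scores[c] for c, w in criteria_weights.items())
        print(name, "weighted score:", round(total, 2))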
Abstract:
This paper investigates what happened in one Australian primary school as part of the establishment, use and development of a computer laboratory over a period of two years. As part of a school renewal project, the computer lab was introduced as an ‘innovative’ way to improve the skills of teachers and children in information and communication technologies (ICT) and to lead to curriculum change. However, the way in which the lab was conceptualised and used worked against achieving these goals. The micropolitics of educational change and an input-output understanding of computers meant that change remained structural rather than pedagogical or philosophical.
Abstract:
In an open railway access market, the provision of railway infrastructure and train services is separated and independent. Negotiations between the track owner and train service providers are thus required for the allocation of track capacity and the formulation of service timetables, in which each party, i.e. a stakeholder, draws on previous negotiation experience to obtain favourable terms and conditions for track access. In order to analyse the realistic interacting behaviour among the stakeholders in open railway access market schedule negotiations, intelligent learning capability should be included in the behaviour modelling. This paper presents a reinforcement learning approach to modelling this intelligent negotiation behaviour. The effectiveness of incorporating learning capability in the stakeholder negotiation behaviour is then demonstrated through simulation.
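The abstract leaves the learning scheme unspecified; as a minimal sketch of the general idea, the toy Python example below lets a service provider agent learn by reinforcement which track-access charge to offer against a stylised infrastructure provider. The offers, acceptance rule and payoffs are all invented for illustration, not taken from the paper.

    import random

    # Candidate track-access charges the service provider (SP) can offer;
    # a single-state Q-table records the learned value of each offer.
    offers = [80, 90, 100, 110, 120]
    q = {o: 0.0 for o in offers}
    alpha, epsilon = 0.1, 0.2

    def infrastructure_provider_reply(offer):
        # Stylised counterparty: higher charges are accepted more often,
        # but the SP's profit per service falls as the charge rises.
        accepted = random.random() < (offer - 70) / 60
        return (130 - offer) if accepted else 0.0

    for episode in range(5000):
        # epsilon-greedy selection over candidate offers
        if random.random() < epsilon:
            offer = random.choice(offers)
        else:
            offer = max(q, key=q.get)
        reward = infrastructure_provider_reply(offer)
        q[offer] += alpha * (reward - q[offer])   # bandit-style update

    print({o: round(v, 2) for o, v in q.items()})

Over repeated episodes the agent settles on a middle offer that balances acceptance probability against profit, mirroring how learned negotiation behaviour emerges from experience.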
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals due to the large number of intersections in transportation networks, the complexity of traffic movements at these locations that leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types including head-on, sideswipe, rear-end, and angle crashes occur at intersections. While intersection crash totals may not reveal a site deficiency, overrepresentation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to be able to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to consider modeling collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the notion of the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
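To see why common unobserved factors motivate simultaneous estimation, the short Python simulation below (hypothetical covariates and coefficients, not the Georgia data) gives two collision types at each intersection a shared latent effect and shows the resulting cross-type correlation in counts.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500  # hypothetical intersections

    # A shared unobserved factor (e.g., unmeasured site conditions) enters
    # the expected frequency of both collision types at the same site.
    common = rng.normal(0.0, 0.5, n)
    aadt = rng.uniform(1.0, 10.0, n)   # traffic volume proxy (invented)

    mu_angle   = np.exp(-1.0 + 0.15 * aadt + common)
    mu_rearend = np.exp(-0.5 + 0.20 * aadt + common)

    angle   = rng.poisson(mu_angle)
    rearend = rng.poisson(mu_rearend)

    # Residual correlation beyond the shared covariate is what independent,
    # type-by-type models miss and simultaneous estimation captures.
    print("cross-type count correlation:",
          np.corrcoef(angle, rearend)[0, 1].round(2))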
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
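The excess-zeros argument is easy to reproduce; the Python sketch below (hypothetical exposures and crash probabilities, not the paper's experiment) models each vehicle passage as an independent Bernoulli trial with a small, site-specific probability, i.e. Poisson trials, and shows that low exposure alone produces a preponderance of zero counts with no “perfectly safe” state anywhere.

    import numpy as np

    rng = np.random.default_rng(42)
    n_sites, periods = 1000, 52   # hypothetical intersections, weekly windows

    # Poisson trials: every site is 'unsafe' (crash probability > 0), with
    # unequal per-vehicle probabilities and low weekly exposure.
    p_crash = rng.uniform(1e-6, 5e-6, n_sites)
    exposure = rng.integers(200, 2000, n_sites)   # vehicles per week

    # Weekly counts: sums of Bernoulli trials ~ Binomial(exposure, p)
    counts = rng.binomial(exposure[:, None], p_crash[:, None],
                          size=(n_sites, periods))

    print("share of zero site-weeks:", (counts == 0).mean().round(3))
    # Nearly all observations are zero, so the 'excess' zeros stem from low
    # exposure and the chosen time scale, not a dual-state process.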
Abstract:
Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes.
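The gamma probability (gamma count) model accommodates underdispersion because counts from a renewal process with gamma-distributed inter-event times and shape parameter above one have variance below their mean. The small Python simulation below, with invented parameters, illustrates that property.

    import numpy as np

    rng = np.random.default_rng(1)

    def gamma_count(shape, rate, horizon=1.0):
        # Events in [0, horizon) with Gamma(shape, scale=1/rate) gaps;
        # shape > 1 regularises the gaps and underdisperses the counts.
        t, k = 0.0, 0
        while True:
            t += rng.gamma(shape, 1.0 / rate)
            if t >= horizon:
                return k
            k += 1

    samples = np.array([gamma_count(3.0, 9.0) for _ in range(20000)])
    print("mean:", samples.mean().round(2), "var:", samples.var().round(2))
    # var < mean: underdispersion, which neither the Poisson (var = mean)
    # nor the negative binomial (var > mean) can represent.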
Abstract:
At least two important transportation planning activities rely on planning-level crash prediction models. One is motivated by the Transportation Equity Act for the 21st Century, which requires departments of transportation and metropolitan planning organizations to consider safety explicitly in the transportation planning process. The second could arise from a need for state agencies to establish incentive programs to reduce injuries and save lives. Both applications require a forecast of safety for a future period. Planning-level crash prediction models for the Tucson, Arizona, metropolitan region are presented to demonstrate the feasibility of such models. Data were separated into fatal, injury, and property-damage crashes. To accommodate overdispersion in the data, negative binomial regression models were applied. To accommodate the simultaneity of fatality and injury crash outcomes, simultaneous estimation of the models was conducted. All models produce crash forecasts at the traffic analysis zone level. Statistically significant (p-values < 0.05) and theoretically meaningful variables for the fatal crash model included population density, persons 17 years old or younger as a percentage of the total population, and intersection density. Significant variables for the injury and property-damage crash models were population density, number of employees, intersection density, percentage of miles of principal arterials, percentage of miles of minor arterials, and percentage of miles of urban collectors. Among several conclusions it is suggested that planning-level safety models are feasible and may play a role in future planning activities. However, caution must be exercised with such models.
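As a minimal sketch of the kind of count model described (not the Tucson data or the actual simultaneous specification), the Python example below fits a negative binomial regression to simulated overdispersed zone-level crash counts with statsmodels; all variables and coefficients are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 300  # hypothetical traffic analysis zones

    pop_density = rng.uniform(0.5, 8.0, n)      # invented covariates
    int_density = rng.uniform(1.0, 20.0, n)
    X = sm.add_constant(np.column_stack([pop_density, int_density]))

    # Gamma-mixed Poisson counts, i.e. overdispersed (negative binomial)
    mu = np.exp(-0.5 + 0.15 * pop_density + 0.05 * int_density)
    counts = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

    nb = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5))
    print(nb.fit().summary())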
Abstract:
A number of studies have focused on estimating the effects of accessibility on housing values by using the hedonic price model. In the majority of studies, estimation results have revealed that housing values increase as accessibility improves, although the magnitude of estimates has varied across studies. Adequately estimating the relationship between transportation accessibility and housing values is challenging for at least two reasons. First, the monocentric city assumption applied in location theory is no longer valid for many large or growing cities. Second, rather than being randomly distributed in space, housing values are clustered in space—often exhibiting spatial dependence. Recognizing these challenges, a study was undertaken to develop a spatial lag hedonic price model for the Seoul, South Korea, metropolitan region that includes a measure of local accessibility as well as systemwide accessibility, in addition to other model covariates. Although the accessibility measures can be improved, the modeling results suggest that the spatial interactions of apartment sales prices occur across and within traffic analysis zones, and that the sales prices of apartment communities fall as accessibility deteriorates. Consistent with findings in other cities, this study revealed that the distance to the central business district is still a significant determinant of sales price.
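The spatial lag specification behind such a model is p = ρWp + Xβ + ε, whose reduced form p = (I - ρW)^(-1)(Xβ + ε) makes each sale price depend on neighbouring prices. The toy Python simulation below, using an invented chain-contiguity weight matrix and coefficients rather than the Seoul data, shows the induced spatial dependence.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200  # hypothetical apartment communities along a corridor

    # Row-standardised contiguity weights over a simple chain of zones,
    # standing in for the real spatial weight matrix W.
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                W[i, j] = 1.0
    W /= W.sum(axis=1, keepdims=True)

    access = rng.uniform(0.0, 1.0, n)   # accessibility index (invented)
    rho, b0, b1 = 0.4, 10.0, 2.0
    eps = rng.normal(0.0, 0.3, n)

    # Reduced form of p = rho*W p + X beta + eps
    p = np.linalg.solve(np.eye(n) - rho * W, b0 + b1 * access + eps)
    print("correlation with neighbouring prices:",
          np.corrcoef(p, W @ p)[0, 1].round(2))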
Abstract:
This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
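As a loose illustration of the union/digest idea only (the paper's algorithm works on full process graphs, with connectors and mapping of common fragments), the toy Python sketch below merges three invented claims-handling variants into an edge-frequency-annotated union and extracts a digest of the fragments shared by at least two variants.

    from collections import Counter

    # Three invented process variants, each a list of control-flow edges.
    variants = [
        [("receive claim", "check policy"), ("check policy", "assess damage"),
         ("assess damage", "pay claim")],
        [("receive claim", "check policy"), ("check policy", "reject claim")],
        [("receive claim", "check policy"), ("check policy", "assess damage"),
         ("assess damage", "reject claim")],
    ]

    # Merged model: the union of all edges, annotated with how many
    # variants contain each edge.
    merged = Counter(edge for v in variants for edge in set(v))

    # Digest: the most recurring fragments, here edges in >= 2 variants.
    digest = {edge for edge, freq in merged.items() if freq >= 2}

    print("merged:", dict(merged))
    print("digest:", digest)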
Abstract:
Survival probability prediction using a covariate-based hazard approach is a well-established statistical methodology in engineering asset health management. We have previously reported the semi-parametric Explicit Hazard Model (EHM), which incorporates three types of information for hazard prediction: population characteristics; condition indicators; and operating environment indicators. This model assumes that the baseline hazard follows a Weibull distribution. To avoid this assumption, this paper presents the non-parametric EHM, which is a distribution-free covariate-based hazard model. An application of the non-parametric EHM is demonstrated via a case study, in which survival probabilities of a set of resistance elements predicted using the non-parametric EHM are compared with the Weibull proportional hazard model and the traditional Weibull model. The results show that the non-parametric EHM can effectively predict asset life using the condition indicator, operating environment indicator, and failure history.
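The EHM itself is not publicly packaged, but the same distribution-free, covariate-based idea can be sketched with the widely used Cox proportional hazards model; the Python example below (via the lifelines library, with invented failure data for hypothetical resistance elements) fits a hazard driven by a condition indicator and an operating-environment indicator and predicts survival curves.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(5)
    n = 200  # hypothetical resistance elements

    condition = rng.uniform(0.0, 1.0, n)   # condition indicator (invented)
    temp = rng.uniform(20.0, 80.0, n)      # operating environment (invented)

    # Failure times whose hazard rises with both covariates
    T = rng.exponential(100.0, n) * np.exp(-1.0 * condition - 0.01 * temp)
    E = np.ones(n, dtype=int)              # all failures observed

    df = pd.DataFrame({"T": T, "E": E,
                       "condition": condition, "temp": temp})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    print(cph.summary[["coef", "p"]])

    # Predicted survival curve for one condition/environment profile
    profile = pd.DataFrame({"condition": [0.8], "temp": [60.0]})
    print(cph.predict_survival_function(profile).head())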
Abstract:
Background: The hedgehog signaling pathway is vital in early development, but then becomes dormant, except in some cancers. Hedgehog inhibitors are being developed for potential use in cancer. Objectives/Methods: The objective of this evaluation is to review the initial clinical studies of the hedgehog inhibitor GDC-0449 in subjects with cancer. Results: Phase I trials have shown that GDC-0449 has benefits in subjects with metastatic or locally advanced basal-cell carcinoma and in one subject with medulloblastoma. GDC-0449 was well tolerated. Conclusions: Long-term efficacy and safety studies of GDC-0449 in these conditions and other solid cancers are now underway. These clinical trials with GDC-0449, and trials with other hedgehog inhibitors, will reveal whether or not it is beneficial and safe to inhibit the hedgehog pathway in a wide range of solid tumours.
Abstract:
With the recent regulatory reforms in a number of countries, railway resources are no longer managed by a single party but are distributed among different stakeholders. To facilitate the operation of train services, a train service provider (SP) has to negotiate with the infrastructure provider (IP) for a train schedule and the associated track access charge. This paper models the SP and IP as software agents and the negotiation as a prioritized fuzzy constraint satisfaction (PFCS) problem. Computer simulations have been conducted to demonstrate the effects on the train schedule when the SP has different optimization criteria. The results show that by assigning different priorities to the fuzzy constraints, agents can represent SPs with different operational objectives.
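As a rough sketch of prioritized fuzzy constraint satisfaction (the constraints, priorities and satisfaction functions below are invented, not the paper's formulation), each constraint is satisfied to a degree in [0, 1], high-priority constraints are harder to relax, and a schedule offer's overall score is its weakest prioritized degree.

    # Fuzzy satisfaction degrees for two invented SP constraints.
    def sat_charge(charge):    # prefers a low track access charge
        return max(0.0, min(1.0, (120 - charge) / 40))

    def sat_journey(minutes):  # prefers a short journey time
        return max(0.0, min(1.0, (90 - minutes) / 30))

    def evaluate(charge, minutes, w_charge=1.0, w_journey=0.6):
        # Prioritized degree: a constraint with priority w can be relaxed
        # up to 1 - w; the overall score is the weakest constraint.
        degrees = [max(sat_charge(charge), 1.0 - w_charge),
                   max(sat_journey(minutes), 1.0 - w_journey)]
        return min(degrees)

    offers = [(88, 78), (100, 62), (115, 58)]   # (charge, journey minutes)
    best = max(offers, key=lambda o: evaluate(*o))
    print("preferred offer:", best,
          "satisfaction:", round(evaluate(*best), 2))

Lowering w_journey relative to w_charge lets the SP agent tolerate longer journeys in exchange for a cheaper schedule, which is how different priority assignments encode different operational objectives.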