73 results for "two sector model"
Abstract:
Aim/Background
TRALI is hypothesised to develop via a two-event mechanism involving both the patient's underlying morbidity and blood product factors. The storage of cellular products has been implicated in cases of non-antibody-mediated TRALI; however, the pathophysiological mechanisms remain undefined. We investigated blood product storage-related modulation of inflammatory cells and mediators involved in TRALI.
Methods
In an in vitro model, fresh human whole blood was mixed with culture media (control) or LPS as a first event and "transfused" with 10% (v/v) pooled supernatant (SN) from Day 1 (D1, n=75) or Day 42 (D42, n=113) packed red blood cells (PRBCs) as a second event. After 6 hours, culture SN was used to assess the overall inflammatory response (cytometric bead array), and a duplicate assay containing protein transport inhibitor was used to assess neutrophil- and monocyte-specific inflammatory responses by multi-colour flow cytometry. Analyte panels: IL-6, IL-8, IL-10, IL-12, IL-1, TNF, MCP-1, IP-10, MIP-1. Data were analysed by one-way ANOVA with 95% CIs.
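As an aside, the one-way ANOVA used above can be sketched from first principles. The group names and cytokine values below are invented purely for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical cytokine readings (pg/mL) for three transfusion
# conditions; all values are invented for illustration only.
groups = {
    "control": np.array([10.2, 11.5, 9.8, 10.9]),
    "D1_SN":   np.array([12.1, 13.4, 11.8, 12.6]),
    "D42_SN":  np.array([15.3, 16.1, 14.7, 15.8]),
}

data = list(groups.values())
k = len(data)                          # number of groups
n = sum(len(g) for g in data)          # total observations
grand_mean = np.concatenate(data).mean()

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in data)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)

# F statistic: ratio of mean squares
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain a p-value.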
Results
In the absence of LPS, exposure to D1 or D42 PRBC-SN reduced monocyte expression of IL-6, IL-8 and IL-10. D42 PRBC-SN also reduced monocyte IP-10, and overall IL-8 production was increased. In the presence of LPS, D1 PRBC-SN modified only overall IP-10 levels, which were reduced. However, compared with LPS alone, the combination of LPS and D42 PRBC-SN resulted in increased neutrophil and monocyte production of IL-1 and IL-8 as well as reduced monocyte TNF production. Additionally, LPS and D42 PRBC-SN resulted in overall inflammatory changes: elevated IL-8,
Groundwater flow model of the Logan River alluvial aquifer system, Josephville, South East Queensland
Abstract:
The study focuses on an alluvial plain situated within a large meander of the Logan River at Josephville near Beaudesert which supports a factory that processes gelatine. The plant draws water from on-site bores, as well as the Logan River, for its production processes and produces approximately 1.5 ML per day (Douglas Partners, 2004) of waste water containing high levels of dissolved ions. At present, a series of treatment ponds is used to aerate the waste water, reducing the level of organic matter; the water is then used to irrigate grazing land around the site. Within the study, the hydrogeology is investigated, a conceptual groundwater model is produced, and a numerical groundwater flow model is developed from this. On the site are several bores that access groundwater, plus a network of monitoring bores. Assessment of drilling logs shows the area is formed from a mixture of poorly sorted Quaternary alluvial sediments with a laterally continuous aquifer comprising coarse sands and fine gravels that is in contact with the river. This aquifer occurs at a depth of between 11 and 15 metres and is overlain by a heterogeneous mixture of silts, sands and clays. The study investigates the degree of interaction between the river and the groundwater within the fluvially derived sediments for reasons of both environmental monitoring and sustainability of the potential local groundwater resource. A conceptual hydrogeological model of the site proposes two hydrostratigraphic units: a basal aquifer of coarse-grained materials overlain by a thick semi-confining unit of finer materials. From this, a two-layer groundwater flow model and hydraulic conductivity distribution were developed from bore monitoring and rainfall data using MODFLOW (McDonald and Harbaugh, 1988) and PEST (Doherty, 2004) within the GMS 6.5 software (EMSI, 2008). A second model was also considered with the alluvium represented as a single hydrogeological unit.
Both models were calibrated to steady-state conditions, and sensitivity analyses of the parameters demonstrated that both models are very stable for changes in the range of ±10% for all parameters, and still reasonably stable for changes up to ±20%, with RMS errors in the model always less than 10%. The preferred two-layer model was found to give the more realistic representation of the site: water level variations and the numerical modelling showed that the basal layer of coarse sands and fine gravels is hydraulically connected to the river, while the upper layer, comprising a poorly sorted mixture of silt-rich clays and sands of very low permeability, limits infiltration from the surface to the lower layer. The paucity of historical data has limited the numerical modelling to a steady-state model based on groundwater levels during a drought period, and forecasts for varying hydrological conditions (e.g. short-term as well as prolonged dry and wet conditions) cannot reasonably be made from such a model. If future modelling is to be undertaken, it will be necessary to establish a regular program of groundwater monitoring and maintain a long-term database of water levels so that a transient model can be developed at a later stage. This will require a valid monitoring network to be designed, with additional bores for adequate coverage of the hydrogeological conditions at the Josephville site. Further investigations would also be enhanced by pump testing to investigate the hydrogeological properties of the aquifer.
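The layered conceptual model above (low-permeability semi-confining unit over a high-permeability basal aquifer) can be illustrated with a textbook layers-in-series calculation. The conductivities, thicknesses and head difference below are assumed for illustration and are not the study's calibrated values:

```python
import numpy as np

# Equivalent vertical hydraulic conductivity of layers in series
# (thickness-weighted harmonic mean), then vertical Darcy flux.
# All numbers are illustrative assumptions, not site data.
K = np.array([0.05, 5.0])    # m/day: semi-confining silts/clays vs basal sands
b = np.array([11.0, 4.0])    # layer thicknesses (m); aquifer top at ~11 m depth

K_eq = b.sum() / (b / K).sum()   # harmonic mean dominated by the low-K layer
dh = 2.0                         # assumed head difference across the column (m)
q = K_eq * dh / b.sum()          # Darcy flux (m/day)
print(round(K_eq, 4), round(q, 5))
```

The harmonic mean is dominated by the low-conductivity layer, which mirrors the study's finding that the upper unit limits infiltration to the basal aquifer.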
Abstract:
Topic modeling has been widely utilized in the fields of information retrieval, text mining, and text classification. Most existing statistical topic modeling methods, such as LDA and pLSA, generate a term-based representation of a topic by selecting single words from the multinomial word distribution over that topic. There are two main shortcomings: first, popular or common words occur very often across different topics, which makes topics ambiguous to interpret; second, single words lack the coherent semantic meaning needed to represent topics accurately. In order to overcome these problems, in this paper we propose a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
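The core idea of replacing single topic words with co-occurring word patterns can be sketched with a minimal frequent-pair miner. The toy corpus and the minimum-support value below are invented for illustration and are not the paper's method or data:

```python
from itertools import combinations
from collections import Counter

# Toy "documents assigned to one topic"; the pattern-based view keeps
# word pairs that co-occur in at least min_support documents, rather
# than single top-probability words.
topic_docs = [
    ["groundwater", "flow", "model"],
    ["flow", "model", "calibration"],
    ["groundwater", "flow", "model"],
]

min_support = 2
pair_counts = Counter(
    pair
    for doc in topic_docs
    for pair in combinations(sorted(set(doc)), 2)
)
patterns = sorted(p for p, c in pair_counts.items() if c >= min_support)
print(patterns)
```

A pair such as ("flow", "model") carries more coherent semantics than either word alone, which is the motivation the abstract gives for pattern-based topic representations.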
Abstract:
The use of mobile phones while driving is more prevalent among young drivers—a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q Advanced Driving Simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator, each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21 to 26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated-measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best-fitting model and identified four significant variables influencing reaction times: phone condition, driver's age, licence type (provisional licence holder or not), and self-reported frequency of handheld phone usage while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open licence holders.
A reduction in the ability to detect traffic events in the periphery whilst distracted presents a significant and measurable safety concern that will undoubtedly persist unless mitigated.
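The AFT interpretation of the ~40% effect can be illustrated with a small simulation. The coefficients below are invented for illustration (not the fitted model); the distracted coefficient is set to log(1.4) so that distraction multiplies reaction times by roughly 1.4:

```python
import numpy as np

rng = np.random.default_rng(42)

# Weibull AFT sketch: log T = beta0 + beta1*distracted + sigma*eps,
# with eps a standard Gumbel(min) error (log of a unit Weibull).
# All coefficients are illustrative assumptions, not fitted values.
n = 5000
beta0 = np.log(1.2)    # baseline scale (s), assumed
beta1 = np.log(1.4)    # encodes ~40% longer reaction time when distracted
sigma = 0.2            # assumed error scale

distracted = rng.integers(0, 2, size=n)
eps = np.log(rng.weibull(1.0, size=n))   # Gumbel(min) via log-Weibull
T = np.exp(beta0 + beta1 * distracted + sigma * eps)

# Empirical acceleration factor: median distracted / median baseline
af = np.median(T[distracted == 1]) / np.median(T[distracted == 0])
print(round(af, 2))
```

Because the covariate acts multiplicatively on time in an AFT model, the ratio of group medians recovers exp(beta1), i.e. the "times are stretched by ~1.4" reading of the reported result.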
Abstract:
GO423 was initiated in 2012 as part of a community effort to ensure the vitality of the Queensland games sector. In common with other industrialised nations, the game industry in Australia is a reasonably significant contributor to Gross National Product (GNP). Games are played in 92% of Australian homes, and the average adult player has been playing them for at least twelve years, with 26% playing for more than thirty years (Brand, 2011). Like the games and interactive entertainment industries in other countries, the Australian industry has its roots in the small-team model of the 1980s. So, for example, Beam Software, which was established in Melbourne in 1980, was started by two people, and Krome Studios was started in 1999 by three. Both these companies grew to employ over 100 people in their heydays (considered large by Antipodean standards), not by producing their own intellectual property (IP) but by content generation for offshore parent companies. Thus our bigger companies grew on a model of service provision and tended not to generate their own IP (Darchen, 2012). There are some notable exceptions where IP has originated locally and been acquired by international companies, but in the case of some of the works of which we are most proud, the Australian company took on the role of "Night Elf"—a convenience due to affordances of the time zone which allowed our companies to work while the parent companies slept in a different time zone. In the post-GFC climate, the strong Australian dollar and the vulnerability of such service provision mean that job security is virtually non-existent, with employees invariably being on short-term contracts. These issues are exacerbated by the decline of middle-ground games (those which fall between the triple-A titles and the smaller games often produced for a casual audience).
The response to this state of affairs has been the change in the Australian games industry to new recognition of its identity as a wider cultural sector and the rise (or return) of an increasing number of small independent game development companies. 'Indies' consist of small teams, often making games for mobile and casual platforms, that depend on producing at least one if not two games a year and who often explore more radical definitions of games as designed cultural objects. The need for innovation and creativity in the Australian context is seen as a vital aspect of the current changing scene, where we see the emphasis on the large-studio production model give way to an emerging cultural sector model in which small independent teams are engaged in shorter design and production schedules driven by digital distribution. In terms of Quality of Life (QoL), this new digital distribution brings with it the danger of 'digital isolation'—a studio can work from home and deliver from home. Community events thus become increasingly important. The GO423 Symposium is a response to these perceived needs, and the event is based on the understanding that our new small creative teams depend on the local community of practice in no small way. GO423 thus offers local industry participants the opportunity to talk to each other about their work, to talk to potential new members about their work, and to show off their work in a small, intimate setting, encouraging both feedback and support.
Abstract:
Unsaturated water flow in soil is commonly modelled using Richards’ equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity, etc.) to be characterised. Naturally occurring soils, however, are heterogeneous in nature, that is, they are composed of a number of interwoven homogeneous soils each with their own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards’ equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, where the heterogeneous medium is replaced with a fictitious homogeneous medium, which attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than the scale of the heterogeneities). Using the homogenisation theory, a macroscopic equation can be derived that takes the form of Richards’ equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails in cases when the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include at each point in the macroscopic domain an additional flow equation at the scale of the heterogeneities (microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, that allow for irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner. 
Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
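For reference, the underlying flow equation named above can be written out. In its standard mixed form (the notation below is conventional and not taken verbatim from the report), Richards' equation is:

```latex
% Richards' equation (mixed form):
%   \theta  - volumetric water content
%   h       - pressure head
%   K(h)    - unsaturated hydraulic conductivity
%   z       - vertical coordinate (gravity term)
\frac{\partial \theta(h)}{\partial t}
  = \nabla \cdot \left[\, K(h)\, \nabla (h + z) \,\right]
```

As the abstract states, homogenisation yields a macroscopic equation of the same form with effective parameters, while the two-scale model adds, at each macroscopic point, a microscopic copy of this equation posed on the heterogeneity-scale cell.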
Abstract:
BACKGROUND CONTEXT: The Neck Disability Index frequently is used to measure outcomes of the neck. The statistical rigor of the Neck Disability Index has been assessed with conflicting outcomes. To date, Confirmatory Factor Analysis of the Neck Disability Index has not been reported for a suitably large population study. Because the Neck Disability Index is not a condition-specific measure of neck function, initial Confirmatory Factor Analysis should consider problematic neck patients as a homogeneous group. PURPOSE: We sought to analyze the factor structure of the Neck Disability Index through Confirmatory Factor Analysis in a symptomatic, homogeneous neck population, with respect to pooled populations and gender subgroups. STUDY DESIGN: This was a secondary analysis of pooled data. PATIENT SAMPLE: A total of 1,278 symptomatic neck patients (67.5% female, median age 41 years), 803 nonspecific and 475 with whiplash-associated disorder. OUTCOME MEASURES: The Neck Disability Index was used to measure outcomes. METHODS: We analyzed pooled baseline data from six independent studies of patients with neck problems who completed Neck Disability Index questionnaires at baseline. The Confirmatory Factor Analysis was considered in three scenarios: the full sample and each sex separately. Models were compared empirically for best fit. RESULTS: Two-factor models have good psychometric properties across both the pooled and sex subgroups. However, according to these analyses, the one-factor solution is preferable from both a statistical perspective and parsimony. The two-factor model was close to significant for the male subgroup (p<.07), where questions separated into constructs of mental function (pain, reading, headaches and concentration) and physical function (personal care, lifting, work, driving, sleep, and recreation).
CONCLUSIONS: The Neck Disability Index demonstrated a one-factor structure when analyzed by Confirmatory Factor Analysis in a pooled, homogeneous sample of neck problem patients. However, a two-factor model did approach significance for male subjects, where questions separated into constructs of mental and physical function. Further investigation in different conditions, subgroups, and sex-specific populations is warranted.
Abstract:
The focus of this paper is two-dimensional computational modelling of water flow in unsaturated soils consisting of weakly conductive disconnected inclusions embedded in a highly conductive connected matrix. When the inclusions are small, a two-scale Richards’ equation-based model has been proposed in the literature taking the form of an equation with effective parameters governing the macroscopic flow coupled with a microscopic equation, defined at each point in the macroscopic domain, governing the flow in the inclusions. This paper is devoted to a number of advances in the numerical implementation of this model. Namely, by treating the micro-scale as a two-dimensional problem, our solution approach based on a control volume finite element method can be applied to irregular inclusion geometries, and, if necessary, modified to account for additional phenomena (e.g. imposing the macroscopic gradient on the micro-scale via a linear approximation of the macroscopic variable along the microscopic boundary). This is achieved with the help of an exponential integrator for advancing the solution in time. This time integration method completely avoids generation of the Jacobian matrix of the system and hence eases the computation when solving the two-scale model in a completely coupled manner. Numerical simulations are presented for a two-dimensional infiltration problem.
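The exponential-integrator idea referred to above can be sketched on a small linear test problem. The matrix, step size and initial state below are invented for illustration; real two-scale solvers apply the phi-function matrix-free (e.g. via Krylov methods) rather than forming it explicitly as done here:

```python
import numpy as np

# Exponential Euler on y' = A y:
#   y1 = y0 + h * phi1(h A) @ (A y0),  phi1(z) = (e^z - 1)/z.
# For a linear system this single step reproduces the exact solution
# expm(h A) @ y0, which we verify below. All values are illustrative.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
y0 = np.array([1.0, 0.5])
h = 0.1

# Form phi1(hA) via eigendecomposition (fine for a tiny dense matrix;
# large problems would approximate its action without building it).
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
phi1 = V @ np.diag((np.exp(h * w) - 1.0) / (h * w)) @ Vinv
y1 = y0 + h * phi1 @ (A @ y0)

# Exact propagator for comparison
expm_hA = V @ np.diag(np.exp(h * w)) @ Vinv
print(np.allclose(y1, expm_hA @ y0))
```

The exactness on linear problems, and the fact that only products with the operator are needed, is what makes this class of schemes attractive for the stiff, tightly coupled two-scale system described in the abstract.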
Abstract:
We examine the interaction between commodity taxes and parallel imports in a two-country model with imperfect competition. While governments non-cooperatively determine their commodity tax rates, the volume of parallel imports is determined endogenously by the retailing sector. We compare the positive and normative implications of having commodity taxes based on the destination or origin principle. We show that, as the volume of parallel imports increases, non-cooperative origin taxes converge, while destination taxes diverge. Moreover, origin taxes are more similar and lead to higher aggregate welfare levels than destination taxes.
Abstract:
In this study, we investigate the qualitative and quantitative effects of an R&D subsidy for a clean technology and a Pigouvian tax on a dirty technology on environmental R&D when it is uncertain how long the research takes to complete. The model is formulated as an optimal stopping problem, in which the number of successes required to complete the R&D project is finite and learning about the probability of success is incorporated. We show that the optimal R&D subsidy with the consideration of learning is higher than that without it. We also find that an R&D subsidy performs better than a Pigouvian tax unless suppliers have sufficient incentives to continue cost-reduction efforts after the new technology successfully replaces the old one. Moreover, by using a two-project model, we show that a uniform subsidy is better than a selective subsidy.
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of the IF system. On the other hand, pattern discovery from large data streams is not computationally efficient. Also, these approaches have to deal with low-frequency pattern issues. The measures used by the data mining technique (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: topic filtering and pattern mining. The topic filtering stage is intended to minimize information overloading by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed by using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy. The most likely relevant documents were assigned higher scores by the ranking function. Because a relatively small number of documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discoveries yield more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on the well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both the term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, the state-of-the-art term-based models, including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
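The two-stage pipeline described above can be sketched in miniature. The profile weights, documents, threshold, and scoring functions below are invented stand-ins, not the thesis's rough-set threshold model or pattern-taxonomy ranking function:

```python
# Stage 1 filters out likely-irrelevant documents against a user
# profile; stage 2 ranks only the survivors. All data are invented.
profile = {"model": 2.0, "flow": 1.5, "groundwater": 1.0}

docs = {
    "d1": ["groundwater", "flow", "model", "aquifer"],
    "d2": ["avatar", "design", "user"],
    "d3": ["flow", "model", "simulation"],
}

def score(terms):
    """Sum of profile weights for the document's terms (illustrative)."""
    return sum(profile.get(t, 0.0) for t in terms)

# Stage 1: topic filtering with an assumed relevance threshold.
threshold = 2.0
survivors = {d: ts for d, ts in docs.items() if score(ts) > threshold}

# Stage 2: precision-oriented ranking of the reduced document set
# (here a plain profile score stands in for the pattern-based ranker).
ranking = sorted(survivors, key=lambda d: score(survivors[d]), reverse=True)
print(ranking)
```

The point of the architecture is visible even at this scale: the expensive second stage only ever sees the documents that pass the cheap first-stage threshold.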
Abstract:
People have adopted various formats of media, such as graphics, photos and text (nicknames), in order to represent themselves when communicating with others online. An avatar is a visual form representing the user and the identity they wish to present. Its form can vary from a two-dimensional to a three-dimensional model, and it can be visualised in various visual forms and styles. In general, two-dimensional images, including animated images, are used in online forum communities and live chat software, while three-dimensional models are often used in computer games. Avatar design is often regarded as a graphic designer's visual image creation or a user's output based on personal preference, which often results in avatar designs that give no consideration to practical visual design or users' interactive communication experience. This paper reviews various types and styles of avatars and discusses avatar design from visual design and online user experience perspectives. It aims to raise a design discourse on avatar design and build up a well-articulated set of design principles for effective avatar design.