910 results for "Three models"
Abstract:
The similarity between the Peleg, Pilosof–Boquet–Bartholomai and Singh–Kulshrestha models was investigated using the hydration behaviours of whey protein concentrate, wheat starch and whey protein isolate at 30 °C and 100% relative humidity. The three models were shown to be mathematically equivalent within experimental variation, and they yielded related parameters. The models, in both their linear and original forms, were suitable (r² > 0.98) for describing the sorption behaviours of the samples, and were sensitive to the length of the sorption segment used in the computation. The whey proteins absorbed more moisture than the wheat starch, and the isolate exhibited a higher sorptive ability than the concentrate.
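As a hedged illustration (hypothetical sorption data, not the paper's), the Peleg form M(t) = M0 + t/(k1 + k2·t) can be fitted directly with SciPy; the equivalent rectangular-hyperbola form of the moisture gain, a·t/(b + t) with a = 1/k2 and b = k1/k2, is one way to see why the three models yield related parameters.

```python
# A minimal sketch (hypothetical data, not the paper's) of fitting the Peleg
# model M(t) = M0 + t / (k1 + k2*t) to water-sorption data.  The linearised
# form used in such studies is t / (M(t) - M0) = k1 + k2*t, and the hyperbolic
# moisture gain a*t / (b + t) maps onto the same parameters (a = 1/k2, b = k1/k2).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 8, 16, 24], dtype=float)       # time, h (hypothetical)
m = np.array([5.9, 8.2, 11.4, 15.0, 18.6, 21.3, 22.4])     # g water / 100 g solids

def peleg(t, m0, k1, k2):
    return m0 + t / (k1 + k2 * t)

(m0, k1, k2), _ = curve_fit(peleg, t, m, p0=(3.0, 0.15, 0.05))
print(f"M0 = {m0:.2f}, k1 = {k1:.3f}, k2 = {k2:.4f}")
print(f"equilibrium moisture gain 1/k2 = {1/k2:.1f}, hyperbolic constant k1/k2 = {k1/k2:.1f}")
```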
Abstract:
This paper proposes three models of adding relations to an organization structure that is a complete K-ary tree of height H: (i) a model of adding an edge between two nodes with the same depth N, (ii) a model of adding edges between every pair of nodes with the same depth N, and (iii) a model of adding edges between every pair of siblings with the same depth N. For each of the three models, an optimal depth N* is obtained by maximizing the total shortening of path length, defined as the sum of the reductions in shortest-path length over every pair of nodes.
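Not the paper's closed-form analysis, but a small brute-force sketch of model (ii) using networkx, with illustrative K and H: add edges between every pair of nodes at depth N, measure the total shortening of shortest-path length summed over all node pairs, and take the depth that maximizes it as the empirical N*.

```python
# Not the paper's closed-form analysis: a brute-force check of model (ii) on a
# small complete K-ary tree using networkx.  K, H are illustrative.
import itertools
import networkx as nx

def total_path_length(G):
    # Sum of shortest-path lengths over every unordered pair of nodes.
    return sum(d for _, lengths in nx.all_pairs_shortest_path_length(G)
               for d in lengths.values()) // 2

K, H = 2, 4
T = nx.balanced_tree(K, H)                              # complete K-ary tree, root node 0
depth = nx.single_source_shortest_path_length(T, 0)
base = total_path_length(T)

shortening = {}
for N in range(1, H + 1):
    G = T.copy()
    level = [v for v, d in depth.items() if d == N]
    G.add_edges_from(itertools.combinations(level, 2))  # edges between every pair at depth N
    shortening[N] = base - total_path_length(G)
    print(f"N = {N}: total shortening = {shortening[N]}")

print("empirical N* =", max(shortening, key=shortening.get))
```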
Abstract:
This paper presents a comparative study of three closely related Bayesian models for unsupervised document-level sentiment classification, namely the latent sentiment model (LSM), the joint sentiment-topic (JST) model, and the Reverse-JST model. Extensive experiments have been conducted on two corpora, the movie review dataset and the multi-domain sentiment dataset. It has been found that, while all three models achieve better or comparable performance on these two corpora compared to existing unsupervised sentiment classification approaches, both JST and Reverse-JST are able to extract sentiment-oriented topics. In addition, Reverse-JST consistently performs worse than JST, suggesting that the JST model is more appropriate for joint sentiment-topic detection.
Abstract:
Despite the extensive implementation of Superstreets on congested arterials, reliable methodologies for such designs remain unavailable. The purpose of this research is to fill this information gap by offering reliable tools to assist traffic professionals in the design of Superstreets with and without signal control. The tool set developed in this thesis consists of three models. The first model determines the minimum U-turn offset length for an unsignalized Superstreet, given the arterial headway distribution of the traffic flows and the distribution of critical gaps among drivers. The second model estimates the queue size and its variation on each critical link in a signalized Superstreet, based on the given signal plan and the range of observed volumes. Recognizing that the operational benefits of a Superstreet cannot be realized without an effective signal plan, the third model provides a signal optimization method that generates progression offsets for heavy arterial flows moving into and out of such an intersection design.
Abstract:
Purpose: To investigate the pathogenesis of high-fat diet (HFD)-induced hyperlipidemia (HLP) in mice, rats and hamsters and to comparatively evaluate their sensitivity to HFD. Methods: Mice, rats and hamsters were fed a high-fat diet formulation (HFD, n = 8) or a control diet (control, n = 8) for 4 weeks. Changes in body weight, relative liver weight, serum lipid profile, expression of hepatic marker genes of lipid metabolism, and liver morphology were observed in the three hyperlipidemic models. Results: Elevated total cholesterol (TC), triglyceride, low-density lipoprotein cholesterol (LDL-C) and high-density lipoprotein cholesterol (HDL-C) levels and body weight were observed in all hyperlipidemic animals (p < 0.05); hepatic steatosis was manifested in the rat and hamster HLP models, and an increased hepatic TC level was seen only in the hamster HLP model (p < 0.05). Suppression of HMG-CoA reductase and up-regulation of lipoprotein lipase were observed in all HFD groups. Hepatic expression of the LDLR, CYP7A1, LCAT, SR-BI and ApoA-I genes, which are related to reverse cholesterol transport (RCT), was inhibited by HFD in the three models. Among these models, simultaneous suppression of HMG-CR, LCAT, LDLR and SR-BI together with elevated LPL was a distinctive feature of the hamster model. Conclusion: These results indicate that impaired RCT and excessive fat accumulation are major contributors to the pathogenesis of HFD-induced HLP in these rodent models. Thus, the hamster model is more appropriate for hyperlipidemia research.
Abstract:
The eyelids play an important role in lubricating and protecting the surface of the eye. Each blink serves to spread fresh tears, remove debris and replenish the smooth optical surface of the eye. Yet little is known about how the eyelids contact the ocular surface and what pressure distribution exists between the eyelids and the cornea. As the principal refractive component of the eye, the cornea is a major element of the eye's optics, and its optical properties are known to be susceptible to the pressure exerted by the eyelids. Eyelids that are abnormal due to disease exert altered pressure on the ocular surface because of changes in their shape, thickness or position. Normal eyelids also cause corneal distortions, which are most often noticed when the eyelids rest closer to the corneal centre (for example during reading). There have been many reports of monocular diplopia after reading due to corneal distortion, but prior to videokeratoscopes these localised changes could not be measured. This thesis measured the influence of eyelid pressure on the cornea after short-term near tasks, and techniques were developed to quantify eyelid pressure and its distribution. The profile of the wave-like eyelid-induced corneal changes and the refractive effects of these distortions were investigated.

Corneal topography changes due to both the upper and lower eyelids were measured for four tasks involving two angles of vertical downward gaze (20° and 40°) and two near-work tasks (reading and steady fixation). After examining the depth and shape of the corneal changes, conclusions were reached regarding the magnitude and distribution of upper and lower eyelid pressure for these task conditions. The degree of downward gaze appears to alter the upper eyelid pressure on the cornea, with deeper changes occurring after greater angles of downward gaze. Although the lower eyelid was further from the corneal centre in large angles of downward gaze, its effect on the cornea was greater than that of the upper eyelid. Eyelid tilt, curvature and position were found to influence the magnitude of eyelid-induced corneal changes. Refractively, these corneal changes are clinically and optically significant, with mean spherical and astigmatic changes of about 0.25 D after only 15 minutes of downward gaze (40° reading and steady fixation conditions). Given the magnitude of these changes, eyelid pressure in downward gaze offers a possible explanation for some of the day-to-day variation observed in refraction. Considering the magnitude of these changes and previous work on their regression, it is recommended that sustained tasks performed in downward gaze be avoided for at least 30 minutes before corneal and refractive assessments requiring high accuracy.

Novel procedures were developed to measure eyelid pressure using a thin (0.17 mm) tactile piezoresistive pressure sensor mounted on a rigid contact lens. A hydrostatic calibration system was constructed to convert the raw digital output of the sensors to actual pressure units. Conditioning the sensor prior to use regulated the measurement response, and sensor output was found to stabilise about 10 seconds after loading. The influences of various external factors on sensor output were studied. While the sensor output drifted slightly over several hours, the drift was not significant over the 30-second measurement time used for eyelid pressure, as long as the lengths of the calibration and measurement recordings were matched. The error associated with calibrating at room temperature but measuring at ocular surface temperature led to a very small overestimation of pressure. To optimally position the sensor-contact lens combination under the eyelid margin, an in vivo measurement apparatus was constructed. Using this system, increases in eyelid pressure were observed when the upper eyelid was placed on the sensor, and a significant further increase was apparent when the upper eyelid was pulled tighter against the eye.

For a group of young adult subjects, upper eyelid pressure was measured using this piezoresistive sensor system. Three models of contact between the eyelid and the ocular surface were used to calibrate the pressure readings. The first model assumed contact between the eyelid and the pressure sensor over more than the pressure cell width of 1.14 mm. Using thin pressure-sensitive carbon paper placed under the eyelid, a contact imprint was measured and this width was used for the second model of contact. Lastly, as Marx's line has been implicated as the region of contact with the ocular surface, its width was measured and used as the region of contact for the third model. The mean eyelid pressures calculated using these three models for the group of young subjects were 3.8 ± 0.7 mmHg (whole cell), 8.0 ± 3.4 mmHg (imprint width) and 55 ± 26 mmHg (Marx's line). The carbon imprints obtained with Pressurex-micro confirmed previous suggestions that a band of the eyelid margin has primary contact with the ocular surface, and provided the best estimate of the contact region and hence of eyelid pressure. Although it is difficult to directly compare the results with previous eyelid pressure measurement attempts, the eyelid pressure calculated using this model was slightly higher than previous manometer measurements but showed good agreement with the eyelid force estimated using an eyelid tensiometer.

The work described in this thesis has shown that the eyelids have a significant influence on corneal shape, even after short-term tasks (15 minutes). Instrumentation was developed using piezoresistive sensors to measure eyelid pressure. Measurements for the upper eyelid, combined with estimates of the contact region between the cornea and the eyelid, enabled quantification of upper eyelid pressure for a group of young adult subjects. These techniques will allow further investigation of the interaction between the eyelids and the surface of the eye.
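The three contact models above differ only in the width over which the sensed load is assumed to act, so under a uniform-pressure assumption the inferred pressure scales inversely with the assumed contact width. A back-of-envelope sketch of that scaling, using only the 1.14 mm cell width and the mean pressures quoted above (the implied imprint and Marx's line widths are inferences from those means, not measured values from the thesis):

```python
# Back-of-envelope only: under a uniform-pressure assumption, the same sensed
# load spread over a narrower assumed contact width gives a proportionally
# higher pressure.  The implied widths below are inferred from the reported
# means, not measured values.
cell_width_mm = 1.14                                   # whole-cell width quoted above
mean_mmhg = {"whole cell": 3.8, "imprint width": 8.0, "Marx's line": 55.0}

for label, p in mean_mmhg.items():
    implied_width_mm = cell_width_mm * mean_mmhg["whole cell"] / p
    print(f"{label:15s} {p:5.1f} mmHg -> implied contact width ~{implied_width_mm:.2f} mm")
```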
Abstract:
The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TVENT (i.e. 80% of TVENT) and a supra-TVENT (TVENT + 0.75*(VO2 max - TVENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 min each, with VO2 captured on a breath-by-breath basis for every test, and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data were interpolated, time aligned, averaged and smoothed for both intensities.

Three models were then fitted to the VO2 data to determine the kinetics responses. One of these models was mono-exponential, while the other two were bi-exponential. A second time delay parameter was the only difference between the two bi-exponential models. An F-test was used to determine significance between the bi-exponential models using the residual sum of squares term for each model. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data were analysed by a two-way repeated measures ANOVA. A correlation was also used to determine significance between VO2 and IEMG.

The VO2 data during the sub-TVENT intensity were best described by a mono-exponential response. In contrast, during supra-TVENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different between the two models. This is in contrast to other findings. The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions, the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG with increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
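A minimal sketch of the model-comparison procedure described above, using synthetic breath-by-breath data and illustrative parameter values rather than the thesis's: two bi-exponential on-kinetics models that differ only in whether the slow component has its own time delay are fitted with SciPy, and the nested fits are compared with an extra-sum-of-squares F-test on their residual sums of squares.

```python
# Illustrative data and parameter values, not the thesis's code or measurements.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def _exp_rise(t, A, TD, tau):
    # Delayed exponential rise: zero before TD, A*(1 - exp(-(t-TD)/tau)) after.
    y = np.zeros_like(t)
    on = t > TD
    y[on] = A * (1 - np.exp(-(t[on] - TD) / tau))
    return y

def bi_one_delay(t, base, A1, TD1, tau1, A2, tau2):
    # Reduced model: primary and slow components share the same time delay.
    return base + _exp_rise(t, A1, TD1, tau1) + _exp_rise(t, A2, TD1, tau2)

def bi_two_delays(t, base, A1, TD1, tau1, A2, TD2, tau2):
    # Full model: the slow component has its own (second) time delay.
    return base + _exp_rise(t, A1, TD1, tau1) + _exp_rise(t, A2, TD2, tau2)

# Synthetic supra-threshold response (1-s interpolated, 8-min bout) with noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 480.0, 1.0)
vo2 = bi_two_delays(t, 0.9, 2.0, 15.0, 25.0, 0.35, 120.0, 180.0) + rng.normal(0, 0.05, t.size)

p_r, _ = curve_fit(bi_one_delay, t, vo2, p0=(1.0, 2.0, 10.0, 30.0, 0.3, 150.0), maxfev=20000)
p_f, _ = curve_fit(bi_two_delays, t, vo2, p0=(1.0, 2.0, 10.0, 30.0, 0.3, 100.0, 150.0), maxfev=20000)

rss_r = np.sum((vo2 - bi_one_delay(t, *p_r)) ** 2)    # reduced model RSS
rss_f = np.sum((vo2 - bi_two_delays(t, *p_f)) ** 2)   # full model RSS
df_r, df_f = t.size - len(p_r), t.size - len(p_f)
F = ((rss_r - rss_f) / (df_r - df_f)) / (rss_f / df_f)
print(f"F = {F:.2f}, p = {f_dist.sf(F, df_r - df_f, df_f):.3g}")
```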
Abstract:
The objective of this thesis is to investigate the corporate governance attributes of smaller listed Australian firms. This study is motivated by evidence that these firms are associated with more regulatory concerns, the introduction of the ASX Corporate Governance Recommendations in 2004, and a paucity of research to guide regulators and stakeholders of smaller firms. While there is an extensive body of literature examining the effectiveness of corporate governance, it principally focuses on larger companies, resulting in a deficiency in the understanding of the nature and effectiveness of corporate governance in smaller firms. Based on a review of the agency theory literature, a theoretical model is developed that posits that agency costs are mitigated by internal governance mechanisms and transparency. The model includes external governance factors, but in many smaller firms these factors are potentially absent, increasing the reliance on the internal governance mechanisms of the firm. Based on the model, the observed greater regulatory intervention in smaller companies may be due to sub-optimal internal governance practices. Accordingly, this study addresses four broad research questions (RQs). First, what is the extent and nature of the ASX Recommendations that have been adopted by smaller firms (RQ1)? Second, what firm characteristics explain differences in the recommendations adopted by smaller listed firms (RQ2), and third, what firm characteristics explain changes in the governance of smaller firms over time (RQ3)? Fourth, how effective are the corporate governance attributes of smaller firms (RQ4)?

Six hypotheses are developed to address the RQs. The first two hypotheses explore the extent and nature of corporate governance, while the remaining hypotheses evaluate its effectiveness. A time-series, cross-sectional approach is used to evaluate the effectiveness of governance. Three models, based on individual governance attributes, an index of six items derived from the literature, and an index based on the full list of ASX Recommendations, are developed and tested using a sample of 298 smaller firms with annual observations over a five-year period (2002-2006) before and after the introduction of the ASX Recommendations in 2004.

With respect to RQ1, the results reveal that overall adoption of the recommendations increased from 66 per cent in 2004 to 74 per cent in 2006. Interestingly, the adoption rate for recommendations regarding the structure of the board and the formation of committees is significantly lower than the rates for other categories of recommendations. With respect to RQ2, the results reveal that variations in rates of adoption are explained by key firm differences including firm size, profitability, board size, audit quality, and ownership dispersion, while the results for RQ3 were inconclusive. With respect to RQ4, the results provide support for an association between better governance and superior accounting-based performance. In particular, the results highlight the importance of the independence of both the board and audit committee chairs, and of greater accounting-based expertise on the audit committee. In contrast, while there is little evidence that a majority independent board is associated with superior outcomes, there is evidence linking board independence with adverse audit opinion outcomes. These results suggest that board and chair independence are substitutes; in the presence of an independent chair, a majority independent board may be an unnecessary and costly investment for smaller firms.

The findings make several important contributions. First, they contribute to the literature by providing evidence on the extent, nature and effectiveness of governance in smaller firms. The findings also contribute to the policy debate regarding the future development of Australia’s corporate governance code. The findings regarding board and chair independence, and audit committee characteristics, suggest that policy-makers could consider providing additional guidance for smaller companies. In general, the findings offer support for the “if not, why not?” approach of the ASX, rather than a prescriptive rules-based approach.
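As a rough, hedged illustration of the index-based specifications described above (the data file, column names and six index items are assumptions, and the thesis's exact variable definitions and estimator are not reproduced), a pooled regression of an accounting performance measure on a governance index might look like:

```python
# A simplified, hypothetical sketch: build a 0-6 governance index from six
# indicator items and regress an accounting performance measure on it with
# firm controls and year effects.  File, columns and items are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("governance_panel.csv")               # hypothetical firm-year panel

index_items = ["board_majority_indep", "indep_chair", "audit_cmte_exists",
               "audit_cmte_indep_chair", "audit_cmte_acct_expert", "nomination_cmte"]
df["gov_index"] = df[index_items].sum(axis=1)           # 0-6 governance score

model = smf.ols("roa ~ gov_index + firm_size + leverage + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.summary())
```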
Abstract:
Google, Facebook, Twitter, LinkedIn and similar large-scale digital service providers are having a tremendous impact on societies, corporations and individuals. However, despite their rapid uptake and their obvious influence on the behavior of individuals and on the business models and networks of organizations, we still lack a deeper, theory-guided understanding of the related phenomena. We use Teece’s notion of complementary assets and extend it towards ‘digital complementary assets’ (DCA) in an attempt to provide such a theory-guided understanding of these digital services. Building on Teece’s theory, we make three contributions. First, we offer a new conceptualization of digital complementary assets in the form of digital public goods and digital public assets. Second, we differentiate three models for how organizations can engage with such digital complementary assets. Third, we find that the user base is a critical factor when considering appropriability.
Abstract:
In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners’ understanding of spoken language. This paper presents an argument for rethinking the emphasis on comprehension and advocates augmenting current teaching with an explicit focus on strategies. Drawing on the literature, the paper provides three models of strategy instruction for the teaching and development of listening skills. The models include steps for implementation that accord with their respective approaches to explicit instruction. The final section of the paper synthesises key points from the models as a guide for application in the second language classroom. The premise underpinning the paper is that the teaching of strategies can provide learners with active and explicit measures for managing and expanding their listening capacities, both in the learning and ‘real world’ use of a second language.
Abstract:
This paper investigates the first year experience of undergraduates with a view to discovering some of the factors which determine a successful negotiation of the transitional phase. The paper begins with a theoretical framework of transition based on the three models of Van Gennep (1960), Viney (1980) and Tinto (1987) and applied to the educational transition from school to University. A new model of transition is presented which looks at the relationship between social and academic adjustment of students to university over time.
Abstract:
This thesis examines the formation and governance patterns of the social and spatial concentration of creative people and creative businesses in cities. It develops a typology for creative places, adding the terms 'scene' and 'quarter' to 'cluster', to address the literature's one-sided emphasis on the 'creative clusters' model as an organising mechanism for regional and urban policy. In this framework, a cluster is a gathering of firms with a core focus on economic benefits; a quarter is an urban milieu usually driven by a growth coalition of local government, real estate agents and residential communities; and a scene is a spontaneous assembly of artists, performers and fans with distinct cultural forms. The framework is applied to China, specifically to Hangzhou – a second-tier city in central eastern China that is ambitious to become a 'national cultural and creative industries centre'. The thesis selects three cases – Ideal & Silian 166 Creative Industries Park, White-horse Lake Eco-creative City and LOFT49 Creative Industries Park – to represent scene, quarter and cluster respectively. Drawing on in-depth interviews with initiators, managers and creative professionals of these places, together with extensive documentary analysis, the thesis investigates the composition of actors, the characteristics of the locality and the diversity of activities. The findings illustrate that, in China, planning and government intervention are key to the governance of creative space; spontaneous development processes exist, but they need a more tolerant environment, a greater diversity of cultural forms and more time to develop. Moreover, the main business development model is still real estate based: this model needs to incorporate more mature business models and an enhanced IP protection system. The thesis makes a contribution to the literature on economic and cultural geography, urban planning and creative industries theory. It advocates greater attention to self-management, collaborative governance mechanisms and business strategies for scenes, quarters and clusters. As the three terms intersect, a mixed toolkit of the three models is required to advance the creative city discourse in China.
Abstract:
The serviceability and safety of bridges are crucial to people’s daily lives and to the national economy. Every effort should be made to ensure that bridges function safely and properly, as any damage or fault during the service life can lead to transport paralysis, catastrophic loss of property or even casualties. Nonetheless, aggressive environmental conditions, ever-increasing and changing traffic loads, and aging can all contribute to bridge deterioration. With often constrained budgets, it is important to identify the bridges and bridge elements that should be given higher priority for maintenance, rehabilitation or replacement, and to select the optimal strategy. Bridge health prediction is an essential underpinning science for bridge maintenance optimization, since the effectiveness of an optimal maintenance decision is largely dependent on the forecasting accuracy of bridge health performance. The current approaches for bridge health prediction can be categorised into two groups: condition-rating-based and structural-reliability-based. A comprehensive literature review has revealed the following limitations of the current modelling approaches: (1) it is not evident in the literature to date that any integrated approaches exist for modelling both serviceability and safety aspects so that both performance criteria can be evaluated coherently; (2) complex-system modelling approaches have not been successfully applied to bridge deterioration modelling, though a bridge is a complex system composed of many inter-related bridge elements; (3) multiple bridge deterioration factors, such as deterioration dependencies among different bridge elements, observed information, maintenance actions and environmental effects, have not been considered jointly; (4) the existing approaches lack the Bayesian updating ability to incorporate a variety of event information; (5) the assumption of a series and/or parallel relationship for bridge-level reliability is always held in structural reliability estimation of bridge systems.

To address the deficiencies listed above, this research proposes three novel models based on the Dynamic Object Oriented Bayesian Networks (DOOBNs) approach. Model I addresses bridge deterioration in serviceability, using condition ratings as the health index. The bridge deterioration is represented in a hierarchical relationship, in accordance with the physical structure, so that the contribution of each bridge element to bridge deterioration can be tracked. A discrete-time Markov process is employed to model the deterioration of bridge elements over time. In Model II, bridge deterioration in terms of safety is addressed. The structural reliability of bridge systems is estimated from bridge elements up to the entire bridge. By means of conditional probability tables (CPTs), not only series-parallel relationships but also complex probabilistic relationships in bridge systems can be effectively modelled. The structural reliability of each bridge element is evaluated from its limit state functions, considering the probability distributions of resistance and applied load. Both Models I and II are designed in three steps: modelling consideration, DOOBN development and parameter estimation. Model III integrates Models I and II to address bridge health performance in both serviceability and safety aspects jointly. The modelling of bridge ratings is modified so that every basic modelling unit denotes one physical bridge element. According to the specific materials used, the integration of condition ratings and structural reliability is implemented through critical failure modes.

Three case studies were conducted to validate the three proposed models, respectively. Carefully selected data and knowledge from bridge experts, the National Bridge Inventory (NBI) and the existing literature were utilised for model validation. In addition, event information was generated using simulation to demonstrate the Bayesian updating ability of the proposed models. The prediction results for condition ratings and structural reliability were presented and interpreted for basic bridge elements and the whole bridge system. The results obtained from Model II were compared with those obtained from traditional structural reliability methods. Overall, the prediction results demonstrate the feasibility of the proposed modelling approach for bridge health prediction and underpin the assertion that the three models can be used separately or in an integrated manner, and are more effective than current bridge deterioration modelling approaches. The primary contribution of this work is to enhance knowledge in the field of bridge health prediction, where more comprehensive health performance in both serviceability and safety aspects is addressed jointly. The proposed models, characterised by probabilistic representation of bridge deterioration in hierarchical ways, demonstrate the effectiveness and promise of the DOOBN approach for bridge health management. Additionally, the proposed models have significant potential for bridge maintenance optimization. Working together with advanced monitoring and inspection techniques, and a comprehensive bridge inventory, the proposed models can be used by bridge practitioners to achieve increased serviceability and safety as well as maintenance cost-effectiveness.
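As a minimal illustration of the discrete-time Markov deterioration process used for individual bridge elements in Model I (the four-state rating scale and transition probabilities below are illustrative, not calibrated values from the thesis), a condition-rating distribution can be propagated year by year as follows:

```python
# Illustrative discrete-time Markov deterioration of one bridge element:
# a condition-rating distribution is propagated through a transition matrix.
import numpy as np

# States: condition rating 1 (good) ... 4 (poor); each row sums to 1.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])      # element starts in rating 1
for year in range(1, 21):
    state = state @ P                       # one-year deterioration step
    if year % 5 == 0:
        print(f"year {year:2d}: " + "  ".join(f"CR{i+1}={p:.2f}" for i, p in enumerate(state)))
```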
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.

Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov Chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
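A minimal sketch of BIC-based Bayesian model averaging of the kind described above, assuming equal prior model probabilities; the sample size, log-likelihoods, parameter counts and survival predictions are illustrative, not the case study's fitted values:

```python
# BIC-based BMA with equal prior model probabilities: each candidate model
# contributes to the averaged prediction in proportion to exp(-BIC/2).
import numpy as np

n = 200                                            # hypothetical sample size
models = {                                         # (max log-likelihood, no. of parameters)
    "single Weibull":      (-612.4, 4),
    "two-Weibull mixture": (-604.9, 9),
    "cure model":          (-606.1, 6),
}

bic = {name: -2 * ll + k * np.log(n) for name, (ll, k) in models.items()}
min_bic = min(bic.values())
w = np.array([np.exp(-0.5 * (b - min_bic)) for b in bic.values()])
w /= w.sum()                                       # approximate posterior model probabilities

pred = np.array([0.62, 0.58, 0.66])                # hypothetical 3-year survival predictions
for name, wi in zip(bic, w):
    print(f"{name:20s} weight = {wi:.3f}")
print(f"BMA 3-year survival estimate = {w @ pred:.3f}")
```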