Abstract:
We present a porous medium model of the growth and deterioration of the viable sublayers of an epidermal skin substitute. It consists of five species: cells, intracellular and extracellular calcium, tight junctions, and a hypothesised signal chemical emanating from the stratum corneum. The model is solved numerically in Matlab using a finite difference scheme. The predicted steady-state calcium distributions agree well with experimental data. Our model also demonstrates epidermal skin substitute deterioration when the calcium diffusion coefficient is reduced below values reported in the literature.
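As a minimal illustration of the numerical approach described above, the sketch below solves a single-species steady-state diffusion–uptake equation with a finite difference scheme. The abstract's model couples five species and is solved in Matlab; this Python stand-in, with assumed values for the diffusion coefficient, uptake rate, and boundary calcium levels, shows only the discretisation pattern.

```python
import numpy as np

# Minimal sketch: steady state of D*c'' - k*c = 0 on a 1D depth profile,
# discretised with central differences. Grid size, D, k, and the boundary
# calcium values are illustrative assumptions, not the paper's values.
n, L = 101, 1.0          # grid points, domain depth (dimensionless)
dx = L / (n - 1)
D, k = 1.0, 5.0          # diffusion coefficient, linear uptake rate

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
b[0], b[-1] = 0.2, 1.0             # assumed basal and corneal calcium levels
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = D / dx**2
    A[i, i] = -2 * D / dx**2 - k
c = np.linalg.solve(A, b)
print(c[::20])                     # sample of the steady-state profile
```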
Abstract:
An enhanced mill extraction model has been developed to calculate mill performance parameters and to predict the extraction performance of a milling unit. The model takes into account the fibre suspended in juice streams and calculates the filling ratio, reabsorption factor, imbibition coefficient, and separation efficiency using more complete definitions than those used in previous extraction models. A mass balance model is used to determine the fibre, brix, and moisture mass flows between milling units so that a complete milling train, including the return stream from the juice screen, is modelled. Model solutions are presented to determine the effect of different levels of fibre in juice, and of the efficiency of fibre separation in the juice screen, on brix extraction. The model provides more accurate results than earlier models, leading to a better understanding of the milling process and to opportunities for its improvement.
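The following sketch illustrates the kind of per-unit mass balance such a model chains across a milling train, including a juice-screen return stream. All stream values, the juice split, and the screen efficiency are fabricated assumptions for demonstration; the paper's definitions of filling ratio, reabsorption factor, and the like are not reproduced here.

```python
# Illustrative per-unit mass balance of the kind chained across a milling
# train; values are made-up assumptions, not the paper's definitions.
def unit_balance(brix_in, fibre_in, juice_fraction,
                 fibre_in_juice, screen_efficiency):
    """Split an incoming stream into juice and bagasse, then screen the juice.

    Returns (screened-juice brix, fibre passing the screen, fibre returned)."""
    juice_brix = brix_in * juice_fraction             # brix leaving in juice
    juice_fibre = fibre_in * fibre_in_juice           # fibre suspended in juice
    returned_fibre = juice_fibre * screen_efficiency  # juice-screen return stream
    return juice_brix, juice_fibre - returned_fibre, returned_fibre

brix, fibre_past, fibre_back = unit_balance(
    brix_in=14.0, fibre_in=13.0, juice_fraction=0.75,
    fibre_in_juice=0.05, screen_efficiency=0.9)
print(f"juice brix {brix:.1f} t/h, fibre past screen {fibre_past:.3f} t/h, "
      f"returned {fibre_back:.3f} t/h")
```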
Abstract:
A strongly progressive surveying and mapping industry depends on a shared understanding of the industry as it exists, some shared vision or imagination of what the industry might become, and some shared action plan capable of bringing about a realisation of that vision. The emphasis on sharing implies a need for consensus reached through widespread discussion and mutual understanding. Unless this occurs, concerted action is unlikely. A more likely outcome is that industry representatives will negate each other's efforts in their separate bids for progress. The process of bringing about consensual viewpoints is essentially one of establishing an industry identity. Establishing the industry's identity and purpose is a prerequisite for rational development of the industry's education and training, its promotion and marketing, and operational research that can deal with industry potential and efficiency. This paper interprets evolutionary developments occurring within Queensland's surveying and mapping industry within a framework that sets out logical requirements for a viable industry.
Abstract:
The objective of this research was to investigate the effect of suspension parameters on the dynamic load-sharing of the longitudinal-connected air suspensions of a tri-axle semi-trailer. A novel nonlinear model of a multi-axle semi-trailer with longitudinal-connected air suspension was formulated based on fluid mechanics and thermodynamics and was validated against test results. The effects of suspension parameters on the dynamic load-sharing and road-friendliness of the semi-trailer were then analyzed. Simulation results indicate that the road-friendliness metric, the Dynamic Load Coefficient (DLC), is generally in accordance with the load-sharing metric, the Dynamic Load Sharing Coefficient (DLSC). When the static height or static pressure increases, the DLSC optimization ratio declines monotonically. The effect of employing larger air lines and connectors on the DLSC optimization ratio varies as road roughness and driving speed increase. The results also indicate that, provided the air line diameter is larger than the connector diameter, the influence of the air line diameter on load-sharing is more significant than that of the connector.
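A hedged sketch of the two metrics named above, using definitions common in the heavy-vehicle literature rather than anything taken from this paper: DLC as the coefficient of variation of a tyre-force history, and DLSC as the RMS deviation of each axle's instantaneous load-sharing coefficient from its mean. The three-axle force histories are synthetic.

```python
import numpy as np

def dlc(force):
    """Dynamic Load Coefficient: std of dynamic force / mean (static) force."""
    force = np.asarray(force, dtype=float)
    return force.std() / force.mean()

def dlsc(axle_forces):
    """Dynamic Load Sharing Coefficient per axle of a group.

    axle_forces: array of shape (n_axles, n_samples)."""
    f = np.asarray(axle_forces, dtype=float)
    lsc = f.shape[0] * f / f.sum(axis=0)   # instantaneous load-sharing coefficient
    return np.sqrt(((lsc - lsc.mean(axis=1, keepdims=True)) ** 2).mean(axis=1))

# Synthetic three-axle force histories (kN), purely for illustration.
t = np.linspace(0, 10, 1000)
forces = 50 + np.vstack([3 * np.sin(2 * np.pi * 1.5 * t),
                         4 * np.sin(2 * np.pi * 1.5 * t + 0.8),
                         5 * np.sin(2 * np.pi * 1.5 * t + 1.6)])
print("DLC per axle:", [round(dlc(f), 3) for f in forces])
print("DLSC per axle:", dlsc(forces).round(3))
```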
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve this, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making Saudi Arabia an interesting context for testing the external validity of the IS-Impact Model. The study revisits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis used to analyze the Identification Survey data indicated that two of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study: "Is the Extended IS-Impact Model valid as a hierarchical multidimensional formative measurement model?".
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve this, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed by employing the Partial Least Squares (PLS) approach. The CNQ Model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system. Therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in IS-Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ Model provides a framework to perceptually measure Computer Network Quality from multiple perspectives. The CNQ Model features an easy-to-understand, easy-to-use, and economical survey instrument.
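A minimal sketch of the structural-model assessment step described above, under loud assumptions: the construct scores below are fabricated standard-normal stand-ins rather than the study's data, and the full PLS outer-model estimation is skipped. For a one-predictor inner path, the path coefficient equals the standardized slope (the correlation), R² is its square, and bootstrap resampling yields the t-value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300                                          # assumed sample size
cnq = rng.standard_normal(n)                     # stand-in CNQ scores
is_impact = 0.4 * cnq + rng.standard_normal(n)   # stand-in IS-Impact scores

def path_coefficient(x, y):
    """Standardized slope of y on x (equals their correlation)."""
    return np.corrcoef(x, y)[0, 1]

beta = path_coefficient(cnq, is_impact)
boot = []
for _ in range(1000):                            # bootstrap for the t-value
    idx = rng.integers(0, n, n)
    boot.append(path_coefficient(cnq[idx], is_impact[idx]))
t_value = beta / np.std(boot)
print(f"beta = {beta:.3f}, R^2 = {beta**2:.3f}, t = {t_value:.2f}")
```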
Abstract:
A Bus Rapid Transit (BRT) station is the interface between passengers and services. The station is crucial to line operation as it is typically the only location where buses can pass each other. Congestion may occur here when buses maneuvering into and out of the platform lane interfere with bus flow, or when a queue of buses forms upstream of the platform lane, blocking the passing lane. However, some systems include operation in which express buses pass the critical station, resulting in a proportion of non-stopping buses. It is important to understand the operation of the critical busway station under this type of operation, as it affects busway line capacity. This study uses microsimulation to treat the BRT station operation and to analyze the relationship between the station's limit-state bus capacity (B_ls) and total bus capacity (B_ttl). First, the simulation model is developed for the limit-state scenario, and then a mathematical model is defined and calibrated for a specified range of controlled scenarios of the mean and coefficient of variation of dwell time. Thereafter, the proposed B_ls model is extended to consider non-stopping buses, and the B_ttl model is defined. The proposed models provide a better understanding of BRT line capacity and are useful to transit authorities for designing better BRT operations.
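As an analytical point of comparison for the simulation-based models above, the sketch below uses the classic TCQSM loading-area equation, which ties station bus capacity to clearance time and to the mean and coefficient of variation of dwell time. The paper calibrates its own B_ls and B_ttl models, so the parameter values and the naive express-bus adjustment here are assumptions.

```python
# TCQSM-style loading-area capacity: 3600 / (t_c + t_d + Z * c_v * t_d).
def station_capacity(dwell_s, cv_dwell, clearance_s=10.0, z=1.28):
    """Stopping buses/hour for one loading area; z = 1.28 corresponds to a
    10% probability of a queue forming behind the loading area."""
    return 3600.0 / (clearance_s + dwell_s + z * cv_dwell * dwell_s)

def total_capacity(b_ls, non_stop_share):
    """Naive B_ttl when a share of buses skips the station, assuming a
    non-stopping bus consumes no platform time (an assumption)."""
    return b_ls / (1.0 - non_stop_share)

b_ls = station_capacity(dwell_s=25.0, cv_dwell=0.4)   # illustrative inputs
print(f"B_ls = {b_ls:.0f} bus/h, B_ttl at 30% express = "
      f"{total_capacity(b_ls, 0.3):.0f} bus/h")
```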
Abstract:
The influence of pH on the interfacial energy and wettability distributed over the phospholipid bilayer surface was studied, and the importance of cartilage hydrophobicity (wettability) for the coefficient of friction (f) was established. It is argued that the wettability of cartilage significantly depends on the number of phospholipid bilayers acting as solid lubricant; the hypothesis was proven by conducting friction tests with normal and lipid-depleted cartilage samples. A lamellar-roller-bearing lubrication model was devised involving two mechanisms: (i) lamellar frictionless movement of bilayers, and (ii) a roller-bearing lubrication mode through structured synovial fluid, which operates when lamellar spheres, liposomes, and macromolecules act like a roller bearing situated between two cartilage surfaces in effective biological lubrication.
Abstract:
The Bus Rapid Transit (BRT) station is the interface between passengers and services. The station is crucial to line operation as it is typically the only location where buses can pass each other. Congestion may occur here when buses maneuvering into and out of the platform lane interfere with bus flow, or when a queue of buses forms upstream of the platform lane, blocking the passing lane. Further, some systems include operation in which express buses do not stop at the station, resulting in a proportion of non-stopping buses. It is important to understand the operation of the station under this type of operation and its effect on BRT line capacity. This study uses microscopic traffic simulation modeling to treat the BRT station operation and to analyze the relationship between station bus capacity and BRT line bus capacity. First, the simulation model is developed for the limit-state scenario, and then a statistical model is defined and calibrated for a specified range of controlled scenarios of dwell time characteristics. A field survey was conducted to verify parameters such as dwell time, clearance time, and the coefficient of variation of dwell time, in order to obtain a representative station bus capacity. The proposed model for BRT bus capacity provides a better understanding of BRT line capacity and is useful to transit authorities in BRT planning, design, and operation.
Abstract:
There has been an increasing focus on the development of test methods to evaluate the durability performance of concrete. This paper contributes to this focus by presenting a study that evaluates the effect of water-accessible porosity and oven-dry unit weight on the resistance of both normal and lightweight concrete to chloride-ion penetration. Based on the experimental results and regression analyses, empirical models are established to correlate the total charge passed and the chloride migration coefficient with basic properties of concrete such as water-accessible porosity, oven-dry unit weight, and compressive strength. These equations can be broadly applied to both normal and lightweight aggregate concretes. The model was also validated by an independent set of experimental results from two different concrete mixtures. The model provides a very good estimate of the concrete's durability performance with respect to resistance to chloride-ion penetration.
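The sketch below shows the form such an empirical correlation takes: a multiple linear regression of the chloride migration coefficient on porosity, oven-dry unit weight, and compressive strength. The data points are fabricated placeholders, not the paper's measurements, so the fitted coefficients are meaningless beyond illustration.

```python
import numpy as np

# Fabricated example measurements (assumed units in the comments).
porosity = np.array([8.2, 10.5, 12.1, 14.0, 15.3, 17.8])   # % (assumed)
unit_wt  = np.array([2350, 2280, 2210, 1950, 1880, 1800])  # kg/m^3 (assumed)
strength = np.array([62, 55, 48, 40, 35, 28])              # MPa (assumed)
d_migr   = np.array([6.1, 8.4, 10.9, 14.2, 16.8, 21.5])    # 1e-12 m^2/s

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(porosity), porosity, unit_wt, strength])
coef, *_ = np.linalg.lstsq(X, d_migr, rcond=None)
pred = X @ coef
r2 = 1 - ((d_migr - pred) ** 2).sum() / ((d_migr - d_migr.mean()) ** 2).sum()
print("coefficients:", coef.round(4), f"R^2 = {r2:.3f}")
```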
Abstract:
The reliable response to weak biological signals requires that they be amplified with fidelity. In E. coli, the flagellar motors that control swimming can switch direction in response to very small changes in the concentration of the signaling protein CheY-P, but how this works is not well understood. A recently proposed allosteric model based on cooperative conformational spread in a ring of identical protomers seems promising, as it is able to qualitatively reproduce the switching, locked-state behavior, and Hill-coefficient values measured for the rotary motor. In this paper we undertook a comprehensive simulation study to analyze the behavior of this model in detail and made predictions on three experimentally observable quantities: the switch-time distribution, the locked-state interval distribution, and the Hill coefficient of the switch response. We parameterized the model using experimental measurements, finding excellent agreement with published data on motor behavior. Analysis of the simulated switching dynamics revealed a mechanism for chemotactic ultrasensitivity, in which cooperativity is indispensable for realizing both coherent switching and effective amplification. These results showed how cells can combine elements of analog and digital control to produce switches that are simultaneously sensitive and reliable.
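As a small illustration of the third observable, the sketch below fits a Hill curve to a switch-response dataset to recover the Hill coefficient. The CheY-P concentrations and clockwise-bias values are synthetic stand-ins, not this paper's simulation output, and the steepness chosen is merely motor-like.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, n, k):
    """Fraction of time the motor spins clockwise vs. CheY-P level c."""
    return c**n / (k**n + c**n)

# Synthetic switch-response data (concentrations in uM, assumed).
chey_p = np.linspace(1.0, 6.0, 12)
true_bias = hill(chey_p, 10.0, 3.1)                 # steep, motor-like curve
noise = 0.02 * np.random.default_rng(1).standard_normal(12)
bias = np.clip(true_bias + noise, 0, 1)

(n_hill, k_half), _ = curve_fit(hill, chey_p, bias, p0=[5.0, 3.0])
print(f"Hill coefficient n = {n_hill:.1f}, half-response at {k_half:.2f} uM")
```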
Abstract:
Switchgrass was treated with 1% (w/w) H₂SO₄ in batch tube reactors at temperatures ranging from 140–220°C for up to 60 minutes. In this study, the release patterns of glucose, 5-hydroxymethylfurfural (5-HMF), and levulinic acid from switchgrass cellulose were investigated through a mechanistic kinetic model. The predictions were consistent with the measured products of interest when new parameters reflecting the effects of reaction limitations, such as cellulose crystallinity, acid-soluble lignin–glucose complex (ASL–glucose), and humins that cannot be quantitatively analyzed, were included. The new mechanistic kinetic model incorporating these parameters simulated the experimental data with R² above 0.97. Results showed that the glucose yield was most sensitive to variations in the cellulose crystallinity parameter at low temperatures (140–180°C), while the impact of crystallinity on the glucose yield became imperceptible at elevated temperatures (200–220°C). Parameters related to the undesired products (e.g., ASL–glucose and humins) were the most sensitive factors, compared with rate constants and the other additional parameters, in their impact on the levulinic acid yield at elevated temperatures (200–220°C), while their impact was negligible at 140–180°C. These new findings provide a more rational explanation for the kinetic changes in dilute-acid pretreatment performance and suggest that cellulose crystallinity and undesired products, including ASL–glucose and humins, play key roles in determining the generation of glucose, 5-HMF, and levulinic acid from biomass-derived cellulose.
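The consecutive first-order backbone common to such models, cellulose → glucose → 5-HMF → levulinic acid with side losses to degradation products, can be integrated directly. The sketch below does so with assumed rate constants; the paper's fitted values and its crystallinity, ASL–glucose, and humins parameters are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.05, 0.03, 0.02      # main-path rate constants (1/min, assumed)
k_loss_g, k_loss_h = 0.005, 0.004  # losses to humins etc. (1/min, assumed)

def rhs(t, y):
    """Consecutive first-order kinetics with side losses from glucose and HMF."""
    cellulose, glucose, hmf, lev = y
    return [-k1 * cellulose,
            k1 * cellulose - (k2 + k_loss_g) * glucose,
            k2 * glucose - (k3 + k_loss_h) * hmf,
            k3 * hmf]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0, 0.0], t_eval=[0, 15, 30, 60])
for t, (c, g, h, l) in zip(sol.t, sol.y.T):
    print(f"t={t:4.0f} min  glucose={g:.3f}  5-HMF={h:.3f}  levulinic={l:.3f}")
```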
Abstract:
Spatial data analysis has become more and more important in studies of ecology and economics during the last decade. One focus of spatial data analysis is how to select predictors, variance functions, and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from the candidate set using model selection criteria. The paper evaluates the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC), and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function, and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matérn, and rational quadratic) are used in the simulation studies. Summarizing the simulation results, we find that a misspecified working correlation structure can still capture some spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, which can be indicated by the average adjusted R², and overall RIC performs well.
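A hedged sketch of the selection step: simulate a spatial field under one covariance function, evaluate the Gaussian log-likelihood under each candidate, and rank by BIC. The grid, range parameter, and data are fabricated; a real study would also maximize over the covariance parameters; and the Matérn candidate is omitted here to avoid Bessel functions.

```python
import numpy as np

def corr(h, kind, r=1.0):
    """Candidate correlation functions over separation distance h."""
    if kind == "exponential":
        return np.exp(-h / r)
    if kind == "gaussian":
        return np.exp(-(h / r) ** 2)
    if kind == "rational_quadratic":
        return 1.0 / (1.0 + (h / r) ** 2)
    raise ValueError(kind)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 60))          # irregular 1D sample locations
h = np.abs(x[:, None] - x[None, :])          # pairwise distances
y = rng.multivariate_normal(np.zeros(60), corr(h, "exponential"))

for kind in ("exponential", "gaussian", "rational_quadratic"):
    C = corr(h, kind) + 1e-8 * np.eye(60)    # jitter for numerical stability
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    loglik = -0.5 * (logdet + quad + 60 * np.log(2 * np.pi))
    bic = -2 * loglik + 1 * np.log(60)       # one range parameter assumed
    print(f"{kind:20s} BIC = {bic:.1f}")
```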
Abstract:
A simple stochastic model of a fish population subject to natural and fishing mortalities is described. The fishing effort is assumed to vary over different periods but to be constant within each period. A maximum-likelihood approach is developed for estimating natural mortality (M) and the catchability coefficient (q) simultaneously from catch-and-effort data. If there is not enough contrast in the data to provide reliable estimates of both M and q, as is often the case in practice, the method can be used to obtain the best possible values of q for a range of possible values of M. These techniques are illustrated with tiger prawn (Penaeus semisulcatus) data from the Northern Prawn Fishery of Australia.
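A hedged sketch of the estimation idea: with the standard Baranov catch equation and a lognormal error model (assumptions; the paper's stochastic model may differ in detail), M and q can be estimated jointly by minimizing a concentrated negative log-likelihood over a synthetic catch-and-effort series.

```python
import numpy as np
from scipy.optimize import minimize

effort = np.array([1.0, 1.4, 2.0, 2.3, 1.8, 1.2, 0.9, 0.7])   # assumed
catch_obs = np.array([95, 120, 150, 148, 110, 70, 48, 35.0])  # assumed

def predicted_catch(m, q, n0=1000.0):
    """Baranov catch per period with F = q*E and Z = F + M."""
    n, preds = n0, []
    for e in effort:
        f = q * e
        z = f + m
        preds.append((f / z) * n * (1 - np.exp(-z)))
        n *= np.exp(-z)                    # survivors carried forward
    return np.array(preds)

def nll(params):
    m, q = np.exp(params)                  # log-parameterized, keeps both > 0
    resid = np.log(catch_obs) - np.log(predicted_catch(m, q))
    return 0.5 * len(resid) * np.log((resid ** 2).mean())

fit = minimize(nll, x0=np.log([0.2, 0.1]), method="Nelder-Mead")
m_hat, q_hat = np.exp(fit.x)
print(f"M = {m_hat:.3f}, q = {q_hat:.3f}")
```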
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating, and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social-scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambiguity in the concept of mechanism used in many model-based explanations, and this ambiguity corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions. Thus, evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can be partially explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
In this study I discuss G. W. Leibniz's (1646–1716) views on rational decision-making from the standpoints of both God and man. The divine decision takes place within creation, as God freely chooses the best from an infinite number of possible worlds. While God's choice is based on absolutely certain knowledge, human decisions on practical matters are mostly based on uncertain knowledge. However, in many respects the two could be regarded as analogous in more complicated situations. In addition to giving an overview of divine decision-making and discussing critically the criteria God favours in his choice, I provide an account of Leibniz's views on human deliberation, which includes some new ideas. One of these concerns the importance of estimating probabilities when making decisions: one estimates both the goodness of the act itself and that of its consequences as far as the desired good is concerned. Another idea is related to the plurality of goods in complicated decisions and the competition this may provoke. Thirdly, heuristic models are used to sketch situations under deliberation in order to help in making the decision. Combining the views of Marcelo Dascal, Jaakko Hintikka and Simo Knuuttila, I argue that Leibniz applied two kinds of models of rational decision-making to practical controversies, often without explicating the details. The simpler, traditional pair-of-scales model is best suited to cases in which one has to decide for or against some option, or to distribute goods among parties and strive for a compromise. What may be of more help in more complicated deliberations is the novel vectorial model, which is an instance of the general mathematical doctrine of the calculus of variations. To illustrate this distinction, I discuss some cases in which he apparently applied these models in different kinds of situations. These examples support the view that the models had a systematic value in his theory of practical rationality.