Abstract:
Computational models for cardiomyocyte action potentials (AP) often make use of a large parameter set. This parameter set can contain some elements that are fitted to experimental data independently of any other element, some elements that are derived concurrently with other elements to match experimental data, and some elements that are derived purely from phenomenological fitting to produce the desired AP output. Furthermore, models can make use of several different data sets, not always derived for the same conditions or even the same species. It is consequently uncertain whether the parameter set for a given model is physiologically accurate. Moreover, it is only recently that the possibility of degeneracy in parameter values in producing a given simulation output has started to be addressed. In this study, we examine the effects of varying two parameters (the L-type calcium current (I(CaL)) and the delayed rectifier potassium current (I(Ks))) in a computational model of a rabbit ventricular cardiomyocyte AP on both the membrane potential (V(m)) and the calcium (Ca(2+)) transient. It will subsequently be determined whether this model is degenerate with respect to these parameter values, which has important implications for the stability of these models under cell-to-cell parameter variation, and also for whether the current methodology for generating parameter values is flawed. The accuracy of AP duration (APD) as an indicator of AP shape will also be assessed.
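The degeneracy question can be illustrated with a toy parameter sweep. The APD surrogate below is a hypothetical stand-in, not the rabbit ventricular model used in the study; it merely shows how many distinct (I(CaL), I(Ks)) scaling pairs can reproduce an identical APD:

```python
import numpy as np

def apd_surrogate(scale_cal, scale_ks, base_apd=200.0):
    # Hypothetical stand-in: APD lengthens with I_CaL scaling and
    # shortens with I_Ks scaling (not the study's biophysical model).
    return base_apd * scale_cal / scale_ks

# Sweep both scaling factors over +/-50% of their nominal values
scales = np.linspace(0.5, 1.5, 21)
target_apd, tol = 200.0, 1.0
degenerate = [(a, b) for a in scales for b in scales
              if abs(apd_surrogate(a, b) - target_apd) < tol]

# Many distinct (scale_cal, scale_ks) pairs reproduce the target APD,
# which is exactly the kind of degeneracy the study probes.
print(len(degenerate))
```

Any pair lying on the same ratio line gives the same APD, so APD alone cannot identify the underlying parameters.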
Abstract:
The action potential (AP) of a cardiac cell is made up of a complex balance of ionic currents which flow across the cell membrane in response to electrical excitation of the cell. Biophysically detailed mathematical models of the AP have grown larger in terms of the variables and parameters required to model new findings in subcellular ionic mechanisms. The fitting of parameters to such models has seen a large degree of parameter and module re-use from earlier models. An alternative method for modelling electrically excitable cardiac tissue is a phenomenological model, which reconstructs tissue-level AP wave behaviour without subcellular details. A new parameter estimation technique to fit the morphology of the AP in a four-variable phenomenological model is presented. An approximation of a nonlinear ordinary differential equation model is established that corresponds to the given phenomenological model of the cardiac AP. The parameter estimation problem is converted into a minimisation problem for the unknown parameters. A modified hybrid of Nelder–Mead simplex search and particle swarm optimisation is then used to solve the minimisation problem. The successful fitting of data generated from a well-known biophysically detailed model is demonstrated. A successful fit to an experimental AP recording that contains both noise and experimental artefacts is also produced. The method's ability to fit a complex morphology to a model with substantially more parameters than previously used is established.
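A minimal sketch of the fitting idea, assuming a simple two-exponential AP morphology and SciPy's plain Nelder–Mead (the paper's method hybridises Nelder–Mead with particle swarm optimisation and fits a four-variable model; both the shape function and the starting point below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def ap_shape(t, amp, tau_up, tau_down):
    # Hypothetical two-exponential AP morphology (upstroke + repolarisation)
    return amp * (np.exp(-t / tau_down) - np.exp(-t / tau_up))

t = np.linspace(0.0, 300.0, 301)
recorded = ap_shape(t, 100.0, 2.0, 80.0)   # synthetic "recording"

def sse(params):
    # Least-squares misfit between model output and the recording
    return float(np.sum((ap_shape(t, *params) - recorded) ** 2))

fit = minimize(sse, x0=[80.0, 5.0, 50.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 10000})
```

Casting the fit as a scalar minimisation problem is what lets derivative-free searches such as Nelder–Mead (or the paper's hybrid) be applied, since the underlying ODE model need not be differentiated with respect to its parameters.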
Abstract:
This paper presents a new insight into the mechanism of biolubrication of articulating mammalian joints that includes the function of surface-active phospholipids (SAPLs). SAPLs can be adsorbed on the surface of cartilage membranes as a hydrophobic monolayer (H-phobic-M Model or Hills' Model) or as a newly proposed hydrophilic bilayer (H-philic-B Model). With respect to the synovial joint's frictionless operation, three processes are identified, namely: monolayer/bilayer phospholipid binding to cartilage with lubricin interaction; the influence of induced pressure on the interaction of hyaluronan with phospholipids; and biolubrication arising from two gliding articular hydrophilic surfaces acting as a reverse micelle. Lubricin is considered to play a critical role as a supplier of phospholipids, which overlay the articular surface of articular cartilage. Hyaluronic acid is considered to play a critical mediating role in the interaction between the hydrophilic part of phospholipids, the articular surface and water (hydration) in facilitating the lubrication process. Two models of frictionless lubrication, namely the hydrophobic (H-phobic-M Model) and our conceptual hydrophilic (H-philic-B Model), are compared. © Institution of Engineers Australia, 2008.
Abstract:
Young novice drivers are significantly more likely to be killed or injured in car crashes than older, experienced drivers. Graduated driver licensing (GDL), which allows the novice to gain driving experience under less-risky circumstances, has resulted in reduced crash incidence; however, the driver's psychological traits are ignored. This paper explores the relationships between gender, age, anxiety, depression, sensitivity to reward and punishment, sensation-seeking propensity, and risky driving. Participants were 761 young drivers aged 17–24 (M= 19.00, SD= 1.56) with a Provisional (intermediate) driver's licence who completed an online survey comprising socio-demographic questions, the Impulsive Sensation Seeking Scale, Kessler's Psychological Distress Scale, the Sensitivity to Punishment and Sensitivity to Reward Questionnaire, and the Behaviour of Young Novice Drivers Scale. Path analysis revealed depression, reward sensitivity, and sensation-seeking propensity predicted the self-reported risky behaviour of the young novice drivers. Gender was a moderator; and the anxiety level of female drivers also influenced their risky driving. Interventions do not directly consider the role of rewards and sensation seeking, or the young person's mental health. An approach that does take these variables into account may contribute to improved road safety outcomes for both young and older road users.
Abstract:
To facilitate the implementation of workflows, enterprise and workflow system vendors typically provide workflow templates for their software. Each of these templates depicts a variant of how the software supports a certain business process, allowing the user to save the effort of creating models and links to system components from scratch by selecting and activating the appropriate template. A combination of the strengths of different templates is, however, only achievable by manually adapting the templates, which is cumbersome. We therefore suggest in this paper to combine different workflow templates into a single configurable workflow template. Using the workflow modeling language of SAP’s WebFlow engine, we show how such a configurable workflow modeling language can be created by identifying the configurable elements in the original language. Requirements imposed on configurations inhibit invalid configurations. Based on a default configuration, such configurable templates can be used as easily as the traditional templates. The suggested approach is also applicable to other workflow modeling languages.
Abstract:
We consider a continuous time model for election timing in a Majoritarian Parliamentary System where the government maintains a constitutional right to call an early election. Our model is based on the two-party-preferred data that measure the popularity of the government and the opposition over time. We describe the poll process by a Stochastic Differential Equation (SDE) and use a martingale approach to derive a Partial Differential Equation (PDE) for the government’s expected remaining life in office. A comparison is made between a three-year and a four-year maximum term, and we also provide the exercise boundary for calling an election. The impacts of changes in the SDE parameters, the probability of winning the election and the maximum term on the call exercise boundaries are discussed and analysed. An application of our model to the Australian Federal Election for the House of Representatives is also given.
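The poll process can be sketched by Euler–Maruyama simulation. The mean-reverting form below is an assumption for illustration only (the paper specifies its own SDE and solves a PDE for the expected remaining life; here a Monte-Carlo win probability stands in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean-reverting poll dynamics: dX = kappa*(mu - X) dt + sigma dW,
# where X is the government's two-party-preferred vote share (%).
kappa, mu, sigma = 0.5, 50.0, 4.0
dt = 1.0 / 52.0                       # weekly polling steps

def win_probability(x0, years_ahead, n_paths=4000):
    # Probability the government leads (X > 50) if the election is held
    # `years_ahead` from now, estimated by Euler-Maruyama simulation.
    x = np.full(n_paths, x0)
    for _ in range(int(years_ahead / dt)):
        x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    return float(np.mean(x > 50.0))

# A government polling ahead should prefer an earlier election:
# mean reversion erodes its lead the longer it waits.
p_soon, p_late = win_probability(55.0, 0.25), win_probability(55.0, 3.0)
```

This trade-off between cashing in a current lead and waiting out the maximum term is what the paper's exercise boundary formalises.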
Abstract:
In this paper we construct a mathematical model for the genetic regulatory network of the lactose operon. This mathematical model contains transcription and translation of the lactose permease (LacY) and a reporter gene GFP. The probability of transcription of LacY is determined by 14 binding states out of all 50 possible binding states of the lactose operon based on the quasi-steady-state assumption for the binding reactions, while we calculate the probability of transcription for the reporter gene GFP based on 5 binding states out of 19 possible binding states because the binding site O2 is missing for this reporter gene. We have tested different mechanisms for the transport of thio-methylgalactoside (TMG) and the effect of different Hill coefficients on the simulated LacY expression levels. Using this mathematical model we have reproduced one of the experimental results with different LacY concentrations, which are induced by different concentrations of TMG.
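The role of the Hill coefficient in a TMG transport term can be illustrated with a bare Hill function (an assumed uptake form with illustrative constants, not the paper's full 14-binding-state transcription model):

```python
def hill(s, vmax=1.0, k=20.0, n=1):
    # Hill-type TMG uptake rate; n is the Hill coefficient varied in the
    # study, k the half-saturation concentration (illustrative units).
    return vmax * s**n / (k**n + s**n)

# Larger Hill coefficients sharpen the switch around the half-saturation
# point k, which changes the simulated induction threshold: compare the
# uptake rate below (s = 10) and above (s = 40) the switch point.
responses = {n: (hill(10.0, n=n), hill(40.0, n=n)) for n in (1, 2, 4)}
```

At n = 1 the response is graded; at n = 4 it approaches an on/off switch, which is why the choice of Hill coefficient matters for the simulated LacY expression levels.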
Abstract:
This paper contributes to the rigor vs. relevance debate in the Information Systems (IS) discipline. Using the Action Research methodology, this study evaluates the relevance of a rigorously validated IS evaluation model in practice. The study captures observations of operational end-users employing a market-leading Enterprise System application for procurement and order fulfillment in an organization. The analysis of the observations demonstrates the broad relevance of the measurement instrument. More importantly, the study identifies several improvements and possible confusions in applying the instrument in practice.
Abstract:
Aim: In this paper we discuss the use of the Precede-Proceed model when investigating health promotion options for breast cancer survivors. Background: Adherence to recommended health behaviors can optimize well-being after cancer treatment. Guided by the Precede-Proceed approach, we studied the behaviors of breast cancer survivors in our health service area. Data sources: The interview data from the cohort of breast cancer survivors are used in this paper to illustrate the use of Precede-Proceed in this nursing research context. Interview data were collected from June to December 2009. We also searched Medline, CINAHL, PsychInfo and PsychExtra up to 2010 for relevant literature in English to interrogate the data from other theoretical perspectives. Discussion: The Precede-Proceed model is theoretically-complex. The deductive analytic process guided by the model usefully explained some of the health behaviors of cancer survivors, although it could not explicate many other findings. A complementary inductive approach to the analysis and subsequent interpretation by way of Uncertainty in Illness Theory and other psychosocial perspectives provided a comprehensive account of the qualitative data that resulted in contextually-relevant recommendations for nursing practice. Implications for nursing: Nursing researchers using Precede-Proceed should maintain theoretical flexibility when interpreting qualitative data. Perspectives not embedded in the model might need to be considered to ensure that the data are analyzed in a contextually-relevant way. Conclusion: Precede-Proceed provides a robust framework for nursing researchers investigating health promotion in cancer survivors; however additional theoretical lenses to those embedded in the model can enhance data interpretation.
Abstract:
Significant empirical data from the fields of management and business strategy suggest that it is a good idea for a company to make in-house the components and processes underpinning a new technology. Other evidence suggests exactly the opposite, saying that firms would be better off buying components and processes from outside suppliers. One possible explanation for this lack of convergence is that earlier research in this area has overlooked two important aspects of the problem: reputation and trust. To gain insight into how these variables may impact make-buy decisions throughout the innovation process, the Sporas algorithm for measuring reputation was added to an existing agent-based model of how firms interact with each other throughout the development of new technologies. The model's results suggest that reputation and trust do not play a significant role in the long-term fortunes of an individual firm as it contends with technological change in the marketplace. Accordingly, this model serves as a cue for management researchers to investigate more thoroughly the temporal limitations and contingencies that determine how the trust between firms may affect the R&D process.
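The flavour of such a reputation mechanism can be sketched with a damped recursive update, loosely in the spirit of Sporas (this is an illustrative simplification with made-up constants, not the exact published Sporas recurrence): new ratings pull a firm's reputation toward them, and the pull weakens as reputation approaches a ceiling, so established reputations are hard to move.

```python
def update_reputation(r, rating, theta=10.0, r_max=3000.0):
    # rating in [0, 1]; theta controls effective memory length; r_max is
    # the reputation ceiling. Damping slows change for agents already
    # near the maximum reputation (illustrative, not the Sporas formula).
    damping = 1.0 - r / r_max
    return r + (1.0 / theta) * damping * (rating * r_max - r)

# A firm that keeps delivering good outcomes climbs toward, but never
# overshoots, the reputation ceiling.
r = 300.0
for _ in range(1000):
    r = update_reputation(r, rating=1.0)
```

Plugging a rule of this shape into each agent's partner-selection step is the kind of extension the abstract describes.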
Developing a model of embedding academic numeracy in university programs: a case study from nursing
Abstract:
This is a study of the academic numeracy of nursing students. This study develops a theoretical model for the design and delivery of university courses in academic numeracy. The following objectives are addressed: 1. To investigate nursing students' current knowledge of academic numeracy; 2. To investigate how nursing students’ knowledge and skills in academic numeracy can be enhanced using a developmental psychology framework; and 3. To utilise data derived from meeting objectives 1 and 2 to develop a theoretical model to embed academic numeracy in university programs. This study draws from Valsiner’s Human Development Theory (Valsiner, 1997, 2007). It is a quasi-experimental intervention case study (Faltis, 1997) and takes a multimethod approach using pre- and post-tests; observation notes; and semi-structured teaching sessions to document a series of microgenetic studies of student numeracy. Each microgenetic study is centered on the lived experience of students becoming more numerate. The method for this section is based on Vygotsky’s double stimulation (Valsiner, 2000a; 2007). Data collection includes interviews on students’ past experience with mathematics; their present feelings and experiences and how these present feelings and experiences are transformed. The findings from this study have provided evidence that the course developed for nursing students, underpinned by an appropriate framework, does improve academic numeracy. More specifically, students improved their content knowledge of and confidence in mathematics in areas that were directly related to their degree. The study used Valsiner’s microgenetic approach to development to trace the course as it was being taught and two students’ personal academic numeracy journeys. It highlighted particularly troublesome concepts, then outlined scaffolding and pathways used to develop understanding. 
This approach to academic numeracy development was summarised into a four-faceted model at the university, program, course and individual level. This model can be applied successfully to similar contexts. Thus the thesis advances both theory and practice in this under-researched and under-theorised area.
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is “to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice” (Gable et al., 2006). IS-Impact is defined as “a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups” (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the “impact” half includes the Organizational-Impact and Individual-Impact dimensions, and the “quality” half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalizable, yielding results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalization of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. The objective of this study is to develop a Mandarin-version IS-Impact model, encompassing a list of China-specific IS-Impact measures, to aid a better understanding of the IS-Impact phenomenon in a Chinese organizational context. The IS-Impact model provides much-needed theoretical guidance for this investigation of ES and ES impacts in a Chinese context.
The appropriateness and soundness of employing the IS-Impact model as a theoretical foundation are evident: the model originated from a sound theory of IS Success (1992), was developed through rigorous validation, and was derived in the context of Enterprise Systems. Based on the IS-Impact model, this study investigates a number of research questions (RQs). Firstly, the research investigates what essential impacts have been derived from ES by Chinese users and organizations [RQ1]. Secondly, we investigate which salient quality features of ES are perceived by Chinese users [RQ2]. Thirdly, we seek to answer whether the quality and impact measures are sufficient to assess ES success in general [RQ3]. Lastly, the study attempts to address whether the IS-Impact measurement model is appropriate for Chinese organizations in terms of evaluating their ES [RQ4]. An open-ended, qualitative identification survey was employed in the study. A large body of short text data was gathered from 144 Chinese users, and 633 valid IS-Impact statements were generated from the data set. A general inductive approach was applied in the qualitative data analysis. Rigorous qualitative data coding resulted in 50 first-order categories and 6 second-order categories grounded in the context of Chinese organizations. The six second-order categories are: 1) System Quality; 2) Information Quality; 3) Individual Impacts; 4) Organizational Impacts; 5) User Quality; and 6) IS Support Quality. The final research finding of the study is the contextualized Mandarin-version IS-Impact measurement model, which includes 38 measures organized into 4 dimensions: System Quality, Information Quality, Individual Impacts and Organizational Impacts.
The study also proposed two conceptual models to harmonize the IS-Impact model with the two emergent constructs – User Quality and IS Support Quality – by drawing on the previous IS effectiveness literature and on the Work System theory proposed by Alter (1999), respectively. The study is significant as the first effort to empirically and comprehensively investigate IS-Impact in China. Its contributions can be classified as theoretical and practical. From the theoretical perspective, the study uses qualitative evidence to test and consolidate the IS-Impact measurement model in terms of robustness, completeness and generalizability. The unconventional research design exhibits the creativity of the study: the theoretical model is not imposed top-down as an a priori framework seeking evidence of its own credibility; rather, a competing model is allowed to emerge from bottom-up, open-coding analysis. The study is also an example of extending and localizing a pre-existing theory, developed in a Western context, when that theory is introduced to a different context. From the practical perspective, this is the first introduction of prominent IS Success research findings to Chinese academia and practitioners. The study provides a guideline for Chinese organizations to assess their Enterprise Systems and to leverage IT investment in the future. As a research effort in the ITPS track, this study provides the research team with an alternative operationalization of the dependent variable. Future research can adopt the contextualized Mandarin IS-Impact framework as an a priori theoretical model and further test its validity quantitatively and empirically.
Abstract:
This paper presents a key based generic model for digital image watermarking. The model aims at addressing an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system, and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
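As a concrete, hypothetical instance of key-based embedding (not the paper's formal model), a key can seed a PRNG that selects which pixel positions carry the watermark bits, so that extraction requires the same key:

```python
import numpy as np

def embed(image, bits, key):
    # Write each watermark bit into the LSB of a key-selected pixel
    rng = np.random.default_rng(key)
    flat = image.flatten()                      # copy of the pixel data
    idx = rng.choice(flat.size, size=len(bits), replace=False)
    flat[idx] = (flat[idx] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(image.shape)

def extract(image, n_bits, key):
    # The same key regenerates the same embedding positions
    rng = np.random.default_rng(key)
    idx = rng.choice(image.size, size=n_bits, replace=False)
    return (image.flatten()[idx] & 1).tolist()

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
wm = embed(img, bits, key=7)
```

Here the key plays the role the model assigns to shared system inputs between embedder and detector; robustness to the attacks the paper formalises is outside the scope of this sketch.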
Abstract:
The present study aims to validate the current best-practice model of implementation effectiveness in small and mid-size businesses. Data from 135 organizations largely confirm the original model across various types of innovation. In addition, we extended this work by highlighting the importance of human resources in implementation effectiveness and the consequences of innovation effectiveness on future adoption attitudes. We found that the availability of skilled employees was positively related to implementation effectiveness. Furthermore, organizations that perceived a high level of benefits from implemented innovations were likely to have a positive attitude towards future innovation adoption. The implications of our improvements to the original model of implementation effectiveness are discussed.
Abstract:
Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic material flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or 'burn criterion'. The burn criterion in use at the onset of this project was arbitrarily selected, and did not accurately reflect the length a sample must burn in order to be burning independently of the ignition event and, in some cases, required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independently of the igniter, and should be considered flammable.
The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The predicted extent of the HAZ varied significantly, ranging from ~2-27 mm depending on the test sample's thermal properties and the test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and with decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revisions of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling reduced gravity metallic material flammability testing to be conducted in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e. Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as currently most space-based applications typically use normal gravity information in order to qualify systems and/or components for reduced gravity use. This is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity.
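The HAZ idea can be approximated in one dimension (the thesis model is two-dimensional; the diffusivities, igniter temperature and ignition time below are rough illustrative values): hold one end of a rod at the igniter temperature and measure how far the ≥250 °C rise reaches by the end of the ignition transient.

```python
import numpy as np

def haz_extent(alpha, t_ign=5.0, t_hot=1500.0, n=200, length=0.1):
    # Explicit finite-difference solution of 1-D heat conduction in a rod
    # whose heated end is held at a fixed temperature rise t_hot (deg C).
    # alpha: thermal diffusivity (m^2/s); returns HAZ length in metres,
    # i.e. the extent of the >= 250 degC rise at the end of ignition.
    dx = length / n
    dt = 0.4 * dx**2 / alpha              # stable explicit time step
    temp = np.zeros(n)                    # temperature rise above ambient
    temp[0] = t_hot
    for _ in range(int(t_ign / dt)):
        temp[1:-1] += alpha * dt / dx**2 * (temp[2:] - 2.0*temp[1:-1] + temp[:-2])
    return dx * np.count_nonzero(temp >= 250.0)

# Higher thermal diffusivity -> larger heat-affected zone, matching the
# trend reported above (diffusivities are order-of-magnitude values).
haz_steel = haz_extent(alpha=4e-6)        # ~316 stainless steel
haz_copper = haz_extent(alpha=1.1e-4)     # copper, far more diffusive
```

Even this 1-D sketch reproduces the qualitative finding that drove the new burn criterion: more diffusive alloys pre-heat a longer region near the igniter, so the burn length required for ignition-independent burning must exceed the largest plausible HAZ.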
The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in both terrestrial and space-based systems) was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal gravity and reduced gravity respectively. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was shown to be the case in terms of both higher regression rates (i.e. faster consumption of the test sample – fuel), and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50% from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity for the Inconel® 718, and 25% from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications (for fire safety and engineering applications) of these results are presented and discussed, in particular, examining methods for mitigating the risk of a fire in reduced gravity.