239 results for SEMI-EMPIRICAL THEORY
Abstract:
This paper outlines a major empirical study that is being undertaken by an interdisciplinary team into genetic discrimination in Australia. The 3-year study will examine the nature and extent of this newly emerging phenomenon across the perspectives of consumers, third parties, and the legal system and will analyze its social and legal dimensions. Although the project is confined to Australia, it is expected that the outcomes will have significance for the wider research community as this is the most substantial study of its kind to be undertaken to date into genetic discrimination.
Abstract:
This study explores several important aspects of the management of new product development (NPD) in the Chinese steel industry. Specifically, it explores NPD success factors, the importance of management functions to new product success, and measures of new product success from the perspective of the industry's practitioners. Based on a sample of 190 industrial practitioners from 18 Chinese steel companies, the study provides a mixed picture as China makes the transition from a centrally controlled to a market-based economy. On the one hand, respondents ranked understanding users' needs as the most important factor influencing the performance of new products. Further, formulating new product strategy and strengthening market research are perceived as the most important managerial functions in NPD. On the other hand, technical performance measures are regarded as more important and are more widely used in industry than market-based or financial measures of success.
Abstract:
Semi-aquatic animals represent a transitional locomotor condition characterised by the possession of morphological features that allow locomotion both in water and on land. Most ecologically important behaviours of crocodilians occur in the water, raising the question of whether their 'terrestrial construction' constrains aquatic locomotion. Moreover, the demands for aquatic locomotion change with life-history stage. The aim of this research was to determine the kinematic characteristics and efficiency of aquatic locomotion in different-sized crocodiles (Crocodylus porosus). Aquatic propulsion was achieved primarily by tail undulations, and the use of limbs during swimming was observed only in very small animals or at low swimming velocities in larger animals. Over the range of swimming speeds we examined, tail beat amplitude did not change with increasing velocity, but amplitude increased significantly with body length. However, amplitude expressed relative to body length decreased with increasing body length. Tail beat frequency increased with swimming velocity, but there were no differences in frequency between different-sized animals. Mechanical power generated during swimming and thrust increased non-linearly with swimming velocity, but disproportionately, so that kinematic efficiency decreased with increasing swimming velocity. The importance of unsteady forces, expressed as the reduced frequency, increased with increasing swimming velocity. Amplitude is the main determinant of body-size-related increases in swimming velocity but, compared with aquatic mammals and fish, crocodiles are slow swimmers, probably because of constraints imposed by muscle performance and unsteady forces opposing forward movement. Nonetheless, the kinematic efficiency of aquatic locomotion in crocodiles is comparable to that of fully aquatic mammals, and it is considerably greater than that of semi-aquatic mammals.
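The two kinematic quantities this abstract turns on, relative tail-beat amplitude and reduced frequency, can be sketched as follows. This is a minimal illustration assuming the common definition of reduced frequency as f*L/U (the abstract does not state which definition the authors used), and all numerical values are hypothetical:

```python
# Illustrative swimming-kinematics metrics; all numbers are hypothetical,
# chosen only to mirror the trend reported in the abstract.

def relative_amplitude(amplitude_m: float, body_length_m: float) -> float:
    """Tail-beat amplitude expressed as a fraction of body length."""
    return amplitude_m / body_length_m

def reduced_frequency(freq_hz: float, body_length_m: float, speed_m_s: float) -> float:
    """One common definition of reduced frequency, f*L/U: the ratio of
    unsteady to steady flow effects (assumed definition, not the paper's)."""
    return freq_hz * body_length_m / speed_m_s

# A small vs a large crocodile: absolute amplitude grows with body length,
# but relative amplitude shrinks, as the abstract reports.
small = relative_amplitude(0.05, 0.5)   # hypothetical 0.5 m animal
large = relative_amplitude(0.15, 3.0)   # hypothetical 3.0 m animal
assert small > large
```

The assertion encodes the abstract's finding that amplitude relative to body length decreases with increasing body length, even while absolute amplitude increases.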
Abstract:
Conceptual modelling is an activity undertaken during information systems development work to build a representation of selected semantics about some real-world domain. Ontological theories have been developed to account for the structure and behavior of the real world in general. In this paper, I discuss why ontological theories can be used to inform conceptual modelling research, practice, and pedagogy. I provide examples from my research to illustrate how a particular ontological theory has enabled me to improve my understanding of certain conceptual modelling practices and grammars. I also describe how some colleagues and I have used this theory to generate several counter-intuitive, sometimes surprising predictions about widely advocated conceptual modelling practices - predictions that were subsequently supported in empirical research we undertook. Finally, I discuss several possibilities and pitfalls I perceive to be associated with using ontological theories to underpin research on conceptual modelling.
Abstract:
This article presents a fairness theory-based conceptual framework for studying and managing consumers’ emotions during service recovery attempts. The conceptual framework highlights the central role played by counterfactual thinking and accountability. Findings from five focus groups are also presented to lend further support to the conceptual framework. Essentially, the article argues that a service failure event triggers an emotional response in the consumer, and from here the consumer commences an assessment of the situation, considering procedural justice, interactional justice, and distributive justice elements, while engaging in counterfactual thinking and apportioning accountability. More specifically, the customer assesses whether the service provider could and should have done something more to remedy the problem and how the customer would have felt had these actions been taken. The authors argue that during this process situational effort is taken into account when assessing accountability. When service providers do not appear to exhibit an appropriate level of effort, consumers attribute this to the service provider not caring. This in turn leads to the customer feeling more negative emotions, such as anger and frustration. Managerial implications of the study are discussed.
Abstract:
Loss of magnetic medium solids from dense medium circuits is a substantial contributor to operating cost. Much of this loss is by way of wet drum magnetic separator effluent. A model of the separator would be useful for process design, optimisation and control. A review of the literature established that although various rules of thumb exist, largely based on empirical or anecdotal evidence, there is no model of magnetics recovery in a wet drum magnetic separator which includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and in single variable experiments, was therefore carried out using a purpose-built rig which featured a small industrial-scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in the work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. Observations carried out as an adjunct to this work, as well as magnetic theory, suggest that the capture of magnetic particles in the wet drum magnetic separator is by a flocculation process. Such a process should be defined by a flocculation rate and a flocculation time, the latter being defined by the volumetric flowrate and the volume within the separation zone. A model based on this concept and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to provide a satisfactory fit to the data over three orders of magnitude of magnetics loss. (C) 2003 Elsevier Science B.V. All rights reserved.
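The fit-then-validate workflow described above (fit adjustable parameters to a random 80% of the trials, validate on the held-out 20%) can be sketched as follows. The first-order model form, the rate constant, and the synthetic data are purely illustrative stand-ins, not the paper's actual model or data:

```python
# Sketch of the 80/20 fit/validation workflow, assuming an illustrative
# first-order flocculation model: recovery R = 1 - exp(-k * V/Q), where
# V/Q (separation-zone volume over volumetric flowrate) is the flocculation
# time. The model form, true_k, and the data are hypothetical.

import math
import random

def recovery(k, volume, flowrate):
    t = volume / flowrate              # flocculation (residence) time
    return 1.0 - math.exp(-k * t)

# Synthetic "trials": (volume m^3, flowrate m^3/s, observed recovery)
random.seed(0)
true_k = 2.5
trials = []
for _ in range(50):
    v = random.uniform(0.01, 0.05)
    q = random.uniform(0.02, 0.2)
    trials.append((v, q, recovery(true_k, v, q) + random.gauss(0, 0.01)))

random.shuffle(trials)
split = int(0.8 * len(trials))
fit_set, validation_set = trials[:split], trials[split:]

def sse(k, data):
    """Sum of squared errors of the model with rate constant k."""
    return sum((r - recovery(k, v, q)) ** 2 for v, q, r in data)

# Fit k by a coarse grid search (least squares) on the 80% set
k_hat = min((k / 100 for k in range(1, 1001)), key=lambda k: sse(k, fit_set))

# Validate on the held-out 20%
val_err = sse(k_hat, validation_set) / len(validation_set)
print(f"fitted k = {k_hat:.2f}, validation MSE = {val_err:.5f}")
```

In practice a proper optimiser would replace the grid search; the point here is only the split-fit-validate structure the abstract describes.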
Abstract:
Today, the standard approach for the kinetic analysis of dynamic PET studies is compartment models, in which the tracer and its metabolites are confined to a few well-mixed compartments. We examine whether the standard model is suitable for modern PET data or whether theories including more physiologic realism can advance the interpretation of dynamic PET data. A more detailed microvascular theory is developed for intravascular tracers in single-capillary and multiple-capillary systems. The microvascular models, which account for concentration gradients in capillaries, are validated and compared with the standard model in a pig liver study. Methods: Eight pigs underwent a 5-min dynamic PET study after O-15-carbon monoxide inhalation. Throughout each experiment, hepatic arterial blood and portal venous blood were sampled, and flow was measured with transit-time flow meters. The hepatic dual-inlet concentration was calculated as the flow-weighted inlet concentration. Dynamic PET data were analyzed with a traditional single-compartment model and 2 microvascular models. Results: Microvascular models provided a better fit of the tissue activity of an intravascular tracer than did the compartment model. In particular, the early dynamic phase after a tracer bolus injection was much improved. The regional hepatic blood flow estimates provided by the microvascular models (1.3 +/- 0.3 mL min(-1) mL(-1) for the single-capillary model and 1.14 +/- 0.14 mL min(-1) mL(-1) for the multiple-capillary model) (mean +/- SEM mL of blood min(-1) mL of liver tissue(-1)) were in agreement with the total blood flow measured by flow meters and normalized to liver weight (1.03 +/- 0.12 mL min(-1) mL(-1)). Conclusion: Compared with the standard compartment model, the 2 microvascular models provide a superior description of tissue activity after an intravascular tracer bolus injection.
The microvascular models include only parameters with a clear-cut physiologic interpretation and are applicable to capillary beds in any organ. In this study, the microvascular models were validated for the liver and provided quantitative regional flow estimates in agreement with flow measurements.
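The standard single-compartment (one-tissue) model that the microvascular models are compared against can be sketched as a convolution of the input function with a single exponential, C_T(t) = K1 * integral of C_in(s)*exp(-k2*(t-s)) ds. The input curve and rate constants below are illustrative, not values from the study:

```python
# Sketch of the standard one-tissue compartment model used as the baseline
# in the abstract: tissue activity = input function convolved with
# K1 * exp(-k2 * t). The bolus shape, K1 and k2 are hypothetical.

import math

def one_tissue_model(c_in, k1, k2, dt):
    """Discrete convolution of the input curve with K1 * exp(-k2 * t)."""
    out = []
    for i in range(len(c_in)):
        acc = 0.0
        for j in range(i + 1):
            acc += c_in[j] * math.exp(-k2 * (i - j) * dt) * dt
        out.append(k1 * acc)
    return out

dt = 1.0                                      # seconds
t = [i * dt for i in range(300)]              # a 5-min dynamic study
c_in = [(ti / 10.0) * math.exp(-ti / 10.0) for ti in t]   # gamma-like bolus
c_tissue = one_tissue_model(c_in, k1=0.9, k2=0.3, dt=dt)
```

Because the compartment is assumed well mixed, this model cannot represent concentration gradients along the capillary; that is exactly the early-phase shortcoming the microvascular models address.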
Abstract:
Modeling physiological processes using tracer kinetic methods requires knowledge of the time course of the tracer concentration in blood supplying the organ. For liver studies, however, inaccessibility of the portal vein makes direct measurement of the hepatic dual-input function impossible in humans. We want to develop a method to predict the portal venous time-activity curve from measurements of an arterial time-activity curve. An impulse-response function based on a continuous distribution of washout constants is developed and validated for the gut. Experiments with simultaneous blood sampling in the aorta and portal vein were performed in 13 anesthetized pigs following inhalation of intravascular [O-15]CO or injections of diffusible 3-O-[C-11]methylglucose (MG). The parameters of the impulse-response function have a physiological interpretation in terms of the distribution of washout constants and are mathematically equivalent to the mean transit time (T̄) and standard deviation of transit times. The results include estimates of mean transit times from the aorta to the portal vein in pigs: T̄ = 0.35 +/- 0.05 min for CO and 1.7 +/- 0.1 min for MG. The prediction of the portal venous time-activity curve benefits from constraining the regression fits by parameters estimated independently. This is strong evidence for the physiological relevance of the impulse-response function, which asymptotically includes, and thereby kinetically justifies, a useful and simple power law. Similarity between our parameter estimates in pigs and parameter estimates in normal humans suggests that the proposed model can be adapted for use in humans.
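The prediction step described above, convolving the measured arterial curve with an impulse-response function to obtain the portal venous curve, can be sketched as follows. For simplicity a single exponential h(t) = exp(-t/T̄)/T̄ stands in for the paper's continuous distribution of washout constants; the arterial bolus shape is hypothetical, while T̄ = 0.35 min is the CO estimate quoted in the abstract:

```python
# Sketch: portal venous curve = arterial curve convolved with an
# impulse-response function. A single exponential with mean transit time T
# is used as a simplified stand-in for the paper's distributed-washout
# kernel; the arterial input curve is hypothetical.

import math

def predict_portal(arterial, mean_transit_min, dt_min):
    """Convolve the arterial curve with h(t) = exp(-t/T) / T."""
    T = mean_transit_min
    out = []
    for i in range(len(arterial)):
        acc = 0.0
        for j in range(i + 1):
            acc += arterial[j] * (1.0 / T) * math.exp(-(i - j) * dt_min / T) * dt_min
        out.append(acc)
    return out

dt = 0.05                                      # minutes
t = [i * dt for i in range(200)]               # 10-min window
arterial = [math.exp(-((ti - 1.0) ** 2) / 0.1) for ti in t]   # bolus near 1 min
portal = predict_portal(arterial, mean_transit_min=0.35, dt_min=dt)
```

As expected for a transit-time kernel, the predicted portal peak is lower and later than the arterial peak, which is the qualitative behaviour the impulse-response approach is meant to capture.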
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
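The structure of the mixture model described above, a logistic model for which failure type occurs and a proportional hazards model for the time to failure given the type, can be sketched as follows. A parametric (Weibull) baseline is used purely so the sketch is computable; the paper's point is precisely that the component baselines can be left unspecified. All coefficients are hypothetical:

```python
# Sketch of the competing-risks mixture density for two failure types:
# f(t | x) = p1(x) * f1(t | x) + (1 - p1(x)) * f2(t | x),
# with p1 from a logistic model and each f_j from a proportional-hazards
# model. Weibull baselines and all coefficients are illustrative only.

import math

def mixing_prob(x, beta0=0.2, beta1=0.8):
    """Logistic model for the probability that failure is of type 1."""
    eta = beta0 + beta1 * x
    return 1.0 / (1.0 + math.exp(-eta))

def component_density(t, x, gamma, shape, scale):
    """Proportional-hazards density with a Weibull baseline:
    h(t|x) = h0(t) * exp(gamma * x), and f = h * S."""
    h0 = (shape / scale) * (t / scale) ** (shape - 1)   # baseline hazard
    H0 = (t / scale) ** shape                           # cumulative baseline
    hr = math.exp(gamma * x)                            # hazard ratio
    return h0 * hr * math.exp(-H0 * hr)

def mixture_density(t, x):
    p1 = mixing_prob(x)
    f1 = component_density(t, x, gamma=0.5, shape=1.5, scale=2.0)
    f2 = component_density(t, x, gamma=-0.3, shape=0.8, scale=5.0)
    return p1 * f1 + (1.0 - p1) * f2
```

The semi-parametric method in the paper replaces the Weibull baselines with unspecified functions estimated inside the ECM iterations; the mixture structure above is what the full likelihood is built from.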
Abstract:
Motivated by the application of current superalgebras in the study of disordered systems such as the random XY and Dirac models, we investigate the gl(2|2) current superalgebra at general level k. We construct its free field representation and corresponding Sugawara energy-momentum tensor in the non-standard basis. Three screen currents of the first kind are also presented. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
In this paper we examine the effects of varying several experimental parameters in the Kane quantum computer architecture - A-gate voltage, the qubit depth below the silicon oxide barrier, and the back gate depth - to explore how these variables affect the electron density of the donor electron. In particular, we calculate the resonance frequency of the donor nuclei as a function of these parameters. To do this we calculated the donor electron wave function variationally using an effective-mass Hamiltonian approach, with a basis of deformed hydrogenic orbitals. This approach was then extended to include the electric-field Hamiltonian and the silicon host geometry. We found that the phosphorus donor electron wave function was very sensitive to all the experimental variables studied in our work; thus, to optimize the operation of these devices, it is necessary to control all the parameters varied in this paper.
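The variational strategy used above, minimizing the energy expectation value over trial wave-function parameters, can be illustrated with the textbook one-parameter hydrogen problem (the paper itself uses deformed hydrogenic orbitals with an effective-mass Hamiltonian, which this sketch does not reproduce). In atomic units, a scaled-hydrogenic trial function exp(-a*r) gives <H>(a) = a^2/2 - a, minimized at a = 1 with the exact ground-state energy -0.5 hartree:

```python
# Minimal illustration of the variational method: minimize the energy
# expectation value over a single trial-wave-function parameter.
# Textbook hydrogen atom in atomic units, not the paper's Hamiltonian.

def energy_expectation(a: float) -> float:
    """<H> for the trial wave function psi(r) ~ exp(-a * r), in hartree."""
    return 0.5 * a ** 2 - a

# Coarse scan over the variational parameter
best_a = min((i / 1000 for i in range(1, 3001)), key=energy_expectation)
print(f"optimal a = {best_a:.3f}, E = {energy_expectation(best_a):.3f} hartree")
```

The paper's calculation follows the same logic with many basis parameters, plus the electric-field term and the silicon host geometry in the Hamiltonian.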
Abstract:
Published mobility measurements obtained by capillary zone electrophoresis of human growth hormone peptides are described reasonably well by the classical theoretical relationships for electrophoretic migration. This conformity between theory and experiment has rendered possible a more critical assessment of a commonly employed empirical relationship between mobility (u), net charge (z) and molecular mass (M) of peptides in capillary electrophoresis. The assumed linear dependence between u and z/M^(2/3) is shown to be an approximate description of a shallow curvilinear dependence convex to the abscissa. An improved procedure for the calculation of peptide charge (valence) is also described. (C) 2003 Elsevier B.V. All rights reserved.
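The empirical relationship under scrutiny above is the linear scaling u = A * z / M^(2/3). A minimal sketch, where the proportionality constant A is instrument- and buffer-dependent and the values used are hypothetical:

```python
# The empirical charge/mass scaling discussed above: u = A * z / M^(2/3).
# A is an instrument/buffer-dependent constant; the abstract shows this
# linearity is only an approximation to a shallow curvilinear dependence.
# The constant and peptide values below are hypothetical.

def predicted_mobility(charge: float, mass_da: float, A: float = 5.0e-4) -> float:
    """Semi-empirical electrophoretic mobility estimate, u = A * z / M^(2/3)."""
    return A * charge / mass_da ** (2.0 / 3.0)

# Under this scaling, doubling net charge at fixed mass doubles mobility:
u1 = predicted_mobility(1.0, 1000.0)
u2 = predicted_mobility(2.0, 1000.0)
assert abs(u2 / u1 - 2.0) < 1e-12
```

The paper's contribution is to show that real peptide data bend away from this straight line, and to give an improved procedure for computing z itself.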