Abstract:
Background The various cell types and their relative numbers in multicellular organisms are controlled by growth factors and related extracellular molecules, which affect genetic expression pathways. However, these substances may have either inhibitory or stimulatory effects on cell division and cell differentiation, depending on the cellular environment. It is not known how cells respond to these substances in such an ambiguous way. Many cellular effects have been investigated and reported using cultures of cancer cell lines, in an effort to define normal cellular behaviour from these abnormal cells. A model is offered to explain the harmony of cellular life in multicellular organisms in terms of interacting extracellular substances. Methods A basic model was proposed, based on asymmetric cell division, and evidence to support the hypothetical model was accumulated from the literature. In particular, relevant evidence for the Insulin-Like Growth Factor system was selected from the published data, especially from certain cell lines, to support the model. The evidence has been selected in an attempt to provide a picture of normal cellular responses, derived from the cell lines. Results The formation of a pair of coupled cells by asymmetric cell division is an integral part of the model, as is the interaction of couplet molecules derived from these cells. Each couplet cell will have a receptor to measure the amount of the couplet molecule produced by the other cell; each cell will be receptor-positive or receptor-negative for the respective receptors. The couplet molecules will form a binary complex whose level is also measured by the cell. The hypothesis is strongly supported by a selective collection of circumstantial evidence and by some direct evidence. The basic model can be extended to other cellular interactions.
Conclusions These couplet cells and interacting couplet molecules can be viewed as a mechanism that provides a controlled and balanced division-of-labour between the two progeny cells, and, in turn, their progeny. The presence or absence of a particular receptor for a couplet molecule will define a cell type and the presence or absence of many such receptors will define the cell types of the progeny within cell lineages.
Abstract:
Introduction The Skin Self-Examination Attitude Scale (SSEAS) is a brief measure that allows for the assessment of attitudes in relation to skin self-examination. This study evaluated the psychometric properties of the SSEAS using Item Response Theory (IRT) methods in a large sample of men ≥ 50 years in Queensland, Australia. Methods A sample of 831 men (420 intervention and 411 control) completed a telephone assessment at the 13-month follow-up of a randomized controlled trial of a video-based intervention to improve skin self-examination (SSE) behaviour. Descriptive statistics (mean, standard deviation, item–total correlations, and Cronbach's alpha) were compiled, and difficulty parameters were computed with Winsteps using the polytomous Rasch Rating Scale Model (RRSM). An item-person (Wright) map of the SSEAS was examined for content coverage and item targeting. Results The SSEAS has good psychometric properties, including good internal consistency (Cronbach's alpha = 0.80) and good fit with the model, and no evidence of differential item functioning (DIF) due to experimental trial grouping was detected. Conclusions The present study confirms the SSEAS as a brief, useful and reliable tool for assessing attitudes towards skin self-examination in a population of men 50 years or older in Queensland, Australia. The 8-item scale shows unidimensionality, allowing levels of SSE attitude, and the item difficulties, to be ranked on a single continuous scale. In terms of clinical practice, it is very important to assess skin cancer self-examination attitudes to identify people who may need a more extensive intervention to allow early detection of skin cancer.
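The internal-consistency statistic reported above can be reproduced in a few lines. The sketch below computes Cronbach's alpha for a toy item-response matrix; the data, sample size and variable names are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 60 respondents, 8 Likert-type items driven by one shared attitude
rng = np.random.default_rng(0)
attitude = rng.integers(1, 5, size=(60, 1))
scores = np.clip(attitude + rng.integers(-1, 2, size=(60, 8)), 1, 5).astype(float)

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

Because every toy item shares the same latent attitude plus small noise, alpha comes out high; uncorrelated items would drive it towards zero.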
Abstract:
Background Today, finding an ideal biomaterial to treat large bone defects, delayed unions and non-unions remains a challenge for orthopaedic surgeons and researchers. Several studies have been carried out on the subject of bone regeneration, each having its own advantages. The present in vivo study was designed to evaluate the effects of cellular auto-transplantation of tail vertebrae on the healing of experimental critical bone defects in a dog model. Methods Six indigenous-breed dogs of both sexes (5 males and 1 female), with an average weight of 32 ± 3.6 kg, received bilateral critical-sized ulnar segmental defects. After their health condition was determined, they were divided into 2 groups: Group I was kept as a control (n = 1), while in Group II (experimental group; n = 5) bioactive bone implants were inserted. The defects, 3-4 cm diaphyseal gaps in the ulna, were implanted with autogeneic coccygeal bone grafts. Defects were stabilized with internal plate fixation, and the control defects were not stabilized. Animals were euthanized at 16 weeks and analyzed by histopathology. Results Histological evaluation of the new bone at sixteen weeks postoperatively revealed primarily lamellar bone, with the formation of new cortices and normal-appearing marrow elements. Reformation of the cortical compartment and reconstitution of the marrow space were also observed at the graft-host interface, together with graft resorption and necrosis responses. Finally, our data were consistent with an osteoconducting function of the tail autograft. Conclusions Our results suggest that the tail vertebrae autograft may be a new source of autogenous cortical bone for supporting segmental long bone defects in dogs. Furthermore, cellular autotransplantation was found to be a successful replacement for allograft bone in 3-4 cm segmental defects of the canine mid-ulna.
Clinical application of graft expanders or bone autotransplantation should be approached carefully and requires further investigation.
Abstract:
Background Optimal infant nutrition comprises exclusive breastfeeding, with complementary foods introduced from six months of age. How parents make decisions regarding this is poorly studied. This study begins to address the dearth of research into the decision-making processes used by first-time mothers relating to the introduction of complementary foods. Methods This qualitative explorative study was conducted using interviews (13) and focus groups (3). A semi-structured interview guide based on the Theory of Planned Behaviour (TPB) was used. The TPB, a well-validated decision-making model, identifies the key determinants of a behaviour through behavioural beliefs, subjective norms, and perceived behavioural control over the behaviour. It is purported that these beliefs predict behavioural intention to perform the behaviour, and performance of the behaviour itself. A purposive convenience sample of 21 metropolitan parents, recruited through advertising at local playgroups and childcare centres and electronically through the University community email list, self-selected to participate. Data were analysed thematically within the theoretical constructs: behavioural beliefs, subjective norms and perceived behavioural control. Data relating to sources of information about the introduction of complementary foods were also collected. Results Overall, first-time mothers found that waiting until six months was challenging despite knowledge of the WHO recommendations and an initial desire to comply with this guideline. Beliefs that complementary foods would assist the infants' weight gain, sleeping patterns and enjoyment at meal times were identified. Barriers preventing parents complying with the recommendations included subjective and group norms, peer influences, infant cues indicating early readiness and food labelling inconsistencies. The most valued source of information was peers who had recently introduced complementary foods.
Conclusions First-time mothers in this study did not demonstrate a good understanding of the rationale behind the WHO recommendations, nor did they fully understand the signs of infants' readiness to commence solid foods. Factors that assisted waiting until six months were a trusting relationship with a health professional whose practice and advice were consistent with the recommendations, and/or an infant who was developmentally ready for complementary foods at six months and accepted them with ease and enthusiasm.
Abstract:
HIV risk in vulnerable groups such as itinerant male street labourers is often examined via a focus on individual determinants. This study provides a test of a modified Information-Motivation-Behavioral Skills (IMB) model to predict condom use behaviour among male street workers in urban Vietnam. In a cross-sectional survey using a social mapping technique, 450 male street labourers from 13 districts of Hanoi, Vietnam were recruited and interviewed. Collected data were first examined for completeness; structural equation modelling was then employed to test the model fit. Condoms were used inconsistently by many of these men, and usage varied in relation to a number of factors. A modified IMB model had a better fit than the original IMB model in predicting condom use behaviour. This modified model accounted for 49% of the variance, versus 10% by the original version. In the modified model, the influence of psychosocial factors was moderately high, whilst the influence of HIV prevention information, motivation and perceived behavioural skills was moderately low, explaining in part the limited level of condom use behaviour. This study provides insights into social factors that should be taken into account in public health planning to promote safer sexual behaviour among Asian male street labourers.
Abstract:
Using an OLG model with endogenous growth and public capital, we show that international capital tax competition leads to inefficiently low tax rates and, as a consequence, to lower welfare levels and growth rates. Each national government has an incentive to reduce its capital income tax rate in the expectation that this policy measure will increase the domestic private capital stock, domestic income and domestic economic growth. This expectation is justified as long as only one country applies the policy. However, if all countries follow this path, then all of them are made worse off in the long run.
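The race-to-the-bottom logic can be illustrated numerically. The sketch below is a deliberately stylized two-country tax game, not the paper's OLG model: log utility over private capital income and a tax-financed public good, with a linear capital-flight response to tax differentials. All functional forms and parameter values are assumptions for illustration only.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 1001)   # feasible capital-income tax rates

def welfare(t_own, t_other, beta=1.0):
    """Stylized national welfare: log utility over private capital income
    and a tax-financed public good; capital flees the higher-tax country."""
    k = np.maximum(0.0, 1.0 - beta * (t_own - t_other))  # domestic capital stock
    public_good = t_own * k
    private_income = (1.0 - t_own) * k
    return np.log(1.0 + public_good) + np.log(1.0 + private_income)

def best_response(t_other):
    """Tax rate maximizing own welfare, given the other country's rate."""
    return grid[np.argmax(welfare(grid, t_other))]

# Unilateral tax cuts pay off, so iterated best responses race to the bottom.
t_nash = 0.5
for _ in range(100):
    t_nash = best_response(t_nash)

# Coordinated benchmark: both countries commit to the same rate (no flight).
t_coop = grid[np.argmax(welfare(grid, grid))]

print(f"Nash tax {t_nash:.2f} vs coordinated tax {t_coop:.2f}")
print(f"welfare {welfare(t_nash, t_nash):.3f} vs {welfare(t_coop, t_coop):.3f}")
```

Under these assumptions the non-cooperative equilibrium tax lies below the coordinated one, and both countries end up with lower welfare, mirroring the abstract's conclusion.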
Abstract:
Japan is in the midst of massive law reform. Mired in ongoing recession since the early 1990s, Japan has been implementing a new regulatory blueprint to kickstart a sluggish economy through structural change. A key element of this reform process is a rethink of corporate governance and its stakeholder relations. With a patchwork of legislative initiatives in areas as diverse as corporate law, finance, labour relations, consumer protection, public administration and civil justice, this new model is beginning to take shape. But to what extent does this model represent a break from the past? Some commentators are breathlessly predicting the "Americanisation" of Japanese law. They see the triumph of Western-style capitalism - the "End of History", to borrow the words of Francis Fukuyama - with its emphasis on market-based, arm's-length transactions. Others are more cautious, advancing the view that these new reforms are merely "creative twists" on what is a uniquely (although slowly evolving) strand of Japanese capitalism. This paper takes issue with both interpretations. It argues that the new reforms merely follow Japan's long tradition of 'adopting and adapting' foreign models to suit domestic purposes. They are neither the wholesale importation of "Anglo-Saxon" regulatory principles nor a thin veneer over a 'uniquely unique' form of Confucian cultural capitalism. Rather, they represent a specific and largely political solution (conservative reformism) to a current economic problem (recession). The larger themes of this paper are 'change' and 'continuity'. 'Change' suggests evolution to something identifiable; 'continuity' suggests adhering to an existing state of affairs. Although notionally opposites, 'change' and 'continuity' have something in common - they both suggest some form of predictability and coherence in regulatory reform.
Our paper, by contrast, submits that Japanese corporate governance reform or, indeed, law reform more generally in Japan, is context-specific, multi-layered (with different dimensions not necessarily all pulling in the same direction, for example in relations with key outside suppliers), and therefore more random or 'chaotic'.
Abstract:
Background Unlike other indicators of cardiac function, such as ejection fraction and transmitral early diastolic velocity, myocardial strain is promising for capturing subtle alterations that result from early diseases of the myocardium. In order to extract the left ventricle (LV) myocardial strain and strain rate from cardiac cine-MRI, a modified hierarchical transformation model was proposed. Methods A hierarchical transformation model including the global and local LV deformations was employed to analyze the strain and strain rate of the left ventricle by cine-MRI image registration. The endocardial and epicardial contour information was introduced to enhance the registration accuracy by combining the original hierarchical algorithm with an Iterative Closest Points using Invariant Features algorithm. The hierarchical model was first validated on a normal volunteer and then applied to two clinical cases (i.e., the normal volunteer and a diabetic patient) to evaluate their respective cardiac function. Results Based on the two clinical cases, a comparison of the displacement fields of two selected landmarks in the normal volunteer showed that the proposed method performed better than the original, unmodified model. Meanwhile, the comparison of the radial strain between the volunteer and the patient demonstrated their apparent functional difference. Conclusions The present method can be used to estimate the LV myocardial strain and strain rate during a cardiac cycle and thus to quantitatively analyze LV motion function.
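Once registration yields a displacement field, strain and strain rate follow by spatial and temporal differentiation. The sketch below computes linear radial strain e = du/dx and its time derivative on a synthetic displacement field; the geometry, timing and displacement profile are invented for illustration and stand in for the paper's registration output.

```python
import numpy as np

# Radial line of material points through the LV wall (mm), endo -> epi
x = np.linspace(20.0, 30.0, 50)
t = np.linspace(0.0, 0.8, 40)                    # one cardiac cycle (s)

# Hypothetical registration output: radial displacement u(t, x).
# In the paper this field would come from the hierarchical cine-MRI registration.
thickening = 1.5 * np.sin(np.pi * t / 0.8) ** 2  # systolic profile (mm)
u = thickening[:, None] * (x - 20.0)[None, :] / 10.0

# Linear radial strain e = du/dx, and strain rate de/dt
strain = np.gradient(u, x, axis=1)
strain_rate = np.gradient(strain, t, axis=0)

print(f"peak radial strain: {strain.max():.3f}")
print(f"peak strain rate:   {strain_rate.max():.3f} 1/s")
```

With this toy field the peak radial strain is about 15 %, in the ballpark of physiological systolic thickening; a diabetic heart would typically show a reduced peak.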
Abstract:
Background: Rupture of vulnerable atheromatous plaque in the carotid and coronary arteries often leads to stroke and heart attack respectively. The role of calcium deposition and its contribution to plaque stability is controversial. This study uses both an idealized and a patient-specific model to evaluate the effect of a calcium deposit on the stress distribution within an atheromatous plaque. Methods: Using a finite-element method, structural analysis was performed on an idealized plaque model, and the location of a calcium deposit within it was varied. In addition to the idealized model, in vivo high-resolution MR imaging was performed on 3 patients with carotid atheroma, and stress distributions were generated. The individual plaques were chosen because they had calcium at varying locations with respect to the lumen and the fibrous cap. Results: The predicted maximum stress was increased by 47.5% when the calcium deposit was located in the thin fibrous cap, compared with a model without a deposit. Adding a calcium deposit either to the lipid core or remote from the lumen resulted in almost no increase in maximal stress. Conclusion: Calcification at the thin fibrous cap may result in high stress concentrations, ultimately increasing the risk of plaque rupture. Assessing the location of calcification may, in the future, aid in the risk stratification of patients with carotid stenosis.
Abstract:
The aim of this research was to develop a set of reliable, valid preparedness metrics, built around a comprehensive framework for assessing hospital preparedness. The research used a combination of qualitative and quantitative methods, which included interviews and a Delphi study as well as a survey of hospitals in the Sichuan Province of China. The resultant framework is constructed around the stages of disaster management and includes nine key elements. Factor analysis identified four contributing factors. The comparison of hospitals' preparedness using these four factors revealed that tertiary-grade, teaching and general hospitals performed better than secondary-grade, non-teaching and non-general hospitals.
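One common way such contributing factors are extracted is eigendecomposition of the item correlation matrix with the Kaiser (eigenvalue > 1) retention rule. The sketch below applies this to simulated survey data; the latent structure, sample size and all numbers are hypothetical and are not the study's data or its exact method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy survey: 200 hospitals scoring 9 preparedness elements, driven by
# 4 latent factors (a hypothetical structure echoing the study's framework).
n, n_items, n_factors = 200, 9, 4
loadings = rng.normal(0.0, 1.0, size=(n_items, n_factors))
scores = rng.normal(size=(n, n_factors)) @ loadings.T \
    + 0.5 * rng.normal(size=(n, n_items))

# Kaiser criterion: retain factors whose correlation-matrix eigenvalue > 1
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # sorted descending
retained = int((eigvals > 1.0).sum())

print("eigenvalues:", np.round(eigvals, 2))
print("factors retained:", retained)
```

The eigenvalues of a correlation matrix sum to the number of items, so "eigenvalue > 1" retains factors that explain more variance than a single item would.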
Abstract:
New antiretroviral drugs that offer large genetic barriers to resistance, such as the recently approved inhibitors of HIV-1 protease, tipranavir and darunavir, present promising weapons to avert the failure of current therapies for HIV infection. Optimal treatment strategies with the new drugs, however, are yet to be established. A key limitation is the poor understanding of the process by which HIV surmounts large genetic barriers to resistance. Extant models of HIV dynamics are predicated on the predominance of deterministic forces underlying the emergence of resistant genomes. In contrast, stochastic forces may dominate, especially when the genetic barrier is large, and delay the emergence of resistant genomes. We develop a mathematical model of HIV dynamics under the influence of an antiretroviral drug to predict the waiting time for the emergence of genomes that carry the requisite mutations to overcome the genetic barrier of the drug. We apply our model to describe the development of resistance to tipranavir in in vitro serial passage experiments. Model predictions of the times of emergence of different mutant genomes with increasing resistance to tipranavir are in quantitative agreement with experiments, indicating that our model captures the dynamics of the development of resistance to antiretroviral drugs accurately. Further, model predictions provide insights into the influence of underlying evolutionary processes such as recombination on the development of resistance, and suggest guidelines for drug design: drugs that offer large genetic barriers to resistance with resistance sites tightly localized on the viral genome and exhibiting positive epistatic interactions maximally inhibit the emergence of resistant genomes.
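The idea of a stochastic waiting time that grows with the genetic barrier can be caricatured as a sum of sequential exponential waits, one per required mutation. This is a drastic simplification of the paper's viral-dynamics model (no population structure, recombination or epistasis), but it shows how both the mean and the spread of the emergence time increase with the barrier; all rates here are arbitrary illustrative units.

```python
import numpy as np

rng = np.random.default_rng(42)

def emergence_time(barrier, rate=1.0, n_runs=20_000):
    """Time until `barrier` sequential mutations have occurred, each arriving
    after an exponential wait with the given rate -- a caricature of climbing
    a genetic barrier one substitution at a time."""
    waits = rng.exponential(1.0 / rate, size=(n_runs, barrier))
    return waits.sum(axis=1)

for n in (1, 2, 4):
    t = emergence_time(n)
    print(f"barrier {n}: mean {t.mean():.2f}, "
          f"95th percentile {np.quantile(t, 0.95):.2f}")
```

The sum of n exponential waits is gamma-distributed with mean n/rate, so a drug demanding more simultaneous mutations delays resistance roughly in proportion, with a long right tail that only a stochastic treatment captures.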
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the attainment of the final equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambiguity in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions.
Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
Background It is often believed that ensuring the ongoing completion of competency documents and life-long learning in nursing practice guarantees quality patient care. This is probably true in most cases, where it provides reassurance that the nursing team is maintaining a safe “generalised” level of practice. However, competency does not always promise quality performance. A number of studies have reported differences between what practitioners know and what they actually do, despite their being deemed competent. Aim The aim of this study was to assess whether our current competency documentation is fit for purpose and to ascertain whether performance assessment needs to be a key component in determining competence. Method Fifteen nurses within a general ICU who had been on the unit for less than 4 years agreed to participate in this project. Using participant observation and assessing performance against key indicators of the Benner Novice to Expert model, the participants were supported and assessed over the course of a ‘normal’ nursing shift. Results The results were surprising, both positively and negatively. First, the nurses felt more empowered in their clinical decision-making skills; second, the process identified individual learning needs and milestones in educational development. Some key challenges were also identified: 5 nurses over-estimated their level of competence, practice was still very much focused on task acquisition and skill, and, surprisingly, some nurses still felt dominated by the other health professionals within the unit. Conclusion We found that the capacity and capabilities of our nursing workforce need continual ongoing support, especially if we want to move our staff from capable task-doers to competent performers. Using the key novice-to-expert indicators identified the way forward for us in how we assess performance and competence in practice, particularly where promotion to higher grades is based on existing documentation.
Abstract:
Public-Private Partnerships (PPP) are established globally as an important mode of procurement, and the features of PPP, not least of which is the transfer of risk, appeal to governments, particularly in the current economic climate. Many other claimed advantages of PPP are said to outweigh its costs and to afford Value for Money (VfM) relative to traditionally financed, non-PPP projects. That said, we lack comparative whole-life empirical studies of VfM in PPP and non-PPP. Whilst we await this kind of study, the pace and trajectory of PPP seem set to continue, and so in the meantime the virtues of seeking to improve PPP appear incontrovertible. The decision about which projects, or parts of projects, to offer to the market as a PPP, and the decision concerning the allocation or sharing of risks as part of the engagement of the PPP consortium, are among the most fundamental decisions that determine whether PPP deliver VfM. The focus in this paper is on the latter decision concerning governments’ attitudes towards risk and, more specifically, on the effect of this decision on the nature of the emergent PPP consortium, or PPP model, including its economic behaviour and outcomes. This paper presents an exploration of the extent to which the seemingly incompatible alternatives of risk allocation and risk sharing, represented by the orthodox/conventional PPP model and the heterodox/alliance PPP model respectively, can be reconciled, along with suggestions for new research directions to inform this reconciliation. In so doing, an important step is taken towards charting a path by which governments can harness the relative strengths of both kinds of PPP model.
Abstract:
Space-fractional operators have been used with success in a variety of practical applications to describe transport processes in media characterised by spatial connectivity properties and high structural heterogeneity altering the classical laws of diffusion. This study provides a systematic investigation of the spatio-temporal effects of a space-fractional model in cardiac electrophysiology. We consider a simplified model of electrical pulse propagation through cardiac tissue, namely the monodomain formulation of the Beeler-Reuter cell model on insulated tissue fibres, and obtain a space-fractional modification of the model by using the spectral definition of the one-dimensional continuous fractional Laplacian. The spectral decomposition of the fractional operator allows us to develop an efficient numerical method for the space-fractional problem. Particular attention is paid to the role played by the fractional operator in determining the solution behaviour and to the identification of crucial differences between the non-fractional and the fractional cases. We find a positive linear dependence of the depolarization peak height, and a power-law decay of notch and dome peak amplitudes, for decreasing orders of the fractional operator. Furthermore, we establish a quadratic relationship in conduction velocity, and quantify the increasingly wider action potential foot and more pronounced dispersion of action potential duration as the fractional order is decreased. The physiological interpretation of these findings is discussed.
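The spectral definition used here amounts to taking a fractional power of the Laplacian's eigenvalues mode by mode. The sketch below implements this on a periodic 1-D grid with the FFT, as a minimal stand-in for the paper's method (insulated, i.e. Neumann, fibres would use a cosine basis instead); it checks that the eigenfunction sin(kx) is mapped to k^α sin(kx), with α = 2 recovering the classical Laplacian.

```python
import numpy as np

def fractional_laplacian_1d(u, L, alpha):
    """Spectral fractional Laplacian (-Δ)^(α/2) on a periodic 1-D grid:
    multiply each Fourier mode by |k|^alpha, the fractional power of the
    Laplacian eigenvalue k^2."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    u_hat = np.fft.fft(u)
    return np.real(np.fft.ifft((np.abs(k) ** alpha) * u_hat))

# Eigenfunction check: (-Δ)^(α/2) sin(kx) = k^α sin(kx)
L = 2.0 * np.pi
x = np.linspace(0.0, L, 256, endpoint=False)
u = np.sin(3.0 * x)
for alpha in (2.0, 1.5, 1.0):
    out = fractional_laplacian_1d(u, L, alpha)
    print(alpha, np.max(np.abs(out - 3.0 ** alpha * u)))  # error near machine precision
```

A time-stepping scheme for the monodomain model can then apply this operator in the diffusion term at each step, with the fractional order α controlling how non-local the spatial coupling is.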