Abstract:
To enhance the therapeutic efficacy and reduce the adverse effects of traditional Chinese medicine, practitioners often prescribe combinations of plant species and/or minerals, called formulae. Unfortunately, the working mechanisms of most of these compounds are difficult to determine and thus remain unknown. In an attempt to address the benefits of formulae using current biomedical approaches, we analyzed the components of Yinchenhao Tang, a classical formula that has been shown to be clinically effective for treating hepatic injury syndrome. The three principal components of Yinchenhao Tang are Artemisia annua L., Gardenia jasminoides Ellis, and Rheum palmatum L., whose major active ingredients are 6,7-dimethylesculetin (D), geniposide (G), and rhein (R), respectively. To determine the mechanisms underlying the efficacy of this formula, we conducted a systematic analysis of the therapeutic effects of the DGR compound using immunohistochemistry, biochemistry, metabolomics, and proteomics. Here, we report that the DGR combination exerts a more robust therapeutic effect than any one or two of the three individual compounds by hitting multiple targets in a rat model of hepatic injury. Thus, DGR synergistically causes intensified dynamic changes in metabolic biomarkers, regulates molecular networks through target proteins, exerts synergistic/additive effects, and activates both intrinsic and extrinsic pathways.
Abstract:
AIM: Zhi Zhu Wan (ZZW) is a classical Chinese medical formulation used for the treatment of functional dyspepsia attributed to Spleen-deficiency Syndrome. ZZW contains Atractylodes Rhizome and Fructus Citrus Immaturus, the latter originating from either Citrus aurantium L. (BZZW) or Citrus sinensis Osbeck (RZZW). The present study was designed to elucidate disparities in the clinical efficacy of the two ZZW varieties based on the pharmacokinetics of naringenin and hesperetin. METHODS: After oral administration of the ZZWs, blood samples were collected from healthy volunteers at designated time points. Naringenin and hesperetin were detected in plasma by RP-HPLC, and pharmacokinetic parameters were derived using model-independent methods in WinNonlin. RESULTS: After oral administration of BZZW, both naringenin and hesperetin were detected in plasma and showed similar pharmacokinetic parameters: Ka was 0.384 ± 0.165 and 0.401 ± 0.159, T(1/2(ke)) (h) was 5.491 ± 3.926 and 5.824 ± 3.067, and AUC (mg·h/L) was 34.886 ± 22.199 and 39.407 ± 19.535 for naringenin and hesperetin, respectively. In the case of RZZW, however, only hesperetin was found in plasma, and its pharmacokinetic properties differed from those in BZZW: T(max) for hesperetin in RZZW was about 8.515 h, its C(max) was much larger than that of BZZW, and it was eliminated more slowly, with a much larger AUC value. CONCLUSION: The distinct therapeutic orientations of the ZZW formulations prepared with different Fructus Citrus Immaturus can be elucidated from the pharmacokinetic parameters of their constituents after oral administration.
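In outline, the model-independent (non-compartmental) parameters reported above can be computed directly from a concentration-time profile. The sketch below uses the trapezoidal rule for AUC and a log-linear fit for the terminal phase; the sampling times and concentrations are purely hypothetical, not the study's measurements.

```python
import numpy as np

def nca_parameters(t, c, n_terminal=3):
    """Non-compartmental estimates from a plasma concentration-time profile.

    t : sampling times (h); c : concentrations (mg/L).
    """
    # AUC(0-tlast) by the linear trapezoidal rule
    auc = np.trapz(c, t)
    # Terminal elimination rate constant ke from a log-linear fit to the
    # last n_terminal points (assumes mono-exponential terminal decline)
    slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
    ke = -slope
    t_half = np.log(2) / ke
    # Extrapolate to infinity using the last observed concentration
    auc_inf = auc + c[-1] / ke
    return {
        "Cmax": float(c.max()),
        "Tmax": float(t[np.argmax(c)]),
        "ke": ke,
        "t1/2": t_half,
        "AUC_0-tlast": auc,
        "AUC_0-inf": auc_inf,
    }

# Hypothetical profile for illustration only (not study data)
times = np.array([0.5, 1, 2, 4, 8, 12, 24])            # h
conc  = np.array([0.8, 1.9, 2.6, 2.1, 1.2, 0.7, 0.2])  # mg/L
print(nca_parameters(times, conc))
```

A larger AUC with a later Tmax, as reported for hesperetin in RZZW, would fall straight out of such a computation applied to the two formulations' profiles.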
Abstract:
Focal segmental glomerulosclerosis (FSGS) is the consequence of a disease process that attacks the kidney's filtering system, causing serious scarring. More than half of FSGS patients develop chronic kidney failure within 10 years, ultimately requiring dialysis or renal transplantation. Several genes are currently known to cause the hereditary forms of FSGS (ACTN4, TRPC6, CD2AP, INF2, MYO1E and NPHS2). This study involves a large, unique, multigenerational Australian pedigree in which FSGS co-segregates with progressive heart block with apparent X-linked recessive inheritance. Through a classical combined approach of linkage and haplotype analysis, we identified an implicated interval of 21.19 cM on the X chromosome. We then used a whole exome sequencing approach to identify two mutated genes, NXF5 and ALG13, which are located within this linkage interval. The two mutations, NXF5-R113W and ALG13-T141L, segregated perfectly with the disease phenotype in the pedigree and were not found in a large healthy control cohort. Bioinformatic analysis predicted the R113W mutation in the NXF5 gene to be deleterious, and cellular studies support a role for the mutation in the stability and localization of the protein, suggesting that it is causative in these co-morbid disorders. Further studies are now required to determine the functional consequences of these novel mutations for the development of FSGS and heart block in this pedigree, and to determine whether these mutations have implications for more common forms of these diseases in the general population.
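The prioritisation logic described above (variants inside the linkage interval, segregating with the phenotype, and absent from controls) can be sketched as follows. The Variant fields, coordinates, and thresholds are hypothetical simplifications of an exome-filtering pipeline, not the study's actual software.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    chrom: str
    pos: int
    gene: str
    affected_carriers: int   # affected family members carrying the variant
    affected_total: int      # affected family members genotyped
    control_carriers: int    # carriers seen in the healthy control cohort

def prioritise(variants, interval_start, interval_end):
    """Keep X-chromosome variants inside the linkage interval that
    segregate perfectly with the phenotype and are absent in controls."""
    hits = []
    for v in variants:
        in_interval = v.chrom == "X" and interval_start <= v.pos <= interval_end
        segregates = v.affected_carriers == v.affected_total
        absent_in_controls = v.control_carriers == 0
        if in_interval and segregates and absent_in_controls:
            hits.append(v)
    return hits

# Usage with placeholder coordinates (not the study's real positions)
candidates = prioritise(
    [Variant("X", 101_000_000, "NXF5",
             affected_carriers=4, affected_total=4, control_carriers=0)],
    interval_start=100_000_000, interval_end=120_000_000)
print([v.gene for v in candidates])  # ['NXF5']
```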
Abstract:
Since the advent of cytogenetic analysis, knowledge about fundamental aspects of cancer biology has increased, allowing the processes of cancer development and progression to be more fully understood and appreciated. Classical cytogenetic analysis of solid tumors had been considered difficult, but new advances in culturing techniques and the addition of new cytogenetic technologies have enabled a more comprehensive analysis of chromosomal aberrations associated with solid tumors. Our purpose in this review is to discuss the cytogenetic findings on a number of nonmelanoma skin cancers, including squamous and basal cell carcinomas, keratoacanthoma, squamous cell carcinoma in situ (Bowen's disease), and solar keratosis. Through classical cytogenetic techniques, as well as fluorescence-based techniques such as fluorescence in situ hybridization and comparative genomic hybridization, numerous chromosomal alterations have been identified. These aberrations may aid in further defining the stages and classifications of nonmelanoma skin cancer and may also implicate chromosomal regions involved in progression and metastatic potential. This information, along with the development of newer technologies (including laser capture microdissection and comparative genomic hybridization arrays) that allow for more refined analysis, will continue to increase our knowledge about the role of chromosomal events at all stages of cancer development and progression and, more specifically, about how they are associated with nonmelanoma skin cancer.
Abstract:
The addition of surface tension to the classical Stefan problem for melting a sphere causes the solution to blow up at a finite time before complete melting takes place. This singular behaviour is characterised by the speed of the solid-melt interface and the flux of heat at the interface both becoming unbounded in the blow-up limit. In this paper, we use numerical simulation for a particular energy-conserving one-phase version of the problem to show that kinetic undercooling regularises this blow-up, so that the model with both surface tension and kinetic undercooling has solutions that are regular right up to complete melting. By examining the regime in which the dimensionless kinetic undercooling parameter is small, our results demonstrate how physically realistic solutions to this Stefan problem are consistent with observations of abrupt melting of nanoscaled particles.
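For orientation, a schematic dimensionless form of a one-phase problem with both regularising effects is given below; the precise scalings, domains and sign conventions used in the paper may differ, so this is for illustration only.

```latex
% Schematic one-phase Stefan problem with surface tension (sigma)
% and kinetic undercooling (epsilon); signs/scalings illustrative.
\begin{align*}
  &\frac{\partial u}{\partial t}
    = \frac{1}{r^{2}}\frac{\partial}{\partial r}\!
      \left(r^{2}\frac{\partial u}{\partial r}\right),
    && s(t) < r < 1, \\
  &u = -\frac{\sigma}{s(t)} - \epsilon\,\frac{\mathrm{d}s}{\mathrm{d}t},
    && r = s(t) \quad \text{(Gibbs--Thomson + kinetic undercooling)}, \\
  &\frac{\mathrm{d}s}{\mathrm{d}t} = \frac{\partial u}{\partial r},
    && r = s(t) \quad \text{(Stefan condition)}, \\
  &\frac{\partial u}{\partial r} = 0,
    && r = 1 \quad \text{(energy-conserving outer boundary)}.
\end{align*}
```

In this notation, with $\epsilon = 0$ the interface speed $\dot{s}$ and the interfacial flux $u_r$ become unbounded before the solid disappears, whereas $\epsilon > 0$ keeps both bounded up to complete melting.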
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres such as the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS.

This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems; evolutionary algorithms are therefore adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response-time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of only a homogeneous type of component. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, owing to the dynamic nature of the Cloud environment, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used while maintaining SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the group structure of a composite SaaS: the first GGA uses a repair-based method, while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task, and the constraints and interdependencies between components make solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the constraints and achieve the objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud and has developed several types of evolutionary algorithms to address them, contributing to the evolutionary computation field. The algorithms provide solutions for the efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
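To make the evolutionary approach concrete, here is a minimal sketch of a genetic algorithm for component-to-server placement. The single-resource model, the demand and capacity figures, and the penalty weighting are illustrative assumptions, not the thesis's actual formulation (which also handles dependencies and response-time constraints).

```python
import random

# Hypothetical problem data: resource demand per SaaS component and
# capacity per Cloud server (the real model is far richer than this).
DEMAND   = [4, 2, 6, 3, 5]      # resource units per component
CAPACITY = [10, 10, 8]          # resource units per server

def fitness(placement):
    """Lower is better: servers used plus a heavy penalty for
    every unit of capacity overrun (constraint violation)."""
    load = [0] * len(CAPACITY)
    for comp, srv in enumerate(placement):
        load[srv] += DEMAND[comp]
    servers_used = sum(1 for l in load if l > 0)
    overrun = sum(max(0, l - c) for l, c in zip(load, CAPACITY))
    return servers_used + 100 * overrun

def evolve(pop_size=30, generations=200, mutation_rate=0.1):
    # Each chromosome maps component index -> server index
    pop = [[random.randrange(len(CAPACITY)) for _ in DEMAND]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMAND))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:     # point mutation
                child[random.randrange(len(DEMAND))] = \
                    random.randrange(len(CAPACITY))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())   # e.g. a [0, 0, 1, 2, 1]-style component-to-server map
```

The penalty-based constraint handling shown here corresponds to one of the two GGA strategies mentioned above; a repair-based variant would instead move components off overloaded servers after crossover.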
Abstract:
This report documents the key findings of a year-long collaborative research project focusing on the London Symphony Orchestra's (LSO) development, implementation and testing of a mobile ticketing and information system. The ticketing system was developed in association with the LSO's technical partner, Kodime Limited, and in collaboration with the Aurora Orchestra.
Abstract:
Indirect inference (II) is a methodology for estimating the parameters of an intractable (generative) model on the basis of an alternative parametric (auxiliary) model that is both analytically and computationally easier to deal with. Such an approach has been well explored in the classical literature but has received substantially less attention in the Bayesian paradigm. The purpose of this paper is to compare and contrast a collection of what we call parametric Bayesian indirect inference (pBII) methods. One class of pBII methods uses approximate Bayesian computation (referred to here as ABC II), where the summary statistic is formed on the basis of the auxiliary model, using ideas from II. We show that another approach proposed in the literature, referred to here as parametric Bayesian indirect likelihood (pBIL), is fundamentally different from ABC II. We devise new theoretical results for pBIL to give extra insight into its behaviour and its differences from ABC II. Furthermore, we examine in more detail the assumptions required to use each pBII method. The results, insights and comparisons developed in this paper are illustrated on simple examples and on two substantive applications: the first involves performing inference for complex quantile distributions based on simulated data, while the second involves estimating the parameters of a trivariate stochastic process describing the evolution of macroparasites within a host, based on real data. We create a novel framework called Bayesian indirect likelihood (BIL), which encompasses pBII as well as general ABC methods, so that the connections between the methods can be established.
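As a concrete illustration of the ABC II idea, the following sketch uses the fitted parameters of a simple auxiliary model as the summary statistic inside rejection ABC. Both models here are toy stand-ins chosen for this example, not the paper's quantile-distribution or macroparasite applications.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy 'intractable' generative model: a skewed transform of normals.
    Stands in for a model that is easy to simulate but hard to evaluate."""
    z = rng.standard_normal(n)
    return theta[0] + theta[1] * z * (1 + 0.5 * np.tanh(z))

def auxiliary_fit(x):
    """Auxiliary model: a normal distribution. Its MLEs (mean, sd)
    serve as the ABC summary statistic, following the II idea."""
    return np.array([x.mean(), x.std()])

def abc_ii(observed, n_draws=5000, quantile=0.01):
    s_obs = auxiliary_fit(observed)
    draws, dists = [], []
    for _ in range(n_draws):
        theta = rng.uniform([-2.0, 0.1], [2.0, 3.0])  # prior: (location, scale)
        s_sim = auxiliary_fit(simulate(theta))
        draws.append(theta)
        dists.append(np.linalg.norm(s_sim - s_obs))
    draws, dists = np.array(draws), np.array(dists)
    keep = dists <= np.quantile(dists, quantile)      # rejection step
    return draws[keep]                                # approx. posterior sample

observed = simulate(np.array([0.5, 1.0]))             # synthetic "data"
posterior = abc_ii(observed)
print(posterior.mean(axis=0))
```

By contrast, a pBIL scheme would use the auxiliary model's likelihood itself, evaluated at parameters fitted to simulated data, rather than a distance between summary statistics.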
Abstract:
Every year a number of pedestrians are struck by trains, resulting in death and serious injury. While much research has been conducted on train-vehicle collisions, very little is currently known about the aetiology of train-pedestrian collisions. To date, scant research has investigated the demographics of rule breakers, the frequency of deliberate violations versus errors, and the influence of the classical deterrence approach on subsequent behaviour. Aim: This study aimed to identify pedestrians' self-reported reasons for engaging in violations at crossings, the frequency and nature of rule breaking, and whether the threat of sanctions influences such events. Method: A questionnaire was administered to 511 participants of all ages. Results: Analysis revealed that pedestrians (particularly younger groups) were more likely to commit deliberate violations than to make crossing errors (i.e., mistakes). The most frequent reasons given for deliberate violations were that participants were running late and did not want to miss their train, or that participants believed the gate was taking too long to open and so might be malfunctioning. With regard to classical deterrence, an examination of the perceived threat of being apprehended and fined for a crossing violation revealed that participants reported the highest mean scores for swiftness of punishment, suggesting they were generally aware that they would receive an "on the spot" fine. However, the overall mean scores for certainty and severity of sanctions (for violating the rules) indicate that participants did not perceive the certainty and severity of sanctions as very high. This paper further discusses the research findings with regard to the development of interventions designed to improve pedestrian crossing safety.
Abstract:
This article examines manual textual categorisation by human coders, with the hypothesis that the law of total probability may be violated for difficult categories. An empirical evaluation was conducted to compare a one-step categorisation task with a two-step categorisation task using crowdsourcing. It was found that the law of total probability was violated. Both quantum and classical probabilistic interpretations of this violation are presented. Further studies are required to resolve whether quantum models are more appropriate for this task.
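For reference, with $A$ standing for the outcome of the first categorisation step and $B$ the final category judgement (the event definitions here are illustrative; the paper's design may differ), the law being tested requires:

```latex
\[
  P(B) \;=\; P(B \mid A)\,P(A) \;+\; P(B \mid \neg A)\,P(\neg A).
\]
```

A violation means the probability elicited directly in the one-step task differs systematically from the right-hand side reconstructed from the two-step task, which is the kind of discrepancy quantum probability models accommodate through interference terms.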
Abstract:
Although the endocannabinoid system (ECS) has been implicated in brain development and various psychiatric disorders, the precise mechanisms by which the ECS affects mood and anxiety disorders remain unclear. Here, we investigated the developmental and disease-related expression patterns of the cannabinoid receptor 1 (CB1) and cannabinoid receptor 2 (CB2) genes in the human dorsolateral prefrontal cortex (PFC). Using mice selectively bred for high and low fear, we further investigated the potential association between fear memory and cannabinoid receptor expression in the brain. CB1, but not CB2, mRNA levels in the PFC gradually decrease during postnatal development, across ages ranging from birth to 50 years (r² > 0.6, adj. p < 0.05). CB1 levels in the PFC of major depression patients were higher than those of age-matched controls (adj. p < 0.05). In mice, CB1, but not CB2, levels in the PFC were positively correlated with freezing behavior in classical fear conditioning (p < 0.05). These results suggest that CB1 in the PFC may play a significant role in regulating mood and anxiety symptoms. Our study demonstrates the advantage of utilizing data from postmortem brain tissue and a mouse model of fear to enhance our understanding of the role of the cannabinoid receptors in mood and anxiety disorders.
Abstract:
Pavlovian fear conditioning, also known as classical fear conditioning, is an important model in the study of the neurobiology of normal and pathological fear. Progress in the neurobiology of Pavlovian fear also enhances our understanding of disorders such as posttraumatic stress disorder (PTSD) and assists with developing effective treatment strategies. Here we describe how Pavlovian fear conditioning is a key tool for understanding both the neurobiology of fear and the mechanisms underlying variations in fear memory strength observed across different phenotypes. First, we discuss how Pavlovian fear models aspects of PTSD. Second, we describe the neural circuits of Pavlovian fear and the molecular mechanisms within these circuits that regulate fear memory. Finally, we show how fear memory strength is heritable and describe genes that are specifically linked both to changes in Pavlovian fear behavior and to its underlying neural circuitry. These emerging data begin to define the essential genes, cells and circuits that contribute to normal and pathological fear.
Abstract:
By presenting an overview of institutional theory, specifically the concepts of organizational fields, institutional pressures, and legitimacy, in addition to classical rhetoric, we have sought to highlight that there are links within the literature between the concepts of institutional theory and legitimacy, and also between legitimacy and classical rhetoric. To date, however, the three concepts of institutional pressures, legitimacy, and rhetoric have not been explicitly linked. Building on the current literature, and using the notion of legitimacy as the axis connecting institutional pressures with rhetoric, we argue that certain rhetorical devices may in fact be used to build and construct legitimacy in response to the different institutional pressures an organization may face within a field. We believe that this preliminary framework may be useful to the field of CSR communication, where it may assist in constructing legitimate CSR communication in response to the various pressures an organization faces in relation to CSR.
Abstract:
In recent years, there has been a significant increase in the popularity of ontological analysis of conceptual modelling techniques. To date, related research has explored the ontological deficiencies of classical techniques such as ER or UML modelling, of business process modelling techniques such as ARIS, and even of Web Services standards such as BPEL4WS, BPML, ebXML, BPSS and WSCI. While the ontologies that form the basis of these analyses are reasonably mature, the actual process of conducting an ontological analysis still lacks rigour. The current procedure is prone to individual interpretation, which is one reason for criticism of the entire ontological analysis approach. This paper presents a procedural model for ontological analysis based on the use of meta models, multiple coders and metrics. The model is supported by examples from various ontological analyses.
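For the multiple-coder element of such a procedural model, one standard metric is chance-corrected inter-coder agreement. A minimal sketch of Cohen's kappa for two coders follows; the paper's actual metrics are not specified here, so the example categories and data are illustrative only.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders who
    each assign one category label per item (here, per model construct)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders mapping modelling constructs onto hypothetical ontology categories
a = ["thing", "property", "thing", "state", "thing"]
b = ["thing", "property", "state", "state", "thing"]
print(round(cohens_kappa(a, b), 3))   # 0.688
```

Low kappa values would flag exactly the kind of individual-interpretation problem the procedural model is designed to control.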
Abstract:
Determining what consequences are likely to serve as effective punishment for any given behaviour is a complex task. This chapter focuses specifically on illegal road user behaviours and the mechanisms used to punish and deter them. Traffic law enforcement has traditionally used the threat and/or receipt of legal sanctions and penalties to deter illegal and risky behaviours. This process represents the use of positive punishment, one of the key behaviour modification mechanisms. Behaviour modification principles describe four types of reinforcers: positive and negative punishments and positive and negative reinforcements. The terms 'positive' and 'negative' are not used in an evaluative sense here; rather, they represent the presence (positive) or absence (negative) of stimuli to promote behaviour change. Punishments aim to inhibit behaviour and reinforcements aim to encourage it.

This chapter describes a variety of punishments and reinforcements that have been, and could be, used to modify illegal road user behaviours. In doing so, it draws on several theoretical perspectives that have defined behavioural reinforcement and punishment in different ways. Historically, the main theoretical approach used to deter risky road use has been classical deterrence theory, which focuses on the perceived certainty, severity and swiftness of penalties. Stafford and Warr (1993) extended the traditional deterrence principles to include the positive reinforcement concept of punishment avoidance. Evidence of the association between punishment avoidance experiences and behaviour has been established for a number of risky road user behaviours, including drink driving, unlicensed driving, and speeding. We chose a novel way of assessing punishment avoidance by specifying two sub-constructs: detection evasion and punishment evasion. Another theorist, Akers (1977), described the idea of competing reinforcers, termed differential reinforcement, within social learning theory. Differential reinforcement describes a balance of reinforcements and punishments as influential on behaviour. This chapter describes a comprehensive way of conceptualising a broad range of reinforcement and punishment concepts, consistent with Akers' differential reinforcement concept, within a behaviour modification framework that incorporates deterrence principles.

The efficacy of these three theoretical perspectives in explaining self-reported speeding among a sample of 833 Australian car drivers was examined. Results demonstrated that a broad range of variables predicted speeding, including personal experiences of evading detection and punishment for speeding, intrinsic sensations, practical benefits expected from speeding, and an absence of punishing effects from being caught. Not surprisingly, being younger was also significantly related to more frequent speeding, although in a regression analysis gender did not retain a significant influence once all punishment and reinforcement variables were entered. The implications for speed management, as well as for road user behaviour modification more generally, are discussed in light of these findings. Overall, the findings reported in this chapter suggest that a more comprehensive approach is required to manage road user behaviour, one that does not rely solely on traditional legal penalties and sanctions.