872 results for Understanding of derivative
Abstract:
Transforming today's energy systems in industrialized countries requires a substantial reduction of total energy consumption at the individual level. Selected instruments have been found effective in changing people's behavior in single domains. However, the weak success so far in reducing overall energy consumption indicates that our understanding of the factors determining individual energy consumption, and of how it changes, is far from conclusive. Among other limitations, the scientific state of the art is dominated by analyses of single consumption domains and tends to neglect embodied energy. It also displays strong disciplinary splits, and the literature often fails to distinguish between explaining behavior and explaining change of behavior. Moreover, there are knowledge gaps regarding the legitimacy and effectiveness of governing individual consumption behavior and its change. Against this backdrop, the aim of this paper is to establish an integrated interdisciplinary framework that offers a systematic basis for linking the different aspects of research on energy-related consumption behavior, thus paving the way for a better evidence base to inform societal action. The framework connects the three relevant analytical aspects of the topic: (1) it systematically and conceptually frames the objects, i.e. energy consumption behavior and its change (explananda); (2) it structures the factors that potentially explain energy consumption behavior and its change (explanantia); and (3) it provides a differentiated understanding of change-inducing interventions in terms of governance. Built on existing state-of-the-art approaches from different disciplines within the social sciences, the proposed framework is intended to guide interdisciplinary empirical research.
Abstract:
The policy development process leading to the Labour government's white paper of December 1997 (The new NHS: Modern, Dependable) is the focus of this project, and the public policy development literature is used to aid understanding of this process. Policy makers who had been involved in developing the white paper were interviewed in order to establish who was involved in the process and how they produced the document. A theoretical framework is used that sorts policy development models into those that focus on knowledge and experience and those that focus on politics and influence. This framework is central to understanding the evidence gathered from the individuals and associations that participated in the policy development process. The main research question is to what extent either of these sets of policy development models helps in understanding and explicating the process by which the Labour government's policies were developed. The interview evidence, along with published evidence, shows that a clear pattern of policy change emerged from this process, and the Knowledge-Experience and Politics-Influence policy-making models both assist in understanding it. The early stages of the policy development process were hierarchical and iterative, yet also very collaborative among those participating, with knowledge and experience being quite prevalent. At every point in the process, however, informal networks of political influence were used and were noted as quite prevalent by all of the individuals interviewed. The later stages of the process then became increasingly noninclusive, with decisions made by a select group of internal and external policy makers. These policy-making models became an important tool with which to understand the policy development process, and the Knowledge-Experience and Politics-Influence dichotomy could therefore be useful in analyzing other types of policy development.
Abstract:
Manuscript 1: "Conceptual Analysis: Externalizing Nursing Knowledge" We use concept analysis to establish that the report tools nurses prepare, carry, reference, amend, and use as temporary data repositories are examples of cognitive artifacts. This tool, integrally woven throughout the work and practice of nurses, is important to cognition and clinical decision-making. Establishing the tool as a cognitive artifact will support new dimensions of study. Such studies can characterize how this report tool supports cognition, the nurse's internal representation of knowledge and skills, and the nurse's external representation of knowledge. Manuscript 2: "Research Methods: Exploring Cognitive Work" The purpose of this paper is to describe a complex, cross-sectional, multi-method approach to the study of personal cognitive artifacts in the clinical environment. The complex data arrays present in these cognitive artifacts warrant the use of multiple methods of data collection. A less robust research design may yield an incomplete understanding of the meaning, value, content, and relationships between personal cognitive artifacts in the clinical environment and the cognitive work of the user. Manuscript 3: "Making the Cognitive Work of Registered Nurses Visible" Purpose: Knowledge representations and structures are created and used by registered nurses to guide patient care. Understanding is limited regarding how these knowledge representations, or cognitive artifacts, contribute to working memory, prioritization, organization, cognition, and decision-making. The purpose of this study was to identify and characterize the role of a specific cognitive artifact's knowledge representation and structure as it contributed to the cognitive work of the registered nurse. Methods: Data were collected, using qualitative research methods, by shadowing and interviewing 25 registered nurses. Data analysis employed triangulation and iterative analytic processes. Results: Nurse cognitive artifacts support recall, data evaluation, decision-making, organization, and prioritization. These cognitive artifacts displayed spatial, longitudinal, chronologic, visual, and personal cues that support the cognitive work of nurses. Conclusions: Nurse cognitive artifacts are an important adjunct to the cognitive work of nurses and directly support patient care. Nurses need to be able to configure their cognitive artifacts in ways that are meaningful and support their internal knowledge representations.
Abstract:
Hunting is assuming a growing role in the current European forestry and agroforestry landscape. However, consistent statistical sources that provide quantitative information for policy-making, planning and management of game resources are often lacking. In addition, statistical information is in many instances used without sufficient evaluation or criticism. Recently, the European Commission has declared the importance of high-quality hunting statistics and the need to set up a common scheme in Europe for their collection, interpretation and proper use. This work contributes to the current debate on hunting statistics in Europe by exploring the last 35 years of Spanish hunting statistics. The analysis focuses on the three major pillars underpinning hunting activity: hunters, hunting grounds and game animals. First, the study aims to provide a better understanding of official hunting statistics for researchers, game managers and other potential users. Second, it highlights the major strengths and weaknesses of the statistical information collected. The results indicate that official hunting statistics can be incomplete, dispersed and not always homogeneous over long periods of time, an issue one should be aware of when using official hunting data for scientific or technical work. To improve these statistical deficiencies, our main suggestion is the adoption of a common data collection protocol agreed upon by the different regions. This protocol should be in accordance with future European hunting statistics and based on robust and well-informed data collection methods. It should also expand the range of biological, ecological and economic concepts currently covered to take account of the profound transformations the hunting sector has experienced in recent years. As far as possible, any future changes in the selection of hunting statistics should allow new variables to be compared with previous ones.
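To make the suggested common protocol concrete, a region-agnostic record built around the three pillars named above (hunters, hunting grounds, game animals) might look like the sketch below. Every field name and example value is a hypothetical illustration, not an existing standard.

```python
# Hypothetical record for a common hunting-statistics protocol, organized
# around the three pillars: hunters, hunting grounds and game animals.
from dataclasses import dataclass

@dataclass
class SeasonRecord:
    region: str
    season: str            # e.g. "2019/2020"
    licensed_hunters: int  # pillar 1: hunters
    ground_area_ha: float  # pillar 2: hunting grounds (hectares)
    species: str           # pillar 3: game animals
    harvested: int

    def harvest_per_100ha(self) -> float:
        """Harvest density, a unit comparable across regions of unequal size."""
        return 100.0 * self.harvested / self.ground_area_ha

r = SeasonRecord("Region A", "2019/2020", 12000, 85000.0, "red deer", 430)
print(f"{r.harvest_per_100ha():.2f} animals per 100 ha")
```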
Abstract:
Background Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Results Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. Conclusions The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions.
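One half of the analysis, recovering the work of adhesion of a probe liquid from a contact angle, rests on the Young-Dupré relation W_a = γ_L(1 + cos θ). A minimal sketch with illustrative liquids and angles, not the paper's measurements:

```python
# Young-Dupre: work of adhesion of a liquid on a surface from one contact
# angle and the liquid's surface tension (mN/m, numerically equal to mJ/m^2).
import math

def work_of_adhesion(gamma_liquid_mN_m: float, contact_angle_deg: float) -> float:
    """Work of adhesion (mJ/m^2) via W_a = gamma_L * (1 + cos(theta))."""
    theta = math.radians(contact_angle_deg)
    return gamma_liquid_mN_m * (1.0 + math.cos(theta))

# Water (surface tension ~72.8 mN/m) on a very hydrophobic leaf surface
# (contact angle ~150 deg) adheres far less than on a wettable one (~60 deg).
for angle in (150.0, 60.0):
    w = work_of_adhesion(72.8, angle)
    print(f"theta = {angle:5.1f} deg -> W_a = {w:6.1f} mJ/m^2")
```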
Abstract:
Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective: To improve our understanding of SE experiment replication, in this work we propose a classification intended to provide experimenters with guidance about what types of replication they can perform. Method: The research approach is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of the typical elements that compose an experimental configuration, (3) identification of different replication purposes, and (4) development of a classification of experiment replications for SE. Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it accounts for replications ranging from less similar to more similar with respect to the baseline experiment. Conclusion: The aim of replication is to verify results, but different types of replication serve specific verification purposes and afford different degrees of change. Each replication type helps to discover particular experimental conditions that might influence the results. The proposed classification can be used to identify the changes made in a replication and, based on these, to understand the level of verification achieved.
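The core idea, describing a replication by which elements of the baseline configuration it alters and reading off from that set what it can verify, can be sketched as a small data structure. The element names and the purpose mapping below are hypothetical illustrations, not the paper's exact taxonomy:

```python
# Sketch: a replication characterized by the configuration elements it changes.
from dataclasses import dataclass

CONFIG_ELEMENTS = {"protocol", "operationalizations", "populations",
                   "experimenters", "site"}

@dataclass
class Replication:
    name: str
    changed: frozenset = frozenset()  # elements altered relative to the baseline

    def purpose(self) -> str:
        unknown = self.changed - CONFIG_ELEMENTS
        if unknown:
            raise ValueError(f"not part of the experimental configuration: {unknown}")
        if not self.changed:
            return "verify that the baseline results are reproducible"
        if self.changed <= {"experimenters", "site"}:
            return "verify independence from the original team and site"
        return "explore how far the results generalize"

print(Replication("external replication", frozenset({"experimenters", "site"})).purpose())
```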
Abstract:
The understanding of the molecular mechanisms leading to peptide action entails the identification of a core active site. The major 28-aa neuropeptide, vasoactive intestinal peptide (VIP), provides neuroprotection. A lipophilic derivative with a stearyl moiety at the N-terminal and norleucine residue replacing the Met-17 was 100-fold more potent than VIP in promoting neuronal survival, acting at femtomolar–picomolar concentration. To identify the active site in VIP, over 50 related fragments containing an N-terminal stearic acid attachment and an amidated C terminus were designed, synthesized, and tested for neuroprotective properties. Stearyl-Lys-Lys-Tyr-Leu-NH2 (derived from the C terminus of VIP and the related peptide, pituitary adenylate cyclase activating peptide) captured the neurotrophic effects offered by the entire 28-aa parent lipophilic derivative and protected against β-amyloid toxicity in vitro. Furthermore, the 4-aa lipophilic peptide recognized VIP-binding sites and enhanced choline acetyltransferase activity as well as cognitive functions in Alzheimer’s disease-related in vivo models. Biodistribution studies following intranasal administration of radiolabeled peptide demonstrated intact peptide in the brain 30 min after administration. Thus, lipophilic peptide fragments offer bioavailability and stability, providing lead compounds for drug design against neurodegenerative diseases.
Abstract:
Early project termination is one of the most difficult decisions Research and Development managers must make. While there is the risk of terminating good projects, there is also the opposite risk of not terminating bad projects and overspending resources on unproductive research. The criteria used to identify these projects are a common subject of research in business administration. In addition, companies might draw important lessons from their interrupted projects that could improve the technical and commercial success of their overall portfolio. Finally, the set and weighting of criteria, as well as the procedures companies use to learn from cancelled projects, may vary depending on the project type. This research aims to contribute to the understanding of policies applied to projects that were once considered attractive but for some reason no longer are. It addressed the question: how do companies deal with projects that become unattractive? More specifically, it tried to answer the following questions: (1) Are projects killed, or do they die naturally from lack of resources? (2) What criteria are used to terminate projects during development? (3) How do companies learn from terminated projects to improve overall portfolio performance? (4) Are the criteria and learning procedures different for different types of projects? To answer these questions, we performed a multiple case study with four companies that are references in business administration and innovation: (1) Oxiteno, the base case; (2) Natura, the literal replication; and (3) Mahle and (4) AES, the theoretical replications. The case studies used a semi-structured interview protocol, and the interviews were recorded and analyzed for comparison. We found that the criteria companies use to select projects for termination are very similar to those anticipated by the literature, except for a criterion related to compliance. We have evidence that the set of criteria is not altered for different project types, although the weight given to each criterion does vary. We also found that learning from cancelled projects is still very incipient, with few structured formal procedures described for capturing lessons from early-terminated projects. These procedures are more common for projects labeled innovative, risky, big and costly, whereas smaller, cheaper derivative projects are not subject to a complete investigation of the learning they brought to the company. For the latter, the most common learning route is informal: the project team learns and passes the knowledge on through interpersonal information exchange. We explain this as a matter of cost versus benefit of spending time deeply investigating projects with little potential to bring new knowledge to the project team and the organization.
Abstract:
We have studied experimentally jump-to-contact (JC) and jump-out-of-contact (JOC) phenomena in gold electrodes. JC can be observed at first contact when two metals approach each other, while JOC occurs at the last contact before breaking. When the indentation depth between the electrodes is limited to a certain value of conductance, a highly reproducible evolution of the conductance can be obtained over hundreds of formation and rupture cycles. Molecular dynamics simulations of this process show how the two metallic electrodes are shaped into tips of a well-defined crystallographic structure through a mechanical annealing mechanism. We report a detailed analysis of the atomic configurations found before contact and rupture of these stable structures and compute their conductance using first-principles quantum transport calculations. These results help explain the conductance values obtained experimentally in the JC and JOC phenomena and improve our understanding of atomic-sized contacts and the evolution of their structural characteristics.
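For readers outside the field, the link between the computed atomic structures and the measured conductance runs through the Landauer picture: each open quantum channel contributes at most one conductance quantum G0 = 2e²/h. A minimal sketch with placeholder channel transmissions, not the paper's first-principles values:

```python
# Landauer conductance G = G0 * sum(tau_i) from per-channel transmissions.
E = 1.602176634e-19   # elementary charge (C), exact SI value
H = 6.62607015e-34    # Planck constant (J s), exact SI value
G0 = 2 * E**2 / H     # conductance quantum, ~77.48 microsiemens

def landauer_conductance(transmissions):
    """Return total conductance in units of G0 and in siemens."""
    total = sum(transmissions)
    return total, total * G0

# A gold single-atom contact typically carries one almost fully open channel,
# which is why Au contacts show a conductance plateau near 1 G0.
in_g0, in_siemens = landauer_conductance([0.98, 0.02])
print(f"G = {in_g0:.2f} G0 = {in_siemens * 1e6:.1f} uS")
```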
Abstract:
Background: Chitosan oligosaccharide (COS), a deacetylated derivative of chitin, is an abundant and renewable natural polymer. COS has higher antimicrobial activity than chitosan and is presumed to act by disrupting/permeabilizing the cell membranes of bacteria, yeast and fungi. COS is relatively non-toxic to mammals. By identifying the molecular and genetic targets of COS, we hope to gain a better understanding of its antifungal mode of action. Results: Three different chemogenomic fitness assays, haploinsufficiency (HIP), homozygous deletion (HOP), and multicopy suppression (MSP) profiling, were combined with a transcriptomic analysis to gain insight into the mode of action and mechanisms of resistance to chitosan oligosaccharides. The fitness assays identified 39 yeast deletion strains sensitive to COS and 21 suppressors of COS sensitivity. The genes identified are involved in processes such as RNA biology (transcription, translation and regulatory mechanisms), membrane functions (e.g. signalling, transport and targeting), membrane structural components, cell division, and proteasome processes. The transcriptomes of the control wild type and of 5 suppressor strains overexpressing ARL1, BCK2, ERG24, MSG5, or RBA50 were analyzed in the presence and absence of COS. Up-regulated transcripts in the suppressor overexpressing strains exposed to COS included genes involved in transcription, the cell cycle, the stress response and the Ras signal transduction pathway. Down-regulated transcripts included those encoding protein folding components and respiratory chain proteins. The COS-induced transcriptional response is distinct from previously described environmental stress responses (i.e. thermal, salt, osmotic and oxidative stress), and pre-treatment with these well-characterized environmental stressors provided little if any resistance to COS. Conclusions: Overexpression of the ARL1 gene, a member of the Ras superfamily that regulates membrane trafficking, provides protection against COS-induced cell membrane permeability and damage. We found that the COS-resistant ARL1 over-expression strain was as sensitive to Amphotericin B, Fluconazole and Terbinafine as wild type cells, and that COS and Fluconazole act synergistically when used in combination. The gene targets of COS identified in this study indicate that its mechanism of action differs from that of other commonly studied membrane-targeting fungicides, suggesting that COS may be an effective fungicide for drug-resistant fungal pathogens.
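Synergy claims like the one above for COS plus Fluconazole are conventionally quantified in checkerboard assays with the fractional inhibitory concentration index (FICI), where FICI ≤ 0.5 indicates synergy and FICI > 4 antagonism. A minimal sketch with placeholder MIC values, not the study's measurements (the abstract does not state which metric was used):

```python
# Fractional inhibitory concentration index for a two-drug combination.
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FICI = FIC_A + FIC_B, each FIC being MIC in combination / MIC alone."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

index = fici(mic_a_alone=64.0, mic_b_alone=8.0,   # illustrative MICs
             mic_a_combo=8.0, mic_b_combo=1.0)
verdict = ("synergy" if index <= 0.5
           else "antagonism" if index > 4.0
           else "no interaction")
print(f"FICI = {index:.2f} -> {verdict}")   # FICI = 0.25 -> synergy
```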
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
MICE (meetings, incentives, conventions, and exhibitions) has generated high foreign-exchange revenue for economies worldwide. In Thailand, MICE tourists are recognized as ‘quality’ visitors, mainly because of their high spending potential. That said, Thailand’s MICE sector has been affected by a number of crises since September 11, 2001. Consequently, professionals in the MICE sector must be prepared to deal with the complex crisis phenomena that might occur in the future. While a number of studies have examined the complexity of crises in the tourism context, there has been little focus on such issues in the MICE sector. As chaos theory provides a particularly good model for crisis situations, the aim of this paper is to propose a chaos theory-based approach to understanding the complex and chaotic behavior of the MICE sector in times of crisis.
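The property that makes chaos theory attractive as a crisis model, sensitive dependence on initial conditions, can be shown in a few lines with the logistic map, a standard textbook example rather than the paper's own model:

```python
# Logistic map x -> r*x*(1-x) at r = 4 (the chaotic regime): two nearly
# identical starting states, standing in for two almost identical pre-crisis
# situations, diverge completely within a few dozen steps, which is why
# long-range prediction of crisis trajectories fails.
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # perturbed by only 1e-9
for t in (0, 10, 20, 30, 40):
    print(f"t = {t:2d}  |a - b| = {abs(a[t] - b[t]):.6f}")
```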
Abstract:
Previous research on computers and graphics calculators in mathematics education has examined effects on curriculum content and students’ mathematical achievement and attitudes, while less attention has been given to the relationship between technology use and issues of pedagogy, in particular the impact on teachers’ professional learning in specific classroom and school environments. This observation is critical in the current context of educational policy making, where it is assumed – often incorrectly – that supplying schools with hardware and software will increase teachers’ use of technology and encourage more innovative teaching approaches. This paper reports on a research program that aimed to develop a better understanding of how, and under what conditions, Australian secondary school mathematics teachers learn to integrate technology effectively into their practice. The research adapted Valsiner’s concepts of the Zone of Proximal Development, Zone of Free Movement and Zone of Promoted Action to devise a theoretical framework for analysing relationships between factors influencing teachers’ use of technology in mathematics classrooms. This paper illustrates how the framework may be used by analysing case studies of a novice teacher and an experienced teacher in different school settings.
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis, and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric Gaussian first-derivative filter provides the input to a Gaussian second-derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts, remarkably accurately, our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches.
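The two-stage scheme is concrete enough to sketch in one dimension. The following is a minimal illustration using scipy's Gaussian-derivative filters; the sigma values, the sign convention (chosen so the rectified second-stage peak falls at the edge centre), and the single-scale simplification are assumptions of this sketch, not the paper's exact parameterization, and the cross-scale normalization needed to recover blur as well as location is omitted:

```python
# 1-D sketch of the two-stage edge model: Gaussian first-derivative filter,
# half-wave rectification, Gaussian second-derivative filter, rectified again.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_response(luminance: np.ndarray, sigma: float) -> np.ndarray:
    stage1 = gaussian_filter1d(luminance, sigma, order=1)  # odd-symmetric 1st-derivative filter
    stage1 = np.maximum(stage1, 0.0)       # half-wave rectify: keep one edge polarity
    stage2 = gaussian_filter1d(stage1, sigma, order=2)     # 2nd-derivative stage
    return np.maximum(-stage2, 0.0)        # rectify; the peak marks the edge centre

x = np.arange(200, dtype=float)
luminance = gaussian_filter1d((x >= 100).astype(float), 3.0)  # blurred edge near x = 100
response = edge_response(luminance, sigma=4.0)
print("edge located near x =", int(np.argmax(response)))      # ~100
```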