862 results for "paradigm shift"
Abstract:
Purpose – Sustainability science has tended to be passive. The purpose of this paper is to introduce an alternative, positive framework for a more active and direct approach to sustainable design and assessment that decouples environmental impacts from economic growth. Design/methodology/approach – This paper deconstructs some systemic gaps that are critical to sustainability in built-environment management processes and tools, and reframes negative "sustainable" decision-making and assessment frameworks into their positive counterparts. In particular, it addresses the omission of ecology, design and ethics from development assessment. Findings – Development can be designed to provide ecological gains and surplus "eco-services," but assessment tools and processes favor business as usual. Despite the tenacity of the dominant paradigm (DP) of sustainable development institutionalized by the Brundtland Report over 25 years ago, these omissions are easily corrected. Research limitations/implications – The limitation is that the author was unable to find exceptions to the omissions cited here in the extensive literature on urban planning and building assessment tools. However, exceptions prove the rule. The implication is that it is not too late for eco-positive retrofitting of cities to increase natural and social capital. The solutions are just as applicable in places like China and India as in the USA, as they pay for themselves. Originality/value – Positive development (PD) is a fundamental paradigm shift that reverses the negative models, methods and metrics of the DP of sustainable development. This paper provides an example of how existing "negative" concepts and practices can be converted into positive ones through a PD prism. Through a new form of bio-physical design, development can be a sustainability solution.
Abstract:
Reciprocal development of the object and subject of learning: the renewal of the learning practices of front-line communities in a telecommunications company as part of the techno-economic paradigm change. Current changes in production have been seen as an indication of a shift from the techno-economic paradigm of the mass-production era to a new paradigm of the information and communication technology era. The rise of knowledge management in the late 1990s can be seen as one aspect of this paradigm shift, as knowledge creation and customer responsiveness were recognized as the prime factors in business competition. However, paradoxical conceptions of learning and agency have been presented in the discussion of knowledge management. One prevalent notion in the literature is that learning is based on individuals' voluntary actions; this has become incompatible with the growing interest in knowledge-management systems. Furthermore, the commonly held view of learning as a general process that is independent of the object of learning contradicts the observation that the current need for new knowledge and new competences is caused by ongoing techno-economic changes. Even though the current view acknowledges that individuals and communities have key roles in knowledge creation, it defies the idea of individuals' and communities' agency in developing the practices through which they learn. This research therefore presents a new theoretical interpretation of learning and agency based on Cultural-Historical Activity Theory. This approach overcomes the paradoxes in knowledge-management theory and offers a means of understanding and analyzing changes in the ways of learning within work communities. The research is also an evaluation of the Competence-Laboratory method, which was developed as part of the study as a special application of the Developmental Work Research methodology. The research data comprise the videotaped competence-laboratory processes of four front-line work communities in a telecommunications company. The findings reported in the five articles included in this thesis are based on analyses of these data. The new theoretical interpretation offered here rests on the assessment that the findings reported in the articles represent one of the front lines of the ongoing historical transformation of work-related learning, since the research site represents one of the key industries of the new "knowledge society". The research can be characterized as the elaboration of a hypothesis concerning the development of work-related learning. According to the new theoretical interpretation, the object of activity is also the object of distributed learning in work communities. The historical socialization of production has increased the number of actors involved in an activity, which has also increased the number of mutual interdependencies as well as the need for communication. Learning practices and organizational systems of learning are historically developed forms of distributed learning mediated by specific forms of division of labor, specific tools, and specific rules. However, the learning practices of the mass-production era are increasingly inadequate to the conditions of the new economy. This was manifested in the front-line work communities at the research site as a deepening contradiction between the new objects of learning and the prevailing learning practices.
The constituent element of this new theoretical interpretation is the idea of a work community's learning as part of its collaborative mastery of the developing business activity. The development of the business activity is at the same time a practical and an epistemic object for the community. This kind of changing object cannot be mastered using learning practices designed for the stable conditions of mass production, because learning has to change along with the changes in the business. According to the model introduced in this thesis, the transformation of learning proceeds through specific stages: predefined learning tasks are first transformed into learning through re-conceptualizing the object of the activity and of the joint learning; then, as the new object becomes stabilized, into the creation of new kinds of learning practices for mastering the re-defined object of the activity. This transformation of the form of learning is realized through a stepwise expansion of the work community's agency. To summarize, the conceptual model developed in this study sets the tool-mediated co-development of the subject and the object of learning as the theoretical starting point for developing new, second-generation knowledge-management methods. Keywords: knowledge management, learning practice, organizational system of learning, agency
Abstract:
Since the first investigation 25 years ago, the application of genetic tools to ecological and evolutionary questions in elasmobranch studies has greatly expanded. Major developments in genetic theory, as well as in the availability, cost-effectiveness and resolution of genetic markers, were instrumental in the particularly rapid progress of the last 10 years. Genetic studies of elasmobranchs are of direct importance and application to fisheries management and conservation issues such as the definition of management units and the identification of species from fins. In the future, increased application of the most recent and emerging technologies will enable accelerated genetic data production and the development of new markers at reduced cost, paving the way for a paradigm shift from gene- to genome-scale research, and more focus on adaptive rather than just neutral variation. The current literature is reviewed in six fields of elasmobranch molecular genetics relevant to fisheries and conservation management (species identification, phylogeography, philopatry, genetic effective population size, molecular evolutionary rate and emerging methods). Where possible, examples from the Indo-Pacific region, which has been underrepresented in previous reviews, are emphasized within a global perspective.
Abstract:
Most countries of Europe, as well as many countries in other parts of the world, are experiencing an increased impact of natural hazards. It is often speculated, but not yet proven, that climate change might influence the frequency and magnitude of certain hydro-meteorological natural hazards. What has certainly been observed is a sharp increase in financial losses caused by natural hazards worldwide. Even though Europe appears not to be affected by natural hazards to such catastrophic extents as other parts of the world, the damages experienced here are certainly increasing too. Natural hazards, climate change and, in particular, risks have therefore recently been put high on the political agenda of the EU. In the search for appropriate instruments for mitigating the impacts of natural hazards and climate change, as well as risks, the integration of these factors into spatial planning practices is receiving ever higher attention. The focus of most approaches lies on single hazards and climate change mitigation strategies. The current paradigm shift from climate change mitigation to adaptation is used as a basis to draw conclusions and recommendations on what concepts could be further incorporated into spatial planning practices. In particular, multi-hazard approaches are discussed as an important concept that should be developed further. One focal point is the definition and applicability of the terms natural hazard, vulnerability and risk in spatial planning practices. The concepts of vulnerability and risk in particular are so manifold and complicated that their application in spatial planning has to be analysed most carefully. The PhD thesis is based on six published articles that describe the results of European research projects which have elaborated strategies and tools for integrated communication and assessment practices on natural hazards and climate change impacts. The papers describe approaches at the local, regional and European levels, from both theoretical and practical perspectives. Based on these, past, current and potential future spatial planning applications are reviewed and discussed. In conclusion, it is recommended to shift from single-hazard assessments to multi-hazard approaches that integrate potential climate change impacts. Vulnerability concepts should play a stronger role than at present, and adaptation to natural hazards and climate change should be emphasized more in relation to mitigation. It is outlined that the integration of risk concepts into planning is rather complicated and would need very careful assessment to ensure applicability. Future spatial planning practices should also be more interdisciplinary, i.e. integrate as many stakeholders and experts as possible to ensure the sustainability of investments.
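For orientation, one common formalization of the risk concept discussed above; this is a frequent convention in the natural-hazards literature, assumed here for illustration rather than taken from the thesis:

\[
R = H \times V \times E,
\]

where \(R\) is the risk, \(H\) the hazard (probability and magnitude of the damaging event), \(V\) the vulnerability of the exposed elements, and \(E\) their exposure (the people and assets at stake). A multi-hazard assessment then aggregates such terms over several hazard types for the same planning area.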
Abstract:
Background: A paradigm shift in educational policy to create problem solvers and critical thinkers produced the games concept approach (GCA) in Singapore's Revised Syllabus for Physical Education (1999). A pilot study (2001) conducted on 11 primary school student teachers (STs) using this approach identified time management and questioning as two of the major challenges faced by novice teachers. Purpose: To examine the GCA from three perspectives: structure (lesson form in terms of teacher-time and pupil-time); product (how STs used those time fractions); and process (the nature of their questioning: type, timing, and target). Participants and setting: Forty-nine STs from three different PETE cohorts (two-year diploma, four-year degree, two-year post-graduate diploma) volunteered to participate in the study, conducted during the penultimate week of their final practicum in public primary and secondary schools. Intervention: Based on the findings of the pilot study, PETE increased the emphasis on GCA content-specific knowledge and pedagogical procedures. To further support STs learning to actualise the GCA, authentic micro-teaching experiences, closely monitored by faculty, were provided in nearby schools. Research design: This is a descriptive study of the time-management and questioning strategies implemented by STs on practicum. Data collection: Each ST was video-taped teaching a GCA lesson towards the end of their final practicum. The STs individually determined the timing of the data collection and the lesson to be observed. Data analysis: Each lesson was segmented into sub-categories of teacher-time (organisation, demonstration and closure) and pupil-time (practice time and game time). Duration recording using Noldus software (Observer 4.0) segmented the time management of the different lesson components; a sketch of this arithmetic follows the abstract. Questions were categorised as knowledge, technical, tactical or affective, and coded in terms of type, timing and target. Separate MANOVAs were used to measure differences between programmes and levels (primary and secondary) in time-management procedures and questioning strategies. Findings: No differences emerged between programmes or levels in time-management or questioning strategies. Using the GCA, STs generated more pupil time (53%) than teacher time (47%). STs at the primary level provided more technical practice, and those in secondary schools more small-sided game play. Most questions (58%) were asked during play or practice but were predominantly low-order, involving knowledge or recall (76%); only 6.7% were open-ended or divergent and capable of developing tactical awareness. Conclusions: Although STs are delivering more pupil time (practice and game) than teacher time, the lesson structure requires further fine-tuning to extend the practice task beyond technical drills. Many questions are being asked to generate knowledge about games, but they lack sufficient quality to enhance critical thinking and tactical awareness, as the GCA intends.
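A minimal sketch of the duration-recording arithmetic mentioned above, with invented segment durations rather than the study's data:

```python
# A minimal sketch (invented data, not the study's) of duration recording:
# summing coded lesson segments into teacher-time and pupil-time fractions.
segments = [  # (category, seconds) for one hypothetical GCA lesson
    ("organisation", 300), ("demonstration", 240), ("closure", 120),
    ("practice", 660), ("game", 480),
]
teacher_categories = {"organisation", "demonstration", "closure"}

total = sum(sec for _, sec in segments)
teacher_time = sum(sec for cat, sec in segments if cat in teacher_categories)
print(f"teacher time: {teacher_time / total:.0%}, "
      f"pupil time: {(total - teacher_time) / total:.0%}")
```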
Abstract:
Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. The field is currently witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combination, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and for considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches to target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches, and perspectives for future development, will also be obtained. Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study while allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of different methods available for the computational modeling and simulation of biological systems can be used effectively for drug target discovery.
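As a concrete, deliberately simplified illustration of the simulation-driven target assessment such reviews survey: the sketch below uses a hypothetical two-enzyme pathway with invented rate constants (E1, E2, k1, k2 and all parameters are assumptions, not from the review), simulates a 90% knockdown of each enzyme, and compares the steady-state level of a pathological product. Notably, inhibiting the downstream enzyme barely changes the steady-state output, a flux-control effect that single-molecule reasoning can miss and systems models expose.

```python
# A minimal sketch (hypothetical pathway, invented parameters) of in-silico
# target assessment: simulate a two-step pathway as ODEs and ask how knocking
# down each enzyme changes the level of the pathological output S2.
import numpy as np
from scipy.integrate import odeint

def pathway(y, t, k1, k2, deg):
    s1, s2 = y
    ds1 = k1 - k2 * s1          # S1 produced via enzyme E1, consumed by E2
    ds2 = k2 * s1 - deg * s2    # S2 produced via enzyme E2, degraded
    return [ds1, ds2]

def steady_output(k1, k2, deg=0.1, t_end=500.0):
    t = np.linspace(0.0, t_end, 1000)
    y = odeint(pathway, [0.0, 0.0], t, args=(k1, k2, deg))
    return y[-1, 1]             # near-steady-state level of S2

baseline = steady_output(k1=1.0, k2=0.5)
# Simulate 90% inhibition of each candidate target in turn.
for name, kwargs in [("E1", dict(k1=0.1, k2=0.5)),
                     ("E2", dict(k1=1.0, k2=0.05))]:
    print(name, steady_output(**kwargs) / baseline)
# E1 knockdown cuts S2 to ~10% of baseline; E2 knockdown leaves the
# steady state essentially unchanged (flux control sits upstream).
```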
Abstract:
In the Taita Hills, south-eastern Kenya, remnants of indigenous mountain rainforests play a crucial role as water towers and socio-cultural sites. They are under pressure due to poverty, a shortage of cultivable land and the fading of traditional knowledge. This study examines the traditional ecological knowledge of the Taita and the ways it may be applied within transforming natural resource management regimes. I have analyzed some justifications for, and hindrances to, ethnodevelopment and participatory forest management in light of recently renewed Kenyan forest policies. Mixed methods were applied, combining an ethnographic approach with participatory GIS. I learned about traditionally protected forests and their ecological and cultural status through a 'seek out the expert' method and with remote sensing data and tools. My informants were 107 household interviewees, 257 focus group participants, 73 key informants and 87 common informants in participatory mapping. Religious leaders and state officials also shared their knowledge for this study. I have gained a better understanding of the traditionally protected forests and sites by examining their ecological characteristics and relation to social dynamics, and by evaluating their strengths and hindrances as sites for the conservation of cultural and biological diversity. My results show that these sites are important components of a complex socio-ecological system, with symbolic status and sacred and mystical elements, that contributes to the connectivity of remnant forests in the agroforestry-dominated landscape. Altogether, 255 plant species and 220 uses were recognized by the tradition experts, whereas 161 species with 108 beneficial uses were listed by farmers. Of the traditionally protected forests studied, 47% were on private land and 23% on community land, leaving 9% within state forest reserves. A paradigm shift in conservation is needed; the conservation-area approach is not functional for private lands or areas held in trust for communities. The role of traditionally protected forests in community-based forest management is, however, paradoxical, since communal approaches suggest the equal participation of people, whereas the management of these sites has traditionally been the duty of solely accredited experts in the village. As modernization has gathered pace, such experts have become fewer. Sacredness clearly contributes to, but does not equal, conservation. Various social, political and economic arrangements further affect the integrity of traditionally protected forests and sites, the control of witchcraft being one of them. My results suggest that the Taita have a rich traditional ecological knowledge base, which should be more firmly integrated into natural resource management planning processes.
Abstract:
Introduction: Advances in genomics technologies are providing very large amounts of data on genome-wide gene expression profiles, protein molecules and their interactions with other macromolecules and metabolites. Molecular interaction networks provide a useful way to capture and comprehend this complex data. Networks are beginning to be used in many steps of the modern drug discovery pipeline, with large-scale molecular networks being particularly useful for understanding the molecular basis of disease. Areas covered: The authors discuss network approaches used for drug target discovery and lead identification in the drug discovery pipeline. By reconstructing networks of targets, drugs and drug candidates, as well as gene expression profiles under normal and disease conditions, the paper illustrates how it is possible to find relationships between different diseases, find biomarkers, explore drug repurposing and study the emergence of drug resistance. Furthermore, the authors also look at networks that address particularly important aspects such as off-target effects, combination targets, mechanisms of drug action and drug safety. Expert opinion: The network approach represents another paradigm shift in drug discovery science. It provides a fresh perspective for understanding important proteins in the context of their cellular environments, providing a rational basis for deriving useful strategies in drug design. Besides drug target identification and inferring mechanisms of action, networks enable us to address new ideas that could prove extremely useful for new drug discovery, such as drug repositioning, drug synergy, polypharmacology and personalized medicine.
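A minimal sketch of one common network heuristic of the kind surveyed above: rank proteins by betweenness centrality in an interaction network, on the premise that highly central nodes are candidate points of intervention. The toy edge list below names real signaling proteins but is illustrative only; actual analyses use curated interactomes.

```python
# A minimal sketch (toy data, not from the review) of centrality-based
# target prioritization on a small protein-protein interaction network.
import networkx as nx

edges = [  # hypothetical interaction list for illustration
    ("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"),
    ("KRAS", "RAF1"), ("RAF1", "MAP2K1"), ("MAP2K1", "MAPK1"),
    ("EGFR", "PIK3CA"), ("PIK3CA", "AKT1"), ("AKT1", "MAPK1"),
]
g = nx.Graph(edges)

# Betweenness centrality: the fraction of shortest paths through a node.
ranking = sorted(nx.betweenness_centrality(g).items(),
                 key=lambda kv: -kv[1])
for protein, score in ranking[:3]:
    print(f"{protein}: {score:.2f}")
```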
Abstract:
Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics are preventing further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role that the single CPU used to play: high-speed signals running through chips, packages and boards connect ever more complex systems. High-speed signals making their way through the entire system pose new challenges for the design of computing hardware. Inductance, phase shifts and velocity-of-light effects, material resonances, and wave behavior not only become prevalent but need to be calculated accurately and rapidly to enable short design cycle times. In essence, continuing to scale with Moore's Law requires the incorporation of Maxwell's equations into the design process. Incorporating Maxwell's equations into the design flow is only possible through the combined power of new algorithms, parallelization and high-speed computing. At the same time, the incorporation of Maxwell-based models into circuit- and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate through the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic, growing relationship between Maxwell and Moore!
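To make "incorporating Maxwell's equations" concrete, here is a minimal sketch of the simplest full-wave field computation, a 1D finite-difference time-domain (FDTD) update in normalized units. This is standard textbook material, not the tutorial's solvers, which must handle 3D geometry, materials, model generation and passivity at vastly larger scale.

```python
# A minimal sketch (textbook 1D FDTD, normalized units): the kind of
# time-stepped field computation that signal-integrity solvers perform.
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)        # electric field samples
hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell (Yee grid)

for t in range(nt):
    hy += np.diff(ez)                    # Faraday's law, Courant number 1
    ez[1:-1] += np.diff(hy)              # Ampere's law, interior nodes
    ez[nz // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```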
Abstract:
Is the Chandrasekhar mass limit for white dwarfs (WDs) set in stone? Not anymore: recent observations of over-luminous, peculiar type Ia supernovae can be explained if significantly super-Chandrasekhar WDs exist as their progenitors, thus barring their use as cosmic distance indicators. However, there is as yet no estimate of a mass limit for these super-Chandrasekhar WD candidates. Can they be arbitrarily large? In fact, the answer is no! We arrive at this revelation by exploiting the flux-freezing theorem in observed, accreting, magnetized WDs, which brings in Landau quantization of the underlying degenerate electron gas. This essay presents the calculations that pave the way for the ultimate (significantly super-Chandrasekhar) mass limit of WDs, heralding a paradigm shift 80 years after Chandrasekhar's discovery.
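For context, the standard non-magnetic limit that the essay's Landau-quantized equation of state modifies; this is textbook material, not the essay's own derivation:

\[
M_{\mathrm{Ch}}
  = \frac{\omega_3^0 \sqrt{3\pi}}{2}
    \left(\frac{\hbar c}{G}\right)^{3/2}
    \frac{1}{(\mu_e m_{\mathrm{H}})^{2}}
  \approx \frac{5.83}{\mu_e^{2}}\, M_\odot
  \approx 1.46\, M_\odot \quad \text{for } \mu_e = 2,
\]

where \(\omega_3^0 \approx 2.018\) is the Lane-Emden constant for polytropic index \(n = 3\) and \(\mu_e\) is the mean molecular weight per electron. The essay's point is that a strongly quantizing magnetic field alters this electron equation of state, lifting the bound to a new, super-Chandrasekhar value.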
Abstract:
A paradigm shift from hard to flexible, organic-based optoelectronics requires a fast and reversible mechanical response from the actuating materials used to convert heat or light into mechanical motion. As the limits in the response times of polymer-based actuating materials are reached, limits inherent to the less-than-optimal coupling between light/heat and mechanical energy in them, a conceptually new approach to mechanical actuation is required to leapfrog the performance of organic actuators. Herein, we explore single crystals of 1,2,4,5-tetrabromobenzene (TBB) as actuating elements and establish relations between their kinematic profile and mechanical properties. Centimeter-size acicular crystals of TBB are the only naturally twinned crystals out of about a dozen known materials that exhibit the thermosalient effect, an extremely rare and visually impressive form of crystal locomotion. When taken over a phase transition, crystals of this material store mechanical strain and then rapidly self-actuate, suddenly jumping up to several centimeters to release the internal strain. To establish the structural basis for this colossal crystal motility, we investigated the mechanical profile of the crystals from the macroscale, in response to externally induced deformation under a microscope, to the nanoscale, by using nanoindentation. Kinematic analysis based on high-speed recordings of over 200 twinned TBB crystals exposed to directional or nondirectional heating revealed that the crystal locomotion is a kinematically complex phenomenon comprising at least six kinematic effects. The nanoscale tests confirm the highly elastic nature of the crystals, with an elastic deformation recovery (60%) far superior to those of molecular crystals reported earlier. This property appears to be critical for the accumulation of the stress required for crystal jumping. Twinned crystals of TBB exposed to moderate directional heating behave as an all-organic analogue of a bimetallic strip, where the lattice misfit between the two crystal components drives reversible deformation of the crystal.
Abstract:
A critical unmet need in the treatment of drug-resistant tuberculosis (TB) is to find novel therapies that are efficacious, safe, and shorten the duration of treatment. Drug discovery approaches for TB primarily target essential genes of the pathogen Mycobacterium tuberculosis (Mtb), but novel strategies such as host-directed therapies and nonmicrobicidal targets are necessary to bring about a paradigm shift in treatment. Drugs targeting host pathways and nonmicrobicidal proteins can be used only in conjunction with existing drugs, as adjunct therapies. Significantly, host-directed adjunct therapies have the potential to decrease the duration of treatment, as they are less prone to drug resistance, target the immune response, and act via novel mechanisms of action. Recent advances in targeting host-pathogen interactions have implicated pathways such as eicosanoid regulation and angiogenesis. Furthermore, several approved drugs, such as metformin and verapamil, have been identified that appear suitable for repurposing for the treatment of TB. These findings, the challenges in the area of host- and/or pathogen-directed adjunct therapies, and their implications for TB therapy are discussed.
Abstract:
This study examines the application of Lei nº 12.527 of 18 November 2011, the Access to Information Law (LAI), taking as its locus the Câmara dos Deputados and its body for information management, public relations and user services, the Centro de Documentação e Informação (Cedi) and, in particular, the Coordenação de Relacionamento, Pesquisa e Informação (Corpi). It analyzes, in the light of Information Science, the impact of the LAI on the process of providing information and on the availability of institutional information to society, in the context of the broad access to public information that is desirable in the Câmara. The research is documentary in character, drawing on documents and legislation produced within the Câmara dos Deputados. For the case study, Corpi staff were interviewed, yielding impressions of the LAI's impact on the dynamics of reference and research work, the main problems perceived, and suggestions for improvement. Information management is also discussed, in a subsidiary manner, as part of the information cycle and as a condition for access to information, the central topic of this research. The study addresses citizenship and social control, as well as the right to information and governmental transparency, which underlie the broad access to public information advocated by the LAI, given the paradigm shift and the information regime that the LAI brings about. The study of the LAI's effects within the Câmara covered the period from May to December 2012. It is expected that the indicators produced by this research may contribute to future studies on information governance in the Câmara.
Abstract:
In recent years, participatory approaches have been incorporated into decision-making processes as a way to strengthen the bonds between diverse areas of knowledge and social actors in natural resources management and environmental governance. Despite the favourable context, this paradigm shift is still at an early stage within the development of Natura 2000 in the European Union, the largest network of protected areas in the world. To realize the full scope of participatory approaches in this context, this article: (i) briefly reviews the role of participatory approaches in environmental governance, (ii) develops a common framework to evaluate such participatory processes in protected area management, (iii) applies this framework to a real case study, and (iv) based on the lessons learned, provides guidance to improve the future governance of Natura 2000 sites.
Abstract:
Future fossil fuel scarcity and environmental degradation have demonstrated the need for renewable, low-carbon sources of energy to power an increasingly industrialized world. Solar energy, with its effectively unlimited supply, is an extraordinary resource that should not go unused. With current materials, however, adoption is limited by cost, so a paradigm shift in materials must occur for solar technology to be embraced broadly. Cuprous oxide (Cu2O) is a promising, earth-abundant material that can be a strong alternative to traditional thin-film photovoltaic materials such as CIGS and CdTe. We have prepared Cu2O bulk substrates by the thermal oxidation of copper foils, as well as Cu2O thin films deposited via plasma-assisted molecular beam epitaxy. Preliminary Hall measurements determined that Cu2O would need to be doped extrinsically; this was further confirmed by simulations of ZnO/Cu2O heterojunctions. A cyclic interdependence between defect concentration, minority carrier lifetime, film thickness, and carrier concentration manifests itself as a primary reason why efficiencies greater than 4% have yet to be realized. Our growth methodology for the thin-film heterostructures allows precise control of the number of defects incorporated into the film during both equilibrium and nonequilibrium growth. We also report the process flow, device design and fabrication techniques used to create a device. A typical device without any optimizations exhibited open-circuit voltage (Voc) values in excess of 500 mV, nearly 18% greater than previous solid-state devices.
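As a rough sanity check on voltages of this order, the ideal-diode relation ties the open-circuit voltage to the photocurrent and saturation current densities. The sketch below uses invented, merely plausible values (the ideality factor, Jsc and J0 are assumptions, not the thesis's measurements):

```python
# A minimal sketch (illustrative numbers, not the thesis data) of the
# ideal-diode estimate: Voc = (n*k*T/q) * ln(Jsc/J0 + 1).
import math

k_over_q = 8.617e-5   # Boltzmann constant over elementary charge, V/K
T = 300.0             # temperature, K
n = 1.5               # assumed diode ideality factor
Jsc = 8e-3            # assumed photocurrent density, A/cm^2
J0 = 1e-8             # assumed saturation current density, A/cm^2

voc = n * k_over_q * T * math.log(Jsc / J0 + 1.0)
print(f"estimated Voc: {voc * 1e3:.0f} mV")   # ~527 mV with these values
```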