874 results for Theoretical approaches
New Approaches for Teaching Soil and Rock Mechanics Using Information and Communication Technologies
Abstract:
Soil and rock mechanics are disciplines with a strong conceptual and methodological basis. When engineering students first study these subjects, they have to understand new theoretical phenomena that are explained through mathematical and/or physical laws (e.g. the consolidation process, or water flow through porous media). In addition to studying these phenomena, students have to learn how to estimate soil and rock parameters in the laboratory according to standard tests. Nowadays, information and communication technologies (ICTs) provide a unique opportunity to improve the learning process of students studying these subjects. In this paper, we describe our experience of incorporating ICTs into the classical teaching-learning process of soil and rock mechanics and explain in detail how we successfully developed several initiatives, namely: (a) implementation of an online social networking and microblogging service (using Twitter) for gradually sending key concepts to students throughout the semester (gradual learning); (b) detailed online virtual laboratory tests for a delocalized development of lab practices (self-learning); and (c) integration of different complementary learning resources (e.g. videos, free software, technical regulations) through an open webpage. The use of these ICT resources as a complement to the classical teaching-learning process has been highly satisfactory for students, who have evaluated this new approach positively.
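By way of illustration, one of the theoretical phenomena mentioned above, one-dimensional consolidation, is classically described by Terzaghi's equation; this is a standard textbook result, quoted here for orientation rather than taken from the paper:

\[
\frac{\partial u}{\partial t} = c_v \frac{\partial^2 u}{\partial z^2}, \qquad c_v = \frac{k}{m_v \gamma_w},
\]

where \(u\) is the excess pore-water pressure at depth \(z\), \(c_v\) the coefficient of consolidation, \(k\) the hydraulic conductivity, \(m_v\) the coefficient of volume compressibility and \(\gamma_w\) the unit weight of water.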
Abstract:
The microfoundations research agenda offers an expanded theoretical perspective because it considers individuals, their characteristics, and their interactions as relevant variables for understanding firm-level strategic issues. However, microfoundations empirical research faces unique challenges because processes take place at different levels of analysis, and these multilevel processes must be considered simultaneously. We describe multilevel modeling and mixed methods as methodological approaches whose use will allow for theoretical advancement. We describe key issues regarding the use of these two types of methods and, more importantly, discuss pressing substantive questions and topics that can be addressed with each of them, with the goal of advancing theory on the microfoundations research agenda and in strategic management studies more generally.
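For readers unfamiliar with the technique, the simplest multilevel specification, a two-level random-intercept model written in generic textbook notation (not taken from the paper), makes the idea concrete:

\[
y_{ij} = \gamma_{00} + \gamma_{10} x_{ij} + u_{0j} + \varepsilon_{ij},
\qquad u_{0j} \sim N(0, \tau^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2),
\]

where \(i\) indexes individuals nested within firms \(j\). The firm-level random effect \(u_{0j}\) allows individual-level relationships and firm-level heterogeneity to be estimated simultaneously, which is precisely the multilevel structure the abstract refers to.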
Abstract:
This paper aims to identify drivers of physical capital adjustments in agriculture. It begins with a review of some of the most important theories and modelling approaches regarding firms’ adjustments of physical capital, ranging from output-based models to more recent approaches that consider irreversibility and uncertainty. Thereafter, it is suggested that determinants of physical capital adjustments in agriculture can be divided into three main groups, namely drivers related to: i) expected (risk-adjusted) profit, ii) expected societal benefits and costs, and iii) expected private nonpecuniary benefits and costs. The discussion that follows focuses on the determinants belonging to the first group and covers aspects related to product market conditions, technological conditions, financial conditions and the role of firm structure and organization. Furthermore, the role of subjective beliefs is emphasized. The main part of the paper is concerned with the demand side of the physical capital market; one section also briefly discusses aspects related to the supply of farm assets.
Abstract:
Willingness-to-pay models have shown the theoretical relationships between the contingent valuation, cost-of-illness and avertive behaviour approaches. In this paper, field survey data are used to compare the relationships between these three approaches and to demonstrate that contingent valuation bids exceed the sum of the cost-of-illness and avertive behaviour estimates. The estimates provide a validity check for CV bids and further support the claim that contingent valuation studies are theoretically consistent.
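The relationship being tested can be stated compactly (generic notation, added here for clarity, not taken from the paper): because a contingent valuation bid captures full willingness to pay, including disutility that neither treatment costs nor defensive spending reflect, theory predicts

\[
WTP_{CV} \;\ge\; COI + AB,
\]

where \(COI\) is the cost-of-illness estimate and \(AB\) the avertive behaviour (defensive expenditure) estimate; the field data reported here are consistent with this inequality.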
Abstract:
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches have been proposed for determining the drives of individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second method is a regularization-based technique. In the second approach, a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the first optimization method is more flexible as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
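As a minimal sketch of the second, regularization-based approach, the snippet below solves for complex drive weights using generic Tikhonov regularization; the array names, sizes and the random stand-in for the sensitivity matrix are assumptions for illustration, not the paper's actual data or formulation:

```python
import numpy as np

# Hypothetical sensitivity matrix S: rows are field samples in the imaging
# plane, columns are resonator rungs (complex-valued B1 contributions).
n_pixels, n_rungs = 4096, 16
rng = np.random.default_rng(0)
S = rng.standard_normal((n_pixels, n_rungs)) + 1j * rng.standard_normal((n_pixels, n_rungs))

# Target: a homogeneous unit-magnitude B1 field across the plane.
b_target = np.ones(n_pixels, dtype=complex)

def tikhonov_drives(S, b, lam):
    """Solve min ||S w - b||^2 + lam ||w||^2 for the complex drive
    weights w (one amplitude and phase per rung); lam tames the
    ill-posedness of the sensitivity matrix."""
    A = S.conj().T @ S + lam * np.eye(S.shape[1])
    return np.linalg.solve(A, S.conj().T @ b)

w = tikhonov_drives(S, b_target, lam=1e-2)
achieved = np.abs(S @ w)
print("relative B1 inhomogeneity:", achieved.std() / achieved.mean())
```

The regularization parameter trades field homogeneity against drive power; folding in further constraints such as SAR control is one reason the abstract calls the iterative alternative more flexible.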
Abstract:
Purpose - In many scientific and engineering fields, large-scale heat transfer problems with temperature-dependent pore-fluid densities are commonly encountered; heat transfer from the mantle into the upper crust of the Earth is a typical example. The main purpose of this paper is to develop and present a new combined methodology for solving large-scale heat transfer problems with temperature-dependent pore-fluid densities at the lithospheric and crustal scales. Design/methodology/approach - The theoretical approach is used to determine the thickness and the related thermal boundary conditions of the continental crust on the lithospheric scale, so that accurate information can be provided for establishing a numerical model at the crustal scale. The numerical approach is then used to simulate the detailed structures and complicated geometries of the continental crust at the crustal scale. The main advantage of the proposed combination of theoretical and numerical approaches is that, if the thermal distribution in the crust is of primary interest, the use of a reasonable numerical model at the crustal scale can significantly reduce the computational effort. Findings - From the viewpoint of ore body formation and mineralization, the present analytical and numerical solutions demonstrate that the conductive-and-advective lithosphere with variable pore-fluid density is the most favourable, because it may result in the thinnest lithosphere, so that temperatures near the surface of the crust can be high enough to generate shallow ore deposits. The upward throughflow (i.e. mantle mass flux) can have a significant effect on the thermal structure within the lithosphere. In addition, the emplacement of hot materials from the mantle may further reduce the thickness of the lithosphere. Originality/value - The present analytical solutions can be used to: validate numerical methods for solving large-scale heat transfer problems; provide correct thermal boundary conditions for numerically solving ore body formation and mineralization problems at the crustal scale; and investigate fundamental issues related to thermal distributions within the lithosphere. The proposed finite element analysis can be used effectively to handle the geometrical and material complexities of large-scale heat transfer problems with temperature-dependent fluid densities.
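The steady-state balance underlying such conductive-and-advective models can be written, in standard one-dimensional textbook form (reproduced for orientation, not from the paper), as

\[
\lambda \frac{d^{2}T}{dz^{2}} - \rho_{f} c_{pf}\, v_{z}\, \frac{dT}{dz} + Q = 0,
\]

where \(\lambda\) is the thermal conductivity, \(\rho_{f} c_{pf}\) the volumetric heat capacity of the pore fluid, \(v_{z}\) the upward throughflow (Darcy) velocity and \(Q\) the radiogenic heat production; the temperature dependence of the pore-fluid density enters through \(\rho_{f}\) and the resulting velocity field.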
Abstract:
This chapter serves three very important functions within this collection. First, it aims to make the existence of FPDA better known both to gender and language researchers and to the wider community of discourse analysts, by outlining FPDA's own theoretical and methodological approaches. This involves locating and positioning FPDA in relation to, yet in contradistinction to, the fields of discourse analysis with which it is most often compared: Critical Discourse Analysis (CDA) and, to a lesser extent, Conversation Analysis (CA). Secondly, the chapter serves a vital symbolic function. It aims to contest the authority of the more established theoretical and methodological approaches represented in this collection, which currently dominate the field of discourse analysis. FPDA considers that an established field like gender and language study will only thrive and develop if it is receptive to new ways of thinking, divergent methods of study, and approaches that question and contest received wisdoms or established methods. Thirdly, the chapter aims to introduce some new, experimental and ground-breaking FPDA work, including that by Harold Castañeda-Peña and Laurel Kamada (this volume). I indicate the different ways in which a number of young scholars are imaginatively developing the possibilities of an FPDA approach for their specific gender and language projects.
Abstract:
What does ‘care’ mean in contemporary society? How are caring relationships practised in different contexts? What resources do individuals and collectives draw upon in order to care for, care with and care about themselves and others? How do such relationships and practices relate to broader social processes? Care shapes people’s everyday lives and relationships and caring relations and practices influence the economies of different societies. This interdisciplinary book takes a nuanced and context-sensitive approach to exploring caring relationships, identities and practices within and across a variety of cultural, familial, geographical and institutional arenas. Grounded in rich empirical research and discussing key theoretical, policy and practice debates, it provides important, yet often neglected, international and cross-cultural perspectives. It is divided into four sections covering: caring within educational institutions; caring amongst communities and networks; caring and families; and caring across the life-course. Contributing to broader theoretical, philosophical and moral debates associated with the ethics of care, citizenship, justice, relationality and entanglements of power, Critical Approaches to Care is an important work for students and academics studying caring and care work in the fields of health and social care, sociology, social policy, anthropology, education, human geography and politics.
Abstract:
Over recent years, the role of engineering in promoting a sustainable society has received much public attention [1], with particular emphasis given to the need to promote the future prosperity and security of society through the recruitment and education of more engineers [2,3]. From an employment perspective, the Leitch Review [4] suggested that 'generic' transferable employability skills development should constitute a more substantial part of university education. This paper argues that the global drivers impacting engineering education [5] correlate strongly with those underpinning the Leitch Review; the question of how to promote transferable employability skills within the wider engineering curriculum is therefore increasingly relevant. By exploring the use of heritage in the engineering curriculum as a way to promote learning and engage students, a less familiar approach to study is discussed. This approach moves away from stereotypical notions of information technology as the pinnacle of innovation in education. Taking the student experience as its starting point, the paper draws upon the findings of an exploratory study critically analysing the pedagogical value of using heritage in engineering education. It discusses a teaching approach in which engineering students are taken out of their 'comfort zone', away from the classroom, laboratory and computer, to a heritage site some 100 miles from the university. The primary learning objective underpinning this approach is to develop students' transferable skills by encouraging them to consider how to apply theoretical concepts to a previously unexplored situation. By reflecting upon students' perceptions of the value of this approach, and by identifying how heritage may be utilised as an innovative learning and teaching approach in engineering education, this paper makes a notable contribution to current pedagogical debates in the discipline.
Abstract:
In the present state of the art of authorship attribution there seems to be an opposition between two approaches: cognitive and stylistic methodologies. This article proposes that these two approaches are complementary and that the apparent gap between them can be bridged using Systemic Functional Linguistics (SFL), and in particular some of its theoretical constructs, such as codal variation. The article offers a theoretical explanation of why such a theory would resolve the debate between the two approaches and shows how these two views of authorship attribution are indeed complementary. Although the article is fundamentally theoretical, two example experimental trials are reported to show how this theory can be developed into a workable methodology for authorship attribution. In Trial 1, an SFL analysis was carried out on a small dataset consisting of three 300-word texts collected from three different authors whose socio-demographic backgrounds matched across a number of parameters. This trial led to some conclusions about developing a methodology based on SFL and suggested a second trial, which might point towards a more accurate and useful methodology. In Trial 2, Biber's (1988) multidimensional framework is employed, and a final methodology of authorship analysis based on this kind of analysis is proposed for future research.
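Purely as an illustration of what a Biber-style multidimensional comparison involves (generic stylometry with invented feature counts; this is not the procedure or data of the article):

```python
import numpy as np

# Hypothetical feature counts per 1,000 words for three texts: columns
# might be features such as first-person pronouns, passives and
# nominalizations (invented numbers, for illustration only).
features = np.array([
    [12.0, 4.1, 9.3],
    [ 3.2, 8.7, 2.1],
    [11.5, 3.9, 8.8],
])

# Standardize each feature (z-scores), as in multidimensional analysis,
# so that texts can be located in a common feature space.
z = (features - features.mean(axis=0)) / features.std(axis=0)

# Attribute the questioned text (row 2) to the nearest known profile.
questioned, known = z[2], z[:2]
distances = np.linalg.norm(known - questioned, axis=1)
print("closest candidate author:", int(np.argmin(distances)))
```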
Abstract:
Large-scale introduction of Organic Solar Cells (OSCs) onto the market is currently limited by their poor stability in light and air, factors present in normal working conditions for these devices. Thus, great efforts have to be undertaken to understand the photodegradation mechanisms of their organic materials in order to find solutions that mitigate these effects. This study reports on the elucidation of the photodegradation mechanisms occurring in a low bandgap polymer, namely, Si-PCPDTBT (poly[(4,4′-bis(2-ethylhexyl)dithieno[3,2-b:2′,3′-d]silole)-2,6-diyl-alt-(4,7-bis(2-thienyl)-2,1,3-benzothiadiazole)-5,5′-diyl]). Complementary analytical techniques (AFM, HS-SPME-GC-MS, UV-vis and IR spectroscopy) were employed to monitor the modification of the chemical structure of the polymer upon photooxidative aging and the consequent effects on its architecture and nanomechanical properties. Furthermore, these characterization techniques were combined with a theoretical approach based on quantum chemistry to elucidate the evolution of the polymer's alkyl side chains and backbone throughout exposure. Si-PCPDTBT is shown to be more stable against photooxidation than the commonly studied p-type polymers P3HT and PCDTBT, while modeling demonstrated the benefits of using silicon as a bridging atom in terms of photostability.
Abstract:
Multiscale systems, characterized by a great range of spatial-temporal scales, arise widely in many scientific domains. These range from the study of protein conformational dynamics to multiphase processes in, for example, granular media or haemodynamics, and from nuclear reactor physics to astrophysics. Despite the diversity in subject areas and terminology, there are many common challenges in multiscale modelling, including validation and the design of tools for programming and executing multiscale simulations. This Theme Issue seeks to establish common frameworks for theoretical modelling, computing and validation, and to help practical applications benefit from the modelling results. It was inspired by discussions held during two workshops in 2013: ‘Multiscale modelling and simulation’ at the Lorentz Center, Leiden (http://www.lorentzcenter.nl/lc/web/2013/569/info.php3?wsid=569&venue=Snellius), and ‘Multiscale systems: linking quantum chemistry, molecular dynamics and microfluidic hydrodynamics’ at the Royal Society Kavli Centre. The objective of both meetings was to identify common approaches for dealing with multiscale problems across different applications in fluid and soft matter systems. This was achieved by bringing together experts from several diverse communities.
Abstract:
The paper compares the basic assumptions and methodology of the von Neumann model, developed for purely abstract theoretical purposes, and those of the Leontief model, designed originally for practical applications. The similar mathematical structure of the von Neumann model and of the closed, stationary Leontief model, both assuming a production period of unit length, often leads to the false conclusion that the latter is merely a special case of the former. By unfolding and comparing the economic content and assumptions of the two models in detail, the author shows that this conclusion is misleading: they are two thoroughly different models, and neither can be derived from the other. Joint production and technological choice are indispensable assumptions of the von Neumann model, and the assumption of a unit-length production period excludes the explicit consideration of flow-type outputs; all of these assumptions are alien to the Leontief model. The two models are in fact special cases of a more general closed, stationary stock-flow model, namely its forms reduced to flow variables only.
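For orientation, the two closed, stationary forms being compared can be written in standard textbook notation (not reproduced from the paper):

\[
x = A x \quad \text{(closed Leontief)}, \qquad
B z \ge \alpha A z, \quad p B \le \beta p A \quad \text{(von Neumann)},
\]

where \(A\) is the input matrix, \(B\) the output matrix, \(z\) the vector of activity intensities, \(p\) the row vector of prices, and \(\alpha\) and \(\beta\) the growth and interest factors, equal in equilibrium. Setting \(B = I\) and allowing only one technique per product makes the von Neumann form resemble the Leontief one, which is exactly the superficial similarity the paper argues is misleading.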
Abstract:
The volume The Dialectics of Modernity - Recognizing Globalization. Studies on the Theoretical Perspectives of Globalization is the product of a quarter-century of work that has continued from 1989, the true beginning of globalization, up to the present. Because the concept did not yet exist at that time, the early work was not directed at globalization itself; the concept only took hold in the second half of the nineties, when it could also be traced statistically in the world press. How a group of researchers from Hungary searched during the nineties, at home and abroad, for conversation partners with whom one could discuss how the new world emerging after 1989 might actually be described is a long story, the moral of which is that we apparently live in a world in which most people, and worse, even most intellectuals, are hardly interested in what that world really looks like. In the search for partners, the circle of this volume's authors took shape. In Hungary we quickly reached our limits (which much later did not prevent some from acting as if they had always been at home in the theoretical study of globalization). The French group around Jacques Poulain responded most quickly (and later the group around Francois de Bernard, with his particularly valuable homepage www.mondialisations.org); not much later, contact was established with the Russian colleagues around Alexandr Shumakov, in whose Encyclopedia of Globalization our contribution could already appear in 2003. Along these paths we arrived at a productive relationship with Leonid Grinin and Andrey Korotayev. Finally, we should mention the Fürstenfeld initiative, founded in 2009 with Melitta Becker's help within the framework of the Centre for Interdisciplinary Research in that Austrian city. A substantial proportion of the authors in this book have participated in the group's work from the beginning. The individual contributions to this volume are linked by a common cognitive interest: the theoretical view of the phenomenon of globalization. From the outset this was not further defined or restricted to particular approaches; in particular, no independent theory of globalization was intended. We started from the premise that every legitimately established theoretical approach can contribute to a later theory of globalization. In this way, further contacts arose with Nico Stehr and with the members of the Dresden group investigating security problems, above all Ernst Woit. Hegel defined philosophy as the flight of the Owl of Minerva, which "begins its flight only with the falling twilight". With a theoretical investigation of globalization that kept becoming more interdisciplinary, we by no means wished to quarrel with this incomparable aphorism; we simply started from the conviction that a new reality should not remain without description.
Abstract:
This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either the extent or the intensity of language-related activations, as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a reduction in network efficiency. Patients showed a propensity to employ the whole brain network inefficiently to perform the ADDT language task; controls, on the contrary, seemed to use smaller segregated network components efficiently to achieve the same task. To explain the causes of the decreased efficiency, graph-theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both subject groups the language network exhibited small-world characteristics; however, the patients' extent-of-activation network showed a tendency towards more random networks. It was also shown that the intensity-of-activation network displayed ipsilateral hub reorganization at the local level. The left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the atypical right language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps (a type of artificial neural network) and k-means was able to separate the subjects blindly and fairly accurately into their respective patient or control groups. The clustering was initiated using only the local nodal centrality measurements. Compared to the extent-of-activation network, clustering on the intensity-of-activation network demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
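As a rough sketch of the final clustering step (generic graph and clustering tools with invented connectivity data; the dissertation's own pipeline also used self-organizing maps ahead of k-means):

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

def nodal_centrality(adjacency):
    """Betweenness centrality per node of a thresholded connectivity graph."""
    G = nx.from_numpy_array(adjacency)
    return np.array(list(nx.betweenness_centrality(G).values()))

# Invented connectivity matrices for illustration: 10 subjects, 20 nodes.
subjects = []
for _ in range(10):
    W = rng.random((20, 20))
    W = ((W + W.T) / 2 > 0.7).astype(float)  # symmetrize and threshold
    np.fill_diagonal(W, 0)                   # no self-connections
    subjects.append(nodal_centrality(W))

# Cluster subjects into two groups using local centrality features only.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(subjects))
print("cluster assignments:", labels)
```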