900 results for 670308 Printing and publishing processes
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
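As a minimal illustration of the approach described above (a linear-Gaussian sketch only; the Green's functions, noise levels and priors below are hypothetical toy values, not the thesis's semi-analytical or numerical machinery), forward-modeling uncertainty can be folded into the data covariance so that posterior uncertainties reflect both observational noise and modeling error:

```python
import numpy as np

def gaussian_bayes_inversion(G, d, C_obs, C_fwd, C_prior, m_prior):
    """Posterior for the linear-Gaussian inverse problem d = G m + e.

    The forward-modeling error C_fwd is added to the observational
    noise C_obs, so the posterior spread reflects both error sources.
    """
    Cd_inv = np.linalg.inv(C_obs + C_fwd)      # total data covariance
    Cm_inv = np.linalg.inv(C_prior)
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    m_post = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)
    return m_post, C_post

# Toy setup: 40 waveform samples, 10 displacement parameters
rng = np.random.default_rng(0)
G = rng.normal(size=(40, 10))                  # hypothetical Green's functions
m_true = rng.normal(size=10)
d = G @ m_true + 0.05 * rng.normal(size=40)
m_post, C_post = gaussian_bayes_inversion(
    G, d,
    C_obs=0.05**2 * np.eye(40),                # observational noise level
    C_fwd=0.02**2 * np.eye(40),                # assumed forward-model error
    C_prior=np.eye(10), m_prior=np.zeros(10),
)
print(np.abs(m_post - m_true).max(), np.sqrt(np.diag(C_post)).mean())
```

The design point is the one emphasized in the abstract: enlarging the data covariance by an assumed forward-model error term widens the posterior rather than letting the inversion overfit the waveforms.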
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
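For readers unfamiliar with rate-and-state modeling, the following is a minimal quasi-dynamic spring-slider sketch, a single-degree-of-freedom stand-in for the full fault models simulated in the thesis; every parameter value is an illustrative assumption. With velocity-weakening friction and a spring softer than critical, slip accelerates episodically, producing stick-slip cycles analogous to earthquake sequences:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Illustrative spring-slider parameters (all values are assumptions)
sigma, a, b = 50e6, 0.010, 0.015       # normal stress (Pa); rate-state a, b
Dc, V0, f0 = 1e-2, 1e-6, 0.6           # state distance (m); ref. velocity; ref. friction
Vpl = 1e-9                             # tectonic loading rate (m/s)
k = 0.5 * sigma * (b - a) / Dc         # stiffness below critical -> stick-slip
eta = 30e9 / (2 * 3000.0)              # radiation damping, G / (2 c_s)

def friction(V, theta):
    """Regularized rate-and-state friction stress."""
    return sigma * a * np.arcsinh(
        V / (2 * V0) * np.exp((f0 + b * np.log(V0 * theta / Dc)) / a))

theta0 = Dc / Vpl                      # steady state at the loading rate
tau0 = friction(Vpl, theta0) + eta * Vpl

def slip_rate(t, delta, theta):
    """Solve the quasi-dynamic stress balance for the slip rate V."""
    load = tau0 + k * (Vpl * t - delta)
    return brentq(lambda V: load - eta * V - friction(V, theta), 1e-25, 10.0)

def rhs(t, y):
    delta, theta = y
    V = slip_rate(t, delta, theta)
    return [V, 1.0 - V * theta / Dc]   # slip; aging-law state evolution

year = 3.15e7
sol = solve_ivp(rhs, [0, 30 * year], [0.0, theta0],
                method="LSODA", rtol=1e-8, atol=1e-6, max_step=0.1 * year)
V = [slip_rate(t, d_, th) for t, (d_, th) in zip(sol.t, sol.y.T)]
print(f"peak slip rate {max(V):.2e} m/s over {sol.t[-1]/year:.0f} years")
```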
Abstract:
We assessed the genetic structure of populations of the widely distributed sea cucumber Holothuria (Holothuria) mammata Grube, 1840, and investigated the effects of marine barriers to gene flow and historical processes. Several potential genetic breaks were considered, which would separate the Atlantic and Mediterranean basins, the isolated Macaronesian Islands from the other locations analysed, and the Western Mediterranean and Aegean Sea (Eastern Mediterranean). We analysed mitochondrial 16S and COI gene sequences from 177 individuals from four Atlantic locations and four Mediterranean locations. Haplotype diversity was high (H = 0.9307 for 16S and 0.9203 for COI), and the haplotypes were closely related (p = 0.0058 for 16S and 0.0071 for COI). The lowest genetic diversities were found in the Aegean Sea population. Our results showed that the COI gene was more variable and more useful for the detection of population structure than the 16S gene. The distribution of mtDNA haplotypes, the pairwise FST values and the results of exact tests and AMOVA revealed: (i) a significant genetic break between the population in the Aegean Sea and those in the other locations, as supported by both mitochondrial genes, and (ii) weak differentiation of the Canary and Azores Islands from the other populations; however, the populations from the Macaronesian Islands, Algarve and West Mediterranean could be considered to be a panmictic metapopulation. Isolation by distance was not identified in H. (H.) mammata. Historical events, together with current oceanographic patterns, were proposed and discussed as the main factors determining the population structure and genetic signature of H. (H.) mammata.
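As a note on the summary statistics reported above, haplotype diversity (H) and the mean pairwise sequence distance can be computed directly from aligned haplotypes. The sketch below uses toy sequences, not the study's 16S/COI data:

```python
from itertools import combinations
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's haplotype (gene) diversity: H = n/(n-1) * (1 - sum p_i^2)."""
    n = len(haplotypes)
    freqs = [c / n for c in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

def mean_pairwise_distance(seqs):
    """Mean pairwise differences per site over aligned sequences."""
    L = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * L)

# Toy aligned fragments (hypothetical, for illustration only)
seqs = ["ACGTACGTAC", "ACGTACGTAC", "ACGAACGTAC", "ACGAACGTTC", "ACGTACGTTC"]
print(round(haplotype_diversity(seqs), 4), round(mean_pairwise_distance(seqs), 4))
```

The n/(n-1) factor is Nei's small-sample correction; per-population values and pairwise FST in studies of this kind come from dedicated population-genetics software rather than a script like this.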
Abstract:
The flyer promotes "The Spanish Spoken in Cuba: Antecedents, Processes of Phonetic Variation, and Cubanisms", a lecture by Elizabeth Santana Cepero on the Cuban variety of Spanish that results from historical and sociolinguistic processes of transculturation and change. The lecture was conducted in Spanish and held on November 19, 2015 at FIU Modesto A. Maidique Campus Green Library 220.
Abstract:
Participatory evaluation and participatory action research (PAR) are increasingly used in community-based programs and initiatives and there is a growing acknowledgement of their value. These methodologies focus more on knowledge generated and constructed through lived experience than through social science (Vanderplaat 1995). The scientific ideal of objectivity is usually rejected in favour of a holistic approach that acknowledges and takes into account the diverse perspectives, values and interpretations of participants and evaluation professionals. However, evaluation rigour need not be lost in this approach. Increasing the rigour and trustworthiness of participatory evaluations and PAR increases the likelihood that results are seen as credible and are used to continually improve programs and policies. Drawing on learnings and critical reflections about the use of feminist and participatory forms of evaluation and PAR over a 10-year period, significant sources of rigour identified include:
• participation and communication methods that develop relations of mutual trust and open communication
• using multiple theories and methodologies, multiple sources of data, and multiple methods of data collection
• ongoing meta-evaluation and critical reflection
• critically assessing the intended and unintended impacts of evaluations, using relevant theoretical models
• using rigorous data analysis and reporting processes
• participant reviews of evaluation case studies, impact assessments and reports.
Abstract:
Objectives. We tested predictions from the elaborated intrusion (EI) theory of desire, which distinguishes intrusive thoughts and elaborations, and emphasizes the importance of imagery. Secondarily, we undertook preliminary evaluations of the Alcohol Craving Experience (ACE) questionnaire, a new measure based on EI theory. Methods. Participants (N = 232) were in correspondence-based treatment trials for alcohol abuse or dependence. The study used retrospective reports obtained early in treatment using the ACE, and daily self-monitoring of urges, craving, mood and alcohol consumption. Results. The ACE displayed high internal consistency and test-retest reliability and sound relationships with self-monitored craving, and was related to baseline alcohol dependence, but not to consumption. Imagery during craving was experienced by 81%, with 2.3 senses involved on average. More frequent imagery was associated with longer episode durations and stronger craving. Transient intrusive thoughts were reported by 87% of respondents, and were more common if they frequently attempted to stop alcohol cognitions. Associations between average daily craving and weekly consumption were seen. Depression and negative mood were associated with more frequent, stronger and longer-lasting desires for alcohol. Conclusions. Results supported the distinction of automatic and controlled processes in craving, together with the importance of craving imagery. They were also consistent with prediction of consumption from cross-situational averages of craving, and with positive associations between craving and negative mood. However, this study's retrospective reporting and correlational design require that its results be interpreted cautiously. Research using ecological momentary measures and laboratory manipulations is needed before confident inferences about causality can be made.
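The internal consistency reported for the ACE is conventionally quantified with Cronbach's alpha. A minimal sketch with simulated item scores follows; the data and the four-item layout are hypothetical stand-ins, not the actual ACE items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point responses driven by one latent craving factor
rng = np.random.default_rng(1)
latent = rng.normal(size=(50, 1))
scores = np.clip(np.rint(3 + latent + 0.5 * rng.normal(size=(50, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```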
Abstract:
In recent years, practitioners and researchers alike have turned their attention to knowledge management (KM) in order to increase organisational performance (OP). As a result, many different approaches and strategies have been investigated and suggested for how knowledge should be managed to make organisations more effective and efficient. However, most research has been undertaken in the for-profit sector, with only a few studies focusing on the benefits nonprofit organisations might gain by managing knowledge. This study broadly investigates the impact of knowledge management on the organisational performance of nonprofit organisations.

Organisational performance can be evaluated through either financial or non-financial measurements. In order to evaluate knowledge management and organisational performance, non-financial measurements are argued to be more suitable, given that knowledge is an intangible asset which often cannot be expressed through financial indicators. Non-financial measurement concepts of performance such as the balanced scorecard or the concept of Intellectual Capital (IC) are well accepted and used within the for-profit and nonprofit sectors to evaluate organisational performance. This study utilised the concept of IC as the method to evaluate KM and OP in the context of nonprofit organisations due to the close link between KM and IC. Indeed, KM is concerned with managing the KM processes of creating, storing, sharing and applying knowledge, and with the organisational KM infrastructure, such as organisational culture or organisational structure, that supports these processes. IC, on the other hand, measures knowledge stocks at different ontological levels: the individual level (human capital), the group level (relational capital) and the organisational level (structural capital). In other words, IC measures the value of the knowledge which has been managed through KM.

As KM encompasses the different KM processes and the KM infrastructure facilitating these processes, previous research has investigated the relationship between KM infrastructure and KM processes. Organisational culture, organisational structure and the level of IT support have been identified as the main factors of the KM infrastructure influencing the KM processes of creating, storing, sharing and applying knowledge. Other research has focused on the link between KM and OP or organisational effectiveness. Based on existing literature, a theoretical model was developed to enable the investigation of the relation between KM (encompassing KM infrastructure and KM processes) and IC. The model assumes an association between KM infrastructure and KM processes, as well as an association between KM processes and the various levels of IC (human capital, structural capital and relational capital). As a result, five research questions (RQ) with respect to the various factors of the KM infrastructure, as well as with respect to the relationship between KM infrastructure and IC, were raised and included in the research model:
RQ 1: Do nonprofit organisations which have a Hierarchy culture have stronger IT support than nonprofit organisations which have an Adhocracy culture?
RQ 2: Do nonprofit organisations which have a centralised organisational structure have stronger IT support than nonprofit organisations which have a decentralised organisational structure?
RQ 3: Do nonprofit organisations which have stronger IT support have a higher value of Human Capital than nonprofit organisations which have less strong IT support?
RQ 4: Do nonprofit organisations which have stronger IT support have a higher value of Structural Capital than nonprofit organisations which have less strong IT support?
RQ 5: Do nonprofit organisations which have stronger IT support have a higher value of Relational Capital than nonprofit organisations which have less strong IT support?

In order to investigate the research questions, measurements for IC were developed and linked to the main KM processes. The final KM/IC model contained four items for evaluating human capital, five items for evaluating structural capital and four items for evaluating relational capital. The research questions were investigated through empirical research using a case study approach, focusing on two nonprofit organisations providing trade promotion services through local offices worldwide. Data were collected via qualitative as well as quantitative research methods. The qualitative study comprised interviews with representatives of the two participating organisations as well as in-depth document research; its purpose was to investigate the factors of the KM infrastructure (organisational culture, organisational structure, IT support) of the organisations and how these factors were related to each other. The quantitative study was carried out through an online survey amongst staff of the various local offices; its purpose was to investigate what impact the level of IT support, as the main instrument of the KM infrastructure, had on IC.

Overall, several key themes emerged from the study:
• Knowledge management and intellectual capital are complementary, which should be expressed through measurements of IC based on KM processes.
• The various factors of the KM infrastructure (organisational culture, organisational structure and level of IT support) are interdependent.
• IT was a primary instrument through which the different KM processes (creating, storing, sharing and applying knowledge) were performed.
• A high level of IT support was evident where participants reported higher levels of IC (human capital, structural capital and relational capital).

The study supported previous research in the field of KM and replicated the findings from other case studies in this area. It also contributed to theory by placing KM research within the nonprofit context and analysing the linkage between KM and IC. From the managerial perspective, the findings give clear indications that allow interested parties, such as nonprofit managers or consultants, to understand more about the implications of KM for OP and to use this knowledge to implement efficient and effective KM strategies within their organisations.
Abstract:
Effective information and knowledge management (IKM) is critical to corporate success; yet, its actual establishment and management is not yet fully understood. We identify ten organizational elements that need to be addressed to ensure the effective implementation and maintenance of information and knowledge management within organizations. We define these elements and provide key characterizations. We then discuss a case study that describes the implementation of an information system (designed to support IKM) in a medical supplies organization. We apply the framework of organizational elements in our analysis to uncover the enablers and barriers in this systems implementation project. Our analysis suggests that taking the ten organizational elements into consideration when implementing information systems will assist practitioners in managing information and knowledge processes more effectively and efficiently. We discuss implications for future research.
Abstract:
The purpose of this study is to investigate how secondary school media educators might best meet the needs of students who prefer practical production work to 'theory' work in media studies classrooms. This is a significant problem for a curriculum area that claims to develop students' media literacies by providing them with critical frameworks and a metalanguage for thinking about the media. It is a problem that seems to have become more urgent with the availability of new media technologies and forms like video games. The study is located in the field of media education, which tends to draw on structuralist understandings of the relationships between young people and media and suggests that students can be empowered to resist media's persuasive discourses. Recent theoretical developments suggest too little emphasis has been placed on the participatory aspects of young people playing with, creating and gaining pleasure from media. This study contributes to this 'participatory' approach by bringing post-structuralist perspectives to the field, which have been absent from studies of secondary school media education. I suggest theories of media learning must take account of the ongoing formation of students' subjectivities as they negotiate social, cultural and educational norms. Michel Foucault's theory of 'technologies of the self' and Judith Butler's theories of performativity and recognition are used to develop an argument that media learning occurs in the context of students negotiating various 'ethical systems' as they establish their social viability through achieving recognition within communities of practice. The concept of 'ethical systems' has been developed for this study by drawing on Foucault's theories of discourse and 'truth regimes' and Butler's updating of Althusser's theory of interpellation. This post-structuralist approach makes it possible to investigate the ways in which students productively repeat and vary norms to creatively 'do' and 'undo' the various media learning activities with which they are required to engage. The study focuses on a group of year ten students in an all-boys Catholic urban school in Australia who undertook learning about video games in a three-week intensive 'immersion' program. The analysis examines the ethical systems operating in the classroom, including formal systems of schooling, informal systems of popular cultural practice and systems of masculinity. It also examines the students' use of semiotic resources to repeat and/or vary norms while reflecting on, discussing, designing and producing video games. The key findings of the study are that students are motivated to learn technology skills and production processes rather than 'theory' work. This motivation stems from the students' desire to become recognisable in communities of technological and masculine practice. However, student agency is possible not only through critical responses to media, but also through performative variation of norms via creative ethical practices as students participate with new media technologies. Therefore, opportunities exist for media educators to create the conditions for variation of norms through production activities.
The study offers several implications for media education theory and practice, including: the productive possibilities of post-structuralism for informing ways of doing media education; the importance of media teachers having the autonomy to creatively plan curriculum; the advantages of media and technology teachers collaborating to draw on a broad range of resources to develop curriculum; the benefits of placing more emphasis on students' creative uses of media; and the advantages of blending formal classroom approaches to media education with less formal out-of-school experiences.
Abstract:
An examination of Information Security (IS) and Information Security Management (ISM) research in Saudi Arabia has shown the need for more rigorous studies focusing on the implementation and adoption processes involved with IS culture and practices. Overall, there is a lack of academic and professional literature about ISM, and more specifically IS culture, in Saudi Arabia. The overall aim of this paper is therefore to identify the issues and factors that assist the implementation and adoption of IS culture and practices within the Saudi environment, and in particular the conditions important for creating an information security culture in Saudi Arabian organizations. We plan to use the resulting framework to investigate whether security culture has emerged in practice in Saudi Arabian organizations.
Abstract:
The emergence of Enterprise Resource Planning systems and Business Process Management has led to improvements in the design, implementation, and overall management of business processes. However, the typical focus of these initiatives has been on internal business operations, assuming a defined and stable context in which the processes are designed to operate. Yet, a lack of context-awareness for external change leads to processes and supporting information systems that are unable to react appropriately and promptly enough to change. To increase the alignment of processes with environmental change, we propose a conceptual framework that facilitates the identification of context change. Based on a secondary data analysis of published case studies about process adaptation, we exemplify the framework and identify four general archetypes of context-awareness. The framework, in combination with the learning from the case analysis, provides a first understanding of what, where, how, and when processes are subjected to change.
Abstract:
Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of the groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate understanding of groundwater processes, complex subsurface geology and their relationships is through the use of visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While there are increasing numbers of software tools available for developing and visualising groundwater conceptual models, these packages are often very expensive and, due to their complexity, are not readily accessible to most people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.
Abstract:
In this article we explore young children's development of mathematical knowledge and reasoning processes as they worked on two modelling problems (the Butter Beans Problem and the Airplane Problem). The problems involve authentic situations that need to be interpreted and described in mathematical ways. Both problems include tables of data, together with background information containing specific criteria to be considered in the solution process. Four classes of third-graders (8 years of age) and their teachers participated in the 6-month program, which included preparatory modelling activities along with professional development for the teachers. In discussing our findings we address: (a) Ways in which the children applied their informal, personal knowledge to the problems; (b) How the children interpreted the tables of data, including difficulties they experienced; (c) How the children operated on the data, including aggregating and comparing data, and looking for trends and patterns; (d) How the children developed important mathematical ideas; and (e) Ways in which the children represented their mathematical understandings.
Abstract:
Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of processes of learning. In physical education, this type of modelling informs the organization of learning environments and effective and efficient use of practice time. An emerging theoretical framework in motor learning, relevant to physical education, advocates a constraints-led perspective for acquisition of movement skills and game play knowledge. This framework shows how physical educators could use task, performer and environmental constraints to channel acquisition of movement skills and decision-making behaviours in learners. From this viewpoint, learners generate specific movement solutions to satisfy the unique combination of constraints imposed on them, a process which can be harnessed during physical education lessons. Purpose: In this paper the aim is to provide an overview of the motor learning approach emanating from the constraints-led perspective, and examine how it can substantiate a platform for a new pedagogical framework in physical education: nonlinear pedagogy. We aim to demonstrate that it is only through theoretically valid and objective empirical work of an applied nature that a conceptually sound nonlinear pedagogy model can continue to evolve and support research in physical education. We present some important implications for designing practices in games lessons, showing how a constraints-led perspective on motor learning could assist physical educators in understanding how to structure learning experiences for learners at different stages, with specific focus on understanding the design of games teaching programmes in physical education, using exemplars from Rugby Union and Cricket. Findings: Research evidence from recent studies examining movement models demonstrates that physical education teachers need a strong understanding of sport performance so that task constraints can be manipulated in such a way that information-movement couplings are maintained in a learning environment representative of real performance situations. Physical educators should also understand that movement variability may not necessarily be detrimental to learning and could be an important phenomenon prior to the acquisition of a stable and functional movement pattern. We highlight how the nonlinear pedagogical approach is student-centred and empowers individuals to become active learners via a more hands-off approach to learning. Summary: A constraints-based perspective has the potential to provide physical educators with a framework for understanding how performer, task and environmental constraints shape each individual's physical education. Understanding the underlying neurobiological processes present in a constraints-led perspective on skill acquisition and game play can raise physical educators' awareness that teaching is a dynamic 'art' interwoven with the 'science' of motor learning theories.
Abstract:
In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work presented includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and each of the five chapters therefore consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications for this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations; these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculating the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields.
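A minimal Langevin sketch of the acoustic-manipulation competition just described, with all particle and field parameters as hypothetical assumptions (the thesis's multiple-time-scale analysis is not reproduced): an Euler-Maruyama integration of a particle dragged by an oscillating standing-wave velocity field while subject to Brownian kicks.

```python
import numpy as np

# Illustrative parameters: micron-scale particle in water (all assumptions)
kB, T, visc = 1.380649e-23, 300.0, 1e-3
r, rho = 0.5e-6, 1000.0                    # particle radius (m) and density
m = rho * 4.0 / 3.0 * np.pi * r**3
gamma = 6 * np.pi * visc * r               # Stokes drag coefficient
tau = m / gamma                            # Stokes time of the particle
U0, k_ac = 1e-3, 2 * np.pi / 1e-4          # standing-wave amplitude, wavenumber
omega = 2 * np.pi / tau                    # drive near the critical frequency

rng = np.random.default_rng(0)
dt, n = tau / 50, 100_000
x, v = 0.0, 0.0
for i in range(n):
    u = U0 * np.sin(k_ac * x) * np.cos(omega * i * dt)  # oscillating fluid velocity
    kick = np.sqrt(2 * gamma * kB * T / dt) * rng.standard_normal()
    v += dt / m * (-gamma * (v - u) + kick)             # Euler-Maruyama step
    x += v * dt
print(f"Stokes time {tau:.2e} s; displacement after {n*dt:.1e} s: {x:.2e} m")
```

Sweeping omega against 1/tau in such a sketch is one way to see the critical-frequency behaviour the thesis reports, with Brownian kicks blurring the response at high frequencies.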
Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, theoretical simulations of the effect can be rather time-consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
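A hedged sketch of the finite-volume step described above, where the D(x) profile is an illustrative stand-in for the effective diffusion constant derived from the lattice and temperature (not the thesis's actual routine):

```python
import numpy as np

# Solve dp/dt = d/dx( D(x) dp/dx ) with a conservative 1D finite volume scheme.
L, N = 1e-6, 200
dx = L / N
x = (np.arange(N) + 0.5) * dx
D = 1e-12 * (1 + 0.9 * np.cos(2 * np.pi * x / L))   # hypothetical D(x) profile
D_face = 2 * D[:-1] * D[1:] / (D[:-1] + D[1:])      # harmonic mean at cell faces

dt = 0.2 * dx**2 / D.max()                          # explicit stability limit
p = np.exp(-((x - L / 2) ** 2) / (2 * (L / 20) ** 2))
p /= p.sum() * dx                                   # normalize the distribution
for _ in range(5000):
    flux = -D_face * np.diff(p) / dx                # interior face fluxes
    flux = np.concatenate(([0.0], flux, [0.0]))     # zero-flux boundaries
    p -= dt / dx * np.diff(flux)                    # conservative update
print(f"mass after diffusion: {p.sum() * dx:.6f}")  # conserved to round-off
```

Because the update moves fluxes between adjacent cells, total probability is conserved exactly, which is the property that lets such a scheme replace many individual particle trajectories.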
Abstract:
A collection of case studies of individuals and organisations utilising open models in the Asia Pacific and associated regions. The case studies represent activities in nine countries, broader regions such as the Arab nations, and global efforts towards sustainability and social justice, revealing creative ways of participating in the commons. Featured are remix artists, performers, open source software programmers, film makers, collecting institutions and publishing houses focused on democracy and change, who demonstrate a diverse set of motivations to engage with the shared ideals of openness and community collaboration.