44 results for THEORETICAL PREDICTION
Abstract:
Nucleation is the first step of a first-order phase transition, in which a new phase emerges. The two main categories of nucleation are homogeneous nucleation, where the new phase forms within a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. This thesis focuses mainly on heterogeneous nucleation and treats nucleation phenomena from two theoretical perspectives: classical nucleation theory and the statistical mechanical approach. The formulation of classical nucleation theory relies on equilibrium thermodynamics and on macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules and does not rest on the same assumptions as the classical theory. This work gathers the present theoretical knowledge of heterogeneous nucleation and utilizes it in computational model studies. A new exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations, and the results of the molecular simulations were interpreted in terms of the concepts of classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different substrates. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and of binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles. The results were compared with experimental results. The molecular simulation studies involved homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface. It was found that using a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to fair agreement between theoretical predictions and experimental results. In the presented cases the microscopic angle was always smaller than the contact angle obtained from macroscopic measurements. Furthermore, the molecular Monte Carlo simulations revealed that the concept of a geometric contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
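As a hedged illustration of how the contact angle enters classical heterogeneous nucleation calculations, the sketch below evaluates the standard geometric reduction factor for a spherical-cap nucleus on a planar substrate; the angle values are arbitrary examples, not parameters fitted in the thesis.

```python
import numpy as np

def geometric_factor(theta_deg):
    """Classical-nucleation-theory reduction factor f(theta) for a
    spherical-cap nucleus on a planar substrate:
        f(theta) = (2 + cos(theta)) * (1 - cos(theta))**2 / 4
    The heterogeneous barrier is DeltaG*_het = f(theta) * DeltaG*_hom."""
    c = np.cos(np.radians(theta_deg))
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

# Illustrative values only: a smaller (microscopic) contact angle lowers
# the nucleation barrier relative to the homogeneous case (f = 1 at 180 deg).
for theta in (30.0, 60.0, 90.0, 120.0, 180.0):
    print(f"theta = {theta:5.1f} deg  ->  f = {geometric_factor(theta):.3f}")
```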
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central to sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study gives new critical knowledge of the problematic consequences that follow from such a strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory. As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overtly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that because these theorists reject the importance of nation states and the notion of cultural imperialism for cultural analysis, and replace them with a framework of media-generated deterritorializations and flows, they underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that at the same time as capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, it expresses a longing for a better world in times when such longing is otherwise considered impracticable.
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators, all of which utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example, but when estimating class frequencies the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
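A minimal, hedged sketch of the generic GREG idea described above (the population sum of assisting-model predictions plus a design-weighted sum of sample residuals), here with a logistic assisting model in the spirit of L-GREG. The population, sample design and model are invented for illustration and simplified to equal-probability sampling; the thesis itself covers more general designs and variance estimators.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical population: x is auxiliary information known for every unit,
# y indicates membership in the domain class of interest.
N = 10_000
x = rng.normal(size=N)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))
y = rng.binomial(1, p)

# Simple random sample; the design weight is N/n for every sampled unit.
n = 500
s = rng.choice(N, size=n, replace=False)
w = N / n

# Assisting model fitted on the sample (logistic -> "L-GREG" flavour).
X_s = sm.add_constant(x[s])
fit = sm.Logit(y[s], X_s).fit(disp=0)
yhat_pop = fit.predict(sm.add_constant(x))   # predictions for all N units
yhat_s = yhat_pop[s]                         # predictions for sampled units

# Generic GREG form: sum of predictions over the population
# plus the design-weighted sum of sample residuals.
t_greg = yhat_pop.sum() + np.sum(w * (y[s] - yhat_s))
t_ht = np.sum(w * y[s])                      # Horvitz-Thompson, for contrast

print(f"true class frequency : {y.sum()}")
print(f"GREG estimate        : {t_greg:.1f}")
print(f"HT estimate          : {t_ht:.1f}")
```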
Abstract:
"Trust and Collectives" is a compilation of articles: (I) "On Rational Trust" (in Meggle, G. (ed.) Social Facts & Collective Intentionality, Dr. Hänsel-Hohenhausen AG (currently Ontos), 2002), (II) "Simulating Rational Social Normative Trust, Predictive Trust, and Predictive Reliance Between Agents" (M.Tuomela and S. Hofmann, Ethics and Information Technology 5, 2003), (III) "A Collective's Trust in a Collective's action" (Protosociology, 18-19, 2003), and (IV) "Cooperation and Trust in Group Contexts" (R. Tuomela and M.Tuomela, Mind and Society 4/1, 2005 ). The articles are tied together by an introduction that dwells deeply on the topic of trust. (I) presents a somewhat general version of (RSNTR) and some basic arguments. (II) offers an application of (RSNTR) for a computer simulation of trust.(III) applies (RSNTR) to Raimo Tuomela's "we-mode"collectives (i.e. The Philosophy of Social Practices, Cambridge University Press, 2002). (IV) analyzes cooperation and trust in the context of acting as a member of a collective. Thus, (IV) elaborates on the topic of collective agency in (III) and puts the trust account (RSNTR) to work in a framework of cooperation. The central aim of this work is to construct a well-argued conceptual and theoretical account of rational trust, viz. a person's subjectively rational trust in another person vis-à-vis his performance of an action, seen from a first-person point of view. The main method is conceptual and theoretical analysis understood along the lines of reflective equilibrium. The account of rational social normative trust (RSNTR), which is argued and defended against other views, is the result of the quest. The introduction stands on its own legs as an argued presentation of an analysis of the concept of rational trust and an analysis of trust itself (RSNTR). It is claimed that (RSNTR) is "genuine" trust and embedded in a relationship of mutual respect for the rights of the other party. This relationship is the growing site for trust, a causal and conceptual ground, but it is not taken as a reason for trusting (viz. predictive "trust"). Relevant themes such as risk, decision, rationality, control, and cooperation are discussed and the topics of the articles are briefly presented. In this work it is argued that genuine trust is to be kept apart from predictive "trust." When we trust a person vis-à-vis his future action that concerns ourselves on the basis of his personal traits and/or features of the specific situation we have a prediction-like attitude. Genuine trust develops in a relationship of mutual respect for the mutual rights of the other party. Such a relationship is formed through interaction where the parties gradually find harmony concerning "the rules of the game." The trust account stands as a contribution to philosophical research on central social notions and it could be used as a theoretical model in social psychology, economical and political science where interaction between persons and groups are in focus. The analysis could also serve as a model for a trust component in computer simulation of human action. In the context of everyday life the account clarifies the difference between predictive "trust" and genuine trust. There are no fast shortcuts to trust. Experiences of mutual respect for mutual rights cannot be had unless there is respect.
Abstract:
The purpose of this study is to examine how transformation is defining feminist bioethics and to determine the nature of this transformation. Behind the quest for transformation is core feminism and its political implications, namely, that women and other marginalized groups have been given unequal consideration in society and the sciences, and that this situation is unacceptable and should be remedied. The goal of the dissertation is to determine how feminist bioethicists integrate this transformation into their respective fields and how they apply the potential of feminism to bioethical theories and practice. On a theoretical level, feminist bioethicists wish to reveal how current ways of knowing are based on inequality. Feminists pay special attention to communal and political contexts and to the power relations endorsed by each community. In addition, feminist bioethicists endorse relational ethics, a relational account of the self in which the interconnectedness of persons is important. On the conceptual level, feminist bioethicists work with the beliefs, concepts, and practices that give us our world. As an example, I examine how feminist bioethicists have criticized and redefined the concept of autonomy. Feminist bioethicists emphasize relational autonomy, which is based on the conviction that social relationships shape moral identities and values. On the practical level, I discuss stem cell research as a test case for feminist bioethics and its ability to employ its methodologies. Analyzing these perspectives allowed me, first, to compare non-feminist and feminist accounts of stem cell ethics and, second, to analyze feminist perspectives on this novel biotechnology. Along with offering a critical evaluation of the stem cell debate, the study shows that sustainable stem cell policies should be grounded in empirical knowledge about how donors perceive stem cell research and the donation process. The study indicates that feminist bioethics should develop the use of empirical bioethics, which takes the nature of ethics seriously: ethical decisions are provisional and open to further consideration. In addition, the study shows that there is another area of development in feminist bioethics: the understanding of (moral) agency. I argue that agency should be understood to mean that actions create desires.
Abstract:
One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices, and later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. What has been lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with 9 different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time-varying does not further improve the density forecasts but on the contrary makes them slightly worse. In Essay 3 a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis does not appear to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
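A minimal sketch of the GARCH(1,1) variance recursion that underlies the forecast comparisons in Essay 1. The parameter values and simulated returns are illustrative assumptions; in practice the parameters would be estimated by (quasi-)maximum likelihood under one of the candidate error distributions.

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional variance from the GARCH(1,1) recursion:
        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}
    Parameter values here are illustrative, not estimates from the essays."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = np.var(returns)          # simple initialisation
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2[-1]                    # forecast for the next period

# Illustrative daily returns and parameters (assumed, not from the thesis).
rng = np.random.default_rng(1)
r = rng.normal(scale=0.01, size=1000)
print(f"next-day variance forecast: {garch11_forecast(r, 1e-6, 0.08, 0.90):.2e}")
```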
Abstract:
This thesis introduces a practice-theoretical approach to understanding customer value formation, to be used in the field of service marketing and management. In contrast to current studies that try to understand value formation by analysing customers as independent actors and thinkers, it is suggested in this work that customer value formation can be better understood by analysing how value is formed in the practices and contexts of the customers. The theoretical approach developed in this thesis is applied in an empirical study of family cruises. The theoretical analysis results in a new approach for understanding customer value formation. According to this new approach, customer value is something that is formed in practice, meaning that value is formed in constellations of the customer and contextual elements such as tools, physical spaces and contextually embedded images and know-how. This view differs from current views that tend to see value as subjectively created, co-created, perceived or experienced by the customer. The new approach has implications not only for how we view customer value, but also for the methods and techniques we can use to understand customer value in empirical studies. It is also suggested that services could in fact be reconceptualised as practices. According to the stance presented in this thesis, the empirical analysis of customer value should not focus on individual customers, but should instead take the contextual entity of practices as its unit of analysis. Therefore, ethnography is chosen as the method for exploring how customer value is formed in practice in the case of family cruises on a specific cruise vessel. The researcher studied six families, as well as the context of the cruise vessel, with various techniques including non-participant observation, participant observation and interviews in order to create an ethnographic understanding of the practices carried out on board. Twenty-one different practices are reported and discussed in order to provide the insight into customer value formation needed as input for service development.
Abstract:
In Finland, the collection of forest resource data needed for forest planning is shifting from stand-level field assessment to remote sensing based on airborne laser scanning and aerial photographs. The purpose of this study was to determine the accuracy of predicting the total volume and diameter distribution of a stand from plot-level stand and tree attributes, utilising the MSN, PRM, ML and FMM methods and the Weibull distribution, in the following ways: 1. with the PRM method at the grid-cell level, 2. with the PRM method at the stand level, 3. with the ML method at the grid-cell level, and 4. with the ML method at the stand level. In addition, the accuracy of predicting the total volume of a stand was determined using the stem frequency distribution produced for the stand. The results were calculated separately for pine, spruce, birch and other tree species, and the species-level results were summed at the stand level. The computation time and storage space requirements of the methods were also examined. The study material consisted of 249 fixed-radius circular sample plots measured in the forests of the Evo unit of Häme University of Applied Sciences (Hämeen ammattikorkeakoulu). The tree data of 12 stands, with areas ranging from 0.2 to 1.94 hectares, were measured with a harvester. The pulse density of the area-based laser scanning data was 1.8 pulses/m2 and the pixel size of the aerial photographs was 0.5 metres. The total volume of a stand was predicted or estimated from the plot-level tree attributes using features of the laser scanning and aerial photograph data. The results were calculated separately for all stands and for stands larger than 0.5 hectares; there were 8 stands larger than 0.5 hectares. For the grid cells of a stand, 1 to 10 sample plots were used as neighbours. Depending on the method and the number of neighbours, the relative RMSE and bias of the total volume varied between 20.76 and 52.86 per cent and between -12.04 and 46.54 per cent, respectively. The corresponding figures for stands larger than 0.5 hectares were 6.74-59.41 per cent and -8.04-49.59 per cent. The computation time varied strongly between the methods and with the number of neighbours used; with more advanced programming and software, the computation times could decrease considerably. With the tested methods, storage space is not a limiting factor even on a large scale. Regarding the diameter distributions, the PRM method predicts a very narrow diameter distribution for a tree species if the sample plot consists of only a few trees of nearly equal size. This affected the results especially with the PRM2 method.
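A simplified, hedged sketch of the area-based nearest-neighbour idea behind methods such as MSN: grid cells receive plot attributes from their most similar reference plots in feature space. The actual MSN, PRM, ML and FMM methods use their own distance metrics and parameter-recovery or maximum-likelihood steps; here a plain Euclidean k-nearest-neighbour imputation on invented data stands in for them.

```python
import numpy as np

def knn_impute(cell_features, plot_features, plot_volumes, k=5):
    """Impute a volume to each grid cell as the inverse-distance-weighted
    mean of the volumes of its k most similar reference plots, where
    similarity is Euclidean distance in standardised feature space.
    (Simplified stand-in for MSN-type imputation; MSN itself derives the
    distance metric from canonical correlations.)"""
    mu, sd = plot_features.mean(axis=0), plot_features.std(axis=0)
    P = (plot_features - mu) / sd
    C = (cell_features - mu) / sd
    out = np.empty(len(C))
    for i, c in enumerate(C):
        d = np.linalg.norm(P - c, axis=1)
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] + 1e-9)
        out[i] = np.sum(w * plot_volumes[nn]) / w.sum()
    return out

# Invented example: 2 remote-sensing features per plot and grid cell.
rng = np.random.default_rng(2)
plots = rng.uniform(0, 1, size=(249, 2))
vols = 50 + 300 * plots[:, 0] + rng.normal(0, 15, 249)   # m3/ha, synthetic
cells = rng.uniform(0, 1, size=(100, 2))
print(knn_impute(cells, plots, vols, k=5)[:5].round(1))
```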
Abstract:
The thesis presents a state-space model for a basketball league and a Kalman filter algorithm for the estimation of the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength compared to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least squares optimal estimates for the team strengths and predictions for the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time, the team ratings explain 16% of the random variation observed in the game scores, and the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home court advantage observed in the scores. The thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The thesis also gives various example analyses related to the American professional basketball league, i.e., the National Basketball Association (NBA), and the regular seasons played from 2005 through 2010. Additionally, the 2009-2010 season is discussed in full detail, including the playoffs.
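A minimal, hedged sketch of the kind of rating filter described above: ratings follow a Gaussian random walk, the observed score margin is modelled as the home rating minus the away rating plus a home-court advantage and Gaussian noise, and a Kalman update refines the ratings after each game. The variance settings, home advantage and game data are illustrative assumptions, not the parameters estimated in the thesis.

```python
import numpy as np

def kalman_ratings(games, n_teams, q=0.02, r_var=120.0, home_adv=3.0):
    """Recursive Kalman filter for team ratings.

    State: vector of team ratings, evolving as a random walk with
    independent Gaussian increments (process variance q per game step).
    Observation: score margin of a game, modelled as
        margin = rating[home] - rating[away] + home_adv + noise,
    with noise variance r_var. q, r_var and home_adv are illustrative."""
    x = np.zeros(n_teams)                # rating means
    P = np.eye(n_teams)                  # rating covariance
    for home, away, margin in games:
        P = P + q * np.eye(n_teams)      # predict step (random-walk drift)
        H = np.zeros(n_teams)            # observation row vector
        H[home], H[away] = 1.0, -1.0
        y = margin - (H @ x + home_adv)  # innovation
        S = H @ P @ H + r_var            # innovation variance (scalar)
        K = (P @ H) / S                  # Kalman gain
        x = x + K * y                    # update rating means
        P = P - np.outer(K, H @ P)       # update covariance
    return x

# Invented games: (home team index, away team index, home score - away score).
games = [(0, 1, 7), (1, 2, -3), (2, 0, 5), (0, 2, 10), (1, 0, -4)]
print(kalman_ratings(games, n_teams=3).round(2))
```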
Abstract:
This thesis attempts to improve the models used for predicting forest stand structure for practical purposes, e.g. forest management planning (FMP), in Finland. Comparisons were made between the Weibull and Johnson's SB distributions and between alternative regression estimation methods. The data used for the preliminary studies were local, but the final models were based on representative data. The models were validated mainly in terms of bias and RMSE in the main stand characteristics (e.g. volume) using independent data. The bivariate SBB distribution model was used to mimic realistic variation in tree dimensions by including within-diameter-class height variation. With the traditional method, the diameter distribution combined with the expected height resulted in reduced height variation, whereas the alternative bivariate method utilized the error term of the height model. The lack of models for FMP was covered to some extent by the models for peatland and juvenile stands. The validation of these models showed that the more sophisticated regression estimation methods provided slightly improved accuracy. A flexible prediction application for stand structure consisted of seemingly unrelated regression models for eight stand characteristics, the parameters of three optional distributions and Näslund's height curve. The cross-model covariance structure was used in a linear prediction application, in which the expected values of the models were calibrated with the known stand characteristics. This provided a framework for validating the optional distributions and the optional sets of stand characteristics. The height distribution is recommended for the earliest stage of stands because of its continuous nature. From a mean height of about 4 m onwards, the Weibull dbh-frequency distribution is recommended in young stands if the input variables consist of arithmetic stand characteristics. In advanced stands, basal area-dbh distribution models are recommended. Näslund's height curve proved useful. Some efficient transformations of stand characteristics are introduced, e.g. the shape index, which combines the basal area, the stem number and the median diameter. The shape index enabled the SB model for peatland stands to capture the large variation in stand densities. This model also demonstrated reasonable behaviour for stands on mineral soils.
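A hedged sketch of two of the building blocks mentioned above, a Weibull dbh-frequency distribution and Näslund's height curve h = 1.3 + d^2/(a + b*d)^2; all parameter values below are invented for illustration and are not predictions from the thesis models.

```python
import numpy as np
from scipy.stats import weibull_min

def naslund_height(d, a=1.0, b=0.20):
    """Näslund's height curve: h = 1.3 + d**2 / (a + b*d)**2,
    with d in cm and h in m; a and b here are illustrative values."""
    return 1.3 + d**2 / (a + b * d) ** 2

# Illustrative Weibull dbh distribution (shape and scale in cm) for one stand.
shape, scale = 3.2, 22.0
dbh = np.linspace(5, 45, 9)
freq = weibull_min.pdf(dbh, shape, scale=scale)   # relative frequency
heights = naslund_height(dbh)

for d, f, h in zip(dbh, freq, heights):
    print(f"dbh {d:5.1f} cm  rel. freq {f:.3f}  height {h:5.1f} m")
```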
Abstract:
This study is about the challenges of learning in the creation and implementation of new sustainable technologies. The system of biogas production in the Programme of Sustainable Swine Production (3S Programme), conducted by the Sadia food processing company in Santa Catarina State, Brazil, is used as a case example for exploring the challenges, possibilities and obstacles of learning in the use of biogas production as a way to increase the environmental sustainability of swine production. The aim is to contribute to the discussion about the possibilities of developing systems of biogas production for sustainability (BPfS). In the study I develop hypotheses concerning the central challenges and possibilities for developing systems of BPfS in three phases. First, I construct a model of the network of activities involved in BP for sustainability in the case study. Next, I construct a) an idealised model of the historically evolved concepts of BPfS through an analysis of the development of forms of BP, and b) a hypothesis of the current central contradictions within and between the activity systems involved in BP for sustainability in the case study. This hypothesis is further developed through two actual-empirical analyses: an analysis of the actors' senses in taking part in the system, and an analysis of the disturbance processes in the implementation and operation of the BP system in the 3S Programme. The historical analysis shows that BP for sustainability in the 3S Programme emerged as a feasible solution to the contradiction between environmental protection and the concentration, intensification and specialisation of swine production, a contradiction that created a threat to the supply of swine to the food processing company. In the food production activity, the contradiction was expressed as a contradiction between the desire of the company to become a sustainable company and the situation in the outsourced farms. For the swine producers, the contradiction was expressed in contradictory rules: the market exerted pressure for continual increases in scale, specialisation and concentration to keep production economically viable, while the environmental rules imposed a limit on this expansion. Although the observed disturbances in the biogas system seemed to be merely technical and localised within the farms, the analysis proposed that these disturbances were formed in and between the activity systems involved in the network of BPfS during the implementation. The observed disturbances could be explained by four contradictions: a) contradictions between the new, more expanded activity of sustainable swine production and the old activity, b) a contradiction between the concept of BP for carbon credits and BP for local use in the BPfS that was implemented, c) contradictions between the new UNFCCC methodology for applying for carbon credits and the small size of the farms, and d) contradictions between the technologies of biogas use and burning available in the market and the small size of the farms. The main finding of this study relates to the zone of proximal development (ZPD) of the BPfS in the Sadia food production chain. The model is first developed as a general model of concepts of BPfS and further developed here for the specific case of the BPfS in the 3S Programme. The model is composed of two developmental dimensions: societal and functional integration. The dimension of societal integration refers to the level of integration with other activities outside the farm.
At one extreme, biogas production is self-sufficient and highly independent and the products of BP are consumed within the farm, while at the other extreme BP is highly integrated in markets and networks of collaboration, and BP products are exchanged within the markets. The dimension of functional integration refers to the level of integration between products and production processes so that economies of scope can be achieved by combining several functions using the same utility. At one extreme, BP is specialised in only one product, which allows achieving economies of scale, while at the other extreme there is an integrated production in which several biogas products are produced in order to maximise the outcomes from the BP system. The analysis suggests that BP is moving towards a societal integration, towards the market and towards a functional integration in which several biogas products are combined. The model is a hypothesis to be further tested through interventions by collectively constructing the new proposed concept of BPfS. Another important contribution of this study refers to the concept of the learning challenge. Three central learning challenges for developing a sustainable system of BP in the 3S Programme were identified: 1) the development of cheaper and more practical technologies of burning and measuring the gas, as well as the reduction of costs of the process of certification, 2) the development of new ways of using biogas within farms, and 3) the creation of new local markets and networks for selling BP products. One general learning challenge is to find more varied and synergic ways of using BP products than solely for the production of carbon credits. Both the model of the ZPD of BPfS and the identified learning challenges could be used as learning tools to facilitate the development of biogas production systems. The proposed model of the ZPD could be used to analyse different types of agricultural activities that face a similar contradiction. The findings could be used in interventions to help actors to find their own expansive actions and developmental projects for change. Rather than proposing a standardised best concept of BPfS, the idea of these learning tools is to facilitate the analysis of local situations and to help actors to make their activities more sustainable.
Abstract:
M.A. (Educ.) Anu Kajamaa from the University of Helsinki, Center for Research on Activity, Development and Learning (CRADLE), examines change efforts and their consequences in public sector health care. The aim of her academic dissertation is, by providing a new conceptual framework, to widen our understanding of organizational change efforts and their consequences and managerial challenges. Despite the multiple change efforts, the results of health care development projects have not been very promising, and many developmental needs and managerial challenges exist. The study challenges the predominant, well-framed health care change paradigm and calls for an expanded view to explore the underlying issues and multiplicities of change efforts and their consequences. The study asks what kind of expanded conceptual framework is needed to better understand organizational change as transcending currently dominant oppositions in management thinking, specifically in the field of health care. The study includes five explorative case studies of health care change efforts and their consequences in Finland. Theory and practice are tightly interconnected in the study. The methodology of the study integrates the ethnography of organizational change, a narrative approach and cultural-historical activity theory. From the stance of activity theory, historicity, contradictions, locality and employee participation play significant roles in developing health care. The empirical data of the study were mainly collected in two projects, funded by the Finnish Work Environment Fund, in public sector health care organizations during the years 2004-2010. By exploring the oppositions between distinct views on organizational change and the multi-site, multi-level and multi-logic character of organizational change, the study develops an expanded, multidimensional activity-theoretical framework for organizational change and management thinking. The findings of the study contribute to activity theory and organization studies, and provide information for health care management and practitioners. The study demonstrates that continuous development efforts bridged to one another and anchored to collectively created new activity models can lead to significant improvements and organizational learning in health care, and it presents such expansive learning processes. The ways of conducting change efforts in organizations play a critical role in the creation of collective new practices and tools and in establishing ownership over them. Some of the studied change efforts were discontinuous or encapsulated, not benefiting the larger whole. The study shows that the stagnation and unexpected consequences of change efforts relate to the unconnectedness of the different organizational sites, levels and logics. If not dealt with, unintended consequences such as obstacles, breaks and conflicts may stem promising change and learning processes.
Abstract:
Periglacial processes act in cold, non-glacial regions where landscape development is mainly controlled by frost activity. Roughly 25 percent of the Earth's surface can be considered periglacial. Geographical information systems combined with advanced statistical modeling methods provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were to: 1) model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modeling, 2) investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) compare two widely used statistical modeling methods, Generalized Linear Models and Generalized Additive Models, 4) study the effect of modeling resolution on prediction, and 5) study how spatially continuous predictions can be obtained from point data. The observational data of this study consist of 369 points that were collected during the summers of 2009 and 2010 in the study area in Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbations, slope processes, weathering, deflation, nivation and fluvial processes. The features were modeled using Generalized Linear Models (GLM) and Generalized Additive Models (GAM) based on Poisson errors. The abundance of periglacial features was predicted from these models onto a spatial grid with a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modeling resolution was investigated in a small independent study area with a spatial resolution of 0.01 hectares. The models explained 45-70% of the occurrence of the periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data; Spearman's correlation between the observed and predicted values varied from 0.258 to 0.754. Based on the explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modeling resolution was clear: too coarse a resolution caused a loss of information, while a finer resolution brought out more localized variation. The models' ability to explain and predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modeling methods were small, although the explained deviance was higher with the GLM models than with the GAMs; in turn, the GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which had strong correlations with many other explanatory variables. Ongoing global warming will have a great impact especially on cold environments at high latitudes, and for this reason an important research topic in the near future will be the response of periglacial environments to a warming climate.
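As a hedged sketch of the modelling setup described above (a Poisson-error GLM for the abundance of a periglacial feature, evaluated by the share of explained deviance), the example below uses statsmodels on invented data; the predictors, coefficients and counts are assumptions, not the Kilpisjärvi observations.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented observations: abundance counts of one periglacial feature
# with a few terrain predictors (not the Kilpisjärvi data).
rng = np.random.default_rng(3)
n = 369
df = pd.DataFrame({
    "altitude": rng.uniform(600, 1000, n),
    "slope": rng.uniform(0, 30, n),
})
mu = np.exp(-4.0 + 0.004 * df["altitude"] + 0.03 * df["slope"])
df["count"] = rng.poisson(mu)

# Poisson GLM; a GAM would replace the linear terms with smooth functions.
fit = smf.glm("count ~ altitude + slope", data=df,
              family=sm.families.Poisson()).fit()

# Share of the null deviance explained by the model.
explained = 1.0 - fit.deviance / fit.null_deviance
print(f"explained deviance: {100 * explained:.1f} %")
```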