17 results for Formal Methods. Component-Based Development. Competition. Model Checking
Abstract:
The point of departure of this dissertation was a practical safety problem: unanticipated, unfamiliar events and unexpected changes in the environment, i.e. the demanding situations that operators must handle in complex socio-technical systems. The aim of the thesis was to increase understanding of demanding situations and of the resources for coping with them by presenting a new construct, a conceptual model called Expert Identity (ExId), as a way to open up new solutions to the problem of demanding situations, and by testing the model in empirical studies on operator work. The premises of the Core-Task Analysis (CTA) framework were adopted as a starting point: core-task oriented working practices promote system efficiency (including safety, productivity and well-being targets) and should therefore be supported. The negative effects of stress were summarised, and possible countermeasures related to the operators' personal resources, such as experience, expertise, sense of control, and conceptions of work and self, were considered. ExId was proposed as a way to bring emotional-energetic depth into work analysis, to supplement CTA-based practical methods in discovering development challenges, and to contribute to the development of complex socio-technical systems. The potential of ExId to promote understanding of operator work was demonstrated in six empirical studies on operator work, each of which had its own practical objectives within a fairly broad focus. The concluding research questions were: 1) Are the assumptions made in ExId on the basis of different theories and previous studies supported by the empirical findings? 2) Does the ExId construct promote understanding of operator work in empirical studies? 3) What are the strengths and weaknesses of the ExId construct? The layers of expert identity and the assumptions about its development gained support from the evidence. The new conceptual model worked as part of the analysis of different kinds of data, as part of different methods used for different purposes, and in different work contexts. The results showed that the operators had problems in taking care of the core task, resulting from a discrepancy between demands and resources (either personal or external). Changes in the work, difficulties in reaching the real content of work within the organisation, and the limits of the practical means of support had complicated the problem and limited the possibilities for development actions within the case organisations. Personal resources seemed to be sensitive to the changes: adaptation was taking place, but neither deeply nor quickly enough. Furthermore, the results revealed several characteristics of the studied contexts that limited the operators' possibilities to grow into or with the demands and to develop practices, expertise and an expert identity matching the core task: discontinuity of the work demands; discrepancies between the conceptions of work held in other parts of the organisation, the visions, and the reality faced by the operators; and an emphasis on individual efforts and situational solutions. The discussion considered the potential of ExId to open up new paths to solving the problem of demanding situations and its ability to enable studies on practices in the field. The results were interpreted as promising enough to encourage further studies on ExId.
In particular, this dissertation aims to contribute to supporting workers in recognising changing demands and their possibilities for growing with them, when the goal is to support human performance in complex socio-technical systems, both in designing the systems and in solving existing problems.
Abstract:
Service researchers have repeatedly claimed that firms should acquire customer information in order to develop services that fit customer needs. Despite this, studies concentrating on the actual use of customer information in service development are lacking. The present study fills this research gap by investigating information use during a service development process. It demonstrates that use is not a straightforward task that automatically follows the acquisition of customer information. In fact, out of the six identified types of use, four represent non-use of customer information. Hence, the study demonstrates that the acquisition of customer information does not guarantee that the information will actually be used in development. The study used an ethnographic approach and was consequently conducted in the field, in real time, over an extensive period of 13 months. Participant observation allowed direct access to the investigated phenomenon: the different types of use by the observed development project members were captured as they emerged. In addition, interviews, informal discussions and internal documents were used to gather data. The development process of a bank’s website constituted the empirical context of the investigation. This ethnography brings novel insights to both academia and practice. It critically questions the traditional focus on the firm’s acquisition of customer information and suggests that this focus ought to be expanded to the actual use of customer information: what is the point in acquiring costly customer information if it is not used in development? Based on the findings of this study, a holistic view of customer information, “information in use”, is generated. This view extends the traditional view of customer information in three ways: the source, the timing and the form of data collection. First, the study showed that customer information can come explicitly from the customer, can arise from speculation among the developers, or can already exist implicitly. Prior research has mainly focused on the customer as the information provider and as the explicit source to turn to for information. Second, the study identified that the used and non-used customer information was acquired previously, currently (within the time frame of the focal development process), and potentially in the future. Prior research has primarily focused on currently acquired customer information, i.e. information acquired within the time frame of the development process. Third, the used and non-used customer information was both formally and informally acquired. In prior research, a large number of sophisticated formal methods have been suggested for the acquisition of customer information to be used in development. By focusing on “information in use”, new knowledge on the types of customer information that are actually used was generated. For example, the findings show that formal customer information acquired during the development process is used less than customer information already existing within the firm. With this knowledge at hand, better methods can be developed to capture this more usable customer information. Moreover, the thesis suggests that by focusing more strongly on the use of customer information, service development processes can be restructured around the information that is actually used.
Abstract:
The type A lantibiotic nisin, produced by several Lactococcus lactis strains and one Streptococcus uberis strain, is a small antimicrobial peptide that inhibits the growth of a wide range of gram-positive bacteria, such as Bacillus, Clostridium, Listeria and Staphylococcus species. It is non-toxic to humans and is used as a food preservative (E234) in more than 50 countries, including the EU, the USA and China. National legislation concerning the maximum permitted levels of nisin in different foods varies greatly. There is therefore a demand for non-laborious and sensitive methods to identify and quantify nisin reliably in different food matrices. The horizontal inhibition assay, based on the inhibitory effect of nisin on Micrococcus luteus, is the basis for most quantification methods developed so far. However, the sensitivity and accuracy of the agar diffusion method are affected by several parameters. Immunological tests have also been described, but taking into account the sensitivity of immunological methods to interfering substances within sample matrices, and possible cross-reactivities with lantibiotics structurally close to nisin, their usefulness for nisin detection in food samples remains limited. The proteins responsible for nisin biosynthesis and producer self-immunity are encoded by genes arranged into two inducible operons, nisA/Z/QBTCIPRK and nisFEG, which also contain the internal, constitutive promoters PnisI and PnisR. The transmembrane histidine kinase NisK and the response regulator NisR form a two-component signal transduction system in which NisK autophosphorylates after exposure to extracellular nisin and subsequently transfers the phosphate to NisR. The phosphorylated NisR then relays the signal downstream by binding to the two regulated promoters in the nisin gene cluster, i.e. the nisA/Z/Q and nisF promoters, thus activating transcription of the structural gene nisA/Z/Q and the downstream genes nisBTCIPRK from the nisA/Z/Q promoter, and of the genes nisFEG from the nisF promoter. In this work, two novel and highly sensitive nisin bioassays were developed. Both quantification methods were based on NisRK-mediated, nisin-induced green fluorescent protein (GFP) fluorescence. The suitability of these assays for the quantification of nisin in food samples was evaluated in several food matrices; the bioassays had nisin sensitivities at the nanogram or picogram level. In addition, the shelf life of nisin in cooked sausages and the retention of the induction activity of nisin in intestinal chyme (intestinal content) were assessed.
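Quantitative bioassays of this kind are typically read off a fitted calibration curve. As a purely illustrative sketch (the thesis' actual assay design and data are not reproduced here), a four-parameter logistic standard curve for nisin-induced GFP fluorescence could be fitted and inverted in Python as follows; all concentrations and signal values below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration standards (invented values, not thesis data):
# GFP fluorescence induced by known nisin concentrations.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # ng/ml
signal = np.array([120, 180, 400, 900, 1800, 2600, 2900])  # fluorescence, a.u.

def four_pl(x, bottom, top, ec50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** slope)

params, _ = curve_fit(four_pl, conc, signal, p0=[100, 3000, 0.5, 1.0], maxfev=10000)
bottom, top, ec50, slope = params

def nisin_from_signal(y):
    """Invert the fitted curve to estimate nisin concentration (ng/ml)."""
    return ec50 / ((top - bottom) / (y - bottom) - 1.0) ** (1.0 / slope)

print(f"sample reading 1500 a.u. ~ {nisin_from_signal(1500.0):.2f} ng/ml")
```

In practice, each food matrix would need its own calibration because matrix effects shift the curve, which is presumably why the abstract stresses evaluation in several matrices.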
Abstract:
This thesis studies homogeneous classes of complete metric spaces. Over the past few decades model theory has been extended to cover a variety of nonelementary frameworks. Shelah introduced the abstract elementary classes (AEC) in the 1980s as a common framework for the study of nonelementary classes. Another direction of extension has been the development of model theory for metric structures. This thesis takes a step in the direction of combining these two by introducing an AEC-like setting for studying metric structures. To find a balance between generality and the possibility of developing stability theoretic tools, we work in a homogeneous context, thus extending the usual compact approach. The homogeneous context enables the application of stability theoretic tools developed in discrete homogeneous model theory. Using these we prove categoricity transfer theorems for homogeneous metric structures with respect to isometric isomorphisms. We also show how generalized isomorphisms can be added to the class, giving a model theoretic approach to, e.g., Banach space isomorphisms or operator approximations. The novelty is the built-in treatment of these generalized isomorphisms, making, e.g., stability up to perturbation the natural stability notion. With respect to these generalized isomorphisms we develop a notion of independence. It behaves well already for structures that are omega-stable up to perturbation, and it coincides with the one from classical homogeneous model theory over sufficiently saturated models. We also introduce a notion of isolation and prove dominance for it.
Abstract:
The superconducting (or cryogenic) gravimeter (SG) is based on the levitation of a superconducting sphere in a stable magnetic field created by currents in superconducting coils. Depending on frequency, it is capable of detecting gravity variations as small as 10⁻¹¹ m s⁻²; for a single event, the detection threshold is higher, conservatively about 10⁻⁹ m s⁻². Due to its high sensitivity and low drift rate, the SG is eminently suitable for the study of geodynamical phenomena through their gravity signatures. I present investigations of Earth dynamics with the superconducting gravimeter GWR T020 at Metsähovi from 1994 to 2005. The history and key technical details of the installation are given, and the data processing methods and the development of the local tidal model at Metsähovi are presented. The T020 is part of the worldwide GGP (Global Geodynamics Project) network, which consists of 20 working stations; the data of the T020 and of the other participating SGs are available to the scientific community. The SG T020 has been used as a long-period seismometer to study microseismicity and the Earth's free oscillations. The annual variation, spectral distribution, amplitude and sources of microseism at Metsähovi are presented. Free oscillations excited by three large earthquakes were analysed: the spectra, attenuation and rotational splitting of the modes. The lowest modes of all the different oscillation types are studied, i.e. the radial mode 0S0, the "football mode" 0S2, and the toroidal mode 0T2. The very low level (0.01 nm s⁻¹) incessant excitation of the Earth's free oscillations was detected with the T020. The recovery of global and regional variations in gravity with the SG requires the modelling of local gravity effects, the most important of which is hydrology. The variation in the groundwater level at Metsähovi, as measured in a borehole in the fractured bedrock, correlates significantly (0.79) with gravity, and the influence of local precipitation, soil moisture and snow cover is detectable in the gravity record. The gravity effect of the variation in atmospheric mass and that of the non-tidal loading by the Baltic Sea were investigated together, as sea level and air pressure are correlated. Using Green's functions it was calculated that a 1 metre uniform layer of water in the Baltic Sea increases gravity at Metsähovi by 31 nm s⁻² and produces a vertical deformation of -11 mm. The regression coefficient for sea level is 27 nm s⁻² m⁻¹, which is 87% of the uniform model. These studies were combined with temporal height variations using the GPS data of the Metsähovi permanent station. The long time series at Metsähovi demonstrated the high quality of the data and the correct treatment of offsets and drift corrections. The superconducting gravimeter T020 has proved to be an excellent and versatile tool for studies of Earth dynamics.
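Because sea level and air pressure are correlated, their gravity effects must be estimated jointly rather than one at a time. A minimal sketch of such a joint regression in Python, with invented series and admittance values chosen only for illustration (the 27 nm s⁻² m⁻¹ coefficient quoted above is the thesis' result, not this script's):

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented hourly series (illustrative, not Metsähovi data):
n = 2000
sea_level = rng.normal(0.0, 0.3, n)                    # Baltic sea level, m
pressure = 5.0 * sea_level + rng.normal(0.0, 3.0, n)   # air pressure, hPa (correlated)
gravity = 27.0 * sea_level - 3.0 * pressure + rng.normal(0.0, 2.0, n)  # nm/s^2

# Joint least-squares regression of gravity on both drivers; fitting them
# separately would bias the admittances because the regressors correlate.
A = np.column_stack([sea_level, pressure, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, gravity, rcond=None)
print(f"sea-level admittance: {coef[0]:.1f} nm/s^2 per m, "
      f"pressure admittance: {coef[1]:.1f} nm/s^2 per hPa")
```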
Abstract:
"Fifty-six teachers, from four European countries, were interviewed to ascertain their attitudes to and beliefs about the Collaborative Learning Environments (CLEs) which were designed under the Innovative Technologies for Collaborative Learning Project. Their responses were analysed using categories based on a model from cultural-historical activity theory [Engestrom, Y. (1987). Learning by expanding.- An activity-theoretical approach to developmental research. Helsinki: Orienta-Konsultit; Engestrom, Y., Engestrom, R., & Suntio, A. (2002). Can a school community learn to master its own future? An activity-theoretical study of expansive learning among middle school teachers. In G. Wells & G. Claxton (Eds.), Learning for life in the 21st century. Oxford: Blackwell Publishers]. The teachers were positive about CLEs and their possible role in initiating pedagogical innovation and enhancing personal professional development. This positive perception held across cultures and national boundaries. Teachers were aware of the fact that demanding planning was needed for successful implementations of CLEs. However, the specific strategies through which the teachers can guide students' inquiries in CLEs and the assessment of new competencies that may characterize student performance in the CLEs were poorly represented in the teachers' reflections on CLEs. The attitudes and beliefs of the teachers from separate countries had many similarities, but there were also some clear differences, which are discussed in the article. (c) 2005 Elsevier Ltd. All rights reserved."
Abstract:
Information structure and Kabyle constructions: three sentence types in the Construction Grammar framework. The study examines three Kabyle sentence types and their variants. These sentence types were chosen because they code the same state of affairs but have different syntactic structures: the Dislocated sentence, the Cleft sentence, and the Canonical sentence. I argue, first, that a proper description of these sentence types must include information structure and, second, that a description taking information structure into account is possible in the Construction Grammar framework. The study thus constitutes a testing ground for Construction Grammar and its applicability to a lesser-known language, notably because the three sentence types cannot be differentiated without information structure categories, and these categories must consequently be integrated into the grammatical description. The information structure analysis is based on the model outlined by Knud Lambrecht, in which information structure is considered a component of sentence grammar that ensures pragmatically correct sentence forms. The work starts with an examination of the three sentence types and the analyses that have been carried out in André Martinet's functional grammar framework. This introduces the sentence types chosen as the object of study and discusses the difficulties related to their analysis. After a presentation of the state of the art, including earlier and more recent models, the principles and notions of Construction Grammar and of Lambrecht's model are introduced and explicated. The information structure analysis is presented in three chapters, each treating one of the three sentence types. The analyses are based on spoken language data and elicitation. Prosody is included in the study whenever a syntactic structure seems to code two different focus structures; in such cases it is pertinent to investigate whether these are coded by prosody. The final chapter presents the constructions that have been established and the problems encountered in analysing them, and discusses the impact of the study on the theories used and on the theory of syntax in general.
Abstract:
The aim of this study was to investigate the connection between teachers' burn-out and professional development. In addition, the study aimed at clarifying teachers' conceptions of the significance of in-service training for work-related well-being. The theoretical starting points of the study were a model of burn-out (Kalimo & Toppinen 1997) and a model of teachers' professional development (Niemi 1989). The present study can be seen as an independent follow-up to a working-ability project called "Uudistumisen eväät" carried out in Kuopio. The study was conducted in two phases. First, the connection between teachers' burn-out and professional development was charted with a quantitative survey. 131 teachers participated in the survey, some from schools that participated in the working-ability project and the remainder from other schools in Kuopio. The questionnaire consisted of self-constructed instruments for burn-out and professional development. According to the results, burn-out and professional development were strongly correlated. Burn-out was summed up in three factors: emotional exhaustion, feelings of depersonalization, and low feelings of personal accomplishment. Professional development was summed up in four factors: personality and pedagogical skills, learning orientation, social skills, and confronting change. Personality and pedagogical skills and the skills of confronting change correlated most strongly with burn-out and its symptoms. A teacher who has not found his or her own personal way of acting as a teacher, and who considers change as something negative, is more likely to become exhausted than a teacher who has developed his or her own pedagogical identity and regards change more positively. In the second phase, teachers' conceptions of the significance of in-service training for well-being were investigated with group interviews (n=12). According to the results, in-service training had a significant effect on teachers' well-being. It appeared that in-service training promotes well-being by providing teachers with motivation, professional development, and the possibility of taking a break from teaching and cooperating with other teachers. It has to be based on teachers' own needs, and it has to be offered to teachers frequently and early enough: if teachers are already exhausted, they will neither have the resources to participate in training nor the strength to make good use of it in practice. Both professional development and well-being are becoming ever more essential as society changes rapidly and the demands set on teachers grow. Professional development can promote well-being, but are teachers too exhausted to develop themselves? Professional development demands resources, and teachers may regard it as a threat and an additional strain. When the demands are so high that teachers cannot cope with them, they are likely to suffer stress and to see reduced commitment to their work and its development as a means of survival. If teachers stop caring about their work and their own development, how can we expect them to promote pupils' learning and development? The planning and implementation of in-service training, and the arrangement of teachers' working conditions, should ensure that teachers have enough time and resources to develop themselves.
Keywords: Teachers, burn-out, well-being, professional development, in-service training
Abstract:
This thesis studies the use of the Bayesian approach to statistical inference in fisheries stock assessment. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the monitoring and prediction of the juvenile salmon population in the River Tornionjoki, the largest salmon river flowing into the Baltic Sea, as an example application. The thesis tackles the issues of model formulation and model checking, as well as computational problems related to Bayesian modelling, in the context of fisheries stock assessment. Each article of the thesis provides a novel method, either for extracting information from data obtained via a particular type of sampling system or for integrating the information about the fish stock from multiple sources in terms of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of population size. In addition, a method for estimating the stock composition of a salmon catch based on DNA samples is presented. In most of the articles, Markov chain Monte Carlo (MCMC) simulation is used as a tool to approximate the posterior distribution. Problems arising from the sampling method are also briefly discussed and potential solutions proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment. It is argued that the role of the subjective prior knowledge needed in practically all parts of a Bayesian model should be recognized and consequently fully utilised in the process of model formulation.
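As a flavour of the MCMC machinery mentioned above, here is a deliberately minimal Metropolis sampler for a single mark-recapture experiment with a hypergeometric likelihood; the data, prior and model are invented for illustration and are far simpler than the thesis' population dynamics models:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented mark-recapture data (illustrative only):
M, C, R = 200, 150, 12   # marked, recapture sample size, marked recaptures

def log_posterior(N):
    """Log posterior for population size N with a vague 1/N prior."""
    if N < M + C - R:
        return -np.inf
    # R ~ Hypergeometric(total N, M marked, C drawn)
    return stats.hypergeom.logpmf(R, N, M, C) - np.log(N)

# Random-walk Metropolis over the integer population size
N, samples = 1000, []
for _ in range(20000):
    proposal = N + rng.integers(-50, 51)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(N):
        N = proposal
    samples.append(N)

posterior = np.array(samples[5000:])   # discard burn-in
print(f"posterior median N = {np.median(posterior):.0f}, "
      f"95% interval = {np.percentile(posterior, [2.5, 97.5]).round()}")
```

The same pattern, a likelihood per sampling scheme combined with explicit priors, scales up to the multi-source population dynamics models the abstract describes.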
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumour eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, so it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms must be configured for the accelerators using measured beam data, and the uncertainty of such data sets the limit for the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because of the steep dose gradients produced within, or close to, healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the results of the QC measurements was proposed for estimating both of these factors. The reduction of random measurement errors and the optimisation of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, a sufficient accuracy level was estimated for the beam data, and a method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling the lowering of action levels and the prolongation of the measurement time interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
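The abstract's idea of estimating output stability and measurement reproducibility by model fitting of QC results can be illustrated with a simple drift-plus-noise fit; the readings below are simulated and the linear model is only an assumed stand-in for the thesis' actual fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly accelerator output QC readings (% of nominal):
# a slow 0.05 %/month drift plus 0.3 % measurement scatter (invented values).
t = np.arange(24.0)                                   # months
output = 100.0 + 0.05 * t + rng.normal(0.0, 0.3, t.size)

# Fit output(t) = a + b*t: the slope b estimates long-term output stability
# (drift rate) and the residual scatter estimates measurement reproducibility.
b, a = np.polyfit(t, output, 1)
residuals = output - (a + b * t)
print(f"drift: {b:+.3f} %/month, reproducibility SD: {residuals.std(ddof=2):.3f} %")
```

Separating the two factors in this way is what allows action levels and the measurement interval to be tuned against the overall dose uncertainty.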
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has been lacking a robust method until now. The methods are based on the solid foundation of statistical orbital inversion properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduce the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
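The log-linear complexity claim rests on reducing each observation set to a low-dimensional "address" and comparing addresses with an efficient spatial data structure. A toy sketch of that pattern with a kd-tree (the actual reduction used in the thesis is not reproduced here; the three coordinates below are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Placeholder low-dimensional addresses for n observation sets, e.g. a few
# orbital-element or sky-plane parameters after dimensionality reduction.
n = 100_000
addresses = rng.random((n, 3))

# Building the kd-tree and range-querying it are O(n log n) operations;
# pairs within the tolerance ball (set by astrometric uncertainty) become
# candidate linkages, each to be confirmed by full statistical orbital inversion.
tree = cKDTree(addresses)
pairs = tree.query_pairs(r=1e-3)
print(f"{len(pairs)} candidate linkages to verify")
```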
Abstract:
This thesis deals with theoretical modeling of the electrodynamics of auroral ionospheres. In the five research articles forming the main part of the thesis we have concentrated on two main themes: the development of new data-analysis techniques and the study of inductive phenomena in ionospheric electrodynamics. The introductory part of the thesis provides a background for these new results and places them in the wider context of ionospheric research. In this thesis we have developed a new tool (called 1D SECS) for analysing ground based magnetic measurements from a 1-dimensional magnetometer chain (usually aligned in the North-South direction) and a new method for obtaining the ionospheric electric field from combined ground based magnetic measurements and estimated ionospheric electric conductance. Both methods are based on earlier work, but contain important new features: 1D SECS respects the spherical geometry of large scale ionospheric electrojet systems, and due to an innovative way of implementing boundary conditions the new method for obtaining electric fields can also be applied in local scale studies. These new calculation methods have been tested using both simulated and real data. The tests indicate that the new methods are more reliable than the previous techniques. Inductive phenomena are intimately related to temporal changes in electric currents. As the large scale ionospheric current systems change relatively slowly, on time scales of several minutes or hours, inductive effects are usually assumed to be negligible. However, during the past ten years it has been realised that induction can play an important part in some ionospheric phenomena. In this thesis we have studied the role of inductive electric fields and currents in ionospheric electrodynamics. We have formulated the induction problem so that only ionospheric electric parameters are used in the calculations. This is in contrast to previous studies, which require knowledge of magnetosphere-ionosphere coupling. We have applied our technique to several realistic models of typical auroral phenomena. The results indicate that inductive electric fields and currents are locally important during the most dynamical phenomena (like the westward travelling surge, WTS). In these situations induction may locally contribute up to 20-30% of the total ionospheric electric field and currents. Inductive phenomena also change the field-aligned currents flowing between the ionosphere and magnetosphere, thus modifying the coupling between the two regions.
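Methods like 1D SECS boil down to a linear inverse problem: ground magnetic observations are modelled as a geometry matrix times unknown elementary-current amplitudes. A schematic sketch with a random placeholder geometry matrix (the real matrix follows from the spherical elementary-current formulas, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder setup: n_sec elementary systems along a North-South meridian,
# n_mag ground magnetometers (geometry invented for illustration).
n_sec, n_mag = 30, 12
M = rng.normal(size=(n_mag, n_sec))    # field at station i per unit amplitude j
B_obs = rng.normal(size=n_mag)         # observed magnetic disturbances

# Solve B_obs = M @ I for the amplitudes I by truncated-SVD least squares;
# discarding small singular values stabilises the under-determined inversion.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
keep = s > 0.05 * s.max()
I = Vt[keep].T @ ((U[:, keep].T @ B_obs) / s[keep])
print(f"residual norm: {np.linalg.norm(B_obs - M @ I):.3e}")
```

Once the amplitudes are known, the same elementary systems yield the equivalent currents and, with an estimated conductance, the ionospheric electric field.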
Abstract:
This study examines the junction of Christian mission, Christian education and voluntary work in the Christian student voluntary association Opiskelijain Lähetysliitto (OL), the Finnish successor to the Student Volunteer Movement. The main subjects are the structure and content of mission education as one aspect of Lutheran education, and the reasons for expressing mission interest through voluntary work. The research questions are as follows: What kind of organization has the OL been? What has mission education been like in the OL? Why have the former chairpersons participated in the OL? How have purposiveness and intentionality arisen among the former chairpersons? The study is empirical despite its historical and retrospective view, since the OL is explored over the period 1972–2000. The data consist of the OL's annual reports, membership applications (N=629) and interviews with all 25 former chairpersons. The data are analysed by qualitative and quantitative content analysis in a partly inductive and partly deductive manner. The pedagogical framework arises from situated learning theory (Lave & Wenger 1991), complemented with the criteria for meaningful learning (Jonassen 1995), the octagon model of volunteer motivation (Yeung 2004) and the definitions of intentionality and purposiveness in the theory of teachers' pedagogical thinking (Kansanen et al. 2000). The analysis of the archive data showed that the activities of the OL are reminiscent of those of the missions of Finnish Evangelical Lutheran Church congregations. The biggest difference was that all OL participants were young adults, the age group that is the greatest challenge to the Church. The OL is therefore an interesting context in which to explore mission education and mission interest. The key result of the study is a model of mission education. The model has three educational components: values, goals and methods. The gist of the model is formed by the goals. The main goal is the arousing and strengthening of mission interest, which has emotional, cognitive and practical aspects. The subgoals create the horizontal–vertical and inward–outward dimensions of the model, which are the metalevels of mission education. The subgoals reveal that societal and religious education may embody a missionary dimension when they are understood as missionary training. Further, a distinction between mission education and missionary training was observed: the former emphasizes the main goal of the model and the latter underlines the subgoals. Based on the vertical dimension of the model, the study suggests that the definition of religious competence needs to be complemented with missional competence. The reasons for participating in the OL were found to be diverse, as noted in other studies on volunteering and motivating factors, and were typical of young people, such as the importance of social relations. The study identified new motivational themes situated in the middle of the continuity–newness and distance–proximity dimensions, which were not found in Yeung's research. Mission interest as voluntary work appeared as oriented either towards one's own spirituality or towards the social community. On the other hand, mission interest was manifested as intentional education aiming either to improve the community or to promote the Christian mission; in the latter case the mission was seen as a purpose in life and as a future profession.
Keywords: mission, Christian education, voluntary work, mission education, mission interest, student movement
Abstract:
Detecting Earnings Management Using Neural Networks. In trying to balance relevance and reliability of accounting data, generally accepted accounting principles (GAAP) allow company management, to some extent, to use judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management, the majority of which are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. Neural networks are an alternative to linear regression that can handle non-linear relationships. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used, in which both expense and revenue manipulation ranging between -5% and 5% of lagged total assets are simulated. Furthermore, two neural network-based models and two linear regression-based models are applied to a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
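To make the comparison concrete, here is a minimal sketch of estimating discretionary accruals as residuals, once with the linear Jones (1991) regression and once with a small feed-forward network on the same regressors; the panel data are simulated with a deliberately non-linear revenue term, and all coefficients are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Simulated firm-years (invented, not the thesis data). Jones (1991)
# regressors, each deflated by lagged total assets A_{t-1}:
n = 500
inv_assets = rng.uniform(1e-4, 1e-2, n)   # 1 / A_{t-1}
d_rev = rng.normal(0.05, 0.20, n)         # change in revenues / A_{t-1}
ppe = rng.uniform(0.1, 0.8, n)            # gross PPE / A_{t-1}
X = np.column_stack([inv_assets, d_rev, ppe])

# Total accruals with a non-linear revenue effect, as prior research suggests:
accruals = 0.1 * d_rev - 0.3 * d_rev**2 - 0.05 * ppe + rng.normal(0, 0.02, n)

# Linear Jones model: discretionary accruals are the OLS residuals.
da_lin = accruals - LinearRegression().fit(X, accruals).predict(X)

# Feed-forward network on the same regressors captures the curvature.
nn = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                  max_iter=5000, random_state=0).fit(X, accruals)
da_nn = accruals - nn.predict(X)

print(f"residual SD - linear: {da_lin.std():.4f}, neural net: {da_nn.std():.4f}")
```

Under this simulation the linear residuals absorb the unmodelled curvature, which is exactly the bias the thesis attributes to the regression-based detection models.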