Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis aimed at avoiding forest degradation, etc. Recent inventory methods rely strongly on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there generally are no direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which may be based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. For cost-efficient inventories, field work can partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
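As an illustration of the automated variable-selection step described above, here is a minimal sketch assuming hypothetical data: many possibly collinear features extracted from remote sensing data, a linear "black-box" model, and a cross-validated sparsity penalty that selects variables automatically. It illustrates the general technique, not the thesis's actual estimation procedure.

```python
# Minimal sketch: automated variable selection over collinear remote sensing
# features with a cross-validated Lasso penalty. All data are hypothetical.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_plots, n_features = 200, 120                         # field plots x features
X = rng.normal(size=(n_plots, n_features))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n_plots)    # deliberately collinear pair
y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.5, size=n_plots)  # e.g. stem volume

X_std = StandardScaler().fit_transform(X)
model = LassoCV(cv=5).fit(X_std, y)                    # penalty chosen objectively by CV

selected = np.flatnonzero(model.coef_)                 # features with nonzero weight
print(f"kept {selected.size} of {n_features} features:", selected[:10])
```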
Abstract:
The dominant role of English as an international language, together with other globalization trends, also affects Swedish-speaking Finland. These trends in turn affect the conditions for learning and teaching English as a foreign language, that is, the teaching objectives, the expected pupil and teacher roles, the appropriateness of materials, and teachers' and pupils' initial experiences of English and English-speaking countries. This study examines the conditions for learning and professional development in the Swedish-language beginner classroom of English as a foreign language. The starting point of 351 beginners in English as a foreign language and 19 of their teachers is described and analysed. The results indicate that English is becoming a second language, rather than a traditional foreign language, for many young pupils. These pupils also have good opportunities to learn English outside school. This was not the case for all pupils, however, which indicates considerable heterogeneity and also regional variation in the Finland-Swedish classroom of English as a foreign language. The teacher results indicate that some teachers have managed to tackle the conditions they face in a constructive way. Other teachers express frustration over their work situation, the curriculum, the teaching materials and other actors of importance for the school environment. The study shows that the conditions for learning and teaching English as a foreign language vary across Swedish-speaking Finland. To support pupils' and teachers' development, it is proposed that the dialogue between actors at different levels of society be improved and systematized.
Abstract:
The aim of this study is to explore how a new concept appears in scientific discussion and research, how it diffuses to other fields and out of the scientific communities, and how networks are formed around the concept. Text and terminology attract the interest of a reader in the digital environment. Texts create networks in which the terminology used depends on the ideas, views and paradigms of the field. This study is based mainly on bibliographic data. Materials for the bibliometric studies have been collected from different databases. The databases are also evaluated, and their quality and coverage are discussed. The thesauri of those databases selected for more in-depth study have also been evaluated. The selected material has been used to study how long, and in which ways, an innovative publication, which can be seen as a milestone in a specific field, influences research. The concept chosen as the topic for this research is social capital, because it has been a popular concept in different scientific fields as well as in everyday speech and the media. It appeared to be a 'fashion concept' that surfaced in different situations around the millennium. The growth and diffusion of social capital publications have been studied. The terms connected with social capital in different fields and at different stages of the development have also been analyzed. The methods used in this study are growth and diffusion analysis, content analysis, citation analysis, co-word analysis and co-citation analysis. One way to understand and interpret the results of these bibliometric studies is to interview key persons who are known to have had a gatekeeper position in the diffusion of the concept. Thematic interviews with some Finnish researchers and specialists who have influenced the diffusion of social capital into Finnish scientific and social discussions provide background information. The Milestone Publications on social capital have been chosen and studied. They give answers to the question "What is social capital?" By comparing citations to the Milestone Publications with the growth of all social capital publications in a database, we can draw conclusions about the point at which social capital became generally approved 'tacit knowledge'. The contribution of the present study lies foremost in understanding the development of network structures around a new concept that has diffused within scientific communities and also outside them. Here, network means networks of researchers, networks of publications, and networks of the concepts that describe the research field. The emphasis has been on the digital environment and on the so-called information society we now live in; in this transitional stage, however, printed publications are still important and widely used in the social sciences and humanities. Network formation is affected by social relations and informal contacts that push new ideas. This study also gives new information about the use of different research methods, such as bibliometric methods supported by interviews and content analyses. It is evident that the interpretation of bibliometric maps presupposes qualitative information and an understanding of the phenomena under study.
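A minimal sketch of the co-word analysis step mentioned above, assuming hypothetical publication records: keyword pairs are counted across records, and the resulting co-occurrence counts are the raw material for mapping the terminology network around a concept.

```python
# Minimal co-word analysis sketch: count keyword pair co-occurrences across
# publication records. The records below are hypothetical.
from collections import Counter
from itertools import combinations

records = [
    {"social capital", "trust", "networks"},
    {"social capital", "networks", "civil society"},
    {"social capital", "health", "trust"},
]

cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(keywords), 2):   # each unordered pair once
        cooccurrence[pair] += 1

for pair, count in cooccurrence.most_common(5):
    print(pair, count)
```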
Abstract:
Today's software industry faces increasingly complicated challenges in a world where software is nearly ubiquitous in our daily lives. Consumers want products that are reliable, innovative and rich in functionality, yet at the same time affordable. The challenge for us in the IT industry is to create more complex, innovative solutions at a lower cost. This is one of the reasons why process improvement as a research area has not declined in importance. IT professionals ask themselves: "How do we keep our promises to our customers while minimizing our risk and increasing our quality and productivity?" Within the field of process improvement there are different approaches. Traditional software process improvement methods such as CMMI and SPICE focus on the quality and risk aspects of the improvement process. More lightweight approaches, such as agile methods and Lean methods, focus on keeping promises and improving productivity by minimizing waste in the development process. The research presented in this thesis was carried out with a specific goal in mind: to improve the cost-effectiveness of working methods without compromising quality. That challenge was attacked from three different angles. First, working methods are improved by introducing agile methods. Second, quality is maintained by using product-level measurement methods. Third, knowledge sharing within large companies is improved through methods that put collaboration at the centre. The movement behind agile methods emerged during the 1990s as a reaction to the unrealistic demands that the previously dominant waterfall method placed on the IT industry. Software development is a creative process and differs from other industries in that most of the daily work consists of creating something new that has not existed before. Every software developer must be an expert in her field and spends a large part of her working day creating solutions to problems she has never solved before. Although this has been a well-known fact for decades, many software projects are still managed as if they were production lines in factories. One of the goals of the agile movement is to highlight precisely this discrepancy between the innermost nature of software development and the way software projects are managed. Agile methods have been shown to work well in the contexts they were created for, i.e. small, co-located teams working in close collaboration with a committed customer. In other contexts, and especially in large, geographically distributed companies, introducing agile methods is more challenging. We have approached this challenge by introducing agile methods through pilot projects. This has two clear advantages. First, knowledge about the methods and their interaction with the context in question can be gathered incrementally, which makes it easier to develop and adapt the methods to the specific demands of that context. Second, resistance to change can more easily be overcome by introducing cultural changes carefully and by giving the target group direct first-hand contact with the new methods. Relevant product measurement methods can help software development teams improve their working methods.
For teams working with agile and Lean methods, a good set of measurement methods can be decisive for decision-making when prioritizing the list of tasks to be done. Our focus has been on supporting agile and Lean teams with internal product metrics for decision support concerning refactoring, i.e. the continuous quality improvement of the program's code and design. The decision to refactor can be a difficult one, especially for agile and Lean teams, since they are expected to justify their priorities in terms of business value. We propose a way of measuring the design quality of systems developed using the model-driven paradigm, and we construct a way of integrating this metric into agile and Lean working methods. An important part of any process improvement initiative is spreading knowledge about the new software process. This applies regardless of the kind of process being introduced, whether plan-driven or agile. We propose that methods based on collaboration in creating and evolving the process are a good way of supporting knowledge sharing. We give an overview of the process authoring tools on the market with that proposal in mind.
Abstract:
INTRODUCTION: Web-based e-learning is a teaching tool increasingly used in many medical schools and specialist fields, including ophthalmology. AIMS: This pilot study aimed to develop an internet-based course of clinical cases and to evaluate the effectiveness of this method within a graduate medical education group. METHODS: This was an interventional randomized study. First, a website was built using a distance learning platform. Sixteen first-year ophthalmology residents were then divided into two randomized groups: an experimental group, which received the intervention (use of the e-learning site), and a control group, which did not. The students answered a printed clinical case and their scores were compared. RESULTS: There was no statistically significant difference between the groups. CONCLUSION: We successfully developed the e-learning site and the respective clinical cases. Although there was no statistically significant difference between the access and non-access groups, the study was a pioneer in our department, since an online clinical case program had never previously been developed.
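A minimal sketch of the kind of two-group score comparison reported above. The abstract does not state which statistical test was used; an independent-samples t-test is shown here as one common choice, and the scores are hypothetical.

```python
# Minimal sketch: compare clinical-case scores between an intervention group
# and a control group. Test choice and data are hypothetical.
from scipy import stats

elearning_group = [7.0, 8.5, 6.0, 9.0, 7.5, 8.0, 6.5, 7.0]
control_group   = [6.5, 8.0, 7.0, 8.5, 7.0, 7.5, 6.0, 7.5]

t, p = stats.ttest_ind(elearning_group, control_group)
print(f"t = {t:.2f}, p = {p:.3f}")   # p > 0.05 means no significant difference
```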
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines methods that originate from scientific disciplines such as molecular biology, chemistry, the engineering sciences, mathematics, computer science and systems theory. Unlike "traditional" biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to particular case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as of complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potential and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
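A minimal sketch of the ODE-based, mass-action style of modelling referred to above. The reversible binding reaction A + B <-> C used here is purely illustrative, not the thesis's actual heat shock response or filament self-assembly model; rate constants are hypothetical.

```python
# Minimal sketch: numerical integration of a mass-action ODE model, the kind
# of computational model discussed in the abstract. All values hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1.0, 0.2            # hypothetical association/dissociation rates

def rhs(t, x):
    a, b, c = x
    flux = k_on * a * b - k_off * c    # net forward rate of A + B <-> C
    return [-flux, -flux, flux]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.8, 0.0])   # initial concentrations
print("state at t=10:", np.round(sol.y[:, -1], 4))
```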
Abstract:
The objective of this research project is to develop a novel force control scheme for the teleoperation of a hydraulically driven manipulator, and to implement an ideal transparent mapping between human and machine interaction, and between machine and task environment interaction. This master's thesis provides a preparatory study for the research project. The research is limited to a single-degree-of-freedom hydraulic slider paired with a 6-DOF Phantom haptic device. The key contribution of the thesis is the setup of the experimental rig, including the electromechanical haptic device, the hydraulic servo and a 6-DOF force sensor. The slider is first tested as a position servo using a previously developed intelligent switching control algorithm. Subsequently, the teleoperated system is set up and preliminary experiments are carried out. In addition to the development of the single-DOF experimental setup, methods such as passivity control in teleoperation are reviewed. The thesis also contains a review of the modeling of the servo slider, with particular reference to the servo valve. A Markov chain Monte Carlo method is utilized to improve the robustness of the model in the presence of noise.
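A minimal Metropolis-Hastings sketch in the spirit of the Markov chain Monte Carlo step mentioned above: inferring a single hypothetical servo gain K from noisy observations of a linear input-output relation v = K*u + noise. This is an illustration of the sampling technique, not the thesis's actual servo-valve model.

```python
# Minimal MCMC (Metropolis-Hastings) sketch: posterior over one model
# parameter under noisy measurements. Model, gain and noise are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
K_true, sigma = 2.5, 0.3
u = np.linspace(0.1, 1.0, 50)                        # input commands
v = K_true * u + rng.normal(scale=sigma, size=u.size)  # noisy responses

def log_post(K):                                     # flat prior, Gaussian likelihood
    return -0.5 * np.sum((v - K * u) ** 2) / sigma**2

K, samples = 1.0, []
for _ in range(5000):
    K_prop = K + 0.1 * rng.normal()                  # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(K_prop) - log_post(K):
        K = K_prop                                   # accept
    samples.append(K)

print("posterior mean gain:", np.mean(samples[1000:]))   # after burn-in
```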
Abstract:
More and more of the innovations currently being commercialized exhibit network effects; in other words, the value of using the product increases as more and more people use the same or compatible products. Although this phenomenon has been the subject of much theoretical debate in economics, marketing researchers have been slow to respond to the growing importance of network effects in new product success. Despite an increase in interest in recent years, there is no comprehensive view of the phenomenon and, therefore, only an incomplete understanding of the dimensions it incorporates. Furthermore, there is wide dispersion in its operationalization, in other words the measurement of network effects, and the currently available approaches have various shortcomings that limit their applicability, especially in marketing research. Consequently, little is known today about how these products fare in the marketplace and how they should be introduced in order to maximize their chances of success. The motivation for this study was therefore the need to increase our knowledge and understanding of the nature of network effects as a phenomenon, and of their role in the commercial success of new products. This thesis consists of two parts. The first part comprises a theoretical overview of the relevant literature and presents the conclusions of the entire study. The second part comprises five complementary, empirical research publications. Quantitative research methods and two sets of quantitative data are utilized. The results of the study suggest that there is a need to update both the conceptualization and the operationalization of the phenomenon of network effects. Furthermore, there is a need for an augmented view of customers' perceived value in the context of network effects, given that the nature of value composition has major implications for the viability of such products in the marketplace. The role of network effects in new product performance is not as straightforward as suggested in the existing theoretical literature. The overwhelming result of this study is that network effects do not directly influence product success, but rather enhance or suppress the influence of product introduction strategies. The major contribution of this study is in conceptualizing the phenomenon of network effects more comprehensively than has been attempted thus far. The study gives an augmented view of the nature of customer value in network markets, which helps explain why some products thrive in these markets whereas others never catch on. Second, the study discusses shortcomings in the way prior literature has operationalized network effects, suggesting that these limitations can be overcome in the research design. Third, the study provides some much-needed empirical evidence on how network effects, product introduction strategies and new product performance are associated. In general terms, this thesis adds to our knowledge of how firms can successfully leverage network effects in product commercialization in order to improve market performance.
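The key finding above, that network effects enhance or suppress the influence of introduction strategies rather than acting directly on success, is a moderation effect. A minimal sketch of how such an effect is typically modelled, using an interaction term in an OLS regression; the variables and data are hypothetical, not the study's actual measures.

```python
# Minimal moderation sketch: success depends on strategy mainly through the
# strategy x network-effects interaction. All variables are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
strategy = rng.normal(size=n)        # e.g. intensity of the introduction strategy
network = rng.normal(size=n)         # strength of network effects
success = 0.5 * strategy + 0.8 * strategy * network + rng.normal(size=n)

X = sm.add_constant(np.column_stack([strategy, network, strategy * network]))
fit = sm.OLS(success, X).fit()
print(fit.params)   # the last coefficient captures the moderating role
```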
Abstract:
The value and benefits of user experience (UX) are widely recognized in the modern world, and UX is seen as an integral part of many fields. This dissertation integrates UX and the understanding of end users with the early phases of software development. The concept of UX is still unclear, as witnessed by more than twenty-five definitions and an ongoing debate about its different aspects and attributes. This missing consensus creates a problem in linking UX with software development: how can the UX of end users be taken into account when it is unclear to software developers what UX means for the end users? Furthermore, the currently known methods to estimate, evaluate and analyse UX during software development are biased in favor of the phases where something concrete and tangible already exists. It would be beneficial to elaborate on UX further in the early phases of software development. Theoretical knowledge from the fields of UX and software development is presented and linked with surveyed and analysed UX attribute information from end users and UX professionals. The composition of the surveys around the 21 identified UX attributes is described, and the results are analysed in conjunction with end-user demographics. Finally, the utilization of the results is explained with a proof-of-concept utility, the Wizard of UX, which demonstrates how UX can be integrated into the early phases of software development. The process of designing, prototyping and testing this utility is an integral part of this dissertation. The analyses show statistically significant dependencies between the appreciation of UX attributes and the surveyed end-user demographics. In addition, tests conducted by software developers and an industrial UX designer both indicate the benefits and necessity of the prototyped Wizard of UX utility. According to the conducted tests, the utility meets the requirements set for it: it provides a way for software developers to raise their know-how of UX, and a possibility to consider the UX of end users through statistical user profiles during the early phases of software development. This dissertation produces new and relevant information for the UX and software development communities by demonstrating that it is possible to integrate UX into the early phases of software development.
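A minimal sketch of testing for a dependency between an end-user demographic and the appreciation of a UX attribute, as reported above. The abstract does not specify the test; a chi-square test of independence on a contingency table is shown as one standard option, and the counts are hypothetical.

```python
# Minimal sketch: chi-square test of independence between a demographic
# variable and appreciation of a UX attribute. Counts are hypothetical.
from scipy.stats import chi2_contingency

#                low  medium  high   (appreciation of e.g. one UX attribute)
table = [[30, 45, 25],    # age group 18-30
         [40, 40, 20],    # age group 31-50
         [55, 30, 15]]    # age group 51+

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
```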
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
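A minimal linear sketch of the pairwise least-squares idea behind RankRLS. For the full set of pairs, the pairwise squared loss over prediction errors e can be written as e^T L e with the complete-graph Laplacian L = n*I - 11^T, which gives a closed-form regularized solution. This illustrates the principle on hypothetical data; it is not the thesis's kernelized algorithm or its computational shortcuts.

```python
# Minimal sketch: linear pairwise least-squares ranking in closed form.
# Objective: (Xw - y)^T L (Xw - y) + lam * ||w||^2, L = n*I - 11^T.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 100, 10, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)   # relevance scores

L = n * np.eye(n) - np.ones((n, n))      # Laplacian of the complete pair graph
w = np.linalg.solve(X.T @ L @ X + lam * np.eye(d), X.T @ L @ y)

scores = X @ w                           # order new objects by these scores
print("top-ranked indices:", np.argsort(-scores)[:5])
```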
Abstract:
One of the aims of the study was to clarify the reliability and validity of the Job Diagnostic Survey (JDS) and the Eigenzustand (EZ) method as measures of the objective characteristics of work and of short-term mental work load in Finnish data. The reliability and validity were examined taking into consideration the theoretical backgrounds of the methods and the reliability of the measurements. The methods were used to determine the preconditions for organisational development based on self-improvement, and to clarify the impacts of the working environment (organisational functioning and job characteristics) on a worker's mental state and health. The influences were examined on a general level, regardless of individual personal or specific contextual factors. One aim was also to clarify how cognitions and emotions are intertwined and how they influence a person's perception of the working environment. The data consisted of 15 blue-collar organisations in the public sector. The organisations were divided into target and comparison groups depending on the research frames. The data were collected by postal questionnaires. Exploratory and confirmatory factor analyses (Lisrel) were used as the main statistical methods for examining the structures of the instruments and the effects between the variables. It was shown that it is possible for organisations to develop their working conditions themselves, given specific preconditions. The progress of the development processes could be demonstrated by the amount of development activity as well as by changes in the mental well-being (ability to act) and sickness absenteeism of the personnel. The JDS and EZ methods were found to be reliable and valid measures in the Finnish data. It was shown that, in addition to the objective working environment (organisational functioning and job characteristics), a personal factor such as self-esteem also influences a person's perception of mental work load. However, the influence did not seem to be direct. The importance of job satisfaction as a general indicator of perceived working conditions was emphasised. Emotional and cognitive factors were found to be functionally intertwined, constituting a common factor. Organisational functioning and the characteristics of work were connected with a person's health as measured by sickness absenteeism.
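A minimal sketch of the exploratory factor analysis step mentioned above, shown here with scikit-learn rather than Lisrel. The questionnaire items, responses and two-factor structure are hypothetical.

```python
# Minimal exploratory factor analysis sketch: recover latent factor loadings
# from simulated questionnaire responses. All data are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n_respondents, n_items = 250, 12
latent = rng.normal(size=(n_respondents, 2))            # two latent factors
loadings = rng.normal(size=(2, n_items))                # true item loadings
responses = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=2).fit(responses)
print(np.round(fa.components_, 2))   # estimated loadings of items on factors
```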
Abstract:
Search engine optimization and marketing is a set of processes widely used on websites to improve search engine rankings, generate quality web traffic and increase ROI. Content is the most important part of any website. CMS-based web development has become essential for most organizations and online businesses building their online systems and websites. Every online business using a CMS wants to attract users (customers) in order to generate profit and ROI. This thesis comprises a brief study of existing SEO methods, tools and techniques, and of how they can be implemented to optimize a content-based website. As its result, the study provides recommendations on how to use SEO methods, tools and techniques to optimize CMS-based websites for the major search engines. The study compares the SEO features of popular CMS systems such as Drupal, WordPress and Joomla, and shows how the implementation of SEO can be improved on these systems. Knowledge of search engine indexing and of how search engines work is essential for a successful SEO campaign. This work is a complete guideline for web developers and SEO experts who want to optimize a CMS-based website for all major search engines.
Abstract:
The state of Ceará, Brazil, has 75% of its area covered by the Brazilian semiarid region, with its peculiar features. In this state, dams constitute water infrastructure of strategic importance, ensuring, both in time and space, development and the water supply for the population. However, the construction of reservoirs results in various impacts that should be carefully considered when deciding on their implementation. One of the impacts identified as negative is increased evaporation, which constitutes a major component of the water balance in reservoirs, especially in arid regions. Several methods for estimating evaporation have been proposed over time, many of them deriving from the Penman equation. This study evaluated six different methods for estimating evaporation in order to determine the most suitable for use in hydrological models of the water balance in reservoirs in the state of Ceará. The tested methods were those proposed by Penman, Kohler-Nordenson-Fox, Priestley-Taylor, de Bruin-Keijman, Brutsaert-Stricker and de Bruin. The methods performed well when tested against the water balance during the dry season, and Priestley-Taylor was the most appropriate, since the water balance simulated with evaporation estimated by this method was closest to the water balance observed from measurements of reservoir level and the elevation-volume curve provided by the Company of Management of Water Resources of the state of Ceará (COGERH).
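A minimal sketch of the Priestley-Taylor estimate, the method the study found most suitable. Standard FAO-style expressions are used for the saturation vapour pressure slope and the psychrometric constant; the input values are hypothetical.

```python
# Minimal Priestley-Taylor evaporation sketch: E = alpha * D/(D+gamma) * (Rn-G)/lambda.
# Constants follow common FAO conventions; inputs are hypothetical.
import math

def priestley_taylor(rn, g, t_air, alpha=1.26, gamma=0.066, lam=2.45):
    """Evaporation in mm/day.

    rn, g : net radiation and ground heat flux [MJ m-2 day-1]
    t_air : mean air temperature [deg C]
    gamma : psychrometric constant [kPa/degC]; lam : latent heat [MJ/kg]
    """
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))   # sat. vapour pressure [kPa]
    delta = 4098.0 * es / (t_air + 237.3) ** 2                # slope of es curve [kPa/degC]
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

print(f"{priestley_taylor(rn=15.0, g=1.0, t_air=28.0):.2f} mm/day")
```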
Abstract:
This research investigates the relations between communication and the development of family farmers in the Pontal do Paranapanema region (state of São Paulo), mainly in the municipality of Teodoro Sampaio. Family agriculture in this municipality is represented basically by beneficiaries of the Agrarian Reform program. The main hypothesis is that communication contributes positively to the development of these family farmers. Thus, this study aimed to understand the communication practices of these farmers and relate them to the development of their families. Quantitative and qualitative research methods were used. The development proxy is understood as the combination of the family's living conditions and production. Among the main results, it was found that the effect of communication on "life and production conditions" increases as farmers at higher levels of "development" are analysed; thus the main hypothesis should not be rejected for the farmers in a higher development condition. For the others, communication did not have the same effect, because many settled families are not focused on agricultural activities. The main suggestion is to improve the ways in which producers, and the public-service professionals who work with them, access information.
Abstract:
Intensive and critical care nursing is a speciality in its own right, with its own nature, within the nursing profession. This speciality poses its own demands on nursing competencies. Intensive and critical care nursing is focused on severely ill patients and their significant others. The patients are comprehensively cared for and constantly monitored, and their vital functions are sustained artificially. The main goal is to gain time to treat the cause of the patient's condition or illness. The purpose of this empirical study was i) to describe and define competence and competence requirements in intensive and critical care nursing, ii) to develop a basic measurement scale for competence assessment in intensive and critical care nursing for graduating nursing students, and iii) to describe and evaluate graduating nursing students' basic competence in intensive and critical care nursing, using ICU nurses' self-evaluated basic competence as the reference basis. The main focus of this study was, however, on the outcomes of nursing education in this nursing speciality. The study was carried out in several phases: basic exploration of competence (phases 1 and 2), instrumentation of competence (phase 3) and evaluation of competence (phase 4). Phase 1 (n=130) evaluated graduating nursing students' basic biological and physiological knowledge and skills for working in intensive and critical care with the Basic Knowledge Assessment Tool version 5 (BKAT-5, Toth 2012). Phase 2 focused on defining competence in intensive and critical care nursing with the help of a literature review (n=45 empirical studies), and competence requirements in intensive and critical care nursing with the help of experts (n=45 experts) in a Delphi study. In phase 3 the Intensive and Critical Care Nursing Competence Scale (ICCN-CS) was developed and tested twice (pilot test 1: n=18 students and n=12 nurses; pilot test 2: n=56 students and n=54 nurses). Finally, in phase 4, graduating nursing students' competence was evaluated with the ICCN-CS and BKAT version 7 (Toth 2012). In order to develop a valid competence assessment scale for graduating nursing students and to evaluate and establish their competence, empirical data were retrieved at the same time from both graduating nursing students (n=139) and ICU nurses (n=431). Competence can be divided into clinical and general professional competence. It can be defined as the specific knowledge base, skill base, attitude and value base and experience base of nursing, together with the personal base of an intensive and critical care nurse. The personal base was excluded from this self-evaluation-based scale. The ICCN-CS-1 consists of 144 items (6 sum variables). It became evident that the experience base of competence is not a suitable sum variable in a holistic intensive and critical care competence scale for graduating nursing students, because of their limited experience in this special nursing area. The ICCN-CS-1 is a reliable and tolerably valid scale for use among graduating nursing students and ICU nurses. Among the students, basic competence in intensive and critical care nursing was self-rated as good by 69%, as excellent by 25% and as moderate by 6%. However, graduating nursing students' basic biological and physiological knowledge and skills for working in intensive and critical care were poor.
The students rated their clinical and professional competence as good, and their knowledge base and skill base as moderate, giving slightly higher ratings for their knowledge base than for their skill base. Differences in basic competence emerged between graduating nursing students and ICU nurses. The students' self-ratings of both their basic competence and their clinical and professional competence were significantly lower than the nurses' ratings. The students' self-ratings of their knowledge and skill base were also statistically significantly lower than the nurses' ratings. However, both groups reported the same attitude and value base, which was excellent. The strongest factor explaining the students' conception of their competence was their experience of autonomy in nursing. Conclusions: Competence in intensive and critical care nursing is a multidimensional concept. Basic competence in intensive and critical care nursing can be measured with a self-evaluation-based scale, but an objective evaluation method should be used alongside it. Graduating nursing students' basic competence in intensive and critical care nursing is good, but their knowledge and skill base are moderate, and the biological and physiological knowledge base in particular is poor. Therefore, future education in intensive and critical care nursing should focus both on strengthening students' biological and physiological knowledge base and on strengthening their overall skill base. Practical implications are presented for nursing education, practice and administration. Future research should focus on education methods and content, the mentoring of clinical practice and orientation programmes, as well as on the further development of the scale.