15 results for propositional linear-time temporal logic
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This thesis introduces an extension of Chomsky’s context-free grammars equipped with operators for referring to left and right contexts of strings. The new model is called grammar with contexts. The semantics of these grammars are given in two equivalent ways — by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it only allows one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed to an equivalent grammar without context operators at all. This allows one to represent the syntax of languages in a more succinct way by utilizing context specifications. Linear grammars with contexts turned out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
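The cubic-time tabular algorithm mentioned above extends classical context-free parsing. As a point of reference only (this is the classical prototype, not the thesis's algorithm for grammars with contexts), here is a minimal Python sketch of CYK-style tabular recognition for a grammar in Chomsky normal form; the dictionary encoding of unary and binary rules is an illustrative assumption.

```python
from itertools import product

def cyk_recognize(word, unary, binary, start):
    """Tabular (CYK) recognition for a grammar in Chomsky normal form.

    unary:  dict mapping terminal -> set of nonterminals (rules A -> a)
    binary: dict mapping (B, C)   -> set of nonterminals (rules A -> B C)
    """
    n = len(word)
    if n == 0:
        return False  # CNF grammars as encoded here do not derive the empty string
    # table[i][j] = set of nonterminals deriving word[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        table[i][i] = set(unary.get(a, ()))
    for length in range(2, n + 1):          # substring length
        for i in range(n - length + 1):     # start position
            j = i + length - 1
            for k in range(i, j):           # split point
                for B, C in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# Tiny example grammar: S -> A B, A -> 'a', B -> 'b'
unary = {'a': {'A'}, 'b': {'B'}}
binary = {('A', 'B'): {'S'}}
print(cyk_recognize("ab", unary, binary, 'S'))  # True
print(cyk_recognize("ba", unary, binary, 'S'))  # False
```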
Abstract:
Crossroads, crucibles and refuges are three words that may describe natural coastal lagoon environments. The words refer to the complex mix of marine and terrestrial influences, prolonged dilution due to the semi-enclosed nature and the function of a habitat for highly diverse plant and animal communities, some of which are endangered. To attain a realistic picture of the present situation, high vulnerability to anthropogenic impact should be added to the description. As the sea floor in coastal lagoons is usually entirely photic, macrophyte primary production is accentuated compared with open sea environments. There is, however, a lack of proper knowledge on the importance of vegetation for the general functioning of coastal lagoon ecosystems. The aim of this thesis is to assess the role of macrophyte diversity, cover and species identity over temporal and spatial scales for lagoon functions, and to determine which steering factors primarily restrict the qualitative and quantitative composition of vegetation in coastal lagoons. The results are linked to patterns of related trophic levels, and the indicative potential of vegetation for assessment of general conditions in coastal lagoons is evaluated. This thesis includes five field studies conducted in flads and glo-flads in the brackish-water northern Baltic Sea. Flads and glo-flads are defined as a Baltic variety of coastal lagoons which, due to an inlet threshold and post-glacial land-uplift, will slowly be isolated from the open sea. This process shrinks inlet size, increases exposure and water retention, and is called habitat isolation. The studied coastal lagoons are situated in the archipelago areas of the eastern coast of Sweden, the Åland Islands and the south-west mainland of Finland, where land-uplift amounts to ca. 5 mm per year. Out of 400 evaluated sites, a total of 70 lagoons varying in inlet size, archipelago position and anthropogenic influence were chosen for further inventory to cover the essential environmental variation. Vegetation composition, cover and richness were measured together with several hydrographic and morphometric variables in the lagoons, both seasonally and inter-annually, to cover the general regional, local and temporal patterns influencing lagoon and vegetation development. On a smaller, species-level scale, the effects of macrophyte species identity and richness on the fish habitat function were studied by examining the influence of plant interaction on juvenile fish diversity. Thus, the active selection of plant mono- and polycultures by fish and the diversity of fish in the respective cultures were examined and related to plant height and water depth. The lagoons and their vegetation composition were found to experience a regime shift initiated by increased habitat isolation along with land-uplift. Vegetation composition altered, richness decreased and cover increased, forming a less isolated and a more isolated regime, named the vascular plant regime and the charophyte regime, respectively, according to the dominant vegetation. As total phosphorus in the water, turbidity and the impact of regional influences decreased in parallel, the dominance of charophytes and increasing cover seemed to buffer and stabilize conditions in the charophyte regime and indicated an increased functional role of vegetation for the lagoon ecosystem. The regime pattern was unaffected by geographical differences, while strong anthropogenic impact seemed to distort the pattern due to the loss of especially Chara tomentosa L.
in the charophyte regime. The regimes were further found to be unperturbed by short-term temporal fluctuations. In fact, the seasonal and inter-annual dynamics reinforced the functional difference between the regimes through the increasing role of vegetation along habitat isolation and the resemblance of the charophyte regime to lake environments. For instance, the greater total phosphorus and chlorophyll a concentrations in the water at the beginning of the season in the charophyte regime, compared with the vascular plant regime, showed a steeper reduction along the season, to even lower values than in the vascular plant regime. Despite the regional importance and positive relationship of macrophyte diversity in relation to trophic diversity, species identity was underlined in the results of this thesis, especially with decreasing spatial scale. This result was supported partly by the increased role of charophytes in the functioning of the charophyte regime, but even more explicitly by the species-specific preference of juvenile fish for tall macrophyte monocultures. On a smaller, species-level scale, tall plant species in monoculture seemed to be able to increase their length, indicating that negative selection forms preferred habitat structures, which increase fish diversity. This negative relationship between plant and fish diversity suggests a shift in diversity patterns among trophic levels on a smaller scale. Thus, as diversity patterns seem complex and diverge among spatial scales, it might be ambiguous to extend the understanding of diversity relationships from one trophic level to another. Altogether, the regime shift described here presents similarities to the regime development in marine lagoon environments and shallow lakes subjected to nutrient enrichment. However, due to nutrient buffering by vegetation with increased isolation and water retention as a consequence of the inlet threshold, the development seems opposite to the course along a eutrophication gradient described in marine lagoons lacking an inlet threshold, where the role of vegetation decreases. Thus, the results imply devastating consequences of inlet dredging (decreasing isolation) in terms of vegetation loss and nutrient release, and call for increased conservational supervision. Especially the red-listed charophytes would suffer from such interference, and the consequences are likely also to impair juvenile fish production. The fact that a species new to Finland, Chara connivens Salzm. ex Braun 1835, was discovered during this study further indicates the potential of the lagoons to serve as refuges for rare species.
Abstract:
This dissertation has two almost unrelated themes: privileged words and Sturmian words. Privileged words are a new class of words introduced recently. A word is privileged if it is a complete first return to a shorter privileged word, the shortest privileged words being letters and the empty word. Here we give and prove almost all results on privileged words known to date. On the other hand, the study of Sturmian words is a well-established topic in combinatorics on words. In this dissertation, we focus on questions concerning repetitions in Sturmian words, reproving old results and giving new ones, and on establishing completely new research directions. The study of privileged words presented in this dissertation aims to derive their basic properties and to answer basic questions regarding them. We explore a connection between privileged words and palindromes and seek out answers to questions on context-freeness, computability, and enumeration. It turns out that the language of privileged words is not context-free, but privileged words are recognizable by a linear-time algorithm. A lower bound on the number of binary privileged words of given length is proven. The main interest, however, lies in the privileged complexity functions of the Thue-Morse word and Sturmian words. We derive recurrences for computing the privileged complexity function of the Thue-Morse word, and we prove that Sturmian words are characterized by their privileged complexity function. As a slightly separate topic, we give an overview of a certain method of automated theorem-proving and show how it can be applied to study privileged factors of automatic words. The second part of this dissertation is devoted to Sturmian words. We extensively exploit the interpretation of Sturmian words as irrational rotation words. The essential tools are continued fractions and elementary, but powerful, results of Diophantine approximation theory. With these tools at our disposal, we reprove old results on powers occurring in Sturmian words with emphasis on the fractional index of a Sturmian word. Further, we consider abelian powers and abelian repetitions and characterize the maximum exponents of abelian powers with given period occurring in a Sturmian word in terms of the continued fraction expansion of its slope. We define the notion of abelian critical exponent for Sturmian words and explore its connection to the Lagrange spectrum of irrational numbers. The results obtained are often specialized for the Fibonacci word; for instance, we show that the minimum abelian period of a factor of the Fibonacci word is a Fibonacci number. In addition, we propose a completely new research topic: the square root map. We prove that the square root map preserves the language of any Sturmian word. Moreover, we construct a family of non-Sturmian optimal squareful words whose language the square root map also preserves. This construction yields examples of aperiodic infinite words whose square roots are periodic.
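To illustrate the definition quoted above (the empty word and single letters are privileged, and a longer word is privileged when it is a complete first return to a shorter privileged word, i.e. that word occurs in it exactly twice, as a prefix and as a suffix), here is a naive Python checker. It is an expository sketch of the definition, not the linear-time recognition algorithm referred to in the abstract.

```python
from functools import lru_cache

def occurrences(u, w):
    """Number of (possibly overlapping) occurrences of u in w."""
    return sum(1 for i in range(len(w) - len(u) + 1) if w[i:i + len(u)] == u)

@lru_cache(maxsize=None)
def is_privileged(w):
    """Naive check of the recursive definition of privileged words."""
    if len(w) <= 1:
        return True  # the empty word and single letters are privileged
    # w must be a complete first return to a shorter privileged word u:
    # u is both a prefix and a suffix of w, and occurs in w exactly twice.
    for k in range(1, len(w)):
        u = w[:k]
        if w.endswith(u) and occurrences(u, w) == 2 and is_privileged(u):
            return True
    return False

words = ("aa", "ab", "aba", "abab", "aabaa")
print({w: is_privileged(w) for w in words})
# Expected: {'aa': True, 'ab': False, 'aba': True, 'abab': False, 'aabaa': True}
```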
Abstract:
This dissertation describes a networking approach to infinite-dimensional systems theory, where there is a minimal distinction between inputs and outputs. We introduce and study two closely related classes of systems, namely the state/signal systems and the port-Hamiltonian systems, and describe how they relate to each other. Some basic theory for these two classes of systems and the interconnections of such systems is provided. The main emphasis lies on passive and conservative systems, and the theoretical concepts are illustrated using the example of a lossless transfer line. Much remains to be done in this field and we point to some directions for future studies as well.
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
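The welding time models described above are obtained by linear regression over data collected from welded assemblies. The following Python sketch shows the general form of such a fit on invented data, assuming hypothetical predictors (total weld length and number of welds); it does not reproduce the thesis's actual variables or coefficients.

```python
import numpy as np

# Hypothetical observations: [total weld length (m), number of welds] -> welding time (h)
X_raw = np.array([[12.0, 18], [35.5, 40], [8.2, 10], [60.0, 75], [22.4, 30]])
t = np.array([3.1, 9.0, 2.0, 15.8, 5.9])

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, residuals, rank, _ = np.linalg.lstsq(X, t, rcond=None)
intercept, per_metre, per_weld = coef
print(f"time = {intercept:.2f} + {per_metre:.3f}*length + {per_weld:.3f}*welds  (illustrative fit)")

# Predict welding time for a new assembly (illustrative only)
new = np.array([1.0, 25.0, 28])
print(f"predicted time: {new @ coef:.2f} h")
```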
Abstract:
Raw measurement data does not always immediately convey useful information, but applying mathematical and statistical analysis tools to the measurement data can improve the situation. Data analysis can offer benefits like acquiring meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, mathematical tools such as Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB) are used. Later on, linear regression is used to build a model based on a subset of variables that seem to have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both SB and linear regression models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model fits the whole available dataset well, and it is therefore proposed as future work either to build piecewise non-linear regression models on the same dataset, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
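A minimal sketch of the kind of check referred to above, fitting an ordinary least-squares model on a few selected variables and comparing the spread of its predictions with the spread of the measured target, using Python and synthetic data; the variables and data are assumptions, not the plant's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative synthetic data: three selected process variables and a noisy target
n = 200
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=1.5, size=n)  # weak signal, strong noise

# Ordinary least squares on the selected variables
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# A model that misses much of the variability predicts with far less spread than the data shows
print(f"variance of measurements: {np.var(y):.2f}")
print(f"variance of predictions:  {np.var(y_hat):.2f}")
print(f"R^2: {1 - np.var(y - y_hat) / np.var(y):.2f}")
```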
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant-based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
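To give a flavour of the invariant-based reasoning mentioned above (without the diagrammatic notation developed in the thesis), here is a standard textbook example in Python, division by repeated subtraction, with its loop invariant stated explicitly as assertions.

```python
def divmod_by_subtraction(a, b):
    """Compute (q, r) with a == q*b + r and 0 <= r < b, for a >= 0 and b > 0."""
    assert a >= 0 and b > 0
    q, r = 0, a
    # Loop invariant: a == q*b + r and r >= 0
    while r >= b:
        assert a == q * b + r and r >= 0  # the invariant holds at the start of each iteration
        q, r = q + 1, r - b
    # Invariant plus the negated loop guard (r < b) give the postcondition
    assert a == q * b + r and 0 <= r < b
    return q, r

print(divmod_by_subtraction(17, 5))  # (3, 2)
```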
Abstract:
This dissertation examined skill development in music reading by focusing on the visual processing of music notation in different music-reading tasks. Each of the three experiments of this dissertation addressed one of the three types of music reading: (i) sight-reading, i.e. reading and performing completely unknown music, (ii) rehearsed reading, during which the performer is already familiar with the music being played, and (iii) silent reading with no performance requirements. The use of the eye-tracking methodology allowed the readers’ eye movements to be recorded during music reading with great precision. Due to the lack of coherence in the small body of prior studies on eye movements in music reading, the dissertation also had a strong methodological emphasis. The present dissertation thus pursued two major aims: (1) to investigate the eye-movement indicators of skill and skill development in sight-reading, rehearsed reading and silent reading, and (2) to develop and test suitable methods that can be used by future studies on the topic. Experiment I focused on the eye-movement behaviour of adults during their first steps of learning to read music notation. The longitudinal experiment spanned a nine-month music-training period, during which 49 participants (university students taking part in a compulsory music course) sight-read and performed a series of simple melodies in three measurement sessions. Participants with no musical background were termed “novices”, whereas “amateurs” had had musical training prior to the experiment. The main issue of interest was the changes in the novices’ eye movements and performances across the measurements, while the amateurs offered a point of reference for the assessment of the novices’ development. The experiment showed that the novices tended to sight-read in a more stepwise fashion than the amateurs, the latter group manifesting more back-and-forth eye movements. The novices’ skill development was reflected in the faster identification of note symbols involved in larger melodic intervals. Across the measurements, the novices also began to show sensitivity to the melodies’ metrical structure, which the amateurs demonstrated from the very beginning. The stimulus melodies consisted of quarter notes, making the effects of meter and larger melodic intervals distinguishable from effects caused by, say, different rhythmic patterns. Experiment II explored the eye movements of 40 experienced musicians (music education students and music performance students) during temporally controlled rehearsed reading. This cross-sectional experiment focused on the eye-movement effects of one-bar-long melodic alterations placed within a familiar melody. The synchronizing of the performance and eye-movement recordings enabled the investigation of the eye-hand span, i.e., the temporal gap between a performed note and the point of gaze. The eye-hand span was typically found to remain around one second. Music performance students demonstrated increased processing efficiency through their shorter average fixation durations as well as in the two examined eye-hand span measures: these participants used larger eye-hand spans more frequently and inspected more of the musical score during the performance of one metrical beat than students of music education.
Although all participants produced performances almost indistinguishable in terms of their auditory characteristics, the altered bars indeed affected the reading of the score: the general effects of expertise in terms of the two eye-hand span measures, demonstrated by the music performance students, disappeared in the face of the melodic alterations. Experiment III was a longitudinal experiment designed to examine the differences between adult novice and amateur musicians’ silent reading of music notation, as well as the changes the 49 participants manifested during a nine-month music course. From a methodological perspective, a new opening for research on eye movements in music reading was the inclusion of a verbal protocol in the research design: after viewing the musical image, the readers were asked to describe what they had seen. A two-way categorization of the verbal descriptions was developed in order to assess the quality of the extracted musical information. More extensive musical background was related to shorter average fixation duration, more linear scanning of the musical image, and more sophisticated verbal descriptions of the music in question. No apparent effects of skill development were observed for the novice music readers alone, but all participants improved their verbal descriptions towards the last measurement. Apart from the background-related differences between groups of participants, combining verbal and eye-movement data in a cluster analysis identified three styles of silent reading. The finding demonstrated individual differences in how the freely defined silent-reading task was approached. This dissertation is among the first presentations of a series of experiments systematically addressing the visual processing of music notation in various types of music-reading tasks and focusing especially on the eye-movement indicators of developing music-reading skill. Overall, the experiments demonstrate that music-reading processes are affected not only by “top-down” factors, such as musical background, but also by the “bottom-up” effects of specific features of music notation, such as pitch heights, metrical division, rhythmic patterns and unexpected melodic events. From a methodological perspective, the experiments emphasize the importance of systematic stimulus design, temporal control during performance tasks, and the development of complementary methods to ease the interpretation of the eye-movement data. To conclude, this dissertation suggests that advances in comprehending the cognitive aspects of music reading, the nature of expertise in this musical task, and the development of educational tools can be attained through the systematic application of the eye-tracking methodology also in this specific domain.
Abstract:
In this work, image-based estimation methods, also known as direct methods, which avoid feature extraction and matching completely, are studied. Cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods which can be accommodated into real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
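A minimal sketch of the direct (image-based) estimation idea described above, reduced to a 2D translation between two images instead of full 3D pose with perspective projection and lens distortion: the cost is built from raw pixel differences and minimized numerically. The synthetic images and the use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import least_squares

def photometric_residuals(params, ref, cur):
    """Raw pixel differences between the reference image and the warped current image."""
    dx, dy = params
    warped = shift(cur, shift=(dy, dx), order=1)  # bilinear warp of the current image
    return (warped - ref).ravel()

# Smooth synthetic image (a Gaussian blob) and a reference shifted by a known amount
y, x = np.mgrid[0:64, 0:64]
cur = np.exp(-((x - 32.0) ** 2 + (y - 28.0) ** 2) / (2 * 8.0 ** 2))
true_dx, true_dy = 3.0, -2.0
ref = shift(cur, shift=(true_dy, true_dx), order=1)

# Minimize the photometric cost over the translation parameters
result = least_squares(photometric_residuals, x0=[0.0, 0.0], args=(ref, cur))
print(result.x)  # should be close to (3.0, -2.0)
```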
Abstract:
The acceleration of solar energetic particles (SEPs) by flares and coronal mass ejections (CMEs) has been a major topic of research for the solar-terrestrial physics and geophysics communities for decades. This thesis discusses theories describing first-order Fermi acceleration of SEPs through repeated crossings of a CME-driven shock. We propose that particle trapping occurs through self-generated Alfvén waves, leading to a turbulent trapping region in front of the shock. Decelerating coronal shocks are shown to be capable of efficient SEP acceleration, provided that seed particle injection is sufficient. Quasi-parallel shocks are found to inject thermal particles with good efficiency. The roles of minimum injection velocities, cross-field diffusion, downstream scattering efficiency and the cross-shock potential are investigated in detail, with downstream isotropisation timescales having a major effect on injection efficiency. Heavier elements up to iron are found to exhibit significantly harder accelerated spectra than protons. The cut-off energies of the accelerated spectra are found to scale in proportion to (Q/A)^1.5, which is explained through analysis of the spectral shape of the amplified Alfvénic turbulence. Acceleration times to different threshold energies are found to be non-linear, indicating that self-consistent time-dependent simulations are required in order to expose the full extent of the acceleration dynamics. The well-established quasilinear theory (QLT) of particle scattering is investigated by comparing QLT scattering coefficients with those found via full-orbit simulations. QLT is found to overemphasise resonance conditions. This finding supports the simplifications implemented in the presented coronal shock acceleration (CSA) simulation software. The CSA software package is used to simulate a range of acceleration scenarios. The results are found to be in agreement with well-established particle acceleration theory. At the same time, new spatial and temporal dynamics of particle population trapping and wave evolution are revealed.
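To make the quoted scaling concrete: if the cut-off energies scale as (Q/A)^1.5, an ion's cut-off relative to protons follows directly from its charge-to-mass ratio. The charge states assumed in the short calculation below are purely illustrative.

```python
# Relative cut-off energy under the (Q/A)^1.5 scaling, normalized to protons (Q/A = 1)
species = {"H+": (1, 1), "He2+": (2, 4), "O6+": (6, 16), "Fe14+": (14, 56)}  # assumed charge states
for name, (Q, A) in species.items():
    print(f"{name}: (Q/A)^1.5 = {(Q / A) ** 1.5:.3f}")
# e.g. Fe14+ -> 0.25**1.5 = 0.125, i.e. about 1/8 of the proton cut-off energy
```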
Abstract:
The objective of this research is to demonstrate the use of Lean Six Sigma methodology in a manufacturing lead time improvement project. Moreover, the goal is to develop working solutions for the target company to improve its manufacturing lead time. The theoretical background is established through exploring the literature on Six Sigma, Lean and Lean Six Sigma. The development will be done in collaboration with the related stakeholders, by following the Lean Six Sigma improvement process DMAIC and by analyzing the process data from the target company. The focus of this research is on demonstrating how to use the Lean Six Sigma improvement process DMAIC in practice, rather than on comparing Lean Six Sigma to other improvement methodologies. In order to validate the manufacturing system’s current state, improvement potential and solutions, statistical tools such as linear regression analysis were used. This ensured that all the decisions were as heavily based on actual data as possible. As a result of this research, a set of solutions was developed and implemented in the target company. These solutions included batch size reduction, bottleneck shift, first-in first-out queuing and shifting a data entry task from production planners to line workers. With the use of these solutions, the target company was able to reduce its manufacturing lead time by over one third.
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. This is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of the representation of the meaning of linguistic terms, computations with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence of the linguistic and mathematical levels of the models and on the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background and the necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty’s analytic hierarchy process (AHP) is considered in two case studies (the basic priority computation underlying AHP is sketched after this abstract) - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly the integration of peer review into the evaluation of R&D outputs is considered.
In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities: psychological diagnostics is considered and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the issues of the practical applications that are considered in the second part of the thesis.
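Since two of the case studies build on Saaty's AHP, here is a minimal sketch of the standard priority computation at its core: the principal eigenvector of a pairwise comparison matrix and Saaty's consistency index. The 3x3 comparison matrix is an illustrative assumption, and the thesis's extensions (weak consistency, large matrices, fuzzified AHP) are not reproduced.

```python
import numpy as np

# Illustrative pairwise comparison matrix for three criteria (reciprocal by construction)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priorities = normalized principal eigenvector (Saaty's eigenvector method)
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index; CR = CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty's random index)
n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)
print("priorities:", np.round(w, 3))
print("lambda_max:", round(lambda_max, 3), "CI:", round(CI, 4), "CR:", round(CI / 0.58, 4))
```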
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach provides the foundation for the analysis; the Box-Jenkins approach improves on the classical approach and deals with stationary time series; and the state space approach allows time-varying factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. The Kalman filter and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out by repeated estimation combined with MCMC, to inspect how well the different estimation methods can identify the optimal model parameters. Identification is performed in both a probabilistic and a general sense, and the results are compared in order to study and present identifiability in a more informative way.
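As a small illustration of one of the estimation methods named above, here is a minimal Yule-Walker fit of an AR(2) model in Python on simulated data; the process and its coefficients are illustrative assumptions.

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    phi = np.linalg.solve(R, r[1:p + 1])
    sigma2 = r[0] - phi @ r[1:p + 1]  # innovation variance
    return phi, sigma2

# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

phi, sigma2 = yule_walker(x, p=2)
print(np.round(phi, 3), round(sigma2, 3))  # estimates close to (0.6, -0.3) and 1.0
```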
Abstract:
Time Frames brings together a group of cultural historians studying several different periods and locations in the history of Western culture, to discuss the temporal structures which historical studies live by. How do we construct chronologies and temporal sequences? How should we approach the discontinuity and particularity of the past, and its periodization? Deploying ideas and frames from three contemporary conceptual debates - spatial organisation, constructions of experience, and the cultural embeddedness of individual agency - the writers of this book propose methods of historicizing and explore what cultural history is about today.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of intelligence. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods’ applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria: accuracy (the probability of detecting deception successfully), ease of use (how easily the method can be applied correctly), the time required to apply the method reliably, the absence of a need for special equipment, and the unobtrusiveness of the method? In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search for and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A multi-criteria analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to get first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are concentrated on a scenario where roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.