15 results for graphical authentication
in Helda - Digital Repository of the University of Helsinki
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
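The practical bottleneck of the NML criterion is its normalizing constant, the parametric complexity. As a hedged illustration of how it can nevertheless be computed efficiently for the multinomial model — a minimal sketch based on the published linear-time recurrence of Kontkanen and Myllymäki (2007), not necessarily one of the algorithms developed in this thesis:

```python
from math import comb, log

def multinomial_complexity(K, n):
    """Parametric complexity C(K, n) of the K-category multinomial model
    for sample size n, via the linear-time recurrence
    C(K, n) = C(K-1, n) + (n / (K-2)) * C(K-2, n) for K > 2."""
    # Base cases: one category has complexity 1; the binomial case
    # is a direct sum over the two possible counts.
    c_km2 = 1.0
    c_km1 = sum(comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
                for h in range(n + 1))
    if K == 1:
        return c_km2
    for k in range(3, K + 1):
        c_km2, c_km1 = c_km1, c_km1 + n * c_km2 / (k - 2)
    return c_km1

def nml_code_length(counts):
    """Stochastic complexity (in nats) of an observed count vector:
    minus the maximized log-likelihood plus log C(K, n)."""
    n = sum(counts)
    max_loglik = sum(c * log(c / n) for c in counts if c > 0)
    return -max_loglik + log(multinomial_complexity(len(counts), n))

print(nml_code_length([10, 5, 3]))  # NML code length for one example vector
```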
Abstract:
Aptitude-based student selection: a study concerning the admission processes of three technically oriented healthcare degree programmes in Finland (Orthotics and Prosthetics, Dental Technology and Optometry). The data consisted of convenience samples of preadmission information and the results of the admission processes of the three programmes during the years 1977-1986 and 2003. The number of subjects tested and interviewed in the first samples was 191, 615 and 606, and in the second 67, 64 and 89, respectively. The questions of the six studies were: I. How were different kinds of preadmission data related to each other? II. Which were the major determinants of the admission decisions? III. Did the graduated students and those who dropped out differ from each other? IV. Was it possible to predict how well students would perform in the programmes? V. How was the student selection executed in the year 2003? VI. Should clinical or statistical prediction, or both, be used? (Some remarks are presented on Meehl's argument: "Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word.") The main results of the study were as follows: Ability tests, dexterity tests and judgements of personality traits (communication skills, initiative, stress tolerance and motivation) provided unique, non-redundant information about the applicants. The available demographic variables did not bias the judgements of personality traits. In all three programme settings, four-factor solutions (personality, reasoning, gender-technical and age-vocational, with factor scores) could be extracted by the maximum likelihood method with graphical Varimax rotation. The personality factor dominated the final aptitude judgements and very strongly affected the selection decisions. There were no clear differences between graduated students and those who had dropped out in regard to the four factors. In addition, the factor scores did not predict how well the students performed in the programmes. Meehl's argument on the uncertainty of clinical prediction was supported by the results, which on the other hand did not provide any relevant data for rules on statistical prediction. No clear arguments for or against aptitude-based student selection were presented. However, the structure of the aptitude measures and their impact on the admission process are now better known. The concept of "personal aptitude" is not necessarily included in the values and preferences of those in charge of organizing the schooling. Thus, the most well-founded and cost-effective way to execute student selection is apparently to rely on, for example, the grade point averages of the matriculation examination and/or written entrance exams. According to the present study, this procedure would result in a student group with a quite different makeup (60%) from the group selected on the basis of aptitude tests. For the recruiting organizations, by contrast, "personal aptitude" may be a matter of great importance. The employers, of course, decide on personnel selection. The psychologists, if consulted, are responsible for the proper use of psychological measures.
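The four-factor extraction described above can be made concrete with a short sketch. This is a hedged illustration using scikit-learn's maximum-likelihood factor analysis with varimax rotation on synthetic stand-in data; it is not the study's data or software, and the measure set is invented:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for preadmission measurements (applicants x measures);
# the real study used ability tests, dexterity tests and rated personality
# traits such as communication skills, initiative and stress tolerance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))

# Maximum-likelihood factor analysis with varimax rotation and four
# factors, mirroring the four-factor solutions reported in the abstract.
fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = fa.fit_transform(X)   # factor scores, one row per applicant
loadings = fa.components_.T    # (measures x factors) loading matrix

print(np.round(loadings, 2))
```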
Abstract:
Bacteria play an important role in many ecological systems. The molecular characterization of bacteria using either cultivation-dependent or cultivation-independent methods reveals the large scale of bacterial diversity in natural communities, and the vastness of subpopulations within a species or genus. Understanding how bacterial diversity varies across different environments and also within populations should provide insights into many important questions of bacterial evolution and population dynamics. This thesis presents novel statistical methods for analyzing bacterial diversity using widely employed molecular fingerprinting techniques. The first objective of this thesis was to develop Bayesian clustering models to identify bacterial population structures. Bacterial isolates were identified using multilocus sequence typing (MLST), and Bayesian clustering models were used to explore the evolutionary relationships among isolates. Our method involves the inference of genetic population structures via an unsupervised clustering framework where the dependence between loci is represented using graphical models. The population dynamics that generate such a population stratification were investigated using a stochastic model, in which homologous recombination between subpopulations can be quantified within a gene flow network. The second part of the thesis focuses on cluster analysis of community compositional data produced by two different cultivation-independent analyses: terminal restriction fragment length polymorphism (T-RFLP) analysis and fatty acid methyl ester (FAME) analysis. The cluster analysis aims to group bacterial communities that are similar in composition, which is an important step for understanding the overall influences of environmental and ecological perturbations on bacterial diversity. A common feature of T-RFLP and FAME data is zero-inflation, meaning that zero values are observed much more frequently than would be expected, for example, from a Poisson distribution in the discrete case or a Gaussian distribution in the continuous case. We provide two strategies for modeling zero-inflation in the clustering framework, validated on both synthetic and empirical complex data sets. We show in the thesis that our model, which takes into account dependencies between loci in MLST data, can produce better clustering results than methods that assume independent loci. Furthermore, computer algorithms that are efficient in analyzing large-scale data were adopted to meet the increasing computational demands. Our method for detecting homologous recombination in subpopulations may provide a theoretical criterion for defining bacterial species. The clustering of bacterial community data, including T-RFLP and FAME data, provides an initial step towards discovering the evolutionary dynamics that structure and maintain bacterial diversity in the natural environment.
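The zero-inflation feature mentioned above has a simple probabilistic form. As a minimal sketch for the discrete case — an ordinary zero-inflated Poisson, not the thesis's full clustering model:

```python
from math import exp, factorial

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: with probability pi the observation is a
    structural zero; otherwise it comes from a Poisson(lam) component."""
    poisson = exp(-lam) * lam ** k / factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# A plain Poisson(2) gives P(0) ~ 0.14; mixing in 30% structural zeros
# raises it to ~0.39, the kind of excess seen in T-RFLP and FAME data.
print(zip_pmf(0, 0.0, 2.0), zip_pmf(0, 0.3, 2.0))
```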
Abstract:
Employees and students at the University of Helsinki use various services which require authentication. Some of these services require strong authentication. Traditionally this has been realized by meeting in person and presenting an official identification card. Some of these online services can be automated by implementing existing techniques for strong authentication. Currently, strong authentication is implemented with the VETUMA service. Mobile authentication is an interesting alternative method. The purpose of this paper is to study the Mobile Signature Service technology and to find out the benefits and possibilities of its use for mobile authentication at the University of Helsinki. Mobile authentication is a suitable method for implementing strong authentication and for signing documents digitally, and it can be used in many different ways at the University of Helsinki.
Abstract:
Marketing of goods under geographical names has always been common. Aims to prevent abuse have given rise to separate forms of legal protection for geographical indications (GIs) both nationally and internationally. The European Community (EC) has also gradually enacted its own legal regime to protect geographical indications. The legal protection of GIs has traditionally been based on the idea that geographical origin endows a product with exclusive qualities and characteristics. In today's world we are able to replicate almost any product anywhere, including its qualities and characteristics. One would think that this would preclude protection from most geographical names, yet the number of geographical indications seems to be rising. GIs are no longer what they used to be. In the EC it is no longer required that a product is endowed with exclusive characteristics by its geographical origin, as long as consumers associate the product with a certain geographical origin. This departure from the traditional protection of GIs is based on the premise that a geographical name extends beyond and exists apart from the product and therefore deserves protection itself. The thesis tries to clearly articulate the underlying reasons, justifications, principles and policies behind the protection of GIs in the EC and then scrutinise the scope and shape of the GI system in the light of its own justifications. The essential questions it attempts to answer are: (1) What is the basis and criteria for granting GI rights? (2) What is the scope of protection afforded to GIs? and (3) Are these both justified in the light of the functions and policies underlying the granting and protecting of GIs? Despite the differences, the actual functions of GIs are in many ways identical to those of trade marks. Geographical indications have a limited role as source and quality indicators in allowing consumers to make informed and efficient choices in the marketplace. In the EC this role is undermined by allowing ample room and discretion for uses that are arbitrary. Nevertheless, generic GIs are unable to play this role. The traditional basis for justifying legal protection seems implausible in most cases. Qualities and characteristics are more likely to be related to transportable skill and manufacturing methods than to the actual geographical location of production. Geographical indications are also incapable of protecting culture from market-induced changes. Protection against genericness, against any misuse, imitation and evocation, as well as against exploiting the reputation of a GI, seems to be there to protect the GI itself. Expanding or strengthening the already existing GI protection or using it to protect generic GIs cannot be justified with arguments on terroir or culture. The conclusion of the writer is that GIs themselves merit protection only in extremely rare cases, and usually only the source and origin function of GIs should be protected. The approach should not be any different from the one taken in trade mark law. GI protection should not be used as a means to monopolise names. At the end of the day, the scope of GI protection is nevertheless a policy issue.
Abstract:
There is an ongoing controversy as to which methods in total hip arthroplasty (THA) could provide young patients with the best long-term results. THA is an especially demanding operation in patients with severely dysplastic hips. The optimal surgical treatment for these patients also remains controversial. The aim of this study was to evaluate the long-term survival of THA in young patients (<55 years at the time of the primary operation) on a nation-wide level, and to analyze the long-term clinical and radiographical outcome of uncemented THA in patients with severely dysplastic joints. Survival of 4661 primary THAs performed for primary osteoarthritis (OA), 2557 primary THAs performed for rheumatoid arthritis (RA), and modern uncemented THA designs implanted for primary OA in young patients was analysed from the Finnish Arthroplasty Register. A total of 68 THAs were performed in 56 consecutive patients with high congenital hip dislocation between 1989 and 1994, and 68 THAs were performed in 59 consecutive patients with severely dysplastic hips and a previous Schanz osteotomy of the femur between 1988 and 1995, at the Orton Orthopaedic Hospital, Helsinki, Finland. These patients underwent a detailed physical and radiographical evaluation at a mean of 12.3 and 13.0 years postoperatively, respectively. The risk of stem revision due to aseptic loosening in young patients with primary OA was higher for cemented stems than for proximally porous-coated or HA-coated uncemented stems implanted over the 1991-2001 period. There was no difference in the risk of revision between all-poly cemented cups and press-fit porous-coated uncemented cups implanted during the same period, when the end point was defined as any revision (including exchange of liner). All uncemented stem designs studied in young patients with primary OA had >90% survival rates at 10 years. The Biomet Bi-Metric stem had a 95% (95% CI 93-97) survival rate even at 15 years. When the end point was defined as any revision, 10-year survival rates of all uncemented cup designs except the Harris-Galante II decreased to <80%. In young patients with RA, the risk of stem revision due to aseptic loosening was higher with cemented stems than with proximally porous-coated uncemented stems. In contrast, the risk of cup revision was higher for all uncemented cup concepts than for all-poly cemented cups with any type of cup revision as the end point. The Harris hip score increased significantly (p<0.001) both in patients with high congenital hip dislocation and in patients with severely dysplastic hips and a previous Schanz osteotomy, treated with uncemented THA. The Trendelenburg sign was negative in 92% and 88% of hips, respectively. There were 12 (18%) and 15 (22%) perioperative complications. The rate of survival for the CDH femoral components, with revision due to aseptic loosening as the end point, was 98% (95% CI 97-100) at 10 years in patients with high hip dislocation and 92% (95% CI 86-99) at 14 years in patients with a previous Schanz osteotomy. The rate of survival for press-fit, porous-coated acetabular components, with revision due to aseptic loosening as the end point, was 95% (95% CI 89-100) at 10 years in patients with high hip dislocation, and 98% (95% CI 89-100) in patients with a previous Schanz osteotomy. When revision of the cup for any reason was defined as the end point, 10-year survival rates declined to 88% (95% CI 81-95) and 69% (95% CI 56-82), respectively.
For young patients with primary OA, uncemented proximally circumferentially porous- and HA-coated stems are the implants of choice. However, survival rates of modern uncemented cups are no better than those of all-poly cemented cups. Uncemented proximally circumferentially porous-coated stems and cemented all-poly cups are currently the implants of choice for young patients with RA. Uncemented THA, with placement of the cup at the level of the true acetabulum, distal advancement of the greater trochanter and a femoral shortening osteotomy, provided good long-term outcomes for patients with high congenital hip dislocation. Most patients with severely dysplastic hips and a previous Schanz osteotomy can be successfully treated with the same method. However, subtrochanteric segmental shortening with angular correction gives better leg length correction for patients with a previous low-seated unilateral Schanz osteotomy.
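Implant survival rates of the kind quoted above (a percentage with a 95% CI at a given follow-up time) are typically obtained with Kaplan-Meier estimation. A minimal sketch on invented follow-up data, using the lifelines library rather than whatever software the register study employed:

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Invented follow-up data: years to revision (or censoring) and an event
# flag (1 = revised, 0 = implant still in place at last follow-up).
rng = np.random.default_rng(1)
years = rng.exponential(scale=40.0, size=100).clip(max=15.0)
revised = (years < 15.0).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(years, event_observed=revised)

# Kaplan-Meier survival estimate at 10 years, plus the 95% CI band,
# analogous to the "98% (95% CI 97-100) at 10 years" figures above.
print(kmf.survival_function_at_times(10.0))
print(kmf.confidence_interval_)
```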
Abstract:
Emissions of coal combustion fly ash through full-scale electrostatic precipitators (ESPs) were studied under different coal combustion and operating conditions. Sub-micron fly-ash aerosol emissions from a power plant boiler and the ESP were determined, and the aerosol penetration was derived from electrical mobility measurements, giving an estimate of the size range and the maximum extent to which small particles can escape. The experiments indicate a maximum penetration of 4% to 20% of the small particles when counted on a number basis instead of the normally used mass basis, while the ESP simultaneously operates at a nearly 100% collection efficiency on a mass basis. Although the penetration size range appears independent of the coal, the boiler and even the device used for emission control, the maximum penetration level on a number basis depends on the ESP operating parameters. The measured emissions were stable during stable boiler operation for a given coal, and the emissions differed from coal to coal, indicating that the sub-micron size distribution of the fly ash could be used as a specific characteristic for recognition, for instance for authenticity, provided there is an indication of known stable operation. Consequently, the results on the emissions suggest an optimum particle size range for environmental monitoring with respect to the probability of finding traces in samples. The current work also embodies an authentication system for aerosol samples, allowing post-inspection from any macroscopic sample piece. The system can comprise newly introduced devices, for mutually independent use or for use in combination with each other, arranged to extend the sampling operation length and/or the tag selection diversity. The tag for the samples can be based on naturally occurring and/or added measures of authenticity in a suitable combination. The method has applications not only in the military domain but in civil industries as well. Besides samples, the system can be applied to ink for printing banknotes or other papers of monetary value, and also in filter manufacturing for marking fibrous filters.
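The number-based penetration reported above is simply the ratio of downstream to upstream number concentration in each size bin. A minimal sketch with invented mobility-measurement values, not the plant data:

```python
import numpy as np

# Hypothetical number concentrations (1/cm^3) per sub-micron size bin,
# upstream and downstream of the ESP, as a mobility analyser would report.
upstream   = np.array([2.0e7, 5.0e7, 3.0e7, 1.0e7])
downstream = np.array([8.0e5, 6.0e6, 1.5e6, 2.0e5])

penetration = downstream / upstream   # number-based penetration per bin
print(penetration)                    # here peaks at 0.12, i.e. 12%

# Mass-based collection efficiency can still be ~100% at the same time,
# because the escaping sub-micron particles carry a negligible mass share.
```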
Abstract:
Diagnostic radiology represents the largest man-made contribution to population radiation doses in Europe. To keep the ratio of diagnostic benefit to radiation risk as high as possible, it is important to understand the quantitative relationship between the patient radiation dose and the various factors which affect it, such as the scan parameters, scan mode and patient size. Paediatric patients have a higher probability of late radiation effects, since a longer life expectancy is combined with the higher radiation sensitivity of developing organs. Experience with particular paediatric examinations may be very limited, and paediatric acquisition protocols may not be optimised. The purpose of this thesis was to enhance and compare different dosimetric protocols, to promote the establishment of paediatric diagnostic reference levels (DRLs), and to provide new data on patient doses for optimisation purposes in computed tomography (with new applications for dental imaging) and in paediatric radiography. Patient dose surveys revealed large variations in radiation exposure in paediatric skull, sinus, chest, pelvic and abdominal radiography examinations. There were variations between different hospitals and examination rooms, between different-sized patients, and between imaging techniques, emphasising the need for harmonisation of the examination protocols. For computed tomography, a correction coefficient which takes individual patient size into account in patient dosimetry was created. The presented patient size correction method can be used for both adult and paediatric purposes. Dental cone beam CT scanners provided adequate image quality for dentomaxillofacial examinations while delivering considerably smaller effective doses to the patient compared with multi-slice CT. However, large dose differences between cone beam CT scanners were not explained by differences in image quality, which indicated a lack of optimisation. For paediatric radiography, a graphical method was created for setting the diagnostic reference levels in chest examinations, and the DRLs were given as a function of patient projection thickness. Paediatric DRLs were also given for sinus radiography. The detailed information about the patient data, exposure parameters and procedures provided tools for reducing patient doses in paediatric radiography. The mean tissue doses presented for paediatric radiography enable future risk assessments. The calculated effective doses can be used for comparing different diagnostic procedures, as well as for comparing the use of similar technologies and procedures in different hospitals and countries.
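The idea of a patient-size correction can be illustrated by its best-known later analogue, the size-specific dose estimate of AAPM Report 204, which multiplies the scanner-reported CTDIvol by an exponential function of the patient's effective diameter. A hedged sketch of that idea — an analogy, not the correction coefficient developed in this thesis:

```python
from math import exp, sqrt

def effective_diameter(ap_cm, lat_cm):
    """Effective diameter from anteroposterior and lateral dimensions."""
    return sqrt(ap_cm * lat_cm)

def size_corrected_dose(ctdi_vol_mgy, ap_cm, lat_cm):
    """CTDIvol scaled by an exponential size conversion factor (the AAPM
    Report 204 fit for the 32 cm phantom) -- shown as an analogy to the
    thesis's patient size correction, not as its actual method."""
    d = effective_diameter(ap_cm, lat_cm)
    f = 3.704369 * exp(-0.03671937 * d)  # conversion factor; f ~ 1 near d ~ 36 cm
    return ctdi_vol_mgy * f

# The same reported CTDIvol implies a much higher dose for a small child
# than for a large adult.
print(size_corrected_dose(5.0, 15, 18))  # small patient: ~10 mGy
print(size_corrected_dose(5.0, 30, 45))  # large patient: ~4.8 mGy
```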
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
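Concretely, for a model with conditional distribution function F(y_t | past; θ), the quantile residual of observation y_t is r_t = Φ⁻¹(F(y_t | past; θ̂)), which is approximately i.i.d. standard normal when the model is correctly specified. A minimal sketch on a plain Gaussian AR(1)-type model — an illustration of the transformation only, not the mixture models or the thesis's own test statistics:

```python
import numpy as np
from scipy import stats

# Simulate from the "true" model: y_t ~ N(0.5 * y_{t-1}, 1).
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + rng.normal()

# Quantile residuals: push each observation through the model's
# conditional CDF, then through the standard normal quantile function.
phi_hat = 0.5  # stand-in for a consistently estimated AR coefficient
u = stats.norm.cdf(y[1:], loc=phi_hat * y[:-1], scale=1.0)
r = stats.norm.ppf(u)

# Under correct specification the residuals should look i.i.d. N(0, 1);
# here a basic omnibus normality check on them.
print(stats.normaltest(r))
```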
Abstract:
Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education programme. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first one, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on or assessing their knowledge and experience about physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the formation of knowledge about physical concepts and laws in a concise and simplified form while still retaining authenticity with respect to the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and their knowledge became more organized and coherent. It is shown that the flow charts and the didactical reconstructions behind them had an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality - the fact that many ways of classifying information are independent of each other and can be applied in combinations. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article two implementations, which have been carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
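The fixed-field layer-name idea can be sketched with a tiny parser. The field names and widths below are simplified illustrative assumptions inspired by the standard's mandatory categories (responsible party, building element, graphical versus additional presentation information), not the normative ISO 13567 layout:

```python
# Hypothetical, simplified field layout; '-' pads unused positions,
# as is common in fixed-width framework layer standards.
FIELDS = [("agent", 2), ("element", 6), ("presentation", 2)]

def parse_layer_name(name: str) -> dict:
    """Split a fixed-width layer name into its named fields."""
    out, pos = {}, 0
    for field, width in FIELDS:
        out[field] = name[pos:pos + width].rstrip("-")
        pos += width
    return out

# 'A' = architect, '21' = an element code, 'D' = direct drawing graphics
# (all three codes invented for the example).
print(parse_layer_name("A-21----D-"))
# -> {'agent': 'A', 'element': '21', 'presentation': 'D'}
```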
Abstract:
Activity systems are the cognitively linked groups of activities that consumers carry out as part of their daily life. The aim of this paper is to investigate how consumers experience value through their activities, and how services fit into the context of activity systems. A new technique for illustrating consumers’ activity systems is introduced. The technique consists of identifying a consumer’s activities through an interview, and then quantitatively measuring how the consumer evaluates the identified activities on three dimensions: experienced benefits, sacrifices and frequency. This information is used to create a graphical representation of the consumer’s activity system, an “activityscape map”. Activity systems work as infrastructure for the individual consumer’s value experience. The paper contributes to the value and service literature, where there currently are no clearly described standardized techniques for visually mapping out individual consumer activity. Existing approaches are service- or relationship-focused, and are mostly used to identify activities, not to understand them. The activityscape representation provides an overview of consumers’ perceptions of their activity patterns and of the position of one or several services in this pattern. Comparing different consumers’ activityscapes shows the differences between consumers' activity structures and provides insight into how services are used to create value within them. The paper is conceptual; an empirical illustration is used to indicate the potential of further empirical studies. The technique can be used by businesses to understand the contexts of service use, which may uncover potential for business reconfiguration and customer segmentation.
Abstract:
We have developed CowLog, an open-source software package for recording behaviors from digital video that is easy to use and modify. CowLog tracks the time code from digital video files. The program is suitable for coding any digital video, but the authors have used it in animal research. The program has two main windows: a coding window, a graphical user interface used for choosing video files and defining output files that also has buttons for scoring behaviors, and a video window, which displays the video used for coding. The windows can be used on separate displays. The user types the key codes for the predefined behavioral categories, and CowLog transcribes their timing from the video time code to a data file. CowLog comes with an additional feature, an R package called Animal, for elementary analyses of the data files. With the analysis package, the user can calculate the frequencies, bout durations, and total durations of the coded behaviors and produce summary plots from the data.
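The elementary analyses described for the Animal package can be sketched directly from a CowLog-style event log. A hedged illustration assuming a simple (time code, behavior) event format in which each behavior runs until the next event; this mimics the described output, not CowLog's actual file layout or the Animal package's API:

```python
from collections import defaultdict

# Assumed event log: (time in seconds, behavior starting at that time);
# a final 'end' event closes the observation session.
log = [(0.0, "lying"), (42.5, "standing"), (60.0, "eating"),
       (95.0, "lying"), (130.0, "end")]

bouts, total = defaultdict(int), defaultdict(float)
for (t0, behavior), (t1, _) in zip(log, log[1:]):
    bouts[behavior] += 1          # frequency: number of bouts
    total[behavior] += t1 - t0    # total duration of the behavior

for behavior in bouts:
    mean_bout = total[behavior] / bouts[behavior]
    print(behavior, bouts[behavior], total[behavior], mean_bout)
```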