Abstract:
We investigated the behavioral correlates of the activity of serotonergic and non-serotonergic neurons in the nucleus raphe pallidus (NRP) and nucleus raphe obscurus (NRO) of unanesthetized and unrestrained cats. The animals were implanted with electrodes for recording single unit activity, parietal oscillographic activity, and splenius, digastric and masseter electromyographic activities. They were tested along the waking-sleep cycle, during sensory stimulation and during drinking behavior. The discharge of the serotonergic neurons decreased progressively from quiet waking to slow wave sleep and to fast wave sleep. Ten different patterns of relative discharge across the three states were observed for the non-serotonergic neurons. Several non-serotonergic neurons showed cyclic discharge fluctuations related to respiration during one, two or all three states. While serotonergic neurons were usually unresponsive to the sensory stimuli used, many non-serotonergic neurons responded to these stimuli. Several non-serotonergic neurons showed a phasic relationship with splenius muscle activity during auditory stimulation. One serotonergic neuron showed a tonic relationship with digastric muscle activity during drinking behavior. A few non-serotonergic neurons exhibited a tonic relationship with digastric and/or masseter muscle activity during this behavior. Many non-serotonergic neurons exhibited a phasic relationship with these muscle activities, also during this behavior. These results suggest that the serotonergic neurons in the NRP and NRO constitute a relatively homogeneous population from a functional point of view, while the non-serotonergic neurons form groups with considerable functional specificity. The data support the idea that the NRP and NRO are implicated in the control of somatic motor output.
Abstract:
The purpose of the present study was to validate the quantitative culture and cellularity of bronchoalveolar lavage (BAL) for the diagnosis of ventilator-associated pneumonia (VAP). A prospective validation test trial was carried out between 1992 and 1997 in a general adult intensive care unit of a teaching hospital. Thirty-seven patients on mechanical ventilation with suspected VAP who died at most three days after a BAL diagnostic procedure were submitted to a postmortem lung biopsy. BAL effluent was submitted to Gram staining, quantitative culture and cellularity count. Postmortem lung tissue quantitative culture and histopathological findings were considered to be the gold standard exams for VAP diagnosis. According to these criteria, 20 patients (54%) were diagnosed as having VAP and 17 (46%) as not having the condition. Quantitative culture of BAL effluent showed 90% sensitivity (18/20), 94.1% specificity (16/17), 94.7% positive predictive value and 88.8% negative predictive value. Fever and leukocytosis were useless for VAP diagnosis. Gram staining of BAL effluent was negative in 94.1% of the patients without VAP (16/17). Regarding the total cellularity of BAL, a cut-off point of 400,000 cells/ml showed a specificity of 94.1% (16/17), and a cut-off point of 50% of BAL neutrophils showed a sensitivity of 90% (19/20). In conclusion, BAL quantitative culture, Gram staining and cellularity might be useful in the diagnostic investigation of VAP.
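The diagnostic figures reported above follow directly from the standard 2×2 counts; a minimal sketch (using the counts given in the abstract: 18/20 true positives for BAL quantitative culture, 16/17 true negatives) of how sensitivity, specificity, PPV, and NPV are derived:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard 2x2 diagnostic-test metrics."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Counts from the abstract: 18 of 20 VAP cases positive, 16 of 17 non-VAP negative.
m = diagnostic_metrics(tp=18, fn=2, tn=16, fp=1)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 0.9, specificity 0.941, ppv 0.947, npv 0.889 -- as reported
```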
Abstract:
The break point of the curve of blood lactate vs exercise load has been called anaerobic threshold (AT) and is considered to be an important indicator of endurance exercise capacity in human subjects. There are few studies of AT determination in animals. We describe a protocol for AT determination by the "lactate minimum test" in rats during swimming exercise. The test is based on the premise that during an incremental exercise test, and after a bout of maximal exercise, blood lactate decreases to a minimum and then increases again. This minimum value indicates the intensity of the AT. Adult male (90 days) Wistar rats adapted to swimming for 2 weeks were used. The initial state of lactic acidosis was obtained by making the animals jump into the water and swim while carrying a load equivalent to 50% of body weight for 6 min (30-s exercise interrupted by a 30-s rest). After a 9-min rest, blood was collected and the incremental swimming test was started. The test consisted of swimming while supporting loads of 4.5, 5.0, 5.5, 6.0 and 7.0% of body weight. Each exercise load lasted 5 min and was followed by a 30-s rest during which blood samples were taken. The blood lactate minimum was determined from a zero-gradient tangent to a spline function fitting the blood lactate vs workload curve. AT was estimated to be 4.95 ± 0.10% of body weight while interpolated blood lactate was 7.17 ± 0.16 mmol/l. These results suggest the application of AT determination in animal studies concerning metabolism during exercise.
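The zero-gradient procedure described above can be illustrated with a simple curve fit; here a quadratic stands in for the spline function used in the study, and the lactate values are invented for illustration only:

```python
import numpy as np

# Hypothetical blood lactate (mmol/l) at each swimming load (% body weight);
# illustrative numbers, not the study's data.
loads = np.array([4.5, 5.0, 5.5, 6.0, 7.0])
lactate = np.array([8.1, 7.3, 7.2, 7.9, 9.6])

# Fit a smooth curve and take the zero-gradient (minimum) point,
# analogous to the spline-based procedure described above.
coeffs = np.polyfit(loads, lactate, 2)   # a*x^2 + b*x + c
at_load = -coeffs[1] / (2 * coeffs[0])   # load where the derivative is zero
at_lactate = np.polyval(coeffs, at_load)
print(f"AT ~ {at_load:.2f}% body weight, lactate ~ {at_lactate:.2f} mmol/l")
```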
Abstract:
Environmental threats are growing and have become global issues. People around the world try to address them in two ways: remediating environments that are already affected and protecting those that are not. This thesis describes the design, implementation, and evaluation of an online water quality monitoring system in Lake Saimaa, Finland. The water quality in Lake Saimaa needs to be monitored in order to provide the responsible bodies with valuable information that allows them to act quickly to prevent any negative impact on the lake's environment. The objectives were to design a suitable system, implement it in Lake Saimaa, and then evaluate the applicability and reliability of such systems for this environment. The needs for the system were first identified; then the design, the needed modifications, and the construction of the system took place. After that, the system was tested in Lake Saimaa at two locations near the city of Mikkeli. The last step was to evaluate the whole system. The main results were that online water quality monitoring systems in Lake Saimaa offer many advantages, such as reduced manpower, time, and running costs. However, the limited reliability of the exact measured values of some parameters remains the drawback of such systems; it could be addressed by using more advanced equipment with features designed specifically for monitoring at the predefined location.
Abstract:
To determine if radiocontrast impairs vascular relaxation of the renal artery, segments (4-5 mm in length) of canine renal artery were suspended in vitro in organ chambers to measure isometric force (95% O2/5% CO2, at 37°C). Arterial segments with and without endothelium were placed at the optimal point of their length-tension relation and incubated with 10 µM indomethacin to prevent synthesis of endogenous prostanoids. The presence of nonionic radiocontrast (iohexol, Omnipaque 350, 1 ml in 25 ml control solution, 4% (v/v)) did not alter endothelium-dependent relaxation to acetylcholine in rings precontracted with both norepinephrine and prostaglandin F2α (N = 6). When the rings were precontracted with prostaglandin F2α, the presence of ionic contrast did not inhibit the relaxation of the arteries. However, in canine renal arteries contracted with norepinephrine, the presence of ionic radiocontrast (diatrizoate meglumine and diatrizoate sodium, MD-76, 1 ml in 25 ml control solution, 4% (v/v)) inhibited relaxation in response to acetylcholine, sodium nitroprusside (N = 6 in each group), and isoproterenol (N = 5; P < 0.05). Rings were relaxed less than 50% of norepinephrine contraction. Following removal of the contrast, vascular relaxation in response to the agonists returned to normal. These results indicate that ionic radiocontrast nonspecifically inhibits vasodilation (both cAMP-mediated and cGMP-mediated) of canine renal arteries contracted with norepinephrine. This reversible impairment of vasodilation could inhibit normal renal perfusion and act as a mechanism of renal failure following radiocontrast infusion. In the adopted experimental protocol the isoproterenol-induced relaxation of renal arteries precontracted with norepinephrine was more affected, suggesting a pivotal role of the cAMP system.
Abstract:
The deficiency of the A isoenzyme of β-hexosaminidase (Hex) produced by different mutations of the gene that codes for the α subunit (Tay-Sachs disease) has two variants with enzymological differences: the B variant consists of the absence of Hex A isoenzyme and the B1 variant produces an inactive Hex A isoenzyme for the hydrolysis of the GM2 ganglioside and synthetic substrates with negative charge. In contrast to the early childhood form of the B variant, the B1 variant appears at a later clinical stage (3 to 7 years of age) with neurodegenerative symptoms leading to the death of the patient in the second decade of life. The most frequent mutation responsible for the GM2 gangliosidosis B1 variant is R178H, which has a widespread geographic and ethnic distribution. The highest incidence has been described in Portugal, which has been suggested as the point of origin of this mutation. Biochemical characterization of this lysosomal disease is carried out using negatively charged synthetic α subunit-specific sulfated substrates, since Hex A isoenzyme heat-inactivation assays are not applicable. However, the determination of the apparent activation energy of Hex using the neutral substrate 3,3'-dichlorophenolsulfonphthaleinyl N-acetyl-β-D-glucosaminide may offer a valid alternative. The presence of an α subunit in the αβ heterodimer Hex A means that its activation energy (41.8 kJ/mol) is significantly lower than that of the ββ homodimer Hex B (75.1 kJ/mol); however, as the mutation inactivates the α subunit, the Hex A of the B1 variant presents an activation energy that is similar to that of the Hex B isoenzyme.
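Apparent activation energies such as those quoted above are typically obtained from an Arrhenius analysis of reaction rates at different temperatures; a sketch with hypothetical activity readings (the rate values are invented, only the formula is standard):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Apparent Ea (J/mol) from rates k1, k2 measured at absolute
    temperatures T1, T2, assuming Arrhenius behaviour k = A*exp(-Ea/RT)."""
    return R * math.log(k2 / k1) / (1 / T1 - 1 / T2)

# Hypothetical relative activities at 30 C and 37 C; illustrative only.
Ea = activation_energy(k1=1.00, T1=303.15, k2=1.45, T2=310.15)
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")  # on the order of the Hex A value above
```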
Abstract:
Presentation of Jussi-Pekka Hakkarainen, held at the Emtacl15 conference on the 20th of April 2015 in Trondheim, Norway.
Abstract:
The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for better utilization of renewable, bio-based materials. In a biorefinery perspective, renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similar to the petroleum refinery industry. Since forests cover about one third of the land surface on earth, ligno-cellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is separation and isolation of the different compounds that make up the biomass. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end-products. Today, focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable to utilize all the different compounds, from both an economic and an environmental point of view, and the separation process should therefore be optimized. Hemicelluloses can partly be extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to various degrees. In order to be able to choose from a variety of different end-products, the hemicelluloses should be as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction due to the high temperatures, the extraction pH has remained a "black box". Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration; extensive effort was therefore put into developing a reliable calibration procedure.
Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump. With this addition it was possible to control the extraction to any desired pH set point: when the pH dropped below the set point, the controller started pumping in alkali, and the set point was thereby maintained very accurately. Analyses of the extracted hemicelluloses showed that less hemicellulose was extracted at higher pH, but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point speeds up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses out of the wood particle is a major part of the extraction process. Therefore, a particle size study, ranging from 0.5 mm wood particles to industrial-size wood chips, was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although particle size did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the economic cost. Since bark comprises 10-15% of a tree, it is important to also consider it in a biorefinery concept. Spruce inner and outer bark was hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses.
A chemical characterization, done separately on inner and outer bark, showed that inner bark contained over 10 % stilbene glucosides that could be extracted already at 100 °C with aqueous acetone.
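The pH control scheme described above (dose alkali whenever the measured pH falls below the set point) amounts to a simple on/off control loop; a sketch in which all rates and values are invented for illustration:

```python
class AlkaliPump:
    """Stand-in for the dosing pump driven by the controller."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def stop(self):
        self.running = False

def control_step(measured_ph, set_point, pump):
    """On/off control: dose alkali only while the extract is below the set point."""
    if measured_ph < set_point:
        pump.start()
    else:
        pump.stop()

# Toy simulation: acids released by the wood pull the pH down each cycle;
# alkali dosing pushes it back up. The rates are invented, not measured values.
ph, set_point, pump = 4.2, 4.0, AlkaliPump()
for _ in range(100):
    ph -= 0.03                  # acid release during extraction
    control_step(ph, set_point, pump)
    if pump.running:
        ph += 0.05              # effect of alkali dosing
print(f"final pH ~ {ph:.2f}")   # hovers near the set point
```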
Abstract:
Over the last 30 years, new technologies and globalization have radically changed the way in which marketing is conducted. However, whereas their effects on business in general have been widely discussed, their effects on marketing specifically remain without clear recognition. Global research has been conducted to shed light on the issue, but it has largely concentrated on the views of executives and on consumer markets. In addition, a research gap exists in applying the concept of marketing change to a specific business-to-business (B2B) industry. Therefore, the main research question this study seeks to answer is: "How is contemporary marketing conducted in the high-technology industry?" In this research, the researcher considers the specific industry of high technology. However, as the industry comprises differing markets, the focus is given to one of the industry's prime sectors, the information technology (IT) market, where companies offer other firms products or services manufactured with advanced technology. The growing IT market is considered of critical importance in the economies of technologically ready countries such as Finland, where this research is also conducted. Through multiple case studies, the researcher aims to describe how changes in technology, customer engagement, and future trends have shaped the way in which successful high-tech marketing is conducted in today's marketplace. The results derived from the empirical research are then presented to the reader with links to the existing literature. As a conclusion, a generalized framework is constructed to depict an ideal marketer-customer relationship, with emphasis on dynamic, two-way communication and its supporting elements of customer analytics, change adaptation, strategic customer communication, and organizational support.
From a managerial point of view, the research may provide beneficial information, as contemporary marketing can yield profitable outcomes if managed correctly. As a new way to grasp competitive advantage, strategic marketing is much more data-driven and customer-focused than ever before. The study may also prove relevant for the academic community, and its results may inspire a new focus in the education of future marketers. This study was limited to the internal activities of the high-tech industry, leaving out considerations of co-marketing, marketing via business partners, and marketing in other B2B industries.
Abstract:
Permanent magnet synchronous machines (PMSM) have become widely used in applications because of their high efficiency compared to synchronous machines with an excitation winding or to induction motors. This feature of the PMSM is achieved through the use of permanent magnets (PM) as the main excitation source. The magnetic properties of the PM have a significant influence on all the PMSM characteristics. Recent observations of PM material properties in rotating machines revealed that the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes them prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short circuit faults. The main task of this thesis is to derive a simple analytical tool which can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operation, and at the moment of a three-phase symmetrical short circuit that is worst from the PM's point of view. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model is based on the combination of two theories: magnetic circuit theory and space vector theory. A comparison of the results for the normal operating mode obtained from finite element software with the results calculated with the proposed model shows good accuracy of the model in the parts of the PM that are most prone to hysteresis losses. The comparison of the results for the three-phase symmetrical short circuit revealed significant inaccuracy of the proposed model compared with the results from the finite element software. The reasons for this inaccuracy were analyzed, including the impact on the model of the Carter factor theory and of the assumption that the PM has the permeability of air.
Propositions for further development of the model are presented.
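The thesis's combined magnetic-circuit/space-vector model is not reproduced here, but the classic no-load magnetic-circuit estimate of the air-gap flux density for a surface-mounted PM, including the Carter factor mentioned above, can be sketched as follows (the magnet data are illustrative, not taken from the thesis):

```python
def airgap_flux_density(B_r, mu_r, l_m, g, k_c=1.0):
    """Classic first-order no-load estimate for a surface-mounted PM:
    B_g = B_r / (1 + mu_r * k_c * g / l_m),
    where B_r is remanence (T), mu_r the magnet's relative recoil
    permeability, l_m the magnet length (m), g the mechanical air gap (m),
    and k_c the Carter factor accounting for stator slotting."""
    return B_r / (1 + mu_r * k_c * g / l_m)

# Illustrative NdFeB magnet: B_r = 1.2 T, mu_r = 1.05,
# 5 mm magnet, 1 mm air gap, Carter factor 1.1.
B_g = airgap_flux_density(B_r=1.2, mu_r=1.05, l_m=5e-3, g=1e-3, k_c=1.1)
print(f"B_g ~ {B_g:.2f} T")
```

The formula follows from equating the magnet and air-gap MMF drops in a simple series magnetic circuit with equal cross-sections and no leakage.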
Abstract:
The topic of the present doctoral dissertation is the analysis of the phonological and tonal structures of a previously largely undescribed language, namely Samue. It is a Gur language belonging to the Niger-Congo language phylum and is spoken in Burkina Faso. The data were collected during fieldwork in a Sama village; they include 1,800 lexical items, thousands of elicited sentences, and 30 oral texts. The data were first transcribed phonetically, and then the phonological and tonal analyses were conducted. The results show that the phonological system of Samue, with its phoneme inventory and phonological processes, has the same characteristics as other related Gur languages, although some particularities were found, such as the voicing and lenition of stop consonants in medial position. The tonal analysis revealed three level tones, which have both lexical and grammatical functions. A particularity of the tonal system is the regressive Mid tone spreading in the verb phrase. The theoretical framework used in the study is Optimality Theory. Optimality Theory is rarely used in the analysis of an entire language system, and thus one objective was to see whether the theory was applicable to this type of work. Within the tonal analysis especially, some language-specific constraints had to be created, although the basic principle of Optimality Theory is the universal nature of the constraints. These constraints define the well-formedness of language structures and are ranked differently in different languages. This study gives new insights into typological phenomena in Gur languages. It is also a fundamental starting point for the establishment of an orthography for Samue. From a theoretical point of view, the study shows that Optimality Theory is largely applicable to the analysis of an entire sound system.
Abstract:
The goal of the thesis was to investigate how much after-sales profits a crane sale generates over the life cycle of the crane and the effects of these after-sales profits on the overall profitability of the crane. The thesis utilizes theories about life cycle costing from an equipment and service supplier’s point of view. However, instead of costs, the thesis is focused on the life cycle after-sales profits from maintenance services and spare parts provided for the sold crane. The case study approach was chosen and a total of five cranes from three different segments were investigated. An eight-step life cycle profit calculation model was developed in order to analyze the chosen cases’ life cycle profits systematically. The results of the investigation suggest that the life cycle after-sales profits are significant in value. In the case analyses they accounted for between 20% and 44% of the overall life cycle profits of the case cranes. The after-sales profits should be taken into account already in the pricing when offering a crane to a customer.
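The share of after-sales profit in overall life cycle profit reported above (20% to 44%) is a simple ratio; a sketch with invented figures for one hypothetical crane:

```python
def after_sales_share(equipment_profit, service_profit, parts_profit):
    """Share of life cycle profit coming from after-sales
    (maintenance services + spare parts)."""
    after_sales = service_profit + parts_profit
    return after_sales / (equipment_profit + after_sales)

# Invented figures for one crane, chosen to fall in the 20-44% range above.
share = after_sales_share(equipment_profit=70_000,
                          service_profit=18_000,
                          parts_profit=12_000)
print(f"after-sales share of life cycle profit: {share:.0%}")  # 30%
```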
Abstract:
In order to evaluate the performance of a 1-h 75-g oral glucose tolerance test (OGTT) for the diagnosis of gestational diabetes mellitus (GDM), a cohort of 4998 women, 20 years or older, without previous diabetes being treated in prenatal care clinics in Brazil answered a questionnaire and performed a 75-g OGTT including fasting, 1-h and 2-h glucose measurements between their 24th and 28th gestational weeks. Pregnancy outcomes were transcribed from medical registries. GDM was defined according to WHO criteria (fasting: ≥126 mg/dL; 2-h value: ≥140 mg/dL) and macrosomia as a birth weight equal to or higher than 4000 g. Areas under the receiver operator characteristic curve (AUC) were compared and diagnostic properties of various cut-off points were evaluated. The AUCs for the prediction of macrosomia were 0.606 (0.572-0.637) for the 1-h and 0.589 (0.557-0.622) for the 2-h plasma glucose test. Similar predictability was demonstrable regarding combined adverse outcomes: 0.582 (0.559-0.604) for the 1-h test and 0.572 (0.549-0.595) for the 2-h test. When the 1-h glucose test was evaluated against a diagnosis of GDM defined by the 2-h glucose test, the AUC was 0.903 (0.886-0.919). The cut-off point that maximized sensitivity (83%) and specificity (83%) was 141 mg/dL, identifying 21% of the women as positive. A cut-off point of 160 mg/dL, with lower sensitivity (62%), had higher specificity (94%), labeling 8.6% as positive. Detection of GDM can be done with a 1-h 75-g OGTT: the value of 160 mg/dL has the same diagnostic performance as the conventional 2-h value (140 mg/dL). The simplification of the test may improve coverage and timing of the diagnosis of GDM.
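Evaluating a cut-off point as described above amounts to counting true and false positives among GDM and non-GDM pregnancies at a given 1-h glucose threshold; a sketch on a tiny invented sample (not the study's cohort):

```python
def cutoff_performance(values, has_gdm, cutoff):
    """Sensitivity, specificity, and fraction test-positive for a given
    1-h glucose cut-off (mg/dL)."""
    pos = [v >= cutoff for v in values]
    tp = sum(p and d for p, d in zip(pos, has_gdm))
    fp = sum(p and not d for p, d in zip(pos, has_gdm))
    fn = sum((not p) and d for p, d in zip(pos, has_gdm))
    tn = sum((not p) and not d for p, d in zip(pos, has_gdm))
    return tp / (tp + fn), tn / (tn + fp), sum(pos) / len(values)

# Tiny invented sample (mg/dL) with a GDM label per woman.
glucose = [118, 130, 139, 145, 152, 163, 171, 188]
gdm = [False, False, False, False, True, True, True, True]
sens, spec, frac = cutoff_performance(glucose, gdm, cutoff=141)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, positive {frac:.1%}")
```

Sweeping `cutoff` over a grid and plotting sensitivity against 1 − specificity would trace the ROC curve whose area is reported in the abstract.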
Abstract:
A recent assessment of 4,400 postgraduate courses in Brazil by CAPES (a federal government agency dedicated to improving the quality of education and research at the postgraduate level) prompted a large number of reactions in the press, scientific journals, and scientific congresses. This gigantic effort to classify 16,400 scientific journals in order to provide indicators for assessment proved to be puzzling and methodologically erroneous as a means of gauging institutions from a metric point of view. A simple algorithm is proposed here to weigh the scientometric indicators that should be considered in the assessment of a scientific institution. I conclude that the simple gauge of the total number of citations accounts for both the productivity of scientists and the impact of articles. The effort spent in this exercise is relatively small, and the sources of information are fully accessible. As an exercise to estimate the value of the methodology, 12 institutions of physics (10 from Brazil, one from the USA, and one from Italy) were evaluated.
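The proposed gauge, the total number of citations per institution, reduces to a trivial aggregation; a sketch with invented toy records:

```python
def total_citations(articles):
    """The proposed gauge: sum of citations over an institution's articles.
    It reflects both output volume and per-article impact at once."""
    return sum(a["citations"] for a in articles)

# Invented toy records for two hypothetical institutions.
inst_a = [{"citations": 120}, {"citations": 40}, {"citations": 3}]
inst_b = [{"citations": 25}, {"citations": 25}, {"citations": 25}]
ranking = sorted([("A", total_citations(inst_a)),
                  ("B", total_citations(inst_b))],
                 key=lambda t: t[1], reverse=True)
print(ranking)  # institution A leads on total citations
```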
Abstract:
Emerging technologies have recently challenged libraries to reconsider their role as a mere mediator between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners to interplay and work for shared goals and objectives. In this paper, I draw a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages, to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, the genesis and consolidation period of these literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced in the language.
This was the beginning of a renaissance and a period of enlightenment (Rueter, 2013). The linguistically oriented population can also find writings to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists to be raw data for linguists. Our data is used, for instance, for creating morphological analyzers and online dictionaries at the Universities of Helsinki and Tromsø. In order to reach these targets, we produce not only the digitized materials but also development tools to support linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research on language technology. The mission is to improve the usage and usability of digitized content. During the project, we have advanced methods that refine the raw data for further use, especially in linguistic research. How does the library meet these objectives, which appear to be beyond its traditional playground?
The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages out of a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (OCR) often contains too many mistakes to be used as such in research, so the mistakes in the OCRed texts must be corrected. To enhance the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. It was necessary to implement this tool, since these rare and peripheral prints often include characters that have since fallen out of use; these are sadly neglected by modern OCR software developers, but they belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing application is essentially an editor for the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface, that is, the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. The targets in traditional crowdsourcing have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the researcher's point of view there is a danger that the needs of linguists are not met. A further remarkable downside is the lack of a shared goal or social affinity: there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012).
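The editor operates on ALTO XML, in which each recognized word is a `String` element carrying the OCRed text in its `CONTENT` attribute (with a word-confidence attribute `WC`); a minimal sketch of reading such words, using an invented sample document:

```python
import xml.etree.ElementTree as ET

# Invented minimal ALTO document; real files come from the OCR pipeline.
ALTO_SAMPLE = """<alto xmlns="http://www.loc.gov/standards/alto/ns-v2#">
  <Layout><Page><PrintSpace><TextBlock><TextLine>
    <String CONTENT="kir1a" WC="0.43"/>
    <String CONTENT="kirja" WC="0.97"/>
  </TextLine></TextBlock></PrintSpace></Page></Layout>
</alto>"""

def words(alto_xml):
    """Extract (word, OCR confidence) pairs from an ALTO file,
    matching String elements regardless of the namespace version."""
    root = ET.fromstring(alto_xml)
    return [(el.get("CONTENT"), float(el.get("WC", 0)))
            for el in root.iter() if el.tag.endswith("}String")]

print(words(ALTO_SAMPLE))  # low-confidence words are candidates for editing
```

An editor like the one described would let a contributor correct `CONTENT` values (e.g., "kir1a" to "kirja") and write the enhanced file back.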
There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more evident when you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or longing for revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to identify carefully the potential niches for completing the needed tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill is suited for the complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation. These communities can correspond to research needs more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to provide qualitative results. In nichesourcing, we hand out assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in those fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy.
We have digitized books in medicine, and we could try to track the words related to human organs by assigning citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay in which the language communities benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary that is made freely available to the public, so that society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as serving 'two masters': research and society.