30 results for Emulators (Computer programs)


Relevance:

80.00%

Publisher:

Abstract:

The methodology of extracting information from texts has been widely described in the current literature. However, the methodology has been developed mainly for the purposes of fields other than terminology science. In addition, the research has been oriented towards the English language. Therefore, there are no satisfactory language-independent methods for extracting terminological information from texts. The aim of the present study is to form the basis for further improvement of methods for the extraction of terminological information. A further aim is to determine differences in term extraction between subject groups with and without knowledge of the special field in question. The study is based on the theory of terminology and takes a mainly qualitative approach. The research material consists of electronically readable specialized texts in the subject domain of maritime safety. Textbooks, conference papers, research reports and articles from professional journals in Finnish and in Russian are included. The thesis first deals with two term extraction methods: manual term identification and semi-automatic term extraction, the latter of which was carried out using three commercial computer programs. The results of term extraction were compared, and the recall and precision of the methods were evaluated. The latter part of the study is dedicated to the identification of concept relations. Certain linguistic expressions, which some researchers call knowledge probes, were applied to identify these relations. The results of the thesis suggest that special field knowledge is an advantage in manual term identification. However, in the candidate term lists the variation between subject groups was not as pronounced as the variation between individual subjects. The term extraction software tested here produces candidate term lists that can be useful, but only after some manual work. The work therefore emphasizes the need to develop term extraction software further. Furthermore, the analyses indicate that a certain number of terms were extracted by all the subjects and by the software; we call these core terms. As a result of the experiment on linguistic expressions that signal concept relations, a set of Finnish and Russian knowledge probes for the field of maritime safety was proposed. The main finding was that it would be useful to combine the use of knowledge probes with semi-automatic term extraction, since knowledge probes usually occur in the vicinity of terms.
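
As an illustration of how a knowledge probe works in practice, the sketch below (not the thesis's implementation; the probe pattern, the example sentence, and all identifiers are invented, and an English pattern is used instead of the Finnish and Russian probes studied in the thesis) matches a fixed lexical pattern in text and turns the captured noun phrases into a candidate pair of terms in a generic-specific relation:

    import re

    # Hypothetical English probe of the form "X is a type of Y"; the thesis
    # works with Finnish and Russian expressions instead.
    PROBE = re.compile(r"(?P<specific>[\w\s-]+?) is a type of (?P<generic>[\w\s-]+)")

    sentence = "A life raft is a type of survival craft."
    match = PROBE.search(sentence)
    if match:
        specific = match.group("specific").strip()
        generic = match.group("generic").strip()
        # The captured noun phrases become candidate terms in an IS-A relation.
        print(f"candidate relation: '{specific}' IS-A '{generic}'")

Because such probes tend to occur in the vicinity of terms, the same matching step can be combined with a semi-automatic term extractor, as the abstract suggests.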

Relevance:

30.00%

Publisher:

Abstract:

In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0 for nonnegative matrices A and C. We present a local algorithm (constant-time distributed algorithm) for approximating max-min LPs. The approximation ratio of our algorithm is the best possible for any local algorithm; there is a matching unconditional lower bound.
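
For concreteness, a max-min LP can be solved centrally by treating ω as an extra decision variable and rewriting the problem as an ordinary LP. The sketch below (not from the paper, which is about distributed local algorithms; the matrices A and C are invented toy data) does this with scipy.optimize.linprog:

    import numpy as np
    from scipy.optimize import linprog

    # Toy instance (invented): nonnegative matrices A (packing) and C (covering).
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])
    C = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
    n = A.shape[1]

    # Decision variables z = (x_1, ..., x_n, omega); maximising omega = minimising -omega.
    obj = np.zeros(n + 1)
    obj[-1] = -1.0

    # Ax <= 1          ->  [ A | 0] z <= 1
    # Cx >= omega * 1  ->  [-C | 1] z <= 0
    A_ub = np.vstack([np.hstack([A, np.zeros((A.shape[0], 1))]),
                      np.hstack([-C, np.ones((C.shape[0], 1))])])
    b_ub = np.concatenate([np.ones(A.shape[0]), np.zeros(C.shape[0])])

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    print("optimal omega:", -res.fun)   # 0.5 for this toy instance
    print("optimal x:", res.x[:n])

A local algorithm, in contrast, must produce each coordinate of x using only information from a constant-radius neighbourhood, which is why only an approximation of this optimum can be guaranteed.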

Relevance:

30.00%

Publisher:

Abstract:

In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0. In a min-max LP, the objective is to minimise ρ subject to Ax ≤ ρ1, Cx ≥ 1, and x ≥ 0. The matrices A and C are nonnegative and sparse: each row a_i of A has at most Δ_I positive elements, and each row c_k of C has at most Δ_K positive elements. We study the approximability of max-min LPs and min-max LPs in a distributed setting; in particular, we focus on local algorithms (constant-time distributed algorithms). We show that for any Δ_I ≥ 2, Δ_K ≥ 2, and ε > 0 there exists a local algorithm that achieves the approximation ratio Δ_I (1 − 1/Δ_K) + ε. We also show that this result is the best possible: no local algorithm can achieve the approximation ratio Δ_I (1 − 1/Δ_K) for any Δ_I ≥ 2 and Δ_K ≥ 2.
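
As a small numerical illustration (not from the paper), the tight local approximability threshold Δ_I (1 − 1/Δ_K) can be tabulated for a few sparsity values; it grows with Δ_I and approaches Δ_I as Δ_K grows:

    def local_approx_ratio(delta_i, delta_k):
        # Best approximation ratio achievable (up to an additive epsilon) by any
        # local algorithm, and unachievable exactly, per the result summarised above.
        return delta_i * (1.0 - 1.0 / delta_k)

    for delta_i in (2, 3, 4):
        for delta_k in (2, 3, 4):
            print(f"Delta_I = {delta_i}, Delta_K = {delta_k}: "
                  f"ratio = {local_approx_ratio(delta_i, delta_k):.3f}")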

Relevance:

20.00%

Publisher:

Abstract:

Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory, yet in other situations, where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is the retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve the products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows large representations to be stored for long periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and by the limited capacity of STWM come to the surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. Another study demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid, visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized because access is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel design principles are presented, based on the idea of supporting the deployment of LTWM in the main task.

Relevance:

20.00%

Publisher:

Abstract:

The present study examined how personality and social psychological factors affect third and fourth graders' computer-mediated communication. Personality was analysed in terms of the following strategies: optimism, pessimism and defensive pessimism. Students worked either individually or in dyads that were paired homogeneously or heterogeneously according to these strategies. Moreover, the present study compared horizontal and vertical interaction. The study also examined the role that popularity plays, and students were divided into groups based on their popularity level. The results show that an optimistic strategy is useful. Optimism was found to be related to the active production and processing of ideas. Although previous research has identified drawbacks to pessimism in achievement settings, this study shows that the pessimistic strategy is not as debilitating as is usually assumed. Pessimistic students were able to process their ideas. However, defensive pessimists were somewhat cautious in introducing or changing ideas. Heterogeneous dyads were not beneficial configurations with respect to producing, introducing, or changing ideas. Moreover, many differences were found between horizontal and vertical interaction; specifically, the students expressed more opinions and feelings when teachers took no part in the discussions. Strong emotions were observed especially in horizontal interaction. Further, group working skills were found to be more important for boys than for girls, while rejected students were not at a disadvantage compared to popular ones. Schools can encourage emotional and social learning. The present study shows that students can use computers to express their feelings. In addition, students who are unpopular in non-computer contexts or who use a pessimistic strategy can benefit from computers. Participation in computer discussions can give unpopular children a chance to develop confidence when relating to peers.

Relevance:

20.00%

Publisher:

Abstract:

The aim of the thesis was to examine how closely the appearance produced by a computer-assisted embroidery design program corresponds to the original image. The study focused on embroidery with household machines. It was conducted from a usability point of view using two automatic techniques of Brother's PE-design 6.0 embroidery design program: multicoloured fragment design and multicoloured stitch surface design. The subject is topical because of the rapid development of machine embroidery. The theoretical background covers the history of household sewing machines, embroidery sewing machines, stitch types in household sewing machines, embroidery design programs, and the six automatic techniques of the PE-design 6.0 embroidery design program. In addition, the design of embroidery patterns was covered: the original image, digitizing, punching, suitable sewing threads, and the connection between embroidery designs and the materials used for embroidery. The correspondence of the sewn appearances was examined through sewing experiments. Eighteen research samples based on five original images were sewn with both techniques. The experiments were divided into four testing stages in the design program, and every testing stage was followed by experimental sewing with a Brother Super Galaxie 3100D embroidery machine. The experiments were documented in process files and in forms created for the techniques. The research samples were analysed on the basis of image syntax using sensory perception assessment. The correspondence between the original images and the embroidered appearances was analysed with a form created for this purpose. The form was divided into colour and shape assessment on a five-step similarity scale. Based on this correspondence analysis, it can be said that with both automatic techniques the best correspondence of colour and shape was achieved by changing the default settings and by using the maker's own thread chart and an edited original image. On the basis of the tests it is impossible to say whether the image editing capabilities of the program are sufficient or whether optimal correspondence requires a separate program. When aiming at correspondence between the appearances of two images, the computer is unable to reproduce the appearance of the original image by itself. In processing a computer-assisted embroidery image, human perception and personal decision-making are unavoidable.

Relevance:

20.00%

Publisher:

Abstract:

In logic, a model is an abstraction of many kinds of mathematical objects. For example, graphs, groups, and metric spaces are models. Finite model theory is the branch of logic that studies the expressive power of logics, that is, formal languages, on models whose number of elements is finite. The restriction to finite models makes the results applicable in theoretical computer science, from whose perspective logical formulas can be thought of as programs and finite models as their inputs. Locality means the inability of a logic to distinguish between models whose local features correspond to each other. The dissertation examines several forms of locality and their preservation when logics are combined. Using the tools developed, it is shown that between the variants known as Gaifman locality and Hanf locality there is a hierarchy of locality notions whose levels can be separated from one another on grids of increasing dimension. On the other hand, it is shown that these notions of locality coincide when attention is restricted to finite trees. Order-invariant logics are languages equipped with a built-in order relation, which must, however, be used in such a way that what the formulas express does not depend on the chosen order. The definition can be motivated from the perspective of computing: even if the order of the data in a program's input were irrelevant to the expected result, the input is always stored in the computer's memory in some order, which the program can exploit in its computation. The dissertation investigates which forms of locality order-invariant extensions of first-order logic with unary quantifiers can satisfy. The results are applied by examining when a built-in order increases the expressive power of a logic on finite trees.
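
As background (this is the standard textbook formulation, not necessarily the exact variant used in the dissertation), Gaifman locality can be stated as follows: a formula φ(x̄) is Gaifman-local if there exists a radius r ≥ 0 such that for every finite structure 𝔄 and all tuples ā, b̄ of its elements,

    N_r^{\mathfrak{A}}(\bar{a}) \cong N_r^{\mathfrak{A}}(\bar{b})
    \;\Longrightarrow\;
    \bigl( \mathfrak{A} \models \varphi(\bar{a}) \iff \mathfrak{A} \models \varphi(\bar{b}) \bigr),

where N_r(ā) denotes the substructure induced by the elements within Gaifman distance r of ā. Hanf locality is the analogous notion that compares two structures via a bijection between their domains that preserves isomorphism types of r-neighbourhoods.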

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs: these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
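
To make the distinction between detecting and locating an intruder concrete, the following sketch (an invented toy example using the networkx library, not code from the thesis) checks whether a given vertex set is a dominating set and whether it is an identifying code on a small path graph:

    import networkx as nx

    def identifying_sets(G, code):
        # Map each vertex v to N[v] ∩ code, i.e. the codewords in its closed neighbourhood.
        return {v: frozenset(set(G[v]) | {v}) & frozenset(code) for v in G}

    def is_dominating(G, code):
        # Every vertex must see at least one codeword (detection).
        return all(len(s) > 0 for s in identifying_sets(G, code).values())

    def is_identifying(G, code):
        # In addition, the sets of seen codewords must be pairwise distinct (location).
        sets = identifying_sets(G, code)
        return is_dominating(G, code) and len(set(sets.values())) == len(sets)

    G = nx.path_graph(4)                  # toy graph: path 0 - 1 - 2 - 3
    print(is_dominating(G, {1, 2}))       # True: {1, 2} dominates the path
    print(is_identifying(G, {1, 2}))      # False: vertices 1 and 2 see the same codewords
    print(is_identifying(G, {0, 1, 2}))   # True: every vertex sees a distinct, nonempty set

A dominating set only guarantees that some codeword raises an alarm; an identifying code additionally makes the alarm pattern unique to the intruder's position, which is why it is the appropriate formulation for the location task.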