20 results for web content
in Helda - Digital Repository of the University of Helsinki
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate it. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, but also at some of the online services he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, which suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database.
Data items themselves are protected with cryptography against forgery but are not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. Our early tests suggest that the platform works well for simple applications. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
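The abstract does not spell out Peerscape's exact reference scheme, but the idea of cryptographically verifiable references can be sketched as content addressing: an item's name is derived from its bytes, so a copy fetched from any replica can be checked against the name. A minimal illustration in Python, where SHA-256 is an assumed choice of hash and the helper names are hypothetical:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive a self-verifying identifier from the data itself."""
    return hashlib.sha256(data).hexdigest()

def verify(ref: str, data: bytes) -> bool:
    """A copy obtained from any replica can be checked against its reference."""
    return content_id(data) == ref

photo = b"family album, page 1"
ref = content_id(photo)
assert verify(ref, photo)             # any honest replica passes the check
assert not verify(ref, b"tampered")   # a forged copy is rejected
```

Because the identifier depends only on the bytes, the same album stored at a family member's computer and at an online service stays one logical item, which is the property the abstract argues for.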
Abstract:
This thesis examines the use of the contents of World Wide Web pages as corpus-like linguistic research material. The World Wide Web contains many times more text than the largest existing traditional text corpora, so web pages are likely to yield many occurrences of words and constructions that are rare in traditional corpora. Web pages can be used as material in two ways: a random sample of web pages can be collected and an independent corpus built from their contents, or the entire World Wide Web can be used as a corpus via web search engines. Web pages have been used as research material in many fields of linguistics, such as lexicographic research, the study of syntactic structures, pedagogical material, and the study of minority languages. Compared with traditional corpora, web pages have several disadvantageous properties that must be taken into account when they are used as material. Not all pages contain usable text, and pages are often in, for example, HTML format, so they must be converted into a form that is easier to process. Web pages contain more linguistic errors than traditional corpora, and their text types and subject areas are more numerous than those of traditional corpora. Effective software tools are needed to collect material from web pages. The most common of these are commercial web search engines, which give quick access to a large number of different pages. In addition, tools developed specifically for linguistic purposes can be used. This thesis presents the software tools WebCorp, WebAsCorpus.org, BootCaT, and the Web as Corpus Toolkit, with which material can be retrieved from web pages specifically for linguistic purposes.
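The HTML-to-text conversion step mentioned above can be sketched with Python's standard-library parser. The tag handling here is a deliberate simplification; real web-as-corpus tools such as BootCaT do considerably more cleaning (boilerplate removal, language identification, deduplication):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from a page, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth of open script/style elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

page = "<html><head><style>p{}</style></head><body><p>Hello corpus</p></body></html>"
p = TextExtractor()
p.feed(page)
assert " ".join(p.parts) == "Hello corpus"
```

The extracted text would then be tokenized and indexed like any other corpus material.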
Abstract:
The aim of this dissertation was to explore teaching in higher education from the teachers' perspective. Two of the four studies analysed the effect of pedagogical training on approaches to teaching and on teachers' self-efficacy beliefs about teaching. Of these two, Study I analysed the effect of pedagogical training in a cross-sectional setting. The results showed that short training made teachers less student-centred and decreased their self-efficacy beliefs, as reported by the teachers themselves. However, longer-term training enhanced the adoption of a student-centred approach to teaching and increased teachers' self-efficacy beliefs as well. The teacher-focused approach to teaching was more resistant to change. Study II, in turn, applied a longitudinal setting. The results implied that among teachers who had not acquired more pedagogical training after Study I there were no changes on the student-focused approach scale between the measurements. However, teachers who had participated in further pedagogical training scored significantly higher on the scale measuring the student-focused approach to teaching. There were positive changes in self-efficacy beliefs both among teachers who had not participated in further training and among those who had; however, the analysis revealed that those teachers had the least teaching experience. Again, the teacher-focused approach was more resistant to change. Study III analysed approaches to teaching qualitatively, using a large and multidisciplinary sample in order to capture the variation in descriptions of teaching. Two broad categories of description were found: the learning-focused and the content-focused approach to teaching. The results implied that the purpose of teaching separates the two categories. In addition, the study aimed to identify different aspects of teaching in the higher-education context. Ten aspects of teaching were identified.
While Study III explored teaching on a general level, Study IV analysed it on an individual level. The aim was to explore consonance and dissonance in the combinations of teaching approaches that university teachers adopt. The results showed that some teachers were clearly and systematically either learning- or content-focused. The profiles of other teachers, however, consisted of combinations of learning- and content-focused approaches or conceptions, making their profiles dissonant. Three types of dissonance were identified. The four studies indicated that pedagogical training organised for university teachers is needed in order to enhance the development of their teaching. The results implied that the shift from content-focused or dissonant profiles towards consonant learning-focused profiles is a slow process, and that teachers' conceptions of teaching have to be addressed first in order to promote learning-focused teaching.
Abstract:
Strategies of scientific, question-driven inquiry are considered important cultural practices that should be taught in schools and universities. The present study investigates multiple efforts to implement a model of Progressive Inquiry and related Web-based tools in primary, secondary, and university-level education, in order to develop guidelines for educators in promoting students' collaborative inquiry practices with technology. The research consists of four studies. In Study I, the aims were to investigate how a human tutor contributed to the university students' collaborative inquiry process through virtual forums, and how the influence of the tutoring activities is demonstrated in the students' inquiry discourse. Study II examined an effort to implement technology-enhanced progressive inquiry as a distance working project in a middle school context. Study III examined multiple teachers' methods of organizing progressive inquiry projects in primary and secondary classrooms through a generic analysis framework. In Study IV, a design-based research effort consisting of four consecutive university courses applying progressive inquiry pedagogy was retrospectively re-analyzed in order to develop the generic design framework. The results indicate that appropriate teacher support for students' collaborative inquiry efforts involves an interplay between spontaneity and structure. Careful consideration should be given to the content mastery, critical working strategies, or essential knowledge practices that the inquiry approach is intended to promote. In particular, those elements in students' activities should be structured and directed which are central to the aim of Progressive Inquiry but which the students do not recognize or demonstrate spontaneously, and which are usually not taken into account in existing pedagogical methods or educational conventions.
Such elements include productive co-construction activities; sustained engagement in improving the produced ideas and explanations; critical reflection on the adopted inquiry practices; and sophisticated use of modern technology for knowledge work. Concerning the scaling-up of inquiry pedagogy, it was concluded that an individual teacher can also apply the principles of Progressive Inquiry in his or her own teaching in many innovative ways, even under various institutional constraints. The Pedagogical Infrastructure Framework developed in the study made it possible to recognize and examine some central features, and their interplay, in the designs of the inquiry units examined. The framework may help to recognize and critically evaluate the invisible learning-cultural conventions in various educational settings and can mediate discussions about how to overcome or change them.
Abstract:
Today, information and communication technology allows us to use multimedia in e-learning materials more than ever before. Multimedia, though, can increase cognitive load in the learning process. Because of that, it cannot be taken for granted what kind of learning materials should be produced. This paper studies the diversity of e-learning materials and the factors related to cognitive load. The main purpose was to study the multimodality of multimedia learning materials. The subject of this study is the learning material on the web site Kansalaisen ABC published by YLE. The learning materials on the web site were approached from three perspectives. The specific questions were: (1) What kind of form features are used in the representations of the learning material? Are certain form features preferred over others? (2) How do the cognitive load factors take shape in the learning materials and between the forms? (3) How does the multimodality phenomenon appear in the learning materials, and in what ways are form features and cognitive load factors related to multimodality? In this case study a qualitative approach was used. The analysis of form features and cognitive load factors in the learning materials was based on content analysis. Form features included the specification of the format, the structure, the interactivity type, and the type of learning material. The results showed that the web site includes various representations of both verbal and visual forms. Cognitive load factors were related more to visual than to verbal material. Material presented according to the principles of the cognitive theory of multimedia learning did not cause cognitive overload in the informants. Cognitive load increased when students needed to split their attention between multimedia forms in time and place. The results indicated how different individual characteristics are reflected in the cognitive load factors.
Abstract:
The study of social phenomena in the World Wide Web has been rather fragmentary, and there is no coherent, research-based theory about sense of community in the Web environment. Sense of community means the part of one's self-concept that has to do with perceiving oneself as belonging to, and feeling affinity with, a certain social grouping. The present study aimed to find evidence for sense of community in the Web environment, and specifically to find out what the most critical psychological factors of sense of community would be. Based on known characteristics of real-life communities and sense of community, and a few occasional studies of Web communities, it was hypothesized that the following factors would be the most critical ones and that they could be grouped as prerequisites, facilitators, and consequences of sense of community: awareness and social presence (prerequisites); criteria for membership and borders, common purpose, social interaction and reciprocity, norms and conformity, and common history (facilitators); and trust and accountability (consequences). In addition to the critical factors, the present study aimed to find out whether this kind of grouping would be valid. Furthermore, the effect of Web community members' background variables on sense of community was of interest. In order to answer these questions, an online questionnaire was created and tested. It included propositions reflecting factors that precede, facilitate, and follow sense of community in the Web environment. A factor analysis was calculated to find the critical factors, and analyses of variance were calculated to see whether the grouping into prerequisites, facilitators, and consequences was right and how the background variables would affect sense of community in the Web environment. The results indicated that the psychological structure of sense of community in the Web environment could not be presented with the critical variables grouped as prerequisites, facilitators, and consequences.
Most factors did facilitate the sense of community, but based on this data it could not be argued that some of the factors chronologically precede sense of community and some follow it. Instead, the factor analysis revealed that the most critical factors in sense of community in the Web environment are 1) reciprocal involvement, 2) basic trust in others, 3) similarity and common purpose of members, and 4) shared history of members. The most influential background variables were the member's own participation activity (indicated by reading and writing messages) and the phase in the membership life cycle (from visitor to leader). The more the member participated, and the further in the membership life cycle he was, the more he felt a sense of community. There are many descriptions of sense of community, but the present study was one of the first to actually measure the phenomenon in the Web environment, and it gained well-documented, valid results based on large data, proving that sense of community in the Web environment is possible and clarifying its psychological structure, thus enhancing the understanding of sense of community in the Web environment. Keywords: sense of community, Web community, psychology of the Internet
Abstract:
The study analyses, at a theoretical level, problems encountered in experiments with web-assisted collaborative knowledge production and, guided by these problems, assembles a theoretical foundation for a future way of working: web-assisted group processing of knowledge. At the centre is the human being as a cognitive processor of information and a lifelong learner. The operating strategies prevailing in an organization, the organization of group work, and the practices realized in the group form the social operating environment that guides individual action and regulates the success of the collaboration process. Collaboration is examined as a group phenomenon of social information processing, in which the very concept of ability to collaborate is called into question. In the resulting theoretical foundation, the individual is seen in the crossfire of multi-level learning challenges. The absolute conception of knowledge prevailing in education must be transformed into a constructivist one: when shared knowledge is produced as a group, the individuals' knowledge is the building material. The model of social information processing acquired at school, a self-centred model, is not suited to group processing of knowledge. Habits associated with that model must be unlearned while, as a group member, group-centred and web-assisted models of information processing must be designed and learned together. Thus, content learning and action learning take place side by side in the collaboration process. Both need to be supported when new web-assisted working methods are developed. Part I presents the experiential stimuli of the study. In accordance with a relative conception of knowledge, this study is closely tied to the researcher's lifelong learning, and the research is described as the processing of the researcher's individual knowledge. The practical observations that led to the discovery of the research topic therefore open the report. The organization of social information processing at school is a particular object of examination. Part II outlines the research perspective. The research questions mainly serve to guide the search for knowledge.
The research method, analysis of experimental activity, has been strongly influenced by action research. The research design is based on a conception of the human being tied to action. The literature is used to search for possible connections between human behaviour and the organization of social activity. Part III describes the processing of understanding of the research topic. A preliminary understanding of collaboration and a definition of the concept of collaboration make it possible to undertake the experiments. The new communication possibility (the web) is seen as a tool in shared information processing. The practical experimental periods provide material for discovering deep principles of cognitive information processing. Human activity and social information processing receive learning-based interpretations in the theoretical analysis. Part IV presents the theoretical foundation for web-assisted group processing of knowledge that was the aim of the study. In the summary of the theory, human cognitive information processing comes to the fore, in the form of both constructivist content learning and survival-oriented action learning. The social operating environment is seen as a participant in cognitive information processing and as an explanation for the fact that shared information processing does not succeed in a network environment alone; social meetings are also needed. Lessons from the experimental activity are brought out in an examination of the key events of the collaboration process. The theoretical foundation produced is tested in the scientific field by comparing it with views published by other researchers. The relationship between network technology and human information processing is likewise compared with views obtained and published elsewhere.
Keywords: knowledge-intensive collaboration, collaborative knowledge production, collaboration process, ability to collaborate, inability to collaborate, virtual collaboration organization, individual knowledge, search for shared understanding, shared knowledge, cognitive information processing, content learning, action learning, social information processing, social operating environment, automated action model, individual-centred action model, self-centred action model, group-centred action model, strategy of social information processing, learning in the work environment.
Abstract:
DEVELOPING A TEXTILE ONTOLOGY FOR THE SEMANTIC WEB AND CONNECTING IT TO MUSEUM CATALOGING DATA. The goal of the Semantic Web is to share concept-based information in a versatile way on the Internet. This is achievable using formal data structures called ontologies. The goal of this research is to increase the usability of museum cataloging data in information retrieval. The work is interdisciplinary, involving craft science, terminology science, computer science, and museology. In the first part of the dissertation, an ontology of concepts of textiles, garments, and accessories is developed for museum cataloging work. The ontology work was done with the help of thesauri, vocabularies, research reports, and standards. The basis of the ontology development was the Museoalan asiasanasto MASA, a thesaurus for museum cataloging work, which was enriched with other vocabularies. The focus was on concepts and terms concerning the research object, as well as on the material names of textiles, costumes, and accessories. The research method was terminological concept analysis, complemented by an ontological view of the Semantic Web. The concept structure was based on the hierarchical generic relation. Attention was also paid to other relations between terms and concepts, and between the concepts themselves. Altogether, 977 concept classes were created. Issues such as how to choose and name concepts for the ontology hierarchy, and how deep and broad the hierarchy could be, are discussed from the viewpoints of the ontology developer and the museum cataloger. The second part of the dissertation analyzes why some of the cataloged terms did not match the developed textile ontology. This problem is significant because it prevents automatic ontological content integration of the cataloged data on the Semantic Web. The research datasets, i.e. the cataloged museum data on textile collections, came from three museums: Espoo City Museum, Lahti City Museum, and the National Museum of Finland.
The data included 1803 textile, costume, and accessory objects. Unmatched object and textile material names were analyzed. For object names, six categories (475 cases) were found in which automatic annotation was not possible, and for material names, eight categories (423 cases). The most common explanation was that the cataloged field was filled with a long sentence comprising many terms. Sometimes a compound term combined the object name and the material, or the name and the way of usage. Numeric values in the material-name cataloging field also prevented annotation, as did the absence of a corresponding concept in the ontology. Ready-made drop-down lists of materials used in one cataloging system facilitated the annotation. When naming objects and materials, one should use terms in basic form without attributes. The developed textile ontology has been applied in two cultural portals, MuseumFinland and Culturesampo, where one can search for and browse information based on cataloged data using integrated ontologies in an interoperable way. The textile ontology is also part of the national FinnONTO ontology infrastructure. Keywords: annotation, concept, concept analysis, cataloging, museum collection, ontology, Semantic Web, textile collection, textile material
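The annotation failures described above, long sentences and numeric values in a field that should hold a single basic-form term, can be illustrated with a toy matcher. The tiny concept set and the word-level fallback are assumptions for illustration only, not the actual MuseumFinland annotation pipeline:

```python
# Hypothetical fragment of a textile ontology: a set of basic-form concept labels.
ontology = {"shirt", "scarf", "wool", "cotton"}

def annotate(field: str) -> list[str]:
    """Try an exact match first; otherwise fall back to word-level matches."""
    term = field.strip().lower()
    if term in ontology:
        return [term]
    return [w for w in term.replace(",", " ").split() if w in ontology]

assert annotate("Shirt") == ["shirt"]
# A long cataloged phrase matches only partially, mirroring the failure categories:
assert annotate("red wool scarf with fringes") == ["wool", "scarf"]
# A numeric value in the field yields no annotation at all:
assert annotate("120 cm") == []
```

The examples mirror the finding that cataloging with single basic-form terms, as in the drop-down lists mentioned above, is what makes automatic ontological annotation reliable.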
Abstract:
The study examines the diffusion of a web-based teaching innovation in basic and upper secondary school geography in 1998–2004. It applied a model of the diffusion of a teaching innovation and the theory of the diffusion of innovations. The data were collected over seven years with questionnaires from pioneer teachers of web-based geography teaching, who returned 326 forms. The main research questions were: 1) What conditions do pioneer teachers have for using web-based teaching in school geography? 2) What applications do pioneer teachers use in web-based geography teaching, and in what ways? 3) What experiences have pioneer teachers gained from web-based geography teaching? The study found that an insufficient number of computers, and their absence from the subject classroom, hindered web-based geography teaching. A cube model of teachers' digital media skills was developed, comprising technical skills, information processing skills, and communication skills. Three user types of web-based teaching were distinguished among the teachers: information-oriented light users, communication-oriented basic users, and collaboration-oriented power users. Web-based teaching involved intense positive and negative experiences. It brought joy and motivation to studying. It was regarded as an enriching addition to be integrated into teaching in a controlled manner. The pioneer teachers took up information available in networks and applied productivity tools. They made a start with virtual worlds imitating reality: the globe reproduced in satellite images, digital maps, and simulations. Teachers experimented with the social spaces of the web through real-time communication, discussion groups, and groupware. Virtual worlds based on imagination remained marginal, since the teachers hardly played entertainment games. They adopted occasional pieces of virtual worlds according to the hardware and software available to them.
During the study, the conquest of virtual worlds progressed from exploiting digital information to communication applications and emerging collaboration. In this way teachers extended their virtual territory to become dynamic actors in information networks and gained new means of satisfying the universal human need for connection with others. At the same time, teachers were empowered from consumers of information into its producers, from objects into subjects. Web-based teaching opens up considerable opportunities for school geography. With mobile devices, information can be collected and stored in field conditions; with software, it can be converted from one form to another. The authentic and up-to-date materials of the Internet bring concreteness and interest to studying; models, simulations, and geographic information illustrate phenomena. Communication and collaboration tools, as well as social information spaces, strengthen cooperation. Keywords: web-based teaching, internet, virtual worlds, geography, innovations
Abstract:
When genome sections of wild Solanum species are bred into the cultivated potato (S. tuberosum L.) to obtain improved potato cultivars, the new cultivars must be evaluated for their beneficial and undesirable traits. Glycoalkaloids present in Solanum species are known for their toxic as well as their beneficial effects on mammals. On the other hand, glycoalkaloids in potato leaves provide natural protection against pests. Breeding affects the glycoalkaloid profile of the plant. In addition, the starch properties of potato tubers can be affected by breeding, because the crystalline properties are determined by the botanical source of the starch. Starch content and composition affect the texture of cooked and processed potatoes. In order to determine glycoalkaloid contents in Solanum species, a method for the simultaneous separation of glycoalkaloids and aglycones by reversed-phase high-performance liquid chromatography (HPLC) was developed. Clean-up of foliage samples was improved by using a silica-based strong cation exchanger instead of octadecyl phases in solid-phase extraction. The glycoalkaloids alpha-solanine and alpha-chaconine were detected in potato tubers of cvs. Satu and Sini. The total glycoalkaloid concentration of non-peeled and immature tubers was at an acceptable level (under 20 mg/100 g FW) in cv. Satu, whereas the concentration in cv. Sini was 23 mg/100 g FW. Solanum species (S. tuberosum, S. brevidens, S. acaule, and S. commersonii) and interspecific somatic hybrids (brd + tbr, acl + tbr, cmm + tbr) were analyzed for their glycoalkaloid contents using liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS). The concentrations in the tubers of the brd + tbr and acl + tbr hybrids remained under 20 mg/100 g FW. The glycoalkaloid concentration in the foliage of the Solanum species was between 110 and 890 mg/100 g FW. However, the concentration in the foliage of S. acaule was as low as 26 mg/100 g FW.
The total concentrations in the brd + tbr, acl + tbr, and cmm + tbr hybrid foliages were 88, 180, and 685 mg/100 g FW, respectively. Glycoalkaloids of both parental plants, as well as new combinations of aglycones and saccharides, were detected in the somatic hybrids. The hybrids contained mainly spirosolanes and glycoalkaloid structures lacking a 5,6-double bond in the aglycone. Based on these results, the glycoalkaloid profiles of the hybrids may represent a safer and more beneficial spectrum of glycoalkaloids than that found in currently cultivated varieties. The starch nanostructure of three cultivars (Satu, Saturna, and Lady Rosetta), the wild species S. acaule, and interspecific somatic hybrids was examined by wide-angle and small-angle X-ray scattering (WAXS, SAXS). For the first time, the measurements were conducted on fresh potato tuber samples. The crystallinity of the starch, the average crystallite size, and the lamellar distance were determined from the X-ray patterns. No differences in starch nanostructure between the three cultivars were detected. However, tuber immaturity was detectable by X-ray scattering methods when large numbers of immature and mature samples were measured and the results compared. The present study shows that no significant changes occurred in the nanostructures of the starches resulting from hybridizations of potato cultivars.
Resumo:
Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments because they support decoupled and asynchronous many-to-many information dissemination. Event systems are widely used because asynchronous messaging provides a flexible alternative to RPC (Remote Procedure Call). They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. The filters are organized into a routing table in order to forward incoming events to the proper subscribers and neighbouring routers. This thesis addresses the optimization of content-based routing tables organized using the covering relation and presents novel data structures and configurations for improving local and distributed operation. Data structures are needed for organizing filters into a routing table that supports efficient matching and runtime operation. We present novel results on dynamic filter merging and on the integration of filter merging with content-based routing tables. In addition, the thesis examines the cost of client mobility under different protocols and routing topologies. We also present a new matching technique called temporal subspace matching, which combines two new features. The first feature, temporal operation, supports notifications, or content profiles, that persist in time. The second feature, subspace matching, allows more expressive semantics, because notifications may contain intervals and may be defined as subspaces of the content space. We also present an application of temporal subspace matching to metadata-based continuous collection and object tracking.
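The covering relation the thesis builds on can be illustrated with a minimal sketch. The filter model below (attributes constrained to closed numeric intervals) and all names are illustrative assumptions, not the thesis's actual filter language: filter F1 covers F2 when every notification matched by F2 is also matched by F1, so a covered filter need not be forwarded upstream.

```python
# Minimal sketch of the covering relation for content-based filters.
# A filter maps attribute names to closed numeric intervals; a
# notification matches if every constrained attribute lies in its
# interval. (Illustrative model only; real filter languages are richer.)

def matches(filt, notification):
    """True if the notification satisfies every interval constraint."""
    return all(
        attr in notification and lo <= notification[attr] <= hi
        for attr, (lo, hi) in filt.items()
    )

def covers(f1, f2):
    """f1 covers f2 if every notification matching f2 also matches f1:
    f2 constrains each attribute f1 constrains, with a contained interval."""
    return all(
        attr in f2 and lo1 <= f2[attr][0] and f2[attr][1] <= hi1
        for attr, (lo1, hi1) in f1.items()
    )

def insert_filter(table, new):
    """Keep the routing table minimal: a covered filter adds nothing,
    and a new filter displaces the entries it covers."""
    if any(covers(existing, new) for existing in table):
        return table  # already covered: nothing to propagate upstream
    return [f for f in table if not covers(new, f)] + [new]
```

With this minimality invariant, the table forwarded to neighbouring routers stays as small as the covering relation allows, which is the local optimization the thesis then extends with filter merging.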
Resumo:
In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for the more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope: mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work that aims to reconcile these two developments. XML, as a highly redundant text-based format, is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interface to the system itself, the programming interface to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging by addressing these areas. The work centers on implementing the proposals in a form usable on real mobile devices, and the experiments are performed on actual devices and real networks using the messaging system implemented as part of this work. The experimentation is extensive and, because several different devices were used, also provides a glimpse of what the performance of these systems may look like in the future.
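The redundancy argument can be made concrete with a toy sketch. This is emphatically not the dissertation's serialization format; it only illustrates, under invented framing bytes, the general idea shared by compact XML encodings: repeated element names dominate a small message, so sending each name once and a one-byte token thereafter shrinks it substantially.

```python
# Toy illustration of XML's textual redundancy (NOT the dissertation's
# actual format): element names are replaced by one-byte tokens after
# their first occurrence, in the spirit of compact XML encodings.

def toy_xml(events):
    """Render SAX-like (kind, value) events as ordinary textual XML."""
    parts = []
    for kind, value in events:
        if kind == "start":
            parts.append(f"<{value}>")
        elif kind == "end":
            parts.append(f"</{value}>")
        else:  # "text"
            parts.append(value)
    return "".join(parts).encode()

def toy_encode(events):
    """Encode the same events compactly: each element name is sent as
    a literal once, then referred to by a one-byte token."""
    table, out = {}, bytearray()
    for kind, value in events:
        if kind == "start" and value in table:
            out += bytes([0x01, table[value]])         # tokenised start
        elif kind == "start":
            table[value] = len(table)
            out += b"\x02" + value.encode() + b"\x00"  # literal + define
        elif kind == "end":
            out += bytes([0x03, table[value]])         # tokenised end
        else:
            out += b"\x04" + value.encode() + b"\x00"  # text stays literal
    return bytes(out)

# A message with 20 repeated <item> elements makes the saving visible.
events = [("start", "list")] + [
    e for i in range(20)
    for e in [("start", "item"), ("text", str(i)), ("end", "item")]
] + [("end", "list")]
```

On this toy message the tokenised form is roughly half the size of the textual XML; the dissertation's contribution is evaluating real formats and protocols of this kind on actual mobile devices.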
Resumo:
The Ajax approach has outgrown its origin as shorthand for "Asynchronous JavaScript + XML". Three years after its naming, Ajax has been widely adopted by web applications, and there is therefore growing interest in using those applications on mobile devices. This thesis evaluates the presentational capability and measures the performance of five mobile browsers on the Apple iPhone and the Nokia N95 and N800. Performance is benchmarked through user-experienced response times measured with a stopwatch. Twelve Ajax toolkit examples and eight production-quality applications are targeted, all except one in their real environments. In total, over 1750 observations are analyzed and included in the appendix. Communication delays are not considered; the network connection type is WLAN. The results indicate that the initial loading time of an Ajax application can often exceed 20 seconds; content reordering may be used to partially overcome this limitation. Proper testing is the key to success: the selected browsers are capable of presenting Ajax applications if their differing implementations are overcome, perhaps using a suitable toolkit.