13 results for Gestural interfaces

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

Reuse of existing, carefully designed and tested software improves the quality of new software systems and reduces their development costs. Object-oriented frameworks provide an established means for software reuse on the levels of both architectural design and concrete implementation. Unfortunately, due to frameworks' complexity, which typically results from their flexibility and overall abstract nature, there are severe problems in using frameworks. Patterns are generally accepted as a convenient way of documenting frameworks and their reuse interfaces. In this thesis it is argued, however, that mere static documentation is not enough to solve the problems related to framework usage. Instead, proper interactive assistance tools are needed in order to enable systematic framework-based software production. This thesis shows how patterns that document a framework's reuse interface can be represented as dependency graphs, and how dynamic lists of programming tasks can be generated from those graphs to assist the process of using a framework to build an application. This approach to framework specialization combines the ideas of framework cookbooks and task-oriented user interfaces. Tasks provide assistance in (1) creating new code that complies with the framework reuse interface specification, (2) assuring the consistency between existing code and the specification, and (3) adjusting existing code to meet the terms of the specification. Besides illustrating how task orientation can be applied in the context of using frameworks, this thesis describes a systematic methodology for modeling any framework reuse interface in terms of software patterns based on dependency graphs. The methodology shows how framework-specific reuse interface specifications can be derived from a library of existing reusable pattern hierarchies.
Since the methodology focuses on reusing patterns, it also alleviates the recognized problem of the framework reuse interface specification becoming complicated and unmanageable for frameworks of realistic size. The ideas and methods proposed in this thesis have been tested by implementing a framework specialization tool called JavaFrames. JavaFrames uses role-based patterns that specify the reuse interface of a framework to guide framework specialization in a task-oriented manner. This thesis reports the results of case studies in which JavaFrames and the hierarchical framework reuse interface modeling methodology were applied to the Struts web application framework and the JHotDraw drawing editor framework.
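The abstract's central mechanism, deriving an ordered list of programming tasks from a pattern dependency graph, can be sketched as a topological sort over the graph. This is an illustrative sketch only; the task names and graph are hypothetical and not taken from JavaFrames.

```python
from graphlib import TopologicalSorter

# Hypothetical pattern dependency graph: each task maps to the set of
# tasks that must be completed before it (names are illustrative only).
dependencies = {
    "implement hook method": {"subclass abstract class"},
    "register listener": {"subclass abstract class"},
    "subclass abstract class": set(),
}

def task_list(deps):
    """Derive an ordered programming-task list from the dependency graph."""
    return list(TopologicalSorter(deps).static_order())

# Prerequisite tasks appear before the tasks that depend on them.
print(task_list(dependencies))
```

In a tool like the one described, the graph would be regenerated as the developer's code changes, so the task list stays dynamic rather than being a static cookbook.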

Relevance:

20.00%

Publisher:

Abstract:

Hydrophobins are a group of particularly surface-active proteins. The surface activity is demonstrated in the ready adsorption of hydrophobins to hydrophobic/hydrophilic interfaces such as the air/water interface. Adsorbed hydrophobins self-assemble into ordered films, lower the surface tension of water, and stabilize air bubbles and foams. Hydrophobin proteins originate from filamentous fungi. In the fungi, the adsorbed hydrophobin films enable the growth of fungal aerial structures, form protective coatings and mediate the attachment of fungi to solid surfaces. This thesis focuses on the hydrophobins HFBI, HFBII, and HFBIII from the rot fungus Trichoderma reesei. The self-assembled hydrophobin films were studied both at the air/water interface and on a solid substrate. In particular, using grazing-incidence x-ray diffraction and reflectivity, it was possible to characterize the hydrophobin films directly at the air/water interface. The in situ experiments yielded information on the arrangement of the protein molecules in the films. All the T. reesei hydrophobins were shown to self-assemble into highly crystalline, hexagonally ordered rafts. The thicknesses of these two-dimensional protein crystals were below 30 Å. Similar films were also obtained on silicon substrates. The adsorption of the proteins is likely driven by the hydrophobic effect, but the self-assembly into ordered films also involves specific protein-protein interactions. The protein-protein interactions lead to differences in the arrangement of the molecules in the HFBI, HFBII, and HFBIII protein films, as seen in the grazing-incidence x-ray diffraction data. The protein-protein interactions were further probed in solution using small-angle x-ray scattering. Both HFBI and HFBII were shown to form mainly tetramers in aqueous solution. By modifying the solution conditions and thereby the interactions, it was shown that the association was due to the hydrophobic effect.
The stable tetrameric assemblies could tolerate heating and changes in pH. The stability of the structure facilitates the persistence of these secreted proteins in the soil.

Relevance:

10.00%

Publisher:

Abstract:

Anu Konttinen: Conducting Gestures: Institutional and Educational Construction of Conductorship in Finland, 1973–1993. This doctoral thesis concentrates on those Finnish conductors who have participated in Professor Jorma Panula's conducting class at the Sibelius Academy during the years 1973–1993. The starting point was conducting as a myth, and the goal has been to find its practical opposite: the practical core of the profession. What has been studied is whether one can theorise and analyse this core, and how. The theoretical goal has been to find out what kind of social construction conductorship is as a historical, sociological and practical phenomenon. In practical terms, this means taking the historical and social concept of a great conductor apart to look for the practical core: gestural communication. The most important theoretical tool is the concept of gesture. The idea has been to sketch a theoretical model based on gestural communication between a conductor and an orchestra, and to give one example of the many possible ways of studying the gestures of a conductor.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation considers the birth of modernist and avant-gardist authorship as a reaction against mass society and mass culture. Radical avant-gardism is studied as figurative violence done against the human form. The main argument claims avant-gardist authorship to be an act of masculine autogenesis. This act demands that the human form be worked to an elementary state of disarticulateness, then reformed to the model of the artist's own psychophysical and idiosyncratic vision and experience. This work is connected to concrete mass: the mass of pigment, charcoal, film, or flesh. This mass of the figure is worked to create a likeness in the nervous system of the spectator. The act of violence against the human figure is intended to shock the spectator. This shock is also a state of emotional and perceptional massification. I use the theatrical image as a heuristic tool of performance analysis, connecting figure and spectator into a larger image constituted by relationships of mimesis, in which the figure presents the likeness of the spectator and the spectator the likeness of the figure. Likeness is considered as both gestural (socially mimetic) and sensuous (kinesthetically mimetic). Through this kind of construction one can describe and contextualize the process of violent autogenesis, using particular images as case studies. The avant-gardist author is the author of the theatrical image, not of a particular figure, and through the act of massification the nervous system of the spectator is also part of this image. This is the most radical form and ideology of avant-gardist and modernist authorship, an imagerial will to power.
I construct a model of the gestural-mimetic performer to explicate the nature of the violence done to the human form in specific works: in Mann's novella Death in Venice, in Schiele's and Artaud's self-portraits, in Francis Bacon's paintings, in Beckett's short play Not I, in Orlan's chirurgical performance Operation Omnipresence, in Cindy Sherman's Film/Stills, in Diamanda Galás's recording Vena Cava, and in Hitchcock's Psycho. Mass psychology constructed a phobic picture of the human form's plasticity and of its capability to be shaped by influences coming from both inside and outside: childhood, atavistic organic memories, the urban field of nervous impulses, the unconscious, the capitalist (image) market and democratic mass politics. Violence is then antimimetic and antitheatrical, a paradoxical situation, considering that the mass media and mass audiences created an enormous fascination among artistic elites with the possibilities of theatrical and hypnotic influence. The problem was how to use the theatrical image without the author coming under its influence. In this work one possible answer is provided: by destroying the gestural-mimetic performer, by eliminating representations of mimic body techniques from the performer of the human figure (a painted figure, a photographed figure, a filmed figure or an acted figure, audiovisual or vocal). This work I call the chirurgical operation, which also indicates a co-optation of medical portraiture and medico-cultural diagnoses of the human form. The destruction of the autonomy of the performer was a process parallel to the construction of the new mass-media audience as passive, plastic, feminine. The process created an image of a new kind of autotelic masculine author-hero, freed from the human form in its bourgeois, aristocratic, classical and popular versions.

Relevance:

10.00%

Publisher:

Abstract:

Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory. Nevertheless, in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is called a retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long time periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking.
In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and by the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, like temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. In another study it was demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized, because access is too slow. These findings imply a change in thinking about the design of interfaces. Several novel principles of design are presented, based on the idea of supporting the deployment of LTWM in the main task.
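The retrieval structure described above is, computationally speaking, a hierarchy of cues under which task state is encoded and later retrieved. The following toy model is purely illustrative (all names are hypothetical, and it is not from the dissertation); it only shows why cue-addressed storage survives an interruption while rehearsal-based STWM storage would not.

```python
# Toy model of a retrieval structure: task state is encoded under a
# hierarchical cue path and retrieved by the same cues after an
# interruption. Illustrative only; names are hypothetical.

class RetrievalStructure:
    def __init__(self):
        self.slots = {}  # cue path (tuple) -> encoded item

    def encode(self, cues, item):
        """Store an item under a hierarchical cue path, e.g. ('email', 'recipient')."""
        self.slots[tuple(cues)] = item

    def retrieve(self, cues):
        """Return the item encoded under the cue path, or None if absent."""
        return self.slots.get(tuple(cues))

rs = RetrievalStructure()
rs.encode(("email", "recipient"), "alice@example.com")
# ... an interruption intervenes; the cue-addressed state persists ...
print(rs.retrieve(("email", "recipient")))  # -> alice@example.com
```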

Relevance:

10.00%

Publisher:

Abstract:

In visual search one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, together with the contributions of different stimulus properties such as the luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e., the number of items that can be processed during a single fixation, were measured. It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e., within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within an object type, especially at very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance for different stimulus layouts in word search.
Perceptual span was scale invariant for a 16-fold range of stimulus sizes, i.e., the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance. It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and resolution requirements for the identification of different object types are proposed to constrain the size of the perceptual span, and thus, limit visual search performance. Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data about the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception could be taken into account in the design of user interfaces.
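The adaptive staircase control of presentation duration can be sketched as follows. This is a generic one-up/one-down staircase for illustration, not the exact procedure or parameters used in the study:

```python
# Generic adaptive staircase (illustrative, not the study's exact method):
# the presentation duration is shortened after a correct response and
# lengthened after an error, converging on a threshold search time.

def staircase(respond, start_ms=500.0, step=0.8, trials=40):
    """Run a simple one-up/one-down staircase.

    respond(duration_ms) -> True if the subject found the target.
    Returns the final presentation duration as a threshold estimate.
    """
    duration = start_ms
    for _ in range(trials):
        if respond(duration):
            duration *= step   # correct: make the task harder
        else:
            duration /= step   # error: make the task easier
    return duration

# Simulated observer who succeeds whenever the display is up for >= 200 ms:
estimate = staircase(lambda d: d >= 200.0)
print(round(estimate))  # oscillates near the 200 ms threshold
```

A one-up/one-down rule converges on the 50% point of the psychometric function; real procedures typically use transformed rules (e.g. two-down/one-up) to target other probabilities.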

Relevance:

10.00%

Publisher:

Abstract:

Structuring of the Curriculum Design: Content and Pedagogy Constructing the Whole. The object of this qualitative study is to structure curriculum design by drawing on the characteristics of subject content and pedagogy. The aim is first to outline the forms of content and pedagogy within the National Core Curriculum for Basic Education. By analysing these forms I then aim to construct a general view of the curriculum's structure and its developmental potential as it relates to both current and future pedagogical and intellectual interests. The written curriculum is examined as part of the educational guidance system, which means that it is an administrative and juridical document that governs teacher action and has a pedagogical and intellectual character. Didactical schools, curriculum ideologies and curriculum determinants are all discussed as means of approaching the curriculum model. Curriculum content itself is defined by the different forms and conceptions of knowledge. The representation of curriculum content can be defined as either specific or integrated. Curriculum pedagogy is in turn defined on the basis of the prevailing conception of learning and teaching. The pedagogy within the curriculum can be open or closed depending on the extent of pedagogical freedom allowed. An examination of the pedagogical dimension also covers the interfaces between formal education and informal learning, which must be taken into consideration when developing school pedagogy and therefore also the curriculum. The data of the study consist of two curriculum documents: the Finnish National Core Curriculum for Basic Education issued in 1994 and the present National Core Curriculum for Basic Education issued in 2004. The primary method used in the study is theory-based content analysis.
On the one hand, the aim of the analysis is to determine whether the structure, i.e., the model, of the curricula is built from unconnected, self-contained elements, or whether the separate parts make a coherent whole. On the other hand, the aim is also to examine the pedagogical features the two curricula contain. The basis of the study is not a systematic comparison of the curriculum documents, yet an analysis of two very distinct documents must also be based on an examination of their inherent differences. The results of the study show that the content in the analysed documents is not integrated. The boundaries between divisions are clearly defined, and the curricula are subject-oriented and based on theoretical propositional knowledge. The pedagogy is mainly closed and based on strong guidance of content, structured student evaluation and measurable learning experiences. However, the curriculum documents do have representations of integrated content: the themes covered early on in the core curriculum guidelines of 1994 permeate systematically the different divisions of the curriculum. The core curriculum guidelines of 2004 in turn present skills which create connections between subjects. The guidelines utilise out-of-school environments, accommodate learner experiences, focus on flexible studying and emphasize individual learner needs. These characteristics reveal an open form of pedagogy. In light of these results, it is possible to reach an understanding of the content-related and pedagogical development possibilities of the curriculum. The essential viewpoints are the setting of thematically oriented aims as a basis for content development, the pedagogical structuring of the curriculum on the basis of the learning process, and the purposeful enhancement of connections between curricular content and pedagogy. Keywords: curriculum, curriculum theory, curriculum design, core curriculum guidelines, teaching content, pedagogy

Relevance:

10.00%

Publisher:

Abstract:

My doctoral thesis deals with the roles of microbes and various chemicals in the formation of deposits and biofilms on paper and board machines. In this work, a "deposit" means an accumulation of solid matter on machine surfaces, or at interfaces, in machine circuits intended for transporting pulps, slurries, waters or air. A deposit becomes a "biofilm" when its essential structural component is microbial cells or their products. The working hypothesis of my thesis was that (i) knowledge of the composition of the deposits, and (ii) knowledge of their structure and of their biological, physico-chemical and technical properties, guide the researcher towards environmentally sound means of preventing the formation of unwanted deposits or of dismantling deposits that have already formed. To elucidate the composition and structure of the deposits I used many different analytical tools, such as electron microscopy, confocal laser scanning microscopy (CLSM), energy-dispersive x-ray analysis (EDX), pyrolysis gas chromatography combined with mass spectrometry (Py-GCMS), ion-exchange chromatography, gas chromatography and microbiological analyses. I participated actively in the development of an innovative sensor, based on the backscattering of light, for measuring biofilm growth directly in the water circuits and tanks of a machine. My work showed that many of the chemicals used in papermaking react with each other, producing organic sticky layers on the steel surfaces of the machine circuits. I also found deposits that had been interpreted as microbes under light microscopy, but which electron microscopy revealed to be alum-derived aluminium hydroxide, precipitated at pH 6.8 from the wire waters of a machine running recycled fibre. Many papermakers still use alum as a fixative although the process conditions have changed from acidic to neutral. It is regarded as the papermaker's "aspirin", but my thesis research demonstrated its risks.
I also found organic deposits originating from the saponification of substances such as pitch (calcium soaps), forming a substrate that sustained the growth of stickies on many paper and board machines. I observed bacteria resembling Deinococcus geothermalis in cell morphology growing as colonies firmly attached to the surface of steel coupons immersed in the water circuits of paper machines. These deinococcus-like colonies may serve as a platform, an attachment substrate, for masses of other microbes, which would explain why deposits commonly contain deinococci as a minor, but never the principal, structural component. When the quality of the waters used by paper machines (raw waters, warm water, biologically purified wastewater) is investigated, the measurement method matters greatly. Biofilm growth determined by the coupon immersion method and bacterial contamination measured by the cultivation method correlated poorly with each other, especially when filamentously growing bacteria were involved in the fouling. Environmental concern has forced the closure of the water circuits of paper and board machines. Water recycling and the reuse of process waters raise the process temperature and increase the amounts of colloidal and dissolved substances circulating in the machine. I studied the concentrations in the circulation waters of three mills with different degrees of closure, discharging 0 m³, 0.5 m³ and 4 m³ of wastewater per tonne of product, based on the reuse of purified wastewater. At the zero-discharge mill, large amounts of organically bound carbon (> 10 g L⁻¹) accumulated in the circulation waters, especially as volatile acids (lactic, acetic, propionic and butyric). Sulfates, chlorides, sodium and calcium also accumulated in large amounts, > 1 g L⁻¹ each. Based on 16S rRNA gene sequence analyses, the majority (> 40%) of all bacteria were related, albeit distantly (< 96%), only to the bacterium Enterococcus cecorum. At the mill discharging 4 m³, Bacillus thermoamylovorans and Bacillus coagulans were also found.
The deposits of the mills contained archaea in large numbers, ≥ 10⁸ g⁻¹, but sequence similarity sufficient for identification was found to only one archaeal genus, Methanothrix. The results showed that closing the water circuits of a mill drastically reduced the diversity of the microbial community, but did not prevent the mineralization of dissolved and solid matter.

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states the results of applying the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
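For orientation, the standard definition behind the SLE curves discussed above (textbook background, not a result of the thesis) is the chordal Loewner evolution in the upper half-plane driven by a scaled Brownian motion:

```latex
% Chordal Loewner evolution in the upper half-plane H:
\partial_t g_t(z) \;=\; \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z,
% where for SLE(kappa) the driving function is Brownian motion:
W_t \;=\; \sqrt{\kappa}\, B_t .
```

The real parameter kappa here is the one that selects the universality class of the lattice model; the SLE(kappa, rho) variants mentioned in the abstract modify the driving process W_t.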

Relevance:

10.00%

Publisher:

Abstract:

Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit which is taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and the second paper of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find common martingales for different processes, and in that way to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits, and that those limits are well described by Loewner evolutions with random driving forces.

Relevance:

10.00%

Publisher:

Abstract:

The ever-expanding growth of wireless access to the Internet in recent years has led to a proliferation of wireless and mobile devices connecting to the Internet. This has created the possibility for mobile devices equipped with multiple radio interfaces to connect to the Internet using any of several wireless access network technologies, such as GPRS, WLAN and WiMAX, in order to get the connectivity best suited for the application. These access networks are highly heterogeneous, and they vary widely in characteristics such as bandwidth, propagation delay and geographical coverage. The mechanism by which a mobile device switches between these access networks during an ongoing connection is referred to as vertical handoff, and it often results in an abrupt and significant change in the access link characteristics. The most common Internet applications, such as Web browsing and e-mail, use the Transmission Control Protocol (TCP) as their transport protocol, and the behaviour of TCP depends on end-to-end path characteristics such as bandwidth and round-trip time (RTT). As the wireless access link is most likely the bottleneck of a TCP end-to-end path, the abrupt changes in the link characteristics due to a vertical handoff may affect TCP behaviour adversely, degrading the performance of the application. The focus of this thesis is to study the effect of a vertical handoff on TCP behaviour and to propose algorithms that improve the handoff behaviour of TCP using cross-layer information about the changes in the access link characteristics. We begin this study by identifying the various problems of TCP due to a vertical handoff on the basis of extensive simulation experiments. We use this study as a basis to develop cross-layer assisted TCP algorithms for handoff scenarios involving GPRS and WLAN access networks.
We then extend the scope of the study by developing cross-layer assisted TCP algorithms in a broader context, applicable to a wide range of bandwidth and delay changes during a handoff. Finally, the algorithms developed here are shown to be easily extendable to the multiple-TCP-flow scenario. We evaluate the proposed algorithms by comparison with standard TCP (TCP SACK) and show that they are effective in improving TCP behaviour in vertical handoffs involving a wide range of access-network bandwidths and delays. Our algorithms are easy to implement in real systems and involve modifications to the TCP sender algorithm only. The proposed algorithms are conservative in nature and do not adversely affect the performance of TCP in the absence of cross-layer information.
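The cross-layer idea can be illustrated with a simplified sender-side adjustment: on a handoff notification carrying the new link's bandwidth and RTT, the sender re-targets its congestion-control state toward the new link's bandwidth-delay product (BDP). This is an illustrative sketch under assumed parameters, not the thesis's actual algorithm.

```python
# Illustrative cross-layer TCP sender adjustment on vertical handoff.
# Not the thesis's actual algorithm: on a handoff notification the sender
# re-targets its congestion window to the new link's bandwidth-delay product.

MSS = 1460  # assumed maximum segment size in bytes

def bdp_segments(bandwidth_bps, rtt_s):
    """Bandwidth-delay product of the new access link, in segments."""
    return max(1, int(bandwidth_bps * rtt_s / 8 / MSS))

class TcpSender:
    def __init__(self):
        self.cwnd = 10      # congestion window, in segments
        self.ssthresh = 64  # slow-start threshold, in segments

    def on_handoff(self, new_bw_bps, new_rtt_s):
        """Cross-layer notification: adapt to the new link characteristics."""
        bdp = bdp_segments(new_bw_bps, new_rtt_s)
        self.ssthresh = bdp
        # Shrink immediately when moving to a slower path (e.g. WLAN -> GPRS)
        # to avoid overshooting; slow-start back up when moving to a faster one.
        self.cwnd = min(self.cwnd, bdp)

sender = TcpSender()
sender.on_handoff(new_bw_bps=40_000, new_rtt_s=0.6)  # WLAN -> GPRS-like link
print(sender.cwnd, sender.ssthresh)  # -> 2 2
```

Capping cwnd at the new BDP rather than resetting it to one segment is one way to keep the adjustment conservative, mirroring the abstract's claim that the algorithms do not hurt TCP when cross-layer information is absent.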