Abstract:
Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems (MASs). In the past few years many ACLs, such as KQML and FIPA-ACL, have been proposed for Multi-Agent Systems. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Adopting these ACLs, and mainly the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal (at the Knowledge Level) with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example, deadlock or starvation). The main contribution of this thesis is the design and implementation of NOWHERE, a platform supporting Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services, and because it supports different message-transport middleware, it can be adapted to various scenarios. In this thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.
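The abstract does not spell out FT-ACL's primitives, so the following is only a hypothetical sketch of the general idea of fault-tolerant, continuation-style asking among Knowledge Level agents; all names, signatures, and the failure model are invented for illustration:

```python
import random

class Agent:
    """Hypothetical Knowledge Level agent with fault-tolerant asking.

    The idea: each query carries a success and a failure continuation,
    so a crashed peer never blocks the asking agent. Names and
    semantics below are illustrative, not the thesis' actual API.
    """

    def __init__(self, name, registry):
        self.name = name
        self.registry = registry  # name -> Agent, the set of known peers

    def answer(self, query):
        # A real agent would consult its knowledge base here.
        return f"{self.name} answers '{query}'"

    def ask_everybody(self, query, on_success, on_failure):
        """Ask all known peers; tolerate individual agent failures."""
        alive_answers = []
        for peer in self.registry.values():
            try:
                if random.random() < 0.3:      # simulate a crashed agent
                    raise ConnectionError(peer.name)
                alive_answers.append(peer.answer(query))
            except ConnectionError:
                continue                        # a dead peer is skipped, not fatal
        if alive_answers:
            on_success(alive_answers)
        else:
            on_failure(query)                   # invoked only if no peer survives

registry = {}
agents = [Agent(f"a{i}", registry) for i in range(4)]
registry.update({a.name: a for a in agents})
agents[0].ask_everybody(
    "price of X?",
    on_success=lambda answers: print("got:", answers),
    on_failure=lambda q: print("nobody could answer", q),
)
```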
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer them. As a consequence, a huge number of papers are continuously being written and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second occurs when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions used to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau, based on lattice-free triangles, along with some preliminary computational results.
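For context on the normalization discussion above: the standard cut-generating LP (CGLP) for a split disjunction, in its textbook form (due to Balas, not specific to this thesis), makes explicit why a normalization condition is needed; without the last constraint the feasible multipliers form a cone and the LP is unbounded.

```latex
% CGLP for the split disjunction
% (\pi^T x \le \pi_0) \lor (\pi^T x \ge \pi_0 + 1) on P = \{x : Ax \ge b\},
% seeking a cut \alpha^T x \ge \beta maximally violated by the point x^*:
\begin{align*}
\min_{\alpha,\beta,u,v,u_0,v_0}\quad & \alpha^T x^* - \beta \\
\text{s.t.}\quad & \alpha = u^T A - u_0\,\pi, \\
                 & \alpha = v^T A + v_0\,\pi, \\
                 & \beta \le u^T b - u_0\,\pi_0, \\
                 & \beta \le v^T b + v_0\,(\pi_0 + 1), \\
                 & u, v \ge 0,\quad u_0, v_0 \ge 0, \\
                 & \mathbf{1}^T u + \mathbf{1}^T v + u_0 + v_0 = 1
                   \quad\text{(normalization, truncating the cone).}
\end{align*}
```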
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver; a sketch of the paradigm is given below. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general-purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
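A minimal sketch of the destroy-and-repair loop just described, on an invented instance; for self-containment the MIP-based exploration of the exponential neighborhood is replaced by a simple cheapest-insertion repair, so this illustrates the paradigm, not the thesis' actual algorithm:

```python
import math
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def destroy(tour, k, rng):
    """Randomly remove k customers (never the depot at index 0)."""
    removed = rng.sample(tour[1:], k)
    return [c for c in tour if c not in removed], removed

def repair(partial, removed, dist):
    """Cheapest insertion; the thesis instead re-optimizes this
    neighborhood with a general-purpose MIP solver."""
    tour = partial[:]
    for c in removed:
        best_pos, best_delta = None, math.inf
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            delta = dist[a][c] + dist[c][b] - dist[a][b]
            if delta < best_delta:
                best_pos, best_delta = i + 1, delta
        tour.insert(best_pos, c)
    return tour

def destroy_and_repair(dist, iters=1000, k=3, seed=0):
    rng = random.Random(seed)
    best = list(range(len(dist)))          # depot = 0, then customers
    for _ in range(iters):
        partial, removed = destroy(best, k, rng)
        candidate = repair(partial, removed, dist)
        if tour_cost(candidate, dist) < tour_cost(best, dist):
            best = candidate               # accept only improvements
    return best

# Tiny random symmetric instance, for illustration only.
rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(12)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(tour_cost(destroy_and_repair(dist), dist))
```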
Abstract:
The main aim of this thesis is strongly interdisciplinary: it involves and presumes knowledge of neurophysiology, to understand the mechanisms underlying the studied phenomena; knowledge and experience of electronics, necessary for the hardware experimental set-up used to acquire neuronal data; and of informatics and programming, to write the code needed to control the subjects' behaviour during experiments and the visual presentation of stimuli. Finally, neuronal and statistical models must be well understood to help interpret the data. The project started with a careful bibliographic survey: to date, the mechanisms of the perception of heading (or direction of motion) are still poorly understood. The main interest is to understand how visual information relative to our own motion is integrated with eye position information. To investigate the cortical response to visual motion stimuli and its integration with eye position, we decided to study an animal model, using optic flow expansions and contractions as visual stimuli. In the first chapter of the thesis, the basic aims of the research project are presented, together with the reasons why it is interesting and important to study the perception of motion. Moreover, this chapter describes the methods my research group considered most adequate to contribute to the scientific community and underlines my personal contribution to the project. The second chapter presents an overview of the background needed to follow the main part of the thesis: it starts with a brief introduction to the central nervous system and cortical functions, then treats association areas, the main target of our study, in more depth. Furthermore, it explains why studies on animal models are necessary to understand mechanisms at the cellular level that could not be addressed in any other way. In the second part of the chapter, the basics of electrophysiology and cellular communication are presented, together with traditional neuronal data analysis methods. The third chapter is intended to be a helpful resource for future work in the laboratory: it presents the hardware used for the experimental sessions, how to control animal behaviour during the experiments by means of C routines and dedicated software, and how to present visual stimuli on a screen. The fourth chapter is the core of the research project and the thesis. In the methods, the experimental paradigms, visual stimuli and data analysis are presented. In the results, the responses of cells in area PEc to visual motion stimuli combined with different eye positions are shown. In brief, this study led to the identification of different cellular behaviours in relation to the focus of expansion (the direction of motion given by the optic flow pattern) and eye position. The originality and importance of the results are pointed out in the conclusions: this is the first study aimed at investigating the perception of motion in this particular cortical area. In the last paragraph, a neural network model is presented, whose aim is to simulate the pre-saccadic and post-saccadic responses of neurons in area PEc during eye movement tasks. The data presented in chapter four are further analysed in chapter five. The analysis started from the observation of the neuronal responses during a 1 s time period in which the visual stimulation was constant. It was clear that the cells' activities showed oscillations in time that had been neglected by the previous analysis based on mean firing frequency.
The results distinguished two cellular behaviours by their response characteristics: some neurons showed oscillations that changed depending on eye and optic flow position, while others kept the same oscillation characteristics independently of the stimulus. The last chapter discusses the results of the research project, comments on the originality and interdisciplinarity of the study and proposes some future developments.
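As a hedged illustration of the contrast drawn above (mean firing frequency versus oscillations within the 1 s window), the following sketch uses a synthetic spike train; the 8 Hz modulation and all rates are invented, not PEc data:

```python
import numpy as np

# Synthetic spike train over a 1 s window: a Poisson process whose rate
# is modulated by an 8 Hz oscillation (an illustrative stand-in).
rng = np.random.default_rng(0)
dt, T, f_osc = 0.001, 1.0, 8.0
t = np.arange(0, T, dt)
rate = 30.0 * (1.0 + 0.8 * np.sin(2 * np.pi * f_osc * t))   # spikes/s
spikes = rng.random(t.size) < rate * dt

# A mean-firing-frequency analysis collapses the whole second to one number...
print("mean rate:", spikes.sum() / T, "spikes/s")

# ...whereas the spectrum of the binned spike train exposes the oscillation.
counts = spikes.astype(float) - spikes.mean()
power = np.abs(np.fft.rfft(counts)) ** 2
freqs = np.fft.rfftfreq(counts.size, d=dt)
band = (freqs > 1) & (freqs < 50)
print("dominant frequency:", freqs[band][np.argmax(power[band])], "Hz")
```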
Abstract:
Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and proofs related to the metatheory of programming languages. This last field, an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed, and to difficulties related to the management of notions typical of programming languages, like variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, part I covers: - the results of our effort to provide a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code; - a discussion of the implementation of two tactics, providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art. In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours: - a discussion of basic issues we encountered in our formalizations of part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita; - a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; this work adopts, for the encoding of binding, an extension of Masahiko Sato's canonical locally named representation that we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
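To make "inversion of inductive predicates" concrete: inversion analyses a hypothesis built from an inductive predicate, discards the constructors whose indices cannot match, and keeps the premises of the remaining ones. A minimal illustration in Lean 4 (Lean, not Matita, syntax; the Even predicate is a standard toy example, not drawn from the thesis):

```lean
inductive Even : Nat → Prop where
  | zero : Even 0
  | step : (n : Nat) → Even n → Even (n + 2)

-- Inverting `Even (n + 2)`: the `zero` case is impossible (0 ≠ n + 2),
-- so `cases` keeps only `step`, exposing its premise `Even n`.
example (n : Nat) (h : Even (n + 2)) : Even n := by
  cases h with
  | step _ h' => exact h'
```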
Abstract:
The Web is constantly evolving: thanks to the Web 2.0 transition, the new features of HTML5 and the advent of cloud computing, the gap between Web and traditional desktop applications is narrowing. Web apps are more and more widespread and bring several benefits compared to traditional ones. On the other hand, the reference technologies, JavaScript primarily, are not keeping pace, so a paradigm shift is taking place in Web programming, and many new languages and technologies are coming out. The first objective of this thesis is to survey the reference and state-of-the-art technologies for client-side Web programming, focusing in particular on concurrency and asynchronous programming. Taking into account the problems that affect existing technologies, we then design simpAL-web, an innovative approach to tackling Web app development, based on the agent-oriented programming abstraction and the simpAL language.
Abstract:
This thesis investigated affordances and verbal language to demonstrate the flexibility of embodied simulation processes. Starting from the assumption that both object/action understanding and language comprehension are tied to the context in which they take place, six studies clarified the factors that modulate simulation. The studies in chapters 4 and 5 investigated affordance activation in complex scenes, revealing the strong influence on compatibility effects of the visual context, which included either objects or actions. The study in chapter 6 compared the simulation triggered by visual objects and object names, showing differences depending on the kind of materials processed. The study in chapter 7 tested the predictions of the WAT theory, confirming that the different contexts in which words are acquired lead to the difference typically observed in the literature between concrete and abstract words. The study in chapter 8 on the grounding of abstract concepts tested the mapping of temporal contents onto the spatial frame of reference of the mental timeline, showing that metaphoric congruency effects are not automatic, but flexibly mediated by the context determined by the goals of different tasks. The study in chapter 9 investigated the role of iconicity in verbal language, showing sound-to-shape correspondences with figures of everyday objects, a result that validates the reality of sound symbolism in ecological contexts. On the whole, this evidence favors embodied views of cognition, and supports the hypothesis of a high flexibility of simulation processes. The reported conceptual effects confirm that context plays a crucial role in the emergence of affordances, the activation of metaphoric mappings and the grounding of language. In conclusion, this thesis highlights that, in an embodied perspective, cognition is necessarily situated and anchored to a specific context, as it is sustained by the existence of a specific body immersed in a specific environment.
Abstract:
Without a doubt, one of the biggest changes that affected twentieth-century art is the introduction of words into paintings and, in more recent years, into installations. For centuries, if words were part of a visual composition, they functioned as a reference; strictly speaking, they were used as a guideline for a better perception of the subject represented. With the developments of the twentieth century, words became a very important part of the visual composition, and sometimes embodied the composition itself. On this topic, the American art critic and collector Russell Bowman wrote an interesting article called Words and Images: A Persistent Paradox, in which he examines American and European art of the twentieth century in almost its entirety, dividing it into six “categories of intention”. These categories are based not on the art history timeline, but on the role that language played for specific artists or movements. Taking inspiration from Bowman's article, this paper is structured in three chapters, covering, respectively: words in juxtaposition and free association, words as a means of exploring language structures, and words as a means for political and personal messages. The purpose of this paper is therefore to reflect on the role of language in contemporary art and on the way it has changed from artist to artist.
Abstract:
Flowers attract honeybees using colour and scent signals. Bimodality (having both scent and colour) in flowers leads to increased visitation rates, but how the signals influence each other in a foraging situation is still quite controversial. We studied four basic questions: When faced with conflicting scent and colour information, will bees choose by scent and ignore the “wrong” colour, or vice versa? To get to the bottom of this question, we trained bees on scent-colour combination AX (rewarded) versus BY (unrewarded) and tested them on AY (previously rewarded colour and unrewarded scent) versus BX (previously rewarded scent and unrewarded colour). It turned out that the result depends on stimulus quality: if the colours are very similar (unsaturated blue and blue-green), bees choose by scent. If they are very different (saturated blue and yellow), bees choose by colour. We used the same scents, lavender and rosemary, in both cases. Our second question was: Are individual bees hardwired to use colour and ignore scent (or vice versa), or can this behaviour be modified, depending on which cue is more readily available in the current foraging context? To study this question, we picked colour-preferring bees and gave them extra training on scent-only stimuli. Afterwards, we tested if their preference had changed, and if they still remembered the scent stimulus they had originally used as their main cue. We came to the conclusion that a colour preference can be reversed through scent-only training. We also gave scent-preferring bees extra training on colour-only stimuli, and tested for a change in their preference. The number of animals tested was too small for statistical tests (n = 4), but a common tendency suggested that colour-only training leads to a preference for colour. A preference to forage by a certain sensory modality therefore appears to be not fixed but flexible, and adapted to the bee’s surroundings. Our third question was: Do bees learn bimodal stimuli as the sum of their parts (elemental learning), or as a new stimulus which is different from the sum of the components’ parts (configural learning)? We trained bees on bimodal stimuli, then tested them on the colour components only, and the scent components only. We performed this experiment with a similar colour set (unsaturated blue and blue-green, as above), and a very different colour set (saturated blue and yellow), but used lavender and rosemary for scent stimuli in both cases. Our experiment yielded unexpected results: with the different colours, the results were best explained by elemental learning, but with the similar colour set, bees exhibited configural learning. Still, their memory of the bimodal compound was excellent. Finally, we looked at reverse-learning. We reverse-trained bees with bimodal stimuli to find out whether bimodality leads to better reverse-learning compared to monomodal stimuli. We trained bees on AX (rewarded) versus BY (unrewarded), then on AX (unrewarded) versus BY (rewarded), and finally on AX (rewarded) and BY (unrewarded) again. We performed this experiment with both colour sets, always using the same two scents (lavender and rosemary). It turned out that bimodality does not help bees “see the pattern” and anticipate the switch. Generally, bees trained on the different colour set performed better than bees trained on the similar colour set, indicating that stimulus salience influences reverse-learning.
Abstract:
In 2011 the BBC series SHERLOCK was one of Great Britain's most-exported television productions and was translated into many languages worldwide. One of the challenges in translating it is posed by the series' on-screen text overlays (in short: inserts). The inserts verbalize the protagonist's thoughts, depict written and digital communication, and stand out through their visual salience, at times serving as the sole carriers of verbal communication, which makes them an important aesthetic and narrative device in the series. Interestingly, the translation preserves all stylistic properties of the original inserts. This thesis examines, on the one hand, how on-screen text in film can be described theoretically and, on the other hand, how it can be translated in practice in the way it was done in the German version of Sherlock. For the theoretical description, the on-screen text in Sherlock is first contrasted with subtitling norms along relevant fundamental semiotic dimensions. Furthermore, the relationship between on-screen text and the film image is explored by testing how well various approaches to text-image relations from linguistics, comics research, translation studies and typography can explain the inserts in Sherlock. The practical part examines the translation of the inserts. The translation process for the German version is reconstructed on the basis of an expert interview with the series' dubbing author, who was also responsible for the wording of the inserts. Finally, specific translation problems of the inserts from the second season of SHERLOCK are discussed. It emerges that subtitling norms are not suited to describing inserts, since they are severely restricted in dimensions such as position, graphic design, animation, sound effects and also timing. This can be explained by the historically shaped understanding of subtitles, which are (of necessity) added to the finished film image and sequence as accessories meant to disturb as little as possible, whereas the inserts in SHERLOCK were in some cases even allotted a central place in the image and scene composition as early as the shooting stage. With regard to text-image relations, the strongest parallels are to approaches from comics research, since there too written texts are embedded in the image rather than the other way around. However, these approaches are likewise insufficient for describing motion and sound. Exploring the explanatory reach of other promising concepts, such as interface and usability, remains a goal for future studies. From the expert interview it can be concluded that the translation of inserts is a new, not yet standardized procedure in which idiosyncratic practical solutions are employed for cross-language communication between the various parties involved in the process. For high-quality productions, the involvement of graphic designers is indispensable for replacement-style insert translation as well, at least for the creation of new inserts as translations of filmed text (displays). Here the theoretically possible synergies between language and image experts have not yet been fully exploited. In addition, there is room for improvement with regard to providing careful documentation of the source-language version.
Such documentation would be relevant as reference material for the translation, in particular also for purposes of international quality assurance. Overall, the translated inserts in the German version are of very high quality. Translation problems arise for the genre-typical element of codes, which pose a challenge because of their compactness and their multiple references to the film. Alongside other familiar translation problems such as intertextual references and realia, the question repeatedly arises of how much of the insert and display text shown in the original must be translated. For reasons of visual consistency, new inserts became necessary for the translation of displays. The question arises in particular for filler texts: they serve to represent text and to extend the boundaries of the fictional world depicted, but involve great translation effort while having minimal relevance to the plot.
Abstract:
When healthy observers make a saccade that is erroneously directed toward a distracter stimulus, they often produce a corrective saccade within 100 ms after the end of the primary saccade. Such short inter-saccadic intervals indicate that programming of the secondary saccade has been initiated prior to the execution of the primary saccade, and hence that the two saccades have been programmed concurrently. Here we show that concurrent saccade programming is bilaterally impaired in left spatial neglect, a strongly lateralized disorder of visual attention resulting from extensive right cerebral damage. Neglect patients were asked to make saccades to targets presented left or right of fixation while disregarding a distracter presented in the opposite hemifield. We examined those experimental trials on which participants first made a saccade to the distracter, followed by a secondary (corrective) saccade to the target. Compared to healthy and right-hemisphere damaged control participants, the proportion of secondary saccades directing gaze to the target, instead of bringing it even closer to the distracter, was bilaterally reduced in neglect patients. In addition, the characteristic reduction of secondary saccade latency observed in both control groups was absent in neglect patients, whether the secondary saccade was directed to the left or right hemifield. This pattern is consistent with a severe, bilateral impairment of concurrent saccade programming in left spatial neglect.
Comparative Analysis of Russian and French Prosodies: Theoretical, Experimental and Applied Aspects
Abstract:
Experience shows that in teaching the pronunciation of a foreign language, it is the native syllable stereotype that resists correction most strongly. This is because the syllable is the basic unit of the perception and production of speech, and syllabic production is highly automatic and to some degree determines the prosody of speech at all levels: accent, rhythm, phrase, etc. The results of psycho-physiological studies show that the human acoustic analyser is a typical contemplator organ: new acoustic qualities are perceived through their inclusion into the already existing system of values characteristic of the mother tongue. This results in the adaptation of the perception, and hence the production, of foreign speech to native patterns. The less conscious the perception of the unit and the more 'primitive' its status, the greater the degree of its auditory assimilation, and the syllable is certainly among the less controllable linguistic units. The group carried out a complex investigation of the French and Russian languages at the level of syllable realisation, focusing on the stressed syllable of both open and closed types. The useful acoustic characteristics of the French/Russian syllable pattern were determined by identifying a typical syllable pattern within the system of each of the two languages, comparing these patterns to establish their contrasting features, and observing and systematising deviations from the pattern typical of the French/Russian language teaching situation. The components of the syllable pattern shown to need particular attention in teaching French pronunciation to Russian native speakers were intensity, fundamental frequency, and duration; a sketch of how these can be measured is given below. The group then developed a method of correction which combines the auditory and visual channels of sound signal perception and tested this method with groups of Russian students of different levels.
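As a sketch of how the three components singled out above (intensity, fundamental frequency, duration) can be extracted from a recording, assuming the librosa library and a placeholder audio file containing one stressed syllable:

```python
import librosa
import numpy as np

# Placeholder file: one recorded syllable, e.g. a stressed /pa/.
y, sr = librosa.load("syllable.wav", sr=None)

# Intensity contour: short-time RMS energy.
rms = librosa.feature.rms(y=y)[0]

# Fundamental frequency contour via the pYIN pitch tracker.
f0, voiced, _ = librosa.pyin(y, fmin=60.0, fmax=400.0, sr=sr)

# Duration: total length, plus a crude voiced-portion estimate.
duration = len(y) / sr
hop = 512  # librosa's default hop length for rms and pyin
voiced_duration = np.nansum(voiced) * hop / sr

print(f"duration {duration:.3f} s, voiced {voiced_duration:.3f} s")
print(f"mean F0 {np.nanmean(f0):.1f} Hz, peak RMS {rms.max():.4f}")
```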
Abstract:
A semantic approach to political conflict first emerged in the 1930s; it provides the methodological foundations for the description of political conflicts, in particular as the correlation between the language of description and reality. Any military or political confrontation presupposes axiological, conceptual and ideological confrontation. The form of an adequate description can only be comprehended if the characteristic features of its language (structure) and thesaurus are revealed. Admitting the possibility of different descriptions implies the necessity of analysing this possible ambiguity, i.e. the characteristic features of the language which enable us to form various statements, including mutually exclusive ones. The insoluble task of finding a middle ground between the viewpoints of the conflicting parties should be replaced by soluble procedures for explaining and assessing the conflicting axiologies. For the description of conflict situations, when it is essential to represent various positions within a uniform system, an apparatus of model semantics seems the most appropriate, both for generating alternatives and for bringing them together in a modal system of worlds in which the procedures of transition from one world to another (i.e. the transworld compatibility between them) are also reflected. Reality is reconstructed not as a sort of middle ground between the mutually exclusive approaches, nor as their sum, but as a result of the overlapping of various worlds and of the procedures of transition from one state of affairs to another. The description of a conflict is therefore seen as a system of worlds connected by modal relations, with the system of worlds emerging as the reality to be described; a toy illustration is sketched below. This approach makes it possible to describe the processes from the points of view of the participating parties and, at the same time, to reveal their basic attitudes. The main idea of this research is shown by the problems analysed: the description of conflict as methodology; language and behaviour (general problems of semiotic description); and the logico-semantic analysis of the notions of "problem and conflict", "Genesis and Chronology", and "the recurrent model of the (historical) explanation and interpretation of the conflict". Zolyan used data on the Karabagh conflict to demonstrate the dependence of the structure of semio-cultural codes on current political developments and considered post-Soviet history as a semio-cultural problem. He sought to reveal the logic of manipulations of history, and proposed the logic of preferences as a possible instrument for achieving compromise.
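A toy illustration, not Zolyan's actual formalism: each party's description is modeled as a "world" (a set of accepted propositions), worlds are connected by an accessibility relation standing in for the transition procedures, and "reality" is reconstructed as the overlap of worlds rather than a middle ground. All propositions are invented:

```python
# Each party's description of the conflict is a world: a set of
# propositions that party accepts. Strings are placeholders.
worlds = {
    "party_A": {"status(X) = ours", "action(Y) = defense"},
    "party_B": {"status(X) = theirs", "action(Y) = aggression"},
    "observer": {"action(Y) = disputed"},
}

# Accessibility: which descriptions each world can "translate" into.
accessible = {
    "party_A": {"observer"},
    "party_B": {"observer"},
    "observer": {"party_A", "party_B"},
}

def possibly(world, proposition):
    """<>p: p holds in some world accessible from `world`."""
    return any(proposition in worlds[w] for w in accessible[world])

# Reality as the overlap of worlds rather than a "middle ground":
common_ground = set.intersection(*worlds.values())
print("common ground:", common_ground or "none")
print("A can see Y as disputed:", possibly("party_A", "action(Y) = disputed"))
```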
Abstract:
Assessment of soil disturbance on the Custer National Forest was conducted during two summers to determine whether the U.S. Forest Service Forest Soil Disturbance Monitoring Protocol (FSDMP) was able to distinguish post-harvest soil conditions in a chronological sequence of sites harvested using different ground-based logging systems. Results from the first year of sampling suggested that the FSDMP point sampling method may not be sensitive enough to measure post-harvest disturbance in stands with low levels of disturbance. Therefore, a revised random transect method was used during the second sampling season to determine the actual extent of soil disturbance in these cutting units. Using the combined data collected over both summers, I detected statistically significant differences (p < 0.05) in fine-fraction bulk density measurements between FSDMP disturbance classes across all sites (see the sketch after this abstract). Disturbance class 3 (most severe) had the highest reported bulk density, which suggests that the FSDMP visual class estimates are adequately defined, allowing correlations to be made between visual disturbance and actual soil physical characteristics. Forest site productivity can be defined by a site's ability to retain carbon and convert it to above- and belowground biomass. However, forest management activities that alter basic site characteristics have the potential to alter productivity. Soil compaction is one critical management impact that is important to understand; compaction has been shown to impede the root growth potential of plants, reduce water infiltration rates (increasing erosion potential), and alter plant-available water and nutrients, depending on soil texture. A new method to assess ground cover, erosion, and other soil disturbances was recently published by the U.S. Forest Service as the Forest Soil Disturbance Monitoring Protocol (FSDMP). The FSDMP allows soil scientists to visually assign a disturbance class estimate (0 – none, 1, 2, 3 – severe) from field measures of consistently defined soil disturbance indicators (erosion, fire, rutting, compaction, and platy/massive/puddled structure) in small circular (15 cm) plots, in order to compare soil quality properties between pre- and post-harvest conditions. Using this protocol we were able to determine that ground-based timber harvesting activities occurring on the Custer National Forest are not reaching the 15% maximum threshold for detrimental soil disturbance outlined by the Region 1 Soil Quality Standards.
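A hedged sketch of the statistical comparison reported above (fine-fraction bulk density across FSDMP disturbance classes, judged at p < 0.05); the test structure follows the study, but all sample values are invented:

```python
from scipy import stats

# Hypothetical fine-fraction bulk density samples (g/cm^3) grouped by
# FSDMP visual disturbance class; values are invented for illustration.
class0 = [0.98, 1.02, 1.05, 0.95, 1.00]
class1 = [1.05, 1.10, 1.08, 1.12, 1.03]
class2 = [1.15, 1.18, 1.12, 1.20, 1.16]
class3 = [1.25, 1.30, 1.22, 1.28, 1.31]   # most severe: highest density

# One-way ANOVA across the four classes, judged at alpha = 0.05.
f_stat, p_value = stats.f_oneway(class0, class1, class2, class3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```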
Abstract:
The aging population has recently become a pressing issue for modern societies around the world, and two important problems remain to be solved. The first is how to continuously monitor, in natural living environments, the movements of people who have suffered a stroke, in order to provide more valuable feedback to guide clinical interventions. The second is how to guide elderly people effectively when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people increases. However, it has been extremely challenging for motion capture to go beyond laboratory environments and obtain accurate measurements of human physical activity in free-living environments, and navigation in free-living environments also poses problems such as denied GPS signals and the moving objects commonly present in such environments. This thesis seeks to develop new technologies to enable accurate motion tracking and positioning in free-living environments. It comprises three specific goals, pursued using our developed IMU board and a camera from The Imaging Source company: (1) to develop a robust, real-time orientation algorithm using only the measurements from the IMU; (2) to develop robust distance estimation in static free-living environments, in order to estimate people's position and navigate them, while the scale ambiguity problem that usually appears in monocular camera tracking is solved by integrating the data from the visual and inertial sensors; (3) in the case of moving objects viewed by the camera in free-living environments, to first design a robust scene segmentation algorithm and then estimate the motion of the vIMU system and of the moving objects separately. To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on a deterministic approach and a frequency-based approach, using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU system is proposed to develop and implement fast vIMU ego-motion estimation algorithms, in which the orientation is first estimated in real time from the MARG sensors and then used to estimate the position based on the data from the visual and inertial sensors. For the case of moving objects viewed by the camera in free-living environments, a robust scene segmentation algorithm is first proposed to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Finally, corresponding simulations and experiments have been carried out.
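The AGOF itself is defined in the thesis; the following is only a rough sketch of the general idea of an adaptive-gain orientation filter, reduced to a single tilt angle: gyroscope integration supplies the short-term estimate, the accelerometer's gravity direction supplies the long-term correction, and the gain shrinks whenever the accelerometer appears to be measuring motion rather than orientation. All constants and the 1-D simplification are invented for illustration:

```python
import numpy as np

def adaptive_gain(accel, g=9.81, base_gain=0.02):
    """Shrink the correction gain when |accel| deviates from gravity,
    i.e. when the accelerometer is measuring motion, not orientation."""
    error = abs(np.linalg.norm(accel) - g) / g
    return base_gain * max(0.0, 1.0 - 10.0 * error)

def step(theta, gyro_rate, accel, dt):
    """One filter update for a single tilt angle (roll), in radians."""
    theta_gyro = theta + gyro_rate * dt              # short-term: integrate gyro
    theta_acc = np.arctan2(accel[1], accel[2])       # long-term: gravity direction
    k = adaptive_gain(accel)
    return (1.0 - k) * theta_gyro + k * theta_acc    # complementary blend

# Simulate: constant true roll of 0.3 rad, biased gyro, noisy accelerometer.
rng = np.random.default_rng(1)
theta, dt, true_roll = 0.0, 0.01, 0.3
for _ in range(2000):
    gyro = 0.02 + rng.normal(0, 0.01)                # rad/s: pure bias + noise
    g_body = np.array([0.0, np.sin(true_roll), np.cos(true_roll)]) * 9.81
    accel = g_body + rng.normal(0, 0.05, 3)
    theta = step(theta, gyro, accel, dt)
print(f"estimated roll: {theta:.3f} rad (true 0.3)")
```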