994 results for Computational-Linguistic resource
Abstract:
Today's international business is highly dependent on crossing national, cultural and linguistic borders, making communication and linguistic skills a vital part of trade. The purpose of the study is to understand the role of linguistic skills in trust creation in international business relationships. Its sub-objectives are to discuss the importance of linguistic skills in the international business context, to evaluate the strategic value of trust in business relationships, and to analyze the extent to which linguistic skills affect trust formation. The scope is restricted to business-to-business markets. The theoretical background consists of different theories and previous studies related to trust and linguistic skills. Based on this theory, a new LTS-framework is created to demonstrate a process model of linguistic skills affecting trust creation in international B2B relationships. The study is qualitative, using interviews as the data collection method. Altogether eleven interviews were conducted between October 2014 and February 2015. All of the interviewees worked for organizations operating in international B2B markets, spoke multiple languages and had extensive experience in sales and negotiations. The study confirms that linguistic skills are an important part of international business. In many organizations English is used as a lingua franca; however, there are several benefits to speaking the customer's mother tongue. It makes people feel more relaxed, makes the relationship more intimate and allows it to continue developing at a more personal level. From a strategic point of view, trust creates competitive advantage for a company, adding strategic value to the business. The data also supported the view that linguistic skills clearly affect the trust formation process, with speed and ease as the main benefits. Trust was seen to form faster because both parties understand each other better and become more open about information sharing within a shorter period of time. These findings and the importance of linguistic skills in trust creation should be acknowledged by organizations, especially in human resource management. Boundary spanners are in key positions, so special attention should be paid to hiring and educating the employees who take care of the company's relationships. Ultimately, these benefits are economic and affect the profitability of the organization.
Abstract:
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: deciding whether a given sound is a possible sound of a given language; disambiguating a sequence of words; and computing the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization; therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternative linguistic analyses that are strictly superior on descriptive grounds, as well as less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
Abstract:
A novel Swarm Intelligence method for best-fit search, Stochastic Diffusion Search (SDS), is presented that is capable of rapidly locating the optimal solution in the search space. The population-based search mechanisms employed by Swarm Intelligence methods can suffer from a lack of convergence, resulting in ill-defined stopping criteria and loss of the best solution. Conversely, as a result of its resource allocation mechanism, the solutions SDS discovers enjoy excellent stability.
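The abstract does not spell out the algorithm, but the standard SDS test-and-diffusion cycle can be sketched as a best-fit substring search (a minimal illustration; the agent count, iteration budget and toy problem below are assumptions, not the paper's experiments):

```python
import random

def sds_search(model, search_space, n_agents=100, iterations=200):
    """Minimal Stochastic Diffusion Search: locate `model` in `search_space`.

    Each agent holds a hypothesis (a candidate offset). In the *test* phase
    an agent checks one randomly chosen component of the model against the
    search space and becomes active on a match; in the *diffusion* phase an
    inactive agent polls a random agent and either copies an active agent's
    hypothesis or reseeds with a fresh random one.
    """
    max_offset = len(search_space) - len(model)
    hyps = [random.randint(0, max_offset) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iterations):
        # Test phase: probe one random component of each hypothesis.
        for i, h in enumerate(hyps):
            j = random.randrange(len(model))
            active[i] = search_space[h + j] == model[j]
        # Diffusion phase: inactive agents recruit from active ones.
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else random.randint(0, max_offset)
    # The largest cluster of agents marks the best-fit position.
    return max(set(hyps), key=hyps.count)

print(sds_search("quick", "the quick brown fox jumps"))  # typically prints 4
```

Because agents holding the true best-fit hypothesis always pass the test phase, the population rapidly clusters there, which is the stability property the abstract refers to.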
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost-effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions, while IP connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise of the hydrograph, an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan, and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach in the efficient real-time measurement of stormwater systems.
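The forecast-driven control loop described above can be sketched as follows (a hypothetical illustration: the function name, thresholds and interval arithmetic are placeholders, not the deployed node's firmware):

```python
def next_sampling_interval(rain_prob, flow_rate, flow_trend,
                           base_interval=3600, min_interval=60):
    """Shorten the sampling interval when an abrupt change is likely.

    rain_prob  -- forecast probability of precipitation (0..1), e.g. from a
                  public weather-server query
    flow_rate  -- latest local flow estimate (arbitrary units)
    flow_trend -- rate of change of the flow estimate per sample
    All threshold values below are illustrative only.
    """
    interval = base_interval
    if rain_prob > 0.5:      # storm likely: anticipate the hydrograph rise
        interval //= 4
    if abs(flow_trend) > 0.1 * max(flow_rate, 1e-9):
        interval //= 4       # signal is already changing fast
    return max(interval, min_interval)  # respect energy/reagent budget

# Dry, steady conditions -> conserve energy with hourly samples.
print(next_sampling_interval(0.1, 5.0, 0.0))  # 3600
# Storm forecast and a rising hydrograph -> sample every few minutes.
print(next_sampling_interval(0.8, 5.0, 2.0))  # 225
```

The floor `min_interval` plays the role of the hard sampling constraints (reagents, power, sample containers) mentioned in the abstract.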
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The research presents as its central theme the study of the bibliographic record conversion process. The object of study is the conversion of analog bibliographic records to the MARC21 Bibliographic format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The objective of this research is to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, drawing on the linguistic studies of Saussure and Hjelmslev on the manifestations of human language, to support the development of a computational interpreter for the conversion of bibliographic records to the MARC21 Bibliographic format, in which both the semantic value of the informational resource represented and the reliability of the representation can be confirmed. Given these objectives, the methodological trajectory of the research is based on a qualitative approach of an exploratory, descriptive and experimental nature, with recourse to the literature. Contributions on the theoretical plane can be envisaged in the development of questions inherent to the syntactic and semantic aspects of bibliographic records, involving, at the same time, interdisciplinarity between Information Science, Computer Science and Linguistics. Contributions to the practical field are identified by the fact that the study covers the development of Scan for MARC, a computational interpreter that can be adopted by any institution wishing to convert bibliographic record databases to the MARC21 Bibliographic format from description and visualization schemes of bibliographic records (AACR2r and ISBD), an aspect of the research that is considered innovative.
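As a rough illustration of what such an interpreter produces, the sketch below maps a minimal descriptive record to MARC21-style fields. The tag/subfield choices (100 for the main author, 245 for the title, 260 for publication data) follow common MARC21 Bibliographic usage, but the input structure and mapping are hypothetical and are not the Scan for MARC implementation:

```python
def to_marc21(record):
    """Map a minimal descriptive record (dict) to MARC21-style fields.

    A real converter must first parse ISBD/AACR2r punctuation to recover
    these keys; here they are assumed to be given.
    """
    fields = []
    if "author" in record:
        fields.append(("100", "1#", [("a", record["author"])]))
    title_subs = [("a", record["title"])]
    if "subtitle" in record:
        title_subs.append(("b", record["subtitle"]))
    fields.append(("245", "10", title_subs))
    pub_subs = [(code, record[key]) for code, key in
                (("b", "publisher"), ("c", "year")) if key in record]
    if pub_subs:
        fields.append(("260", "##", pub_subs))
    return fields

marc = to_marc21({"author": "Saussure, Ferdinand de",
                  "title": "Cours de linguistique générale",
                  "publisher": "Payot", "year": "1916"})
for tag, ind, subs in marc:
    print(tag, ind, " ".join(f"${c} {v}" for c, v in subs))
```

The syntactic analysis the thesis describes would sit upstream of this step, turning ISBD punctuation into the structured record.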
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Graduate Program in Linguistics and Portuguese Language - FCLAR
Abstract:
Data-intensive Grid applications require huge data transfers between grid computing nodes. These computing nodes, where computing jobs are executed, are usually geographically separated. A grid network that employs optical wavelength-division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned, multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any one of several computing nodes that possess the necessary resources. To reflect reality in job scheduling, the allocation of network resources for data transfer should be taken into consideration; however, few scheduling methods consider communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem, considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in the network. An adaptive routing algorithm is proposed and implemented to accomplish the communication tasks of every job submitted to the network, and four heuristics (FIFO, ESTF, LJF, RS) are implemented for scheduling the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
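As an illustration of joint scheduling, the sketch below implements a FIFO-style heuristic in which a job starts only when both a computing node and a lightpath on its link are available. This is a deliberately simplified model (single-link transfers, transfer and computation assumed to overlap), not the paper's simulator:

```python
def fifo_schedule(jobs, nodes, wavelengths):
    """Greedy FIFO joint scheduling sketch for a Lambda Grid.

    jobs        -- (job_id, duration, link) tuples in arrival order; `link`
                   is the fiber carrying the job's input-data transfer
    nodes       -- dict: node -> time at which the node becomes free
    wavelengths -- dict: link -> list of free-at times, one per lightpath
    A job is placed at the earliest time at which a computing node AND a
    lightpath are jointly free: the network is a first-class resource.
    """
    schedule = []
    for job_id, duration, link in jobs:
        node = min(nodes, key=nodes.get)               # earliest-free node
        lam = min(range(len(wavelengths[link])),
                  key=lambda i: wavelengths[link][i])  # earliest-free lightpath
        start = max(nodes[node], wavelengths[link][lam])
        nodes[node] = start + duration
        wavelengths[link][lam] = start + duration
        schedule.append((job_id, node, start))
    return schedule

nodes = {"A": 0, "B": 0}
wavelengths = {"link1": [0], "link2": [0, 0]}
jobs = [(1, 10, "link1"), (2, 10, "link1"), (3, 5, "link2")]
print(fifo_schedule(jobs, nodes, wavelengths))
# [(1, 'A', 0), (2, 'B', 10), (3, 'A', 10)]
```

Note how job 2 is delayed despite a free node: the single wavelength on link1 is the bottleneck, which is exactly the communication contention the abstract says most schedulers ignore.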
Abstract:
Modern society is gradually becoming multicultural; however, awareness of the importance of multiculturalism has been raised only in the last few years. In the case of Colombia, multiculturalism has existed since the pre-Columbian period, and today there are more than 80 ethnic groups and 65 indigenous languages in the country. The aim of this work is to illustrate the status of indigenous languages in Colombia and to highlight the importance of recognizing, protecting and strengthening the use of these native languages. Subsequently, it is pointed out that linguistic diversity should be considered a resource, not a barrier, to achieving unity in diversity. Finally, ethno-education is presented as an educational program that may guarantee equal linguistic representation in the country.
An experience in creating combinatory lexicographic entries: methods and data from the CombiNet project
Abstract:
The present dissertation aims at simulating the construction of lexicographic entries for an Italian combinatory dictionary based on real linguistic data, extracted from corpora by computational methods. The work is based on the assumption that the intuition of the native speaker, or of the lexicographer who manually extracts and classifies all the relevant data, is not adequate to provide sufficient information on the meaning and use of words. A study of the real use of language is therefore required, and this is particularly true for dictionaries that record the combinatory behaviour of words, where the task of the lexicographer is to identify the typical combinations in which a word occurs. This study is conducted in the framework of the CombiNet project, aimed at studying Italian word combinations and at building an online, corpus-based combinatory lexicographic resource for the Italian language. The work is divided into three chapters. Chapter 1 describes the criteria considered for the classification of word combinations according to the work of Ježek (2011); it also contains a brief comparison between the most important Italian combinatory dictionaries and the BBI Dictionary of Word Combinations, in order to describe how word combinations are treated in these lexicographic resources. Chapter 2 describes the main computational methods used for extracting word combinations from corpora, taking into account the advantages and disadvantages of each method. Chapter 3 focuses on the practical work carried out within the CombiNet project, with reference to the tools and resources used (EXTra, LexIt and the "La Repubblica" corpus). Finally, the extracted data and the lexicographic layout of the lemmas to be included in the combinatory dictionary are discussed, namely the words "acqua" (water), "braccio" (arm) and "colpo" (blow, shot, stroke).
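A typical extraction method of the kind surveyed in Chapter 2 ranks co-occurring word pairs by an association measure such as pointwise mutual information (PMI); a minimal sketch on a toy corpus (the corpus and the minimum-count threshold are invented for illustration and are not the CombiNet data):

```python
import math
from collections import Counter

def pmi_collocations(tokens, min_count=2):
    """Rank adjacent word pairs by pointwise mutual information (PMI),
    a standard association measure for extracting word combinations
    from corpora."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c >= min_count:                       # ignore one-off pairs
            p_pair = c / (n - 1)
            p_indep = (unigrams[w1] / n) * (unigrams[w2] / n)
            scores[(w1, w2)] = math.log2(p_pair / p_indep)
    return sorted(scores, key=scores.get, reverse=True)

corpus = ("acqua potabile scorre acqua potabile manca "
          "un colpo di fortuna un colpo di scena").split()
print(pmi_collocations(corpus))
# [('acqua', 'potabile'), ('un', 'colpo'), ('colpo', 'di')]
```

Real extraction pipelines such as EXTra work over parsed corpora and a range of association measures, but the counting-and-scoring core is the same idea.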
Abstract:
Background: The reduction in the amount of food available for European avian scavengers as a consequence of restrictive public health policies is a concern for managers and conservationists. Since 2002, the application of several sanitary regulations has limited the availability of feeding resources provided by domestic carcasses, but theoretical studies assessing whether the food resources provided by wild ungulates are enough to cover energetic requirements are lacking. Methodology/Findings: We assessed the food provided by a wild ungulate population in two areas of NE Spain inhabited by three vulture species and developed a P System computational model to assess the effects of the available carrion resources on their population dynamics. We compared the real population trend with a hypothetical scenario in which only food provided by wild ungulates was available. Simulation testing of the model suggests that wild ungulates constitute an important food resource in the Pyrenees, and that the vulture population inhabiting this area could grow even if only the food provided by wild ungulates were available. In contrast, in the Pre-Pyrenees there is insufficient food to cover the energy requirements of the avian scavenger guild, which would decline sharply if biomass from domestic animals were not available. Conclusions/Significance: Our results suggest that public health legislation can modify scavenger population trends if a large number of domestic ungulate carcasses disappear from the mountains. In this case, the food provided by wild ungulates might not be enough, and supplementary feeding could be necessary if other alternative food resources are not available (e.g. through the reintroduction of wild ungulates), preferably in European Mediterranean scenarios sharing similar socio-economic conditions where there are low densities of wild ungulates. Managers should anticipate the conservation actions required by assessing food availability and the possible scenarios in order to make the most suitable decisions.
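The supply-versus-demand comparison at the heart of such a model can be illustrated with a toy annual energy balance. All parameter values below are hypothetical placeholders, not the paper's field estimates, and the P System formalism is replaced by a plain function:

```python
def population_trend(n_vultures, ungulate_carcasses, kg_per_carcass=20.0,
                     usable_fraction=0.5, kg_needed_per_bird=80.0,
                     growth=0.04, decline=0.10):
    """Toy annual energy balance: does carrion from wild ungulates cover
    a scavenger population's yearly food requirement?
    All rates and masses are illustrative placeholders."""
    supply = ungulate_carcasses * kg_per_carcass * usable_fraction
    demand = n_vultures * kg_needed_per_bird
    if supply >= demand:
        return round(n_vultures * (1 + growth))   # surplus: population grows
    return round(n_vultures * (1 - decline))      # deficit: population declines

print(population_trend(100, 2000))  # 104: surplus, Pyrenees-like scenario
print(population_trend(100, 500))   # 90: deficit, Pre-Pyrenees-like scenario
```

The paper's P System model tracks far more structure (age classes, three species, stochasticity), but this is the basic threshold logic that separates the two study areas.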
Abstract:
Microsoft Project is one of the most widely used software packages for project management. For the scheduling of resource-constrained projects, the package applies a priority-based procedure using a specific schedule-generation scheme. This procedure performs relatively poorly when compared against other software packages or state-of-the-art methods for resource-constrained project scheduling. In Microsoft Project 2010, it is possible to work with schedules that are infeasible with respect to the precedence or the resource constraints. We propose a novel schedule-generation scheme that makes use of this possibility. Under this scheme, the project tasks are scheduled sequentially while taking into account all temporal and resource constraints that a user can define within Microsoft Project. The scheme can be implemented as a priority-rule-based heuristic procedure. Our computational results for two real-world construction projects indicate that this procedure outperforms the built-in procedure of Microsoft Project.
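A priority-rule heuristic of this kind builds on a serial schedule-generation scheme: scan the tasks in priority order and start each one at the earliest time that satisfies both precedence and resource constraints. The sketch below is a generic textbook-style version with a single renewable resource and unit time periods; it is neither Microsoft Project's internal procedure nor the authors' exact scheme:

```python
def serial_sgs(durations, predecessors, demand, capacity, priority):
    """Serial schedule-generation scheme for a single renewable resource.

    `priority` must list every task after all of its predecessors
    (the project network is assumed acyclic)."""
    horizon = sum(durations.values())
    usage = [0] * (horizon + max(durations.values()))
    start = {}
    for task in priority:
        # Earliest precedence-feasible start time.
        est = max((start[p] + durations[p] for p in predecessors[task]),
                  default=0)
        t = est
        # Shift right until the resource profile has room for the task.
        while any(usage[u] + demand[task] > capacity
                  for u in range(t, t + durations[task])):
            t += 1
        start[task] = t
        for u in range(t, t + durations[task]):
            usage[u] += demand[task]
    return start

durs = {"A": 2, "B": 2, "C": 1}
preds = {"A": [], "B": [], "C": ["A"]}
print(serial_sgs(durs, preds, {"A": 1, "B": 1, "C": 1}, 1, ["A", "B", "C"]))
# {'A': 0, 'B': 2, 'C': 4}
```

Different priority rules simply produce different `priority` lists to feed this decoder, which is why the quality of the generated schedule depends so strongly on the scheme itself.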
Abstract:
The Solver Add-in of Microsoft Excel is widely used in courses on Operations Research and in industrial applications. Since the 2010 version of Microsoft Excel, the Solver Add-in comprises a so-called evolutionary solver. We analyze how this metaheuristic can be applied to the resource-constrained project scheduling problem (RCPSP). We present an implementation of a schedule-generation scheme in a spreadsheet which, combined with the evolutionary solver, can be used to devise good feasible schedules. Our computational results indicate that with this approach, non-trivial instances of the RCPSP can be solved to optimality or near-optimality.
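One common way to couple a spreadsheet schedule-generation scheme with an evolutionary solver is to let the solver mutate a vector of numeric cells ("random keys") that are decoded into a precedence-feasible activity list. The decoder below sketches that standard representation; it is an assumed technique for illustration, not necessarily the paper's exact spreadsheet design:

```python
def decode_keys(keys, predecessors):
    """Decode random keys (one float per task, the kind of cell values an
    evolutionary solver perturbs) into a precedence-feasible activity list:
    repeatedly schedule the eligible task with the smallest key.
    Assumes the precedence graph is acyclic."""
    order, done = [], set()
    while len(order) < len(keys):
        eligible = [t for t in keys
                    if t not in done and set(predecessors[t]) <= done]
        task = min(eligible, key=lambda t: keys[t])
        order.append(task)
        done.add(task)
    return order

preds = {"A": [], "B": [], "C": ["A"], "D": ["B", "C"]}
keys = {"A": 0.9, "B": 0.1, "C": 0.3, "D": 0.2}
print(decode_keys(keys, preds))  # ['B', 'A', 'C', 'D']
```

Because every key vector decodes to a feasible activity list, the evolutionary solver can search freely over the cells while the spreadsheet's schedule-generation formulas always evaluate a valid schedule.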