933 results for parallel and distributed information processing
Abstract:
During "The reader and their information needs" students tried to implement the theory discussed in class. To this end, organized into sub-groups and worked in different types of libraries. The following paper covers the practice that the students: Jenssy Arguedas Salazar Hernández Sandoval Lidiette Oses and Olga Corrales held in the Library "Sister Onorina Leporati" Help Ma College of Alajuela.
Abstract:
We compare auctioning and grandfathering as allocation mechanisms for emission permits when there is a secondary market with market power and firms have private information on their own abatement technologies. Based on real-life cases such as the EU ETS, we consider a multi-unit, multi-bid uniform auction. At the auction, each firm anticipates its role in the secondary market, either as a leader or a follower. This role affects each firm's valuation of the permits (which is not common across firms) as well as its bidding strategy, and it precludes the auction from generating a cost-effective allocation of permits, as occurs in simpler auction models. Auctioning tends to be more cost-effective than grandfathering when the firms' abatement cost functions are sufficiently different from one another, especially if the follower has lower abatement costs than the leader and the dispersion of the marginal costs is large enough.
Abstract:
This paper analyses the intermediary role of the technical bodies that support the use of budgetary and financial information by central government politicians in Portugal. The main findings show that information brokers play a central role in preparing this information in a credible, simple and understandable way. However, even if unintentionally, the information they present can be biased. Politicians need to be aware that the information brokers they rely on may not be giving them 'neutral' information.
Abstract:
Trade credit is an important source of finance for SMEs, and this study investigates the use of financial statements and other information in making trade credit decisions in smaller entities in Finland, the UK, the USA and South Africa. The study adds to the literature by examining the information needs of unincorporated entities as a basis for making comparisons with small, unlisted companies. In-depth, semi-structured interviews in each country were used to collect data from the owner-managers of SMEs and from credit rating agencies and credit insurers. The findings provide insights into similarities and differences between countries and between developed and developing economies. The evidence suggests that there are three main influences on the trade credit decision: formal, report-based information; soft information relating to social capital; and contingency factors. The latter dictate the extent to which hard/formal information versus soft/informal information is used.
Abstract:
This paper proposes a principal-agent model between banks and firms with risk and asymmetric information. A mixed form of financing for firms is assumed. The capital structure of firms is a relevant determinant of the final aggregate level of investment in the economy. In the model analyzed, there may be a separating equilibrium that is not economically efficient, because aggregate investment falls short of the first-best level. Based on European firm-level data, an empirical model is presented that validates the result on the relevance of firms' capital structure. The relative magnitude of equity in the capital structure makes a real difference to the profits obtained by firms in the economy.
Abstract:
This study focuses on multiple linear regression models relating six climate indices (the temperature-humidity index THI, the environmental stress index ESI, the equivalent temperature index ETI, the heat load index HLI, the modified HLI (HLInew), and the respiratory rate predictor RRP) to three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty is estimated by bootstrapping through resampling, and cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-values < 0.001 and R² values of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relationship (p-value < 0.001, R² = 0.69). For fat and protein the results are only marginal. This method is suggested for studies of the impact of climate variability and change in the agriculture and food science fields when short time series or data with large uncertainty are available.
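The selection-plus-uncertainty recipe this abstract describes (LASSO for predictor selection, bootstrapping for uncertainty, cross-validation against over-fitting) can be illustrated with a short sketch. The data below are synthetic placeholders standing in for the climate indices; nothing here reproduces the study's NASA-MERRA inputs or fitted models.

```python
# Minimal sketch: LASSO with cross-validation to select climate predictors,
# then a bootstrap to estimate coefficient uncertainty. Synthetic data only.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictor matrix: columns stand in for THI, ESI, ETI, HLI, RRP.
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)  # "milk yield"

# LASSO with 5-fold cross-validation shrinks irrelevant coefficients to zero,
# keeping a small subset of predictors.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("selected predictors:", selected)

# Bootstrap: refit on resampled data to estimate coefficient uncertainty.
boot_coefs = []
for _ in range(500):
    Xb, yb = resample(X, y)
    boot_coefs.append(LinearRegression().fit(Xb[:, selected], yb).coef_)
print("95% CIs:", np.percentile(np.array(boot_coefs), [2.5, 97.5], axis=0))
```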
Abstract:
Audit report on Iowa Public Television for the year ended June 30, 2009
Abstract:
The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains that foster the cooperation and integration of many connected partners, sensors, and devices. A valuable example is the emerging Smart Tourism field, which derives from the application of ICT to Tourism to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of those sources exacerbate the complexity of developing integrating solutions, with consequently high effort and costs for the partners pursuing them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a Serverless platform allowing fast prototyping of the business logic, lowering the barrier to entry and the development costs for newcomers, fine-grained (scale-to-zero) allocation of the resources serving end-users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; iv) a decentralized approach to the verification of access rights to resources.
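The abstract does not spell out APERTO FaaS's programming interface, so the following is only a generic sketch of the function-composition idea it names: defining a complex workload by chaining simple, ready-to-use functions. All function names are hypothetical.

```python
# Generic function-composition sketch (not APERTO's actual API): a complex
# workload is built by chaining small, single-purpose functions.
from typing import Any, Callable
from functools import reduce

def compose(*stages: Callable[[Any], Any]) -> Callable[[Any], Any]:
    """Chain single-input functions into one pipeline, applied left to right."""
    return lambda x: reduce(lambda acc, f: f(acc), stages, x)

# Hypothetical Smart Tourism stages; names are illustrative only.
def fetch_events(city: str) -> list[dict]:
    return [{"city": city, "event": "concert"}]  # stand-in for a data-source call

def filter_accessible(events: list[dict]) -> list[dict]:
    return [e for e in events if e.get("accessible", True)]

def render_feed(events: list[dict]) -> str:
    return "\n".join(f"{e['city']}: {e['event']}" for e in events)

pipeline = compose(fetch_events, filter_accessible, render_feed)
print(pipeline("Bologna"))
```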
Abstract:
In this thesis, the optimal operation of a neighborhood of smart households, in terms of minimizing the total energy cost, is analyzed. Each household may comprise several assets such as electric vehicles, controllable appliances, energy storage and distributed generation. Bi-directional power flow is considered for each household. Apart from the distributed generation unit, technological options such as vehicle-to-home and vehicle-to-grid are available to provide energy to cover self-consumption needs and to export excess energy to other households, respectively.
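The kind of cost-minimization problem the thesis studies can be reduced to a toy linear program. The sketch below covers a single household over three hours with assumed prices, demand, and PV output; the actual thesis model (multiple households, EVs, bi-directional flows) is far richer.

```python
# Toy version of household energy-cost minimization: choose grid imports and
# battery discharge to meet hourly demand at minimum cost. Numbers are assumed.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.10, 0.30, 0.20])   # EUR/kWh per hour (assumed)
demand = np.array([2.0, 3.0, 2.5])      # kWh the household must cover
pv     = np.array([1.0, 0.5, 0.0])      # kWh from distributed generation

# Decision vector x = [grid_1..3, battery_1..3]; battery energy costs nothing.
c = np.concatenate([prices, np.zeros(3)])

# Meet demand each hour: grid_t + batt_t >= demand_t - pv_t
A_ub = np.hstack([-np.eye(3), -np.eye(3)])
b_ub = pv - demand

# Total battery energy is capped at 2 kWh (e.g. an EV used vehicle-to-home).
A_cap = np.concatenate([np.zeros(3), np.ones(3)])[None, :]
res = linprog(c, A_ub=np.vstack([A_ub, A_cap]), b_ub=np.append(b_ub, 2.0),
              bounds=[(0, None)] * 6)
print("hourly grid imports:", res.x[:3].round(2), "cost:", round(res.fun, 3))
```

The solver sends the scarce battery energy to the most expensive hour, which is exactly the scheduling behavior the thesis optimizes at neighborhood scale.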
Abstract:
This work is part of the broader topic "Support for Parallel and Distributed Computing in Java", which is also the thesis topic of Daniel Barciela in the Master's in Informatics Engineering at the Instituto Superior de Engenharia do Porto. Its main objective is the definition and creation of the programmer-facing interface, and it also covers how the nodes communicate and cooperate with each other in the execution of given tasks in order to reach a single global goal. Within the scope of this dissertation, a preliminary study was carried out of the theoretical models of parallel computing, and languages and frameworks that support this type of computing were also analysed. The main objective of this study was to analyse how these models and languages allow the programmer to express parallel processing in the development of applications. This dissertation resulted in the framework named Distributed Parallel Framework for Java (DPF4j), whose main objective is to provide programmers with support for the development of parallel and distributed applications. The framework was developed in Java. This dissertation covers the part concerning the programming interface and all the communication between cooperating nodes of the DPF4j framework. Finally, the tests performed showed that DPF4j, although still a prototype, already outperforms other frameworks and languages with the same goals.
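The abstract does not document DPF4j's actual Java API, so the sketch below only illustrates, using Python's standard library rather than DPF4j, the general style of interface such frameworks target: the programmer expresses independent tasks, the runtime distributes them across workers (remote nodes, in DPF4j's case), and partial results are combined toward a single global goal.

```python
# Generic task-farming sketch (a Python analogue, not DPF4j): independent
# tasks are submitted to a pool and their results combined.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk: range) -> int:
    # Worker task: in a distributed framework this would run on a remote node.
    return sum(chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:   # stand-in for a pool of cooperating nodes
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # 499999500000
```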
Abstract:
In the past few years, Tabling has emerged as a powerful logic programming model. The integration of concurrency features into the implementation of Tabling systems is demanded by the need to use recently developed tabling applications within distributed systems, where a process has to respond concurrently to several requests. Support for sharing tables among the concurrent threads of a Tabling process is a desirable feature: it preserves one of Tabling's virtues, the re-use of computations, across threads, and it allows efficient usage of the available memory. However, the incremental completion of tables that are evaluated concurrently is not a trivial problem. In this dissertation we describe the integration of concurrency mechanisms, by way of multi-threading, in a state-of-the-art Tabling and Prolog system, XSB. We begin by reviewing the main concepts of the formal description of tabled computations, called SLG resolution, and of the implementation of Tabling under the SLG-WAM, the abstract machine supported by XSB. We describe the different scheduling strategies provided by XSB and introduce some new properties of local scheduling, a scheduling strategy for SLG resolution. We proceed to describe our implementation work, starting with the process of integrating multi-threading in a Prolog system supporting Tabling without addressing the problem of shared tables, and we describe the trade-offs and implementation decisions involved. We then describe an optimistic algorithm for the concurrent sharing of completed tables, Shared Completed Tables, which allows the sharing of tables without incurring deadlocks under local scheduling. This method relies on the execution properties of local scheduling and includes full support for negation. We provide a theoretical framework and discuss the implementation's correctness and complexity. After that, we describe a method for the sharing of tables among threads that allows parallelism in the computation of inter-dependent subgoals, which we name Concurrent Completion, and we informally argue for its correctness. We give detailed performance measurements of the multi-threaded XSB system over a variety of machines and operating systems, for both the Shared Completed Tables and the Concurrent Completion implementations, focusing on the overhead over the sequential engine and on the scalability of the system. We finish with a comparison of XSB with other multi-threaded Prolog systems and compare our approach to concurrent tabling with parallel and distributed methods for the evaluation of tabling. Finally, we identify future research directions.
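Tabling is, at its core, memoization of subgoal answers with a completion (fixpoint) step, which is what lets recursive queries over cyclic data terminate. The following is a minimal Python analogue of that idea for a reachability query; it is not XSB's SLG-WAM machinery, only an illustration of why an answer table makes re-use possible.

```python
# Minimal analogue of tabling: answers to a "subgoal" (reachability from a
# node) are stored in a table that is filled to fixpoint, so each answer is
# derived once and cycles do not cause non-termination.
edges = {"a": ["b"], "b": ["c", "a"], "c": []}   # note the a -> b -> a cycle

def reachable(start: str) -> set[str]:
    table: set[str] = set()        # the "answer table" for this subgoal
    frontier = [start]
    while frontier:                # iterate to fixpoint, as completion does
        node = frontier.pop()
        for nxt in edges[node]:
            if nxt not in table:   # already-tabled answers are not re-derived
                table.add(nxt)
                frontier.append(nxt)
    return table

print(reachable("a"))  # {'a', 'b', 'c'} despite the cycle; naive recursion would loop
```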
Abstract:
Even though Software Transactional Memory (STM) is one of the most promising approaches to simplifying concurrent programming, current STM implementations incur significant overheads that render them impractical for many real-sized programs. The key insight of this work is that we do not need to use the same costly barriers for all the memory managed by a real-sized application: if only a small fraction of the memory is under contention, lightweight barriers may be used for the rest. In this work, we propose a new solution based on adaptive object metadata (AOM) to promote the use of a fast path for accessing objects that are not under contention. We show that this approach makes the performance of an STM competitive with the best fine-grained lock-based approaches in some of the most challenging benchmarks. (C) 2015 Elsevier Inc. All rights reserved.
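As a rough intuition for the adaptive object metadata (AOM) approach, consider objects that start in a compact state with a cheap access path and are promoted to full transactional metadata only once contention is detected. The Python sketch below is purely conceptual: it substitutes a lock for the STM's real barriers and versioned metadata, and is not the paper's implementation.

```python
# Conceptual AOM sketch: uncontended objects keep a cheap fast path; contended
# objects "inflate" to a heavyweight state (here modeled by a lock).
import threading

class AOMObject:
    def __init__(self, value):
        self.value = value
        self.inflated = False          # compact state: fast path, no heavy metadata
        self._lock = threading.Lock()  # stands in for full transactional machinery

    def read(self):
        if not self.inflated:
            return self.value          # fast path: no synchronization barrier
        with self._lock:               # slow path: full (here: locked) access
            return self.value

    def write(self, value, contended: bool):
        if contended and not self.inflated:
            self.inflated = True       # promote to full metadata under contention
        if self.inflated:
            with self._lock:
                self.value = value
        else:
            self.value = value         # uncontended fast-path update
```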