923 results for "Representativeness of tasks"
Abstract:
The use of n-tuple or weightless neural networks as pattern recognition devices is well known (Aleksander and Stonham, 1979). They have some significant advantages over the more common and biologically plausible networks, such as multi-layer perceptrons: n-tuple networks have been used for a variety of tasks, the most popular being real-time pattern recognition, and they can be implemented easily in hardware as they use standard random access memories. In operation, a series of images of an object is shown to the network, each being processed suitably and effectively stored in a memory called a discriminator. Then, when another image is shown to the system, it is processed in a similar manner and the system reports whether it recognises it, that is, whether the image is sufficiently similar to one already taught. If the system is to be able to recognise and discriminate between m objects, then it must contain m discriminators, which can require a great deal of memory. This paper describes various ways in which memory requirements can be reduced, including a novel method for multiple-discriminator n-tuple networks used for pattern recognition. Using this method, the memory normally required to handle m objects can be used to recognise and discriminate between 2^m - 2 objects.
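The basic operation described above can be sketched in a few lines of Python. This is a minimal illustration of a single WiSARD-style discriminator, not the paper's hardware design or its memory-reduction method; the class name, parameters and use of Python sets in place of RAM bits are all illustrative assumptions.

```python
import random

class Discriminator:
    """Minimal n-tuple discriminator sketch: each n-tuple of input
    positions addresses one RAM; training marks the addressed location,
    and recognition counts how many RAMs respond."""

    def __init__(self, input_bits, n, seed=0):
        rng = random.Random(seed)
        positions = list(range(input_bits))
        rng.shuffle(positions)
        # Partition the shuffled input positions into n-tuples.
        self.tuples = [positions[i:i + n] for i in range(0, input_bits, n)]
        # One "RAM" per tuple, modelled as a set of seen addresses.
        self.rams = [set() for _ in self.tuples]

    def _address(self, image, tup):
        # The n sampled bits form the RAM address for this tuple.
        return tuple(image[p] for p in tup)

    def train(self, image):
        for ram, tup in zip(self.rams, self.tuples):
            ram.add(self._address(image, tup))

    def score(self, image):
        # Number of RAMs that respond: 0 .. len(self.rams).
        return sum(self._address(image, tup) in ram
                   for ram, tup in zip(self.rams, self.tuples))
```

A trained image scores the maximum (every RAM responds), while a sufficiently different image scores low; the recognition decision is a threshold on this count.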
Abstract:
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in five left-hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding, articulation of each word is less rehearsed, and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
Abstract:
In this paper we report on a study conducted using the Middle Atmospheric Nitrogen TRend Assessment (MANTRA) balloon measurements of stratospheric constituents and temperature and the Canadian Middle Atmosphere Model (CMAM). Three different kinds of data are used to assess the inter-consistency of the combined dataset: single profiles of long-lived species from MANTRA 1998, sparse climatologies from the ozonesonde measurements during the four MANTRA campaigns and from HALOE satellite measurements, and the CMAM climatology. In doing so, we evaluate the ability of the model to reproduce the measured fields and thereby test our ability to describe mid-latitude summertime stratospheric processes. The MANTRA campaigns were conducted at Vanscoy, Saskatchewan, Canada (52° N, 107° W) in late August and early September of 1998, 2000, 2002 and 2004. During late summer at mid-latitudes, the stratosphere is close to photochemical control, providing an ideal scenario for the study reported here. From this analysis we find that: (1) reducing the value of the vertical diffusion coefficient in CMAM to a more physically reasonable value results in the model better reproducing the measured profiles of long-lived species; (2) the existence of compact correlations among the constituents, as expected from independent measurements in the literature and from models, confirms the self-consistency of the MANTRA measurements; and (3) the 1998 measurements show structures in the chemical species profiles that can be associated with transport, adding to the growing evidence that the summertime stratosphere can be much more disturbed than anticipated. The mechanisms responsible for such disturbances need to be understood in order to assess the representativeness of the measurements and to isolate long-term trends.
Abstract:
The NeuroHub project aims to develop a research information system for neuroscientists at three partner institutions: Oxford, Reading and Southampton. Each research group has different working practices, research methodologies and user requirements, which has led to the development of a system that supports a wide variety of tasks in the neuroscience research life cycle. In this paper, we present how these user requirements have been translated into a research information environment that supports a community of over 70 researchers using the system for day-to-day research tasks.
Abstract:
Recent global assessments have shown the limited coverage of protected areas across tropical biotas, fuelling a growing interest in the potential conservation services provided by anthropogenic landscapes. Here we examine the geographic distribution of biological diversity in the Atlantic Forest of South America, synthesize the most conspicuous forest biodiversity responses to human disturbances, propose further conservation initiatives for this biota, and offer a range of general insights into the prospects of forest species persistence in human-modified tropical forest landscapes worldwide. At the biome scale, the most extensive pre-Columbian habitats across the Atlantic Forest ranged across elevations below 800 masl, which still concentrate most areas within the major centers of species endemism. Unfortunately, up to 88% of the original forest habitat has been lost, mainly across these low to intermediate elevations, whereas protected areas are clearly skewed towards high elevations above 1200 masl. At the landscape scale, most remaining Atlantic Forest cover is embedded within dynamic agro-mosaics including elements such as small forest fragments, early-to-late secondary forest patches and exotic tree monocultures. In such aging or long-term-modified landscapes, habitat fragmentation appears to effectively drive edge-dominated portions of forest fragments towards an early-successional system, greatly limiting the long-term persistence of forest-obligate and forest-dependent species. However, the extent to which forest habitats approach early-successional systems, thereby threatening the bulk of the Atlantic Forest biodiversity, depends on both past and present landscape configuration. Many elements of human-modified landscapes (e.g. 
patches of early-secondary forests and tree mono-cultures) may offer excellent conservation opportunities, but they cannot replace the conservation value of protected areas and hitherto unprotected large patches of old-growth forests. Finally, the biodiversity conservation services provided by anthropogenic landscapes across Atlantic Forest and other tropical forest regions can be significantly augmented by coupling biodiversity corridor initiatives with biota-scale attempts to plug existing gaps in the representativeness of protected areas. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
In 2006 the Route load-balancing algorithm was proposed and compared to other techniques aiming at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior, as well as computer capacities and load, to optimize scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the resource occupation of tasks. Such information is then used to parameterize a genetic algorithm responsible for optimizing task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
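The genetic-algorithm stage can be illustrated with a generic sketch of GA-based task-to-machine allocation. This is not RouteGA itself: the fitness function (makespan under per-machine speeds), the operators and all parameter values below are assumptions standing in for the paper's knowledge-base-driven parameterization.

```python
import random

def makespan(assign, task_cost, machine_speed):
    """Completion time of the most loaded machine under this assignment."""
    load = [0.0] * len(machine_speed)
    for task, m in enumerate(assign):
        load[m] += task_cost[task] / machine_speed[m]
    return max(load)

def ga_schedule(task_cost, machine_speed, pop_size=30, gens=60, seed=0):
    """Toy GA: a chromosome maps each task to a machine index."""
    rng = random.Random(seed)
    n_tasks, n_mach = len(task_cost), len(machine_speed)
    pop = [[rng.randrange(n_mach) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: makespan(a, task_cost, machine_speed))
        survivors = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:               # mutation: move one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_mach)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, task_cost, machine_speed))
```

In RouteGA the fitness and initial population would instead be informed by the monitored history of application behavior and machine load described above.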
Abstract:
The aim of task scheduling is to minimize the makespan of applications, making the best possible use of shared resources. Applications have requirements which call for customized execution environments. One way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) on grid resources and tasks on these VMs. The schedulers differ from previous work by jointly scheduling tasks and VMs and by considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
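To make the joint decision concrete, here is a toy exhaustive search over the same two coupled choices: where each VM is placed, and which VM runs each task, with a bandwidth-dependent transfer time charged per task. It is a stand-in for the paper's integer linear programs (which are not reproduced here); all inputs and names are illustrative.

```python
from itertools import product

def best_joint_schedule(task_cpu, task_data, vm_speed, host_bw):
    """Enumerate every (VM placement, task assignment) pair and return
    the one minimising makespan, where each task costs its compute time
    plus an input-transfer time of data / host bandwidth."""
    n_vms, n_hosts = len(vm_speed), len(host_bw)
    best = (float('inf'), None)
    for placement in product(range(n_hosts), repeat=n_vms):
        for assign in product(range(n_vms), repeat=len(task_cpu)):
            finish = [0.0] * n_vms
            for t, v in enumerate(assign):
                bw = host_bw[placement[v]]
                finish[v] += task_cpu[t] / vm_speed[v] + task_data[t] / bw
            cand = max(finish)
            if cand < best[0]:
                best = (cand, (placement, assign))
    return best
```

Because placement and assignment interact through the bandwidth term, optimising them jointly (as here, or via ILP in the paper) can beat scheduling VMs first and tasks second.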
Abstract:
The InteGrade project is a multi-university effort to build a novel grid computing middleware based on the opportunistic use of resources belonging to user workstations. The InteGrade middleware currently enables the execution of sequential, bag-of-tasks, and parallel applications that follow the BSP or the MPI programming models. This article presents the lessons learned over the last five years of the InteGrade development and describes the solutions achieved concerning the support for robust application execution. The contributions cover the related fields of application scheduling, execution management, and fault tolerance. We present our solutions, describing their implementation principles and evaluation through the analysis of several experimental results. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Integrating study visits to different workplaces into higher education brings important benefits for course quality. A study visit gives students a better understanding of the real situations they will meet in working life. However, for practical and economic reasons this is not always possible. The purpose of this project is to create a virtual company to replace a real one for study visits. The goal is to create a realistic picture, so that its intended use comes as close as possible to a real study visit. It is also important to facilitate linking theory and practice. The virtual company is built up from pictures, videos and text. All material is made available on a web page, and on entering, students are met by a layout of the company. From that position it is possible to walk around and look at videos from different workstations. They can also listen to interviews with managers and staff representatives, as well as read reports concerning productivity and the work environment. The focus of the study visit is work sciences; the material therefore also includes some visualized information about work hazards. The web page also contains a number of tasks for the students to carry out. By the autumn of 2011, 132 students at Dalarna University had visited and produced reports on the virtual company. They were studying in programmes for mechanical engineering, production technicians and human resource management. An evaluation among some ten students showed that the study visit to the virtual company is flexible in time and effective, but that students wish for even more detailed information about the company. Experience from four years of use in a number of classes shows that the concept is worth further development. Furthermore, with the production of new material, the concept is likely to be applicable to other purposes.
Abstract:
This research discusses experiences from a four-year school research study, conducted in cooperation with teachers from four municipalities in Dalarna. The aim of the research was to examine teachers' professional development when they participated in collaborative discussions based on video recordings and video-edited material from specific lessons in their own practice. The study had two foci: one was to investigate methods and tools that teachers can use to develop their ability to assess their students while working on multimodal tasks; the other was to examine how video can be used by teachers wanting to obtain knowledge about assessing students. The study is based on several theories about how teachers collaborate to create new knowledge. The first is the design-theoretical approach, in which visual ethnography and a semiotic approach contribute to problematizing the use and mixture of different modes. A basic assumption of this framework is that meanings are made and communicated in mathematics through a wide range of semiotic modes. Because video was an essential tool in the research, framework theories concerning visual ethnography, video documentation and individuals as reflective practitioners were also needed. The findings can be divided into the following themes: the use of tasks for assessment, collaborative discussion, equipment, and ethical dilemmas. Collaborative discussions were evaluated as a meaningful way of sharing knowledge. The use of video recordings in association with these discussions raised important ethical issues. Working with the assessment framework was of great interest to the teachers, but it took a lot of time from their ordinary work. In this way the project highlighted more general aspects of school development. The research also concerns teachers' use of collaborative discussions in assessment work, multimodal tasks in mathematics, and video as a research tool in general.
Abstract:
The development of practical agent languages has progressed significantly over recent years, but this has largely been independent of distinct developments in aspects of multiagent cooperation and planning. For example, while the popular AgentSpeak(L) has had various extensions and improvements proposed, it is still essentially a single-agent language. In response, in this paper, we describe a simple yet effective technique for multiagent planning that enables an agent to take advantage of cooperating agents in a society. In particular, we build on a technique that enables new plans to be added to a plan library through the invocation of an external planning component, and extend it to include the construction of plans involving the chaining of subplans of others. Our mechanism makes use of plan patterns that insulate the planning process from the resulting distributed aspects of plan execution through local proxy plans that encode information about the preconditions and effects of the external plans provided by agents willing to cooperate. In this way, we allow an agent to discover new ways of achieving its goals through local planning and the delegation of tasks for execution by others, allowing it to overcome individual limitations.
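The proxy-plan idea can be sketched as follows, under the assumption (not taken from the paper) that a proxy records only the advertised preconditions and effects of a cooperating agent's plan; a depth-limited forward search then chains these proxies into a delegation sequence. All names and the search strategy are illustrative.

```python
from collections import namedtuple

# A proxy plan stands in locally for a plan another agent can execute:
# 'pre' and 'add' are the advertised preconditions and effects, and
# 'provider' is the agent the task would be delegated to.
ProxyPlan = namedtuple('ProxyPlan', 'name provider pre add')

def chain(plans, state, goal, depth=5):
    """Depth-limited forward search over proxy plans; returns a list of
    (provider, plan-name) delegations reaching the goal, or None."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for p in plans:
        new_state = state | p.add
        if new_state == state or not p.pre <= state:
            continue  # skip inapplicable or non-progressing plans
        rest = chain(plans, new_state, goal, depth - 1)
        if rest is not None:
            return [(p.provider, p.name)] + rest
    return None
```

The planning agent reasons only over these local proxies; the distributed execution (actually invoking each provider) is hidden behind them, which is the insulation the mechanism described above provides.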
Abstract:
Dynamic composition of services provides the ability to build complex distributed applications at run time by combining existing services, thus coping with a large variety of complex requirements that cannot be met by individual services alone. However, with the increasing number of available services that differ in granularity (amount of functionality provided) and qualities, selecting the best combination of services becomes very complex. In response, this paper addresses the challenges of service selection and makes a twofold contribution. First, a rich representation of compositional planning knowledge is provided, allowing the expression of multiple decompositions of tasks at arbitrary levels of granularity. Second, two distinct search-space reduction techniques are introduced, whose application prior to service selection results in significant improvement in selection performance in terms of execution time, as demonstrated by experimental results.
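One generic form of search-space reduction can be illustrated as follows: pruning candidate services that are dominated on every quality attribute before exhaustive selection. The attributes (price, response time), data structures and selection objective below are assumptions for illustration, not the paper's actual representation or techniques.

```python
from itertools import product

def prune_dominated(candidates):
    """Drop any candidate strictly dominated on both price and response
    time, shrinking each task's candidate list before selection."""
    return [c for c in candidates
            if not any(o['price'] <= c['price'] and o['time'] <= c['time']
                       and (o['price'] < c['price'] or o['time'] < c['time'])
                       for o in candidates)]

def select(tasks):
    """Exhaustive composition over the pruned per-task candidate lists,
    choosing the combination with the lowest total price."""
    pruned = [prune_dominated(c) for c in tasks]
    return min(product(*pruned),
               key=lambda combo: sum(s['price'] for s in combo))
```

Because the exhaustive step is exponential in the number of tasks, even modest per-task pruning multiplies into a large reduction of the combinations examined.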
Abstract:
While much is discussed in academic circles and in the National Congress itself about the content of the political, tax and social-security reforms, all of them indeed fundamental to the country, this monograph addresses other dimensions of reform, related to the functional organisation of the Brazilian State. This is because the demand for an efficient State responds not only to the question of its legitimacy but is, today, in a globalised environment of intense competition and constant instability, an imperative of a nation's own development, especially for Brazil, which must, in parallel, confront an enormous accumulated social deficit. We therefore seek to discuss, from a prospective viewpoint, a new division of functions between the Executive and Legislative branches, especially as concerns the production of legal norms, whose current model, still founded on the dogma of the reserve of statute, proves incapable of responding with the agility and competence demanded of the modern State. There remains, obviously, a concern with preserving democratic values and containing the power of the Executive, concerns which informed the formation of constitutional States; but the monopolisation by the Legislature of the production of general and abstract norms is neither indispensable to achieving those objectives nor corresponds to the model empirically in force in Brazil. On the other front, the vertical division of functions determined by the Brazilian federative model, we address the "relationship crisis" among the federative entities, visible in the difficulty of adopting joint public policies, a problem further aggravated by society's just demand to find, for its organisations, room for participation in that same arena. Leaving aside suggestions of federative reconfiguration through the inclusion or exclusion of entities, the present study seeks to defend the adoption of institutional solutions for achieving cooperation among the entities and for society's participation in the pursuit of the public interest.
Abstract:
Since the late 1990s, asset securitisation has established itself in the Brazilian market as an important funding alternative. This financial innovation gives companies direct access to the capital markets to sell securities backed by their receivables portfolios, eliminating bank intermediation and allowing reductions in the cost of capital, even in comparison with conventional corporate debt securities. Securitisation securities are, as a rule, issued by a special-purpose vehicle (a FIDC or a securitisation company), with the aim of segregating the risks of the originator/borrower from the securitised credits (the backing of the issue). In 2004, Law 11,076 created the new agribusiness securities (CDA-WA, CDCA, LCA and CRA), inspired by the real-estate securitisation legislation (SFI, Law 9,514/97), seeking to provide agribusiness with a new source of funds via the issuance of securities in the capital markets. Since then, a growing number of structured transactions with these instruments has been observed, demonstrating their functionality and potential. However, the volume of more sophisticated public offerings based on the issuance of FIDC quotas and CRAs is still very small relative to agribusiness demand for financing, above all considering the representativeness of this sector in Brazil. The sugar-energy sector is probably the agribusiness segment best positioned for the development of securitisation transactions, given characteristics such as scale, product standardisation, the degree of consolidation of its corporate groups and its growth prospects, in addition to strong environmental appeal. Instruments associated with this segment thus have a singular potential to overcome investors' natural resistance to financial investments in agriculture. 
This work investigates how the concept of securitisation can be applied to transactions linked to agribusiness, particularly the sugar-alcohol sector. Based on a case study, operational aspects of a public CRA issuance are analysed, highlighting the mechanisms and procedures adopted to deal with the particularities of agribusiness securities. The study shows that structuring this kind of transaction presents some characteristics and challenges that differ from transactions based on other instruments, but that are, a priori, manageable with traditional securitisation techniques and supplementary risk-management mechanisms.
Abstract:
This work is centred on a case study of the Commercial System of Companhia de Saneamento do Paraná (SANEPAR) and, consequently, on the presentation of organisational alternatives aimed at establishing coherence among the objectives, resources and environment involved in that system. It was developed on the theoretical basis of the model of organisational analysis advanced by Jay R. Galbraith in Organization Design (1977). Grounded in a participatory diagnosis, it pursued objectives concerning knowledge of aspects of the system such as environmental transactions, the nature of tasks, structure and the decision-making process. To that end, the application of an organisational mix composed of six basic strategies was recommended, drawn from both classical and contingency/organic administration. Given the organisational situation found, it was concluded that SANEPAR's Commercial System has a diffuse and structurally mischaracterised administration, which compromises the efficiency of its performance. Its area of functional competence/authority is subordinated to a planning body which, besides its own activities, is also in charge of directing the Commercial System. This situation affects the quality of the necessary final results, i.e. the efficacy and effectiveness of the two systems within the context in which they operate, since their main resources, the human ones, are producing decisions without a consistent and relevant informational base and within a structural design that hinders the achievement of results. The recommended mix aims to give the system a greater capacity for transaction with the environment at large, via strategic administration based on a structural design conceived to provide precise information, relevant decisions and the achievement of the necessary results. 
It was proposed to subordinate the system directly to a Directorate, emancipating it from subordination to Planning and raising its authority to the same level of importance as the company's main systems. In this way, a more dynamic and professional managerial performance was planned for the system, via a logical, relevant and known decision-making process. The objective is thus to reduce to insignificant levels the uncertainty found, the main cause of the delays and distortions occurring in the process and in the achievement of the desired results. These instruments will certainly facilitate relations within the system, and between it and the environment at large, thereby opening the way towards managerial efficacy and effectiveness.