Abstract:
The present work presents a code, written in the simple MATLAB programming language, for three-dimensional linear elastostatics using constant boundary elements. The code, in full or in part, is not a translation or a copy of any existing code. The paper explains how the code is written and lists all the formulae used. The code is verified by using it to solve a simple problem that has a well-known approximate analytical solution. The present work makes no theoretical contribution to research on boundary elements; it is justified by the fact that, to the best of the author's knowledge, no open-access MATLAB code for three-dimensional linear elastostatics using constant boundary elements is currently available. The author hopes this paper will help beginners who wish to understand how a simple but complete boundary element code works, so that they can build upon and modify the present open-access code to solve complex engineering problems quickly and easily. The code is available online for open access (as a supplementary file for the present paper) and may be downloaded from the journal's website.
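For readers following such a code, the core quantity any constant-boundary-element formulation for 3D linear elastostatics must evaluate is the Kelvin fundamental solution for displacements. Below is a minimal sketch of it in Python with NumPy rather than MATLAB; it is not taken from the paper's supplementary file, and the function name and test values are illustrative.

```python
import numpy as np

def kelvin_displacement(x, xi, mu, nu):
    """Kelvin fundamental solution U_ij for 3D linear elastostatics.

    Returns the 3x3 matrix of displacements at field point x due to a
    unit point load at source point xi, in an infinite medium with
    shear modulus mu and Poisson's ratio nu:
        U_ij = [(3 - 4 nu) delta_ij + r_,i r_,j] / (16 pi mu (1 - nu) r)
    """
    d = np.asarray(x, float) - np.asarray(xi, float)
    r = np.linalg.norm(d)   # distance between field and source points
    dr = d / r              # derivatives r_,i (unit vector along d)
    c = 1.0 / (16.0 * np.pi * mu * (1.0 - nu) * r)
    return c * ((3.0 - 4.0 * nu) * np.eye(3) + np.outer(dr, dr))

# Example usage with steel-like constants (illustrative values only).
print(kelvin_displacement([1.0, 1.0, 1.0], [0.0, 0.0, 1.0], mu=80e9, nu=0.3))
```

In a constant-element code, this kernel is integrated over each flat boundary element (with the collocation point at the element centroid) to build the system matrices.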
Abstract:
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving the clock speed, reducing the energy consumption of the logic, and simplifying the design, it introduces extra overheads through inter-cluster communication. This communication happens over long global wires with high load capacitance, which delays execution and consumes significant energy. Inter-cluster communication also introduces many short idle cycles, thereby significantly increasing the overall leakage energy consumption in the functional units. The trend towards miniaturization of devices (and the associated reduction in threshold voltage) makes energy consumption in interconnects and functional units even worse, and limits the usability of clustered architectures in smaller technologies. However, technological advancements now permit the design of interconnects and functional units with varying performance and power modes. In this paper, we propose scheduling algorithms that aggregate the scheduling slack of instructions and the communication slack of data values to exploit the low-power modes of functional units and interconnects. Finally, we present a synergistic combination of these algorithms that simultaneously saves energy in functional units and interconnects, improving the usability of clustered architectures by achieving better overall energy-performance trade-offs. Even with conservative estimates of the contribution of the functional units and interconnects to the overall processor energy consumption, the proposed combined scheme obtains on average 8% and 10% improvement in overall energy-delay product with 3.5% and 2% performance degradation for a 2-clustered and a 4-clustered machine, respectively. We present a detailed experimental evaluation of the proposed schemes; our test bed uses the Trimaran compiler infrastructure.
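To make the slack-aggregation idea concrete, here is a minimal illustrative sketch of coalescing a functional unit's idle cycles and entering a low-power mode only when the accumulated idle window amortizes the mode's break-even (wake-up) cost. This is not the authors' algorithm; the function, the threshold, and the mode names are all hypothetical.

```python
def plan_power_modes(busy_cycles, break_even=4):
    """Coalesce idle windows of a functional unit and mark those long
    enough to cover a hypothetical low-power mode's break-even time.

    busy_cycles: sorted cycle numbers in which the unit executes an op.
    Returns a list of (start, end, mode) tuples for idle windows.
    """
    plan = []
    for prev, nxt in zip(busy_cycles, busy_cycles[1:]):
        idle = nxt - prev - 1   # idle cycles between two operations
        if idle <= 0:
            continue
        # Enter low-power mode only if the window amortizes the wake-up cost;
        # otherwise the short idle cycle leaks energy in active-idle state.
        mode = "low-power" if idle >= break_even else "active-idle"
        plan.append((prev + 1, nxt - 1, mode))
    return plan

# Many short idle cycles (the leakage problem the paper highlights)
# versus one long window that a scheduler could deliberately aggregate:
print(plan_power_modes([1, 3, 5, 7, 20]))
```

The paper's schedulers, as described, go further by moving instructions and communications within their slack so that short windows like those above merge into longer, exploitable ones.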
Abstract:
Song selection and mood are interdependent. If we capture a song's sentiment, we can determine the mood of the listener, which can serve as a basis for recommendation systems. Songs are generally classified by genre, which does not entirely reflect sentiment, so we require an unsupervised scheme to mine it. Sentiments are classified into either two (positive/negative) or multiple (happy/angry/sad/...) classes, depending on the application. We are interested in analyzing the feelings a song invokes, which involves multi-class sentiments. To mine the hidden sentimental structure behind a song, in terms of "topics", we consider its lyrics and use Latent Dirichlet Allocation (LDA). Each song is a mixture of moods, and the topics mined by LDA can represent moods; this yields a scheme for collecting songs of similar mood. For validation, we use a dataset of songs covering 6 moods annotated by users of a particular website.
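As an illustration of the kind of pipeline the abstract describes, a minimal sketch using the gensim implementation of LDA over tokenized lyrics might look as follows. The toy corpus, the choice of 6 topics to mirror the 6 annotated moods, and all variable names are assumptions, not the authors' setup.

```python
from gensim import corpora, models

# Toy stand-in for tokenized, stopword-filtered song lyrics.
lyrics = [
    ["love", "heart", "smile", "sunshine"],
    ["tears", "alone", "rain", "goodbye"],
    ["fight", "fire", "rage", "scream"],
]

dictionary = corpora.Dictionary(lyrics)
bow_corpus = [dictionary.doc2bow(doc) for doc in lyrics]

# One topic per candidate mood class; each song becomes a mixture of moods.
lda = models.LdaModel(bow_corpus, num_topics=6, id2word=dictionary,
                      passes=10, random_state=0)

# Per-song topic (mood) distribution, usable for grouping similar-mood songs.
for doc in bow_corpus:
    print(lda.get_document_topics(doc, minimum_probability=0.0))
```

Songs whose topic distributions are close (e.g., by cosine similarity) would then land in the same mood group.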
Abstract:
Daily rainfall datasets for 10 years (1998-2007) of the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) version 6 and the India Meteorological Department (IMD) gridded rain gauge data have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improves significantly (~0.9) when the study is confined to specific wet and dry spells, each of about 5-8 days. Wavelet analysis of intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contributions of the two major modes (30-50 days and 10-20 days) to range between ~30-40% and 5-10%, respectively, over the various years. Analysis of inter-annual variability shows that the satellite data underestimate seasonal rainfall by ~110 mm during the southwest monsoon and overestimate it by ~150 mm during the northeast monsoon season. At high spatio-temporal scales, viz. the 1° × 1° grid, TMPA data do not correspond to ground truth. We propose here a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This has been done by studying the contribution to total seasonal rainfall from different rainfall-rate windows (at 1 mm intervals) on different spatial scales (at the daily time scale). The compatibility scale is seen to be beyond a 5° × 5° average spatial scale over the Indian landmass. This will help decide the usability of TMPA products, if averaged at appropriate spatial scales, for specific process studies, e.g., at cloud, meso, or synoptic scales.
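A minimal sketch of the scale-compatibility idea, coarsening two gridded daily-rain fields to progressively larger block averages and computing the pattern correlation at each scale, is given below. The arrays, the noise model, and the averaging scales are invented for illustration and do not reproduce the paper's procedure.

```python
import numpy as np

def coarsen(field, k):
    """Average a 2-D gridded field over non-overlapping k x k blocks."""
    ny, nx = (s - s % k for s in field.shape)
    f = field[:ny, :nx]
    return f.reshape(ny // k, k, nx // k, k).mean(axis=(1, 3))

def pattern_correlation(a, b, k):
    """Spatial correlation of two fields after averaging to a coarser scale."""
    ca, cb = coarsen(a, k).ravel(), coarsen(b, k).ravel()
    return np.corrcoef(ca, cb)[0, 1]

# Toy 1-degree daily-rain grids standing in for TMPA and IMD gauge data.
rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 5.0, size=(40, 40))
satellite = truth + rng.normal(0, 8, size=truth.shape)   # noisy retrieval

for k in (1, 2, 5):   # 1x1, 2x2, 5x5 degree block averages
    print(k, round(pattern_correlation(truth, satellite, k), 2))
```

The correlation rises as the averaging scale grows, which is the qualitative behavior behind identifying a minimum compatible spatial scale.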
Abstract:
Scatter/Gather systems are increasingly useful for browsing document corpora. The usability of present-day systems is restricted to monolingual corpora, and their methods for clustering and labeling do not easily extend to the multilingual setting, especially in the absence of dictionaries or machine translation. In this paper, we study the cluster labeling problem for multilingual corpora in the absence of machine translation, using comparable corpora instead. Using a variational approach, we show that multilingual topic models can effectively handle the cluster labeling problem, which in turn allows us to design a novel Scatter/Gather system, ShoBha. Experimental results on three datasets, namely the Canadian Hansards corpus, the entire overlapping Wikipedia of English, Hindi and Bengali articles, and a trilingual news corpus containing 41,000 articles, confirm the utility of the proposed system.
Abstract:
In the present work, historical and instrumental seismicity data for India and its adjoining areas (within 300 km of the Indian political boundary) are compiled to form an earthquake catalog for the country covering the period from 1505 to 2009. The initial catalog consisted of about 139,563 earthquake events; after declustering, 61,315 events remained. Region-specific earthquake magnitude scaling relations correlating different magnitude scales were derived, and a homogeneous earthquake catalog on the moment magnitude (MW) scale was developed for the region. This paper also presents the results of using Geographic Information Systems (GIS) to prepare a digitized seismic source map of India. The latest earthquake data were superimposed on the digitized source map to obtain a final seismotectonic map of India. The study area was divided into 1225 grid points (approximately 110 km × 110 km each), and a seismicity analysis was carried out to obtain the spatial variation of the seismicity parameters 'a' and 'b' across the country. The homogenized earthquake catalog with the event details is available on the website http://civil.iisc.ernet.in/~sreevals/resource.htm
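For readers unfamiliar with the seismicity parameters 'a' and 'b': they are the coefficients of the Gutenberg-Richter relation log10 N(M) = a - b M, where N(M) is the number of events with magnitude at least M. A minimal sketch of the standard Aki maximum-likelihood estimator of b is shown below for illustration; it is not necessarily the authors' exact procedure, and the catalog here is synthetic.

```python
import numpy as np

def b_value_mle(magnitudes, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b value: b = log10(e) / (mean(M) - Mc), over events with M >= Mc."""
    m = np.asarray(magnitudes, float)
    m = m[m >= m_c]   # keep only the complete part of the catalog
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic moment-magnitude catalog; completeness magnitude assumed Mc = 4.0.
rng = np.random.default_rng(1)
mags = 4.0 + rng.exponential(scale=0.45, size=5000)   # implies b ~ 0.97
print(round(b_value_mle(mags, m_c=4.0), 2))
```

Mapping this estimate over a grid of cells (each fed the events within some radius of its center) yields the kind of spatial b-value variation the paper reports.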
Abstract:
An attempt has been made to quantify the variability in the seismic activity rate across the whole of India and adjoining areas (0-45°N and 60-105°E) using an earthquake database compiled from various sources. Both historical and instrumental data were compiled, and a complete catalog of Indian earthquakes up to 2010 has been prepared. Region-specific earthquake magnitude scaling relations correlating different magnitude scales were derived to develop a homogeneous earthquake catalog for the region on the unified moment magnitude scale. The dependent events (75.3%) in the raw catalog were removed, and the effect of aftershocks on the variation of the b value was quantified. The study area was divided into 2,025 grid points (1° × 1°), and the spatial variation of seismicity across the region was analyzed considering all events within a 300 km radius of each grid point. A significant decrease in the seismic b value was seen when the declustered catalog was used, which illustrates that a larger proportion of the dependent events in the earthquake catalog are related to lower-magnitude events. A list of 203,448 earthquakes (including aftershocks and foreshocks) that occurred in the region during the period from 250 B.C. to 2010 A.D., with all available details, is available on the website http://www.civil.iisc.ernet.in/~sreevals/resource.htm.
Abstract:
In this paper we present a massively parallel open-source solver for the Richards equation, named RichardsFOAM. This solver has been developed within the open-source general-purpose computational fluid dynamics toolbox OpenFOAM® and is capable of dealing with large-scale problems in both space and time. The source code for RichardsFOAM may be downloaded from the CPC program library website. It exhibits good parallel performance (up to ~90% parallel efficiency with 1024 processors in both strong and weak scaling), and the conditions required to obtain such performance are analysed and discussed. This performance enables the mechanistic modelling of water fluxes at the scale of experimental watersheds (up to a few square kilometres of surface area) and on time scales of decades to a century. Such a solver can be useful in various applications, such as environmental engineering for the long-term transport of pollutants in soils, water engineering for assessing the impact of land settlement on water resources, or the study of weathering processes in watersheds.
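For context, the Richards equation describes flow in variably saturated porous media; in its common mixed form (notation assumed here, not copied from the paper) it reads:

```latex
% Mixed-form Richards equation:
%   theta = volumetric water content, h = pressure head,
%   K(h) = unsaturated hydraulic conductivity, z = vertical coordinate.
\[
  \frac{\partial \theta(h)}{\partial t}
  = \nabla \cdot \bigl( K(h)\, \nabla (h + z) \bigr)
\]
```

The strong nonlinearity of theta(h) and K(h) is what makes large-scale, long-duration simulations of this equation computationally demanding and parallel efficiency worth reporting.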
Abstract:
Examines one dimension of the digital mechanisms for political interaction and participation that parliaments offer to society: information management. The forms of political participation employed in Latin American legislative portals are mapped, with the aim of understanding the information surrounding these initiatives and contextualizing the case study of the Brazilian Chamber of Deputies (Câmara dos Deputados). The study seeks to understand how the Brazilian Chamber of Deputies collects, organizes, distributes, stores, and uses the information concerning the multilateral mechanisms of political interaction and participation employed on its portal. It concludes that Latin American parliaments make dozens of digital channels for interaction and participation available to society, an irreversible trend in modern democracies, but that managing the information inherent to these experiences remains a challenge yet to be met.
Abstract:
This dissertation seeks to determine to what extent the interaction possibilities offered by the Portal of the Brazilian Chamber of Deputies meet the political-interaction needs of the citizens who use these new channels. Who are the users of these new tools? How do they evaluate these new channels? What is their opinion of the possibilities for electronic participation? The main theoretical question addressed is whether the internet merely replicates traditional forms of participation or is actually capable of bringing more citizens, including those who are disengaged and uninterested, into participation. These questions were directed at the Portal of the Brazilian Chamber of Deputies, which over recent years has adapted itself to provide a space of broad access to legislative information, with the capacity for contact and interaction between citizens and their representatives, and which is currently considered the best legislative portal in South America. The dissertation used two distinct methodologies. The first consisted of analyzing the Portal's access statistics, thereby identifying access patterns: the referring sites and the search paths through which users reach the Portal. The second consisted of conducting a web survey to collect users' opinions. The questionnaire aimed to gather evaluations of the tools made available by the Portal, to identify the users' profile, and to understand their political behavior in the offline world. One of the main findings is that the electronic democracy developed by the Portal of the Chamber of Deputies has served political professionals more than ordinary citizens. Even so, ordinary citizens interested in participation, contact, and interaction with political actors are seeking out these online tools. Keywords: Electronic Democracy, Chamber of Deputies, online survey, access statistics.
Abstract:
This is a report to the California Department of Fish and Game (CDFG). Between 2003 and 2008, the Foundation of CSUMB produced fish habitat maps and GIS layers for CDFG based on CDFG field data. This report describes the data entry, mapping, and website construction procedures associated with the project, and includes the maps that have been constructed. It marks the completion of the Central Coast region South District Basin Planning and Habitat Mapping Project. (Document contains 40 pages)
Abstract:
The goal of this project was to gather information on wetland restoration projects in the Morro Bay, California, region. Data provided to the San Francisco Estuary Institute (SFEI) will be used to enhance a web-based, public-access database, the Bay Area Wetland Project Tracker. Wetland Tracker provides information on the location, size, sponsors, habitats, contact persons, and status of included projects. Its website provides an interactive map of planned and completed wetland projects (http://www.wetlandtracker.org). (Document contains 4 pages)
Abstract:
Identifies aspects that should be considered in defining granular subject metadata for Brazilian federal legislation. The object of study was the Sistema de Legislação Informatizada (Legin Web), available on the Portal of the Chamber of Deputies. The specific objectives were: to identify the types of subjects widely used in indexing Brazilian federal legislation, as well as aspects of the information-seeking context that affect the identification of subject metadata; to analyze candidate subject metadata for federal legislation based on metadata standards and information-organization models discussed in the literature; and, on that basis, to propose subject metadata for Brazilian federal legislation. The idea is to use this metadata to reduce the imprecision of search results over federal legislation, making the process faster and more efficient.
Abstract:
In use since ancient Greece and now widespread in most countries of the world, the traditional voting system based on paper ballots has several security problems, such as the difficulty of preventing voter coercion, vote selling, and fraudulent voter impersonation, as well as usability problems that lead to ballot-marking errors and a slow counting process that can take days. In addition, the traditional system provides no receipt of the vote that would allow voters to verify that their vote was correctly counted in the tally. It was initially believed that computerizing the voting system would solve all the problems of the traditional system. However, after its deployment in some countries, electronic voting proved unable to provide irrefutable guarantees that it had not been the target of fraudulent alterations during its development or operation. The poor reputation of electronic systems is mainly associated with the lack of transparency of processes that, for the most part, neither materialize the vote (verified by the voter for manual-count purposes) nor generate evidence (a receipt) that the voter's vote was correctly counted. The goal of this work is to propose an electronic voting architecture that securely integrates voter anonymity and authenticity with the confidentiality and integrity of the vote and of the system. The system improves the usability of the paper-based "Three Ballots" voting scheme by implementing it computationally. The scheme lends greater credibility to the voting system through vote materialization and receipts, and through resistance to coercion and vote selling. Using asymmetric cryptography and classical computational security, combined with an efficient audit system, the proposal guarantees security and transparency in the processes involved. The modular architecture distributes responsibility among its entities, adding robustness and making large-scale elections feasible. A system prototype built using web services and the Election Markup Language demonstrates the feasibility of the proposal.
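To give a flavor of the underlying paper scheme, "Three Ballots" (Rivest's ThreeBallot) splits each vote across three ballots: every candidate's row receives exactly one mark spread over the three ballots, except the chosen candidate's row, which receives two, so no single ballot reveals the vote, yet the voter can keep a copy of one ballot as a receipt. A minimal sketch of the encoding and tally (illustrative only, not the dissertation's protocol) follows.

```python
import random

def three_ballot(num_candidates, choice):
    """Encode one vote as three multi-ballots (ThreeBallot, Rivest 2006).

    Each candidate row gets one mark spread over the three ballots,
    except the chosen candidate's row, which gets two.
    """
    ballots = [[0] * num_candidates for _ in range(3)]
    for cand in range(num_candidates):
        marks = 2 if cand == choice else 1
        for b in random.sample(range(3), marks):
            ballots[b][cand] = 1
    return ballots

def tally(all_ballots, num_candidates, num_voters):
    """Column sums minus one mark per voter recover the true counts."""
    sums = [sum(b[c] for b in all_ballots) for c in range(num_candidates)]
    return [s - num_voters for s in sums]

# Three voters, three candidates: votes for candidates 0, 0 and 2.
cast = [b for v in (0, 0, 2) for b in three_ballot(3, v)]
print(tally(cast, num_candidates=3, num_voters=3))   # -> [2, 0, 1]
```

The receipt property comes from publishing all cast ballots: a voter can check that their retained copy appears in the public tally without anyone learning which candidate it encodes.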