995 results for directories
Abstract:
The publication of this roster coincides with the 225th anniversary of the founding of the "Academia Médico Práctica de Barcelona", later renamed the Reial Acadèmia de Medicina de Catalunya. The book does not set out to be a history of the Academy; it confines itself to listing its nearly 350 full (numerary) members. To make the list less dry, however, brief notes have been added on the Academy's evolution, its composition, and the premises where it has carried out its activity.
Abstract:
Annual directory of PreK-12 schools, area education agencies, universities, colleges, and the Iowa Department of Education
Abstract:
This is the first in an annual series of in-depth surveys of the public library scene in Iowa. The report includes two main sections: an enlarged directory consisting of five separate smaller directories and an expanded library statistical survey.
Abstract:
The aim of this work is to present a supplier selection process for a strategic product: the successive phases of the process, the issues to be considered in each phase, and the methods and tools that can be applied to them. The selection process begins with defining the need, which covers describing both the product's characteristics and the desired supplier relationship. For strategic products the goal is most often a long-term form of cooperation with the chosen supplier. Once the need has been defined, potential suppliers are sought from various sources, such as commercial directories, the Internet, trade publications and personal contacts, and a candidate list is compiled. Next, the selection criteria to be applied are defined; typically at least the supplier's financial situation, quality, production, delivery, service, and reporting and communication are examined. A first screening then removes the least suitable candidates from the later phases of the process; the necessary information can be obtained through written questionnaires as well as telephone and personal interviews. In the detailed evaluation, the suppliers are compared thoroughly against the previously chosen criteria; various methods are available for this, such as categorical evaluation, weighted-point evaluation and cost-ratio evaluation. A few of the most suitable suppliers are typically invited to negotiations, after which the most suitable supplier must be chosen, or alternatively a couple of suppliers between whom the contract is split.
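The weighted-point evaluation mentioned above can be illustrated with a short sketch; the criteria, weights and 1-10 scores below are invented for illustration and are not taken from the thesis.

```python
# A minimal sketch of weighted-point supplier evaluation.
# Criteria, weights and scores are hypothetical examples.
CRITERIA_WEIGHTS = {
    "financial_situation": 0.20,
    "quality": 0.30,
    "production_capacity": 0.15,
    "delivery": 0.15,
    "service": 0.10,
    "reporting_and_communication": 0.10,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (e.g. 1-10) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "Supplier A": {"financial_situation": 8, "quality": 9, "production_capacity": 7,
                   "delivery": 6, "service": 8, "reporting_and_communication": 7},
    "Supplier B": {"financial_situation": 7, "quality": 7, "production_capacity": 9,
                   "delivery": 8, "service": 6, "reporting_and_communication": 8},
}

# Rank the candidates to form the short list invited to negotiations.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```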
Abstract:
This master's thesis examines current directory technologies and how they can be exploited. The World Wide Web has brought entirely new dimensions to the directory world as well, as demonstrated by the significant growth in the popularity of the LDAP directory protocol. LDAP suits the Internet world extremely well thanks to its lightness, ease of use and speed. In the practical part, a company directory based on LDAP was implemented, through which a company's personnel and unit data could be searched and edited via a WWW user interface. The purpose of the work was to assess the suitability of LDAP for a commercial application and to gather experience of the related issues and techniques.
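A web front end like the one described would typically issue LDAP searches of roughly the following form. This is a sketch assuming the Python ldap3 library; the server address, credentials, base DN, filter and attribute names are invented, since the thesis does not specify them.

```python
# A minimal sketch of the kind of LDAP lookup a company directory performs.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldap://directory.example.com", get_info=ALL)
with Connection(server, user="cn=reader,dc=example,dc=com", password="secret",
                auto_bind=True) as conn:
    # Find every person in the Sales unit and return a few display attributes.
    conn.search(search_base="dc=example,dc=com",
                search_filter="(&(objectClass=person)(ou=Sales))",
                search_scope=SUBTREE,
                attributes=["cn", "mail", "telephoneNumber"])
    for entry in conn.entries:
        print(entry.cn, entry.mail, entry.telephoneNumber)
```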
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.
Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.
Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.
Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to provide input values to search interfaces manually and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
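The basic idea of querying a web database through its search form can be sketched as follows. This is not the thesis's form query language or data model; it is a generic illustration using the Python requests and BeautifulSoup libraries, with an invented URL, form field names and CSS selector.

```python
# A minimal sketch of filling out a web search form programmatically and
# extracting data from the dynamic result page it returns.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "http://books.example.com/search"   # hypothetical search interface

def query_web_database(title_term: str) -> list[str]:
    # Submit the query exactly as a user would through the search form.
    response = requests.get(SEARCH_URL, params={"title": title_term, "lang": "en"})
    response.raise_for_status()

    # The result is a dynamic page embedding records from the back-end database;
    # extract the fields of interest from the result markup.
    soup = BeautifulSoup(response.text, "html.parser")
    return [row.get_text(strip=True) for row in soup.select("div.result-title")]

if __name__ == "__main__":
    for title in query_web_database("information retrieval"):
        print(title)
```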
Abstract:
The aim of the study was to identify knowledge-related barriers to knowledge management in an organization. In addition, the study examined which barriers arise from organizational culture, management and technology, and which knowledge process and what kind of knowledge the barriers relate to. The theoretical part covers knowledge, knowledge management, and the enablers of and barriers to knowledge management. The research was carried out as a qualitative case study in a case organization: six semi-structured interviews were conducted among the organization's middle management, and the material was analysed using content analysis. The biggest barriers to knowledge management were lack of time and shortcomings in finding, storing and sharing internal knowledge. Most of the barriers were found in organizational culture. Barriers were found in all knowledge processes, whereas in earlier studies they related mainly to knowledge sharing. Making knowledge easier to find, for example by creating information directories and "cleaning up" the intranet, would make work more efficient. Mapping and storing important knowledge, sharing knowledge and an open atmosphere could help the organization towards a more efficient flow of knowledge and better customer service.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Business directory for Canada and Newfoundland for the year 1899
Abstract:
As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot-spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance. If separate controllers for processing protocol requests are included, the protocol scales to 32k-processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be able to be satisfied locally; at most 35% of the global references are allowed to reach the top level of the hierarchy.
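The "height reached in the hierarchy" metric studied above can be pictured with a toy model of a hierarchical directory. The sketch below is not the PHD protocol; it only illustrates the general idea that each directory node records which of its children hold a block, so a read climbs the tree until some ancestor knows of a copy.

```python
# A toy hierarchical-directory model: each internal node keeps a directory saying
# which of its children have a copy of a block; a read climbs until an ancestor
# knows of a copy, which determines the height the operation reaches.
class DirNode:
    def __init__(self, parent=None):
        self.parent = parent
        self.children_with_copy = {}   # block -> set of child nodes holding it

def read_height(leaf_path, block):
    """How many levels a read from this leaf climbs before locating `block`.

    `leaf_path` lists the directory nodes from the leaf's parent up to the root.
    """
    for height, node in enumerate(leaf_path, start=1):
        if node.children_with_copy.get(block):
            return height          # a copy exists somewhere below this ancestor
    return len(leaf_path)          # no copy found; the request reached the root

# Tiny example: a two-level hierarchy where block "A" is cached under the first level.
root = DirNode()
level1 = DirNode(parent=root)
level1.children_with_copy["A"] = {"sibling leaf"}
print(read_height([level1, root], "A"))   # -> 1, satisfied low in the hierarchy
print(read_height([level1, root], "B"))   # -> 2, reaches the top of the hierarchy
```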
Abstract:
First conference: Bibliotecas y Repositorios Digitales: Gestión del Conocimiento, Acceso Abierto y Visibilidad Latinoamericana (BIREDIAL; Digital Libraries and Repositories: Knowledge Management, Open Access and Latin American Visibility). May 9-11, 2011, Bogotá, Colombia.
Abstract:
Purpose - The purpose of this paper is to identify the most popular techniques used to rank a web page highly in Google. Design/methodology/approach - The paper presents the results of a study of 50 highly optimized web pages that were created as part of a Search Engine Optimization competition. The study focuses on the most popular techniques used to rank highest in this competition, and includes an analysis of the use of PageRank, number of pages, number of in-links, domain age and the use of third-party sites such as directories and social bookmarking sites. A separate study was made of 50 non-optimized web pages for comparison. Findings - The paper provides insight into the techniques that successful Search Engine Optimizers use to ensure a page ranks highly in Google. It recognizes the importance of PageRank and links as well as directories and social bookmarking sites. Research limitations/implications - Only the top 50 web sites for a specific query were analyzed. Analyzing more web sites and comparing with similar studies in different competitions would provide more concrete results. Practical implications - The paper offers a revealing insight into the techniques used by industry experts to rank highly in Google, and the success or otherwise of those techniques. Originality/value - This paper fulfils an identified need for web sites and e-commerce sites keen to attract a wider web audience.
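Since the study leans heavily on PageRank and in-links, a compact reminder of how PageRank is computed may help. The power-iteration sketch below uses a tiny invented link graph; it has nothing to do with the 50 pages analyzed in the paper.

```python
# A minimal PageRank power-iteration sketch over a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:               # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

demo_graph = {"home": ["blog", "shop"], "blog": ["home"], "shop": ["home", "blog"]}
print(pagerank(demo_graph))
```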
Abstract:
The creative industries are today a topic of intense debate in the international academic literature and in public and governmental organizations. These industries emerged as a concept reconciling the traditional cultural industries, the creative arts and the new information technologies. The objective of this research was to survey the literature on the topic and to map a core of these industries in Brazil and in the State of São Paulo. The mapping relied on information from secondary sources, such as reports from research institutes, telephone directories and trade associations. The results point to a more pronounced development of the creative industries focused on the production of mass cultural goods, such as television and radio, and, less markedly, in audiovisual production. In the State of São Paulo, only 1.0% of GDP is associated with creative-industry activities, with the expected concentration in the capital and its metropolitan region. The report also points out some lines of future research on the topic.
Abstract:
This paper presents a methodology and a tool for projects involving analogue and digital signals. A group of sub-systems was developed to translate a Matlab/Simulink model into the corresponding structural model described in VHDL-AMS. The translation tool, named MS(2)SV, reads a file containing a Simulink model and translates it into the corresponding VHDL-AMS structural code. The tool also creates the directory structure and the files needed to simulate the translated model in the System Vision environment. Three commercially available D/A converter models that use an R-2R ladder network were studied. This work addresses some of the challenges set by the electronics industry for the further development of simulation methodologies and tools in the field of mixed-signal technology. Although the studies focused on the D/A converter, the methodology has the potential to be extended to control systems and mechatronic systems.
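For readers unfamiliar with the R-2R ladder converters studied here, the ideal transfer function is simply Vout = Vref * code / 2^bits. The sketch below illustrates that relation; the 8-bit resolution and 5 V reference are illustrative values, not figures from the paper.

```python
# Ideal transfer function of an R-2R ladder D/A converter (illustrative values).
def r2r_dac_output(code: int, bits: int = 8, vref: float = 5.0) -> float:
    """Ideal output voltage for a given digital input code."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for the given resolution")
    return vref * code / (2 ** bits)

# Example: mid-scale and full-scale codes of an 8-bit converter.
print(r2r_dac_output(128))   # -> 2.5 V
print(r2r_dac_output(255))   # -> about 4.98 V
```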
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)