983 results for Computer Architecture
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to a steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying of search interfaces and the retrieval of the data behind them is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
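As a minimal sketch of the form-analysis step described above (an illustration only, not the I-Crawler itself), field labels and form controls in a static, HTML-only search interface could be paired roughly as follows; the sample form and its field names are hypothetical, and only the Python standard library is used:

from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collects form controls and pairs them with <label for="..."> texts."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.current_label_for = None   # "for" attribute of the currently open <label>
        self.labels = {}                # control id -> label text
        self.fields = []                # (tag, name, id) of controls found inside the form

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif self.in_form and tag == "label":
            self.current_label_for = attrs.get("for")
        elif self.in_form and tag in ("input", "select", "textarea"):
            self.fields.append((tag, attrs.get("name"), attrs.get("id")))

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False
        elif tag == "label":
            self.current_label_for = None

    def handle_data(self, data):
        if self.current_label_for and data.strip():
            self.labels[self.current_label_for] = data.strip()

sample_form = """
<form action="/search">
  <label for="q">Title keywords</label> <input type="text" id="q" name="q">
  <label for="yr">Year</label> <select id="yr" name="year"></select>
  <input type="submit" value="Search">
</form>
"""

parser = FormFieldExtractor()
parser.feed(sample_form)
for tag, name, field_id in parser.fields:
    print(tag, name, parser.labels.get(field_id, "<no label>"))

JavaScript-rich and non-HTML searchable forms, which the I-Crawler is designed to handle, would typically require rendering by a browser engine rather than a plain HTML parser.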
Abstract:
In this thesis I examine Service Oriented Architecture (SOA), considering its positive and negative qualities for both business organizations and IT. In SOA, services are loosely coupled and invoked through standard interfaces to make business processes independent of the underlying technology. As an architecture, SOA brings the key benefit of service reuse, which may mean anything from simple application reuse to taking advantage of entire business processes across enterprises. SOA also promises interoperability, especially through the Web services standards that enable platform independence. Cost efficiency is mainly a result of savings in IT maintenance and reduced development costs. The most severe limitations of SOA are its performance implications and security issues, but its applicability is also limited. Additional disadvantages of a service oriented approach include problems in data management and questions of complexity, and the lack of agreement about SOA, together with its twofold nature as both a business and a technology approach, makes the available information difficult to interpret. In this thesis I identify the benefits and limitations of SOA for the purpose described above and propose that companies need to consider the decision to implement SOA carefully, to determine whether the benefits will outweigh the costs in their individual case.
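As a minimal sketch of the loose coupling described above, a business process can be written against a service contract rather than a concrete implementation; the service and method names here are hypothetical:

from typing import Protocol

class OrderService(Protocol):
    """Service contract: the standard interface a business process depends on."""
    def place_order(self, customer_id: str, items: list[str]) -> str:
        ...

class SoapOrderService:
    """One possible implementation; a REST or message-queue backend could be swapped in
    without changing the business process code below."""
    def place_order(self, customer_id: str, items: list[str]) -> str:
        # In a real SOA deployment this would invoke a remote service endpoint.
        return f"order-{customer_id}-{len(items)}"

def checkout(service: OrderService, customer_id: str, items: list[str]) -> str:
    # The business process is independent of the underlying technology.
    return service.place_order(customer_id, items)

print(checkout(SoapOrderService(), "c42", ["book", "pen"]))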
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced with the same modus operandi or by the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from the images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that extracting profiles from the images using Hue and Edge filters, or their combination, and then comparing the profiles with a Canberra distance-based metric provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a first, fast triage step that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
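As a minimal sketch of the comparison step described above, a Canberra distance between two extracted profiles can be computed as follows; the profile vectors are made-up illustrations, not values from real document images:

import numpy as np

def canberra(u, v):
    # Canberra distance: sum over components of |u_i - v_i| / (|u_i| + |v_i|),
    # with zero-denominator terms treated as zero.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    num = np.abs(u - v)
    den = np.abs(u) + np.abs(v)
    terms = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
    return terms.sum()

profile_a = [0.12, 0.30, 0.45, 0.08, 0.05]   # e.g. hue profile of a region of interest, document A
profile_b = [0.10, 0.33, 0.44, 0.07, 0.06]   # same region, document B (possibly a common source)
profile_c = [0.40, 0.05, 0.10, 0.25, 0.20]   # a clearly different production

print(canberra(profile_a, profile_b))   # small distance -> candidate link
print(canberra(profile_a, profile_c))   # large distance -> unlikely to be linked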
Abstract:
Focal epilepsy is increasingly recognized as the result of an altered brain network, on both the structural and the functional level, and the characterization of these widespread brain alterations is crucial for our understanding of the clinical manifestations of seizures and cognitive deficits, as well as for the management of candidates for epilepsy surgery. Tractography based on Diffusion Tensor Imaging allows non-invasive mapping of white matter tracts in vivo. Recently, diffusion spectrum imaging (DSI), based on an increased number of diffusion directions and intensities, has improved the sensitivity of tractography, notably with respect to the problem of fiber crossing, and recent developments allow acquisition times compatible with clinical application. We used DSI and parcellation of the gray matter into regions of interest to build whole-brain connectivity matrices describing the mutual connections between cortical and subcortical regions in patients with focal epilepsy and in healthy controls. In addition, the high angular and radial resolution of DSI also allowed us to evaluate some biophysical compartment models, to better understand the cause of the changes in diffusion anisotropy. Global connectivity, hub architecture and regional connectivity patterns were altered in patients with temporal lobe epilepsy (TLE) and showed different characteristics in right TLE (RTLE) versus left TLE (LTLE), with stronger abnormalities in RTLE. The microstructural analysis suggested that disturbed axonal density contributed more than fiber orientation to the connectivity changes affecting the temporal lobes, whereas fiber orientation changes were more involved in extratemporal changes. Our study provides further structural evidence that RTLE and LTLE are not symmetrical entities, and DSI-based imaging could help investigate the microstructural correlates of these imaging abnormalities.
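As a minimal sketch of the connectivity analysis described above, a whole-brain connectivity matrix over parcellated regions and a simple strength-based hub measure might look as follows; the matrix is a random toy example, not DSI tractography output:

import numpy as np

n_regions = 6
rng = np.random.default_rng(0)
streamlines = rng.integers(0, 50, size=(n_regions, n_regions))   # toy streamline counts
conn = (streamlines + streamlines.T) / 2.0                       # symmetric connectivity matrix
np.fill_diagonal(conn, 0)                                        # no self-connections

strength = conn.sum(axis=1)                                      # node strength (weighted degree)
global_connectivity = conn.sum() / (n_regions * (n_regions - 1)) # mean connection weight
hubs = np.where(strength > strength.mean() + strength.std())[0]  # simple hub criterion

print("global connectivity:", global_connectivity)
print("hub regions (indices):", hubs)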
Abstract:
Objectives: The present study evaluates the reliability of the Radio Memory® software (Radio Memory; Belo Horizonte, Brazil) in classifying lower third molars, analyzing the intra- and interexaminer agreement of the results. Study Design: An observational, descriptive study of 280 lower third molars was made. The corresponding orthopantomographs were analyzed by two examiners using the Radio Memory® software, and the examination was repeated by each examiner 30 days after the first observation. Both intra- and interexaminer agreement were determined using the SPSS v 12.0 software package for Windows (SPSS; Chicago, USA). Results: Intra- and interexaminer agreement was found for both the Pell & Gregory and the Winter classifications (p < 0.01), with correlations between variables significant at the 99% level in all cases. Conclusions: The use of the Radio Memory® software for the classification of lower third molars is shown to be a valid alternative to the conventional method (direct evaluation on the orthopantomograph), for both clinical and research applications.
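As a minimal sketch of an agreement analysis of this kind, using hypothetical classification codes and Cohen's kappa plus a Spearman correlation as illustrative statistics (not necessarily the exact SPSS procedure used in the study):

from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Hypothetical Winter classification codes for ten molars:
# 1 = vertical, 2 = mesioangular, 3 = horizontal, 4 = distoangular.
examiner_1 = [1, 2, 2, 3, 1, 4, 2, 1, 3, 2]
examiner_2 = [1, 2, 2, 3, 1, 4, 1, 1, 3, 2]

kappa = cohen_kappa_score(examiner_1, examiner_2)
rho, p_value = spearmanr(examiner_1, examiner_2)
print(f"Cohen's kappa = {kappa:.2f}, Spearman rho = {rho:.2f} (p = {p_value:.4f})")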
Abstract:
Objective: To compare the accuracy of computer-aided ultrasound (US) and magnetic resonance imaging (MRI), by means of hepatorenal gradient analysis, in the evaluation of nonalcoholic fatty liver disease (NAFLD) in adolescents. Materials and Methods: This prospective, cross-sectional study evaluated 50 adolescents (aged 11–17 years), including 24 obese and 26 eutrophic individuals. All adolescents underwent computer-aided US, MRI, laboratory tests, and anthropometric evaluation. Sensitivity, specificity, positive and negative predictive values, and accuracy were evaluated for both imaging methods, with subsequent generation of the receiver operating characteristic (ROC) curve and calculation of the area under the ROC curve to determine the most appropriate cutoff point for the hepatorenal gradient in order to predict the degree of steatosis, using the MRI results as the gold standard. Results: The obese group comprised 29.2% girls and 70.8% boys, and the eutrophic group 69.2% girls and 30.8% boys. The prevalence of NAFLD was 19.2% in the eutrophic group and 83% in the obese group. The ROC curve generated for the hepatorenal gradient with a cutoff point of 13 presented 100% sensitivity and 100% specificity. When the same cutoff point was applied to the eutrophic group, false-positive results were observed in 9.5% of cases (90.5% specificity) and false-negative results in 0% (100% sensitivity). Conclusion: Computer-aided US with hepatorenal gradient calculation is a simple and noninvasive technique for the semiquantitative evaluation of hepatic echogenicity, and it could be useful in the follow-up of adolescents with NAFLD, in population screening for the disease, and in clinical studies.
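As a minimal sketch of the ROC analysis described above, with synthetic values standing in for the hepatorenal gradient measurements and the MRI gold standard:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

gradient = np.array([5, 7, 9, 11, 12, 13, 14, 16, 18, 20, 22, 25], dtype=float)  # synthetic index
steatosis = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])                       # synthetic MRI result

auc = roc_auc_score(steatosis, gradient)
fpr, tpr, thresholds = roc_curve(steatosis, gradient)
youden = tpr - fpr                       # Youden index to pick a cutoff
best = int(np.argmax(youden))
print(f"AUC = {auc:.2f}; cutoff = {thresholds[best]:.1f} "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")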
Abstract:
A BASIC computer program (REMOVAL) was developed to compute, in a VAX/VMS environment, all the calculations of the removal method for population size estimation (the catch-effort method for closed populations with constant sampling effort). The program follows the maximum likelihood methodology, checks the failure conditions, applies the appropriate formula, and displays the estimates of population size and catchability, with their standard deviations and coefficients of variation, and two goodness-of-fit statistics with their significance levels. Data from removal experiments on the cyprinodontid fish Aphanius iberus in the Alt Emporda wetlands are used to exemplify the use of the program.
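As a minimal sketch of the removal method's maximum likelihood estimation, using illustrative catch data and a simple grid search rather than the closed-form formulas implemented in REMOVAL:

import numpy as np
from scipy.stats import binom

def removal_loglik(N, p, catches):
    # Sequential-binomial removal model with constant effort: the catch on each pass
    # is Binomial(animals still present, catchability p). p may be a NumPy array.
    remaining, ll = N, 0.0
    for c in catches:
        ll = ll + binom.logpmf(c, remaining, p)
        remaining -= c
    return ll

def removal_mle(catches, n_max=1000):
    # Grid-search maximum likelihood over population size N and catchability p;
    # N cannot be smaller than the total number of animals removed.
    p_grid = np.linspace(0.01, 0.99, 99)
    best_N, best_p, best_ll = None, None, -np.inf
    for N in range(sum(catches), n_max + 1):
        ll = removal_loglik(N, p_grid, catches)
        i = int(np.argmax(ll))
        if ll[i] > best_ll:
            best_N, best_p, best_ll = N, p_grid[i], ll[i]
    return best_N, best_p

catches = [65, 32, 18]   # animals removed on three successive passes (illustrative data)
N_hat, p_hat = removal_mle(catches)
print(f"estimated population size = {N_hat}, estimated catchability = {p_hat:.2f}")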
Abstract:
The Catalan reception of the 1966 manifestos by Robert Venturi and Aldo Rossi marks the scenario of a breakup: while North America debates architectural form as a linguistic structure, Italy sinks its roots into the tradition of the Modern Movement as the origin of a new temporal and ideological dimension for architecture. The first contacts between Rossi and Spain confirm this search and allow the Italian architect to construct common itineraries with a number of architects from Barcelona. From these exchanges the 2C group was born, making use of typical avant-garde mechanisms: its members published the magazine 2C. The construction of the city (1972-1985), attended the XV Triennale di Milano in 1973 with the Torres Clavé Plan (1971) and the Aldo Rossi + 21 Spanish architects exhibition (1975), while Rossi organized the three editions of the Seminarios Internacionales de Arquitectura Contemporánea (S.I.A.C.) held in Santiago, Sevilla and Barcelona between 1976 and 1980. Alongside the activity of the former, the American contacts of Federico Correa, Oriol Bohigas, Lluís Domènech and the PER studio, as well as Rafael Moneo's teaching in Barcelona from 1971, trace counterpart itineraries: the foundation of the magazine Arquitecturas Bis (1974-1985), the organization of meetings between international publications such as Lotus and Oppositions in Cadaqués (1975) and New York (1977), and exchanges with members of the Five Architects. These counter-itineraries turned, in 1976, the initial ideological affirmations shared by Rossi and the 2C group into an irreconcilable distancing. Tracing the itinerary that the Italian travels from the Italian resistance towards the American retreat is part of the aim of this article.