997 results for Deep architecture


Relevance:

20.00%

Publisher:

Abstract:

Modern software development requires quick results and excellent quality, which leads to a high demand for reusability in the design and implementation of software components. The purpose of this thesis was to design and implement a reusable framework for portal front ends, including common portal features such as authentication and authorization. A further aim was to evaluate frameworks as units of reuse and to compare them with other reuse techniques. The thesis yielded a good picture of a framework's life cycle, of its problem domain, and of the actual process of implementing a framework. It was also found that frameworks are well suited to solving recurrent, similar problems within a restricted problem domain. The outcome of the thesis was a prototype of a generic framework and an example application built on it. The implemented framework offers an abstract base for portal front ends, using object-oriented methods and well-known design patterns. The example application demonstrates the speed and ease of application development based on application frameworks.
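
The abstract base for portal front ends can be illustrated with a minimal sketch built on the Template Method pattern, one of the well-known design patterns the abstract alludes to: the framework fixes the request-handling flow (authentication, authorization, rendering) and each portal application supplies only the variable part. All class and method names below are hypothetical, not taken from the thesis.

from abc import ABC, abstractmethod

class PortalFrontEnd(ABC):
    """Framework base class: common portal features (authentication,
    authorization) live here; applications fill in the abstract hooks."""

    def handle_request(self, user: str, credentials: str) -> str:
        # Reusable flow shared by every portal built on the framework.
        if not self.authenticate(user, credentials):
            return "403: authentication failed"
        if not self.authorized(user):
            return "403: not authorized"
        return self.render_page(user)

    def authenticate(self, user: str, credentials: str) -> bool:
        return credentials == "secret"  # stub check, kept self-contained

    def authorized(self, user: str) -> bool:
        return True  # default policy; an application may override it

    @abstractmethod
    def render_page(self, user: str) -> str:
        """The application-specific part supplied by each portal."""

class NewsPortal(PortalFrontEnd):
    def render_page(self, user: str) -> str:
        return f"<h1>News for {user}</h1>"

print(NewsPortal().handle_request("alice", "secret"))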

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies the problems a software architect faces in his work and the reasons behind them. The purpose of the study is to identify potential factors causing problems in system integration and software engineering; non-technical factors are of special interest. The thesis was carried out by interviewing professionals who took part in an e-commerce project at a corporation. The interviewees comprised architects from the technical implementation projects, the leader of the corporation's architecture team, project managers of different kinds, and a CRM manager. A list of themes guided the interviews, which were recorded, transcribed, and then classified using the ATLAS.ti software. The basics of e-commerce, software engineering, and system integration are also described: the differences between e-commerce, e-business, and traditional business, and the basic types of e-commerce; and, on the software engineering side, the software life span and general problems of software engineering and software design. In addition, the thesis describes general problems of system integration and the special requirements set by e-commerce. It concludes by presenting the problems found in the study and the areas of software engineering that could be developed so that similar problems can be avoided in the future.

Relevance:

20.00%

Publisher:

Abstract:

The bryozoan fauna growing on deep-water corals (Lophelia, Madrepora) from the upper slope off Catalonia (Blanes and Banyuls-sur-Mer, NW Mediterranean Sea) was studied. Among the 36 species recorded, a new species, Escharella acuta sp. nov., and a new subspecies, Escharina dutertrei protecta ssp. nov., are described; five other species have rarely been reported or were previously unknown from the Mediterranean Sea (Copidozoum exiguum, Amphiblestrum flemingii, Schizomavella neptuni, Smittina crystallina, Phylactellipora eximia). This epibiotic bryozoan fauna differs clearly from shallow-water assemblages and comprises a greater proportion of boreo-Atlantic species.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Deep brain stimulation is recognized as an effective treatment for movement disorders. We recently modified our surgical technique, limiting the number of intracerebral penetrations to three per hemisphere. Objective: The first objective of this study is to evaluate the accuracy of the implanted electrode on both sides of the surgery since this surgical technique was introduced. The second objective is to examine whether electrophysiology improved the placement of the implanted electrode. Materials and methods: This is a retrospective study of the operative reports and cerebral magnetic resonance images (MRI) of 30 patients who underwent bilateral deep brain stimulation. For electrophysiology, we used three parallel cannulas of the "Ben Gun", centered on the target planned on MRI. The pre- and postoperative MRIs were fused, and the distance between the planned target and the center of the implanted electrode's artifact was measured. Results: There was no significant difference in targeting accuracy between the two sides (hemispheres) of the surgery. More intraoperative adjustments, based on electrophysiology, were made on the second side of the surgery, which brought the electrode significantly closer to the MRI-planned target along the mediolateral axis. Conclusion: More adjustments of the position of the second electrode are needed, possibly related to brain shift. We therefore suggest using a central electrode trajectory guided by electrophysiology, combined with clinical evaluation. In the case of a suboptimal clinical result, we propose performing a multidirectional exploration.
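
The accuracy measure described above, the distance between the planned target and the center of the electrode artifact on the fused MRIs, amounts to a Euclidean distance between two points in the same stereotactic coordinate frame. A minimal sketch, with hypothetical coordinates and a function name of our own:

import math

def electrode_offset(planned, artifact):
    """Euclidean distance (mm) between the planned target and the
    center of the electrode artifact, both given as (x, y, z) points
    in the same fused-MRI coordinate frame."""
    return math.dist(planned, artifact)

# Hypothetical coordinates in mm, not taken from the study.
planned_target = (12.0, -3.0, -4.0)
artifact_center = (11.4, -2.6, -4.9)
print(f"offset: {electrode_offset(planned_target, artifact_center):.2f} mm")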

Relevance:

20.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. Findings from these surveys may therefore be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest are already discovered and known to the query systems. Such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so since the interfaces of conventional search engines are also web forms. At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
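
The thesis's own data model and form query language are not reproduced in this abstract; as a rough illustration of the underlying idea, the sketch below models a search interface as an action URL plus named fields and turns field values into a GET query using only Python's standard library. The class, the field names, and the endpoint are all hypothetical.

from dataclasses import dataclass
from urllib.parse import urlencode

@dataclass
class SearchInterface:
    """Minimal stand-in for a deep-web search form: an action URL plus
    the form's visible field labels mapped to HTTP parameter names."""
    action: str
    fields: dict

    def query_url(self, values: dict) -> str:
        # Translate human-readable labels into the form's parameter names.
        params = {self.fields[label]: v for label, v in values.items()}
        return f"{self.action}?{urlencode(params)}"

# Hypothetical bookstore search form.
books = SearchInterface(
    action="https://example.org/search",
    fields={"Title": "q", "Author": "author"},
)
# Fetching this URL would return result pages from which the embedded
# database records would then be extracted.
print(books.query_url({"Title": "deep web", "Author": "Smith"}))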

Relevance:

20.00%

Publisher:

Abstract:

We present the results of stereoscopic observations of the satellite galaxy Segue 1 with the MAGIC Telescopes, carried out between 2011 and 2013. With almost 160 hours of good-quality data, this is the deepest observational campaign on any dwarf galaxy performed so far in the very high energy range of the electromagnetic spectrum. We search this large data sample for signals of dark matter particles in the mass range between 100 GeV and 20 TeV. For this we use the full likelihood analysis method, which provides optimal sensitivity to characteristic gamma-ray spectral features, like those expected from dark matter annihilation or decay. In particular, we focus our search on gamma rays produced from different final-state Standard Model particles, annihilation with internal bremsstrahlung, monochromatic lines, and box-shaped signals. Our results represent the most stringent constraints on the annihilation cross-section or decay lifetime obtained from observations of satellite galaxies for masses above a few hundred GeV. In particular, our strongest limit (95% confidence level) corresponds to a ~500 GeV dark matter particle annihilating into τ+τ−, and is of order ⟨σ_ann v⟩ ≃ 1.2 × 10⁻²⁴ cm³ s⁻¹, a factor ~40 above the thermal value.
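
For context, limits of this kind are conventionally derived from the standard expression for the gamma-ray flux expected from dark matter annihilation, the product of a particle-physics term and an astrophysical J-factor. The equation below is this textbook form, not a formula quoted from the paper itself:

\[
\frac{d\Phi_\gamma}{dE} \;=\; \frac{\langle \sigma_{\mathrm{ann}} v \rangle}{8\pi\, m_\chi^{2}}\, \frac{dN_\gamma}{dE} \;\times\; \underbrace{\int_{\Delta\Omega}\int_{\mathrm{l.o.s.}} \rho_\chi^{2}(l,\Omega)\, dl\, d\Omega}_{J\text{-factor}}
\]

With no significant excess observed towards Segue 1, an upper limit on the flux translates directly into the upper limit on ⟨σ_ann v⟩ quoted above.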

Relevance:

20.00%

Publisher:

Abstract:

In this thesis I examine Service-Oriented Architecture (SOA), considering both its positive and negative qualities for business organizations and IT. In SOA, services are loosely coupled and invoked through standard interfaces to make business processes independent of the underlying technology. As an architecture, SOA brings the key benefit of service reuse, which may mean anything from simple application reuse to taking advantage of entire business processes across enterprises. SOA also promises interoperability, especially through the Web services standards that enable platform independence. Cost efficiency mainly results from savings in IT maintenance and reduced development costs. The most severe limitations of SOA are its performance implications and security issues, but its applicability is also limited. Additional disadvantages of a service-oriented approach include problems in data management and questions of complexity; moreover, the lack of agreement on what SOA is, together with its twofold nature as both a business and a technology approach, makes the available information difficult to interpret. In this thesis I identify the benefits and limitations of SOA for the purpose described above and propose that companies consider the decision to implement SOA carefully, to determine whether the benefits will outweigh the costs in their individual case.
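
Loose coupling through a standard interface, the property the abstract credits for service reuse and platform independence, can be shown in a few lines: callers depend only on a contract, so implementations can be swapped without touching the calling code. The sketch below is illustrative only; the names are not drawn from the thesis.

from typing import Protocol

class CustomerService(Protocol):
    """Standard interface: consumers know only this contract."""
    def lookup(self, customer_id: str) -> dict: ...

class LocalCustomerService:
    def lookup(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "in-process"}

class RemoteCustomerService:
    def lookup(self, customer_id: str) -> dict:
        # In a real SOA deployment this would invoke a web service
        # (SOAP or REST); stubbed here to stay self-contained.
        return {"id": customer_id, "source": "remote"}

def greet(service: CustomerService, customer_id: str) -> str:
    # The business process is independent of the implementation used.
    return f"hello, customer {service.lookup(customer_id)['id']}"

print(greet(LocalCustomerService(), "42"))
print(greet(RemoteCustomerService(), "42"))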

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To correlate the position of the chronically stimulated electrode on postoperative MRI with the clinical response obtained in PD patients. Materials and methods: We retrospectively reviewed 14 consecutive parkinsonian patients selected for STN-DBS surgery. Coordinates were determined on an IR T2 MRI coronal section perpendicular to the AC-PC plane, 3 mm posterior to the midcommissural point (MCP) and 12 mm lateral to the midline, at the inferior aspect of the subthalamic region. A CRW stereotactic frame was used for the surgical procedure. A 3D IR T2 MRI was performed postoperatively to determine the location of the stimulated contact in each patient. The clinical results were assessed independently by the neurological team. Results: All but 2 patients had monopolar stimulation. The mean coordinates of the stimulated contacts relative to the MCP were AP = −4.23 ± 1.4, Lat = 1.12 ± 0.15, and Vert = −4.1 ± 2.7. With a mean follow-up of 8 months, all stimulated patients showed significant clinical improvement (pre-/postoperative «ON» UPDRS: 25.8 ± 7.0 / 23.3 ± 8.6; pre-/postoperative «OFF» UPDRS: 50.2 ± 11.4 / 26.0 ± 7.8), 60% of them without any antiparkinsonian drug. Conclusion: According to the stereotactic atlas of Schaltenbrand and Wahren and the 3D shape of the STN, our results show that our targeting is accurate and that almost all the stimulated contacts lie within the STN volume. This indicates that MRI is a safe, precise, and reproducible procedure for targeting the STN. The location of the stimulated contact within the STN volume is a good predictor of the clinical results.
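
The reported mean UPDRS scores make the size of the benefit easy to work out; the short sketch below does the arithmetic on the values quoted above (the percentages are our own derivation, not figures from the abstract).

# Mean UPDRS motor scores as reported above.
preop_on, postop_on = 25.8, 23.3    # «ON» medication, pre/post surgery
preop_off, postop_off = 50.2, 26.0  # «OFF» medication, pre/post surgery

# Relative improvement with stimulation in each medication state.
on_gain = (preop_on - postop_on) / preop_on * 100
off_gain = (preop_off - postop_off) / preop_off * 100
print(f"ON-state improvement:  {on_gain:.0f}%")   # ≈ 10%
print(f"OFF-state improvement: {off_gain:.0f}%")  # ≈ 48%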

Relevance:

20.00%

Publisher:

Abstract:

While clinical examination is of essential importance in lymphology and requires experienced practitioners, lymphoscintigraphy and, more recently, indocyanine green lymphofluoroscopy are valuable investigation tools for the prevention, diagnosis, and treatment of lymphatic vascular diseases. The value of lymphoscintigraphy lies in the qualitative and quantitative analysis of the migration of macromolecules through the lymphatic vessels and in the evaluation of the deep lymphatic compartment. Lymphofluoroscopy differs from lymphoscintigraphy in that it yields detailed mapping of the superficial lymphatic vessels and dynamic real-time images. It provides the angiologist and the physiotherapist with irreplaceable information on vessel contractility and on the presence of compensatory pathways to favor during manual lymphatic drainage.

Venous thromboembolism is a frequent disease, with an annual incidence of 0.75-2.69/1000 that reaches 2-7/1000 in people over 70 years of age. Deep vein thrombosis (DVT) and pulmonary embolism are two manifestations of the same underlying disease. The most frequent localization of DVT is in the lower limbs. The diagnostic workup begins with an estimation of DVT risk, followed by a judicious use of D-dimers and compression venous ultrasound depending on the DVT probability. The development of direct oral anticoagulants and recent data on interventional DVT treatment in selected cases have widened the therapeutic spectrum of DVT. The present article aims to inform the primary care physician about the optimized workup of patients with suspected lower-limb DVT.
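
The workup sequence sketched in the abstract (pretest probability, then D-dimer, then compression ultrasound) can be expressed as a small decision function. This is a schematic illustration of that one sentence, not a clinical algorithm taken from the article; the labels and rule details are assumptions.

def dvt_workup(probability: str, d_dimer_negative=None) -> str:
    """Schematic triage: pretest probability -> D-dimer -> ultrasound."""
    if probability == "low":
        if d_dimer_negative is None:
            return "measure D-dimer"
        # A negative D-dimer in a low-probability patient rules DVT out.
        return "DVT excluded" if d_dimer_negative else "compression ultrasound"
    # Higher pretest probability: proceed to imaging directly.
    return "compression ultrasound"

print(dvt_workup("low"))                         # measure D-dimer
print(dvt_workup("low", d_dimer_negative=True))  # DVT excluded
print(dvt_workup("high"))                        # compression ultrasound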

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: No studies have identified which patients with upper-extremity deep vein thrombosis (DVT) are at low risk for adverse events within the first week of therapy. METHODS: We used data from the Registro Informatizado de la Enfermedad TromboEmbólica (RIETE) to explore, in patients with upper-extremity DVT, a prognostic score that had correctly identified patients with lower-limb DVT at low risk for pulmonary embolism, major bleeding, or death within the first week. RESULTS: As of December 2014, 1135 outpatients with upper-extremity DVT had been recruited. Of these, 515 (45%) were treated at home. During the first week, three patients (0.26%) experienced pulmonary embolism, two (0.18%) had major bleeding, and four (0.35%) died. We assigned 1 point to patients with chronic heart failure, creatinine clearance of 30-60 mL min⁻¹, recent bleeding, an abnormal platelet count, recent immobility, or cancer without metastases; 2 points to those with metastatic cancer; and 3 points to those with creatinine clearance < 30 mL min⁻¹. Overall, 759 (67%) patients scored ≤ 1 point and were considered to be at low risk. The rate of the composite outcome within the first week was 0.26% (95% confidence interval [CI] 0.004-0.87) in patients at low risk and 1.86% (95% CI 0.81-3.68) in the remaining patients. The C-statistic was 0.73 (95% CI 0.57-0.88). The net reclassification improvement was 22%, and the integrated discrimination improvement was 0.0055. CONCLUSIONS: Using six easily available variables, we identified outpatients with upper-extremity DVT at low risk for adverse events within the first week. These data may help to safely treat more patients at home.
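
The point assignments in the abstract translate directly into a scoring function; the sketch below implements them as stated (parameter names are our own, and the ≤ 1 cut-off for low risk follows the text).

def uedvt_risk_score(chronic_heart_failure: bool,
                     creatinine_clearance: float,  # mL/min
                     recent_bleeding: bool,
                     abnormal_platelet_count: bool,
                     recent_immobility: bool,
                     cancer: str = "none") -> int:
    """Prognostic score from the six variables listed in the abstract.
    cancer is one of "none", "non-metastatic", "metastatic"."""
    score = 0
    score += chronic_heart_failure + recent_bleeding
    score += abnormal_platelet_count + recent_immobility
    if creatinine_clearance < 30:
        score += 3
    elif creatinine_clearance <= 60:
        score += 1
    if cancer == "non-metastatic":
        score += 1
    elif cancer == "metastatic":
        score += 2
    return score

# Patients scoring <= 1 point were considered at low risk in the study.
s = uedvt_risk_score(False, 75.0, False, False, True, cancer="none")
print(s, "-> low risk" if s <= 1 else "-> not low risk")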