924 results for Combined Web crippling and Flange Crushing
Abstract:
Until recently, the great majority of courses in science and technology areas, where lab work is a fundamental part of the learning process, could not be taught at a distance. This reality is changing with the dissemination of remote laboratories. Supported by resources based on new information and communication technologies, it is now possible to remotely control a wide variety of real laboratories. However, most of them are designed specifically for this purpose, are inflexible, and resemble the real ones only in their functionality. In this paper, an alternative remote lab infrastructure devoted to the study of electronics is presented. Its main characteristics are, from a teacher's perspective, reusability and simplicity of use, and, from a student's point of view, an exact replication of the real lab, enabling students to complement or finish at home the work started in class. The remote laboratory is integrated into the Learning Management System in use at the school and may therefore be combined with other web experiments and e-learning strategies, while safeguarding access security.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Project work presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Digital Marketing, under the supervision of Paulo Gonçalves, MSc
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Electrical and Computer Engineering
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Digital Marketing, under the supervision of Prof. Paulo Alexandre Pires
Abstract:
The emergence of the World Wide Web has provided users with a range of opportunities for accessing data and information. This access has become commonplace for every Web user, common and experienced alike, whether to obtain basic information or more complex information. All this technological progress has given users access to a vast amount of information, scattered across the globe and, most of the time, with no links between its parts. Gathering the relevant information on a given topic, while having to resort to several sources to obtain and compare everything intended, is a time-consuming process for the user. The aim is to automate this process of collecting information from web pages as much as possible, giving the user the possibility of applying automatic analysis and processing algorithms and tools, thereby reducing the time and effort required to perform tasks over web pages. This process is called Web Scraping. This work describes an architecture for an automatic, configurable web scraping system based on existing technologies, namely in the context of the semantic web. To this end, the work analyses the effects of applying Web Scraping through the following steps:
• Identification and analysis of several web scraping tools;
• Identification of the human-performed process that complements current web scraping tools;
• Design of an architecture, complementary to web scraping tools, that supports the user's web scraping process;
• Development of a prototype based on existing tools and technologies;
• Experiments in the application domain of Portuguese supermarket web pages;
• Analysis of the results obtained from these experiments.
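To make the extraction step concrete, the sketch below shows a minimal configurable scraper in Python, in the spirit of the tools the work surveys; the URL, CSS selectors and field names are hypothetical placeholders, not taken from the prototype described above.

```python
# Minimal sketch of a configurable scraping step (hypothetical URL and
# selectors; the original prototype's configuration format is not shown here).
import requests
from bs4 import BeautifulSoup

CONFIG = {
    "url": "https://example-supermarket.pt/products",  # hypothetical page
    "item_selector": "div.product",                    # assumed CSS selectors
    "fields": {"name": "span.name", "price": "span.price"},
}

def scrape(config):
    """Fetch the page and extract one record per item-selector match."""
    html = requests.get(config["url"], timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for item in soup.select(config["item_selector"]):
        record = {}
        for field, selector in config["fields"].items():
            node = item.select_one(selector)
            record[field] = node.get_text(strip=True) if node else None
        records.append(record)
    return records

if __name__ == "__main__":
    for row in scrape(CONFIG):
        print(row)
```

Keeping the selectors in a configuration object, rather than in code, is what makes such a scraper reusable across pages with different layouts.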
Abstract:
This article presents a work-in-progress version of a Dublin Core Application Profile (DCAP) developed to serve the Social and Solidarity Economy (SSE). Studies revealed that this community is interested in implementing both internal interoperability between their Web platforms, to build a global SSE e-marketplace, and external interoperability between their Web platforms and external ones. The Dublin Core Application Profile for Social and Solidarity Economy (DCAP-SSE) serves this purpose. SSE organisations are immersed in the market economy, but they have specificities not taken into account in this economy. The DCAP-SSE integrates terms from well-known metadata schemas and from Resource Description Framework (RDF) vocabularies or ontologies, in order to enhance interoperability and take advantage of the benefits of the Linked Open Data ecosystem. It also integrates terms from the new essglobal RDF vocabulary, created with the goal of responding to SSE-specific needs, as well as five new Vocabulary Encoding Schemes to be used with DCAP-SSE properties. The DCAP development was based on a method for the development of application profiles (Me4MAP). We believe this article has educational value, since it presents the idea that it is important to base DCAP developments on a method, and it shows the main results of applying such a method.
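As a rough illustration of what a profile-conformant description could look like in practice, the Python/rdflib sketch below mixes Dublin Core terms with a domain vocabulary, as DCAP-SSE does; the essglobal namespace URI and the class and property names used here are assumed placeholders, not the published vocabulary.

```python
# Minimal sketch of a metadata record mixing Dublin Core terms with a
# domain vocabulary, in the spirit of DCAP-SSE. The namespace URI, the
# 'SSEInitiative' class and the 'economicActivity' property are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ESS = Namespace("http://example.org/essglobal#")  # assumed namespace

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ess", ESS)

org = URIRef("http://example.org/sse/org/42")      # hypothetical resource
g.add((org, RDF.type, ESS.SSEInitiative))          # hypothetical class
g.add((org, DCTERMS.title, Literal("Cooperativa Exemplo")))
g.add((org, DCTERMS.spatial, Literal("Porto, Portugal")))
g.add((org, ESS.economicActivity, Literal("fair-trade retail")))

print(g.serialize(format="turtle"))
```

Reusing dcterms properties is what gives external platforms a foothold for interoperability, while the domain vocabulary carries the SSE-specific information.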
Abstract:
Master’s Degree Dissertation
Abstract:
As we move closer to the practical concept of the Internet of Things and our reliance on public and private APIs increases, web services and their related topics have become utterly crucial to the informatics community. However, the question of which style of web services would best solve a particular problem can raise significant and multifarious debates. Two implementation styles stand out: the RPC-oriented style, represented by implementations of the SOAP protocol, and the hypermedia style, represented by implementations of the REST architectural style. Searching for examples of already established web services, we can find a handful of robust and reliable public and private SOAP APIs; nevertheless, RESTful services seem to be gaining popularity in the enterprise community. For the current generation of developers working on informatics solutions, REST seems to represent a fundamental and straightforward alternative and even a more deep-rooted approach than SOAP. But are they comparable? Does each approach have scenarios for which it is best suited? Such a study is briefly carried out in the present document's chapters, starting with the respective background study, followed by an analysis of the hypermedia approach and an instantiation of its architecture in a particular case study applied in a BPM context.
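To ground the contrast drawn above, here is a minimal sketch (in Python) of the hypermedia style: the client starts at an entry point and follows link relations advertised in each response, instead of hard-coding endpoint URLs as an RPC/SOAP client typically would. The API root and relation names are hypothetical.

```python
# Minimal sketch of a hypermedia (HATEOAS) client: it discovers URLs from
# link relations embedded in each response instead of hard-coding endpoints.
# The API root and the relation names below are hypothetical.
import requests

API_ROOT = "https://api.example.org/bpm"  # assumed entry point

def follow(resource: dict, rel: str) -> dict:
    """Follow a named link relation advertised by the resource itself."""
    href = resource["_links"][rel]["href"]
    response = requests.get(href, timeout=10)
    response.raise_for_status()
    return response.json()

root = requests.get(API_ROOT, timeout=10).json()
processes = follow(root, "processes")  # the server says where these live
first = follow(processes, "first")
print(first)
```

The design pay-off is loose coupling: the server can restructure its URL space without breaking clients, as long as the link relations are preserved.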
Abstract:
OBJECTIVE: Combined hyperlipidaemia is a common and highly atherogenic lipid phenotype with multiple lipoprotein abnormalities that are difficult to normalise with single-drug therapy. The ATOMIX multicentre, controlled clinical trial compared the efficacy and safety of atorvastatin and bezafibrate in patients with diet-resistant combined hyperlipidaemia. PATIENTS AND STUDY DESIGN: Following a 6-week placebo run-in period, 138 patients received atorvastatin 10 mg or bezafibrate 400 mg once daily in a randomised, double-blind, placebo-controlled trial. To meet predefined low-density lipoprotein-cholesterol (LDL-C) target levels, atorvastatin dosages were increased to 20 mg or 40 mg once daily after 8 and 16 weeks, respectively. RESULTS: After 52 weeks, atorvastatin achieved greater reductions in LDL-C than bezafibrate (percentage decrease 35 vs 5; p < 0.0001), while bezafibrate achieved greater reductions in triglycerides than atorvastatin (percentage decrease 33 vs 21; p < 0.05) and greater increases in high-density lipoprotein-cholesterol (HDL-C) (percentage increase 28 vs 17; p < 0.01). Target LDL-C levels (according to global risk) were attained in 62% of atorvastatin recipients and 6% of bezafibrate recipients, and triglyceride levels <200 mg/dL were achieved in 52% and 60% of patients, respectively. In patients with normal baseline HDL-C, bezafibrate was superior to atorvastatin in raising HDL-C, while in those with baseline HDL-C <35 mg/dL the two drugs raised HDL-C to a similar extent after adjustment for baseline values. Both drugs were well tolerated. CONCLUSION: The results show that atorvastatin has overall better efficacy than bezafibrate in concomitantly reaching LDL-C and triglyceride target levels in combined hyperlipidaemia, thus supporting its use as monotherapy in patients with this lipid phenotype.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
INTRODUCTION: Extended-spectrum β-lactamases (ESBLs) are enzymes that degrade β-lactam antibiotics and have been reported to be an important cause of nosocomial infection worldwide. METHODS: During 2009, 659 enterobacteria strains were isolated from different clinical specimens and tested for ESBL production. The disk approximation test, the combined disk method and the addition of clavulanic acid were used for phenotypic detection of ESBL-producing strains, and PCR was used for detection of the blaTEM and blaCTX-M genes. RESULTS: Among the isolates, 125 were ESBL producers. The blaCTX-M and blaTEM genes were detected in 90.4% and 75% of the strains, respectively. Most strains were isolated from urine, and Klebsiella pneumoniae was the most prevalent organism. The microorganisms presented high resistance to the antibiotics. CONCLUSIONS: These results support the need to extend ESBL detection methods to other pathogens of the Enterobacteriaceae family, because these methods are currently standardized by the CLSI only for Escherichia coli, Klebsiella pneumoniae, Klebsiella oxytoca and Proteus mirabilis. Carbapenems were the antibiotic class of choice for the treatment of infections caused by ESBL-producing Enterobacteriaceae.
Abstract:
Both culture coverage and digital journalism are contemporary phenomena that have undergone several transformations within a short period of time. Whenever the media enter a period of uncertainty such as the present one, there is an attempt to innovate in order to seek sustainability, skip the crisis or find a new public. This indicates that there are new trends to be understood and explored, i.e., how are media innovating in a digital environment? Not only does the professional debate about the future of journalism justify the need to explore the issue, but so do the academic approaches to cultural journalism. However, none of the studies so far has considered innovation as a motto or driver and tried to explain how the media are covering culture, achieving sustainability and engaging with readers in a digital environment. This research examines how European media which specialize in culture or have an important cultural section are innovating in a digital environment. Specifically, we look at how these innovation strategies are pursued in relation to the approach to culture and dominant cultural areas, editorial models, the use of digital tools for telling stories, overall brand positioning and extensions, engagement with the public, and business models. We conducted a mixed-methods study combining case studies of four media projects, which integrates qualitative web-feature and content analysis with quantitative web content analysis. The four case studies chosen were two major general-interest journalistic brands which started as physical newspapers, The Guardian (London, UK) and Público (Lisbon, Portugal); a magazine specialized in international affairs, culture and design, Monocle (London, UK); and a native digital media project launched by a cultural organization, Notodo, by La Fábrica. Findings suggest, on the one hand, that we are witnessing a paradigm shift in culture coverage in a digital environment, challenging traditional boundaries related to cultural themes and scope, angles, genres, content format and delivery, engagement and business models. Innovation in the four case studies lies especially along the product dimensions (format and content), brand positioning and process (business model and ways to engage with users). On the other hand, there are still perennial values that are crucial to innovation and sustainability, such as commitment to journalism, consistency (to the reader, to brand extensions and to the advertiser), intelligent differentiation, and the capability of knowing what innovation means and how it can be applied, since this thesis also confirms that one formula doesn't suit all. Changing minds, overcoming cultural inertia, and optimizing the memory of the websites, looking at them as living, organic bodies which continuously interact with readers in many different ways rather than as a closed collection of articles, are still the main challenges for some media.
Abstract:
The use of chemical analysis of microbial components, including proteins, became an important achievement for microbial identification in the 1980s. This led to a more objective microbial identification scheme, called chemotaxonomy; the analytical tools used in the field are mainly 1D/2D gel electrophoresis, spectrophotometry, high-performance liquid chromatography, gas chromatography, and combined gas chromatography-mass spectrometry. The Edman degradation reaction was also applied to peptide sequencing, giving important insights into microbial identification. The rapid development of these techniques, in association with knowledge generated by DNA sequencing and by phylogeny based on rRNA gene and housekeeping gene sequences, boosted microbial identification to an unparalleled scale. Recent results of mass spectrometry (MS), such as Matrix-Assisted Laser Desorption/Ionisation Time-of-Flight (MALDI-TOF), for rapid and reliable microbial identification have shown considerable promise. In addition, the technique is rapid, reliable and inexpensive in terms of labour and consumables when compared with other biological techniques. At present, MALDI-TOF MS adds an additional step to polyphasic identification, which is essential when there is a paucity of characters or when high DNA homologies make it difficult to delimit very closely related species. The full impact of this approach is now being appreciated as more diverse species are studied in detail and successfully identified. However, even with the best polyphasic system, identification of some taxa remains time-consuming, and determining what represents a species remains subjective. The possibilities opened by new and even more robust mass spectrometers, combined with sound and reliable databases, allow not only microbial identification based on proteome fingerprinting but also de novo sequencing of specific proteins as an additional step. These approaches are pushing the boundaries of the microbial identification field.
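At its core, the proteome-fingerprinting identification described above amounts to comparing an unknown spectrum against a reference database. Below is a minimal sketch, assuming spectra have already been binned into fixed-length intensity vectors; real MALDI-TOF pipelines use far more elaborate preprocessing and scoring.

```python
# Minimal sketch of proteome-fingerprint matching: an unknown mass spectrum,
# binned into a fixed-length intensity vector, is compared against reference
# spectra by cosine similarity. The spectra below are toy values; production
# systems use dedicated peak-picking and scoring schemes.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical binned spectra (m/z bins -> summed intensities).
reference_db = {
    "Escherichia coli":      [0.9, 0.1, 0.0, 0.7, 0.2],
    "Klebsiella pneumoniae": [0.1, 0.8, 0.6, 0.0, 0.3],
}
unknown = [0.85, 0.15, 0.05, 0.60, 0.25]

species, spectrum = max(reference_db.items(),
                        key=lambda kv: cosine(unknown, kv[1]))
print(f"best match: {species} (similarity {cosine(unknown, spectrum):.3f})")
```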
Abstract:
A search for Higgs boson production in association with a W or Z boson, in the H → WW* decay channel, is performed with a data sample collected with the ATLAS detector at the LHC in proton-proton collisions at centre-of-mass energies $\sqrt{s} = 7$ TeV and 8 TeV, corresponding to integrated luminosities of 4.5 fb$^{-1}$ and 20.3 fb$^{-1}$, respectively. The WH production mode is studied in two-lepton and three-lepton final states, while two-lepton and four-lepton final states are used to search for the ZH production mode. The observed significance, for the combined WH and ZH production, is 2.5 standard deviations, while a significance of 0.9 standard deviations is expected in the Standard Model Higgs boson hypothesis. The ratio of the combined WH and ZH signal yield to the Standard Model expectation, $\mu_{VH}$, is found to be $\mu_{VH} = 3.0^{+1.3}_{-1.1}\,(\text{stat.})\,^{+1.0}_{-0.7}\,(\text{sys.})$ for the Higgs boson mass of 125.36 GeV. The WH and ZH production modes are also combined with the gluon fusion and vector boson fusion production modes studied in the H → WW* → ℓνℓν decay channel, resulting in an overall observed significance of 6.5 standard deviations and $\mu_{\text{ggF}+\text{VBF}+\text{VH}} = 1.16^{+0.16}_{-0.15}\,(\text{stat.})\,^{+0.18}_{-0.15}\,(\text{sys.})$. The results are interpreted in terms of scaling factors of the Higgs boson couplings to vector bosons ($\kappa_V$) and fermions ($\kappa_F$); the combined results are $|\kappa_V| = 1.06^{+0.10}_{-0.10}$ and $|\kappa_F| = 0.85^{+0.26}_{-0.20}$.
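For readers unfamiliar with the notation, a signal strength such as $\mu_{VH}$ is the measured signal yield relative to the Standard Model expectation; schematically, this is the standard definition (not a formula specific to this analysis), with $\sigma$ the production cross-section and BR the branching ratio:

```latex
% Signal strength: observed signal yield relative to the Standard Model
% expectation (standard definition; sigma is the production cross-section
% and BR the branching ratio of the decay channel).
\[
  \mu_{VH} \;=\; \frac{(\sigma \times \mathrm{BR})_{\mathrm{obs}}}
                      {(\sigma \times \mathrm{BR})_{\mathrm{SM}}}
  \;=\; 3.0\,^{+1.3}_{-1.1}\,(\mathrm{stat.})\;^{+1.0}_{-0.7}\,(\mathrm{sys.})
\]
```

A value of $\mu = 1$ thus corresponds to exact agreement with the Standard Model prediction.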