962 results for Meteor, JavaScript, web application, full-stack framework
Abstract:
The fuzzy online reputation analysis framework, or “foRa” (plural of forum, the Latin word for marketplace) framework, is a method for searching the Social Web to find meaningful information about reputation. Based on an automatic, fuzzy-built ontology, this framework queries the social marketplaces of the Web for reputation, combines the retrieved results, and generates navigable Topic Maps. Using these interactive maps, communications operatives can zero in on precisely what they are looking for and discover unforeseen relationships between topics and tags. Thus, using this framework, it is possible to scan the Social Web for a name, product, brand, or combination thereof and determine query-related topic classes with related terms and thus identify hidden sources. This chapter also briefly describes the youReputation prototype (www.youreputation.org), a free web-based application for reputation analysis. In the course of this, a small example will explain the benefits of the prototype.
Abstract:
For the main part, electronic government (e-government for short) aims to put digital public services at the disposal of citizens, companies, and organizations. To that end, e-government comprises the application of Information and Communications Technology (ICT) to support government operations and provide better governmental services (Fraga, 2002) than is possible with traditional means. Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes by which public services are generated and delivered, thereby transforming the entire spectrum of relationships of public bodies with their citizens, businesses, and other government agencies (Leitner, 2003). To implement this transformation, one of the most important points is to inform citizens, businesses, and/or other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, through appropriate handling of the participants' data, personalization towards these participants may even be accomplished. For instance, by creating meaningful user profiles as a kind of participant-tailored knowledge structure, a better-quality governmental service may be provided (i.e., expressed by individualized governmental services). To create such knowledge structures, known information (e.g., a social security number) can be enriched with vague information that may be accurate only to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the government-participant relationship.
The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR thereby stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (i.e., knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, will be briefly introduced and discussed.
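The framework itself is not published as code in this abstract; the core of the knowledge-aggregation component, being informed about changes of Web content, can be sketched as digest-based change detection. All class and variable names below are hypothetical illustrations, not the framework's actual API:

```python
import hashlib

class WebChangeMonitor:
    """Minimal sketch of the change-detection idea: compare a digest of the
    currently fetched content against the last-seen digest per URL. Fetching
    and participant notification are deliberately left abstract."""

    def __init__(self):
        self._digests = {}  # url -> last seen SHA-256 hex digest

    def check(self, url, content):
        """Return True if `content` differs from what was last seen for `url`."""
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        changed = self._digests.get(url) != digest
        self._digests[url] = digest
        return changed

monitor = WebChangeMonitor()
first = monitor.check("https://example.gov/notice", "Tax deadline: 31 March")
second = monitor.check("https://example.gov/notice", "Tax deadline: 31 March")
third = monitor.check("https://example.gov/notice", "Tax deadline: 30 April")
```

In a full system the `changed` signal would feed the representation and reasoning components; here it simply reports that the monitored page differs from the last visit.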
Abstract:
The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly exhibit a disastrous influence on the online reputation of organizations. Based on Social Web data, this study describes the building of an ontology based on fuzzy sets. After a recurring harvesting of folksonomies by Web agents, the aggregated tags are purified, linked, and transformed into a so-called fuzzy grassroots ontology by means of a fuzzy clustering algorithm. This self-updating ontology is used for online reputation analysis, a crucial task of reputation management, with the goal of following the online conversation around an organization to discover and monitor its reputation. In addition, an application of the Fuzzy Online Reputation Analysis (FORA) framework, lessons learned, and potential extensions are discussed in this article.
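The abstract names a fuzzy clustering algorithm as the step that turns aggregated tags into the grassroots ontology, without specifying which one. A generic fuzzy c-means sketch on toy tag co-occurrence vectors illustrates the idea of graded (rather than crisp) cluster membership; the data and parameters are hypothetical, not taken from the paper:

```python
import math
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=100, seed=1):
    """Minimal fuzzy c-means over d-dimensional points.
    Returns cluster centers and the fuzzy membership matrix u[i][k]."""
    rng = random.Random(seed)
    n, d = len(points), len(points[0])
    # random initial memberships, each row normalized to sum to 1
    u = []
    for _ in range(n):
        row = [rng.random() + 1e-6 for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    centers = [[0.0] * d for _ in range(c)]
    for _ in range(iters):
        # update centers as membership-weighted means
        for k in range(c):
            w = [u[i][k] ** m for i in range(n)]
            total = sum(w)
            centers[k] = [sum(w[i] * points[i][j] for i in range(n)) / total
                          for j in range(d)]
        # update memberships from inverse relative distances to centers
        for i in range(n):
            dists = [max(math.dist(points[i], centers[k]), 1e-12)
                     for k in range(c)]
            for k in range(c):
                u[i][k] = 1.0 / sum((dists[k] / dists[j]) ** (2 / (m - 1))
                                    for j in range(c))
    return centers, u

# toy tag co-occurrence vectors forming two obvious groups
tags = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
centers, u = fuzzy_c_means(tags, c=2)
```

The graded memberships in `u` are what would let a tag belong partially to several topic classes, which is the property the fuzzy grassroots ontology exploits.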
Abstract:
PURPOSE To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). METHODS Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. RESULTS The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. CONCLUSIONS This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.
Abstract:
It has become increasingly clear that desertification can only be tackled through a multi-disciplinary approach that not only involves scientists but also stakeholders. In the DESIRE project such an approach was taken. As a first step, a conceptual framework was developed in which the factors and processes that may lead to land degradation and desertification were described. Many of these factors do not work independently, but can reinforce or weaken one another, and to illustrate these relationships sustainable management and policy feedback loops were included. This conceptual framework can be applied globally, but can also be made site-specific to take into account that each study site has a unique combination of bio-physical, socio-economic and political conditions. Once the conceptual framework was defined, a methodological framework was developed in which the methodological steps taken in the DESIRE approach were listed and their logic and sequence were explained. The last step was to develop a concrete working plan to put the project into action, involving stakeholders throughout the process. This series of steps, in full or in part, offers explicit guidance for other organizations or projects that aim to reduce land degradation and desertification.
Abstract:
Debuggers are crucial tools for developing object-oriented software systems as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements and show how our approach improves debugging by adapting the debugger to several domains.
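The mechanism described, domain-specific operations and views combined with run-time selection, can be sketched as a registry of extensions guarded by activation predicates. This is an illustrative reduction, not the Moldable Debugger's actual (Smalltalk-based) API; all names are hypothetical:

```python
class DebugSession:
    """Stand-in for a paused execution context (name hypothetical)."""
    def __init__(self, receiver):
        self.receiver = receiver  # the object under inspection

class MoldableDebugger:
    """Sketch of the idea: debugging views/operations are registered with
    activation predicates, and the debugger selects, at run time, the ones
    that match the current session."""
    def __init__(self):
        self.extensions = []  # list of (predicate, view name) pairs

    def register(self, predicate, name):
        self.extensions.append((predicate, name))

    def applicable(self, session):
        # run-time selection: keep only extensions whose predicate matches
        return [name for pred, name in self.extensions if pred(session)]

dbg = MoldableDebugger()
dbg.register(lambda s: isinstance(s.receiver, dict), "dictionary inspector")
dbg.register(lambda s: isinstance(s.receiver, str), "parser stream view")
dbg.register(lambda s: True, "generic stack view")

views = dbg.applicable(DebugSession(receiver={"ast": []}))
```

Because the generic stack view's predicate always matches, domain-specific views augment rather than replace the traditional debugger, mirroring the paper's framing.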
Abstract:
Background The RCSB Protein Data Bank (PDB) provides public access to experimentally determined 3D structures of biological macromolecules (proteins, peptides, and nucleic acids). While various tools are available to explore the PDB, options to access the global structural diversity of the entire PDB and to perceive relationships between PDB structures remain very limited. Methods A 136-dimensional atom-pair 3D-fingerprint for proteins (3DP), counting categorized atom pairs at increasing through-space distances, was designed to represent the molecular shape of PDB entries. Example nearest-neighbor searches are reported, exemplifying the ability of 3DP similarity to identify closely related biomolecules, from small peptides to enzymes and large multiprotein complexes such as virus particles. Principal component analysis was used to visualize the PDB in 3DP space. Results The 3DP property space groups proteins and protein assemblies according to their 3D-shape similarity, yet shows exquisite ability to distinguish between closely related structures. An interactive website called PDB-Explorer is presented, featuring a color-coded interactive map of the PDB in 3DP space. Each pixel of the map contains one or more PDB entries, which are directly visualized as ribbon diagrams when the pixel is selected. The PDB-Explorer website allows performing 3DP nearest-neighbor searches of any PDB entry or of any structure uploaded as a protein-type PDB file. All functionalities on the website are implemented in JavaScript in a platform-independent manner and draw data from a server that is updated daily with the latest PDB additions, ensuring complete and up-to-date coverage. The essentially instantaneous 3DP-similarity search with the PDB-Explorer provides results comparable to those of much slower 3D-alignment algorithms, and automatically clusters proteins from the same superfamilies into tight groups.
Conclusion A chemical space classification of PDB based on molecular shape was obtained using a new atom-pair 3D-fingerprint for proteins and implemented in a web-based database exploration tool comprising an interactive color-coded map of the PDB chemical space and a nearest neighbor search tool. The PDB-Explorer website is freely available at www.cheminfo.org/pdbexplorer and represents an unprecedented opportunity to interactively visualize and explore the structural diversity of the PDB.
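The fingerprint described counts categorized atom pairs binned by through-space distance. A deliberately miniature sketch of that construction (2 atom categories and 4 distance bins giving 12 dimensions, instead of the real 136; the categories, bin edges, and coordinates are all illustrative assumptions):

```python
import math
from itertools import combinations

CATEGORIES = ("C", "N")          # hypothetical atom categories
BINS = (2.0, 4.0, 8.0, 16.0)     # illustrative upper bin edges, in Angstroms

def pair_index(a, b):
    """Map an unordered category pair to 0..2 (CC=0, CN=1, NN=2)."""
    i, j = sorted((CATEGORIES.index(a), CATEGORIES.index(b)))
    return i * len(CATEGORIES) - i * (i - 1) // 2 + (j - i)

def fingerprint(atoms):
    """atoms: list of (category, (x, y, z)). Returns a flat count vector of
    length (category pairs) x (distance bins): here 3 x 4 = 12."""
    fp = [0] * (3 * len(BINS))
    for (ca, pa), (cb, pb) in combinations(atoms, 2):
        dist = math.dist(pa, pb)
        for b, edge in enumerate(BINS):
            if dist <= edge:   # count the pair in its first matching bin
                fp[pair_index(ca, cb) * len(BINS) + b] += 1
                break
    return fp

fp = fingerprint([("C", (0.0, 0.0, 0.0)),
                  ("C", (1.5, 0.0, 0.0)),
                  ("N", (0.0, 3.0, 0.0))])
```

Two structures can then be compared by a vector distance between their fingerprints, which is what makes the nearest-neighbor search essentially instantaneous compared to 3D alignment.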
Abstract:
AIM Virtual patients (VPs) are a unique e-learning resource, fostering clinical reasoning skills through clinical case examples. Their combination with face-to-face teaching, referred to as "blended learning", is important for their successful integration. So far, little is known about the use of VPs in continuing medical education and residency training. The pilot study presented here examined the application of VPs in the framework of a pediatric residency revision course. METHODS Around 200 participants of a pediatric nephrology lecture ('nephrotic and nephritic syndrome in children') were offered two VPs as a wrap-up session at the revision course of the German Society for Pediatrics and Adolescent Medicine (DGKJ) 2009 in Heidelberg, Germany. Using a web-based survey form, different aspects were evaluated concerning the learning experiences with VPs, the combination with the lecture, and the use of VPs for residency training in general. RESULTS N=40 evaluable survey forms were returned (approximately 21%). The return rate was impaired by a technical problem with the local Wi-Fi firewall. The participants perceived working through the VPs as a worthwhile learning experience that prepared them well for diagnosing and treating real patients with similar complaints. Case presentations, interactivity, and location- and time-independent repetitive practice were particularly highlighted. When asked about the use of VPs for residency training in general, there was a distinct demand for more such offerings. CONCLUSION VPs may reasonably complement existing learning activities in residency training.
Abstract:
Beginning in the late 1980s, lobster (Homarus americanus) landings for the state of Maine and the Bay of Fundy increased to levels more than three times their previous 20-year means. Reduced predation may have permitted the expansion of lobsters into previously inhospitable territory, but we argue that in this region the spatial patterns of recruitment and the abundance of lobsters are substantially driven by events governing the earliest life history stages, including the abundance and distribution of planktonic stages and their initial settlement as Young-of-Year (YOY) lobsters. Settlement densities appear to be strongly driven by abundance of the pelagic postlarvae. Postlarvae and YOY show large-scale spatial patterns commensurate with coastal circulation, but also multi-year trends in abundance and abrupt shifts in abundance and spatial patterns that signal strong environmental forcing. The extent of the coastal shelf that defines the initial settlement grounds for lobsters is important to future population modeling. We address one part of this definition by examining patterns of settlement with depth, and discuss a modeling framework for the full life history of lobsters in the Gulf of Maine.
Abstract:
We report down-core sedimentary Nd isotope (εNd) records from two South Atlantic sediment cores, MD02-2594 and GeoB3603-2, located on the western South African continental margin. The core sites are positioned downstream of the present-day flow path of North Atlantic Deep Water (NADW) and close to the Southern Ocean, which makes them suitable for reconstructing past variability in NADW circulation over the last glacial cycle. The Fe-Mn leachate εNd records show a coherent decreasing trend from glacial radiogenic values towards less radiogenic values during the Holocene. This trend is confirmed by εNd in fish debris and mixed planktonic foraminifera, albeit with an offset during the Holocene to lower values relative to the leachates, matching the present-day composition of NADW in the Cape Basin. We interpret the εNd changes as reflecting the glacial shoaling of Southern Ocean waters to shallower depths combined with the admixing of southward flowing Northern Component Water (NCW). A compilation of Atlantic εNd records reveals increasing radiogenic isotope signatures towards the south and with increasing depth. This signal is most prominent during the Last Glacial Maximum (LGM) and of similar amplitude across the Atlantic basin, suggesting continuous deep water production in the North Atlantic and export to the South Atlantic and the Southern Ocean. The amplitude of the εNd change from the LGM to Holocene is largest in the southernmost cores, implying a greater sensitivity to the deglacial strengthening of NADW at these sites. This signal impacted most prominently the South Atlantic deep and bottom water layers that were particularly deprived of NCW during the LGM. The εNd variations correlate with changes in 231Pa/230Th ratios and benthic δ13C across the deglacial transition. Together with the contrasting 231Pa/230Th-εNd pattern of the North and South Atlantic, this indicates a progressive reorganization of the AMOC to full strength during the Holocene.
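The εNd notation used throughout is the standard parts-per-10^4 deviation of a measured 143Nd/144Nd ratio from the CHUR reference value. A one-line computation makes the convention concrete (the CHUR constant 0.512638 is the commonly used reference; the sample ratio below is illustrative, not from these cores):

```python
# CHUR (Chondritic Uniform Reservoir) reference 143Nd/144Nd ratio
CHUR_143ND_144ND = 0.512638

def epsilon_nd(ratio_143_144):
    """epsilon-Nd: deviation from CHUR in parts per 10^4."""
    return (ratio_143_144 / CHUR_143ND_144ND - 1.0) * 1e4

# a ratio below CHUR gives a negative (less radiogenic) epsilon-Nd
sample = epsilon_nd(0.511946)
```

More radiogenic water masses thus carry higher (less negative) εNd, which is why the glacial-to-Holocene decrease at these sites records the growing admixture of NADW.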