862 results for Web, Html 5, JavaScript, Dart, Structured Web Programming


Relevance:

50.00%

Publisher:

Abstract:

We present an empirical evaluation and comparison of two content extraction methods for HTML: absolute XPath expressions and relative XPath expressions. We argue that relative XPath expressions, although not widely used, should be preferred to absolute XPath expressions when extracting content from human-created Web documents. Our evaluation of robustness covers four thousand queries executed on several hundred webpages. We show that in referencing parts of real-world dynamic HTML documents, relative XPath expressions are on average significantly more robust than absolute ones.
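A minimal sketch (not from the paper) of the contrast being evaluated, using Python's lxml; the HTML snippet and both expressions are invented for illustration:

    from lxml import html

    page = html.fromstring("""
    <html><body>
      <div id="content">
        <h1 class="title">Example headline</h1>
      </div>
    </body></html>
    """)

    # Absolute expression: breaks as soon as any ancestor element is
    # inserted, removed, or reordered.
    absolute = page.xpath("/html/body/div[1]/h1")

    # Relative expression: anchored on a stable attribute, so it survives
    # most layout changes around the target element.
    relative = page.xpath("//h1[@class='title']")

    print(absolute[0].text, relative[0].text)

The robustness gap the paper measures follows from this difference in anchoring: an absolute expression encodes the entire ancestor chain, while a relative one encodes only a local, semantically meaningful landmark.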

Relevance:

50.00%

Publisher:

Abstract:

Rationale, aims and objectives: Patients with both cardiac disease and diabetes have poorer health outcomes than patients with only one chronic condition. While evidence indicates that internet-based interventions may improve health outcomes for patients with a chronic disease, there is no literature on internet programs specific to cardiac patients with comorbid diabetes. This study therefore aimed to develop a specific web-based program and then explore patients' perspectives on its usefulness. Methods: An interpretive approach was taken, using semi-structured interviews with a purposive sample of eligible patients with type 2 diabetes and a cardiac condition in a metropolitan hospital in Brisbane, Australia. Thematic analysis was undertaken to describe the perceived usefulness of the newly developed Heart2heart webpage. Results: Themes identified included confidence in hospital health professionals and reliance on doctors to manage conditions. Patients found the webpage useful for managing their conditions at home. Conclusions: The new Heart2heart webpage provided a positive and useful resource. Further research to determine the potential influence of this resource on patients' self-management behaviours is paramount. Implications for practice include using multimedia strategies to provide information to patients with comorbid cardiac disease and type 2 diabetes, and further developing and enhancing such strategies.

Relevance:

50.00%

Publisher:

Abstract:

Information available on company websites can help people navigate to the offices of groups and individuals within the company. Automatically retrieving this within-organisation spatial information is a challenging AI problem. This paper introduces a novel unsupervised pattern-based method to extract within-organisation spatial information by taking advantage of HTML structure patterns, together with a novel Conditional Random Fields (CRF) based method to identify different categories of within-organisation spatial information. The results show that the proposed method achieves high performance in terms of F-score, indicating that this purely syntactic method based on web search and an analysis of HTML structure is well suited to retrieving within-organisation spatial information.
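A minimal, hypothetical sketch of the pattern-based idea: staff directory pages often repeat the same HTML structure per person, so a single structural pattern can pull spatial fields out of every row. The markup, class names, and fields below are invented for illustration and do not reproduce the paper's patterns (nor its CRF categorisation step):

    from lxml import html

    page = html.fromstring("""
    <table>
      <tr><td class="name">A. Smith</td><td class="office">Room 214, Building B</td></tr>
      <tr><td class="name">J. Doe</td><td class="office">Room 101, Building A</td></tr>
    </table>
    """)

    # One structural pattern, applied to every repeated row.
    for row in page.xpath("//tr"):
        name = row.xpath("./td[@class='name']/text()")[0]
        office = row.xpath("./td[@class='office']/text()")[0]
        print(name, "->", office)  # candidate within-organisation spatial facts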

Relevance:

50.00%

Publisher:

Abstract:

This thesis examines the use of the contents of World Wide Web pages, in a corpus-like fashion, as linguistic research material. The World Wide Web contains many times more text than the largest existing traditional text corpora, so the web is likely to yield many occurrences of words and constructions that are rare in traditional corpora. Web pages can be used as material in two different ways: one can collect a random sample of web pages and build a stand-alone corpus from their contents, or one can use the entire World Wide Web as a corpus through web search engines. Web pages have been used as research material in many fields of linguistics, such as lexicographic research, the study of syntactic structures, pedagogical material, and the study of minority languages. Compared with traditional corpora, web pages have several disadvantageous properties that must be taken into account when they are used as material. Not all pages contain usable text, and pages are often in formats such as HTML, so they must be converted into a form that is easier to process. Web pages contain more linguistic errors than traditional corpora, and their text types and topics are more numerous than those of traditional corpora. Collecting material from web pages requires efficient software tools. The most common of these are commercial web search engines, which provide fast access to a large number of different pages. In addition, tools developed specifically for linguistic needs can be used. This thesis presents the software tools WebCorp, WebAsCorpus.org, BootCaT, and the Web as Corpus Toolkit, which allow material to be retrieved from web pages specifically for linguistic purposes.
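The HTML-cleaning step the thesis mentions can be illustrated with a short Python sketch: before a web page can serve as corpus material, markup and non-linguistic content must be stripped. This uses the BeautifulSoup library; the sample markup is invented:

    from bs4 import BeautifulSoup

    raw = "<html><body><script>var x=1;</script><p>Corpus linguistics at web scale.</p></body></html>"
    soup = BeautifulSoup(raw, "html.parser")

    # Drop script/style elements, which contain no corpus-usable text.
    for tag in soup(["script", "style"]):
        tag.decompose()

    text = " ".join(soup.get_text(separator=" ").split())
    print(text)  # -> "Corpus linguistics at web scale."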

Relevance:

50.00%

Publisher:

Abstract:

Background: Psychotic-like experiences (PLEs) are subclinical delusional ideas and perceptual disturbances that have been associated with a range of adverse mental health outcomes. This study reports a qualitative and quantitative analysis of the acceptability, usability and short-term outcomes of Get Real, a web program for PLEs in young people. Methods: Participants were twelve respondents to an online survey who reported at least one PLE in the previous 3 months and were currently distressed. Ratings of the program were collected after participants trialled it for a month. Individual semi-structured interviews then elicited qualitative feedback, which was analyzed using Consensual Qualitative Research (CQR) methodology. PLEs and distress were reassessed at 3 months post-baseline. Results: User ratings supported the program's acceptability, usability and perceived utility. Significant reductions in the number and frequency of PLEs, and in the severity of related distress, were found at 3-month follow-up. The CQR analysis identified four qualitative domains: initial and current understandings of PLEs, responses to the program, and context of its use. Initial understanding involved emotional reactions, avoidance or minimization, limited coping skills and non-psychotic attributions. After using the program, participants saw PLEs as normal and common, had greater self-awareness and understanding of stress, and reported increased capacity to cope and accept experiences. Positive responses to the program focused on its normalization of PLEs, the usefulness of its strategies, self-monitoring of mood, and information putting PLEs into perspective. Some respondents wanted more specific and individualized information, thought the program would be more useful for other audiences, or doubted its effectiveness. The program was mostly used in low-stress situations. Conclusions: The current study provided initial support for the acceptability, utility and positive short-term outcomes of Get Real. The program now requires efficacy testing in randomized controlled trials.

Relevance:

50.00%

Publisher:

Abstract:

Peanut (Arachis hypogaea L.) is an economically important legume crop in irrigated production areas of northern Australia. Although the potential pod yield of the crop in these areas is about 8 t ha-1, most growers generally obtain around 5 t ha-1, partly due to poor irrigation management. Better information and tools that are easy to use, accurate, and cost-effective are therefore needed to help local peanut growers improve irrigation management. This paper introduces a new web-based decision support system called AQUAMAN that was developed to assist Australian peanut growers in scheduling irrigations. It simulates the timing and depth of future irrigations by combining procedures from the Food and Agriculture Organization (FAO) guidelines for irrigation scheduling (FAO-56) with those of the Agricultural Production Systems Simulator (APSIM) modeling framework. Here, we present a description of AQUAMAN and the results of a series of activities (i.e., extension activities, case studies, and a survey) that were conducted to assess its level of acceptance among Australian peanut growers, obtain feedback for future improvements, and evaluate its performance. Application of the tool for scheduling irrigations of commercial peanut farms since its release in 2004-2005 has shown good acceptance by local peanut growers and potential for significantly improving yield. A limited comparison with the farmer practice of matching pan evaporation demand during rain-free periods in 2006-2007 and 2008-2009 suggested that AQUAMAN enabled irrigation water savings of up to 50% and the realization of enhanced water and irrigation use efficiencies.
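The FAO-56 procedure AQUAMAN builds on can be illustrated with a highly simplified root-zone water balance: deplete the root zone by crop evapotranspiration (ETc = Kc x ET0), refill it with rain or irrigation, and trigger irrigation once depletion exceeds the readily available water (RAW). This sketch is not AQUAMAN itself, and every value below (TAW, p, Kc, the weather series, the carried-over depletion) is invented for illustration:

    TAW = 120.0      # total available water in the root zone (mm), illustrative
    p = 0.5          # allowable depletion fraction before crop stress
    RAW = p * TAW    # readily available water (mm)
    Kc = 1.05        # mid-season crop coefficient (illustrative value)

    depletion = 50.0  # carry-over depletion from earlier days (illustrative)
    weather = [(6.0, 0.0), (7.2, 0.0), (5.8, 12.0), (6.5, 0.0)]  # (ET0 mm, rain mm)

    for day, (et0, rain) in enumerate(weather, 1):
        etc = Kc * et0                            # crop evapotranspiration (mm/day)
        depletion = max(depletion + etc - rain, 0.0)
        if depletion >= RAW:                      # stress threshold reached
            print(f"Day {day}: irrigate {depletion:.1f} mm")
            depletion = 0.0                       # assume refill to field capacity

In the real system, APSIM supplies the crop and soil dynamics that this toy balance grossly simplifies.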

Relevance:

50.00%

Publisher:

Abstract:

In lake ecosystems, both fish and invertebrate predators have dramatic effects on their prey communities. Fish predation selects for large cladocerans, while invertebrate predators prefer smaller prey. Since invertebrate predators are preferred food items for fish, their occurrence at high densities is often connected with the absence or low numbers of fish. It is generally believed that invertebrate predators can play a significant role only if the density of planktivorous fish is low. However, in eutrophic, clay-turbid Lake Hiidenvesi (southern Finland), a dense population of predatory Chaoborus flavicans larvae coexists with an abundant fish population. The population covers the stratifying area of the lake and attains a maximum density of 23,000 ind. m-2. This thesis aims to clarify the effects of Chaoborus flavicans on the zooplankton community and the environmental factors facilitating the coexistence of fish and invertebrate predators. In the stratifying area of Lake Hiidenvesi, the seasonal succession of cladocerans was exceptional: the spring biomass peak of cladocerans was missing and the highest biomass occurred in midsummer. In early summer, the consumption rate by chaoborids clearly exceeded the production rate of cladocerans, and each year the biomass peak of cladocerans coincided with the minimum chaoborid density. In contrast, consumption by fish was very low, and in each study year cladocerans attained maximum biomass simultaneously with the highest consumption by smelt (Osmerus eperlanus). The results indicated that Chaoborus flavicans was the main predator of cladocerans in the stratifying area of Lake Hiidenvesi. The clay turbidity strongly contributed to the coexistence of chaoborids and smelt at high densities. Turbidity exceeding 30 NTU combined with light intensity below 0.1 μE m-2 s-1 provides an efficient daytime refuge for chaoborids, but turbidity alone is not an adequate refuge unless combined with low light intensity. In the non-stratifying shallow basins of Lake Hiidenvesi, light intensity exceeds this level during summer days at the bottom of the lake, preventing Chaoborus from forming a dense population in the shallow parts of the lake. Chaoborus can be successful particularly in deep, clay-turbid lakes, where they can remain high in the water column close to their epilimnetic prey. Suspended clay alters the trophic interactions by weakening the link between fish and Chaoborus, which in turn strengthens the effect of Chaoborus predation on crustacean zooplankton. Since food web management relies largely on manipulations of fish stocks and the cascading effects of such actions, the validity of the method in deep, clay-turbid lakes may be questioned.

Relevance:

50.00%

Publisher:

Abstract:

The identification of sequence (amino acid or nucleotide) motifs occurring in a particular order in biological sequences has proved to be of interest. This paper describes a computing server, SSMBS, which can locate and display the occurrences of user-defined biologically important sequence motifs (a maximum of five) present in a specific order in protein and nucleotide sequences. While the server can efficiently locate motifs specified using regular expressions, it can also find occurrences of long and complex motifs. The computation is carried out by an algorithm developed using the concepts of quantifiers in regular expressions. The web server is available to users around the clock at http://dicsoft1.physics.iisc.ernet.in/ssmbs/.
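The core idea can be illustrated in a few lines of Python: ordered occurrences of several motifs can be found with a single regular expression whose lazy quantifiers (.*?) enforce the order while allowing arbitrary gaps. The motifs and the sequence below are invented for illustration, not taken from the paper:

    import re

    motifs = ["G.GKT", "DE[AG]D", "HRIGR"]          # user-defined motifs, in order
    pattern = ".*?".join(f"({m})" for m in motifs)  # motif1 ... motif2 ... motif3

    seq = "MSGLGKGKTAAPLDEADKLMHRIGRTGR"
    m = re.search(pattern, seq)
    if m:
        for i, motif in enumerate(motifs, 1):
            print(f"motif {i} ({motif}) at {m.start(i)}: {m.group(i)}")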

Relevance:

50.00%

Publisher:

Abstract:

Business processes and application functionality are becoming available as internal web services inside enterprise boundaries, as well as commercial web services from enterprise solution vendors and web services marketplaces. Typically, multiple web service providers offer services capable of fulfilling a particular functionality, although with different Quality of Service (QoS). Dynamic creation of business processes requires composing an appropriate set of web services that best suits the current need. This paper presents a novel combinatorial auction approach to QoS-aware dynamic web services composition. Such an approach enables not only stand-alone web services but also composite web services to be part of a business process. The combinatorial auction leads to an integer programming formulation of the web services composition problem. An important feature of the model is the incorporation of service level agreements. We describe a software tool, QWESC, for QoS-aware web services composition based on the proposed approach.
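A toy sketch of the combinatorial-auction reading of the problem: each provider bids a bundle of process tasks with a QoS utility, and an integer program selects bids so that every task is covered exactly once at maximum total utility. This is a simplified set-partitioning model, not the paper's actual formulation (which also encodes service level agreements); the bundles and utilities are invented, and the PuLP package is assumed to be installed:

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    tasks = ["search", "book", "pay"]
    bids = {                      # bid -> (bundle of tasks, QoS utility)
        "b1": ({"search"}, 4.0),
        "b2": ({"book", "pay"}, 7.0),
        "b3": ({"search", "book"}, 6.0),
        "b4": ({"pay"}, 3.0),
    }

    prob = LpProblem("ws_composition", LpMaximize)
    x = {b: LpVariable(b, cat=LpBinary) for b in bids}

    prob += lpSum(u * x[b] for b, (_, u) in bids.items())        # total QoS utility
    for t in tasks:                                              # cover each task once
        prob += lpSum(x[b] for b, (s, _) in bids.items() if t in s) == 1

    prob.solve()
    print([b for b in bids if x[b].value() == 1])  # -> ['b1', 'b2']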

Relevance:

50.00%

Publisher:

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of a user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services that he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of the systems use cryptography to secure the names used for content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
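The "cryptographically verifiable reference" idea can be illustrated minimally: name a data item by the hash of its bytes, so that any copy fetched from any replica can be checked against its own name. This sketch shows only the concept; Peerscape's actual scheme also involves signatures and access control, which are omitted here:

    import hashlib

    def make_ref(data: bytes) -> str:
        # The item's name IS a digest of its content.
        return hashlib.sha256(data).hexdigest()

    def verify(ref: str, data: bytes) -> bool:
        # Any replica, anywhere, can re-check integrity against the name.
        return hashlib.sha256(data).hexdigest() == ref

    photo = b"...album bytes..."
    ref = make_ref(photo)
    assert verify(ref, photo)            # genuine copy accepted
    assert not verify(ref, b"tampered")  # forged copy rejected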

Relevance:

50.00%

Publisher:

Abstract:

Encoding protein 3D structures into 1D strings using short structural prototypes, or structural alphabets, opens a new front for structure comparison and analysis. Using the well-documented 16 motifs of Protein Blocks (PBs) as a structural alphabet, we have developed a methodology to compare protein structures encoded as sequences of PBs by aligning them using dynamic programming with a substitution matrix for PBs. This methodology is implemented in the applications available on the Protein Block Expert (PBE) server. PBE addresses common issues in the field of protein structure analysis, such as the comparison of protein structures and the identification of protein structures in structural databanks that resemble a given structure. PBE-T provides a facility to transform any PDB file into sequences of PBs. PBE-ALIGNc performs comparison of two protein structures based on the alignment of their corresponding PB sequences. PBE-ALIGNm is a facility for mining the SCOP database for similar structures based on the alignment of PBs. In addition, PBE provides an interface to a database (PBE-SAdb) of preprocessed PB sequences from SCOP culled at 95%, and of all-against-all pairwise PB alignments at the family and superfamily levels. The PBE server is freely available at http://bioinformatics.univ-reunion.fr/PBE/.
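A compact sketch of the alignment step PBE performs: global dynamic programming (Needleman-Wunsch) over two PB strings scored by a substitution matrix. The toy scoring function and gap penalty below stand in for the actual PB substitution matrix the server uses:

    def align_score(a: str, b: str, gap: int = -2) -> int:
        def sub(x, y):               # stand-in for the PB substitution matrix
            return 2 if x == y else -1

        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap       # leading gaps in b
        for j in range(1, m + 1):
            dp[0][j] = j * gap       # leading gaps in a
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                dp[i][j] = max(dp[i-1][j-1] + sub(a[i-1], b[j-1]),  # match/mismatch
                               dp[i-1][j] + gap,                    # gap in b
                               dp[i][j-1] + gap)                    # gap in a
        return dp[n][m]

    # PB sequences use the 16-letter alphabet a..p.
    print(align_score("mmmnopacd", "mmmnopbcd"))  # -> 15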

Relevance:

50.00%

Publisher:

Abstract:

Owing to high evolutionary divergence, it is not always possible to identify distantly related protein domains by sequence search techniques. Intermediate sequences possess sequence features of more than one protein and facilitate the detection of remotely related proteins. We have recently demonstrated the use of Cascade PSI-BLAST, in which we perform PSI-BLAST for many 'generations', initiating searches from new homologues as well. Such a rigorous propagation through generations of PSI-BLAST effectively exploits the role of intermediates in detecting distant similarities between proteins. This approach has been tested on a large number of folds, and its performance in detecting superfamily-level relationships is ~35% better than that of simple PSI-BLAST searches. We present a web server for this search method that permits users to perform Cascade PSI-BLAST searches against the Pfam, SCOP and SwissProt databases. The URL for this server is http://crick.mbu.iisc.ernet.in/~CASCADE/CascadeBlast.html.
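A schematic sketch of the cascade idea: keep launching new searches seeded from every newly found homologue until no new hits appear. Here `psiblast_search` is a hypothetical stand-in for an actual PSI-BLAST invocation (e.g. via the NCBI command-line tools), not a real API:

    def cascade(seed, psiblast_search):
        found = {seed}
        frontier = [seed]
        while frontier:                      # each round is one "generation"
            next_frontier = []
            for query in frontier:
                for hit in psiblast_search(query):
                    if hit not in found:     # a new homologue seeds a new search
                        found.add(hit)
                        next_frontier.append(hit)
            frontier = next_frontier
        return found - {seed}

The gain over a single PSI-BLAST run comes from the frontier: a hit that is only moderately similar to the seed can itself retrieve relatives the seed would never reach directly.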

Relevance:

50.00%

Publisher:

Abstract:

The method of structured programming, or program development using a top-down, stepwise refinement technique, provides a systematic approach to the development of programs of considerable complexity. The aim of this paper is to present the philosophy of structured programming through a case study of a nonnumeric programming task. The problem of converting a well-formed formula in first-order logic into prenex normal form is considered. The program has been coded in the programming language PASCAL and implemented on a DEC-10 system. The program has about 500 lines of code and comprises 11 procedures.
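A small modern illustration of the paper's task (the original program is in PASCAL; this sketch is not it): pulling quantifiers to the front of a formula. It assumes the input is already in negation normal form with uniquely named bound variables, which sidesteps the variable renaming the full algorithm must perform:

    def prenex(f):
        """f is a nested tuple: ('forall', x, body), ('exists', x, body),
        ('and'/'or', left, right), or an atom given as a string."""
        if isinstance(f, str):                   # atom (or negated atom)
            return [], f
        op = f[0]
        if op in ("forall", "exists"):
            prefix, matrix = prenex(f[2])        # hoist this quantifier outward
            return [(op, f[1])] + prefix, matrix
        lp, lm = prenex(f[1])
        rp, rm = prenex(f[2])
        return lp + rp, (op, lm, rm)             # concatenate quantifier prefixes

    # (forall x. P(x)) and (exists y. Q(y))
    #   ->  forall x. exists y. (P(x) and Q(y))
    print(prenex(("and", ("forall", "x", "P(x)"), ("exists", "y", "Q(y)"))))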

Relevance:

50.00%

Publisher:

Abstract:

Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small-molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small-molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight < 1.5 kDa). The prediction accuracy of DEPTH is comparable to that of other geometry-based prediction methods, including LIGSITE, SURFNET and Pocket-Finder (all with a Matthews correlation coefficient of ~0.4), over a testing set of 225 single- and multi-chain protein structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. Users can also tune the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analysis based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small-molecule docking exercises, in the context of protein functional annotation and drug discovery.
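A crude numeric sketch of the depth notion, not DEPTH's actual algorithm: approximate an atom's depth as its distance to the nearest solvent-exposed atom, where exposure is loosely inferred from a low neighbour count. The coordinates, cutoffs and percentile below are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    coords = rng.normal(size=(200, 3)) * 10.0        # fake atom coordinates (Å)

    dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    neighbours = (dists < 6.0).sum(axis=1) - 1       # atoms within 6 Å, minus self
    surface = neighbours < np.percentile(neighbours, 30)  # sparsely packed ~ exposed

    depth = dists[:, surface].min(axis=1)            # distance to nearest surface atom
    print(f"mean approx depth: {depth.mean():.2f} Å")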