995 results for Web as a Corpus


Relevance: 20.00%

Abstract:

A software system, recently developed by the authors for the efficient capture, editing, and delivery of audio-visual web lectures, was used to create a series of lectures for a first-year undergraduate course in Dynamics. These web lectures were developed to serve as an extra study resource for students attending lectures, not as a replacement. A questionnaire was produced to obtain feedback from students. The overall response was very favorable, and numerous requests were made for other lecturers to adopt this technology. Despite the students' approval of this added resource, there was no significant improvement in overall examination performance.

Relevance: 20.00%

Abstract:

A combination of experiments and non-linear finite element analyses is used to investigate the effect of offset web holes on the web crippling strength of cold-formed steel channel sections under the end-two-flange (ETF) loading condition; the cases of both flanges fastened and unfastened to the support are considered. The web holes are located at the mid-depth of the sections, with a horizontal clear distance maintained between the web holes and the near edge of the bearing plate. Finite element analysis results are compared against the laboratory test results, and good agreement was obtained in terms of both strength and failure modes. A parametric study was then undertaken to investigate the effect of both the position of the holes in the web and the cross-section sizes on the web crippling strength of the channel sections. It was demonstrated that the main factors influencing the web crippling strength are the ratio of the hole depth to the depth of the web, and the ratio of the distance from the edge of the bearing plate to the flat depth of the web. Design recommendations in the form of web crippling strength reduction factors are proposed in this study.
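The two governing ratios identified above lend themselves to a reduction-factor formulation. The sketch below is a hypothetical illustration only: the linear form, the coefficient values, and the function name are assumptions for demonstration, not the factors actually proposed in the study.

```python
def crippling_reduction_factor(a, h, x, alpha=0.95, beta=0.50, gamma=0.08):
    """Hypothetical web crippling strength reduction factor.

    a/h is the hole-depth to web-depth ratio and x/h is the
    bearing-edge distance to web flat-depth ratio named in the study.
    The linear form and coefficients here are illustrative only.
    """
    ratio_hole = a / h      # deeper holes reduce strength
    ratio_offset = x / h    # holes farther from the bearing reduce strength less
    r = alpha * (1 - beta * ratio_hole + gamma * ratio_offset)
    return min(r, 1.0)      # a section with a hole is never stronger than a solid web

# e.g. a 40 mm deep hole in a 100 mm web, 50 mm clear of the bearing plate
factor = crippling_reduction_factor(a=40, h=100, x=50)
```

Real design factors of this kind are calibrated against test and parametric-study data and bounded to the validated ranges of the ratios.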

Relevance: 20.00%

Abstract:

With the rapid expansion of the internet and the increasing demand on web servers, many techniques have been developed to overcome the hardware performance limitations of individual servers. Server mirroring is one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed and client access requests are distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored web servers. The architecture was designed following the NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
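The request-distribution step can be sketched with a simple round-robin policy, one of the simplest policies a reference architecture like this could instantiate. The class and server names below are invented for illustration and are not taken from the paper's architecture.

```python
import itertools

class MirroredServerBalancer:
    """Minimal sketch of distributing requests over mirrored servers.

    Round-robin hands each incoming request to the next mirror in turn,
    so the load evens out across identical servers.  Names are made up.
    """

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        server = next(self._cycle)  # pick the next mirror in rotation
        return server, request

balancer = MirroredServerBalancer(["mirror-a", "mirror-b", "mirror-c"])
targets = [balancer.route(f"GET /page{i}")[0] for i in range(6)]
# each mirror receives an equal share of the six requests
```

A production broker would also track server health and remove failed mirrors from the rotation; the reference architecture leaves such policy choices to the tailored product architectures.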

Relevance: 20.00%

Abstract:

A web service is a remote computational facility made available for general use over the internet. An orchestration is a multi-threaded computation which invokes remote services. In this paper, game theory is used to analyse the behaviour of orchestration evaluations when the underlying web services are unreliable. Uncertainty profiles are proposed as a means of defining bounds on the number of service failures that can be expected during an orchestration evaluation. An uncertainty profile describes a strategic situation that can be analysed using a zero-sum angel-daemon game with two competing players: an angel a, whose objective is to minimise damage to an orchestration, and a daemon d, who acts in a destructive fashion. An uncertainty profile is assessed using the value of its angel-daemon game. It is shown that uncertainty profiles form a partial order which is monotonic with respect to assessment.
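Assessing a situation by the value of a zero-sum game can be illustrated in pure strategies. The sketch below computes the two pure-strategy security levels of a small matrix game; the payoff entries are invented, and the paper's games may of course require mixed strategies, where the two levels coincide by the minimax theorem.

```python
def zero_sum_value_pure(payoff):
    """Pure-strategy security levels of a zero-sum matrix game.

    Rows are the angel's choices (maximising the payoff), columns the
    daemon's (minimising it).  The entries are illustrative scores,
    not data from the paper.
    """
    # the angel can guarantee at least the best of the row minima
    maximin = max(min(row) for row in payoff)
    # the daemon can guarantee at most the least of the column maxima
    minimax = min(max(row[j] for row in payoff)
                  for j in range(len(payoff[0])))
    return maximin, minimax  # equal iff a pure-strategy saddle point exists

# angel picks which services to protect, daemon picks which to fail
payoff = [[3, 1, 4],
          [2, 2, 2],
          [0, 5, 1]]
lo, hi = zero_sum_value_pure(payoff)
```

Here the game has no pure saddle point (the two levels differ), so its value lies strictly between the angel's and the daemon's pure guarantees.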

Relevance: 20.00%

Abstract:

Place-names are a fundamental concept in all academic collections: everything happens somewhere. Contemporary place-names are comprehensively represented in digital gazetteers and geospatial web services such as GeoNames. However, despite millions of pounds of investment by JISC and other agencies in historical online resources in recent years, there is currently no equivalent for historic place-names. This project will digitize the entire 86-volume corpus of the Survey of English Place-Names (SEPN), the ultimate authority on historic place-names in England, and make its 4 million forms available.

Relevance: 20.00%

Abstract:

We consider the behaviour of a set of services in a stressed web environment where performance patterns may be difficult to predict. In stressed environments the performance of some providers may degrade while the performance of others, with elastic resources, may improve. The allocation of web-based providers to users (brokering) is modelled by a strategic non-cooperative angel-daemon game with risk profiles. A risk profile specifies a bound on the number of unreliable service providers within an environment without identifying those providers by name. Risk profiles offer a means of analysing the behaviour of broker agents which allocate service providers to users. A Nash equilibrium is a fixed point of such a game in which no user can locally improve their choice of provider; thus, a Nash equilibrium is a viable solution to the provider/user allocation problem. Angel-daemon games provide a means of reasoning about stressed environments and offer the possibility of designing brokers using risk profiles and Nash equilibria.
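The equilibrium notion used above can be made concrete with a toy congestion model: a user's cost is the provider's latency multiplied by the number of users sharing that provider, and a profile is a pure Nash equilibrium when no user can lower their cost by switching alone. All names and numbers below are illustrative; the paper's brokering game is richer than this.

```python
def is_pure_nash(profile, latency, providers):
    """True if no user can strictly lower their cost by unilaterally
    switching provider.  Cost = provider latency x number of users
    sharing it -- a toy congestion model with made-up numbers."""

    def cost(user, choice, assignment):
        load = sum(1 for p in assignment.values() if p == choice)
        return latency[choice] * load

    for user, current in profile.items():
        here = cost(user, current, profile)
        for alt in providers:
            if alt == current:
                continue
            deviated = dict(profile, **{user: alt})  # user switches alone
            if cost(user, alt, deviated) < here:
                return False  # a profitable deviation exists
    return True

latency = {"p1": 1.0, "p2": 2.0}
# both users share the fast provider: each pays 1.0 * 2 = 2.0;
# moving alone to p2 would cost 2.0 * 1 = 2.0, so no strict gain
profile = {"u1": "p1", "u2": "p1"}
stable = is_pure_nash(profile, latency, ["p1", "p2"])
```

By contrast, both users crowding onto the slow provider is not stable: either one gains by moving to the empty fast provider.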

Relevance: 20.00%

Abstract:

Potentially inappropriate prescribing in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. In Ireland, 36% of those aged 70 years or over received at least one potentially inappropriate medication, with an associated expenditure of over €45 million. The main objective of this study is to determine the effectiveness and acceptability of a complex, multifaceted intervention in reducing the level of potentially inappropriate prescribing in primary care.

Relevance: 20.00%

Abstract:

Matching query interfaces is a crucial step in data integration across multiple Web databases. The problem is closely related to schema matching, which typically exploits different features of schemas; relying on any single feature is not sufficient. We propose an evidential approach to combining multiple matchers using the Dempster-Shafer theory of evidence. First, our approach views the match results of an individual matcher as a source of evidence that provides a level of confidence in the validity of each candidate attribute correspondence. Second, it combines multiple sources of evidence to obtain a combined mass function that represents the overall level of confidence, taking into account the match results of the different matchers. Our combination mechanism does not require weighting parameters, so no setting or tuning of such parameters is needed. Third, it selects the top k attribute correspondences for each source attribute from the target schema based on the combined mass function. Finally, it uses heuristics to resolve any conflicts between the attribute correspondences of different source attributes. Our experimental results show that our approach is highly accurate and effective.
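The core of the evidential step, Dempster's rule of combination, can be sketched as follows. The rule multiplies masses of intersecting focal elements and renormalises by the non-conflicting mass; the focal elements and mass values below are made-up examples, not the matchers or data from the paper.

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame.  Focal elements are frozensets of candidate attribute
    correspondences; masses must each sum to 1."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to disjoint elements
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {focal: w / norm for focal, w in combined.items()}

# two hypothetical matchers assign confidence to correspondences c1, c2
m_edit = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2"}): 0.4}
m_type = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.3,
          frozenset({"c1", "c2"}): 0.2}
m_comb = combine_dempster(m_edit, m_type)
```

Because both matchers lean towards c1, the combined mass concentrates on it, which is exactly the behaviour the top-k selection step then exploits.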

Relevance: 20.00%

Abstract:

Temporal dynamics and speaker characteristics are two important features of speech that distinguish speech from noise. In this paper, we propose a method to maximally extract these two features of speech for speech enhancement. We demonstrate that this can reduce the requirement for prior information about the noise, which can be difficult to estimate for fast-varying noise. Given noisy speech, the new approach estimates clean speech by recognizing long segments of the clean speech as whole units. In the recognition, clean speech sentences, taken from a speech corpus, are used as examples. Matching segments are identified between the noisy sentence and the corpus sentences. The estimate is formed by using the longest matching segments found in the corpus sentences. Longer speech segments as whole units contain more distinct dynamics and richer speaker characteristics, and can be identified more accurately from noise than shorter speech segments. Therefore, estimation based on the longest recognized segments increases the noise immunity and hence the estimation accuracy. The new approach consists of a statistical model to represent up to sentence-long temporal dynamics in the corpus speech, and an algorithm to identify the longest matching segments between the noisy sentence and the corpus sentences. The algorithm is made more robust to noise uncertainty by introducing missing-feature based noise compensation into the corpus sentences. Experiments have been conducted on the TIMIT database for speech enhancement from various types of nonstationary noise including song, music, and crosstalk speech. The new approach has shown improved performance over conventional enhancement algorithms in both objective and subjective evaluations.
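At its core, identifying the longest matching segment between the noisy sentence and a corpus sentence is a longest-common-substring problem over symbol sequences. The sketch below runs on characters standing in for quantised acoustic frames; the actual system matches spectral feature frames under a statistical model with noise compensation, not text.

```python
def longest_matching_segment(noisy, corpus):
    """Longest common contiguous segment between two symbol sequences,
    found by dynamic programming.  Characters stand in for quantised
    speech frames in this illustration."""
    best_len, best_end = 0, 0
    # prev[j] = length of the common run ending at noisy[i-1], corpus[j-1]
    prev = [0] * (len(corpus) + 1)
    for i in range(1, len(noisy) + 1):
        cur = [0] * (len(corpus) + 1)
        for j in range(1, len(corpus) + 1):
            if noisy[i - 1] == corpus[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the matching run
                if cur[j] > best_len:
                    best_len, best_end = cur[j], j
        prev = cur
    return corpus[best_end - best_len:best_end]

# symbols stand in for frames; X, Y, z, q play the role of noise
segment = longest_matching_segment("abXcdefYg", "zcdefq")
```

The longer the recovered segment, the more temporal dynamics and speaker character it carries, which is why the approach prefers the longest matches it can find in the corpus.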

Relevance: 20.00%

Abstract:

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for use of vision-based algorithms in the context of data extraction from web sites.
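One of the cues described, structural regularity, can be sketched as grouping rendered page items by a tag-path signature and keeping the dominant repeating group, since data records on a result page share a common structure while navigation bars and adverts tend to be one-offs. Everything below (function, paths, items) is an invented illustration, and it covers only this one cue; the prototype also uses visual and content similarity.

```python
from collections import defaultdict

def extract_records(items):
    """Toy record identification by structural regularity alone.

    items is a list of (tag_path, text) pairs for rendered page items.
    Signatures that repeat are candidate record templates; the largest
    repeating group is returned and singleton noise is discarded.
    """
    groups = defaultdict(list)
    for tag_path, text in items:
        groups[tag_path].append(text)
    # keep only structures that occur more than once on the page
    repeated = {sig: texts for sig, texts in groups.items() if len(texts) > 1}
    if not repeated:
        return []
    return max(repeated.values(), key=len)  # dominant repeating structure

items = [
    ("body/div#nav/ul/li", "Home"),            # navigation noise
    ("body/div#results/div.rec", "Record one"),
    ("body/div#results/div.rec", "Record two"),
    ("body/div#results/div.rec", "Record three"),
    ("body/div#ads/span", "Buy now!"),         # advert noise
]
records = extract_records(items)
```

On this toy page the three result rows share a signature and survive, while the single navigation item and advert are dropped, mirroring the noise-filtering behaviour the abstract describes.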