977 results for metrics


Relevance: 10.00%

Abstract:

Equity experts agree with research findings that the metrics for measuring socioeconomic status (SES) are problematic, but they disagree about whether it really matters.

Relevance: 10.00%

Abstract:

Background: Current approaches to predicting protein functions from a protein-protein interaction (PPI) dataset are based on the assumption that the known functions of proteins (annotated proteins) determine the functions of proteins whose functions are currently unknown (un-annotated proteins). Protein function prediction is therefore treated as a one-directional, one-off procedure, from annotated proteins to un-annotated proteins. However, interactions between proteins are mutual rather than static and one-directional, even though the functions of some proteins are currently unknown. This means that when a similarity-based approach is used to predict the functions of un-annotated proteins, those proteins, once their functions are predicted, affect the similarities between proteins, which in turn affect the prediction results. In other words, function prediction is a dynamic and mutual procedure. This dynamic feature of protein interactions, however, is not considered by existing prediction algorithms.

Results: In this paper, we propose a new approach that predicts protein functions iteratively. This iterative approach incorporates the dynamic and mutual features of PPIs, as well as the local and global semantic influence of protein functions, into the prediction. To make iterative prediction possible, we propose a new protein similarity measure derived from protein functions. We adapt new evaluation metrics to assess the prediction quality of our algorithm and of similar algorithms. Experiments on real PPI datasets were conducted to evaluate the effectiveness of the proposed approach in predicting unknown protein functions.

Conclusions: The iterative approach is more likely to reflect the real biological relationships between proteins when predicting functions. A proper definition of protein similarity based on protein functions is the key to predicting functions iteratively. The evaluation results demonstrated that in most cases the iterative approach outperformed non-iterative ones, achieving higher prediction quality in terms of precision, recall and F-value.
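To make the iterative idea concrete, here is a minimal sketch (not the paper's algorithm): a toy PPI graph in which known annotations seed their neighbours, and newly predicted functions feed back into a simple function-derived similarity on later rounds, so a protein such as P4, whose only neighbour is initially un-annotated, can only be predicted after that neighbour has been. All protein names, GO terms, weights and thresholds are invented for illustration.

```python
def jaccard(a, b):
    """Function-set similarity, a simple stand-in for the paper's
    function-derived protein similarity."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

ppi = {                                    # toy PPI adjacency list
    "P1": ["P2", "P3"],
    "P2": ["P1", "P3"],
    "P3": ["P1", "P2", "P4"],
    "P4": ["P3"],
}
functions = {                              # known annotations; P3, P4 are un-annotated
    "P1": {"GO:0001"},
    "P2": {"GO:0001", "GO:0002"},
    "P3": set(),
    "P4": set(),
}
annotated = {p for p, f in functions.items() if f}

for _ in range(5):                         # iterate until predictions stabilise
    updated = {}
    for p in functions:
        if p in annotated:                 # original annotations stay fixed
            continue
        scores = {}
        for q in ppi[p]:
            # edge weight combines the interaction with the current
            # function-derived similarity, so earlier predictions feed back
            w = 1.0 + jaccard(functions[p], functions[q])
            for f in functions[q]:
                scores[f] = scores.get(f, 0.0) + w
        updated[p] = {f for f, s in scores.items() if s >= 1.0}
    if all(updated[p] == functions[p] for p in updated):
        break
    functions.update(updated)

print(functions)   # P3 is predicted first; only then can P4 inherit functions from P3
```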

Relevance: 10.00%

Abstract:

In this paper, we focus on the 'reverse editing' problem in movie analysis, i.e., the extraction of film takes: the original camera shots that a film editor selects and arranges to produce a finished scene. The ability to disassemble final scenes and shots into takes is essential for nonlinear browsing, content annotation and the extraction of higher-order cinematic constructs from film. In this work, we investigate agglomerative hierarchical clustering methods, along with different similarity metrics and group distances, for this task, and demonstrate our findings on 10 movies.
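As a rough illustration of the clustering machinery involved (not the authors' pipeline), the following sketch groups shots into candidate takes with agglomerative hierarchical clustering from SciPy, using cosine dissimilarity between placeholder colour-histogram descriptors and average linkage as the group distance; the descriptors, threshold and parameter choices are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
prototypes = rng.random((3, 64))           # three underlying "takes" (toy descriptors)
# 12 shots: four noisy views of each take, standing in for per-shot colour histograms
shots = np.vstack([p + 0.05 * rng.random(64) for p in prototypes for _ in range(4)])

dists = pdist(shots, metric="cosine")        # pairwise shot dissimilarity
tree = linkage(dists, method="average")      # group distance: average linkage
takes = fcluster(tree, t=0.1, criterion="distance")  # cut the dendrogram

for shot_id, take_id in enumerate(takes):
    print(f"shot {shot_id:2d} -> take {take_id}")
```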

Relevance: 10.00%

Abstract:

In this paper, we investigate the potential of caching to improve QoS in the context of continuous media applications over wired best-effort networks. We propose a flexible caching scheme, called GD-Multi, for caching continuous media (CM) objects. An important novel feature of our scheme is that user or system administrator input is taken into account in determining the cost function. Based on the proposed flexible cost function, Multi, an improved Greedy Dual (GD) replacement algorithm, called GD-Multi (GDM), has been developed for layered multi-resolution multimedia streams. The proposed Multi function takes receiver feedback into account. We investigate the influence of parameters such as loss rate, jitter, delay and area in determining a proxy's cache contents so as to enhance the QoS perceived by clients. Simulation studies show improvement in QoS perceived at the clients in accordance with the supplied optimisation metrics. From an implementation perspective, the signalling requirements for carrying QoS feedback are minimal and fully compatible with existing RTSP-based Internet applications.
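The following is a minimal sketch of a Greedy Dual style replacement policy with a flexible, weighted cost function, in the spirit of (but not identical to) the GD-Multi scheme described above; the weights on loss, jitter and delay stand in for user- or administrator-supplied optimisation metrics, and all keys and numbers are illustrative.

```python
class GreedyDualCache:
    """Simplified Greedy Dual cache: each object holds a credit H; on eviction
    the minimum-H object is removed and its H inflates future credits."""

    def __init__(self, capacity, weights=(1.0, 1.0, 1.0)):
        self.capacity = capacity          # total size budget
        self.weights = weights            # (w_loss, w_jitter, w_delay)
        self.inflation = 0.0              # the classic Greedy Dual "L" value
        self.items = {}                   # key -> {"size": ..., "H": ...}

    def _cost(self, loss, jitter, delay):
        w_l, w_j, w_d = self.weights      # flexible, user-supplied weighting
        return w_l * loss + w_j * jitter + w_d * delay

    def _used(self):
        return sum(it["size"] for it in self.items.values())

    def access(self, key, size, loss, jitter, delay):
        if key in self.items:             # hit: restore the object's credit
            self.items[key]["H"] = self.inflation + self._cost(loss, jitter, delay)
            return "hit"
        while self._used() + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.items[k]["H"])
            self.inflation = self.items[victim]["H"]   # inflate by evicted credit
            del self.items[victim]
        self.items[key] = {"size": size,
                           "H": self.inflation + self._cost(loss, jitter, delay)}
        return "miss"

cache = GreedyDualCache(capacity=100)
print(cache.access("clip-1/base-layer", 40, loss=0.05, jitter=2.0, delay=80))
print(cache.access("clip-2/base-layer", 70, loss=0.01, jitter=1.0, delay=30))
```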

Relevance: 10.00%

Abstract:

Objectives: To establish whether evaluations of multifocal contact lens performance conducted at dispensing are representative of behavior after a moderate adaptation period.

Methods: Eighty-eight presbyopic subjects, across four clinical sites, wore each of four multifocal soft contact lenses (ACUVUE BIFOCAL, Focus Progressives, Proclear Multifocal, and SofLens Multifocal) for 4 days of daily wear. Comprehensive performance assessments were conducted at dispensing and after 4 days of wear and included the following objective metrics: LogMAR acuity (contrast, 90% and 10%; illumination, 250 and 10 cd/m²; distance, 6 m, 100 cm, and 40 cm), stereopsis (RANDOT), critical print size and maximum reading speed, and range of clear vision at near. Subjective assessments were made, with 100-point numerical rating scales, of comfort, ghosting (distance, near), visual quality (distance, intermediate, and near), and the appearance of haloes. At two sites, subjects (n = 39) also rated visual fluctuation (distance, intermediate, and near), facial recognition, and overall satisfaction.

Results: Among the objective variables, significant differences (paired t test, P<0.05) between dispensing and 4 days were found only for range of clear vision at near (2.9 ± 2.0 cm; mean difference ± standard deviation) and high contrast near acuity in low illumination (-0.013 ± 0.011 LogMAR). With the exception of insertion comfort, all subjective variables showed significant decrements over the same period. Overall satisfaction declined by an average of 10.9 ± 5.1 points.

Conclusions: Early assessment is relatively unrepresentative of performance later in multifocal contact lens wear. Acuity-based measures of vision remain substantially unchanged over the medium term, apparently because these metrics are insensitive indicators of performance compared with subjective alternatives.
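For readers unfamiliar with the statistical comparison used above, the sketch below runs a paired t test on simulated dispensing versus day-4 scores; the subject-level values are generated from the reported mean decline of 10.9 ± 5.1 points and are not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
dispensing = rng.normal(75, 10, size=88)             # placeholder ratings (0-100 scale)
day4 = dispensing - rng.normal(10.9, 5.1, size=88)   # simulate the reported decline

t_stat, p_value = ttest_rel(dispensing, day4)        # paired comparison, as in the study
mean_diff = (day4 - dispensing).mean()
print(f"mean change = {mean_diff:.1f} points, t = {t_stat:.2f}, p = {p_value:.3g}")
print("significant at 0.05" if p_value < 0.05 else "not significant")
```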

Relevance: 10.00%

Abstract:

In this paper, the effectiveness of three different operating strategies applied to the Fuzzy ARTMAP (FAM) neural network in pattern classification tasks is analyzed and compared. Three types of FAM, namely average FAM, voting FAM, and ordered FAM, are formed for experimentation. In average FAM, a pool of FAM networks is trained using random sequences of input patterns, and the performance metrics from the multiple networks are averaged. In voting FAM, predictions from a number of FAM networks are combined using a majority-voting scheme to reach a final output. In ordered FAM, a pre-processing procedure known as the ordering algorithm is employed to identify a fixed sequence of input patterns for training the FAM network. Three medical data sets are employed to evaluate the performance of these three types of FAM. The results are analyzed and compared with those from other learning systems. Bootstrapping has also been used to analyze and quantify the results statistically.
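A small sketch of the first two operating strategies, using placeholder per-network outputs rather than an actual Fuzzy ARTMAP implementation: average FAM averages performance metrics across networks trained on different input orders, while voting FAM combines their predictions by majority vote (ordered FAM would instead fix the training sequence before building a single network). The accuracies and predictions below are invented.

```python
import numpy as np

# one FAM network per random training order: average their accuracy metric
accuracies = np.array([0.91, 0.88, 0.93, 0.90, 0.89])
print("average FAM accuracy:", accuracies.mean())

# predictions of 5 networks on 4 test patterns (class labels 0/1)
preds = np.array([
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
])
# majority vote down each column (one column per test pattern)
votes = np.apply_along_axis(lambda col: np.bincount(col, minlength=2).argmax(), 0, preds)
print("voting FAM predictions:", votes)
```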

Relevance: 10.00%

Abstract:

eLearning suffers from a lack of face-to-face interaction and can deprive learners of the benefits of social interaction and comparison. In this paper we present the results of a study on the impact of social comparison. The study collected students' engagement with an eLearning tool, their attendance, and the grades they scored at specific milestones, and presented these metrics back to students as feedback using Kiviat charts. The charts were complemented with appropriate recommendations to allow students to adapt their study strategy and behaviour. The study spanned four semesters (two with and two without the Kiviat charts), and the results were analysed using paired t-tests comparing pre- and post-test results on topics covered by the eLearning tool. Survey questionnaires were also administered at the end for qualitative analysis. The results indicated that the Kiviat feedback with recommendations had a positive impact on learning outcomes and attitudes.
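A rough sketch of the kind of Kiviat (radar) chart used as feedback is shown below, plotting one student's metrics against a class average with matplotlib; the metric names and values are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["Tool engagement", "Attendance", "Milestone 1", "Milestone 2"]
student = [0.6, 0.9, 0.7, 0.5]             # hypothetical normalised metrics
class_avg = [0.7, 0.8, 0.65, 0.6]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]                       # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for values, name in [(student, "student"), (class_avg, "class average")]:
    vals = values + values[:1]
    ax.plot(angles, vals, label=name)
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.legend(loc="lower right")
plt.savefig("kiviat_feedback.png")         # the chart given to the student as feedback
```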

Relevance: 10.00%

Abstract:

Realizing value from IT investments continues to be a challenge for most healthcare organizations, and IT governance (ITG) is envisaged to solve many of these challenges. ITG is the practice that establishes an accountability framework for IT investments by allocating decision rights among the major participants involved in IT decision processes. As ITG is relatively new in the healthcare industry, knowledge about how healthcare organizations govern their IT decisions is limited. This research aims to extend this knowledge and to assist both researchers and professionals by providing insights into how IT decisions are made and governed in healthcare organizations (HOs). The research adopts a case-study methodology to investigate IT governance in two distinctly different HOs. The findings indicate that HOs implement ITG to achieve alignment between business objectives and IT. Both HOs set up a five-stage IT decision process to identify, evaluate and prioritize IT investment ideas, and established generic committee structures with clearly defined roles and decision authorities to govern this process. It is suggested here that ITG in HOs is heavily influenced by strategic priorities, organizational structure, governance experience and governmental initiatives. Effective ITG in HOs is challenged by IT alignment, government policy, the involvement of healthcare executives, and a lack of business metrics to justify and evaluate decisions. The research proposes recommendations to address these challenges.

Relevance: 10.00%

Abstract:

Multicast is an important mechanism in modern wireless networks and has attracted significant effort to improve its performance with respect to different metrics, including throughput, delay and energy efficiency. Traditionally, an idealized loss-free channel model has been widely used to facilitate routing protocol design. However, the quality of wireless links can be degraded or even jeopardized by many factors, such as collisions, fading or environmental noise. In this paper, we propose a reliable multicast protocol, called CodePipe, with advanced performance in terms of energy efficiency, throughput and fairness in lossy wireless networks. Built upon opportunistic routing and random linear network coding, CodePipe not only simplifies transmission coordination between nodes but also improves multicast throughput significantly by exploiting both intra-batch and inter-batch coding opportunities. In particular, four key techniques, namely an LP-based opportunistic routing structure, opportunistic feeding, fast batch moving and inter-batch coding, are proposed to offer substantial improvements in throughput, energy efficiency and fairness. We evaluate CodePipe in the ns-2 simulator by comparing it with two other state-of-the-art multicast protocols, MORE and Pacifier. Simulation results show that CodePipe significantly outperforms both of them.
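As a toy illustration of the intra-batch random linear network coding that CodePipe builds on (this is not CodePipe itself), the sketch below encodes a small batch of packets as random GF(2) combinations and recovers them by Gaussian elimination once enough independent combinations have been collected; the batch size, packet length and choice of GF(2) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
K, PKT_LEN = 4, 8                          # batch size, payload length in bits
batch = rng.integers(0, 2, size=(K, PKT_LEN), dtype=np.uint8)   # original packets

def encode(batch, n_coded):
    """Emit n_coded random linear combinations of the batch over GF(2)."""
    coeffs = rng.integers(0, 2, size=(n_coded, batch.shape[0]), dtype=np.uint8)
    return coeffs, (coeffs @ batch % 2).astype(np.uint8)

def decode(coeffs, coded, k):
    """Recover the batch by Gaussian elimination over GF(2); None if rank < k."""
    A = np.concatenate([coeffs, coded], axis=1) % 2   # augmented matrix [C | P]
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                    # not yet enough independent combinations
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]             # XOR is addition over GF(2)
        row += 1
    return A[:k, k:]

coeffs, coded = encode(batch, n_coded=K)
recovered = decode(coeffs, coded, K)
while recovered is None:                   # rateless: keep collecting combinations
    extra_c, extra_p = encode(batch, 1)
    coeffs = np.vstack([coeffs, extra_c])
    coded = np.vstack([coded, extra_p])
    recovered = decode(coeffs, coded, K)

print("decoded OK:", np.array_equal(recovered, batch))
```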

Relevance: 10.00%

Abstract:

Few studies document long-term colony-level metrics from colony establishment to maturity (equilibrium) and few test predictions of general models of colony development. We describe long-term trends in a colony of Australasian Gannets (Morus serrator) which has been monitored from an early stage in its development. The colony at Pope’s Eye, within Port Phillip Bay, Victoria, Australia was established in 1984 on an artificial structure and the first nest count (25 nests) was conducted in the same year. The colony was then studied for 15 of 19 years between 1988 and 2006–2007. During the study, 2,516 eggs were recorded, resulting in 1,694 chicks hatching (67 % of eggs), of which 1,310 (77 % of those hatched) fledged. At least 184 (14 %) of fledged offspring returned to Pope’s Eye as breeding adults. Since establishment, the number and density of nests increased (number of nests increased 8.8 % annually), with density increasing at varying rates in different areas of the colony. Early recruitment involved birds from a nearby colony, but within 5 years post establishment the first natal recruits were breeding at Pope’s Eye and thereafter natal recruitment was the main source of new breeding adults (totalling 81.4 % of all recruits). Age of recruitment varied throughout the study, though not systematically, and there was no difference between the sexes. The pattern of rapid initial growth is typical of patterns reported for other seabird colonies. However, as the colony (and birds within it) aged, there was no increase in breeding success and egg laying did not become earlier, as was expected from general models of colony development.

Relevance: 10.00%

Abstract:

Airport baggage handling systems (BHS) are a critical infrastructure component within major airports and are essential to ensure smooth luggage transfer while preventing dangerous material from being loaded onto aircraft. This paper proposes a standard set of measures to assess the expected performance of a baggage handling system through discrete event simulation. These evaluation methods also have application in the study of general network systems. Results from the application of these methods reveal operational characteristics of the studied BHS in terms of metrics such as peak throughput, in-system time and system recovery time.
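A minimal discrete event simulation sketch in the spirit of the evaluation above (not the paper's BHS model) is given below: bags arrive at a single screening machine, and the simulation records in-system time and hourly throughput using the SimPy library; the arrival and service rates are illustrative assumptions.

```python
import random
import simpy

random.seed(0)
in_system_times = []                       # per-bag time from arrival to clearing screening

def bag(env, screener):
    arrived = env.now
    with screener.request() as req:        # queue for the single screening machine
        yield req
        yield env.timeout(random.expovariate(1 / 8.0))    # ~8 s screening time (assumed)
    in_system_times.append(env.now - arrived)

def arrivals(env, screener):
    while True:
        yield env.timeout(random.expovariate(1 / 10.0))   # a bag every ~10 s on average (assumed)
        env.process(bag(env, screener))

env = simpy.Environment()
screener = simpy.Resource(env, capacity=1)
env.process(arrivals(env, screener))
env.run(until=3600)                        # one simulated hour

print(f"throughput: {len(in_system_times)} bags/hour")
print(f"mean in-system time: {sum(in_system_times) / len(in_system_times):.1f} s")
```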

Relevance: 10.00%

Abstract:

Multi-frame super-resolution algorithms aim to increase spatial resolution by fusing information from several low-resolution perspectives of a scene. While a wide array of super-resolution algorithms now exist, the comparative capability of these techniques in practical scenarios has not been adequately explored. In addition, a standard quantitative method for assessing the relative merit of super-resolution algorithms is required. This paper presents a comprehensive practical comparison of existing super-resolution techniques using a shared platform and 4 common greyscale reference images. In total, 13 different super-resolution algorithms are evaluated, and as accurate alignment is critical to the super-resolution process, 6 registration algorithms are also included in the analysis. Pixel-based visual information fidelity (VIFP) is selected from the 12 image quality metrics reviewed as the measure most suited to the appraisal of super-resolved images. Experimental results show that Bayesian super-resolution methods utilizing the simultaneous autoregressive (SAR) prior produce the highest quality images when combined with generalized stochastic Lucas-Kanade optical flow registration.
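As a toy illustration of the multi-frame principle being compared (it is none of the 13 algorithms in the study), the sketch below performs a simple shift-and-add fusion: four low-resolution frames with known sub-pixel offsets are placed back onto the high-resolution grid and averaged. Registration is assumed perfect here, which is precisely the step the 6 registration algorithms in the comparison must estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
hr = rng.random((64, 64))                  # "ground truth" high-resolution scene

offsets = [(0, 0), (0, 1), (1, 0), (1, 1)] # sub-pixel shifts, in HR pixels
lr_frames = [hr[dy::2, dx::2] for dy, dx in offsets]   # 2x-decimated low-res views

acc = np.zeros_like(hr)
count = np.zeros_like(hr)
for (dy, dx), frame in zip(offsets, lr_frames):
    acc[dy::2, dx::2] += frame             # place each LR sample on the HR grid
    count[dy::2, dx::2] += 1
sr = acc / np.maximum(count, 1)            # average where samples overlap

rmse = np.sqrt(np.mean((sr - hr) ** 2))
print(f"RMSE of fused estimate vs. ground truth: {rmse:.2e}")
```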

Relevance: 10.00%

Abstract:

Urban sustainability expresses how well a city is conserved while its inhabitants live in it and consume its urban resources. Measuring urban sustainability, however, depends on which indicators of conservation are considered important and on the consumption levels permitted under the adopted criteria. These criteria should include common factors shared by all the cities being evaluated, in this case Abu Dhabi, as well as specific factors related to geographic location, community and culture. This paper considers measures of urban sustainability specific to a Middle Eastern climate, community and culture, and examines where GIS vector and raster analysis add value to urban sustainability measurement and grading.

Scenarios were tested using various GIS data types to replicate the city's urban history (a ten-year period), its current status and its expected future, with factors set according to climate, community needs and culture. The vector and raster GIS data sets relevant to each scenario were selected and analysed to determine how, and how much, they can benefit urban sustainability ranking in quantitative and qualitative tests. This included assessing the appropriate data nature, type and format, the topology rules to be considered, the useful attributes to be added, and the relationships to be maintained between the data types of a geodatabase; specifying their usage in each scenario test; and assigning weights to each data type representing elements of a phenomenon related to an urban sustainability factor. Assessing the role of GIS analysis also yielded data collection specifications, such as the accuracy required for a given type of GIS analysis used in an urban sustainability ranking scenario.

This paper reports preliminary results of research testing a multidisciplinary evaluation of urban sustainability using different indicator metrics, with vector and raster GIS analysis as basic tools to support the evaluation and increase its reliability. A hypothetical implementation of the chosen evaluation model, represented by various scenarios, was then applied to the planned urban sustainability factors over a certain period of time to appraise the expected future grade of urban sustainability and to provide scenario-based advice for filling gaps and ensuring relatively high future sustainability. The results presented here concentrate on the elements of vector and raster GIS analysis that assist urban sustainability grading within the chosen model, and on the reliability of the spatial data collected, the analysis selected and the spatial information produced. The indicators chosen for the model include regional culture, climate and community needs; one example used is energy demand and consumption for cooling systems.

This factor is climate-related and region-specific, as temperatures in city areas vary around 30-45 degrees Celsius. 3D GIS building polygons were used to analyse building volumes; the 'building height' attribute was used to estimate the number of floors; and energy demand and consumption per unit volume were then calculated and compared, in scenarios, against possible sustainable energy supplies and different environmentally friendly cooling systems. The effects of cooling systems were calculated for an area unit of 1 sq. km, combined with the level of greenery and open space as represented by park polygons, tree polygons, empty areas, pedestrian polygons and road-surface polygons. Initial measures showed that cooling-system consumption can be reduced by around 15-20% with well-planned building distribution, proper spacing, and environmentally friendly products and building materials. Temperature levels, extracted from the thermal bands of satellite images at three times during the assessment period, were also incorporated into the scenario.

Other assessments of GIS analysis for urban sustainability included waste productivity and some greenhouse-gas effects, measured by the intensity of road polygons and their closeness to dwelling and industrial areas as defined from land-use/land-cover thematic maps produced from classified satellite images, from which vectors were created for use in the scenarios. City noise and light intensity were also investigated, as the region is experiencing rapid development and noise is magnified by construction activity and by proximity to airports and highways; the assessment examined the measures taken by urban planners to reduce or properly manage this degradation. Finally, tables are presented that combine the scenario results with the GIS data types, analysis types and level of GIS data reliability needed to measure a city's sustainability level in relation to cultural and regional demands.
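A small sketch of the kind of vector GIS calculation described above, using Shapely: building footprint polygons with a height attribute are used to estimate floors, built volume and a rough cooling demand. The floor height and the demand coefficient are hypothetical values, not figures from the study.

```python
from shapely.geometry import Polygon

# toy building footprints (coordinates in metres) with a height attribute
buildings = [
    {"footprint": Polygon([(0, 0), (30, 0), (30, 20), (0, 20)]), "height_m": 45.0},
    {"footprint": Polygon([(50, 0), (70, 0), (70, 25), (50, 25)]), "height_m": 18.0},
]

FLOOR_HEIGHT_M = 3.0                       # assumed storey height
COOLING_KWH_PER_M3 = 30.0                  # hypothetical annual cooling-demand coefficient

total_kwh = 0.0
for b in buildings:
    area = b["footprint"].area             # footprint area in m^2
    floors = round(b["height_m"] / FLOOR_HEIGHT_M)
    volume = area * b["height_m"]          # built volume in m^3
    demand = volume * COOLING_KWH_PER_M3
    total_kwh += demand
    print(f"floors={floors:2d}  volume={volume:8.0f} m^3  cooling~{demand:9.0f} kWh/yr")

print(f"total cooling demand ~ {total_kwh:.0f} kWh/yr "
      f"(a 15-20% saving would be {0.15 * total_kwh:.0f}-{0.20 * total_kwh:.0f} kWh/yr)")
```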

Relevance: 10.00%

Abstract:

As each user tends to rate only a small proportion of the available items, the resulting data sparsity issue poses significant challenges for recommender systems research. The issue becomes even more severe for neighborhood-based collaborative filtering methods, as even fewer ratings are available in the neighborhood of the query item. In this paper, we address the data sparsity issue in the context of neighborhood-based collaborative filtering. Given a (user, item) query, a set of key ratings is identified, and an auto-adaptive imputation method is proposed to fill the missing values in this set of key ratings. The proposed method can be used with any similarity metric, such as the Pearson correlation coefficient or cosine-based similarity, and it is theoretically guaranteed to outperform the neighborhood-based collaborative filtering approaches. Experimental results show that the proposed method significantly improves the accuracy of recommendations for neighborhood-based collaborative filtering algorithms.
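A minimal item-based collaborative filtering sketch makes the setting concrete; the mean fill-in used here is only a naive stand-in for the paper's auto-adaptive imputation of key ratings, and the toy rating matrix is invented.

```python
import numpy as np

R = np.array([                             # rows: users, cols: items, 0 = missing rating
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def pearson(a, b):
    """Pearson similarity between two item columns over co-rated entries."""
    mask = (a > 0) & (b > 0)
    if mask.sum() < 2:
        return 0.0
    x, y = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
    return float(x @ y / denom) if denom else 0.0

def predict(user, item, k=2):
    # naive imputation: fill the target user's missing ratings with their own mean
    filled = R[user].copy()
    filled[filled == 0] = filled[filled > 0].mean()
    sims = [(pearson(R[:, item], R[:, j]), j) for j in range(R.shape[1]) if j != item]
    sims = sorted(sims, reverse=True)[:k]  # k most similar neighbour items
    num = sum(s * filled[j] for s, j in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else filled.mean()

print(f"predicted rating for (user 1, item 2): {predict(1, 2):.2f}")
```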

Relevance: 10.00%

Abstract:

This paper presents a comparative evaluation of popular multi-label classification (MLC) methods on several multi-label problems from different domains. The methods include multi-label k-nearest neighbor, binary relevance, label power set, random k-label set ensemble learning, calibrated label ranking, hierarchy of multi-label classifiers and triple random ensemble multi-label classification algorithms. These multi-label learning algorithms are evaluated using several widely used MLC evaluation metrics. The evaluation results show that for each multi-label classification problem a particular MLC method can be recommended. The multi-label evaluation datasets used in this study relate to scene images, multimedia video frames, diagnostic medical reports, email messages, emotional music data, biological genes and multi-structural protein categorization.
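As a short illustration of the kind of evaluation involved, the sketch below computes a few widely used multi-label metrics with scikit-learn on a tiny invented label-indicator matrix rather than the study's datasets.

```python
import numpy as np
from sklearn.metrics import hamming_loss, f1_score, accuracy_score

# rows: instances, columns: labels (e.g. scene categories)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]])

print("Hamming loss :", hamming_loss(y_true, y_pred))
print("Micro F1     :", f1_score(y_true, y_pred, average="micro"))
print("Subset acc.  :", accuracy_score(y_true, y_pred))   # exact-match ratio
```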