885 results for Metric
Abstract:
Certain curvature properties and scalar invariants of the manifolds belonging to one of the main classes of almost contact manifolds with Norden metric are considered. An example illustrating the obtained results is given and studied.
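For readers unfamiliar with the terminology, the structure assumed in this entry is the usual almost contact structure with Norden (B-) metric; the identities below are the standard defining conditions, recalled here for context only and not quoted from the abstract itself.

```latex
% Standard defining identities of an almost contact manifold (M, \varphi, \xi, \eta, g)
% with Norden (B-) metric -- recalled for context only:
\varphi^{2}x = -x + \eta(x)\,\xi, \qquad \eta(\xi) = 1,
\qquad g(\varphi x, \varphi y) = -g(x,y) + \eta(x)\,\eta(y).
```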
Abstract:
Families of linear connections are constructed on almost contact manifolds with Norden metric. A connection analogous to the symmetric Yano connection is obtained on a normal almost contact manifold with Norden metric and closed structural 1-form. The curvature properties of this connection are studied on two basic classes of normal almost contact manifolds with Norden metric.
Abstract:
Marta Teofilova - An example of a four-dimensional special complex manifold with Norden metric and constant holomorphic sectional curvature is constructed by means of a two-parameter family of solvable Lie algebras. The curvature properties of the obtained manifold are studied. Necessary and sufficient conditions are given for the manifold under consideration to be isotropic Kählerian.
Abstract:
Vladimir Todorov, Petar Stoev - This note contains an elementary construction of a set with the properties stated in the title. We note in addition that the set obtained in this way remains totally disconnected even after it is extended by a finite number of elements.
Abstract:
Iva R. Dokuzova, Dimitar R. Razpopov - In the present paper we consider a class V of three-dimensional Riemannian manifolds M with a metric g and two affinor tensors q and S. Another metric ¯g on M is also defined. The local coordinates of all these tensors are circulant matrices. We obtain: 1) a relation between the curvature tensor R generated by g and the curvature tensor ¯R generated by ¯g; 2) an identity for the curvature tensor R in the case when the curvature tensor ¯R vanishes; 3) a relation between the sectional curvature of an arbitrary two-dimensional q-section {x, qx} and the scalar curvature of M.
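The abstract does not state the relation in item 3) explicitly; for reference, the sectional curvature of the two-dimensional q-section {x, qx} that it refers to is the usual Riemannian one (up to the choice of sign convention for R):

```latex
% Sectional curvature of the 2-plane spanned by x and qx (standard definition):
\mu(x, qx) = \frac{R(x, qx, qx, x)}{g(x,x)\,g(qx,qx) - g(x,qx)^{2}}.
```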
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
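The abstract summarises the incremental algorithm without giving its steps, so the sketch below only illustrates the general idea under stated assumptions: already-built index groups (clusters of object identifiers) are moved between virtual machines when the machine count changes, instead of re-clustering every indexed object. All names and the balancing heuristic are invented for illustration.

```python
# Illustrative sketch only -- not the paper's algorithm. Assumes machine ids 0..n-1
# and at least one machine after the resize.
from typing import Dict, List

Cluster = List[int]                         # object ids held by one metric-space cluster

def regroup(assignment: Dict[int, List[Cluster]], new_vm_count: int) -> Dict[int, List[Cluster]]:
    """Reassign existing clusters to `new_vm_count` machines, moving as little data as possible."""
    new_assignment: Dict[int, List[Cluster]] = {vm: [] for vm in range(new_vm_count)}
    # Clusters stay on their current machine whenever that machine survives the resize.
    for vm, clusters in assignment.items():
        if vm < new_vm_count:
            new_assignment[vm] = list(clusters)
    load = lambda vm: sum(len(c) for c in new_assignment[vm])
    # Clusters from removed machines go to whichever surviving machine is least loaded.
    orphaned = [c for vm, clusters in assignment.items() if vm >= new_vm_count for c in clusters]
    for cluster in orphaned:
        new_assignment[min(new_assignment, key=load)].append(cluster)
    # If machines were added, shift clusters from the most to the least loaded machine
    # until the gap is no larger than one cluster.
    while True:
        src, dst = max(new_assignment, key=load), min(new_assignment, key=load)
        if not new_assignment[src] or load(src) - load(dst) <= len(new_assignment[src][-1]):
            break
        new_assignment[dst].append(new_assignment[src].pop())
    return new_assignment
```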
Abstract:
Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, a full analytic model of a no-reference objective metric for video quality assessment, namely pause intensity (PI), is presented on the basis of statistical analysis. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses the need to assess quality in terms of the continuity of playout of TCP streaming video, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of its individual elements, such as pause duration and pause frequency, which jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
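The abstract describes the PI model only qualitatively; as a hedged illustration, the toy function below assumes PI can be read as the fraction of a session spent in rebuffering pauses, driven by the competition between TCP throughput and the playout rate. The function name, the slot-based buffer model and the sample numbers are all invented for illustration.

```python
# Hedged sketch: a slot-by-slot playout buffer model returning paused_time / total_time.
def pause_intensity(throughput_bps: list[float], playout_rate_bps: float,
                    startup_buffer_bits: float, slot_seconds: float = 1.0) -> float:
    """Simulate a playout buffer and return the fraction of time spent paused."""
    buffer_bits = startup_buffer_bits
    paused = 0.0
    total = 0.0
    for thr in throughput_bps:
        total += slot_seconds
        buffer_bits += thr * slot_seconds                     # bits arriving over TCP in this slot
        if buffer_bits >= playout_rate_bps * slot_seconds:
            buffer_bits -= playout_rate_bps * slot_seconds    # video plays for the whole slot
        else:
            paused += slot_seconds                            # buffer underrun: playout pauses
    return paused / total if total else 0.0

# Example: a 2 Mbps video over a link that oscillates around the playout rate.
print(pause_intensity([1.0e6, 0.5e6, 2.5e6, 0.5e6, 1.0e6], 2.0e6, 1.0e6))  # -> 0.4 (paused 2 of 5 s)
```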
Abstract:
2000 Mathematics Subject Classification: 54H25, 47H10.
Abstract:
2000 Mathematics Subject Classification: 35B40, 35L15.
Abstract:
In this paper a full analytic model for pause intensity (PI), a no-reference metric for video quality assessment, is presented. The model is built upon the video playout buffer behavior at the client side and also encompasses the characteristics of a TCP network. Video streaming via TCP produces impairments in playout continuity, which are not typically reflected in current objective metrics such as PSNR and SSIM. Recently, the buffer underrun frequency/probability has been used to characterize the buffer behavior and as a measurement for performance optimization. However, we show through subjective testing that underrun frequency cannot reflect viewers' quality of experience for TCP-based streaming. We also demonstrate that PI is a comprehensive metric made up of a combination of phenomena observed in the playout buffer. The analytical model in this work is verified with simulations carried out on ns-2, showing that the two sets of results are closely matched. The effectiveness of the PI metric has also been proved by subjective testing on a range of video clips, where PI values exhibit a good correlation with the viewers' opinion scores. © 2012 IEEE.
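Since the abstract argues that underrun frequency alone misses what viewers experience, the short sketch below contrasts a frequency-style count with an intensity-style measure on the same pause traces; the definitions used here are simplified assumptions, not the paper's analytic model.

```python
# Toy comparison (assumed definitions): pause frequency counts buffering events per unit
# time, while an intensity-style metric also weighs how long each pause lasts, so two
# sessions with the same number of pauses can differ sharply in perceived quality.
def pause_frequency(pauses: list[float], session_seconds: float) -> float:
    return len(pauses) / session_seconds                    # events per second

def pause_intensity(pauses: list[float], session_seconds: float) -> float:
    return sum(pauses) / session_seconds                    # fraction of time paused

session = 120.0                                             # two-minute clip
short_stalls = [0.5, 0.5, 0.5]                              # three brief stalls
long_stalls = [4.0, 4.0, 4.0]                               # three long stalls

for name, stalls in (("short", short_stalls), ("long", long_stalls)):
    print(name, pause_frequency(stalls, session), pause_intensity(stalls, session))
# Both traces have the same frequency (0.025 events/s), but the long-stall trace has an
# eight-times-higher intensity (0.1 vs 0.0125), matching the point made in the abstract.
```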
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computing resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up the index computation 2–10 times while maintaining a similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. Combined, these algorithms yield an adaptive algorithm that is able to adjust the search engine size under a variable workload.
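The deterministic CNP algorithm is not described in the abstract; as a hedged illustration, the sketch below shows one plausible load-based sizing rule with hysteresis, so that brief dips in the query rate do not trigger a costly regrouping of the index. Every name and threshold here is an assumption.

```python
# Illustrative only: a simple capacity rule with hysteresis for choosing how many
# processors to keep active from the observed query rate. All thresholds are invented.
import math

def choose_processor_count(current: int, queries_per_sec: float,
                           capacity_per_proc: float = 100.0, headroom: float = 0.2,
                           min_procs: int = 1, max_procs: int = 64) -> int:
    """Return the number of processors to run for the observed load."""
    needed = max(min_procs, math.ceil(queries_per_sec / capacity_per_proc))
    # Hysteresis: only shrink when the load is clearly below the current capacity,
    # so short-lived dips in traffic do not force an immediate index regrouping.
    if needed < current and queries_per_sec > (1 - headroom) * current * capacity_per_proc:
        needed = current
    return min(max_procs, needed)

# A dip from 1000 to 850 queries/s stays inside the 20% headroom band, so the engine
# keeps its 10 processors; a drop to 450 queries/s triggers a resize down to 5.
print(choose_processor_count(10, 850.0))   # -> 10
print(choose_processor_count(10, 450.0))   # -> 5
```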
Abstract:
The attempts at carrying out terrorist attacks have become more prevalent. As a result, an increasing number of countries have become particularly vigilant against the means by which terrorists raise funds to finance their draconian acts against human life and property. Among the many counter-terrorism agencies in operation, governments have set up financial intelligence units (FIUs) within their borders for the purpose of tracking down terrorists' funds. By investigating reported suspicious transactions, FIUs attempt to weed out financial criminals who use these illegal funds to finance terrorist activity. The prominent role played by FIUs means that their performance is always under the spotlight. By interviewing experts and conducting surveys of those associated with the fight against financial crime, this study investigated perceptions of FIU performance on a comparative basis between American and non-American FIUs. The target group of experts included financial institution personnel, civilian agents, law enforcement personnel, academicians, and consultants. Questions for the interviews and surveys were based on Kaplan and Norton's Balanced Scorecard (BSC) methodology. One of the objectives of this study was to help determine the suitability of the BSC to this arena. While the FIUs in this study have concentrated on performance by measuring outputs such as the number of suspicious transaction reports investigated, this study calls for a focus on outcomes involving all the parties responsible for financial criminal investigations. It is only through such an integrated approach that these various entities will be able to improve performance in solving financial crime. Experts in financial intelligence strongly believed that the quality and timeliness of intelligence was more important than keeping track of the number of suspicious transaction reports. Finally, this study concluded that the BSC could be appropriately applied to the arena of financial crime prevention even though the emphasis is markedly different from that in the private sector. While priority in the private sector is given to financial outcomes, in this arena employee growth and internal processes were perceived as most important in achieving a satisfactory outcome.
Abstract:
Objective
Scant evidence is available on the discordance between loneliness and social isolation among older adults. We aimed to investigate this discordance and any health implications that it may have.
Method
Using nationally representative datasets from ageing cohorts in Ireland (TILDA) and England (ELSA), we created a metric of the discordance between loneliness and social isolation, which we refer to as Social Asymmetry. This metric is the categorised difference between standardised scores on a loneliness scale and a social isolation scale, yielding the categories Concordantly Lonely and Isolated, Discordant: Robust to Loneliness, and Discordant: Susceptible to Loneliness (an illustrative computation is sketched after this abstract). We used regression and multilevel modelling to identify potential relationships between Social Asymmetry and cognitive outcomes.
Results
Social Asymmetry predicted cognitive outcomes cross-sectionally and at a two-year follow-up, such that Discordant: Robust to Loneliness individuals were superior performers, but we failed to find evidence for Social Asymmetry as a predictor of cognitive trajectory over time.
Conclusions
We present a new metric and preliminary evidence of a relationship with clinical outcomes. Further research validating this metric in different populations, and evaluating its relationship with other outcomes, is warranted.
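The abstract specifies the construction of Social Asymmetry only at a high level, so the following minimal sketch assumes plain z-score standardisation and an arbitrary one-standard-deviation cut-off; the cohort-specific scales, weights and category boundaries used in TILDA and ELSA are not reproduced here.

```python
# Hedged sketch: categorise respondents by the gap between standardised loneliness and
# social-isolation scores. The +/- 1 SD cut-off and the mapping of the concordant label
# to the middle band are illustrative assumptions, not the study's exact definitions.
from statistics import mean, stdev

def zscores(values: list[float]) -> list[float]:
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def social_asymmetry(loneliness: list[float], isolation: list[float],
                     cutoff: float = 1.0) -> list[str]:
    """Return one Social Asymmetry category per respondent."""
    categories = []
    for lz, iz in zip(zscores(loneliness), zscores(isolation)):
        diff = lz - iz                      # lonelier than isolated -> positive gap
        if diff > cutoff:
            categories.append("Discordant: Susceptible to Loneliness")
        elif diff < -cutoff:
            categories.append("Discordant: Robust to Loneliness")
        else:
            categories.append("Concordantly Lonely and Isolated")
    return categories
```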