969 results for Point Data


Relevance: 30.00%

Publisher:

Abstract:

In the present study the effect of hot air recirculation is studied under suitable assumptions. It identifies the pressure drop across the tile as a dominant parameter governing recirculation. The rack suction pressure of the hardware, along with the pressure drop across the tile, determines the point of recirculation in the cold aisle. The positioning of hardware in the racks plays an important role in controlling the recirculation point. The present study is thus helpful in the design of data centre air flow based on the theory of jets. The air flow can be modelled both quantitatively and qualitatively based on the results.

Relevance: 30.00%

Publisher:

Abstract:

The combined use of both radiosonde data and three-dimensional satellite-derived data over ocean and land is useful for a better understanding of atmospheric thermodynamics. Here, an attempt is made to study the thermodynamic structure of the convective atmosphere during the pre-monsoon season over southwest peninsular India utilizing satellite-derived data and radiosonde data. The stability indices were computed for the selected stations over southwest peninsular India, viz. Thiruvananthapuram and Cochin, using the radiosonde data for five pre-monsoon seasons. The stability indices studied for the region are the Showalter Index (SI), K Index (KI), Lifted Index (LI), Total Totals Index (TTI), Humidity Index (HI), and Deep Convective Index (DCI), along with thermodynamic parameters such as Convective Available Potential Energy (CAPE) and Convective Inhibition Energy (CINE). The traditional Showalter Index has been modified to incorporate the thermodynamics over the tropical region. MODIS data over South Peninsular India are also used for the study. When there is a convective system over south peninsular India, the value of LI over the region is less than −4. On the other hand, a region where LI is more than 2 is comparatively stable, without any convection. Similarly, when KI values are in the range 35 to 40, there is a possibility of convection. The threshold value for TTI is found to be between 50 and 55. Further, we found that prior to convection, the dry bulb temperature at 1000, 850, 700 and 500 hPa is a minimum and the dew point temperature is a maximum, which leads to an increase in relative humidity. The total column water vapor is a maximum in the convective region and a minimum in the stable region. The threshold values for the different stability indices are found to agree with those reported in the literature.
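Two of the indices named in this abstract have simple closed forms that can be sketched directly from mandatory-level sounding values (temperatures in degrees Celsius). The station values below are hypothetical illustrations, not the Thiruvananthapuram or Cochin data:

```python
# K Index and Total Totals Index from standard pressure-level sounding data.
# t* = dry bulb temperature, td* = dew point temperature, in deg C.

def k_index(t850, t500, td850, t700, td700):
    """K Index: 850-500 hPa lapse rate plus low/mid-level moisture terms."""
    return (t850 - t500) + td850 - (t700 - td700)

def total_totals(t850, td850, t500):
    """Total Totals Index = Vertical Totals (T850 - T500) + Cross Totals (Td850 - T500)."""
    return (t850 - t500) + (td850 - t500)

# Hypothetical pre-monsoon sounding values (deg C)
ki = k_index(t850=24.0, t500=-6.0, td850=18.0, t700=10.0, td700=6.0)
tt = total_totals(t850=24.0, td850=18.0, t500=-6.0)
print(ki, tt)  # against the thresholds quoted above: KI 35-40, TTI 50-55
```

With these values KI = 44 and TTI = 54, i.e. both above the convection thresholds quoted in the abstract.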

Relevance: 30.00%

Publisher:

Abstract:

In our study we use a kernel-based machine learning technique, Support Vector Machine Regression, for predicting the melting point of drug-like compounds in terms of topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested using a dataset of 100 compounds, and it was found that an SVMReg model with an RBF kernel could predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
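The modelling pipeline described here can be sketched with scikit-learn's RBF-kernel support vector regressor. The descriptor matrix below is synthetic random data standing in for the paper's topological descriptors, so the errors will not match the reported 15.59 MAE / 19.76 RMSE:

```python
# Sketch: RBF-kernel SVR predicting melting point from molecular descriptors.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                           # 100 compounds x 20 descriptors
y = 150 + X[:, 0] * 30 + rng.normal(scale=10, size=100)  # melting points (deg C)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}")
```

The hyperparameters (`C`, `epsilon`) are illustrative guesses; in practice they would be tuned by cross-validation on the 100-compound dataset.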

Relevance: 30.00%

Publisher:

Abstract:

Formal Concept Analysis is an unsupervised learning technique for conceptual clustering. We introduce the notion of iceberg concept lattices and show their use in Knowledge Discovery in Databases (KDD). Iceberg lattices are designed for analyzing very large databases. In particular, they serve as a condensed representation of the frequent patterns known from association rule mining. In order to show the interplay between Formal Concept Analysis and association rule mining, we discuss the algorithm TITANIC. We show that iceberg concept lattices are a starting point for computing condensed sets of association rules without loss of information, and are a visualization method for the resulting rules.
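The condensed representation can be illustrated on a toy dataset: an iceberg concept lattice keeps only the *frequent closed* itemsets, i.e. frequent itemsets whose support strictly drops for every proper superset. The naive enumeration below is only illustrative; TITANIC computes the same structure far more efficiently:

```python
# Naive sketch: frequent closed itemsets as a condensed representation.
from itertools import combinations

transactions = [  # toy transaction database (hypothetical)
    {"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"a", "c"},
]
min_support = 2
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

# enumerate all frequent itemsets
frequent = {}
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        s = support(set(combo))
        if s >= min_support:
            frequent[frozenset(combo)] = s

# an itemset is closed if every proper superset has strictly lower support
iceberg = {
    iset: s for iset, s in frequent.items()
    if all(sup < s for other, sup in frequent.items() if iset < other)
}
print(iceberg)
```

Here {b} and {c} are pruned because {a,b} and {a,c} have the same supports, so the iceberg lattice {a}:4, {a,b}:3, {a,c}:2 losslessly summarizes all five frequent itemsets.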

Relevance: 30.00%

Publisher:

Abstract:

The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles required to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration impractical. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.

Relevance: 30.00%

Publisher:

Abstract:

In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
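One ingredient of the hybrid tracker described above, the ICP step that rigidly aligns two point sets, can be sketched in a few lines. This is a generic 2D ICP with brute-force nearest neighbours and a Kabsch (SVD) alignment step, not the authors' depth-image implementation, and the normal flow constraint is not shown:

```python
# Minimal ICP sketch: iterate closest-point matching + least-squares rigid fit.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # brute-force closest-point correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# toy example: recover a small known rotation + translation of a grid
g = np.arange(5.0) - 2.0
src = np.array([[x, y] for x in g for y in g])          # centred 5x5 grid
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([0.1, 0.05])            # transformed copy
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())
```

As the abstract notes, plain ICP like this is precise only for small displacements; the paper's contribution is combining it with the normal flow constraint for robustness to large, fast head movements.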

Relevance: 30.00%

Publisher:

Abstract:

One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: the search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, and the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of the sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, and the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis, we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…

Relevance: 30.00%

Publisher:

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent –essential zeros– or because it is below detection limit –rounded zeros. Because the second kind of zeros is usually understood as “a trace too small to measure”, it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g. by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts –and thus the metric properties– should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003) where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is “natural” in the sense that it recovers the “true” composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. 
As a generalization of the multiplicative replacement, the same paper also introduces a substitution method for missing values in compositional data sets.
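The multiplicative replacement described above has a very short form: each rounded zero is imputed with a small value delta, and the non-zero parts are rescaled multiplicatively so the composition still sums to the closure constant. A minimal sketch, with a hypothetical 4-part composition and a uniform delta:

```python
# Multiplicative replacement of rounded zeros in a composition summing to c.
import numpy as np

def multiplicative_replacement(x, delta, c=1.0):
    """Impute zeros in x with delta; rescale non-zero parts multiplicatively."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    # non-zero parts: x_j * (1 - total imputed mass / c); zero parts: delta
    return np.where(zeros, delta, x * (1 - delta * zeros.sum() / c))

comp = np.array([0.5, 0.3, 0.2, 0.0])   # hypothetical composition with one rounded zero
r = multiplicative_replacement(comp, delta=0.01)
print(r, r.sum())                       # closure is preserved
```

Because all non-zero parts are scaled by the same factor, ratios among them are unchanged, which is why the covariance structure of zero-free subcompositions is preserved.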

Relevance: 30.00%

Publisher:

Abstract:

Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and of the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop through standard methodology, such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
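The compositional singular value decomposition mentioned above can be sketched as: map the data to centred logratio (clr) coordinates, centre the matrix, and take its SVD. The 5 × 4 data matrix below is hypothetical, not the Scottish limestone data:

```python
# Sketch of a staying-in-the-simplex toolkit: clr transform + SVD.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.uniform(0.1, 1.0, size=(5, 4))
X = raw / raw.sum(axis=1, keepdims=True)   # close the data onto the unit simplex

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)   # centred logratio
Z = clr - clr.mean(axis=0)                 # centre the data set
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
print(s)                                   # singular values order the processes
```

Because each clr row sums to zero, a D-part composition has at most D − 1 dimensions of variability, so the last singular value is numerically zero; this is the rank deficiency that staying-in-the-simplex methods exploit.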

Relevance: 30.00%

Publisher:

Abstract:

In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case of projective geometry, where the projective coordinates lie in the positive orthant and have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study in the architecture of Early Christian churches dating back to the 5th–7th centuries AD.

Relevance: 30.00%

Publisher:

Abstract:

This 7-minute video describes how Turning Point's Showbar can be used to manage questions during a presentation and includes useful techniques such as peer instruction and data slicing.

Relevance: 30.00%

Publisher:

Abstract:

This is a research discussion about the Hampshire Hub - see http://protohub.net/. The aim is to find out more about the project, and to discuss future collaboration and sharing of ideas.

Mark Braggins (Hampshire Hub Partnership) will introduce the Hampshire Hub programme, setting out its main objectives, work done to date, and next steps, including the Hampshire data store (which will use the PublishMyData linked data platform) and opportunities for the University of Southampton to engage with the programme, including the forthcoming Hampshire Hackathons.

Bill Roberts (Swirrl) will give an overview of the PublishMyData platform and how it will help deliver the objectives of the Hampshire Hub. He will detail some of the new functionality being added to the platform.

Steve Peters (DCLG Open Data Communities) will focus on developing a web of data that blends and combines local and national data sources around localities and common topics/themes. This will include observations on the potential of employing emerging new big data sources to help deliver more effective, better targeted public services. Steve will illustrate this with practical examples of DCLG's work to publish its own data in a SPARQL end-point, so that it can be used over the web alongside related 3rd party sources. He will share examples of some of the practical challenges, particularly around querying and re-using geographic Linked Data in a federated world of SPARQL end-points.

Relevance: 30.00%

Publisher:

Abstract:

Objectives: To determine the effect of human papillomavirus (HPV) quadrivalent vaccine on the risk of developing subsequent disease after an excisional procedure for cervical intraepithelial neoplasia or diagnosis of genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia.
Design: Retrospective analysis of data from two international, double blind, placebo controlled, randomised efficacy trials of quadrivalent HPV vaccine (protocol 013 (FUTURE I) and protocol 015 (FUTURE II)).
Setting: Primary care centres and university or hospital associated health centres in 24 countries and territories around the world.
Participants: Among 17 622 women aged 15–26 years who underwent 1:1 randomisation to vaccine or placebo, 2054 received cervical surgery or were diagnosed with genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia.
Intervention: Three doses of quadrivalent HPV vaccine or placebo at day 1, month 2, and month 6.
Main outcome measures: Incidence of HPV related disease from 60 days after treatment or diagnosis, expressed as the number of women with an end point per 100 person years at risk.
Results: A total of 587 vaccine and 763 placebo recipients underwent cervical surgery. The incidence of any subsequent HPV related disease was 6.6 and 12.2 in vaccine and placebo recipients respectively (46.2% reduction (95% confidence interval 22.5% to 63.2%) with vaccination). Vaccination was associated with a significant reduction in risk of any subsequent high grade disease of the cervix by 64.9% (20.1% to 86.3%). A total of 229 vaccine recipients and 475 placebo recipients were diagnosed with genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia, and the incidence of any subsequent HPV related disease was 20.1 and 31.0 in vaccine and placebo recipients respectively (35.2% reduction (13.8% to 51.8%)).
Conclusions: Previous vaccination with quadrivalent HPV vaccine among women who had surgical treatment for HPV related disease significantly reduced the incidence of subsequent HPV related disease, including high grade disease.

Relevance: 30.00%

Publisher:

Abstract:

The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated.
Underestimation was caused by both the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
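The estimation idea behind the time-of-detection method can be sketched as closed capture-recapture: each bird's per-subinterval detection history is "recapture" data. The sketch below assumes a constant per-interval detection probability (model M0, the simplest case, which is exactly the homogeneous-rate situation above), fits it by a zero-truncated binomial likelihood, and expands the detected count to an abundance estimate. The simulation is hypothetical, not the authors' field-test data:

```python
# Closed capture-recapture abundance sketch for time-of-detection data (model M0).
import math
import random

random.seed(42)
N_true, k, p_true = 100, 4, 0.4   # birds, 2-min subintervals, detection prob/interval

# number of subintervals in which each bird was detected
counts = [sum(random.random() < p_true for _ in range(k)) for _ in range(N_true)]
detected = [c for c in counts if c > 0]   # birds with at least one detection
n = len(detected)

def neg_loglik(p):
    """Zero-truncated binomial negative log-likelihood for detected birds."""
    ll = 0.0
    for x in detected:
        ll += (math.log(math.comb(k, x)) + x * math.log(p)
               + (k - x) * math.log(1 - p) - math.log(1 - (1 - p) ** k))
    return -ll

# crude grid-search MLE for the per-interval detection probability
p_hat = min((i / 1000 for i in range(1, 1000)), key=neg_loglik)
N_hat = n / (1 - (1 - p_hat) ** k)   # expand by overall detection probability
print(f"n={n}  p_hat={p_hat:.3f}  N_hat={N_hat:.1f}")
```

When singing probabilities are heterogeneous, this M0 estimator is biased low, which is the underestimation pattern the paper reports.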

Relevance: 30.00%

Publisher:

Abstract:

A study to monitor boreal songbird trends was initiated in 1998 in a relatively undisturbed and remote part of the boreal forest in the Northwest Territories, Canada. Eight years of point count data were collected over the 14 years of the study, 1998-2011. Trends were estimated for 50 bird species using generalized linear mixed-effects models, with random effects to account for temporal (repeat sampling within years) and spatial (stations within stands) autocorrelation and variability associated with multiple observers. We tested whether regional and national Breeding Bird Survey (BBS) trends could, on average, predict trends in our study area. Significant increases in our study area outnumbered decreases by 12 species to 6, an opposite pattern compared to Alberta (6 versus 15, respectively) and Canada (9 versus 20). Twenty-two species with relatively precise trend estimates (precision to detect > 30% decline in 10 years; observed SE ≤ 3.7%/year) showed nonsignificant trends, similar to Alberta (24) and Canada (20). Precision-weighted trends for a sample of 19 species with both reliable trends at our site and small portions of their range covered by BBS in Canada were, on average, more negative for Alberta (1.34% per year lower) and for Canada (1.15% per year lower) relative to Fort Liard, though 95% credible intervals still contained zero. We suggest that part of the differences could be attributable to local resource pulses (insect outbreak). However, we also suggest that the tendency for BBS route coverage to disproportionately sample more southerly, developed areas in the boreal forest could result in BBS trends that are not representative of range-wide trends for species whose range is centred farther north.
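The precision-weighted averaging used to compare trends across regions weights each species' trend estimate by the inverse of its squared standard error. A minimal sketch with hypothetical numbers, not the Fort Liard estimates:

```python
# Inverse-variance (precision) weighted mean of per-species trend estimates.
trends = [-1.2, 0.8, -2.5, 0.3, -0.9]   # trend estimates, %/year (hypothetical)
ses    = [0.5, 1.0, 2.0, 0.8, 0.6]      # their standard errors

weights = [1 / se ** 2 for se in ses]   # precise estimates count for more
weighted_mean = sum(w * t for w, t in zip(weights, trends)) / sum(weights)
print(f"precision-weighted mean trend: {weighted_mean:.2f} %/year")
```

Down-weighting imprecise estimates this way keeps a single noisy species (like the -2.5 %/year estimate with SE 2.0 above) from dominating the regional comparison.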