49 results for Clustering over U-Matrix


Relevance:

30.00%

Publisher:

Abstract:

Vitamin E absorption requires the presence of fat; however, limited information exists on the influence of fat quantity on optimal absorption. In the present study we compared the absorption of stable-isotope-labelled vitamin E following meals of varying fat content and source. In a randomised four-way cross-over study, eight healthy individuals consumed a capsule containing 150 mg ²H-labelled RRR-α-tocopheryl acetate with a test meal of toast with butter (17.5 g fat), cereal with full-fat milk (17.5 g fat), cereal with semi-skimmed milk (2.7 g fat) or water (0 g fat). Blood was taken at 0, 0.5, 1, 1.5, 2, 3, 6 and 9 h following ingestion, chylomicrons were isolated, and ²H-labelled α-tocopherol was analysed in the chylomicron and plasma samples. There was a significant time effect (P < 0.001) and treatment effect (P < 0.001) on ²H-labelled α-tocopherol concentration in both chylomicrons and plasma between the test meals. The ²H-labelled α-tocopherol concentration was significantly greater with the higher-fat toast-and-butter meal than with the low-fat cereal meal or water (P < 0.001), with a trend towards a greater concentration compared with the high-fat cereal meal (P = 0.065). There was a significantly greater ²H-labelled α-tocopherol concentration with the high-fat cereal meal than with the low-fat cereal meal (P < 0.05). The ²H-labelled α-tocopherol concentration following either the low-fat cereal meal or water was low. These results demonstrate that both the amount of fat and the food matrix influence vitamin E absorption. These factors should be considered by consumers and in the design of future vitamin E intervention studies.

Relevance:

30.00%

Publisher:

Abstract:

Ashby was a keen observer of the world around him, as reflected in his technological and psychiatric work. Over the years he drew numerous philosophical conclusions about the nature of human intelligence and the operation of the brain, about artificial intelligence and the thinking ability of computers, and even about science in general. In this paper, the quite profound philosophy espoused by Ashby is considered as a whole, in particular in terms of its relationship with the world as it stands now, and even in terms of scientific predictions of where things might lead. A meaningful comparison is made between Ashby's comments and the science-fiction concept of 'The Matrix', and serious consideration is given to how far Ashby's ideas lay open the possibility of the Matrix becoming a real-world eventuality.

Relevance:

30.00%

Publisher:

Abstract:

The atmospheric component of the United Kingdom's new High-resolution Global Environmental Model (HiGEM) has been run with interactive aerosol schemes that include biomass burning and mineral dust. Dust emission, transport, and deposition are parameterized within the model using six particle size divisions, which are treated independently. The biomass is modeled in three nonindependent modes, and emissions are prescribed from an external dataset. The model is shown to produce realistic horizontal and vertical distributions of these aerosols for each season when compared with available satellite- and ground-based observations and with other models. Combined aerosol optical depths off the coast of North Africa exceed 0.5 both in boreal winter, when biomass is the main contributor, and in summer, when dust dominates. The model is capable of resolving smaller-scale features, such as dust storms emanating from the Bodélé and Saharan regions of North Africa and the wintertime Bodélé low-level jet. This is illustrated by February and July case studies, in which the diurnal cycles of model variables in relation to dust emission and transport are examined. The top-of-atmosphere annual mean radiative forcing of the dust is calculated and found to be globally quite small but locally very large, exceeding 20 W m⁻² over the Sahara, where inclusion of dust aerosol is shown to improve the model radiative balance. This work extends previous aerosol studies by combining complexity with increased global resolution and represents a step toward the next generation of models to investigate aerosol–climate interactions.

Relevance:

30.00%

Publisher:

Abstract:

A radionuclide source term model has been developed which simulates the biogeochemical evolution of the Drigg low-level waste (LLW) disposal site. The DRINK (DRIgg Near field Kinetic) model provides data on radionuclide concentrations in groundwater over a period of 100,000 years, which are used as input to assessment calculations for a groundwater pathway. The DRINK model also provides input to human intrusion and gaseous assessment calculations through simulation of the solid radionuclide inventory. These calculations are being used to support the Drigg post-closure safety case. The DRINK model considers the coupled interaction of the effects of fluid flow, microbiology, corrosion, chemical reaction, sorption and radioactive decay. It represents the first direct use of a mechanistic reaction-transport model in risk assessment calculations.
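The abstract does not give DRINK's equations; as a hedged sketch of two of the coupled processes it names, the following combines first-order radioactive decay with linear sorption (expressed as a retardation factor) in an explicit 1-D advection step. All names and parameter values (transport_step, the 30-year half-life, R = 5) are illustrative assumptions, not taken from DRINK.

```python
import numpy as np

# Hypothetical sketch of two coupled processes from the abstract:
# radioactive decay and linear sorption (retardation) applied to a
# 1-D upwind advection step. Not the DRINK code.

def transport_step(c, v, dx, dt, lam, R):
    """Advance aqueous concentration c (per cell) one explicit step.

    v   : pore-water velocity          lam : decay constant (1/yr)
    R   : retardation factor, R = 1 + rho_b * Kd / porosity
    """
    flux = np.zeros_like(c)
    # Upwind advection, slowed by sorption via the retardation factor
    flux[1:] = (v / R) * dt / dx * (c[:-1] - c[1:])
    flux[0] = -(v / R) * dt / dx * c[0]
    c = c + flux
    # First-order radioactive decay over the time step
    return c * np.exp(-lam * dt)

c = np.zeros(100)
c[0] = 1.0  # unit source in the first cell
for _ in range(1000):  # 1000 one-year steps; assumed 30-year half-life
    c = transport_step(c, v=0.1, dx=1.0, dt=1.0,
                       lam=np.log(2) / 30.0, R=5.0)
```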

Relevance:

30.00%

Publisher:

Abstract:

Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively, the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering and is shown to achieve significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.
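The mean-tracking algorithm itself is not specified in the abstract, so the sketch below only illustrates clustering-based centre selection using the k-means baseline the paper compares against; kmeans_centres and the dispersion proxy are illustrative names, not the paper's code.

```python
import numpy as np

# Choosing RBF centres by clustering: the k-means baseline from the
# abstract. Within-cluster dispersion serves as a crude quality proxy.

def kmeans_centres(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres, labels

X = np.random.default_rng(1).normal(size=(500, 2))
centres, labels = kmeans_centres(X, k=10)
dispersion = sum(((X[labels == j] - centres[j]) ** 2).sum()
                 for j in range(10))
print(f"within-cluster dispersion: {dispersion:.1f}")
```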

Relevance:

30.00%

Publisher:

Abstract:

Radial basis function networks can be trained quickly using linear optimisation once centres and other associated parameters have been initialised. The authors propose a small adjustment to a well-accepted initialisation algorithm which improves the network accuracy over a range of problems. The algorithm is described and results are presented.
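As a minimal sketch of the linear-optimisation training step described above, assuming Gaussian basis functions, naive random centre initialisation, and a guessed constant width (none of which are taken from the paper):

```python
import numpy as np

# With centres and widths fixed, RBF output weights reduce to a
# linear least-squares solve over the basis-response design matrix.

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

centres = X[rng.choice(len(X), 15, replace=False)]  # naive initialisation
width = 1.0                                         # assumed, not tuned

# Gaussian basis responses, plus a bias column
d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
Phi = np.hstack([np.exp(-d2 / (2 * width**2)), np.ones((len(X), 1))])

weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((Phi @ weights - y) ** 2)))
```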

Relevance:

30.00%

Publisher:

Abstract:

Objectives: Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian neoplasms and age-matched healthy controls, and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Materials and Methods: Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values, and their ability to classify blinded test samples was tested. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasm, and 2236 postmenopausal healthy controls. Results: Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for the high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications based on CA125 alone. Conclusions: We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
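For orientation, a hedged sketch of how sensitivity and specificity fall out of a threshold classifier at the two CA125 cutoffs mentioned; the group sizes echo the abstract, but the CA125 values themselves are synthetic placeholders, not the study's data.

```python
import numpy as np

# Threshold classification: predict cancer when CA125 >= cutoff,
# then score sensitivity and specificity against the true labels.

def sens_spec(ca125, is_cancer, cutoff):
    pred = ca125 >= cutoff
    sens = (pred & is_cancer).sum() / is_cancer.sum()
    spec = (~pred & ~is_cancer).sum() / (~is_cancer).sum()
    return sens, spec

rng = np.random.default_rng(0)
# Synthetic CA125 (U/mL): healthy ~ low, cancer ~ high (log-normal)
healthy = rng.lognormal(mean=2.5, sigma=0.5, size=2236)
cancer = rng.lognormal(mean=5.0, sigma=1.0, size=193)
ca125 = np.concatenate([healthy, cancer])
is_cancer = np.concatenate([np.zeros(2236, bool), np.ones(193, bool)])

for cutoff in (30.0, 65.0):
    s, p = sens_spec(ca125, is_cancer, cutoff)
    print(f"cutoff {cutoff:>4} U/mL: "
          f"sensitivity {s:.1%}, specificity {p:.1%}")
```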

Relevance:

30.00%

Publisher:

Abstract:

Little has so far been reported on the robustness of non-orthogonal space-time block codes (NO-STBCs) over highly correlated channels (HCC). Some of the existing NO-STBCs are indeed weak in robustness against HCC. With a view to overcoming such a limitation, a generalisation of the existing robust NO-STBCs based on a 'matrix Alamouti (MA)' structure is presented.
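For context, this is the classic 2×2 Alamouti structure that a 'matrix Alamouti' code presumably generalises, with matrix-valued blocks replacing the scalar symbols; the paper's actual MA construction is not reproduced here.

```latex
% Rows index time slots, columns index transmit antennas. The
% orthogonality property on the right is what makes decoding simple.
\[
  \mathbf{S} =
  \begin{pmatrix}
    s_1 & s_2 \\
    -s_2^{*} & s_1^{*}
  \end{pmatrix},
  \qquad
  \mathbf{S}^{H}\mathbf{S} = \left(|s_1|^2 + |s_2|^2\right)\mathbf{I}_2 .
\]
```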

Relevance:

30.00%

Publisher:

Abstract:

The Sun's open magnetic field, magnetic flux dragged out into the heliosphere by the solar wind, varies by approximately a factor of 2 over the solar cycle. We consider the evolution of open solar flux in terms of a source term and a loss term. Open solar flux creation is likely to proceed at a rate dependent on the rate of photospheric flux emergence, which can be roughly parameterized by sunspot number or, when available, coronal mass ejection rate. The open solar flux loss term is more difficult to relate to an observable parameter. The supersonic nature of the solar wind means that open solar flux can only be removed by near-Sun magnetic reconnection between open solar magnetic field lines and other field lines, be they open or closed heliospheric field lines. In this study we reconstruct open solar flux over the last three solar cycles and demonstrate that the loss term may be related to the degree to which the heliospheric current sheet (HCS) is warped, i.e., locally tilted from the solar rotation direction. This can account both for the large dip in open solar flux at the time of sunspot maximum and for the asymmetry in open solar flux between the rising and declining phases of the solar cycle. The observed cycle-to-cycle variability is also well matched. Following Sheeley et al. (2001), we attribute the modulation of open solar flux by the degree of warp of the HCS to the rate at which opposite-polarity open solar flux is brought together by differential rotation.
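Schematically, the source/loss picture described above can be written as follows; the functional forms are illustrative assumptions, not the paper's fitted model.

```latex
% A hedged schematic of the source/loss balance for open solar flux.
\[
  \frac{d\Phi_{\mathrm{open}}}{dt} = S(t) - L(t),
  \qquad
  S(t) \propto \text{CME rate (or sunspot number)},
  \qquad
  L(t) \propto W_{\mathrm{HCS}}(t)\,\Phi_{\mathrm{open}}(t),
\]
% where W_HCS measures the degree to which the heliospheric current
% sheet is warped.
```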

Relevance:

30.00%

Publisher:

Abstract:

Investment risk models with infinite variance provide a better description of distributions of individual property returns in the IPD database over the period 1981 to 2003 than Normally distributed risk models, mirroring results in the U.S. and Australia obtained using identical methodology. Real estate investment risk is heteroscedastic, but the Characteristic Exponent of the investment risk function is constant across time and may vary only by property type. Asset diversification is far less effective at reducing the impact of non-systematic investment risk on real estate portfolios than in the case of assets with Normally distributed investment risk. Multi-risk-factor portfolio allocation models based on measures of investment codependence from finite-variance statistics are ineffectual in the real estate context.
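A hedged sketch of the infinite-variance model family involved: α-stable returns with Characteristic Exponent α < 2, whose sample variance never settles as the sample grows. The parameter values below are illustrative, not the paper's estimates.

```python
import numpy as np
from scipy.stats import levy_stable

# alpha is the Characteristic Exponent; alpha < 2 implies infinite
# variance and heavy tails. (Stable sampling can be slow for large n.)

rng = np.random.default_rng(0)
alpha, beta = 1.7, 0.0   # assumed exponent and skewness, not fitted values
stable_returns = levy_stable.rvs(alpha, beta, scale=0.01, size=20000,
                                 random_state=rng)
normal_returns = rng.normal(scale=0.01, size=20000)

# The sample variance of the stable draws fails to stabilise as the
# sample grows, unlike the Normal case: the practical signature of
# infinite variance.
for n in (1000, 5000, 20000):
    print(n, stable_returns[:n].var(), normal_returns[:n].var())
```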

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the dynamics of the residential property market in the United States between 1960 and 2011. Given the cyclicality and apparent overvaluation of the market over this period, we determine whether deviations of real estate prices from their fundamentals were caused by the existence of two genres of bubbles: intrinsic bubbles and rational speculative bubbles. We find evidence of an intrinsic bubble in the market pre-2000, implying that overreaction to changes in rents contributed to the overvaluation of real estate prices. However, using a regime-switching model, we find evidence of periodically collapsing rational bubbles in the post-2000 market.
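For orientation, the canonical intrinsic-bubble decomposition of Froot and Obstfeld (1991), on which this literature builds; the paper's empirical specification may differ.

```latex
% Price splits into a fundamental part, linear in rents, and an
% intrinsic bubble that is a nonlinear function of rents alone.
\[
  P_t = \underbrace{\kappa D_t}_{\text{fundamental value}}
      + \underbrace{c\,D_t^{\lambda}}_{\text{intrinsic bubble}},
  \qquad \lambda > 1,
\]
% where D_t denotes rents (dividends); the bubble term makes prices
% overreact to changes in D_t, as the abstract describes.
```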

Relevance:

30.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
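A minimal sketch of the gossip (epidemic) averaging primitive that such decentralised algorithms typically build on, assuming pairwise averaging of per-cluster sums and counts; this illustrates the idea, not the paper's actual protocol.

```python
import numpy as np

# Nodes repeatedly average local cluster statistics with random peers.
# Pairwise averaging preserves the global totals, so every node's
# estimate converges toward the global means without any global
# reduction step.

rng = np.random.default_rng(0)
n_nodes, k, dim = 50, 3, 2

# Each node holds local per-cluster sums and point counts
sums = rng.normal(size=(n_nodes, k, dim))
counts = rng.uniform(1, 10, size=(n_nodes, k, 1))

for _ in range(200):  # gossip rounds
    i, j = rng.choice(n_nodes, 2, replace=False)
    sums[[i, j]] = (sums[i] + sums[j]) / 2
    counts[[i, j]] = (counts[i] + counts[j]) / 2

estimated_means = sums / counts            # per-node estimates
true_means = sums.sum(0) / counts.sum(0)   # what a central reduce gives
print(np.abs(estimated_means - true_means).max())  # shrinks with rounds
```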

Relevance:

30.00%

Publisher:

Abstract:

We propose a new algorithm for summarizing properties of large-scale time-evolving networks. This type of data, recording connections that come and go over time, is being generated in many modern applications, including telecommunications and on-line human social behavior. The algorithm computes a dynamic measure of how well pairs of nodes can communicate by taking account of routes through the network that respect the arrow of time. We take the conventional approach of downweighting for length (messages become corrupted as they are passed along) and add the novel feature of downweighting for age (messages go out of date). This allows us to generalize widely used Katz-style centrality measures that have proved popular in network science to the case of dynamic networks sampled at non-uniform points in time. We illustrate the new approach on synthetic and real data.
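A hedged sketch of a running Katz-style measure with both ingredients named above: downweighting for walk length (parameter a) and for age (parameter b). The exact recursion used in the paper may differ; this only illustrates how the two factors combine over non-uniformly sampled snapshots.

```python
import numpy as np

# Running communicability over a time-stamped sequence of adjacency
# matrices. Requires a * spectral_radius(A) < 1 for each snapshot.

def dynamic_communicability(snapshots, times, a=0.1, b=0.5):
    n = snapshots[0].shape[0]
    S = np.zeros((n, n))
    t_prev = times[0]
    for A, t in zip(snapshots, times):
        age = np.exp(-b * (t - t_prev))               # messages go out of date
        resolvent = np.linalg.inv(np.eye(n) - a * A)  # Katz: sum over walk lengths
        S = (np.eye(n) + age * S) @ resolvent - np.eye(n)
        t_prev = t
    return S

rng = np.random.default_rng(0)
n = 20
snapshots = [(rng.random((n, n)) < 0.1).astype(float) for _ in range(5)]
times = [0.0, 1.0, 1.5, 4.0, 4.2]  # non-uniform sampling, as in the abstract
Q = dynamic_communicability(snapshots, times)
broadcast_centrality = Q.sum(axis=1)  # row sums: ability to send messages
```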

Relevance:

30.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.

Relevance:

30.00%

Publisher:

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm where the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested on a parallel computing system with 64 processors and in simulations with 1024 processing elements.
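For contrast, a minimal mpi4py sketch of the straightforward formulation described above, where every iteration ends in the global reduction that limits scalability. The script name and all parameters are arbitrary illustrations.

```python
import numpy as np
from mpi4py import MPI

# Straightforward parallel k-means: each process owns a uniform shard
# of the data; every iteration ends in an allreduce over cluster sums
# and counts. Run with, e.g.: mpiexec -n 4 python kmeans_mpi.py

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.Get_rank())
X = rng.normal(size=(10000, 2))  # this process's local shard
k = 8
# Rank 0 seeds the initial centres; everyone receives the same copy
centres = np.array(comm.bcast(X[:k] if comm.Get_rank() == 0 else None))

for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    local_sums = np.zeros((k, 2))
    local_counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        local_sums[j] = X[mask].sum(axis=0)
        local_counts[j] = mask.sum()
    # The global reduction: every node must synchronise here each step
    sums = comm.allreduce(local_sums, op=MPI.SUM)
    counts = comm.allreduce(local_counts, op=MPI.SUM)
    centres = sums / np.maximum(counts, 1)[:, None]
```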