9 results for Non-commercial film distribution

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

A field trial was undertaken to determine the influence of four commercially available film-forming polymers (Bond [alkyl phenyl hydroxyl polyoxyethylene], Newman Crop Spray 11E™ [paraffinic oil], Nu-Film P [poly-1-p menthene], and Spray Gard [di-1-p menthene]) on reducing salt spray injury in two woody species, evergreen oak (Quercus ilex L.) and laurel (Prunus laurocerasus L.). Irrespective of species or concentration applied (1% or 2%), the film-forming polymers Nu-Film P and Spray Gard did not provide any significant protection against salt spray damage, as measured by leaf chlorophyll concentration, photosynthetic efficiency, visual leaf necrosis, foliar sodium and chloride content, and growth (height, leaf area). The film-forming polymer Newman Crop Spray 11E™ provided only one week of protection against salt spray injury. The film-forming polymer Bond provided a significant (P < 0.05) degree of protection against salt spray injury 3 months after application, as manifested by higher leaf chlorophyll content, photosynthetic efficiency, height, and leaf area, and by lower visual leaf necrosis and foliar Na and Cl content compared with nontreated controls. In conclusion, the results indicate that application of a suitable film-forming polymer can provide a significant degree of protection, lasting up to 3 months, against salt spray injury in evergreen oak and laurel. The results also indicate that, when applied as 1% or 2% solutions, these polymers present no problems with phytotoxicity or rapid degradation on the leaf surface.

Relevance:

100.00%

Publisher:

Relevance:

100.00%

Publisher:

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested on a parallel computing system with 64 processors and in simulations with 1024 processing elements.
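The per-iteration global reduction that the abstract identifies as the scalability bottleneck can be pictured with a small sketch. This is not the paper's implementation: it is a plain NumPy illustration in which a list of arrays stands in for node-local data partitions, and the sum over the "nodes" axis plays the role an MPI allreduce would play on a real parallel system.

```python
import numpy as np

def kmeans_step(partitions, centroids):
    """One iteration of k-means over node-local data partitions.

    Each 'node' computes partial cluster sums and counts from its own
    data only; the final sum over nodes is the global reduction step
    (an MPI allreduce in a real system) that limits scalability.
    """
    k, d = centroids.shape
    partial_sums = np.zeros((len(partitions), k, d))
    partial_counts = np.zeros((len(partitions), k))
    for node, data in enumerate(partitions):
        # assign each local point to its nearest centroid
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            partial_sums[node, j] = data[mask].sum(axis=0)
            partial_counts[node, j] = mask.sum()
    # global reduction: every node needs these totals before the next step
    total_sums = partial_sums.sum(axis=0)
    total_counts = partial_counts.sum(axis=0)
    # keep old centroids for clusters that received no points
    nonempty = total_counts > 0
    new_centroids = centroids.copy()
    new_centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty, None]
    return new_centroids
```

Because the totals must be identical on every node before the next iteration can start, this reduction synchronises all nodes once per iteration, which is exactly the communication the proposed non-uniform formulation relaxes.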

Relevance:

100.00%

Publisher:

Abstract:

We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (-0.2231, 0.2231). We compare the gold standard method for calculating the sample size, based on the non-central t distribution, with methods based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared with the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
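The gold standard calculation described here can be sketched as follows. This is an illustration, not code from the paper: it computes the power of the two one-sided tests (TOST) procedure from the non-central t distribution and then searches for the smallest total sample size, under the common assumptions that the within-subject variability is given as a CV on the original scale and that subjects split evenly between the two sequences.

```python
import numpy as np
from scipy import stats

def tost_power(n, cv, theta=0.0, alpha=0.05, limits=(0.80, 1.25)):
    """Approximate TOST power for a 2x2 cross-over via the non-central t.

    n     -- total number of subjects (both sequences combined)
    cv    -- within-subject coefficient of variation on the original scale
    theta -- true difference of means on the natural log scale
    """
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))  # lognormal CV -> log-scale SD
    se = sigma_w * np.sqrt(2.0 / n)           # SE of the treatment difference
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    lo, hi = np.log(limits[0]), np.log(limits[1])
    # non-centrality parameters of the two one-sided test statistics
    ncp_lo = (theta - lo) / se
    ncp_hi = (theta - hi) / se
    # both one-sided tests must reject for bioequivalence to be claimed
    power = stats.nct.cdf(-tcrit, df, ncp_hi) - stats.nct.cdf(tcrit, df, ncp_lo)
    return max(power, 0.0)

def sample_size(target_power, cv, theta=0.0, alpha=0.05):
    """Smallest even total n whose TOST power reaches the target."""
    n = 4
    while tost_power(n, cv, theta, alpha) < target_power:
        n += 2
    return n
```

Replacing `stats.nct` with the central t or the normal distribution reproduces the simpler approximations the abstract compares, which is where the under- or overestimation of the sample size can arise.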

Relevance:

100.00%

Publisher:

Abstract:

Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
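The kriging machinery the review discusses can be illustrated with a minimal ordinary-kriging sketch. This is a hand-rolled example, not from the article: the exponential covariance model and its parameters are assumptions chosen for the demonstration, whereas in practice they would come from a fitted (e.g. climatological) variogram.

```python
import numpy as np

def ordinary_kriging(xy, z, targets, sill=1.0, corr_range=50.0):
    """Ordinary kriging with an exponential covariance model.

    xy      -- (n, 2) gauge coordinates
    z       -- (n,)  observed values (e.g. daily rainfall)
    targets -- (m, 2) prediction locations
    sill, corr_range -- parameters of C(h) = sill * exp(-h / corr_range)
    """
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-h / corr_range)

    n = len(xy)
    # kriging system with a Lagrange multiplier enforcing sum(w) = 1
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy)
    K[n, n] = 0.0
    preds = np.empty(len(targets))
    for i, t in enumerate(targets):
        rhs = np.ones(n + 1)
        rhs[:n] = cov(xy, t[None, :])[:, 0]
        w = np.linalg.solve(K, rhs)
        preds[i] = w[:n] @ z  # weights sum to 1 (unbiasedness)
    return preds
```

Two properties worth noting: with no nugget effect the predictor reproduces the observations exactly at gauge locations, and far from all gauges the weights fall back towards the spatial mean, which is the behaviour that makes kriging convenient for areal averaging. The sketch omits the features the review stresses for rainfall, such as the mixed rainy/nonrainy distribution and the use of secondary information.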

Relevance:

100.00%

Publisher:

Abstract:

There are competing theoretical expectations and conflicting empirical results concerning the impact of partisanship on spending on active labour market policies (ALMPs). This paper argues that one should distinguish between different ALMPs. Employment incentives and rehabilitation programmes incentivize the unemployed to accept jobs. Direct job creation reduces the supply of labour by creating non-commercial jobs. Training schemes raise the human capital of the unemployed. Using regression analysis this paper shows that the positions of political parties towards these three types of ALMPs are different. Party preferences also depend on the welfare regime in which parties are located. In Scandinavia, left-wing parties support neither employment incentives nor direct job creation schemes. In continental and Liberal welfare regimes, left-wing parties oppose employment incentives and rehabilitation programmes to a lesser extent and they support direct job creation. There is no impact of partisanship on training. These results reconcile the previously contradictory findings concerning the impact of the Left on ALMPs.

Relevance:

100.00%

Publisher:

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploit the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm where the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can be either found in real world distributed applications or can be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs.
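The multi-dimensional binary search trees used to induce a non-uniform data distribution can be pictured with a small median-split sketch. This is an illustrative reconstruction, not the paper's code: it recursively cuts the data at the median along alternating axes, producing spatially contiguous blocks that could be assigned to different processing nodes.

```python
import numpy as np

def kd_partition(data, depth_limit, depth=0):
    """Split data into 2**depth_limit blocks via median cuts.

    Cuts alternate over the coordinate axes, as in a k-d tree, so each
    block covers a contiguous region of space. Spatial contiguity is what
    lets most k-means assignments be resolved with local communication
    only, instead of a global reduction.
    """
    if depth == depth_limit:
        return [data]
    axis = depth % data.shape[1]
    median = np.median(data[:, axis])
    left = data[data[:, axis] <= median]
    right = data[data[:, axis] > median]
    return (kd_partition(left, depth_limit, depth + 1)
            + kd_partition(right, depth_limit, depth + 1))
```

With continuous-valued data the median cuts keep the block sizes balanced even when the underlying point density is not, which is one way a non-uniform spatial distribution can coexist with a reasonable computational load balance.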

Relevance:

100.00%

Publisher:

Abstract:

By the mid-1930s the major Hollywood studios had developed extensive networks of distribution subsidiaries across five continents. This article focuses on the operation of American film distributors in Australia – one of Hollywood's largest foreign markets. Drawing on two unique primary datasets, the article compares and investigates film distribution in Sydney's first-run and suburban-run markets. It finds that the subsidiaries of US film companies faced a greater liability of foreignness in the city centre market than in the suburban one. Our data support the argument that film audiences in local or suburban cinema markets were more receptive to Hollywood entertainment than those in metropolitan centres.

Relevance:

100.00%

Publisher:

Abstract:

Massive Open Online Courses (MOOCs) are a recent addition to open educational provision. They are offered mainly by prestigious universities, on various commercial and non-commercial MOOC platforms, allowing anyone who is interested to experience the world-class teaching practised in these universities. MOOCs have attracted wide interest from around the world. However, learner demographics in MOOCs suggest that some demographic groups are underrepresented. At present, MOOCs seem to be serving the continuing professional development sector best.