411 results for best linear unbiased predictor


Relevance: 20.00%

Abstract:

This thesis has investigated how to cluster a large number of faces within a multimedia corpus in the presence of large session variation. Quality metrics are used to select the best faces to represent a sequence of faces, and session variation modelling improves clustering performance in the presence of wide variations across videos. Findings from this thesis contribute to improving the performance of both face verification systems and the fully automated clustering of faces from a large video corpus.

Relevance: 20.00%

Abstract:

The knowledge economy draws its nourishment from the diversity and dissemination of ideas and the creativity of its talent base. This has led to the acknowledgement of place making as a major strategy for attracting and retaining that talent base in the emerging knowledge and innovation spaces. The study explores the adoption of place making in this context. Literature and practice provide the information needed to understand the evolution of various spatial typologies and the specialised role of place making in such locations, which helps in determining the key facilitators of place making. The paper takes an interdisciplinary approach and develops an integrated conceptual framework encompassing the dimensions and facilitators of place making. Through the lens of this framework, best practices from across Europe (Cambridge Science Park, UK; 22@Barcelona, Spain; Arabianranta, Finland; Strijp-S, the Netherlands; and the Digital Hub, Ireland) are scrutinised to highlight various approaches to place making. The findings provide insights into, and a discussion of, the interplay of form, function, image and underlying processes in globally emerging spatial typologies of contemporary knowledge and innovation spaces.

Relevance: 20.00%

Abstract:

Two beetle-type scanning tunneling microscopes are described. Both designs have the thermal stability of the Besocke beetle and the simplicity of the Wilms beetle. Moreover, sample holders were designed that allow both semiconductor wafers and metal single crystals to be studied. The coarse approach is a linear motion of the beetle towards the sample using inertial slip–stick motion. Ten wires are required to control the position of the beetle and scanner and to measure the tunneling current. The two beetles were built with different-sized piezo legs, and the vibrational properties of both beetles were studied in detail. It was found, in agreement with previous work, that the beetle bending mode is the lowest principal eigenmode. However, in contrast to previous vibrational studies of beetle-type scanning tunneling microscopes, we found that the beetles did not have the “rattling” modes that are thought to arise from the beetle sliding or rocking between surface asperities on the raceway. The mass of our beetles is 3–4 times larger than that of beetles in which rattling modes have been observed. We conjecture that the mass of our beetles is above a “critical beetle mass,” defined as the beetle mass that attenuates the rattling modes by elastically deforming the contact region to the extent that they can no longer be identified as distinct modes in cross-coupling measurements.

Relevance: 20.00%

Abstract:

This study examines and quantifies the effect of adding polyelectrolytes on the gel point of cellulose nanofibre suspensions, the lowest solids concentration at which the suspension forms a continuous network. The lower the gel point, the faster the drainage time to produce a sheet and the higher the porosity of the final sheet formed. Two new techniques were designed to measure the dynamic compressibility and the drainability of nanocellulose–polyelectrolyte suspensions. We developed a master curve showing that the independent variable controlling the behaviour of nanocellulose suspensions and their composites is the structure of the flocculated suspension, which is best quantified by the gel point. This was independent of the type of polyelectrolyte used. At an addition level of 2 mg/g of nanofibre, a reduction in gel point of over 50% was achieved using either a high molecular weight (13 MDa) linear cationic polyacrylamide (CPAM, 40% charge) or a dendrimer polyethylenimine of either high molecular weight (750,000 Da; HPEI) or low molecular weight (2000 Da; LPEI). There was no significant difference in the minimum gel point achieved, despite the differences in polyelectrolyte morphology and molecular weight. In this paper, we show that the gel point controls the flow through the fibre suspension, even when comparing fibre suspensions with solids contents above the gel point. A lower gel point makes it easier for water to drain through the fibre network, reducing the pressure required to achieve a given dewatering rate and reducing the filtering time required to form a wet-laid sheet. We further show that the lower gel point partially controls the structure of the wet-laid sheet after it is dried. Halving the gel point increased the air permeability of the dry sheet by 37%, 46% and 25% when using CPAM, HPEI and LPEI, respectively. The resistance to liquid flow was reduced by 74% and 90% when using CPAM and LPEI. Analysis of the paper formed shows that the sheet-forming process and final sheet properties can be engineered and controlled by adding polyelectrolytes to the nanofibre suspension.

Relevance: 20.00%

Abstract:

Learner Driver Mentor Programs (LDMPs) assist disadvantaged learner drivers to gain supervised on-road driving experience by providing access to vehicles and volunteer mentors. In the absence of existing research investigating the implementation of Best Practice principles in LDMPs, this case study examines successful program operation in the context of a rural town setting. The study is based on an existing Best Practice model for LDMPs and on triangulation of data from a mentor focus group (n = 7), interviews with program stakeholders (n = 9), and an in-depth interview with the site-based program development officer. The data presented are based on selected findings from the broader evaluation study. Preliminary findings regarding driving session management, support of mentors and mentees, and building and maintaining relationships with program stakeholders are discussed. Key findings relate to the importance of relationships in engagement with the program and of collaborating across sectors to achieve a range of positive outcomes for learners. The findings highlight the need for the program to be relevant and responsive to the requirements of the population and the context in which it is operating.

Relevance: 20.00%

Abstract:

Arterial compliance has been shown to correlate well with overall cardiovascular outcome, and it may also be a potential risk factor for the development of atheromatous disease. This study assesses the utility of 2-D phase contrast magnetic resonance (MR) imaging with intra-sequence blood pressure measurement to determine carotid compliance and distensibility. Twenty patients underwent 2-D phase contrast MR imaging as well as ultrasound-based wall-tracking measurements. Values for carotid compliance and distensibility were derived from the two modalities and compared. Linear regression analysis was used to determine the extent of correlation between the MR- and ultrasound-derived parameters, and an agreement analysis was undertaken for those variables that could be directly compared. MR measures of compliance showed a good correlation with measures based on ultrasound wall tracking (r=0.61, 95% CI 0.34 to 0.81, p=0.0003). Vessels that had previously undergone carotid endarterectomy were significantly less compliant than either diseased or normal contralateral vessels (p=0.04). Agreement studies showed a relatively poor intra-class correlation coefficient (ICC) between diameter-based measures of compliance from either MR or ultrasound (ICC=0.14). MRI-based assessment of local carotid compliance appears to be both robust and technically feasible in most subjects. Measures of compliance correlate well with ultrasound-based values, and correlate best when cross-sectional area change is used rather than derived diameter changes. If validated by further, larger studies, 2-D phase contrast imaging with intra-sequence blood pressure monitoring and off-line radial artery tonometry may provide a useful tool in the further assessment of patients with carotid atheroma.
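The quantities being compared can be stated compactly. The sketch below uses the standard cross-sectional-area definitions of compliance and distensibility; the abstract does not spell out the study's exact formulation, so the function name and example numbers are illustrative assumptions:

```python
def carotid_compliance(area_sys_mm2, area_dia_mm2, p_sys_mmHg, p_dia_mmHg):
    """Cross-sectional-area-based compliance and distensibility.

    Standard vascular definitions (assumed; not quoted from the paper):
        compliance     C = dA / dP            [mm^2 / mmHg]
        distensibility D = dA / (A_dia * dP)  [1 / mmHg]
    """
    dA = area_sys_mm2 - area_dia_mm2          # pulsatile area change
    dP = p_sys_mmHg - p_dia_mmHg              # pulse pressure
    return dA / dP, dA / (area_dia_mm2 * dP)

# Illustrative values: lumen areas from 2-D phase contrast MR,
# pressures from intra-sequence monitoring / radial tonometry
C, D = carotid_compliance(38.0, 33.5, 128.0, 76.0)
print(f"compliance = {C:.3f} mm^2/mmHg, distensibility = {D:.5f} 1/mmHg")
```

Using the area change directly, as here, avoids the derived-diameter step that showed poor agreement in the study.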

Relevance: 20.00%

Abstract:

This paper presents a maximum likelihood method for estimating growth parameters for an aquatic species that incorporates growth covariates and takes into consideration multiple tag-recapture data. Individual variability in asymptotic length, age at tagging, and measurement error are also accommodated in the model structure. Using distribution theory, the log-likelihood function is derived under a generalised framework for the von Bertalanffy and Gompertz growth models. Owing to the generality of the derivation, covariate effects can be included for both models, with seasonality and tagging effects investigated. Method robustness is established via comparison with the Fabens, improved Fabens, James, and non-linear mixed-effects growth models, with the maximum likelihood method performing best. The method is further illustrated with an application to blacklip abalone (Haliotis rubra), for which a strong growth-retarding tagging effect that persisted for several months was detected.
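As a point of reference for the comparison above, here is a minimal sketch of the basic Fabens estimator, the simplest of the benchmark methods named (the paper's full likelihood, with covariates, individual variability and measurement error, is considerably richer). The tag-recapture values are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def fabens_residuals(params, L1, dL, dt):
    """Residuals for the Fabens (1965) form of the von Bertalanffy model:
    predicted increment dL = (Linf - L1) * (1 - exp(-K * dt))."""
    Linf, K = params
    return dL - (Linf - L1) * (1.0 - np.exp(-K * dt))

# Hypothetical tag-recapture records: length at tagging (mm),
# growth increment (mm), and time at liberty (years)
L1 = np.array([78.0, 95.0, 60.0, 110.0, 85.0])
dL = np.array([14.0, 8.0, 22.0, 4.0, 11.0])
dt = np.array([1.0, 0.8, 1.5, 0.6, 1.1])

fit = least_squares(fabens_residuals, x0=[140.0, 0.3], args=(L1, dL, dt))
Linf_hat, K_hat = fit.x
print(f"Linf = {Linf_hat:.1f} mm, K = {K_hat:.3f} /yr")
```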

Relevance: 20.00%

Abstract:

This paper proposes a linear quantile regression analysis method for longitudinal data that combines between- and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. As a result, the proposed method yields more efficient parameter estimates than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is introduced to obtain parameter estimates and their variances. Under some regularity conditions, the estimators derived by the induced smoothing method are consistent and asymptotically normal. A number of simulation studies are carried out to evaluate the performance of the proposed method. The results indicate that the efficiency gain for the proposed method is substantial, especially when strong within-subject correlations exist. Finally, a dataset from audiology growth research is used to illustrate the proposed methodology.
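A minimal sketch of the induced-smoothing idea in its simplest, independent-data form (the paper's method additionally combines between- and within-subject estimating functions for longitudinal data; the bandwidth choice below is an assumption made for illustration):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def smoothed_ee(beta, X, y, tau, h):
    """Induced-smoothing estimating function for linear quantile regression:
    the indicator I(y - X @ beta < 0) in the usual estimating equation
    X' (tau - I(residual < 0)) = 0 is replaced by the differentiable
    normal cdf Phi(-residual / h)."""
    resid = y - X @ beta
    return X.T @ (tau - norm.cdf(-resid / h))

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

tau = 0.5                 # median regression
h = n ** -0.5             # bandwidth of order n^(-1/2), a common choice
beta_hat = fsolve(smoothed_ee, x0=np.zeros(2), args=(X, y, tau, h))
print(beta_hat)           # approximately (1, 2)
```

Because the smoothed estimating function is differentiable, a standard root finder applies and sandwich-type variance estimates follow directly, which is the computational advantage the abstract refers to.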

Relevance: 20.00%

Abstract:

We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.

Relevance: 20.00%

Abstract:

The 2008 US election has been heralded as the first presidential election of the social media era, but it took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of that experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation.

This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase.

A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or, alternatively, demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and to exploit or avert damage from campaign gaffes.

A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage.

By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets), which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election, which will provide detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
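The chapter's custom analytics tools are not specified in this excerpt, but the basic tweeting-style breakdown it relies on (original messages, @replies, retweets) can be sketched in a few lines; the classification heuristic and sample tweets below are illustrative assumptions:

```python
from collections import Counter

def tweet_style(text):
    """Crude style classifier: retweet, @reply, or original broadcast."""
    if text.startswith("RT @"):
        return "retweet"
    if text.startswith("@"):
        return "@reply"
    return "original"

# Hypothetical timeline excerpt for a campaign account
timeline = [
    "Four more years.",
    "RT @Obama2012: Watch tonight's debate live",
    "@JoeBiden See you in Ohio tomorrow",
    "Early voting starts today in Iowa",
]
print(Counter(tweet_style(t) for t in timeline))
# Counter({'original': 2, 'retweet': 1, '@reply': 1})
```

Tracking these proportions per account and per week is one simple way to operationalise the "broadcast versus engagement" distinction the chapter draws.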

Relevance: 20.00%

Abstract:

We propose an iterative estimating equations procedure for the analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iteratively reweighted least squares. Finite-sample performance of the procedure is studied by simulation and compared with that of other methods. A numerical example from a medical study illustrates the application of the method.
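A minimal sketch of the linear-model special case mentioned above, where the procedure reduces to iteratively reweighted (generalised) least squares: alternate between a GLS step given the current working covariance and re-estimating that covariance from subject-level residuals. Balanced clusters and the simulation setup are assumptions made to keep the sketch short:

```python
import numpy as np

def iterative_gls(X_groups, y_groups, n_iter=20):
    """Alternate between (i) generalised least squares given the current
    working covariance V and (ii) re-estimating V from subject-level
    residuals; the first pass (V = I) is ordinary least squares."""
    T = y_groups[0].size
    V = np.eye(T)
    for _ in range(n_iter):
        Vinv = np.linalg.inv(V)
        A = sum(Xi.T @ Vinv @ Xi for Xi in X_groups)
        b = sum(Xi.T @ Vinv @ yi for Xi, yi in zip(X_groups, y_groups))
        beta = np.linalg.solve(A, b)
        resid = [yi - Xi @ beta for Xi, yi in zip(X_groups, y_groups)]
        V = sum(np.outer(r, r) for r in resid) / len(resid)
    return beta, V

# Hypothetical balanced longitudinal data: m subjects, T measurements each,
# correlated within subject through a shared random intercept
rng = np.random.default_rng(1)
m, T, beta_true = 100, 4, np.array([1.0, 0.5])
X_groups, y_groups = [], []
for _ in range(m):
    Xi = np.column_stack([np.ones(T), rng.normal(size=T)])
    y_groups.append(Xi @ beta_true + rng.normal() + rng.normal(size=T))
    X_groups.append(Xi)

beta_hat, V_hat = iterative_gls(X_groups, y_groups)
print(beta_hat)    # approximately (1.0, 0.5)
```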

Relevance: 20.00%

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models, and the random-effects models yielded similar results. This is because the estimators are all consistent even if the correlation structure is misspecified, and the dataset is very large. However, the standard errors from the different models differed, suggesting that the methods differ in statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, in order to make valid and efficient statistical inferences and to gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at values assumed from external sources. This may be due to the large degree of confounding within the data, and to the extreme temporal changes in certain aspects of individual vessels, the fleet, and the fleet dynamics.
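A sketch of the kind of comparison described, using statsmodels; the column names, effect sizes and simulated records are illustrative stand-ins for NPF catch-effort data (vessel covariates, with year effects serving as the relative abundance index after standardisation):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical catch-effort records clustered by vessel
rng = np.random.default_rng(2)
n_vessels, n_years = 30, 9
df = pd.DataFrame({
    "vessel": np.repeat(np.arange(n_vessels), n_years),
    "year": np.tile(np.arange(1988, 1988 + n_years), n_vessels),
    "log_effort": rng.normal(5.0, 0.5, n_vessels * n_years),
    "hull_len": np.repeat(rng.normal(20.0, 3.0, n_vessels), n_years),
})
vessel_effect = np.repeat(rng.normal(0.0, 0.3, n_vessels), n_years)
df["log_catch"] = (1.0 + 0.8 * df["log_effort"] + 0.05 * df["hull_len"]
                   + vessel_effect + rng.normal(0.0, 0.2, len(df)))

# Linear model assuming independence versus a GEE with exchangeable
# within-vessel correlation; the C(year) coefficients give the year effects
formula = "log_catch ~ log_effort + hull_len + C(year)"
lm = smf.ols(formula, data=df).fit()
gee = smf.gee(formula, groups="vessel", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(lm.params["log_effort"], gee.params["log_effort"])
```

Consistent point estimates with differing standard errors across the two fits is exactly the pattern the abstract reports for the NPF data.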

Relevance: 20.00%

Abstract:

The method of generalised estimating equations for regression modelling of clustered outcomes allows for the specification of a working matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis: (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
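A simulation-style sketch of point (i): fitting the same clustered data under an independence versus an AR(1) working structure when the true within-cluster correlation is AR(1). statsmodels' moment-based estimate of the AR parameter stands in for the paper's 'Gaussian estimation' pseudolikelihood here, so this illustrates the working-structure comparison rather than the paper's exact procedure:

```python
import numpy as np
import statsmodels.api as sm

# Simulated clustered data whose true within-cluster correlation is AR(1)
rng = np.random.default_rng(3)
m, T, rho = 200, 6, 0.7
groups = np.repeat(np.arange(m), T)
time = np.tile(np.arange(T), m)
x = rng.normal(size=m * T)
X = sm.add_constant(x)

e = np.empty(m * T)                       # stationary AR(1) errors per cluster
for i in range(m):
    eps = rng.normal(size=T)
    e[i * T] = eps[0]
    for t in range(1, T):
        e[i * T + t] = rho * e[i * T + t - 1] + np.sqrt(1 - rho**2) * eps[t]
y = 1.0 + 0.5 * x + e

# Same data, two working structures; compare the robust standard error
# of the slope as a rough efficiency check
for cs in (sm.cov_struct.Independence(),
           sm.cov_struct.Autoregressive(grid=True)):
    res = sm.GEE(y, X, groups=groups, time=time,
                 cov_struct=cs, family=sm.families.Gaussian()).fit()
    print(type(cs).__name__, res.bse[1])
```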

Relevance: 20.00%

Abstract:

The article describes a generalized estimating equations approach that was used to investigate the impact of technology on vessel performance in a trawl fishery during 1988–96, while accounting for spatial and temporal correlations in the catch-effort data. Robust estimation of parameters in the presence of several levels of clustering depended more on the choice of cluster definition than on the choice of correlation structure within the cluster. Models with smaller cluster sizes produced stable results, while models with larger cluster sizes, which may have had complex within-cluster correlation structures and which had within-cluster covariates, produced estimates that were sensitive to the correlation structure. The preferred model arising from this dataset assumed that catches from a vessel were correlated in the same years and the same areas, but independent in different years and areas. The model that assumed catches from a vessel were correlated in all years and areas, equivalent to a random effects term for vessel, produced spurious results. This was an unexpected finding that highlighted the need to adopt a systematic strategy for modelling. The article proposes a modelling strategy of selecting the best cluster definition first, and the working correlation structure (within clusters) second. The article discusses the selection and interpretation of the model in the light of background knowledge of the data and the utility of the model, and the potential for this modelling approach to apply in similar statistical situations.

Relevance: 20.00%

Abstract:

This paper presents an approach, based on the Lean production philosophy, for rationalising the processes involved in producing specification documents for construction projects. Current construction literature erroneously depicts the process of creating construction specifications as a linear one, and this traditional understanding of the specification process often culminates in process wastes. On the contrary, the evidence suggests that, though generalised, the activities involved in producing specification documents are nonlinear. Drawing on the outcomes of participant observation, this paper presents an optimised approach for representing construction specifications. The actors typically involved in producing specification documents are identified, the processes suitable for automation are highlighted, and the central role of tacit knowledge is integrated into a conceptual template of construction specifications. By applying the transformation, flow, value (TFV) theory of Lean production, the paper argues that value creation can be realised by eliminating the wastes associated with the traditional preparation of specification documents, with a view to integrating specifications into digital models such as Building Information Models (BIM). The paper therefore presents the TFV theory as a method for optimising current approaches to generating construction specifications, based on a revised specification-writing model.