3 results for Companies Size and Activity Sector

in QSpace: Queen's University - Canada


Relevance:

100.00%

Abstract:

Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which can heavily distort the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. QR is therefore widely used in econometrics, environmental sciences, and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined through either precision analysis or power analysis using closed-form formulas. Methods that calculate sample size for QR based on precision analysis also exist, such as that of Jennen-Steinmetz and Wellek (2005), and a method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis for hypothesis tests of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers must make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user can choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also specifies the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
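The check (pinball) loss at the heart of Koenker and Bassett's estimator can be sketched in a few lines. The snippet below is a minimal Python illustration (not the thesis's R implementation): it shows that minimizing the summed check loss over constant fits recovers the sample quantile, and why that fit is robust to an outlier that drags the least-squares fit (the mean).

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss: rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def fit_constant_quantile(y, tau):
    """The tau-th sample quantile minimizes the summed check loss over constants.
    Grid-search over the observed values (a minimizer always lies at a data point)."""
    losses = [check_loss(y - c, tau).sum() for c in y]
    return y[int(np.argmin(losses))]

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # 100 is an outlier

print(fit_constant_quantile(y, 0.5))  # -> 3.0, the median: unmoved by the outlier
print(y.mean())                       # -> 22.0, the least-squares fit: dragged by it
```

With covariates, the same loss is minimized over regression coefficients rather than a single constant; the sample-size method described above then simulates data under the assumed error distribution and covariate structure and counts how often the test of a covariate effect rejects, giving an empirical power for each candidate sample size.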

Relevance:

100.00%

Abstract:

The embedding of third sector organisations in the policy world is fraught with tensions. Accountability and autonomy become oppositional forces, causing an uneasy relationship. Government agencies are concerned that their equity and efficiency goals and objectives be met when they enter partnerships with the third sector for the delivery of programs and services. Third sector agencies question the impact of accountability mechanisms on their independence and identities. Even if the relationship between government and third sector agencies seems to be based on cooperation, concerns about cooptation (for nonprofits) and capturing (for governments) may linger, calling the legitimacy of the partnership into question. Two means of improving the relationship between the governing and third sectors have been proposed recently in Canada by the Panel on Accountability and Governance in the Voluntary Sector (PAGVS) and the Joint Tables sponsored by the Voluntary Sector Task Force (VSTF). The two endeavours represent a historic undertaking in Canada aimed at improving and facilitating the relationship between the federal government and the nonprofit sector. The reports borrow from models in other countries but offer new insights into mediating the relationship, including new models for a regulatory body and a charity compact for Canada. Do these recommendations adequately address concerns of autonomy, accountability, and cooptation or capturing? The Canadian reports do offer new insights into resolving the four tensions inherent in partnerships between the governing and third sectors, but they also raise important questions about the nature of these relationships and the evolution of democracy within the Canadian political system.

Relevance:

100.00%

Abstract:

Larger lineups could protect innocent suspects from being misidentified; however, they can also decrease correct identifications. Bertrand (2006) investigated whether the decrease in correct identifications could be prevented by adding more cues, in the form of additional views of lineup members’ faces, to the lineup. Adding these cues was successful to an extent. The current series of studies attempted to replicate Bertrand’s (2006) findings while addressing some methodological issues—namely, the inconsistency in image size as lineup size increased. First, I investigated whether image size could affect face recognition (Chapter 2) and found it could, but that it affected previously-seen (“old”) and previously-unseen (“new”) faces differently. Specifically, smaller image sizes at exposure lowered accuracy for old faces, while these same image sizes at recognition lowered accuracy for new faces. Although these results indicate that target recognition would be unaffected by image size at recognition (i.e., during a lineup), lineups also comprise previously-unseen faces, in the form of fillers and innocent suspects. Because image size could affect lineup decisions, as it could become more difficult to recognize that fillers are previously unseen, I decided to replicate Bertrand (2006) while keeping image size constant in Chapters 3 (simultaneous lineups) and 4 (simultaneous-presentation, sequential decisions). In both Chapters, the integral findings were the same: correct identification rates decreased as lineup size increased from 6- to 24-person lineups, but adding cues had no effect. The inability to replicate Bertrand (2006) could mean that the original finding was due to chance, but alternate explanations also exist, such as the overall size of the array, the degree to which additional cues overlap, and the length of the target exposure. These alternate explanations, along with directions for future research, are discussed in the following Chapters.