954 results for Sample selection
Abstract:
Coccolithophores are unicellular marine algae that produce biogenic calcite scales and substantially contribute to marine primary production and carbon export to the deep ocean. Ongoing ocean acidification particularly impairs calcifying organisms, mostly resulting in decreased growth and calcification. Recent studies revealed that the immediate physiological response in the coccolithophore Emiliania huxleyi to ocean acidification may be partially compensated by evolutionary adaptation, yet the underlying molecular mechanisms are currently unknown. Here, we report on the expression levels of 10 candidate genes putatively relevant to pH regulation, carbon transport, calcification and photosynthesis in E. huxleyi populations short-term exposed to ocean acidification conditions after acclimation (physiological response) and after 500 generations of high CO2 adaptation (adaptive response). The physiological response revealed downregulation of candidate genes, well reflecting the concomitant decrease of growth and calcification. In the adaptive response, putative pH regulation and carbon transport genes were up-regulated, matching partial restoration of growth and calcification in high CO2-adapted populations. Adaptation to ocean acidification in E. huxleyi likely involved improved cellular pH regulation, presumably indirectly affecting calcification. Adaptive evolution may thus have the potential to partially restore cellular pH regulatory capacity and thereby mitigate adverse effects of ocean acidification.
Abstract:
An investigation was conducted to evaluate the impact of experimental designs and spatial analyses (single-trial models) on the response to selection for grain yield in the northern grains region of Australia (Queensland and northern New South Wales). Two sets of multi-environment experiments were considered. One set, based on 33 trials conducted from 1994 to 1996, was used to represent the testing system of the wheat breeding program and is referred to as the multi-environment trial (MET). The second set, based on 47 trials conducted from 1986 to 1993, sampled a more diverse set of years and management regimes and was used to represent the target population of environments (TPE). There were 18 genotypes in common between the MET and TPE sets of trials. From indirect selection theory, the phenotypic correlation coefficient between the MET and TPE single-trial adjusted genotype means [r(p(MT))] was used to determine the effect of the single-trial model on the expected indirect response to selection for grain yield in the TPE based on selection in the MET. Five single-trial models were considered: randomised complete block (RCB), incomplete block (IB), spatial analysis (SS), spatial analysis with a measurement error (SSM) and a combination of spatial analysis and experimental design information to identify the preferred (PF) model. Bootstrap-resampling methodology was used to construct multiple MET data sets, ranging in size from 2 to 20 environments per MET sample. The size and environmental composition of the MET and the single-trial model influenced the r(p(MT)). On average, the PF model resulted in a higher r(p(MT)) than the IB, SS and SSM models, which were in turn superior to the RCB model for MET sizes based on fewer than ten environments. For METs based on ten or more environments, the r(p(MT)) was similar for all single-trial models.
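The bootstrap step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the adjusted genotype means are simulated under a simple shared-genotype-effect model, and the noise scales and replication counts are hypothetical. The sketch shows how r(p(MT)) is computed for bootstrap MET samples of a given size and how it grows with the number of environments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical adjusted genotype means: 18 shared genotypes,
# 33 MET environments and 47 TPE environments (values simulated).
n_geno, n_met, n_tpe = 18, 33, 47
g = rng.normal(size=n_geno)                      # latent genotype effects
met = g[:, None] + rng.normal(scale=1.5, size=(n_geno, n_met))
tpe = g[:, None] + rng.normal(scale=1.5, size=(n_geno, n_tpe))
tpe_mean = tpe.mean(axis=1)

def bootstrap_rp(met, tpe_mean, size, n_boot=500, rng=rng):
    """Mean phenotypic correlation r(p(MT)) between MET-sample genotype
    means and TPE genotype means, over bootstrap MET samples of `size`
    environments drawn with replacement."""
    rs = []
    for _ in range(n_boot):
        envs = rng.integers(0, met.shape[1], size=size)
        met_mean = met[:, envs].mean(axis=1)
        rs.append(np.corrcoef(met_mean, tpe_mean)[0, 1])
    return float(np.mean(rs))

# r(p(MT)) rises as the MET sample includes more environments.
print(bootstrap_rp(met, tpe_mean, 2), bootstrap_rp(met, tpe_mean, 20))
```

Averaging over more environments shrinks the environment-specific noise in the MET genotype means, which is why the correlation with the TPE means, and hence the expected indirect response to selection, improves with MET size.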
Abstract:
This study uses a sample of young Australian twins to examine whether the findings reported in Ashenfelter, Orley and Krueger, Alan (1994), 'Estimates of the Economic Return to Schooling from a New Sample of Twins', American Economic Review, Vol. 84, No. 5, pp. 1157-73, and Miller, P.W., Mulvey, C. and Martin, N. (1994), 'What Do Twins Studies Tell Us About the Economic Returns to Education?: A Comparison of Australian and US Findings', Western Australian Labour Market Research Centre Discussion Paper 94/4, are robust to the choice of sample and dependent variable. The economic return to schooling in Australia is between 5 and 7 percent when account is taken of genetic and family effects using either fixed-effects models or the selection effects model of Ashenfelter and Krueger. Given the similarity of the findings in this and in related studies, it would appear that the models applied by Ashenfelter and Krueger (1994) are robust. Moreover, viewing the OLS and IV estimators as lower and upper bounds in the manner of Black, Dan A., Berger, Mark C. and Scott, Frank C. (2000), 'Bounding Parameter Estimates with Nonclassical Measurement Error', Journal of the American Statistical Association, Vol. 95, No. 451, pp. 739-748, it is shown that the bounds on the return to schooling in Australia are much tighter than in Ashenfelter and Krueger (1994), and the return is bounded at a much lower level than in the US.
Abstract:
Objective: This study examined a sample of patients in Victoria, Australia, to identify factors in selection for conditional release from an initial hospitalization that occurred within 30 days of entry into the mental health system. Methods: Data were from the Victorian Psychiatric Case Register. All patients first hospitalized and conditionally released between 1990 and 2000 were identified (N = 8,879), and three comparison groups were created. Two groups were hospitalized within 30 days of entering the system: those who were given conditional release and those who were not. A third group was conditionally released from a hospitalization that occurred after or extended beyond 30 days after system entry. Logistic regression identified characteristics that distinguished the first group. Ordinary least-squares regression was used to evaluate the contribution of conditional release early in treatment to reducing inpatient episodes, inpatient days, days per episode, and inpatient days per 30 days in the system. Results: Conditional release early in treatment was used for 11 percent of the sample, or more than a third of those who were eligible for this intervention. Factors significantly associated with selection for early conditional release were those related to a better prognosis (initial hospitalization at a later age and having greater than an 11th grade education), a lower likelihood of a diagnosis of dementia or schizophrenia, involuntary status at first inpatient admission, and greater community involvement (being employed and being married). When the analyses controlled for these factors, use of conditional release early in treatment was significantly associated with a reduction in use of subsequent inpatient care.
Abstract:
The aim of our paper is to examine whether Exchange Traded Funds (ETFs) diversify away the private information of informed traders. We apply the spread decomposition models of Glosten and Harris (1988) and Madhavan, Richardson and Roomans (1997) to a sample of ETFs and their control securities. Our results indicate that ETFs have significantly lower adverse selection costs than their control securities. This suggests that private information is diversified away for these securities. Our results therefore offer one explanation for the rapid growth in the ETF market.
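The kind of spread decomposition referred to here can be sketched with a deliberately simplified two-component model on simulated trades. This is not the specification estimated in the paper: the sketch collapses the Glosten-Harris decomposition to a transitory (order-processing) cost c and an adverse-selection component z, and the parameter values and sample size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trades for a simplified Glosten-Harris style decomposition:
# price change dP_t = c*dQ_t + z*Q_t + u_t, where Q_t is the trade
# direction (+1 buy, -1 sell), c the transitory cost per trade and
# z the adverse-selection component (permanent price impact).
n = 5000
c_true, z_true = 0.05, 0.02
Q = rng.choice([-1.0, 1.0], size=n)
u = rng.normal(scale=0.01, size=n)
dQ = np.diff(Q, prepend=Q[0])
dP = c_true * dQ + z_true * Q + u

# OLS estimate of (c, z): regress price changes on [dQ, Q].
X = np.column_stack([dQ, Q])
(c_hat, z_hat), *_ = np.linalg.lstsq(X, dP, rcond=None)
print(c_hat, z_hat)
```

The adverse-selection coefficient z captures the permanent impact of a trade on the price; the paper's finding is that this component is smaller for ETFs than for comparable single securities.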
Abstract:
This paper presents results of a study examining the methods used to select employees in 579 UK organizations representing a range of different organization sizes and industry sectors. Overall, a smaller proportion of organizations in this sample reported using formalized methods (e.g., assessment centres) than informal methods (e.g., unstructured interviews). The curriculum vitae (CV) was the most commonly used selection method, followed by the traditional triad of application form, interviews, and references. Findings also indicated that the use of different selection methods was similar in both large organizations and small-to-medium-sized enterprises. Differences were found across industry sector, with the public and voluntary sectors being more likely to use formalized techniques (e.g., application forms rather than CVs and structured rather than unstructured interviews). The results are discussed in relation to their implications, both in terms of practice and future research.
Abstract:
In this paper we propose a prototype size selection method for a set of sample graphs. Our first contribution is to show how approximate set coding can be extended from the vector to the graph domain. With this framework in hand, we show how prototype selection can be posed as optimizing the mutual information between two partitioned sets of sample graphs. We show how the resulting method can be used for prototype graph size selection. In our experiments, we apply our method to a real-world dataset and investigate its performance on prototype size selection tasks.
Abstract:
County jurisdictions in America are increasingly exercising self-government in the provision of public community services in the context of second-order federalism. In states exercising this form of contemporary governance, county governments with "reformed" policy-making structures and professional management practices have begun to rival or surpass municipalities in the delivery of local services with regional implications such as environmental protection (Benton 2002, 2003; Marando and Reeves 1993). The voter referendum, a form of direct democracy, is an important component of county land preservation and environmental protection governmental policies. The recent growth and success of land preservation voter referendums nationwide reflects an increase in citizen participation in government and their desire to protect vacant land and its natural environment from threats of over-development, urbanization and sprawl, loss of open space and farmland, deterioration of ecosystems, and inadequate park and recreational amenities. The study's design employs a sequential, mixed method. First, a quantitative approach employs the Heckman two-step model. It is fitted with variables for the non-random sample of 227 voter referendum counties and all non-voter referendum counties in the U.S. from 1988 to 2009. Second, the qualitative data collected from the in-depth investigation of three South Florida county case studies with twelve public administrator interviews is transformed for integration with the quantitative findings. The purpose of the qualitative method is to complement, explain and enrich the statistical analysis of county demographic, socio-economic, terrain, regional, governance and government, political preference, environmentalism, and referendum-specific factors.
The research finds that government factors are significant in terms of the success of land preservation voter referendums; more specifically, the presence of self-government authority (home rule charter), a reformed structure (county administrator/manager or elected executive), and environmental interest groups. In addition, this study concludes that successful counties are often coastal, exhibit population and housing growth, and have older and more educated citizens who vote Democratic in presidential elections. The analysis of case study documents and public administrator interviews finds that pragmatic considerations of timing, local politics and networking of regional stakeholders are also important features of success. Further research is suggested utilizing additional public participation, local government and public administration factors.
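The mechanics of the Heckman two-step model used in the quantitative stage can be sketched as follows. The data, covariates, and coefficient values here are simulated and hypothetical; the sketch only illustrates the procedure itself: a probit selection equation, the inverse Mills ratio computed from it, and an outcome regression augmented with that ratio to correct for non-random selection.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical data: w drives selection into the observed group,
# x drives the outcome; correlated errors induce selection bias.
n = 4000
x = rng.normal(size=n)
w = rng.normal(size=n)
e_sel = rng.normal(size=n)
e_out = 0.6 * e_sel + rng.normal(scale=0.8, size=n)
selected = (0.5 + 1.0 * w + e_sel) > 0          # selection equation
y = 1.0 + 2.0 * x + e_out                        # outcome equation

# Step 1: probit for selection, then the inverse Mills ratio.
def probit_nll(b):
    idx = b[0] + b[1] * w
    p = norm.cdf(idx).clip(1e-10, 1 - 1e-10)
    return -(selected * np.log(p) + (~selected) * np.log(1 - p)).sum()

b = minimize(probit_nll, np.zeros(2)).x
idx = b[0] + b[1] * w
mills = norm.pdf(idx) / norm.cdf(idx)

# Step 2: OLS on the selected sample, with the Mills ratio as an
# extra regressor absorbing the selection bias.
sel = selected
X2 = np.column_stack([np.ones(sel.sum()), x[sel], mills[sel]])
beta, *_ = np.linalg.lstsq(X2, y[sel], rcond=None)
print(beta)  # should be close to [1.0, 2.0, 0.6]
```

Without the Mills ratio term, OLS on the selected sample alone would be biased whenever the selection and outcome errors are correlated, which is exactly the situation a non-random sample of referendum counties presents.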
Abstract:
This study examined the relationships between gifted selection criteria used in the Dade County Public Schools of Miami, Florida and performance in sixth grade gifted science classes. The goal of the study was to identify significant predictors of performance in sixth grade gifted science classes. Group comparisons of performance were also made. Performance in sixth grade gifted science was defined as the numeric average of nine weeks' grades earned in sixth grade gifted science classes. The sample consisted of 100 subjects who were formerly enrolled in sixth grade gifted science classes over two years at a large, multiethnic public middle school in Dade County. The predictors analyzed were I.Q. score (all scales combined), full scale I.Q. score, verbal scale I.Q. score, performance scale I.Q. score, combined Stanford Achievement Test (SAT) score (Reading Comprehension plus Math Applications), SAT Reading Comprehension score, and SAT Math Applications score. Combined SAT score and SAT Math Applications score were significantly positively correlated with performance in sixth grade gifted science. Performance scale I.Q. score was significantly negatively correlated with performance in sixth grade gifted science. The other predictors examined were not significantly correlated with performance. Group comparison results showed the mean average of nine weeks' grades for the full scale I.Q. group was greater than for the verbal and performance scale I.Q. groups. Females outperformed males at a highly significant level. Mean g.p.a. for ethnic groups was greatest for Asian students, followed by white non-Hispanic, Hispanic, and black students. Students not receiving a lunch subsidy outperformed those receiving subsidies. Comparisons of performance based on gifted qualification plan showed the mean g.p.a. for the traditional plan and Plan B groups were not different. Mean g.p.a. for students who qualified for gifted using automatic Math Applications criteria was highest, followed by automatic Reading Comprehension criteria and Plan B Matrix score. Both automatic qualification groups outperformed the traditional group. The traditional group outperformed the Plan B Matrix group. No significant differences in mean g.p.a. between the Plan B subgroups and the traditional plan group were found.
Abstract:
In this dissertation, I first suggest an extension of the managerial rents model and more specifically the managerial skills typology that it offers. Building on research in international business, I propose adding country-specific skills (CSS) to this typology in addition to firm-specific, industry-specific, and generic skills. I define CSS as managers' abilities that are applicable and specific to a particular national institutional context. Such skills are distinct from the other three types identified and are likely to influence managers' performance and the performance of their firms. So if CSS are distinct skills, what are the implications for strategy and international business research? In an attempt to respond to this question, I conduct two empirical essays in which I examine the implications of this refinement of the typology of managerial skills for CEO selection and firms' mergers and acquisitions (M&A) strategy. In the first empirical essay, I puzzle over the fact that although CSS constitute a barrier to high-level executive mobility across countries, there have been a growing number of foreign-born CEOs appointed across the globe. Why are these individuals being selected for the post of CEO? Using information on the appointment of foreign-born and national CEOs from 2005 to 2010 among Global 500 companies, I show that internationalization pressures help explain their selection and that two types of firms are likely to appoint foreign leaders: highly internationalized firms and firms that are likely to internationalize. In the second empirical essay, I examine the strategic implications of country-specific skills. Employing the same sample as in the first empirical essay, I demonstrate that, given that their mindset is likely to be less focused on firms' home market, foreign-born CEOs may be prone to institute more changes in firms' cross-border M&A strategy than their domestic counterparts. I also theorize on the moderating influence of CEOs' insiderness.
Abstract:
The present dissertation consists of two studies that combine personnel selection, safety performance, and job performance literatures to answer an important question: are safe workers better workers? Study 1 tested a predictive model of safety performance to examine personality characteristics (conscientiousness and agreeableness), and two novel behavioral constructs (safety orientation and safety judgment) as predictors of safety performance in a sample of forklift loaders/operators (N = 307). Analyses centered on investigating safety orientation as a proximal predictor and determinant of safety performance. Study 2 replicated Study 1 and explored the relationship between safety performance and job performance by testing an integrative model in a sample of machine operators and construction crewmembers (N = 323). Both Study 1 and Study 2 found conscientiousness, agreeableness, and safety orientation to be good predictors of safety performance. While both personality and safety orientation were positively related to safety performance, safety orientation proved to be a more proximal determinant of safety performance. Across studies, results surrounding safety judgment as a predictor of safety performance were inconclusive, suggesting possible issues with measurement of the construct. Study 2 found a strong relationship between safety performance and job performance. In addition, safety performance served as a mediator between predictors (conscientiousness, agreeableness and safety orientation) and job performance. Together these findings suggest that safe workers are indeed better workers, challenging previous viewpoints to the contrary. Further, results implicate the viability of personnel selection as a means of promoting safety in organizations.
Abstract:
This thesis demonstrates a new way to achieve sparse biological sample detection, which uses magnetic bead manipulation on a digital microfluidic device. Sparse sample detection was made possible through two steps: sparse sample capture and fluorescent signal detection. For the first step, the immunological reaction between antibody and antigen enables the binding between target cells and antibody-coated magnetic beads, hence achieving sample capture. For the second step, fluorescent detection is achieved via fluorescent signal measurement and magnetic bead manipulation. In these two steps, a total of three functions need to work together, namely magnetic bead manipulation, fluorescent signal measurement and immunological binding. The first function is magnetic bead manipulation, and it uses the structure of current-carrying wires embedded in the actuation electrode of an electrowetting-on-dielectric (EWD) device. The current wire structure serves as a microelectromagnet, which is capable of segregating and separating magnetic beads. The device can achieve high segregation efficiency when the wire spacing is 50 µm, and it is also capable of separating two kinds of magnetic beads within a 65 µm distance. The device ensures that the magnetic bead manipulation and the EWD function can be operated simultaneously without introducing additional steps in the fabrication process. Half-circle-shaped current wires were designed in later devices to concentrate magnetic beads in order to increase the SNR of sample detection. The second function is immunological binding. Immunological reaction kits were selected in order to ensure the compatibility of target cells, magnetic bead function and EWD function. The magnetic bead choice ensures the binding efficiency and survivability of target cells. The magnetic bead selection and binding mechanism used in this work can be applied to a wide variety of samples with a simple switch of the type of antibody.
The last function is fluorescent measurement. Fluorescent measurement of sparse samples is made possible by using fluorescent stains and a method to increase SNR. The improved SNR is achieved by target cell concentration and a reduced sensing area. The theoretical detection limit of the entire sparse sample detection system is as low as 1 Colony Forming Unit/mL (CFU/mL).
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using a regularized regression or Bayesian variable selection method, calculates the "median" feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
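The steps of the message algorithm just described can be sketched as follows. This is an illustrative reconstruction on simulated data, not the thesis implementation: a simple correlation screen stands in for the regularized-regression or Bayesian variable-selection step, and the screening threshold is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: n samples, p features, 3 truly active features.
n, p, k_subsets = 3000, 20, 6
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(size=n)

# Partition rows into k subsets (sample-space partitioning).
subsets = np.array_split(rng.permutation(n), k_subsets)

# Per subset: select features in parallel, recording inclusion
# indicators (a correlation screen stands in for the regularized
# regression step; 0.5 is a hypothetical threshold).
include = np.zeros((k_subsets, p))
for i, rows in enumerate(subsets):
    corr = np.abs(X[rows].T @ y[rows]) / len(rows)
    include[i] = corr > 0.5

# Median feature inclusion index across subsets -> consensus support.
support = np.median(include, axis=0) >= 0.5

# Re-estimate coefficients on each subset for the selected features,
# then average the subset estimates.
est = np.zeros(p)
fits = [np.linalg.lstsq(X[rows][:, support], y[rows], rcond=None)[0]
        for rows in subsets]
est[support] = np.mean(fits, axis=0)
print(np.flatnonzero(support), est[support])
```

Only the per-subset inclusion indicators and coefficient vectors cross machine boundaries, which is why the communication cost stays minimal.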
While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
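The decorrelation idea behind DECO can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: the ridge constant, the use of plain least squares per worker (rather than a high-dimensional method), the simulated one-factor design, and a low-dimensional setting chosen so the effect is exact are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Correlated design: every feature loads on a common factor, so naively
# fitting each feature block in isolation would be badly biased.
n, p, n_workers = 300, 60, 3
f = rng.normal(size=n)
X = 0.7 * f[:, None] + rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 25, 50]] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.1, size=n)

# Decorrelation step (one common form, with a small ridge term r for
# numerical stability): premultiply X and y by (XX'/p + r*I)^(-1/2).
r = 1e-4
G = X @ X.T / p + r * np.eye(n)
w_eig, U = np.linalg.eigh(G)
F = U @ np.diag(w_eig ** -0.5) @ U.T
Xd, yd = F @ X, F @ y

# After decorrelation the feature blocks are (nearly) orthogonal, so
# each worker can fit its own block independently.
est = np.zeros(p)
for block in np.array_split(np.arange(p), n_workers):
    b, *_ = np.linalg.lstsq(Xd[:, block], yd, rcond=None)
    est[block] = b
print(np.flatnonzero(np.abs(est) > 0.5))  # recovered support
```

Because premultiplying by the inverse square root of the sample Gram matrix makes the transformed columns nearly orthogonal, per-worker fits no longer suffer omitted-variable bias from active features held by other workers.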
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), by leveraging both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.