962 results for Maximum independent set


Relevance: 20.00%

Abstract:

The Government of the Hong Kong SAR sponsored a report investigating the Hong Kong construction industry and published the investigating committee's findings in 2001 (HK CIRC 2001). Since then the Provisional Construction Industry Coordination Board (PCICB) and its successor, the Construction Industry Council (CIC), also set up by the Government, have made progress with the necessary reforms. Now that seven years have passed, it is time for an independent evaluation of the impact of the CIRC initiative in order to assist the CIC and Government decision-makers in refining efforts to improve the industry's performance. This paper reports on the interim results of a study that seeks to provide such an evaluation.

Relevance: 20.00%

Abstract:

The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for its hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized emission rates for various motor vehicle groups as a function of the conditions under which the vehicles are operating. The validation of aggregate measurements, such as speed and acceleration profiles, is performed on an independent data set using three statistical criteria. The MEASURE algorithms have been shown to provide significant improvements in both average emission estimates and explanatory power over some earlier models for pollutants across almost every operating cycle tested.
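The abstract does not spell out which three statistical criteria were used, so the following is only a hedged sketch of validating predicted emission rates against independently observed values using common aggregate criteria (mean error, RMSE, R-squared); all data and function names here are hypothetical, not from the MEASURE study:

```python
def validate(observed, predicted):
    """Compare model predictions with an independent observed data set."""
    n = len(observed)
    mean_obs = sum(observed) / n
    residuals = [o - p for o, p in zip(observed, predicted)]
    mean_error = sum(residuals) / n                      # bias in the estimates
    rmse = (sum(r * r for r in residuals) / n) ** 0.5    # spread of the errors
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1 - ss_res / ss_tot                             # explanatory power
    return {"mean_error": mean_error, "rmse": rmse, "r2": r2}

# invented hot stabilized emission rates (g/s) for one operating cycle
scores = validate([0.12, 0.30, 0.45, 0.51], [0.10, 0.28, 0.49, 0.50])
```

A model is then judged against an earlier model by comparing these scores on the same independent cycle data.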

Relevance: 20.00%

Abstract:

This paper presents the outcomes of a research project which focused on developing a set of surrogate parameters to evaluate urban stormwater quality using simulated rainfall. Use of surrogate parameters has the potential to enhance the rapid generation of urban stormwater quality data based on on-site measurements and thereby reduce resource-intensive laboratory analysis. The samples collected from rainfall simulations were tested for a range of physico-chemical parameters which are key indicators of nutrients, solids and organic matter. The analysis identified [total dissolved solids (TDS) and dissolved organic carbon (DOC)]; [total solids (TS) and total organic carbon (TOC)]; [turbidity (TTU)]; [electrical conductivity (EC)]; and [TTU and EC] as appropriate surrogate parameters for dissolved total nitrogen (DTN), total phosphorus (TP), total suspended solids (TSS), TDS and TS, respectively. Relationships obtained for DTN-TDS, DTN-DOC, and TP-TS demonstrated good portability potential. The portability of the relationship developed for TP and TOC was found to be unsatisfactory. The relationships developed for TDS-EC and TS-EC also demonstrated poor portability.
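As a hedged sketch of what such a surrogate relationship looks like in practice (not the project's actual regression), one can fit a lab-measured target such as DTN against a field-measurable surrogate such as TDS by ordinary least squares, then reuse the fitted line at another site to gauge "portability". All numbers below are invented for illustration:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

tds = [120.0, 150.0, 180.0, 210.0]   # total dissolved solids, mg/L (invented)
dtn = [1.7, 2.0, 2.3, 2.6]           # dissolved total nitrogen, mg/L (invented)

slope, intercept = fit_line(tds, dtn)

def predict_dtn(tds_value):
    """Surrogate relationship: estimate DTN from an on-site TDS reading."""
    return slope * tds_value + intercept
```

Portability is then assessed by checking how well `predict_dtn` tracks laboratory DTN measurements at a site other than the one used for fitting.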

Relevance: 20.00%

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions and the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
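As a hedged illustration of the distinction the study explores (not the authors' Georgia models), crash counts can be simulated from a Poisson-gamma (negative binomial) mixture in which the dispersion parameter is itself a function of a site covariate rather than being fixed across sites. All coefficient values below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(flow, b0=-6.0, b1=0.8, g0=1.0, g1=-0.3):
    """Poisson-gamma crash counts with covariate-dependent dispersion.

    Coefficients are hypothetical; 'flow' stands in for daily traffic volume.
    """
    mu = np.exp(b0 + b1 * np.log(flow))     # mean structure (expected crashes)
    alpha = np.exp(g0 + g1 * np.log(flow))  # dispersion varies with the covariate
    # gamma mixing gives E[y] = mu and Var[y] = mu + alpha * mu**2
    lam = rng.gamma(shape=1.0 / alpha, scale=mu * alpha)
    return rng.poisson(lam)

flow = rng.uniform(2000, 30000, size=5000)  # invented site traffic volumes
y = simulate_counts(flow)
```

Setting `g1 = 0` recovers the conventional fixed-dispersion negative binomial model; allowing `g1` to be nonzero is the kind of alternative dispersion function the study tests for significance.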

Relevance: 20.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and zero-inflated negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
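The core argument can be sketched in a few lines, hedged as an illustration rather than the authors' actual experiment: model crashes as Bernoulli trials with unequal, site-specific probabilities ("Poisson trials"). With site heterogeneity and low exposure, the pooled share of zero-count sites exceeds what a single Poisson fitted to the overall mean predicts, producing apparent "excess" zeros without any dual-state (safe/unsafe) process. All sample sizes and probabilities below are invented:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

n_sites, n_trials = 4000, 500                 # invented exposure settings
p = rng.uniform(0.0, 0.004, size=n_sites)     # unequal crash probabilities
counts = rng.binomial(n_trials, p)            # crash count per site

observed_zero_share = float(np.mean(counts == 0))
# zero probability predicted by one Poisson fitted to the pooled mean
poisson_zero_share = math.exp(-counts.mean())
```

Because the sites are heterogeneous, `observed_zero_share` comes out larger than `poisson_zero_share`, i.e. the data look zero-inflated relative to a misspecified single-rate Poisson even though every site follows an ordinary Bernoulli/Poisson process.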

Relevance: 20.00%

Abstract:

In Australian universities, journalism educators usually come to the academy from the journalism profession and consequently place a high priority on leading students to develop a career-focussed skill set. The changing nature of the technological, political and economic environments and the professional destinations of journalism graduates place demands on journalism curricula and educators alike. The profession is diverse, such that the better description is of many ‘journalisms’ rather than one ‘journalism’, with consequential pressures being placed on curricula to extend beyond the traditional skill set, where practical ‘writing’ and ‘editing’ skills dominate, to the incorporation of critical theory and the social construction of knowledge. A parallel set of challenges faces academic staff operating in a higher education environment where change is the only constant and research takes precedence over curriculum development. In this paper, three educators at separate universities report on their attempts to implement curriculum change to imbue graduates with better skills and attributes, such as enhanced teamwork, problem solving and critical thinking, to operate in the divergent environment of 21st century journalism. The paper uses narrative case study to illustrate the different approaches. Data collected from formal university student evaluations inform the narratives, along with rich but less formal qualitative data including anecdotal student comments and student reflective assessment presentations. Comparison of the three approaches illustrates the dilemmas academic staff face when teaching in disciplines that are impacted by rapid changes in technology requiring new pedagogical approaches. Recommendations for future directions are considered against the background of learning purpose.

Relevance: 20.00%

Abstract:

Purpose – The purpose of this paper is to examine the role of three strategies (organisational, business and information system) in the post-implementation of technological innovations. The findings reported in the paper are that improvements in operational performance can only be achieved by aligning technological innovation effectiveness with operational effectiveness. Design/methodology/approach – A combination of qualitative and quantitative methods was used to apply a two-stage methodological approach. Unstructured and semi-structured interviews, based on the findings of the literature, were used to identify key factors used in the survey instrument design. Confirmatory factor analysis (CFA) was used to examine structural relationships between the set of observed variables and the set of continuous latent variables. Findings – Initial findings suggest that organisations looking for improvements in operational performance through adoption of technological innovations need to align these innovations with the operational strategies of the firm. The impacts of operational effectiveness and technological innovation effectiveness are directly and significantly related to improved operational performance. Perception of increased operational effectiveness is positively and significantly correlated with improved operational performance. The findings suggest that technological innovation effectiveness is also positively correlated with improved operational performance. However, the study found that there is no direct influence of the strategies (organisational, business and information systems (IS)) on improvement of operational performance. Improved operational performance is the result of interactions between the implementation of strategies and related outcomes of both technological innovation and operational effectiveness.
Practical implications – Some organisations are using technological innovations such as enterprise information systems to innovate through improvements in operational performance. However, they often focus strategically only on effectiveness of technological innovation or on operational effectiveness. Such a focus will be detrimental to the enterprise in the long term. This research demonstrated that it is not possible to achieve maximum returns through technological innovations alone: dimensions of operational effectiveness need to be aligned with technological innovations to improve operational performance. Originality/value – No single technological innovation implementation can deliver a sustained competitive advantage; rather, an advantage is obtained through the capacity of an organisation to exploit technological innovations’ functionality on a continuous basis. To achieve sustainable results, technology strategy must be aligned with organisational and operational strategies. This research proposes the key performance objectives and dimensions that organisations should focus on to achieve strategic alignment. Research limitations/implications – The principal limitation of this study is that the findings are based on an investigation of a small sample. There is a need to explore the influence of scale prior to generalizing the results of this study.

Relevance: 20.00%

Abstract:

The structures of the anhydrous 1:1 proton-transfer compounds of isonipecotamide (4-carbamoylpiperidine) with picric acid and 3,5-dinitrosalicylic acid, namely 4-carbamoylpiperidinium 2,4,6-trinitrophenolate, C6H13N2O+ C6H2N3O7- (I), and 4-carbamoylpiperidinium 2-carboxy-4,6-dinitrophenolate, C6H13N2O+ C7H3N2O7- (two forms, the monoclinic alpha-polymorph (II) and the triclinic beta-polymorph (III)), have been determined at 200 K. All compounds form hydrogen-bonded structures: one-dimensional in (II), two-dimensional in (I) and three-dimensional in (III). In (I), the cations form centrosymmetric cyclic head-to-tail hydrogen-bonded homodimers [graph set R2/2(14)] through lateral duplex piperidinium N-H...O(amide) interactions. These dimers are extended into a two-dimensional network structure through further interactions with anion phenolate-O and nitro-O acceptors, including a direct symmetric piperidinium N-H...O(phenol),O(nitro) cation-anion association [graph set R2/1(6)]. The monoclinic polymorph (II) has a similar R2/1(6) cation-anion hydrogen-bonding interaction to (I), but with an additional conjoint symmetrical R1/2(4) interaction as well as head-to-tail piperidinium N-H...O(amide) hydrogen bonds and amide N-H...O(carboxyl) hydrogen bonds, giving a network structure which includes large R3/4(20) rings. The hydrogen bonding in the triclinic polymorph (III) is markedly different from that of monoclinic (II). The asymmetric unit contains two independent cation-anion pairs which associate through cyclic piperidinium N-H...O,O'(carboxyl) interactions [graph set R2/1(4)]. The cations also show the zig-zag head-to-tail piperidinium N-H...O(amide) hydrogen-bonded chain substructures found in (II), but in addition feature amide N-H...O(nitro), amide N-H...O(phenolate) and amide N-H...O(nitro) associations. As well, there is a centrosymmetric double-amide N-H...O(carboxyl) bridged bis(cation-anion) ring system [graph set R2/4(8)] in the three-dimensional framework.
The structures reported here demonstrate the utility of the isonipecotamide cation as a synthon with previously unrecognized potential for structure assembly applications. Furthermore, the structures of the two polymorphic 3,5-dinitrosalicylic acid salts show an unusual dissimilarity in hydrogen-bonding characteristics, considering that both were obtained from identical solvent systems.

Relevance: 20.00%

Abstract:

This paper presents early results from a pilot project which aims to investigate the relationship between the proprietary structure of small and medium-sized Italian family firms and their owners’ orientation towards a “business evaluation process”. Evidence from many studies points to the importance of family business in a worldwide economic environment: in Italy 93% of businesses are family firms, and 98% of them have fewer than 50 employees (Italian Association of Family Firms, 2004), so we judged family SMEs a relevant field of investigation. In this study we assume a broad definition of family business as “a firm whose control (50% of shares or voting rights) is closely held by the members of the same family” (Corbetta, 1995). “Business evaluation process” is intended here either as a “continuous evaluation process” (which is the expression of a well-developed managerial attitude) or as an “immediate valuation” (i.e. in the case of a new shareholder’s entrance, share exchange among siblings, etc.). We set two hypotheses to be tested in this paper. The first is “quantitative” and aims to verify whether the number of owners (independent variable) in a family firm is positively correlated with the business evaluation process. If a family firm is led by only one subject, it is more likely that personal values, culture and feelings affect their choices more than “purely economic opportunities”, so there is less concern about monitoring economic performance or the economic value of the firm. As the number of shareholders increases, economic aspects in managing the firm grow in importance over personal values and "value orientation" acquires a central role. The second hypothesis investigates if and to what extent the presence of “non-family members” among the owners affects their orientation to the business evaluation process.
Cramér’s V test was used to test the hypotheses; neither was confirmed by these early results. Next steps will involve an inferential analysis on a representative sample of the population.
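Cramér's V measures the strength of association between two categorical variables on a 0-to-1 scale, derived from the chi-square statistic of a contingency table. A minimal sketch of the statistic (the table counts below are invented, not the study's data):

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for an r x c contingency table of counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # expected counts under independence of rows and columns
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# e.g. single vs multiple owners (rows) against whether a business
# evaluation process is present (columns) -- counts invented
v = cramers_v([[12, 8], [9, 11]])
```

A value near 0 indicates no association, which is consistent with hypotheses failing to be confirmed; a value near 1 indicates a near-deterministic relationship.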

Relevance: 20.00%

Abstract:

A SNP genotyping method was developed for E. faecalis and E. faecium using the 'Minimum SNPs' program. SNP sets were interrogated using allele-specific real-time PCR. SNP-typing sub-divided clonal complexes 2 and 9 of E. faecalis and 17 of E. faecium, members of which cause the majority of nosocomial infections globally.
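The 'Minimum SNPs' program is generally described as selecting small SNP sets that maximise discrimination among known sequence types, commonly scored with Simpson's index of diversity. As a hedged sketch of that scoring idea only (the isolate profiles below are invented, not from this study):

```python
from collections import Counter

def simpsons_d(profiles):
    """Simpson's index of diversity: probability that two randomly chosen
    isolates have different allelic profiles at the chosen SNP set."""
    n = len(profiles)
    counts = Counter(profiles).values()
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# allelic profiles of six hypothetical isolates at a candidate 3-SNP set
profiles = ["AGT", "AGT", "ACT", "GCT", "GCT", "GGT"]
d = simpsons_d(profiles)
```

A candidate SNP set that leaves clinically important clonal complexes undivided scores lower, which is why sub-dividing complexes 2, 9 and 17 is the relevant benchmark here.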