949 results for Destination Positioning, Decision Sets, Longitudinal, Short Breaks
Abstract:
This research compared decision-making processes in six Chinese state-owned enterprises between 1985 and 1988. The research objectives were: a) to examine changes in managerial behaviour over the period 1985 to 1988, with a focus on decision-making; b) through this examination, to throw light on the means by which government policies on economic reform were implemented at the enterprise level; and c) to illustrate problems encountered in the decentralization programme, a major part of China's economic reform. The research was conducted by means of intensive interviews with more than eighty managers and a survey of documents relating to specific decisions. A total of sixty cases of decision-making were selected from five decision topics: purchasing of inputs, pricing of outputs, recruitment of labour, organizational change, and innovation, occurring in 1985 (or before) and in 1988/89. Data from the interviews were used to investigate environmental conditions, relations between the enterprise and its higher authority, interactions between management and the party system, the role of information, and the effectiveness of regulations and government policies on enterprise management. The analysis of the data indicates that the decision processes in the different enterprises had some similarities in regard to actor involvement, the flow of decision activities, interactions with the authorities, information usage, and the effect of regulations. Comparison of the same or similar decision contents over time indicates that the achievement of decentralization varied with the topic of decision. Managerial authority was delegated to enterprises when the authorities relaxed their control over resource allocation. When acquisition of necessary resources depended upon the planning system, or the decision matter was sensitive because it involved change to the institutional framework (e.g. the Party), a high degree of centralization was retained, resulting in only marginal change in managerial behaviour. The economic reform failed to increase the efficiency and effectiveness of decision-making, and the prevailing institutional frameworks were regarded as obstacles to change. The research argues that the decision process is likely to be contingent on the decision content more than on the organization. Three types of decision process are conceptualized, each related to a certain type of decision content. This argument attends to the perspectives of institution and power in a way that facilitates an elaboration of organizational analysis. The problems encountered in the reform of China's industrial enterprises are identified and discussed, and general recommendations for further reform policies are offered, based on the analysis of decision processes and managerial behaviour.
Abstract:
Many earlier studies, including the authors' own, show a positive relationship between management capabilities and firm competitiveness: better-performing and more proactive companies consistently have better-prepared, more skilled, and more risk-tolerant managers. It can also be observed that decisions in companies that are more successful from this point of view rely even more strongly than average on the rational approach, by which managers strive to select the optimal alternative for action. In this article the authors summarize the experience of the past 15 years of competitiveness research, with particular emphasis on the results of the most recent survey. The article summarizes the main findings of the Competitiveness Research Program with respect to the skills and capabilities of Hungarian managers and the decision-making approaches they use in their work. The results of the four surveys conducted in 1996, 1999, 2004 and 2009 are fairly stable over time: practice-minded behavior, professional expertise, and problem-solving skills are at the top of the list of the most developed skills of Hungarian executives. The rational approach is the most popular of the widespread decision-making models in the authors' sample, which is rather alarming, since the present turbulent economic environment may demand more adaptive and intuitive approaches.
Abstract:
In this study we analyze, for the fourth time, the role of management skills and decision-making approaches in shaping competitiveness. To understand what attributes and individual capabilities management must possess to be competitive itself, and to identify the strengths and weaknesses of the managers in the sample, we examined, following the traditions of earlier research, how the managers in the sample rate themselves on certain skills and capabilities, and we also reviewed which decision-making approaches they apply. The managers surveyed, like earlier respondents, are characterized above all by practice orientation, a high level of professional knowledge, and well-developed problem-solving skills, and, somewhat contrary to international trends, they prefer the rational decision-making approach. We have been analyzing the role of management skills and decision-making approaches in firm-level competitiveness for the fourth time. In order to understand what characteristics and individual capabilities a manager must have to be competitive, and what the main strengths and weaknesses of Hungarian managers are, we followed the methodologies of our earlier studies and examined the self-assessments of the skills and capabilities of the managers in our sample. The managers, similarly to the earlier results, are practice-oriented, possess up-to-date professional knowledge, and have good problem-solving skills. Our findings demonstrate that they prefer rational decision-making approaches, which contradicts international tendencies.
Abstract:
The purpose of this research was to demonstrate the applicability of reduced-size STR (Miniplex) primer sets to challenging samples and to provide the forensic community with new information regarding the analysis of degraded and inhibited DNA. The Miniplex primer sets were validated in accordance with guidelines set forth by the Scientific Working Group on DNA Analysis Methods (SWGDAM) in order to demonstrate the scientific validity of the kits. The Miniplex sets were also used in the analysis of DNA extracted from human skeletal remains and telogen hair. In addition, a method for evaluating the mechanism of PCR inhibition was developed using qPCR. The Miniplexes proved to be a robust and sensitive tool for the analysis of DNA with as little as 100 pg of template DNA. They also outperformed commercial kits in the analysis of DNA from human skeletal remains, with 64% of samples tested producing full profiles, compared to 16% for a commercial kit. The Miniplexes also produced amplification of nuclear DNA from human telogen hairs, with partial profiles obtained from as little as 60 pg of template DNA. These data suggest that smaller PCR amplicons may provide a useful alternative to mitochondrial DNA for forensic analysis of degraded DNA from human skeletal remains, telogen hairs, and other challenging samples. In the evaluation of inhibition by qPCR, the effects of amplicon length and primer melting temperature were evaluated in order to determine the binding mechanisms of different PCR inhibitors. Several mechanisms were indicated by the inhibitors tested, including binding of the polymerase, binding to the DNA, and effects on the processivity of the polymerase during primer extension. The qPCR data illustrated a method by which the type of inhibitor could be inferred in forensic samples, and some methods of reducing inhibition for specific inhibitors were demonstrated. An understanding of the mechanisms of the inhibitors found in forensic samples will allow analysts to select the proper methods for inhibition removal, or the type of analysis that can be performed, and will increase the information that can be obtained from inhibited samples.
Abstract:
During the past decade, there has been a dramatic increase in postsecondary institutions' provision of academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies for reapportioning course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates is a community college student (Pew Research Center, 2009), graduation rates, on average, remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment is the cognitive phenomenon commonly referred to as the spacing effect: the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multilevel, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than their peers in short seat-time classes (p < .05). Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate, to educational leaders, researchers, and institutional-research/business-intelligence professionals, the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
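As background on the discrete-time survival analysis named above: it is commonly implemented as a logistic regression on a person-period data set, where each student contributes one row per term at risk and the outcome is the per-term hazard of withdrawal. Below is a minimal, self-contained sketch in Python with synthetic data; the column names (terms_observed, long_seat_time, and so on) are illustrative assumptions, not the study's actual variables or model.

```python
# Minimal discrete-time survival sketch: expand each student's record
# into person-period rows (one row per term at risk), then model the
# per-term hazard of withdrawal with logistic regression.
import pandas as pd
import statsmodels.formula.api as smf

# Toy records: terms observed, whether the final observed term ended
# in withdrawal, and a long/short seat-time indicator (hypothetical).
students = pd.DataFrame({
    "student": [1, 2, 3, 4, 5, 6],
    "terms_observed": [2, 4, 1, 3, 3, 4],
    "withdrew": [1, 1, 1, 0, 0, 0],
    "long_seat_time": [1, 0, 1, 0, 1, 0],
})

rows = []
for _, s in students.iterrows():
    for t in range(1, int(s["terms_observed"]) + 1):
        rows.append({
            "term": t,
            # The event fires only in the final observed term.
            "event": int(t == s["terms_observed"] and s["withdrew"] == 1),
            "long_seat_time": s["long_seat_time"],
        })
person_period = pd.DataFrame(rows)

# Discrete-time hazard model: baseline trend in term plus the
# seat-time covariate; a positive coefficient on long_seat_time
# corresponds to a higher per-term hazard of withdrawal.
fit = smf.logit("event ~ term + long_seat_time", data=person_period).fit(disp=0)
print(fit.params)
```

A positive coefficient on the seat-time indicator in such a model would correspond to the earlier and more frequent withdrawal reported for long seat-time classes in the study.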
Abstract:
Given a prime power q, define c(q) as the minimum cardinality of a subset H of F_q^3 with the following property: every vector in this space differs in at most one coordinate from a multiple of a vector in H. In this work, we introduce two extremal problems in combinatorial number theory, aiming to discuss a known connection between the corresponding coverings and sum-free sets. We also provide several bounds on these maps, which yield new classes of coverings, improving the previous upper bound on c(q).
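For readability, the defining property of c(q) can be restated in LaTeX as a direct transcription of the sentence above, with d denoting Hamming distance:

```latex
\[
  c(q) \;=\; \min \Bigl\{\, |H| \;:\; H \subseteq \mathbb{F}_q^{3},\;
  \forall v \in \mathbb{F}_q^{3} \;\exists\, h \in H,\ \lambda \in \mathbb{F}_q
  \ \text{ with }\ d(v, \lambda h) \le 1 \,\Bigr\}
\]
```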
Abstract:
This paper presents results of research into the use of the Bellman-Zadeh approach to decision making in a fuzzy environment for solving multicriteria power engineering problems. The application of the approach conforms to the principle of guaranteed result and provides constructive lines for obtaining harmonious solutions in a computationally effective way on the basis of solving associated maxmin problems. The presented results are universally applicable and are already being used to solve diverse classes of power engineering problems, illustrated here by problems of power and energy shortage allocation, power system operation, optimization of network configuration in distribution systems, and energetically effective voltage control in distribution systems.
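For reference, the maxmin construction at the core of the Bellman-Zadeh approach, in its standard textbook form (the paper's specific power engineering models are not reproduced here): each of the q criteria is mapped to a membership function mu_p, and a harmonious solution maximizes the intersection of these functions:

```latex
\[
  \mu_D(x) \;=\; \min_{p = 1, \dots, q} \mu_p(x),
  \qquad
  x^{*} \;=\; \arg\max_{x \in X} \, \min_{p = 1, \dots, q} \mu_p(x)
\]
```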
Abstract:
The present paper proposes a flexible consensus scheme for group decision making, which allows one to obtain a consistent collective opinion from information provided by each expert in terms of multigranular fuzzy estimates. It is based on a linguistic hierarchical model with multigranular sets of linguistic terms, the choice of the most suitable set being a prerogative of each expert. From the human viewpoint, such a model is advantageous because it permits each expert to utilize linguistic terms that reflect more adequately the level of uncertainty intrinsic to his evaluation. From the operational viewpoint, its advantage lies in the fact that it allows one to express the linguistic information in a unique domain, without loss of information, during the discussion process. The proposed consensus scheme supposes that the moderator can interfere in the discussion process in different ways: the intervention can be a request to an expert to update his opinion, or an adjustment of the weight of each expert's opinion. An optimal adjustment can be achieved by executing an optimization procedure that searches for the weights that maximize a corresponding soft consensus index. In order to demonstrate the usefulness of the presented consensus scheme, a technique for multicriteria analysis, based on fuzzy preference relation modeling, is utilized for solving a hypothetical enterprise strategy planning problem generated with the use of the Balanced Scorecard methodology.
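The weight-adjustment intervention can be sketched as a small constrained optimization over the simplex of expert weights. In the hedged Python sketch below, the soft consensus index (mean closeness of each expert's opinion vector to the weighted collective opinion) is an illustrative assumption, not the index defined in the paper:

```python
# Hedged sketch of the moderator's weight-adjustment step: search for
# expert weights (summing to 1) that maximize a soft consensus index.
import numpy as np
from scipy.optimize import minimize

opinions = np.array([   # rows: experts; columns: evaluated items,
    [0.8, 0.2, 0.5],    # already unified into one numeric domain
    [0.7, 0.3, 0.6],
    [0.2, 0.9, 0.1],    # a dissenting expert
])
n = opinions.shape[0]

def consensus(w):
    collective = w @ opinions                     # weighted group opinion
    dist = np.abs(opinions - collective).mean(1)  # per-expert discrepancy
    return 1.0 - dist.mean()                      # soft consensus in [0, 1]

res = minimize(
    lambda w: -consensus(w),                      # maximize via negation
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
print(res.x, consensus(res.x))
```

Running this tends to shift weight away from the dissenting expert, which is the intended effect of the moderator's adjustment.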
Abstract:
This paper presents results of research related to multicriteria decision making under information uncertainty. The Bellman-Zadeh approach to decision making in a fuzzy environment is utilized for analyzing multicriteria optimization models (<X, M> models) under deterministic information. Its application conforms to the principle of guaranteed result and provides constructive lines for obtaining harmonious solutions on the basis of analyzing associated maxmin problems. This circumstance permits one to generalize the classic approach to handling the uncertainty of quantitative information (based on constructing and analyzing payoff matrices reflecting the effects obtainable for different combinations of solution alternatives and the so-called states of nature) from monocriteria decision making to multicriteria problems. Considering that the uncertainty of information can produce considerable decision uncertainty regions, the resolving capacity of this generalization does not always permit one to obtain unique solutions. Taking this into account, the proposed general scheme of multicriteria decision making under information uncertainty also includes the construction and analysis of so-called <X, R> models (which contain fuzzy preference relations as criteria of optimality) as a means for the subsequent contraction of the decision uncertainty regions. The results are of a universal character and are illustrated by a simple example.
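The classic monocriteria construction referred to above can be written compactly. Under the principle of guaranteed result (the Wald criterion, a standard formulation consistent with the principle named in the abstract), the payoff matrix F(x_k, s_l) over solution alternatives x_k and states of nature s_l yields the choice:

```latex
\[
  x^{*} \;=\; \arg\max_{x_k \in X} \, \min_{s_l \in S} F(x_k, s_l)
\]
```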
Abstract:
This paper presents new insights and novel algorithms for strategy selection in sequential decision making with partially ordered preferences; that is, where some strategies may be incomparable with respect to expected utility. We assume that incomparability amongst strategies is caused by indeterminacy/imprecision in probability values. We investigate six criteria for consequentialist strategy selection: Gamma-Maximin, Gamma-Maximax, Gamma-Maximix, Interval Dominance, Maximality and E-admissibility. We focus on the popular decision tree and influence diagram representations. The algorithms resort to linear/multilinear programming; we describe their implementation and experiments.
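Three of the six criteria can be illustrated directly once each strategy's expected utility has been reduced to an interval of lower and upper expectations over the imprecise probability (credal) set. The Python sketch below uses toy numbers; it is not the paper's tree/influence-diagram algorithms, which require linear/multilinear programming to compute those bounds in the first place:

```python
# Illustrative selection rules on precomputed expectation intervals
# [lower, upper] for each strategy. Toy numbers only.
strategies = {           # strategy: (lower expectation, upper expectation)
    "A": (3.0, 5.0),
    "B": (4.0, 4.5),
    "C": (1.0, 6.0),
}

# Gamma-Maximin: maximize the lower (worst-case) expectation.
gamma_maximin = max(strategies, key=lambda s: strategies[s][0])

# Gamma-Maximax: maximize the upper (best-case) expectation.
gamma_maximax = max(strategies, key=lambda s: strategies[s][1])

# Interval Dominance: keep a strategy unless some other strategy's
# lower expectation exceeds its upper expectation.
undominated = [
    s for s in strategies
    if not any(strategies[t][0] > strategies[s][1] for t in strategies)
]

print(gamma_maximin, gamma_maximax, undominated)  # B C ['A', 'B', 'C']
```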
Abstract:
Mixed models have become important in analyzing the results of experiments, particularly those that require more complicated models (e.g., those that involve longitudinal data). This article describes a method for deriving the terms in a mixed model. Our approach extends an earlier method by Brien and Bailey to explicitly identify terms for which autocorrelation and smooth trend arising from longitudinal observations need to be incorporated in the model. At the same time we retain the principle that the model used should include, at least, all the terms that are justified by the randomization. This is done by dividing the factors into sets, called tiers, based on the randomization and determining the crossing and nesting relationships between factors. The method is applied to formulate mixed models for a wide range of examples. We also describe the mixed model analysis of data from a three-phase experiment to investigate the effect of time of refinement on Eucalyptus pulp from four different sources. Cubic smoothing splines are used to describe differences in the trend over time and unstructured covariance matrices between times are found to be necessary.
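As a generic illustration of the longitudinal mixed-model class discussed (not the tier-based term derivation or the smoothing-spline analysis of the pulp data), here is a minimal random-intercept model over repeated measures in Python, with simulated data:

```python
# Minimal longitudinal mixed-model sketch: repeated measurements per
# subject, a fixed effect of time, and a random intercept per subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
subjects, times = 10, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(subjects), times),
    "time": np.tile(np.arange(times), subjects),
})
# Simulated response: common time trend plus a subject-level shift.
subject_effect = rng.normal(0.0, 1.0, subjects)
df["y"] = (2.0 + 0.5 * df["time"]
           + subject_effect[df["subject"].to_numpy()]
           + rng.normal(0.0, 0.3, len(df)))

# Random-intercept mixed model: y ~ time, grouped by subject.
fit = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit()
print(fit.summary())
```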
Abstract:
Observational longitudinal research is particularly useful for assessing etiology and prognosis and for providing evidence for clinical decision making. However, there are no structured reporting requirements for studies of this design to assist authors, editors, and readers. The authors developed and tested a checklist of criteria related to threats to the internal and external validity of observational longitudinal studies. The checklist criteria concerned recruitment, data collection, biases, and data analysis and descriptive issues relevant to study rationale, study population, and generalizability. Two raters independently assessed 49 randomly selected articles describing stroke research published from 1999 to 2003 in six journals: American Journal of Epidemiology, Journal of Epidemiology and Community Health, Stroke, Annals of Neurology, Archives of Physical Medicine and Rehabilitation, and American Journal of Physical Medicine and Rehabilitation. On average, 17 of the 33 checklist criteria were reported. Criteria describing the study design were better reported than those related to internal validity. No relation was found between study type (etiologic or prognostic) or word count and quality of reporting. A flow diagram for summarizing participant flow through a study was developed. Editors and authors should consider using a checklist and flow diagram when reporting on observational longitudinal research.
Abstract:
Background and Purpose: This report describes trends in the key indices of cerebrovascular disease over 6 years from the end of the 1980s in a geographically defined segment of the city of Perth, Western Australia. Methods: Identical methods were used to find and assess all cases of suspected stroke in a population of approximately 134 000 residents in a triangular area of the northern suburbs of Perth. Case fatality was measured as vital status at 28 days after the onset of symptoms. Data for first-ever strokes and for all strokes for equivalent periods of 12 months in 1989-1990 and 1995-1996 were compared by age-standardized rates and proportions and by Poisson regression. Results: There were 355 strokes in 328 patients, of which 251 were first-ever strokes (71%), in 1989-1990, and 290 events in 281 patients, of which 213 were first-ever strokes (73%), in 1995-1996. In Poisson models including age and period, the overall trends in the incidence of both first-ever strokes (rate ratio = 0.75; 95% confidence limits, 0.63, 0.90) and all strokes (rate ratio = 0.73; 95% confidence limits, 0.62, 0.85) were statistically significant, but only the changes in men were independently significant. Case fatality did not change, and the balance between hemorrhagic and occlusive strokes in 1995-1996 was almost indistinguishable from that observed in 1989-1990. Conclusions: Our results, the only longitudinal population-based data available for Australia on key indices of stroke, suggest that it is a change in the frequency of stroke, rather than in its outcome, that is chiefly responsible for the national fall in mortality from cerebrovascular disease.
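For readers less familiar with the Poisson models used here, the reported rate ratios and confidence limits follow from the period coefficient in a log-linear model of stroke counts; this is the standard formulation, not necessarily the authors' exact specification:

```latex
% beta_1 compares 1995-1996 with 1989-1990, adjusting for age; the
% rate ratio and its 95% confidence limits follow by exponentiation.
\[
  \log \mathbb{E}[Y] = \log(\text{person-years}) + \beta_0
      + \beta_1\,\text{period} + \beta_2\,\text{age},
  \qquad
  \text{RR} = e^{\beta_1},\quad
  \text{95\% CL} = e^{\beta_1 \pm 1.96\,\mathrm{SE}(\beta_1)}
\]
```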
Abstract:
Dual-energy X-ray absorptiometry (DXA) is a widely used method for measuring bone mineral in the growing skeleton. Because scan analysis in children offers a number of challenges, we compared DXA results using six analysis methods at the total proximal femur (PF) and five methods at the femoral neck (FN). In total we assessed 50 scans (25 boys, 25 girls) from two separate studies for cross-sectional differences in bone area, bone mineral content (BMC), and areal bone mineral density (aBMD), and for percentage change over the short term (8 months) and long term (7 years). At the proximal femur, for the short-term longitudinal analysis, there was an approximately 3.5% greater change in bone area and BMC when the global region of interest (ROI) was allowed to increase in size between years than when the global ROI was held constant. Trend analysis showed a significant (p < 0.05) difference between scan analysis methods for bone area and BMC across 7 years. At the femoral neck, cross-sectional analysis using a narrower (than default) ROI, without change in location, resulted in 12.9% and 12.6% smaller bone area and BMC, respectively (both p < 0.001). Changes in FN area and BMC over 8 months were significantly greater (2.3%, p < 0.05) using the narrower FN ROI rather than the default ROI. Similarly, the 7-year longitudinal data revealed that differences between scan analysis methods were greatest when the narrower FN ROI was maintained across all years (p < 0.001). For aBMD, there were no significant differences in group means between analysis methods at either the PF or the FN. Our findings show the need to standardize the analysis of proximal femur DXA scans in growing children.
Abstract:
Associations between self-reported 'low iron', general health and well-being, vitality, and tiredness in women were examined using physical (PCS) and mental (MCS) component summary scores and vitality (VT) scores from the MOS short-form survey (SF-36). 14,762 young (18-23 years) and 14,072 mid-age (45-50 years) women, randomly selected from the national health insurance commission (Medicare) database, completed a baseline mailed self-report questionnaire, and 12,328 mid-age women completed a follow-up questionnaire 2 years later. Young and mid-age women who reported (ever) having had 'low iron' reported significantly lower mean PCS, MCS and VT scores, and a greater prevalence of 'constant tiredness', at baseline than women with no history of iron deficiency [Differences: young PCS = -2.2, MCS = -4.8, VT = -8.7; constant tiredness: 67% vs. 45%; mid-age PCS = -1.4, MCS = -3.1, VT = -5.9; constant tiredness: 63% vs. 48%]. After adjusting for number of children, chronic conditions, symptoms, and socio-demographic variables, mean PCS, MCS and VT scores for mid-age women at follow-up were significantly lower for women who reported recent iron deficiency (in the last 2 years) than for women who reported past iron deficiency or no history of iron deficiency [Means: PCS - recent = 46.6, past = 47.8, never = 47.7; MCS - recent = 45.4, past = 46.9, never = 47.4; VT - recent = 54.8, past = 57.6, never = 58.6]. The adjusted mean changes in PCS, MCS and VT scores between baseline and follow-up were also significantly lower among mid-age women who reported iron deficiency only in the last 2 years (i.e. recent iron deficiency) [Mean change: PCS = -3.2; MCS = -2.1; VT = -4.2]. These results suggest that iron deficiency is associated with decreased general health and well-being and increased fatigue.