Abstract:
This work is the result of an action-research-type study of the diversification effort of part of a major U.K. industrial company. Work in contingency theory concerning the impact of environmental factors on organizational design, and the systemic model of viable systems put forward by Stafford Beer, form the theoretical basis of the work. The two streams of thought are compared and found to offer similar conclusions about the design of effective organizations. These findings are taken as the framework for an analysis both of organization structures for promoting innovation described in the literature, and of those employed by the company for this purpose in recent years. Much attention is given to the use of venture groups, and conclusions are drawn on particular factors which may influence their success or failure. Both theoretical considerations and the examination of the company's recent experience suggested that the formation of the policy of diversification, as well as the method of its implementation, might affect its outcome. Attention is therefore focused on the policy-making and planning process, and in particular on possible problems that this process could generate in a multi-division company. The view finally taken of the diversification effort is that it should be regarded as a learning system. This view helps to expose some ambiguities in the concepts of success and failure in this area, and demonstrates considerable weaknesses in traditional project evaluation procedures.
Abstract:
We have developed a novel multilocus sequence typing (MLST) scheme and database (http://pubmlst.org/pacnes/) for Propionibacterium acnes based on the analysis of seven core housekeeping genes. The scheme, which was validated against previously described antibody, single locus and random amplification of polymorphic DNA typing methods, displayed excellent resolution and differentiated 123 isolates into 37 sequence types (STs). An overall clonal population structure was detected with six eBURST groups representing the major clades I, II and III, along with two singletons. Two highly successful and global clonal lineages, ST6 (type IA) and ST10 (type IB1), representing 64 % of this current MLST isolate collection were identified. The ST6 clone and closely related single locus variants, which comprise a large clonal complex CC6, dominated isolates from patients with acne, and were also significantly associated with ophthalmic infections. Our data therefore support an association between acne and P. acnes strains from the type IA cluster and highlight the role of a widely disseminated clonal genotype in this condition. Characterization of type I cell surface-associated antigens that are not detected in ST10 or strains of type II and III identified two dermatan-sulphate-binding proteins with putative phase/antigenic variation signatures. We propose that the expression of these proteins by type IA organisms contributes to their role in the pathophysiology of acne and helps explain the recurrent nature of the disease. The MLST scheme and database described in this study should provide a valuable platform for future epidemiological and evolutionary studies of P. acnes.
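The core bookkeeping of an MLST scheme — mapping a seven-locus allele profile to a sequence type and finding single-locus variants for eBURST-style grouping — can be illustrated with a short sketch. The locus names, allele profiles and ST labels below are invented placeholders, not records from the pubmlst.org/pacnes database.

```python
# Minimal sketch of MLST sequence-type assignment from allele profiles.
# Locus names, profiles and ST labels are hypothetical, not pubmlst.org data.

LOCI = ["locusA", "locusB", "locusC", "locusD", "locusE", "locusF", "locusG"]  # assumed names

# Known profiles: tuple of allele numbers (one per locus) -> sequence type (ST)
ST_DATABASE = {
    (1, 1, 1, 2, 1, 1, 1): "ST6",
    (1, 2, 1, 2, 1, 1, 1): "ST10",
}

def assign_st(profile):
    """Return the ST for an allele profile, or 'novel' if it is not in the database."""
    if len(profile) != len(LOCI):
        raise ValueError(f"expected {len(LOCI)} alleles, got {len(profile)}")
    return ST_DATABASE.get(tuple(profile), "novel")

def single_locus_variants(profile, database=ST_DATABASE):
    """STs differing from `profile` at exactly one locus (the basis of eBURST grouping)."""
    return [st for known, st in database.items()
            if sum(a != b for a, b in zip(profile, known)) == 1]

print(assign_st((1, 1, 1, 2, 1, 1, 1)))               # ST6
print(single_locus_variants((1, 1, 1, 2, 1, 1, 1)))   # ['ST10'] in this toy database
```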
Abstract:
The pattern of senile plaques was investigated in various brain regions of six SDAT brains. In 91 pattern analyses, the regularly spaced clump was the most common pattern, found in 64.8% of analyses. Clumping due to large aggregations of uncored plaques in sulci was also common. Regularly spaced clumps were equally common in the hippocampus and neocortex. The pattern of plaques varied in different tissue sections from the same brain region. Cored and uncored plaques presented a similar range of patterns, but their pattern varied when they were both present in the same tissue section. Both clump diameter and the intensity of clumping were positively correlated with cored but unrelated to uncored plaque density. Plaques may develop in regular clumps on subcortical afferents, and during development of the disease the clumps may spread laterally and ultimately coalesce.
Abstract:
Large prospective trials designed to assess the relationship between metabolic control and CV outcomes in type 2 diabetes have entered a new phase of scrutiny due to strict requirements imposed by the FDA to assess new anti-diabetic agents. So what have we learned from recently completed trials and what do we expect to learn from on-going trials?
Abstract:
Background - The objective of this study was to investigate the association between ethnicity and health related quality of life (HRQoL) in patients with type 2 diabetes. Methods - The EuroQol EQ-5D measure was administered to 1,978 patients with type 2 diabetes in the UK Asian Diabetes Study (UKADS): 1,486 of south Asian origin (Indian, Pakistani, Bangladeshi or other south Asian) and 492 of white European origin. Multivariate regression using ordinary least squares (OLS), Tobit, fractional logit and Censored Least Absolute Deviations estimators was used to estimate the impact of ethnicity on both visual analogue scale (VAS) and utility scores for the EuroQol EQ-5D. Results - Mean EQ-5D VAS and utility scores were lower among south Asians with diabetes compared to the white European population; the unadjusted effect on the mean EQ-5D VAS score was −7.82 (Standard error [SE] = 1.06, p < 0.01) and on the EQ-5D utility score was −0.06 (SE = 0.02, p < 0.01) (OLS estimator). After controlling for socio-demographic and clinical confounders, the adjusted effect on the EQ-5D VAS score was −9.35 (SE = 2.46, p < 0.01) and on the EQ-5D utility score was 0.06 (SE = 0.04), although the latter was not statistically significant. Conclusions - There was a large and statistically significant association between south Asian ethnicity and lower EQ-5D VAS scores. In contrast, there was no significant difference in EQ-5D utility scores between the south Asian and white European sub-groups. Further research is needed to explain the differences in effects on subjective EQ-5D VAS scores and population-weighted EQ-5D utility scores in this context.
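The unadjusted and adjusted effects reported above come from regressing EQ-5D outcomes on an ethnicity indicator, with and without confounders. A minimal sketch of the OLS version of that specification is shown below using statsmodels; the data file and column names (eq5d_vas, south_asian as a 0/1 indicator, age, sex, hba1c, duration) are placeholders, and only the OLS estimator is sketched, since the Tobit, fractional logit and CLAD estimators used in the study are not part of statsmodels.

```python
# Sketch of the OLS specification for the ethnicity effect on EQ-5D VAS scores.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukads_eq5d.csv")  # assumed columns: eq5d_vas, south_asian (0/1), age, sex, hba1c, duration

# Unadjusted effect of south Asian ethnicity on the VAS score
unadjusted = smf.ols("eq5d_vas ~ south_asian", data=df).fit()

# Adjusted effect, controlling for socio-demographic and clinical confounders
adjusted = smf.ols("eq5d_vas ~ south_asian + age + sex + hba1c + duration", data=df).fit()

print(unadjusted.params["south_asian"], unadjusted.bse["south_asian"])
print(adjusted.params["south_asian"], adjusted.bse["south_asian"])
```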
Abstract:
Introduction: The antihyperglycaemic agent metformin is widely used in the treatment of type 2 diabetes. Data from the UK Prospective Diabetes Study and retrospective analyses of large healthcare databases concur that metformin reduces the incidence of myocardial infarction and increases survival in these patients. This apparently vasoprotective effect appears to be independent of the blood glucose-lowering efficacy. Effects of metformin: Metformin has long been known to reduce the development of atherosclerotic lesions in animal models, and clinical studies have shown the drug to reduce surrogate measures such as carotid intima-media thickness. The anti-atherogenic effects of metformin include reductions in insulin resistance, hyperinsulinaemia and obesity. There may be modest favourable effects against dyslipidaemia, reductions in pro-inflammatory cytokines and monocyte adhesion molecules, and improved glycation status, benefiting endothelial function in the macro- and micro-vasculature. Additionally metformin exerts anti-thrombotic effects, contributing to overall reductions in athero-thrombotic risk in type 2 diabetic patients. © 2008 Springer Science+Business Media, LLC.
Abstract:
New global models of society of the neural-network type are considered, together with the hierarchical structure of society and the mentality of the individual. A way of incorporating the anticipatory (prognostic) ability of the individual into the model is proposed. Some implementations of the approach for real tasks, and further research problems, are described. The multivaluedness of models and solutions is discussed, as is an analogy with sensory-motor systems. New problems for the theory and applications of neural networks are described.
Abstract:
IMPORTANCE: Metformin is widely viewed as the best initial pharmacological option to lower glucose concentrations in patients with type 2 diabetes mellitus. However, the drug is contraindicated in many individuals with impaired kidney function because of concerns of lactic acidosis. OBJECTIVE: To assess the risk of lactic acidosis associated with metformin use in individuals with impaired kidney function. EVIDENCE ACQUISITION: In July 2014, we searched the MEDLINE and Cochrane databases for English-language articles pertaining to metformin, kidney disease, and lactic acidosis in humans between 1950 and June 2014. We excluded reviews, letters, editorials, case reports, small case series, and manuscripts that did not directly pertain to the topic area or that met other exclusion criteria. Of an original 818 articles, 65 were included in this review, including pharmacokinetic/metabolic studies, large case series, retrospective studies, meta-analyses, and a clinical trial. RESULTS: Although metformin is renally cleared, drug levels generally remain within the therapeutic range and lactate concentrations are not substantially increased when used in patients with mild to moderate chronic kidney disease (estimated glomerular filtration rates, 30-60 mL/min per 1.73 m2). The overall incidence of lactic acidosis in metformin users varies across studies from approximately 3 per 100 000 person-years to 10 per 100 000 person-years and is generally indistinguishable from the background rate in the overall population with diabetes. Data suggesting an increased risk of lactic acidosis in metformin-treated patients with chronic kidney disease are limited, and no randomized controlled trials have been conducted to test the safety of metformin in patients with significantly impaired kidney function. Population-based studies demonstrate that metformin may be prescribed counter to prevailing guidelines suggesting a renal risk in up to 1 in 4 patients with type 2 diabetes mellitus; in most reports, such use has not been associated with increased rates of lactic acidosis. Observational studies suggest a potential benefit from metformin on macrovascular outcomes, even in patients with prevalent renal contraindications for its use. CONCLUSIONS AND RELEVANCE: Available evidence supports cautious expansion of metformin use in patients with mild to moderate chronic kidney disease, as defined by estimated glomerular filtration rate, with appropriate dosage reductions and careful follow-up of kidney function.
Abstract:
Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013
Abstract:
In recent years, there has been a growing realisation that beyond the realm of legitimate entrepreneurship is a large, hidden enterprise culture composed of entrepreneurs conducting some or all of their trade off-the-books. Until now, however, few have evaluated how many entrepreneurs start up their ventures trading off-the-books and why they do so. Reporting face-to-face interviews conducted in Ukraine during 2005-2006 with 331 entrepreneurs, the finding is not only that the vast majority (90%) operate partially or wholly off-the-books, but also that they are not all driven into entrepreneurship by necessity, as a last resort and as a survival strategy. The findings reveal that many are willing rather than reluctant entrepreneurs, and that even those who were initially reluctant and ventured into it out of necessity became more willing entrepreneurs over time as their businesses became established. The paper concludes by discussing the implications of these findings for both further research and public policy. © 2010 Wiley Periodicals, Inc.
Abstract:
Cuban Americans, a minority Hispanic subgroup, have a high prevalence of type 2 diabetes. Persons with diabetes experience a higher rate of coronary heart disease (CHD) compared to those without diabetes. The objectives of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) are to investigate the risk factors of CHD and the etiology of diabetes among diabetics of minority ethnic populations. No information is available on the etiology of CHD risks for Cuban Americans. This cross-sectional study compared Cuban Americans with (N = 79) and without (N = 80) type 2 diabetes residing in South Florida. Data on risk factors of CHD and type 2 diabetes were collected using sociodemographic, smoking habit, Rose Angina, Modifiable Activity, and Willett food frequency questionnaires. Anthropometrics and blood pressure (BP) were recorded. Glucose, glycated hemoglobin, lipid profile, homocysteine, and C-reactive protein were assessed in fasting blood. Diabetics reported a significantly higher rate of angina symptoms than non-diabetics (P = 0.008). After adjusting for age and gender, diabetics had significantly (P < 0.001) larger waist circumference and higher systolic BP than non-diabetics. There was no significant difference in major nutrient intakes between the groups. One quarter of subjects, both diabetics and non-diabetics, exceeded the recommended intake of percent calories from total fat, almost 60% had cholesterol intakes >200 mg/d, and more than 60% had fiber intakes <20 g/d. The pattern of physical activity did not differ between groups, though it was much below the recommended level. After adjusting for age and gender, diabetics had significantly (P < 0.001) higher levels of blood glucose, glycated hemoglobin, triglycerides, and homocysteine than non-diabetics. In contrast, diabetics had significantly (P < 0.01) lower levels of high-density lipoprotein cholesterol (HDL-C). Multivariate logistic regression analyses showed that increasing age, male gender, large waist circumference, lack of acculturation, and high levels of triglycerides were independent risk factors of type 2 diabetes. In contrast, moderate alcohol consumption conferred protection against diabetes. The study identified several risk factors of CHD and diabetes among Cuban Americans. Health care providers are encouraged to practice ethno-specific preventive measures to lower the burden of CHD and diabetes in Cuban Americans.
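The independent risk factors reported above come from multivariate logistic regression. A hedged sketch of that kind of model, expressed as odds ratios, is shown below with statsmodels; the data file and column names are invented for illustration and are not the study's actual variables.

```python
# Sketch of a multivariate logistic regression for type 2 diabetes risk factors.
# The data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cuban_american_chd.csv")  # assumed columns: diabetes (0/1), age, male (0/1),
                                            # waist_cm, acculturation, triglycerides, alcohol_moderate (0/1)

model = smf.logit(
    "diabetes ~ age + male + waist_cm + acculturation + triglycerides + alcohol_moderate",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios, conf_int], axis=1))
```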
Abstract:
Hearing the news of the death of Diana, Princess of Wales, in a traffic accident is taken as an analogue for being a percipient but uninvolved witness to a crime, or a witness to another person's sudden confession to some illegal act. This event (known in the literature as a “reception event”) has previously been hypothesized to cause one to form a special type of memory commonly known as a “flashbulb memory” (FB) (Brown and Kulik, 1977). FBs are hypothesized to be especially resilient against forgetting, highly detailed including peripheral details, clear, and inspiring great confidence in the individual in their accuracy. FBs are dependent for their formation upon surprise, emotional valence, and impact, or consequentiality to the witness, of the initiating event. FBs are thought to be enhanced by frequent rehearsal. FBs are very important in the context of criminal investigation and litigation in that investigators and jurors usually place great store in witnesses, regardless of their actual accuracy, who claim to have a clear and complete recollection of an event, and who express this confidently. Therefore, the lives, or at least the freedom, of criminal defendants, and the fortunes of civil litigants, hang on the testimony of witnesses professing to have FBs. In this study, which includes a large and diverse sample (N = 305), participants were surveyed within 2–4 days after hearing of the fatal accident, and again at intervals of 2 and 4 weeks and 6, 12, and 18 months. Contrary to the FB hypothesis, I found that participants' FBs degraded over time, beginning at least as early as two weeks post event. At about 12 months the memory trace stabilized, resisting further degradation. Repeated interviewing did not have any negative effect upon accuracy, contrary to concerns in the literature. Analysis by correlation and regression indicated no effect or predictive power for participant age, emotionality, confidence, or student status as related to accuracy of recall; nor was participant confidence in accuracy predicted by emotional impact, as hypothesized. Results also indicate that, contrary to the notions of investigators and jurors, witnesses become more inaccurate over time regardless of their confidence in their memories, even for highly emotional events.
Abstract:
Large-extent vegetation datasets that co-occur with long-term hydrology data provide new ways to develop biologically meaningful hydrologic variables and to determine plant community responses to hydrology. We analyzed the suitability of different hydrological variables to predict vegetation in two water conservation areas (WCAs) in the Florida Everglades, USA, and developed metrics to define realized hydrologic optima and tolerances. Using vegetation data spatially co-located with long-term hydrological records, we evaluated seven variables describing water depth, hydroperiod length, and number of wet/dry events; each variable was tested for 2-, 4- and 10-year intervals for Julian annual averages and environmentally-defined hydrologic intervals. Maximum length and maximum water depth during the wet period calculated for environmentally-defined hydrologic intervals over a 4-year period were the best predictors of vegetation type. Proportional abundance of vegetation types along hydrological gradients indicated that communities had different realized optima and tolerances across WCAs. Although in both WCAs, the trees/shrubs class was on the drier/shallower end of hydrological gradients, while slough communities occupied the wetter/deeper end, the distribution of Cladium, Typha, wet prairie and Salix communities, which were intermediate for most hydrological variables, varied in proportional abundance along hydrologic gradients between WCAs, indicating that realized optima and tolerances are context-dependent.
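Hydroperiod-style predictors of the kind described above (for example, maximum wet-period length and maximum water depth over a recent multi-year interval) can be computed directly from a daily stage record. The sketch below assumes a pandas series of daily water depths; the wet/dry threshold of 0 cm, the 4-year window, and the file and column names are illustrative assumptions, not the WCA definitions used in the study.

```python
# Sketch: hydroperiod-style metrics from a daily water-depth record.
# The input file, column names, 0 cm wet/dry threshold and 4-year window are assumptions.
import numpy as np
import pandas as pd

depths = pd.read_csv("stage_record.csv", parse_dates=["date"], index_col="date")["depth_cm"]
window = depths[depths.index >= depths.index.max() - pd.DateOffset(years=4)]  # last 4 years

wet = window > 0  # "wet" when depth exceeds 0 cm (assumed threshold)

# Label each uninterrupted wet or dry spell, then group the wet days by spell
spell_id = (wet != wet.shift()).cumsum()
wet_spells = window[wet].groupby(spell_id[wet])

max_wet_period_days = wet_spells.size().max() if wet.any() else 0
max_depth_in_wet_period = wet_spells.max().max() if wet.any() else np.nan
n_dry_events = int(((~wet) & wet.shift(fill_value=False)).sum())  # wet-to-dry transitions

print(max_wet_period_days, max_depth_in_wet_period, n_dry_events)
```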
Abstract:
Measurement and verification of products and processes during early design are attracting increasing interest from high-value manufacturing industries. Measurement planning is deemed an effective means to facilitate the integration of the metrology activity into a wider range of production processes. However, the literature reveals that there are very few research efforts in this field, especially regarding large-volume metrology. This paper presents a novel approach to accomplish instrument selection, the first stage of the measurement planning process, by mapping measurability characteristics between specific measurement assignments and instruments.
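The instrument-selection step can be pictured as matching the measurability characteristics required by a measurement assignment against the declared capabilities of candidate instruments. The sketch below is a generic filter over hypothetical capability records, offered only as an illustration of that mapping idea, not as the method developed in the paper; the instrument names and characteristic values are invented.

```python
# Sketch: filtering large-volume metrology instruments by measurability characteristics.
# The instrument records, characteristics and task requirements are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    max_range_m: float            # largest measurable distance
    uncertainty_um_per_m: float   # length-dependent uncertainty
    line_of_sight_required: bool

CANDIDATES = [
    Instrument("laser tracker", 80.0, 15.0, True),
    Instrument("photogrammetry system", 20.0, 50.0, True),
    Instrument("iGPS network", 40.0, 250.0, False),
]

def select_instruments(required_range_m, max_uncertainty_um_per_m, line_of_sight_available):
    """Return instruments whose characteristics satisfy the measurement assignment."""
    return [
        inst for inst in CANDIDATES
        if inst.max_range_m >= required_range_m
        and inst.uncertainty_um_per_m <= max_uncertainty_um_per_m
        and (line_of_sight_available or not inst.line_of_sight_required)
    ]

print([i.name for i in select_instruments(30.0, 100.0, line_of_sight_available=True)])
```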
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
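The PARAFAC (latent class) factorization referred to here represents the probability mass function of multivariate categorical data as a mixture of product-multinomial components, P(x_1,...,x_p) = sum_h nu_h prod_j lambda_j(h, x_j). A small numpy sketch of evaluating such a low-rank probability tensor follows; the dimensions and parameter values are arbitrary, and the collapsed Tucker class proposed in Chapter 2 is not reproduced here.

```python
# Sketch: PARAFAC / latent-class representation of a categorical pmf,
#   P(x_1,...,x_p) = sum_h nu_h * prod_j lambda_j[h, x_j].
# Dimensions and parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
p, k, d = 3, 2, 4        # p variables, k latent classes, d categories per variable

nu = rng.dirichlet(np.ones(k))                # class weights, sum to 1
lam = rng.dirichlet(np.ones(d), size=(p, k))  # lam[j, h] is a pmf over the d categories

def joint_prob(x, nu=nu, lam=lam):
    """Probability of the categorical vector x under the latent-class model."""
    per_class = nu.copy()
    for j, xj in enumerate(x):
        per_class = per_class * lam[j, :, xj]
    return per_class.sum()

# The full probability tensor has nonnegative rank at most k and sums to 1.
full_tensor = np.zeros((d,) * p)
for idx in np.ndindex(*full_tensor.shape):
    full_tensor[idx] = joint_prob(idx)
print(full_tensor.sum())   # ~1.0
```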
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
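Chapter 4's optimal Gaussian approximation is specific to Diaconis-Ylvisaker priors. As a generic illustration of the underlying idea of replacing a log-linear-model posterior with a Gaussian, the sketch below computes a Laplace-style approximation (posterior mode plus inverse negative Hessian) for a Poisson log-linear model with a flat prior on synthetic data; it is a stand-in for, not a reproduction of, the approximation derived in the thesis.

```python
# Sketch: Laplace-style Gaussian approximation N(mode, H^{-1}) to the posterior of a
# Poisson log-linear model under a flat prior. A stand-in illustration, not the
# Diaconis-Ylvisaker-specific result derived in Chapter 4.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                  # synthetic design matrix
beta_true = np.array([0.5, -1.0, 0.3])
y = rng.poisson(np.exp(X @ beta_true))         # synthetic counts

def neg_log_post(beta):
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)       # negative Poisson log-likelihood (flat prior)

def neg_log_post_hess(beta):
    w = np.exp(X @ beta)
    return X.T @ (w[:, None] * X)              # exact Hessian of the negative log-posterior

fit = minimize(neg_log_post, np.zeros(3), method="BFGS")
mode = fit.x
cov = np.linalg.inv(neg_log_post_hess(mode))   # covariance of the Gaussian approximation

print("posterior mode:", mode)
print("approximate posterior sd:", np.sqrt(np.diag(cov)))
```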
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
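The waiting-time idea in Chapter 5 starts from a very simple object: the times at which a series exceeds a high threshold, and the gaps between those exceedances. A minimal sketch of that first step is given below on a synthetic heavy-tailed series with an arbitrary 95th-percentile threshold; the full max-stable velocity-process construction is not attempted here.

```python
# Sketch: inter-exceedance waiting times above a high threshold.
# The series is synthetic and the 95th-percentile threshold is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=5000)            # heavy-tailed synthetic series

threshold = np.quantile(x, 0.95)
exceedance_times = np.flatnonzero(x > threshold)
waiting_times = np.diff(exceedance_times)      # gaps between successive exceedances

# For an independent series the waiting times are roughly geometric; clustering of
# extremes (temporal tail dependence) shows up as an excess of very short waits.
print("number of exceedances:", exceedance_times.size)
print("mean / median waiting time:", waiting_times.mean(), np.median(waiting_times))
print("proportion of waits of length 1:", np.mean(waiting_times == 1))
```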
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
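One of the kernel approximations mentioned above replaces the full-data likelihood in the Metropolis-Hastings acceptance ratio with an estimate computed from a random subset of the data. The sketch below shows such an approximate kernel for a simple Gaussian-mean posterior; it illustrates the kind of approximation being analysed rather than Chapter 6's framework itself, and the model, subsample size and step size are arbitrary choices.

```python
# Sketch: random-walk Metropolis with the log-likelihood approximated from a random
# data subset, scaled up to the full sample size. Illustrative only; the approximation
# introduces error in the transition kernel, which is exactly the setting studied.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.0, size=100_000)
n, m = data.size, 1_000                      # full sample size and subsample size

def approx_loglik(theta):
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)   # scaled Gaussian log-lik, unit variance

theta, step, samples = 0.0, 0.05, []
current = approx_loglik(theta)
for _ in range(5_000):
    prop = theta + step * rng.normal()
    prop_ll = approx_loglik(prop)
    if np.log(rng.uniform()) < prop_ll - current:        # flat prior, symmetric proposal
        theta, current = prop, prop_ll
    samples.append(theta)

print("approximate posterior mean:", np.mean(samples[1_000:]))
```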
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
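The truncated-Normal data augmentation sampler referred to here is the standard Albert-Chib scheme for probit regression: latent variables z_i are drawn from Normals truncated to the positive or negative half-line according to y_i, and beta is then drawn from its Gaussian full conditional. A compact sketch follows, using a flat prior on beta and synthetic rare-event data; tracking the lag-1 autocorrelation of the intercept chain when successes are few illustrates the slow mixing described above.

```python
# Sketch: Albert-Chib truncated-Normal data augmentation Gibbs sampler for probit
# regression, with a flat prior on beta. Synthetic rare-event data for illustration.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n = 5_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-3.0, 0.5])                      # intercept chosen to make successes rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)

beta = np.zeros(2)
draws = []
for _ in range(2_000):
    # 1. z_i | beta, y_i: N(x_i' beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
    mu = X @ beta
    lower = np.where(y == 1, -mu, -np.inf)             # bounds in standardized units
    upper = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2. beta | z: N((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior
    beta = XtX_inv @ (X.T @ z) + chol @ rng.normal(size=2)
    draws.append(beta[0])

draws = np.array(draws[500:])
lag1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
print("lag-1 autocorrelation of intercept chain:", lag1)   # near 1 when successes are rare
```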