911 results for Multiple discriminant analysis
Abstract:
This exploratory study is concerned with the integrated appraisal of multi-storey dwelling blocks which incorporate large concrete panel systems (LPS). The first step was to examine the U.K. multi-storey dwelling stock in general, and the stock under the management of Birmingham City Council in particular. The information was taken from the databases of three departments in the City of Birmingham and rearranged, for clarity and analysis, into a new database using a suite of PC software called 'PROXIMA'. One hundred of the blocks in this stock were built using large concrete panel systems. Thirteen LPS blocks were chosen as case studies, selected mainly on the basis of block height and age. A new integrated appraisal technique was created for the LPS dwelling blocks, which takes into account the main physical and social factors affecting the condition and acceptability of these blocks. This appraisal technique is built up in a hierarchical form, moving from the general approach to particular elements (a tree model). It comprises two main approaches: physical and social. In the physical approach, the building is viewed as a series of manageable elements and sub-elements covering every physical or environmental factor of the block, through which the condition of the block is analysed. A quality score system was developed which depends mainly on the qualitative and quantitative condition of each category in the appraisal tree model, and leads to a physical ranking order of the study blocks. In the social appraisal approach, residents' satisfaction with, and attitude toward, their multi-storey dwelling block was analysed in relation to: a. biographical and housing-related characteristics; and b. social, physical and environmental factors associated with this sort of dwelling, and with the block and estate in general. The random sample consisted of 268 residents living in the 13 case-study blocks.
Data collected were analysed using frequency counts, percentages, means, standard deviations, Kendall's tau, r-correlation coefficients, t-tests, analysis of variance (ANOVA) and multiple regression analysis. The analysis showed a marginally positive satisfaction and attitude towards living in the block. The five most significant factors associated with the residents' satisfaction and attitude, in descending order, were: the estate in general; the service categories in the block, including the heating system and lift services; vandalism; the neighbours; and the security system of the block. An important attribute of this method is that it is relatively inexpensive to implement, especially when compared to alternatives adopted by some local authorities and the BRE. It is designed to save time, money and effort, to aid decision making, and to provide a ranked priority order for the multi-storey dwelling stock, in addition to many other advantages. A series of solution options to the problems of the blocks was sought for selection and testing before implementation. Traditional solutions have usually resulted in either demolition or costly physical maintenance and social improvement of the blocks. However, a new solution has now emerged which is particularly suited to structurally sound units. The 're-cycling' solution might incorporate the reuse of an entire block or part of it, by removing panels, slabs and so forth from the upper floors in order to reconstruct them as low-rise accommodation.
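Kendall's tau, used above to correlate ranked survey responses, can be sketched in a few lines. This is a minimal pure-Python illustration of the tau-a variant (no tie correction; the block-condition and satisfaction ranks are invented, and real Likert-scale survey data would normally call for the tie-corrected tau-b):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.

    Assumes no ties; survey data with repeated Likert scores would
    need the tie-corrected tau-b instead.
    """
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical ranks: block condition vs. resident satisfaction
print(kendall_tau([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # 0.8
```

A value of 1 means the two rankings agree perfectly, -1 that they are exactly reversed.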
Abstract:
Decomposition of domestic wastes in an anaerobic environment results in the production of landfill gas. Public concern about landfill disposal, and particularly about the production of landfill gas, has been heightened over the past decade. This has been due in large part to the increased quantities of gas being generated as a result of modern disposal techniques, and also to their increasing effect on modern urban developments. In order to avert disasters, effective means of preventing gas migration are required. This, in turn, requires accurate detection and monitoring of gas in the subsurface. Point sampling techniques have many drawbacks, and accurate measurement of gas is difficult. Some of the disadvantages of these techniques could be overcome by assessing the impact of gas on biological systems. This research explores the effects of landfill gas on plants, and hence on the spectral response of vegetation canopies. The landfill gas/vegetation relationship is examined both through a review of the literature and through statistical analysis of field data. The work showed that, although vegetation health was related to landfill gas, it was not possible to define a simple correlation. In the landfill environment, contributions from other variables, such as soil characteristics, frequently confused the relationship. Two sites are investigated in detail, contrasting in terms of the data available, site conditions, and the degree of damage to vegetation. Gas migration at the Panshanger site was predominantly upwards, affecting crops being grown on the landfill cap. The injury was expressed as an overall decline in plant health. Discriminant analysis was used to account for the variations in plant health, and hence the differences in spectral response of the crop canopy, using a combination of soil and gas variables. Damage to both woodland and crops at the Ware site was severe, and could be easily related to the presence of gas.
Air photographs, aerial video, and airborne thematic mapper data were used to identify damage to vegetation, and relate this to soil type. The utility of different sensors for this type of application is assessed, and possible improvements that could lead to more widespread use are identified. The situations in which remote sensing data could be combined with ground survey are identified. In addition, a possible methodology for integrating the two approaches is suggested.
Abstract:
Guest editorial Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He serves on the editorial boards of several international journals and is a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals, including Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector This special issue focuses on holistic, applied research on performance measurement in energy sector management, publishing relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates, and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals from the sample that the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of the oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and the number of consumers are treated as the outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process by providing an interactive means with verbal, numerical, and visual representation of their preferences. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant changes in a firm's efficiency following an acquisition, and only weak evidence for efficiency improvements caused by the new shareholder. The author also discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
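The pairwise-comparison weighting described above, being akin to the analytic hierarchy process, can be sketched with the row geometric-mean method and the standard consistency index. The 3x3 comparison matrix below is invented for illustration, not taken from the paper:

```python
import math

def ahp_weights(matrix):
    """Derive priority weights from a pairwise-comparison matrix
    using the row geometric-mean method."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def consistency_index(matrix, weights):
    """CI = (lambda_max - n) / (n - 1); 0 for a perfectly consistent matrix."""
    n = len(matrix)
    # lambda_max estimated by averaging (A w)_i / w_i over the rows
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n
    return (lambda_max - n) / (n - 1)

# Hypothetical comparisons for three criteria (e.g. mitigation vs.
# effectiveness vs. competitiveness); the 2s and 4s are illustrative only.
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])               # [0.571, 0.286, 0.143]
print(round(abs(consistency_index(A, w)), 6))  # 0.0
```

A consistency index near zero (conventionally, a consistency ratio below 0.1) indicates that the stakeholder's pairwise judgements do not contradict each other.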
Abstract:
The ageing process is strongly influenced by nutrient balance, such that modest calorie restriction (CR) extends lifespan in mammals. Irisin, a newly described hormone released from skeletal muscles after exercise, may induce CR-like effects by increasing adipose tissue energy expenditure. Using telomere length as a marker of ageing, this study investigates associations between body composition, plasma irisin levels and peripheral blood mononuclear cell telomere length in healthy, non-obese individuals. Segmental body composition (by bioimpedance), telomere length and plasma irisin levels were assessed in 81 healthy individuals (age 43 ± 15.8 years, BMI 24.3 ± 2.9 kg/m2). Data showed significant correlations between log-transformed relative telomere length and the following: age (p < 0.001), height (p = 0.045), total body fat percentage (p = 0.031), abdominal fat percentage (p = 0.038), visceral fat level (p < 0.001), plasma leptin (p = 0.029) and plasma irisin (p = 0.011). Multiple regression analysis using backward elimination revealed that relative telomere length can be predicted by age (b = -0.00735, p = 0.001) and plasma irisin levels (b = 0.04527, p = 0.021). These data support the view that irisin may have a role in the modulation of both energy balance and the ageing process. © 2014 The Author(s).
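At its final step, the backward-elimination regression above reduces to an ordinary least-squares fit of telomere length on age and plasma irisin. A minimal sketch, solving the normal equations on synthetic noise-free data (all values and coefficients below are invented for illustration, not the study's):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """Least squares via the normal equations X'X b = X'y (intercept prepended)."""
    X = [[1.0] + list(r) for r in rows]
    k = len(X[0])
    XtX = [[sum(X[i][a] * X[i][c] for i in range(len(X))) for c in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    return solve(XtX, Xty)

# Synthetic data generated from TL = 2.0 - 0.007*age + 0.045*irisin exactly;
# the fit should recover those coefficients.
ages   = [25, 35, 45, 55, 65, 30, 50, 60]
irisin = [4.0, 2.5, 3.6, 2.0, 3.1, 2.2, 4.2, 2.9]
tl = [2.0 - 0.007 * a + 0.045 * i for a, i in zip(ages, irisin)]
b0, b_age, b_irisin = ols(list(zip(ages, irisin)), tl)
print(round(b_age, 4), round(b_irisin, 4))  # -0.007 0.045
```

Backward elimination would then repeatedly refit after dropping the least significant predictor, which requires standard errors and t-statistics on top of this fit.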
Abstract:
With business incubators deemed as a potent infrastructural element for entrepreneurship development, business incubation management practice and performance have received widespread attention. However, despite this surge of interest, scholars have questioned the extent to which business incubation delivers added value. Thus, there is a growing awareness among researchers, practitioners and policy makers of the need for more rigorous evaluation of the business incubation output performance. Aligned to this is an increasing demand for benchmarking business incubation input/process performance and highlighting best practice. This paper offers a business incubation assessment framework, which considers input/process and output performance domains with relevant indicators. This tool adds value on different levels. It has been developed in collaboration with practitioners and industry experts and therefore it would be relevant and useful to business incubation managers. Once a large enough database of completed questionnaires has been populated on an online platform managed by a coordinating mechanism, such as a business incubation membership association, business incubator managers can reflect on their practices by using this assessment framework to learn their relative position vis-à-vis their peers against each domain. This will enable them to align with best practice in this field. Beyond implications for business incubation management practice, this performance assessment framework would also be useful to researchers and policy makers concerned with business incubation management practice and impact. Future large-scale research could test for construct validity and reliability. Also, discriminant analysis could help link input and process indicators with output measures.
Abstract:
OBJECTIVE: To investigate laboratory evidence of abnormal angiogenesis, hemorheologic factors, endothelial damage/dysfunction, and age-related macular degeneration (ARMD). DESIGN: Comparative cross-sectional study. PARTICIPANTS: We studied 78 subjects (26 men and 52 women; mean age 74 years; standard deviation [SD] 9.0) with ARMD attending a specialist referral clinic. Subjects were compared with 25 healthy controls (mean age, 71 years; SD, 11). INTERVENTION AND OUTCOME MEASURES: Levels of vascular endothelial growth factor (VEGF, an index of angiogenesis), hemorheologic factors (plasma viscosity, hematocrit, white cell count, hemoglobin, platelets), fibrinogen (an index of rheology and hemostasis), and von Willebrand factor (a marker of endothelial dysfunction) were measured. RESULTS: Median plasma VEGF (225 vs. 195 pg/ml, P = 0.019) and mean von Willebrand factor (124 vs. 99 IU/dl, P = 0.0004) were greater in ARMD subjects than in the controls. Mean plasma fibrinogen and plasma viscosity levels were also higher in the subjects (both P < 0.0001). There were no significant differences in other indices between cases and controls. When "dry" (drusen, atrophy, n = 28) and "exudative" (n = 50) ARMD subjects were compared, there were no significant differences in VEGF, fibrinogen, viscosity, or von Willebrand factor levels. There were no significant correlations between the measured parameters. Stepwise multiple regression analysis did not demonstrate any significant clinical predictors (age, gender, smoking, body mass index, history of vascular disease, or hypertension) for plasma VEGF or fibrinogen levels, although smoking status was a predictor of plasma von Willebrand factor levels (P < 0.05). CONCLUSIONS: This study suggests an association between markers of angiogenesis (VEGF), hemorheologic factors, hemostasis, endothelial dysfunction, and ARMD.
The interaction between abnormal angiogenesis and the components of Virchow's triad for thrombogenesis may in part contribute to the pathogenesis of ARMD.
Abstract:
2002 Mathematics Subject Classification: 62P10.
Abstract:
It is well established that accent recognition can achieve accuracy of up to 95% when the signals are noise-free, using feature extraction techniques such as mel-frequency cepstral coefficients and binary classifiers such as discriminant analysis, support vector machines and k-nearest neighbours. In this paper, we demonstrate that the predictive performance can be reduced by as much as 15% when the signals are noisy. Specifically, we perturb the signals with different levels of white noise and, as the noise becomes stronger, the out-of-sample predictive performance deteriorates from 95% to 80%, although the in-sample prediction gives overly optimistic results. ACM Computing Classification System (1998): C.3, C.5.1, H.1.2, H.2.4., G.3.
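The noise-driven degradation described above can be illustrated with a toy stand-in for the MFCC-plus-classifier pipeline: a nearest-centroid classifier trained on clean one-dimensional features and tested on features perturbed by white Gaussian noise of increasing strength. The class means and noise levels are invented; a real system would classify MFCC vectors with discriminant analysis, an SVM, or kNN:

```python
import random

def accuracy(noise_std, n=200, seed=0):
    """Train a nearest-centroid classifier on clean 1-D features,
    then test it on features perturbed by white Gaussian noise."""
    rng = random.Random(seed)
    centroids = {"accent_a": 0.0, "accent_b": 5.0}  # toy class means
    correct = 0
    for _ in range(n):
        label = rng.choice(list(centroids))
        x = centroids[label] + rng.gauss(0, noise_std)  # noisy test sample
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += (pred == label)
    return correct / n

# Out-of-sample accuracy falls as the white-noise level rises.
for std in (0.1, 1.0, 2.5, 5.0):
    print(f"noise std {std}: accuracy {accuracy(std):.2f}")
```

Because the classifier is fit on clean features, its in-sample accuracy stays perfect regardless of the test-time noise level, mirroring the overly optimistic in-sample results the paper warns about.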
Abstract:
Background: Allergy is a form of hypersensitivity to normally innocuous substances, such as dust, pollen, foods or drugs. Allergens are small antigens that commonly provoke an IgE antibody response. There are two types of bioinformatics-based allergen prediction. The first approach follows FAO/WHO Codex alimentarius guidelines and searches for sequence similarity. The second approach is based on identifying conserved allergenicity-related linear motifs. Both approaches assume that allergenicity is a linearly coded property. In the present study, we applied ACC pre-processing to sets of known allergens, developing alignment-independent models for allergen recognition based on the main chemical properties of amino acid sequences. Results: A set of 684 food, 1,156 inhalant and 555 toxin allergens was collected from several databases. A set of non-allergens from the same species was selected to mirror the allergen set. The amino acids in the protein sequences were described by three z-descriptors (z1, z2 and z3) and converted into uniform vectors by auto- and cross-covariance (ACC) transformation. Each protein was presented as a vector of 45 variables. Five machine learning methods for classification were applied in the study to derive models for allergen prediction. The methods were: discriminant analysis by partial least squares (DA-PLS), logistic regression (LR), decision tree (DT), naïve Bayes (NB) and k nearest neighbours (kNN). The best performing model was derived by kNN at k = 3. It was optimized, cross-validated and implemented in a server named AllerTOP, freely accessible at http://www.pharmfac.net/allertop. AllerTOP also predicts the most probable route of exposure. In comparison to other servers for allergen prediction, AllerTOP outperforms them with 94% sensitivity. Conclusions: AllerTOP is the first alignment-free server for in silico prediction of allergens based on the main physicochemical properties of proteins.
Significantly, as well as allergenicity, AllerTOP is able to predict the route of allergen exposure: food, inhalant or toxin. © 2013 Dimitrov et al.; licensee BioMed Central Ltd.
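The ACC pre-processing step described above can be sketched directly: each residue maps to three numeric descriptors, and auto- and cross-covariance terms at lags 1 to 5 flatten a variable-length sequence into the fixed vector of 3 × 3 × 5 = 45 variables mentioned above. The descriptor values below are invented stand-ins, not the published z-scales:

```python
# Illustrative 3-descriptor values per amino acid (NOT the published
# z1/z2/z3 scales; real use would take Hellberg's z-scale values).
Z = {
    "A": (0.1, -1.7, 0.1), "C": (0.8, -1.7, -1.0), "D": (3.6, 1.1, 2.4),
    "E": (3.1, 0.3, -0.4), "G": (2.2, -5.4, 0.1), "K": (2.9, 1.4, -3.1),
    "L": (-4.2, -1.0, -1.0), "S": (1.9, -1.6, 0.6), "W": (-4.7, 3.6, 0.8),
    "Y": (-1.4, 2.4, 0.0),
}

def acc_vector(seq, max_lag=5):
    """Auto- and cross-covariance transform: for every ordered pair of
    descriptors (j, k) and every lag 1..max_lag, average z_j(i) * z_k(i+lag).
    Result length = 3 * 3 * max_lag = 45, independent of sequence length."""
    zs = [Z[a] for a in seq]
    n = len(zs)
    out = []
    for j in range(3):
        for k in range(3):
            for lag in range(1, max_lag + 1):
                out.append(sum(zs[i][j] * zs[i + lag][k]
                               for i in range(n - lag)) / (n - lag))
    return out

v = acc_vector("ACDEGKLSWYACDEG")  # toy 15-residue "protein"
print(len(v))  # 45
```

Because the vector length is fixed at 45 whatever the protein length, the transform makes sequences of different lengths comparable without any alignment, which is the point of the alignment-free approach.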
Abstract:
Mainstream gentrification research predominantly examines experiences and motivations of the middle-class gentrifier groups, while overlooking experiences of non-gentrifying groups including the impact of in situ local processes on gentrification itself. In this paper, I discuss gentrification, neighbourhood belonging and spatial distribution of class in Istanbul by examining patterns of belonging both of gentrifiers and non-gentrifying groups in historic neighbourhoods of the Golden Horn/Halic. I use multiple correspondence analysis (MCA), a methodology rarely used in gentrification research, to explore social and symbolic borders between these two groups. I show how gentrification leads to spatial clustering by creating exclusionary practices and eroding social cohesion, and illuminate divisions that are inscribed into the physical space of the neighbourhood.
Abstract:
Aims: Obesity and Type 2 diabetes are associated with accelerated ageing. The underlying mechanisms behind this, however, are poorly understood. In this study, we investigated the association between circulating irisin - a novel myokine involved in energy regulation - and telomere length (TL), a marker of ageing, in healthy individuals and individuals with Type 2 diabetes. Methods: Eighty-two healthy people and 67 subjects with Type 2 diabetes were recruited to this cross-sectional study. Anthropometric measurements, including body composition measured by bioimpedance, were recorded. Plasma irisin was measured by ELISA on a fasted blood sample. Relative TL was determined using real-time PCR. Associations between anthropometric measures, irisin and TL were explored using Pearson's bivariate correlations. Multiple regression with backward elimination was used to explore all the significant predictors of TL. Results: In healthy individuals, chronological age was a strong negative predictor of TL (b = -0.552, p < 0.001). Multiple regression analysis using backward elimination (excluding age) revealed that greater relative TL could be predicted by greater total muscle mass (b = 0.046, p = 0.001), less visceral fat (b = -0.183, p < 0.001) and higher plasma irisin levels (b = 0.01, p = 0.027). There were no significant associations between chronological age, plasma irisin, anthropometric measures and TL in patients with Type 2 diabetes (p > 0.1). Conclusion: These data support the view that body composition and plasma irisin may have a role in the modulation of energy balance and the ageing process in healthy individuals. This relationship is altered in individuals with Type 2 diabetes.
Abstract:
The article attempts to answer the question of whether the latest bankruptcy prediction techniques are more reliable than traditional mathematical-statistical ones in Hungary. Simulation experiments carried out on the database of the first Hungarian bankruptcy prediction model clearly prove that bankruptcy models built using artificial neural networks have higher classification accuracy than models created in the 1990s based on discriminant analysis and logistic regression analysis. The article presents the main results, analyses the reasons for the differences and presents constructive proposals concerning the further development of Hungarian bankruptcy prediction.
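The discriminant-analysis baseline discussed above can be sketched in its simplest two-class form: Fisher's linear discriminant projects each firm onto w = Sw^-1 (m1 - m0) and classifies against the midpoint of the projected class means. The two financial ratios and all values below are invented for illustration:

```python
def fisher_lda(class0, class1):
    """Two-class Fisher discriminant in 2-D: w = Sw^-1 (m1 - m0),
    threshold at the midpoint of the projected class means."""
    def mean(pts):
        return [sum(p[d] for p in pts) / len(pts) for d in (0, 1)]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for a in (0, 1):
                for b in (0, 1):
                    s[a][b] += d[a] * d[b]
        return s

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[a][b] + s1[a][b] for b in (0, 1)] for a in (0, 1)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    thresh = sum(w[d] * (m0[d] + m1[d]) / 2 for d in (0, 1))
    return lambda p: int(w[0] * p[0] + w[1] * p[1] > thresh)

# Invented (liquidity, profitability) ratios: 0 = solvent, 1 = bankrupt
solvent  = [(2.0, 0.15), (1.8, 0.12), (2.2, 0.18), (1.9, 0.10)]
bankrupt = [(0.8, -0.05), (0.9, 0.00), (0.7, -0.10), (1.0, -0.02)]
classify = fisher_lda(solvent, bankrupt)
print([classify(p) for p in solvent + bankrupt])  # [0, 0, 0, 0, 1, 1, 1, 1]
```

A neural-network model of the kind the article favours replaces this single linear boundary with a learned nonlinear one, which is where the reported accuracy gains come from.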
Abstract:
The purpose of this study was to examine the effectiveness of peer counseling on high school students who had previously failed two or more classes in a nine-week quarter. The study was constructed by comparing students who previously failed and were subsequently given peer counseling with a matched group of students who failed and did not receive peer counseling. To test the proposed research question, 324 students from a large urban school system were randomly chosen from a computer-generated list of students who failed courses, matched on the variables of number of classes failed, grade level and gender. One student from each matched pair was randomly placed in either the experimental or the control group. The 162 students in Group 1 (experimental) were assigned a peer counselor, with their pairs assigned to Group 2 (control). Group 1 received peer counseling at least 4 times during the third nine-week academic quarter (Quarter 3), while Group 2 did not. The Grade Point Averages (GPAs) for all students were collected both at the end of Quarter 2 and at the end of Quarter 3, at which time peer counseling was terminated. GPAs were also collected nine weeks after counseling was terminated. Results were determined by multiple regression, analysis of covariance and t-tests. The level of significance was set at .05. There was a significant increase in the GPAs of counseled students immediately after peer counseling and also nine weeks after counseling was terminated, while the group not receiving peer counseling showed no increase. It was noted that there were significantly more school drop-outs from the non-counseled group than from the counseled group. The number of classes failed, high school attended, grade level and gender were not found to be significant. The conclusion from this study was that peer counseling has a significant impact on the GPAs of students experiencing academic failure.
Recommendations from this study were to implement and expand peer counseling programs with more failing students, continue counseling for longer than one quarter, include students in drop-out programs and students from different socio-economic and racial backgrounds, and conduct subsequent evaluation.
Abstract:
The problem investigated was negative effects on the ability of a university student to successfully complete a course in religious studies resulting from conflict between the methodologies and objectives of religious studies and the student's system of beliefs. Using Festinger's theory of cognitive dissonance as a theoretical framework, it was hypothesized that completing a course with a high level of success would be negatively affected by (1) failure to accept the methodologies and objectives of religious studies (methodology), (2) holding beliefs about religion that had potential conflicts with the methodologies and objectives (beliefs), (3) extrinsic religiousness, and (4) dogmatism. The causal comparative method was used. The independent variables were measured with four scales employing Likert-type items. An 8-item scale to measure acceptance of the methodologies and objectives of religious studies and a 16-item scale to measure holding of beliefs about religion having potential conflict with the methodologies were developed for this study. These scales together with a 20-item form of Rokeach's Dogmatism Scale and Feagin's 12-item Religious Orientation Scale to measure extrinsic religiousness were administered to 144 undergraduate students enrolled in randomly selected religious studies courses at Florida International University. Level of success was determined by course grade with the 27% of students receiving the highest grades classified as highly successful and the 27% receiving the lowest grades classified as not highly successful. A stepwise discriminant analysis produced a single significant function with methodology and dogmatism as the discriminants. Methodology was the principal discriminating variable. Beliefs and extrinsic religiousness failed to discriminate significantly. 
It was concluded that failing to accept the methodologies and objectives of religious studies and being highly dogmatic have significant negative effects on a student's success in a religious studies course. Recommendations were made for teaching to diminish these negative effects.
Abstract:
This dissertation examines the consequences of Electronic Data Interchange (EDI) use on interorganizational relations (IR) in the retail industry. EDI is a type of interorganizational information system that facilitates the exchange of business documents in structured, machine-processable form. The research model links EDI use and three IR dimensions: structural, behavioral, and outcome. Based on relevant literature from organizational theory and marketing channels, fourteen hypotheses were proposed for the relationships among EDI use and the three IR dimensions. Data were collected through self-administered questionnaires from key informants in 97 retail companies (19% response rate). The hypotheses were tested using multiple regression analysis. The analysis supports the following hypotheses: (a) EDI use is positively related to information intensity and formalization, (b) formalization is positively related to cooperation, (c) information intensity is positively related to cooperation, (d) conflict is negatively related to performance and satisfaction, (e) cooperation is positively related to performance, and (f) performance is positively related to satisfaction. The results support the general premise of the model that the relationship between EDI use and satisfaction among channel members has to be viewed within an interorganizational context. Research on EDI is still in a nascent stage. By identifying and testing relevant interorganizational variables, this study offers insights for practitioners managing boundary-spanning activities in organizations using or planning to use EDI. Further, the thesis provides avenues for future research aimed at understanding the consequences of this interorganizational information technology.