Abstract:
Studies have shown that increased arterial stiffening can be an indication of cardiovascular diseases such as hypertension. In clinical practice, this can be detected by measuring blood pressure (BP) with a sphygmomanometer, but that instrument cannot be used for prolonged monitoring. It has been established that pulse wave velocity (PWV) is a direct measure of arterial stiffening, but its usefulness is hampered by the absence of non-invasive techniques to estimate it. Pulse transit time (PTT) is a simple, non-invasive measure derived from PWV. However, the present literature offers only limited knowledge of PTT in children. The aims of this study are to identify independent variables that confound PTT measurements and to describe PTT regression equations for healthy children, thereby formulating PTT reference values for future pathological studies. Fifty-five Caucasian children (39 male) aged 8.4 ± 2.3 yr (range 5-12 yr) were recruited. Predictive equations for PTT were obtained by multiple regression with age, vascular path length, BP indexes and heart rate. These derived equations were compared, in their PWV equivalents, against two previously reported equations, and significant agreement was obtained (p < 0.05). The findings also suggest that PTT can be useful as a continuous surrogate BP monitor in children.
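The predictive equations described above come from multiple linear regression of PTT on age, vascular path length, BP indexes and heart rate. The sketch below shows that procedure using entirely invented data and coefficients; none of the study's actual measurements or fitted equations are reproduced here.

```python
import numpy as np

# Synthetic stand-in data: ages, path lengths, BP and heart rate values are
# invented, as is the "true" relation used to generate the PTT values below.
rng = np.random.default_rng(0)
n = 55
age = rng.uniform(5, 12, n)                      # years
path_len = 40 + 5 * age + rng.normal(0, 2, n)    # vascular path length, cm
sbp = rng.normal(100, 10, n)                     # systolic BP, mmHg
hr = rng.normal(85, 10, n)                       # heart rate, beats/min
ptt = 120 + 0.8 * path_len - 0.4 * sbp - 0.3 * hr + rng.normal(0, 3, n)  # ms

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), age, path_len, sbp, hr])
coef, *_ = np.linalg.lstsq(X, ptt, rcond=None)
pred = X @ coef
r2 = 1 - ((ptt - pred) ** 2).sum() / ((ptt - ptt.mean()) ** 2).sum()
print(coef)  # intercept and slopes for age, path length, SBP, heart rate
print(r2)
```

A fitted equation of this form could then be inverted to track BP changes from continuously measured PTT, which is the surrogate-monitoring use the abstract suggests.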
Abstract:
Plant litter and fine roots are important in maintaining soil organic carbon (C) levels as well as for nutrient cycling. The decomposition of surface-placed litter and of fine roots of wheat (Triticum aestivum), lucerne (Medicago sativa), buffel grass (Cenchrus ciliaris), and mulga (Acacia aneura), placed at 10-cm and 30-cm depths, was studied in the field in a Rhodic Paleustalf. After 2 years, ≈60% of mulga roots and twigs remained undecomposed. The rate of decomposition varied from 4.2 year⁻¹ for wheat roots to 0.22 year⁻¹ for mulga twigs and was significantly correlated with the lignin concentration of both tops and roots. Aryl+O-aryl C concentration, as measured by ¹³C nuclear magnetic resonance spectroscopy, was also significantly correlated with the decomposition parameters, although with a lower R² value than the lignin concentration. Thus, lignin concentration provides a good predictor of litter and fine root decomposition in the field.
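The per-year rate constants quoted above are those of the standard single-pool exponential decay model for litter mass loss, M(t)/M0 = exp(-k t). Assuming that model, the reported mulga-twig constant of 0.22 year⁻¹ is consistent with roughly 60% of material remaining after 2 years:

```python
import math

def fraction_remaining(k_per_year, t_years):
    """Single-pool exponential litter decay: M(t)/M0 = exp(-k * t)."""
    return math.exp(-k_per_year * t_years)

# Rate constants reported in the abstract
k_mulga_twigs = 0.22   # slowest reported constant
k_wheat_roots = 4.2    # fastest reported constant

print(fraction_remaining(k_mulga_twigs, 2))  # about 0.64, i.e. ~60% undecomposed
print(fraction_remaining(k_wheat_roots, 2))  # essentially nothing left after 2 yr
```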
Abstract:
The c-Jun N-terminal kinases (JNKs) are members of a larger group of serine/threonine (Ser/Thr) protein kinases from the mitogen-activated protein kinase family. JNKs were originally identified as stress-activated protein kinases in the livers of cycloheximide-challenged rats. Their subsequent purification, cloning, and naming as JNKs have emphasized their ability to phosphorylate and activate the transcription factor c-Jun. Studies of c-Jun and related transcription factor substrates have provided clues about both the preferred substrate phosphorylation sequences and additional docking domains recognized by JNK. There are now more than 50 proteins shown to be substrates for JNK. These include a range of nuclear substrates, including transcription factors and nuclear hormone receptors, heterogeneous nuclear ribonucleoprotein K, and the Pol I-specific transcription factor TIF-IA, which regulates ribosome synthesis. Many nonnuclear substrates have also been characterized, and these are involved in protein degradation (e.g., the E3 ligase Itch), signal transduction (e.g., adaptor and scaffold proteins and protein kinases), apoptotic cell death (e.g., mitochondrial Bcl2 family members), and cell movement (e.g., paxillin, DCX, microtubule-associated proteins, the stathmin family member SCG10, and the intermediate filament protein keratin 8). The range of JNK actions in the cell is therefore likely to be complex. Further characterization of the substrates of JNK should provide clearer explanations of the intracellular actions of the JNKs and may open new avenues for targeting the JNK pathways with therapeutic agents downstream of JNK itself.
Abstract:
The assertion that social phenomena are peculiarly intricate and complex has, in much social discourse, a virtually uncontested tradition. A significant part of this premise is the conviction that the complexity of social phenomena complicates, perhaps even inhibits, the development and application of social scientific knowledge. Our paper explores the origins, basis and consequences of this assertion and asks in particular whether the classic complexity claim still deserves to be invoked in analyses of the production and utilization of social scientific knowledge in modern society. As an illustration, we refer to one of the most prominent and politically influential social scientific theories, John Maynard Keynes's economic theory. We conclude that the practical value of social scientific knowledge does not necessarily depend on a faithful, in the sense of complete, representation of (complex) social reality. Practical knowledge is context sensitive, if not project bound. Social scientific knowledge that wants to optimize its practicality has to attend and attach itself to elements of practical social situations that can be altered, or are actionable, by relevant actors. This chapter thus re-examines the relation between social reality, social scientific knowledge and its practical application, against the widely accepted view that invokes the peculiar complexity of social reality as an impediment to good theoretical comprehension and hence to applicability.
Abstract:
Preface
Abstract:
The study here highlights the potential that analytical methods based on Knowledge Discovery in Databases (KDD) methodologies have to aid both the resolution of unstructured marketing/business problems and the process of scholarly knowledge discovery. The authors present and discuss the application of KDD in these situations prior to the presentation of an analytical method based on fuzzy logic and evolutionary algorithms, developed to analyze marketing databases and uncover relationships among variables. A detailed implementation on a pre-existing data set illustrates the method. © 2012 Published by Elsevier Inc.
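The abstract gives no details of the fuzzy/evolutionary method itself, but the evolutionary-search component can be illustrated generically: evolving binary masks that pick out subsets of database variables under a fitness score. Everything below (the fitness function, the "true" variable set, and all parameters) is an invented stand-in, not the authors' algorithm.

```python
import random

random.seed(1)

N_VARS = 10
TARGET = {1, 3, 4, 7}  # pretend ground truth: variables actually related

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    # reward overlap with the "true" set, penalize rule size
    return len(chosen & TARGET) - 0.2 * len(chosen)

def evolve(pop_size=30, generations=60, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VARS)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(i for i, bit in enumerate(best) if bit))  # selected variables
```

In a real KDD setting the fitness function would score candidate variable subsets or rules against the marketing database itself rather than against a known target set.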
Abstract:
Guest editorial Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, and data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He serves on the editorial boards of several international journals and is a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector
This special issue focuses on holistic, applied research on performance measurement in energy sector management, publishing relevant applied work that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate changes in the relative efficiency and total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil-fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, bootstrapping is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimates and confidence intervals. The author finds that, in the sample, non-lignite-fired stations are on average more efficient than lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare three DEA-based measures that estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors find that refineries in the Rocky Mountain region performed best and that about 60 percent of the oil refineries in the sample could improve their efficiency further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first is a traditional DEA model for analyzing cost-only efficiency. The second includes (inverse) quality by modelling total customer minutes lost as an input. The third uses total social costs, comprising the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. Stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered, grouped under four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies stochastic frontier analysis to examine the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on firm efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in firm efficiency following an acquisition and only weak evidence of efficiency improvements brought about by a new shareholder. The author also finds that parent companies appear not to influence a subsidiary's efficiency positively and that joint ventures have a negative impact on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
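For readers unfamiliar with DEA, the input-oriented CCR envelopment model that underlies the papers described above can be sketched in a few lines of linear programming. The data here are toy values, not drawn from any of the studies, and the papers themselves use richer variants (Malmquist indices, bootstrapping, undesirable outputs).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA efficiency scores.

    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix.
    For each unit o, solves: min theta  s.t.  X.T @ lam <= theta * x_o,
    Y.T @ lam >= y_o, lam >= 0.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                   # minimize theta
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])  # inputs scaled by theta
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # outputs at least y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: one input, one output, three decision-making units
X = [[2.0], [4.0], [8.0]]
Y = [[1.0], [2.0], [2.0]]
scores = ccr_input_efficiency(X, Y)
print(scores)  # units 1 and 2 lie on the frontier; unit 3 scores 0.5
```

A score of 1 means the unit lies on the efficient frontier; a score below 1 gives the uniform input contraction that would bring it onto the frontier.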
Abstract:
Current and future IT applications affecting supply chains in Europe and Pacific Asia are investigated. 3PL providers increasingly use IT systems for logistics to enhance supply chain collaboration with business partners. Advanced systems are not always immediately profitable. Most companies already implement IT systems for processing transactions, but motivations vary and barriers remain, since 3PL providers incompletely understand clients' IT requirements. Long-term productivity gains require sophisticated IT systems that streamline cycles and improve supply chain visibility to facilitate planning and decision making. RFID and advanced integration systems, including Business Process Management, are probably the next trend in IT logistics systems. Copyright © 2012 Inderscience Enterprises Ltd.
Abstract:
This article attempts to repair the neglect of the qualitative uses of some and to suggest an explanation which could cover the full range of usage with this determiner - both quantitative and qualitative - showing how a single underlying meaning, modulated by contextual and pragmatic factors, can give rise to the wide variety of messages expressed by some in actual usage. Both the treatment of some as an existential quantifier and the scalar model which views some as evoking a less-than-expected quantity on a pragmatic scale are shown to be incapable of handling the qualitative uses of this determiner. An original analysis of some and the interaction of its meaning with the defining features of the qualitative uses is proposed, extending the discussion as well to the role of focus and the adverbial modifier quite. The crucial semantic feature of some for the explanation of its capacity to express qualitative readings is argued to be non-identification of a referent assumed to be particular. Under the appropriate conditions, this notion can give rise to qualitative denigration (implying it is not even worth the bother to identify the referent) or qualitative appreciation (implying the referent to be so outstanding that it defies identification). The explanation put forward is also shown to cover some's use as an approximator, thereby enhancing its plausibility even further. © Cambridge University Press 2012.
Abstract:
Recreational fisheries in North America are valued between $47.3 billion and $56.8 billion. To conserve populations effectively, fisheries managers must make strategic decisions based on sound science and knowledge of population ecology. Competitive fishing, in the form of tournaments, has become an important part of recreational fisheries and is common on large waterbodies, including the Great Lakes. Black Bass, Micropterus spp., are top predators and among the most sought-after species in competitive catch-and-release tournaments. This study investigated catch-and-release tournaments as an assessment tool, through mark-recapture, for Largemouth Bass (>305 mm) populations in the Tri Lakes and the Bay of Quinte, part of the eastern basin of Lake Ontario. The population in the Tri Lakes (1999-2002) was estimated to be stable at between 21,928 and 29,780 fish, and the population in the Bay of Quinte (2012-2015) was estimated at between 31,825 and 54,029 fish. Survival in the Tri Lakes varied throughout the study period, from 31% to 54%, while survival in the Bay of Quinte remained stable at 63%. Differences in survival may be due to differences in fishing pressure, as 34-46% of the Largemouth Bass population in the Tri Lakes is harvested annually and only 19% of catch was attributed to tournament angling. Many biological issues still surround catch-and-release tournaments, particularly displacement from initial capture sites. In the past, most studies have focused on small inland lakes and coastal areas, displacing bass relatively short distances. My study displaced Largemouth and Smallmouth Bass up to 100 km and found very low rates of return; only 1 of 18 Largemouth Bass returned 15 km, and 1 of 18 Smallmouth Bass returned 135 km. Both species remained near the release sites for an average of approximately 2 weeks before dispersing.
Tournament organizers should consider using satellite release locations to facilitate dispersal and prevent stockpiling at the release site. Catch-and-release tournaments proved to be a valuable tool for assessing population variables and the effects of long-distance displacement, through mark-recapture and acoustic telemetry, on large lake systems.
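The population estimates above rest on mark-recapture theory. In its simplest closed-population form, with one marking event and one recapture event, the estimator is Lincoln-Petersen, shown here with Chapman's bias correction. A multi-year tournament study would in practice use open-population models that also yield the survival rates reported above, but the underlying logic is the same. The numbers below are hypothetical, not the study's data.

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size
    for a single mark-recapture experiment in a closed population."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical numbers for illustration only:
# 500 bass marked at tournament weigh-ins, 400 examined later, 6 bearing marks.
print(round(chapman_estimate(500, 400, 6)))  # about 28,700 fish
```

The rarer recaptures are relative to the number marked, the larger the estimated population, which is why adequate tournament sample sizes matter for precision.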