900 results for Research on terrorism: trends, achievements, and failures
Abstract:
This work highlights and analyzes the citations and co-citations by different authors, countries, and institutions in a series of studies on biofuel. These relations form a knowledge map showing the areas of research pursued by different countries and authors; the contributions of different institutions are also shown. With this knowledge map, the most important studies and the areas of research that still need more attention are highlighted. The software used for the analysis is CiteSpace, developed by Chaomei Chen, and the source data are articles retrieved from the ISI Web of Science. Biofuel as a renewable form of energy is discussed, including its sources, types, methods of production, effects, market, and producers. Plans and strategies aimed at boosting the world biofuel market are also listed, together with recent research on the topic. Knowledge mapping, its types and methods, and the method and software used for the analysis are discussed as well.
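The core of such a knowledge map is a co-citation matrix: two references are linked whenever the same article cites them both. Below is a minimal sketch of that counting step, using invented article and reference names purely for illustration; CiteSpace performs this at scale, with additional clustering and burst detection on top.

```python
from collections import Counter
from itertools import combinations

# Hypothetical citing articles mapped to the references they cite.
articles = {
    "A1": ["Chen2006", "Demirbas2007", "Hill2006"],
    "A2": ["Demirbas2007", "Hill2006"],
    "A3": ["Chen2006", "Hill2006"],
}

# Two works are co-cited whenever they appear together in one reference list.
cocitations = Counter()
for refs in articles.values():
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

# The strongest links become the edges of the knowledge map.
for (a, b), count in cocitations.most_common():
    print(f"{a} -- {b}: co-cited {count} times")
```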
Abstract:
The last two decades have seen rapid change in the global economic and financial situation; economic conditions in many small and large underdeveloped countries began to improve, and these countries became recognized as emerging markets. This led to growth in global investment in these countries, partly spurred by expectations of higher returns, favorable risk-return opportunities, and better diversification alternatives for global investors. The process, however, has not been without problems, and it has emphasized the need for more information on these markets. In particular, the liberalization of financial markets around the world, the globalization of trade and companies, the recent formation of economic and regional blocs, and the rapid development of underdeveloped countries over the last two decades have posed a major challenge to the financial world and researchers alike. This doctoral dissertation studies one of the largest emerging markets, namely Russia. The motivation for investigating the Russian equity market includes, among other factors, its sheer size, Russia's rapid and robust economic growth since the turn of the millennium, its future prospects for international investors, and a number of major financial reforms implemented since the early 1990s. Another feature of the Russian economy that motivates this study is the 1998 financial crisis, considered one of the worst crises in recent times and affecting both developed and developing economies. Special attention is therefore paid to Russia's 1998 financial crisis throughout this dissertation. The thesis covers the period from the birth of the modern Russian financial markets to the present day, with particular focus on international linkages and the 1998 crisis. The study first identifies the risks associated with the Russian market and then deals with their pricing; finally, some insights on portfolio construction within the Russian market are presented. The first research paper considers the linkage of the Russian equity market to the world equity market by examining the international transmission of the 1998 financial crisis, utilizing the GARCH-BEKK model proposed by Engle and Kroner. The empirical results show evidence of a direct linkage between the Russian equity market and the world market in terms of both returns and volatility. The weakness of the linkage, however, suggests that the Russian equity market was only partially integrated into the world market, even though contagion is clearly visible during the crisis period. The second and third papers, co-authored with Mika Vaihekoski, investigate whether global, local, and currency risks are priced in the Russian stock market from a US investor's point of view. Furthermore, the dynamics of these sources of risk are studied, i.e., whether the prices of the global and local risk factors are constant or time-varying. We utilize the multivariate GARCH-M framework of De Santis and Gérard (1998). Like them, we find the price of global market risk to be time-varying. Currency risk is also found to be priced and highly time-varying in the Russian market. Moreover, our results suggest that the Russian market is partially segmented and that local risk is also priced in the market.
The model also implies that the biggest impact on the US market risk premium comes from the world risk component, whereas the Russian risk premium is, on average, driven mostly by the local and currency components. The purpose of the fourth paper is to examine the relationship between the Russian stock and bond markets, and in particular whether the correlations between the two asset classes are time-varying, using multivariate conditional volatility models. The Constant Conditional Correlation model of Bollerslev (1990), the Dynamic Conditional Correlation model of Engle (2002), and the asymmetric version of the Dynamic Conditional Correlation model of Cappiello et al. (2006) are used in the analysis. The empirical results do not support the assumption of constant conditional correlation: there is clear evidence of time-varying correlations between the Russian stock and bond markets, and both markets exhibit positive asymmetries. The implications of these results are useful for companies and international investors interested in investing in Russia. Our results give useful insights to those involved in minimising or managing financial risk exposures, such as portfolio managers, international investors, risk analysts, and financial researchers. When portfolio managers aim to optimize the risk-return relationship, the results indicate that, at least in the case of Russia, one should account for local market risk as well as currency risk when calculating the key inputs for the optimization. In addition, the pricing of exchange rate risk implies that exchange rate exposure is partly non-diversifiable and that investors are compensated for bearing this risk. Likewise, the international transmission of stock market volatility can profoundly influence corporate capital budgeting decisions, investors' investment decisions, and other business cycle variables. Finally, the weak integration of the Russian market and the low correlations between the Russian stock and bond markets offer international investors good opportunities to diversify their portfolios.
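The conditional-correlation models cited above share a two-step structure: fit a univariate GARCH(1,1) to each return series, then model the correlation of the standardized residuals. Below is a minimal sketch of the constant-correlation (CCC) variant on simulated placeholder data, assuming the third-party `arch` package is available; the DCC extension replaces the final constant with a recursion.

```python
import numpy as np
from arch import arch_model  # assumed available: pip install arch

rng = np.random.default_rng(0)
# Simulated placeholder returns standing in for Russian stock and bond indices.
stock = rng.standard_normal(1000)
bond = 0.3 * stock + rng.standard_normal(1000)

# Step 1: univariate GARCH(1,1) for each series, then standardize residuals.
std_resid = []
for series in (stock, bond):
    res = arch_model(series, p=1, q=1).fit(disp="off")
    std_resid.append(res.resid / res.conditional_volatility)

z = np.column_stack(std_resid)
# Step 2 (CCC, Bollerslev 1990): the conditional correlation is the sample
# correlation of the standardized residuals, held constant over time.
ccc = np.corrcoef(z.T)[0, 1]
print(f"constant conditional correlation: {ccc:.3f}")
# DCC (Engle 2002) instead lets the correlation evolve through the recursion
# Q_t = (1 - a - b) * Q_bar + a * z_{t-1} z_{t-1}' + b * Q_{t-1}.
```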
Abstract:
Objectives: The objectives of this study were to review the Institute of Medicine (IOM) criteria for priority-setting in research, adding new criteria where necessary, and to develop and evaluate the reliability and validity of the resulting priority score. Methods: Based on the evaluation of 199 research topics, forty-five experts identified additional criteria for priority-setting, rated their relevance, and ranked and weighted them in a three-round modified Delphi process. A final priority score was developed and evaluated. Internal consistency, test-retest and inter-rater reliability were assessed. Correlation with the experts' overall qualitative topic ratings was assessed as an approximation to validity. Results: All seven original IOM criteria were considered relevant, and two new criteria were added ("potential for translation into practice" and "need for knowledge"). Final ranks and relative weights differed from those of the original IOM criteria: "research impact on health outcomes" was considered the most important criterion (4.23), ahead of "burden of disease" (3.92). Cronbach's alpha (0.75) and test-retest stability (intraclass correlation coefficient = 0.66) for the final set of criteria were acceptable. The area under the receiver operating characteristic curve for the overall assessment of priority was 0.66. Conclusions: A reliable instrument for prioritizing topics in clinical and health services research has been developed. Further evaluation of its validity and impact on selecting research topics is required.
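The internal-consistency figure reported above is Cronbach's alpha, which compares the summed variance of individual criterion scores against the variance of the total score. A minimal sketch with hypothetical ratings (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rated topics in rows, criteria (items) in columns."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 topics scored on 4 criteria (1-5 scale).
ratings = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```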
Abstract:
The main objective of this research was to study the feasibility of incorporating organosolv semi-chemical triticale fibers as the reinforcing element in recycled high-density polyethylene (HDPE). In the first step, triticale fibers were characterized in terms of chemical composition and compared with other biomass species (wheat, rye, softwood, and hardwood). Organosolv semi-chemical triticale fibers were then prepared by the ethanolamine process and characterized in terms of their yield, kappa number, fiber length/diameter ratio, fines content, and viscosity; the results were compared with those of eucalypt kraft pulp. In the second step, the prepared fibers were examined as a reinforcing element for recycled HDPE composites. Coupled and non-coupled HDPE composites were prepared and tested for tensile properties. The results showed that adding the coupling agent maleated polyethylene (MAPE) significantly improved the tensile properties of the composites compared with the non-coupled samples and the plain matrix. Furthermore, the influence of MAPE on the interfacial shear strength (IFSS) was studied, and the contributions of both the fibers and the matrix to the composite strength were quantified using a numerical iterative method based on the Bowyer-Bader and Kelly-Tyson equations.
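In the Kelly-Tyson model, fibers shorter than the critical length transfer load only through interfacial shear, longer fibers can be loaded toward their full strength, and the matrix supplies the remainder. The sketch below evaluates those three contributions; all property values are illustrative assumptions, not the paper's data. The Bowyer-Bader procedure then iterates on the interfacial shear strength and an orientation factor until the prediction matches the measured stress-strain curve.

```python
# Illustrative constants (assumed, not from the paper):
tau = 15.0        # interfacial shear strength, MPa
sigma_f = 600.0   # intrinsic fiber tensile strength, MPa
sigma_m = 18.0    # matrix stress at composite failure, MPa
d = 20e-3         # fiber diameter, mm
lc = sigma_f * d / (2 * tau)  # critical fiber length, mm

def kelly_tyson(fractions, lengths, Vf):
    """Composite strength from sub-/supercritical fiber and matrix terms.
    fractions: fiber volume fraction in each length class (sums to Vf).
    lengths: mean fiber length of each class, mm."""
    x = sum(tau * l * v / d for l, v in zip(lengths, fractions) if l < lc)
    y = sum(sigma_f * v * (1 - lc / (2 * l))
            for l, v in zip(lengths, fractions) if l >= lc)
    z = (1 - Vf) * sigma_m
    return x + y + z

print(f"predicted strength: {kelly_tyson([0.1, 0.2], [0.2, 1.2], Vf=0.3):.1f} MPa")
```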
Abstract:
Synchronous machines with an AC converter are used mainly in large drives, for example in ship propulsion and in rolling mill drives in the steel industry. These motors are used because of their high efficiency, high overload capacity, and good performance in the field-weakening region. Present-day drives for electrically excited synchronous motors are equipped with position sensors, and most will be equipped with position sensors in the future as well. Such drives, which offer good dynamics, are mainly used in the metal industry. Drives without a position sensor can be used, for example, in ship propulsion and in large pump and blower drives; nowadays these drives, too, are equipped with a position sensor. The tendency is to avoid a position sensor where possible, since a sensor reduces the reliability of the drive and increases costs (the latter is not very significant for large drives). A new control technique for a synchronous motor drive combines Direct Flux Linkage Control (DFLC), based on a voltage model, with a supervising method (e.g. a current model). This combination is called the Direct Torque Control (DTC) method. In a position-sensorless drive, DTC can be implemented by using other supervising methods that keep the stator flux linkage origin centered. In this thesis, a method for observing the drift of the real stator flux linkage in a DTC drive is introduced, and it is shown how this method can serve as a supervising method that keeps the stator flux linkage origin centered. In the position-sensorless case, a synchronous motor can be started with DTC control when the method presented in this thesis for determining the initial rotor position is used. The load characteristics of such a drive are, however, not very good at low rotational speeds, and continuous operation at zero and low speed is not possible, partly because of problems related to the flux linkage estimate. For operation in the low-speed region, a stator current control method based on the DFLC modulator (DMCC) is presented. With the DMCC, it is possible to start and operate a synchronous motor at zero speed and at low rotational speeds in general. The DMCC is necessary in situations where high torque (e.g. nominal torque) is required at the starting moment, or if the motor runs for several seconds at zero speed or in the low-speed range (up to 2 Hz). The behaviour of the described methods is demonstrated with test results obtained on a direct flux linkage and torque controlled test drive with a 14.5 kVA, four-pole salient-pole synchronous motor with a damper winding and electric excitation. The static accuracy of the drive is verified by measuring the torque under static load, and the dynamics of the drive are demonstrated in load transient tests. The performance of the drive concept presented in this work is sufficient, for example, for ship propulsion and for large pump drives. Furthermore, the developed methods are almost independent of the machine parameters.
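At the heart of DFLC is the voltage-model flux estimator, which integrates the stator voltage minus the resistive drop; the drift that the thesis's supervising method corrects arises because this open integration accumulates offset errors. A minimal sketch of the estimator and the resulting torque estimate, with illustrative parameter values (not the test drive's actual data):

```python
import numpy as np

R_s = 0.05   # stator resistance, ohm (illustrative value)
dt = 50e-6   # control cycle length, s (illustrative value)

def update_flux(psi, u, i):
    """Voltage-model estimate: psi_s = integral of (u_s - R_s * i_s) dt.
    psi, u, i are complex space vectors in stator coordinates."""
    return psi + (u - R_s * i) * dt

def torque(psi, i, p=2):
    """Electromagnetic torque from the estimated flux and measured current;
    p is the number of pole pairs."""
    return 1.5 * p * (np.conj(psi) * i).imag

# One control cycle with made-up voltage and current samples:
psi = update_flux(0.9 + 0.1j, u=310 + 5j, i=40 - 12j)
print(psi, torque(psi, 40 - 12j))
```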
LOW-COST ANALYZER FOR THE DETERMINATION OF PHOSPHORUS BASED ON OPEN-SOURCE HARDWARE AND PULSED FLOWS
Abstract:
The need for automated analyzers for industrial and environmental samples has triggered research into new and cost-effective strategies for the automation and control of analytical systems. The widespread availability of open-source hardware, together with novel analytical methods based on pulsed flows, has opened the possibility of implementing standalone automated analytical systems at low cost. Among the areas that can benefit from this approach are the analysis of industrial products and effluents and environmental analysis. In this work, a multi-pumping flow system is proposed for the determination of phosphorus in effluents and polluted water samples. The system employs photometric detection based on the formation of molybdovanadophosphoric acid, and the fluidic circuit is built using three solenoid micropumps. Detection is implemented with a low-cost LED-photodiode photometric system, and the whole analyzer is controlled by an open-source Arduino Uno microcontroller board. The optimization of the timing to ensure color development, and of the pumping cycle, is discussed for the proposed implementation. Experimental results evaluating the system's behavior are presented, verifying a linear relationship between relative absorbance and phosphorus concentration at levels up to 50 mg L⁻¹.
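The reported calibration follows from the Beer-Lambert law: relative absorbance computed from the photodiode signal should grow linearly with phosphorus concentration. A minimal sketch of that calibration step on hypothetical readings (the numbers below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Hypothetical photodiode readings: dark signal, blank, and six standards.
dark, blank = 12.0, 980.0
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])      # mg/L phosphorus
signal = np.array([980.0, 905.0, 836.0, 771.0, 713.0, 659.0])

# Relative absorbance from the LED-photodiode pair (Beer-Lambert law).
absorbance = -np.log10((signal - dark) / (blank - dark))

# Linear calibration: absorbance versus concentration.
slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.4f}*C + {intercept:.4f}, r = {r:.4f}")
```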
Abstract:
The flow of information within modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual's ability to handle text or speech input. For the majority of us this presents no problem, but some individuals would benefit from other means of conveying information, e.g. a signed information flow. Over recent decades, new results from various disciplines have pointed towards a common background and common processing for sign and speech, and this was one of the key issues I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, which is why I wanted to design test batteries for signers analogous to widely used speech perception tests - to find out whether the results for signers would be the same as in speakers' perception tests. One of the key findings within biology - and more precisely in its effects on speech and communication research - is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and these seem to converge on the hypothesis that all human communication has a common core. In this thesis, speech and sign are treated as equal and analogous counterparts of communication, all research methods used for speech are modified for sign, and both are investigated using similar test batteries. Furthermore, production and perception of speech and sign are studied separately. An additional framework for studying production is provided by gesture research using cry sounds; the results of the cry sound research are compared with results from children acquiring sign language, and they show that individuality manifests itself very early in human development. Articulation in adults, in both speech and sign, is studied from two perspectives: normal production, and re-learning production when the articulatory apparatus has been changed. Normal production is studied in both speech and sign, while the effects of changed articulation are studied for speech; both studies use carrier sentences. In addition, sign production is studied by giving the informants the opportunity for spontaneous production. The production data from the signing informants also serve as the basis for the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied using the informants' answers in forced-choice identification and discrimination tasks, and these answers were compared across language modalities. Three informant groups participated in the sign perception tests: native signers, sign language interpreters, and Finnish adults with no knowledge of any signed language. This made it possible to investigate which characteristics of the results were due to the language per se and which were due to the change in modality itself. As the analogous test batteries yielded similar results across different informant groups, some common threads could be observed. From very early on in the acquisition of speech and sign, the results were highly individual; however, the results were the same within one individual when the same test was repeated. This individuality followed the same patterns across different language modalities and, on some occasions, across language groups.
As both modalities yield similar answers to analogous study questions, this has led us to provide methods for generating basic input for sign language applications, i.e. signing avatars. It has also given us answers to questions on the precision of the animation and its intelligibility for users: what parameters govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order to be intelligible. The results also lend additional support to the well-known fact that intelligibility is not the same as naturalness; in some cases, as shown in the design of the sign perception test battery, naturalness actually decreases intelligibility, which must also be taken into consideration when designing applications. All in all, the results from each of the test batteries, whether for signers or speakers, yield strikingly similar patterns, offering yet further support for a common core for all human communication. We can thus modify and deepen the phonetic framework models for human communication based on the knowledge obtained from the test batteries in this thesis.
Abstract:
Dietary and microbial factors are thought to contribute to the rapidly increasing prevalence of type 1 diabetes (T1D) in many countries worldwide. The impact of these factors on immune regulation and diabetes development in non-obese diabetic (NOD) mice is investigated in this thesis. Diabetes can be prevented in NOD mice through dietary manipulation. Diet affects the composition of the intestinal microbiota, which may subsequently influence intestinal immune homeostasis. However, the specific effects of anti-diabetogenic diets on gut immunity and the explicit associations between intestinal immune disruption and the onset of type 1 diabetes remain unclear. The research presented herein demonstrates that newly weaned NOD mice suffer from mild colitis, which shifts the colonic immune cell balance towards a proinflammatory status. Several aberrations can also be observed in the peritoneal B cells of NOD mice: increased expression of activation markers, increased trafficking to the pancreatic lymph nodes, and significantly higher antigen-presenting cell (APC) efficiency towards insulin-specific T cells. A shift towards inflammation is likewise observed in the colon of germ-free NOD mice, but signs of peritoneal B cell activation are lacking in these mice. Remarkably, most of the abnormalities in the colon, the peritoneal macrophages, and the peritoneal B cell APC activity of NOD mice are abrogated when the mice are maintained on a diabetes-preventive, soy-based diet (ProSobee) from the time of weaning. Dietary and microbial factors hence have a significant impact on colonic immune regulation and peritoneal B cell activation, and it is suggested that these factors influence diabetes development in NOD mice.
Abstract:
Despite declining trends in morbidity and mortality, cardiovascular diseases have a considerable impact on Finnish public health, and a goal of Finnish health policy is to reduce inequalities in health and mortality among population groups. The aim of this study was to assess inequalities in cardiovascular disease according to socioeconomic status (SES), language group, and other sociodemographic characteristics. The main data source comprised events in 35-99-year-old men and women registered in the population-based FINMONICA and FINAMI myocardial infarction registers during 1988-2002. Information on population group characteristics was obtained from Statistics Finland, and additional data were derived from the FINMONICA and FINSTROKE stroke registers and the FINRISK Study. SES, measured by income level, was a major determinant of acute coronary syndrome (ACS) mortality. Among middle-aged men, the 28-day mortality rate in the lowest of six income groups was 5.2 times, and the incidence 2.7 times, that of the highest income group; among women, the differences were even larger. Among unmarried persons, the incidence of ACS was approximately 1.6 times that of married persons and the prognosis was significantly worse, in both men and women and independent of age. Higher age-standardized attack rates of ACS and stroke were found among Finnish-speaking than among Swedish-speaking men in Turku, and these differences could not be completely explained by SES; modest differences in traditional risk factor levels between the language groups may explain part of the observed inequality in morbidity and mortality. In conclusion, there are considerable differences in ACS and stroke morbidity and mortality between socioeconomic and sociodemographic groups in Finland. Focusing measures to reduce the excess morbidity and mortality on high-risk groups could decrease the economic burden of cardiovascular diseases and would thus be an important public health goal in Finland.
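The age-standardized attack rates compared above are direct standardizations: age-specific rates weighted by a common standard population, so that groups with different age structures become comparable. A minimal sketch with invented numbers (not the study's data):

```python
import numpy as np

# Hypothetical age-specific ACS attack rates per 100,000 person-years
# for three age bands (35-54, 55-74, 75-99).
rates_finnish = np.array([120.0, 350.0, 900.0])
rates_swedish = np.array([95.0, 280.0, 760.0])
weights = np.array([0.5, 0.35, 0.15])   # standard population shares

# Direct age standardization: weighted sum of age-specific rates.
std_fi = (weights * rates_finnish).sum()
std_sv = (weights * rates_swedish).sum()
print(f"standardized rates: {std_fi:.0f} vs {std_sv:.0f}; "
      f"rate ratio {std_fi / std_sv:.2f}")
```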
Abstract:
Ever since Siad Barre's regime was toppled at the beginning of the 1990s, Somalia has been without an effective central government, and as a result it has remained in an anarchic condition of state collapse for nearly two decades. This anarchy has often been put forward as a potential breeding ground for terrorism, and in response to this threat the United States has undertaken several policies, initiatives, and operations in the Horn of Africa generally and in Somalia specifically. This descriptive study undertakes a twofold analysis. First, conditions in present-day Somalia as well as Somali history are analyzed to evaluate Somalia's potential as a terrorist base of operations or as a recruiting or staging area. Second, US strategies and actions are analyzed to evaluate the adequacy of the US response to the terrorist threat Somalia poses. Material for the analyses has been derived from anthropological, political, and security studies dealing with Somalia, augmented by a wide range of news coverage, both western and non-western. Selected US policy documents from different levels have been chosen to represent US strategies for the Global War on Terrorism. Because Somali social institutions, such as the clan system, carry great weight in Somali society, Somalia is a difficult area of operations for terrorist networks. In addition, the changing nature of Somali alliances and the tangled webs of conflict that characterize present-day Somalia aggravate the difficulties that foreign terrorist networks would encounter, should they choose to try to utilize the country to any great extent. The US has taken potential terrorism threats in Africa, and specifically in Somalia, very seriously, and US actions in Somalia have mainly focused on apprehending or neutralizing terror suspects. Such policies, coupled with backing the Ethiopian invasion of Somalia, may actually have ended up increasing Somalia's terror potential.
Abstract:
The rapidly growing gaming industry, which spans PC, console, online, and other games, attracts the attention of investors and analysts who try to understand what drives changes in gaming companies' stock prices. This master's thesis shows evidence that, besides long-established types of events (M&A and dividend payments), these companies' stock price changes depend on industry-specific events. I analyzed an event type specific to the gaming industry - game releases - with respect to several subdivisions: new sequels, game ratings, and whether a game was developed in-house by the publisher or outsourced. The thesis analyzes the stock prices of 55 gaming companies from all over the world over a five-year research period, from April 2008 to April 2013. Using the event study method, the research shows that all the analyzed event types have a significant influence on the stock prices of gaming companies. Acquisitions in the industry affect bidders' and targets' stock prices positively, and merger events likewise cause positive stock price reactions, whereas dividend payments and game releases influence stock prices negatively; the game release effect amounts to a cumulative average abnormal return (CAAR) drop of up to -2.2% during the first ten days after release. Having examined different kinds of events and identified the direction of their impact, this thesis can be of value to investors seeking profits in the gaming industry and to other interested parties.
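In an event study, each stock's "normal" return is first estimated over a window before the event (here via the market model), abnormal returns are the deviations from that benchmark during the event window, and the CAAR averages their cumulative sums across events. A minimal sketch on simulated placeholder data; the window lengths and all numbers are assumptions, not the thesis's design:

```python
import numpy as np

def caar(returns, market, event_idx, est_win=120, event_win=10):
    """Cumulative average abnormal return across events (market model).
    returns, market: aligned 1-D arrays of daily returns.
    event_idx: indices of event days (e.g., game release dates)."""
    cars = []
    for t in event_idx:
        est_r = returns[t - est_win:t]
        est_m = market[t - est_win:t]
        beta, alpha = np.polyfit(est_m, est_r, 1)   # market model fit
        ar = returns[t:t + event_win] - (alpha + beta * market[t:t + event_win])
        cars.append(ar.cumsum())                     # CAR per event
    return np.mean(cars, axis=0)                     # CAAR over the window

# Simulated placeholder data standing in for a game publisher's stock.
rng = np.random.default_rng(1)
mkt = rng.normal(0, 0.01, 1500)
stk = 0.8 * mkt + rng.normal(0, 0.01, 1500)
print(caar(stk, mkt, event_idx=[300, 700, 1100]))
```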
Abstract:
The purpose of this master's thesis was to investigate the effects that the benefits obtained from reading a newspaper and from using its website have on behavioral outcomes such as word-of-mouth behavior and willingness to pay. Several other antecedents of willingness to pay were used as control variables; however, their interrelations were not hypothesized. The empirical part focused on a case company, a Finnish regional newspaper. The empirical research was conducted using a quantitative method, with data collected via an online survey placed on the newspaper's website during 2010; in total, 1,001 responses were collected. The results showed that the benefits obtained both from the traditional printed newspaper and from the online edition have positive effects on word-of-mouth about the newspaper and its website. However, the benefits obtained from reading the printed newspaper turned out to have no effect on the willingness to pay for it, and only the interpersonal and convenience benefits obtained from using the newspaper's website influence the willingness to pay for the website. Finally, willingness to pay for the bundle of the printed newspaper and website access is positively affected only by the information/learning benefits obtained from reading the newspaper and by the interpersonal benefits obtained from using the website.
Abstract:
Cloud computing is a practically relevant paradigm in computing today, and testing is one of the distinct areas where it can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts, focusing on issues related to the adoption, use, and effects of cloud-based testing. The study applied empirical research methods: data was collected through interviews with practitioners from 30 organizations and analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing; the second observed cloud-based testing in real-life practice; the third analysed quality in the context of cloud application development; and the fourth studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable for testing and application development, as well as for other areas, e.g., game development. The research identified the benefits, challenges, requirements, and effects of cloud-based testing, and formulated a roadmap and strategy for adopting it. The study also explored quality issues in cloud application development and, as a special case, included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, to evaluate practical cloud-based testing work, and to assess the appropriateness of cloud-based testing for specific testing needs.
Abstract:
This study discusses the procedures of value co-creation in the gaming industry. The purpose was to identify the procedures at work in the current video gaming industry, addressing the main research problem of how value is co-created in the industry, followed by three sub-questions: (i) What is value co-creation in the gaming industry? (ii) Who participates in value co-creation in the gaming industry? (iii) What procedures are involved in value co-creation in the gaming industry? The theoretical background of the study consists of literature relating to marketing theory: the notion of value, the conventional understanding of value creation, the value chain, the co-creation approach, and the co-production approach. The research adopted a qualitative approach: the researcher used a Web 2.0 interface as the relationship platform, collected data from social networks, and applied the netnographic method to analyze them. The findings show that customers and companies co-create an optimal level of value when they interact with each other, and that value is also co-created among the customers themselves. Mostly, however, it was the C2C interactions, discussions, and dialogue threads emerging around the main discussion that facilitated value co-creation. Companies should therefore exploit, and further motivate, develop, and support, the interactions between customers participating in value creation. A hierarchy of value co-creation processes is derived from the identified challenges of the value co-creation approach and from the discussion forum data analysis. Overall, three general sets and seven topics were found that explore the phenomenon of customer-to-customer (C2C) and business-to-customer (B2C) interaction and debate for value co-creation through user-generated content; these topics describe how gamers contribute and interact in co-creating value along with companies. A methodical survey of the current research literature identified several evolving streams of value research: the general management perspective, new product development and innovation, virtual customer environments, service science, and service-dominant logic. Overall, the topics offer a range of practical and conceptual implications for engaging and managing gamers in social networks to augment the customers' value co-creation process.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014