40 results for post-processing method


Relevance:

30.00%

Publisher:

Abstract:

Retaining players and re-attracting those who have switched away has long been a central topic for social network game (SNG) providers at the post-adoption stage of online gaming. However, little research has explored players’ post-adoption behavior by incorporating both the continuance intention and the switching intention. In addition, traditional IS continuance theories were developed mainly to investigate users’ continued use of utilitarian information systems (IS), so they may fall short in explaining the continued use of hedonic IS. Furthermore, compared with the richer literature on IS continuance, far less attention has been paid to IS switching, leaving a dearth of knowledge on the subject despite the increasing incidence of switching in the IS field. Addressing these limitations, this study examines the determinants of SNG players’ two post-adoption behaviors: the continuance intention and the switching intention. The study takes a positivist approach and uses a survey research method to test five proposed research models, based respectively on the Unified Theory of Acceptance and Use of Technology 2, Uses and Gratifications Theory, the Push-Pull-Mooring model, Cognitive Dissonance Theory, and a self-developed model, with empirical data collected from the players of one of the largest SNG providers in China. A total of 3919 valid responses and 541 valid responses were used to examine the continuance intention and the switching intention, respectively, with structural equation modeling (SEM) as the data analysis method. The proposed research models are supported by the empirical data. The continuance intention is determined by enjoyment, fantasy, escapism, social interaction, social presence, social influence, achievement and habit.
The switching intention is determined by enjoyment, satisfaction, subjective norms, descriptive norms, alternative attractiveness, the need for variety, change experience, and adaptation cost. This study contributes to IS theory in three important ways. First, it shows that IS switching should be studied alongside IS continuance in post-adoption research. Second, a modern IS is usually multi-functional and SNG players have multiple reasons for using an SNG, so a player’s hedonic, social and utilitarian beliefs about continued use all exert significant effects on the continuance intention. Third, the determinants of the switching intention mainly exert push, pull, and mooring effects: players’ beliefs about their current SNG and the available alternatives, as well as their individual characteristics, are all significant determinants, and players combine these effects to form the switching intention. Finally, the study presents its limitations and suggestions for future research.

Abstract:

Besides steel, the steel industry produces solid mineral by-products, or slags, while emitting large quantities of carbon dioxide (CO2). Slags consist of various silicates and oxides formed in chemical reactions between the iron ore and the fluxing agents during high-temperature processing at the steel plant. Currently, these materials are recycled in the ironmaking processes, used as aggregates in construction, or landfilled as waste. The utilization rate of steel slags can be increased by selectively extracting components from the mineral matrix. For example, aqueous solutions of ammonium salts such as ammonium acetate, chloride and nitrate extract calcium quite selectively even at ambient temperature and pressure. After the residual solids have been separated from the solution, calcium carbonate can be precipitated by feeding a CO2 flow through the solution. Precipitated calcium carbonate (PCC) is used as a filler material in various applications. Its largest consumer is the papermaking industry, which uses PCC because it enhances the optical properties of paper at a relatively low cost. Traditionally, PCC is manufactured from limestone, which is first calcined to calcium oxide, then slaked with water to calcium hydroxide and finally carbonated to PCC. This process emits large amounts of CO2, mainly because of the energy-intensive calcination step. This thesis presents research work on the scale-up of the above-mentioned ammonium salt based calcium extraction and carbonation method, named Slag2PCC. Extending the scope of the earlier studies, it is now shown that the parameters which mainly affect the calcium utilization efficiency are the solid-to-liquid ratio of steel slag and ammonium salt solvent solution during extraction, the mean diameter of the slag particles, and the slag composition, especially the fractions of total calcium, silicon, vanadium and iron as well as the fraction of free calcium oxide.
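For the ammonium chloride solvent, the overall chemistry of the extraction–carbonation cycle can be sketched as follows (a simplified scheme, assuming free lime, CaO, is the reactive calcium phase in the slag):

CaO(s) + 2 NH4Cl(aq) → CaCl2(aq) + 2 NH3(aq) + H2O(l)                (extraction)
CaCl2(aq) + 2 NH3(aq) + H2O(l) + CO2(g) → CaCO3(s) + 2 NH4Cl(aq)     (carbonation)

In principle, the ammonium salt is regenerated in the carbonation step and can be recirculated to extract more calcium.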
Regarding extraction kinetics, slag particle size, solid-to-liquid ratio and the molar concentration of the solvent solution have the largest effects on the reaction rate. Solvent concentrations above 1 mol/L NH4Cl cause leaching of other elements besides calcium. Some of these, such as iron and manganese, result in solution coloring, which can be disadvantageous for the quality of the PCC product. Based on chemical composition analysis of the produced PCC samples, however, the product quality is largely similar to that of commercial products. Increasing the novelty of the work, other important parameters for assessing PCC quality, such as particle size distribution and crystal morphology, are studied as well. As in the traditional PCC precipitation process, the ratio of calcium to carbonate ions controls the particle shape: a higher [Ca2+]/[CO3(2-)] ratio favors precipitation of the calcite polymorph, while vaterite forms when carbonate species are present in excess. The third main polymorph, aragonite, is only formed at elevated temperatures, above 40-50 °C. In general, longer precipitation times cause transformation of vaterite to calcite or aragonite, but also result in particle agglomeration. The chemical equilibrium of ammonium and calcium ions and dissolved ammonia, which controls the solution pH, affects the particle sizes, too. An initial pH of 12-13 during carbonation favors non-agglomerated particles with diameters of 1 μm and smaller, while pH values of 9-10 generate more agglomerates of 10-20 μm. As part of the research work, these findings are implemented in demonstration-scale experimental process setups. For the first time, the Slag2PCC technology is tested at a scale of ~70 liters instead of laboratory scale only. Additionally, the design of a setup of several hundred liters is discussed.
For these purposes, various process units, such as inclined settlers and filters for solids separation, pumps and stirrers for material transfer and mixing, as well as gas feeding equipment, are dimensioned and developed. With overall emissions reduction of the current industrial processes and good product quality as the main targets, the performed partial life cycle assessment (LCA) indicates that it is most beneficial to use low-concentration ammonium salt solutions in the Slag2PCC process. In this manner the post-treatment of the products does not require extensive washing and drying equipment, which would otherwise increase the CO2 emissions of the process. The low solvent concentration Slag2PCC process achieves negative CO2 emissions; thus, it can be seen as a carbon capture and utilization (CCU) method which actually reduces anthropogenic CO2 emissions compared to the alternative of not using the technology. Even if the amount of steel slag is too small for any substantial mitigation of global warming, the process can have both financial and environmental significance for individual steel manufacturers as a means of reducing the amounts of emitted CO2 and landfilled steel slag. Alternatively, it is possible to introduce the carbon dioxide directly into the mixture of steel slag and ammonium salt solution. This process would generate a 60-75% pure calcium carbonate mixture, with the remaining 25-40% consisting of residual steel slag. This calcium-rich material could be re-used in ironmaking as a fluxing agent instead of natural limestone. Even though this process option would require less process equipment than the Slag2PCC process, it still needs further study regarding the practical usefulness of its products.
Nevertheless, compared to several other CO2 emission reduction methods studied around the world, the processes developed and studied in this thesis have the advantage of existing markets for the produced materials, thus also giving a financial incentive for applying the technology in practice.

Abstract:

The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data grows rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses that need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make incomplete data complete, and thus easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Second, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as gene microarray studies.
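The k-NN imputation mentioned above can be sketched in a few lines. This is a hedged, minimal illustration of the general technique — missing entries in a row are filled from the most similar rows — not the exact algorithm or code used in the thesis:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaN entries in each row from its k nearest rows.

    Similarity is the root-mean-square difference over the columns
    observed in both rows; missing entries are replaced with the
    neighbours' column means. A simplified sketch, not the thesis code.
    """
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any():
            continue
        # Distance from row i to every candidate donor row
        dists = []
        for j, other in enumerate(X):
            if j == i:
                continue
            shared = ~np.isnan(row) & ~np.isnan(other)
            # A donor must observe the entries we need to fill
            if shared.any() and not np.isnan(other[missing]).any():
                d = np.sqrt(np.mean((row[shared] - other[shared]) ** 2))
                dists.append((d, j))
        dists.sort()
        neighbours = [j for _, j in dists[:k]]
        # Impute each missing entry with the neighbours' column mean
        filled[i, missing] = X[neighbours][:, missing].mean(axis=0)
    return filled
```

A guided variant, as in the thesis, would restrict the neighbour search using external biological annotations; here the neighbours come from the data matrix alone.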
Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within the standard force-directed graph layout algorithm.
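As an illustration of the force-directed principle underlying such layout algorithms, here is a minimal single-level Fruchterman–Reingold-style sketch. The multilevel optimization described above would coarsen the graph, lay out the coarse version, and refine the result level by level; that machinery is omitted here, so this is an assumption-laden toy, not the thesis algorithm:

```python
import numpy as np

def force_directed_layout(edges, n, iters=200, seed=0):
    """Single-level force-directed layout: all node pairs repel,
    edge endpoints attract, displacements are capped by a cooling
    'temperature'. Returns an (n, 2) array of 2-D positions."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    k = 1.0 / np.sqrt(n)   # ideal edge length for a unit-area frame
    t = 0.1                # max displacement per iteration
    for _ in range(iters):
        disp = np.zeros((n, 2))
        # Repulsive forces between every pair of nodes (self term is zero)
        for i in range(n):
            delta = pos[i] - pos
            dist = np.maximum(np.linalg.norm(delta, axis=1), 1e-9)
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        # Attractive forces along edges
        for a, b in edges:
            delta = pos[a] - pos[b]
            dist = max(np.linalg.norm(delta), 1e-9)
            f = delta / dist * (dist * dist / k)
            disp[a] -= f
            disp[b] += f
        # Limit each displacement to the current temperature, then cool
        lengths = np.maximum(np.linalg.norm(disp, axis=1), 1e-9)
        pos += disp / lengths[:, None] * np.minimum(lengths, t)[:, None]
        t *= 0.99
    return pos
```

The multilevel idea speeds this up on large networks because most iterations run on much smaller coarsened graphs.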

Abstract:

Feature extraction is the part of pattern recognition in which sensor data are transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can serve as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a central role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented.
Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
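The basic LBP operator underlying these features is simple to state: each pixel receives an 8-bit code in which bit i is set when the i-th neighbour in the 3×3 window is at least as bright as the centre pixel. The following NumPy sketch implements this standard operator, not the thesis's embedded or focal-plane implementation:

```python
import numpy as np

def lbp_3x3(img):
    """8-bit Local Binary Pattern codes for the interior pixels of a
    grayscale image: bit i is set when the i-th neighbour >= centre."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # Neighbour offsets ordered clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(centre.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (neighbour >= centre).astype(int) << bit
    return code
```

Because each bit depends only on the sign of an intensity difference, the code is invariant to any monotonic illumination change, which is exactly the property the thesis exploits.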

Abstract:

Novel word learning has rarely been studied in people with aphasia (PWA), although it can provide a relatively pure measure of their learning potential and thereby contribute to the development of effective aphasia treatment methods. The main aim of the present thesis was to explore the capacity of PWA for associative learning of word–referent pairings and the cognitive-linguistic factors related to it. More specifically, the thesis examined learning and long-term maintenance of the learned pairings, the role of lexical-semantic abilities in learning, and the acquisition of phonological versus semantic information in associative novel word learning. Furthermore, the effect of modality on associative novel word learning and the neural underpinnings of successful learning were explored. The learning experiments utilized the Ancient Farming Equipment (AFE) paradigm, which employs drawings of unfamiliar referents and their unfamiliar names. Case studies of Finnish- and English-speaking people with chronic aphasia (n = 6) were conducted. The learning results of the PWA were compared to those of healthy control participants, and active production of the novel words and their semantic definitions was used as the learning outcome measure. The PWA learned novel word–novel referent pairings, but the variation between individuals was very wide, from modest outcomes (Studies I–II) up to levels on a par with healthy individuals (Studies III–IV). In incidental learning of semantic definitions, none of the PWA reached the performance level of the healthy control participants. Some PWA maintained part of the learning outcomes up to months post-training, and one individual showed full maintenance of the novel words at six months post-training (Study IV). Intact lexical-semantic processing skills promoted learning in PWA (Studies I–II), but poor phonological short-term memory capacity did not rule out novel word learning.
In two PWA with successful learning and long-term maintenance of novel word–novel referent pairings, learning relied on orthographic input, while auditory input led to significantly inferior learning outcomes (Studies III–IV). In one of these individuals, this previously undetected modality-specific learning ability was successfully translated into training with familiar but inaccessible everyday words (Study IV). Functional magnetic resonance imaging revealed that this individual had a disconnected dorsal speech processing pathway in the left hemisphere, but a right-hemispheric neural network mediated successful novel word learning via reading. Finally, the results of Study III suggested that the cognitive-linguistic profile may not always predict the optimal learning channel for an individual with aphasia. Small-scale learning probes therefore seem useful for revealing functional learning channels in post-stroke aphasia.

Abstract:

This study set out to propose a project, based on the already tested and successful practices of foreign businesses, that can help keep the final price of innovation at the desired level. The research first compiles the available information on the aforementioned concepts to complete the theoretical background. The author then explains the methodology used and the process of evidence collection, after which the collected data are analyzed to obtain results that are compared with the stated objectives in the final part. The conclusions of the research and proposed directions for further work are given in the last part. The author chose a qualitative approach because it performs well for the analysis of small amounts of data. The case study method was used because it allowed an in-depth analysis of the collected information about a particular organization, making it possible to examine the system's details in comparison. The results are considered valid and applicable to other studies. As a result, the thesis proposes undertakings aimed at solving problems with the provision of services and the development of communications. In addition, the thesis proposes a postal-service database for Russian Post in which, on request, a customer is given an account through which he or she can access postal services via a PC or an information terminal in a post office and order delivery of postal products, each assigned a private identification code. The project's payback period has also been calculated.

Abstract:

There are many opportunities to utilize coconut in Nzema to support farmers. Coconut oil, which is mainly used for food preparation in Nzema, could be utilized as fuel to help overcome the energy crisis in Ghana. At present, coconut oil in Nzema is used neither for transportation nor for electricity generation. A small fraction of the waste husks and shells is used as fuel for heating in homes, but the greater amount is left to rot or is burned on the coconut plantations. In addition, some portion of the granulated coconut kernel is sometimes used as pig feed, while the rest is left as waste at the oil processing site. In this thesis, the author identified alternative ways of utilizing coconut, for instance the use of coconut husks and shells for charcoal production and the use of coconut trunks as construction material. It is envisaged that exploring these alternatives will not only reduce carbon emissions in the country but will also contribute significantly to the sustainability of the local agro-industry.

Abstract:

The general aim of the thesis was to study university students’ learning from the perspective of regulation of learning and text processing. The data were collected from the two academic disciplines of medical and teacher education, which share the features of highly scheduled study, a multidisciplinary character, a complex relationship between theory and practice and a professional nature. Contemporary information society poses new challenges for learning, as it is not possible to learn all the information needed in a profession during a study programme. Therefore, it is increasingly important to learn how to think and learn independently, how to recognise gaps in and update one’s knowledge and how to deal with the huge amount of constantly changing information. In other words, it is critical to regulate one’s learning and to process text effectively. The thesis comprises five sub-studies that employed cross-sectional, longitudinal and experimental designs and multiple methods, from surveys to eye tracking. Study I examined the connections between students’ study orientations and the ways they regulate their learning. In total, 410 second-, fourth- and sixth-year medical students from two Finnish medical schools participated in the study by completing a questionnaire measuring both general study orientations and regulation strategies. The students were generally deeply oriented towards their studies. However, they regulated their studying externally. Several interesting and theoretically reasonable connections between the variables were found. For instance, self-regulation was positively correlated with deep orientation and achievement orientation and was negatively correlated with non-commitment. However, external regulation was likewise positively correlated with deep orientation and achievement orientation but also with surface orientation and systematic orientation. 
It is argued that external regulation might function as an effective coping strategy in the cognitively loaded medical curriculum. Study II focused on medical students’ regulation of learning and their conceptions of the learning environment in an innovative medical course where traditional lectures were combined with problem-based learning (PBL) group work. First-year medical and dental students (N = 153) completed a questionnaire assessing their regulation strategies of learning and views about the PBL group work. The results indicated that external regulation and self-regulation of the learning content were the most typical regulation strategies among the participants. In line with previous studies, self-regulation was connected with study success. Strictly organised PBL sessions were not considered as useful as lectures, although the students’ views of the teacher/tutor and the group were mainly positive. Therefore, developers of teaching methods are challenged to think of new solutions that facilitate reflection on one’s learning and that support the development of self-regulation. In Study III, a person-centred approach to studying regulation strategies was employed, in contrast to the traditional variable-centred approach used in Study I and Study II. The aim of Study III was to identify different regulation strategy profiles among medical students (N = 162) across time and to examine to what extent these profiles predict study success in preclinical studies. Four regulation strategy profiles were identified, and connections with study success were found. Students with the lowest self-regulation and with an increasing lack of regulation performed worse than the other groups. As the person-centred approach enables the identification of students with diverse regulation patterns, it could be used in supporting student learning and in facilitating the early diagnosis of learning difficulties.
In Study IV, 91 student teachers participated in a pre-test/post-test design in which they answered open-ended questions about a complex science concept both before and after reading either a traditional, expository science text or a refutational text that prompted the reader to revise his or her beliefs in line with scientific views of the phenomenon. The student teachers also completed a questionnaire concerning their regulation and processing strategies. The results showed that the students’ understanding improved after the text-reading intervention and that the refutational text promoted understanding better than the traditional text. Additionally, regulation and processing strategies were found to be connected with understanding of the science phenomenon. A weak trend suggested that weaker learners benefit more from the refutational text. It seems that learners with effective learning strategies are able to pick out the relevant content regardless of the text type, whereas weaker learners might benefit from refutational passages that contrast the most typical misconceptions with scientific views. The purpose of Study V was to use eye tracking to determine how third-year medical students (n = 39) and internal medicine residents (n = 13) read and solve patient case texts. The results revealed differences between medical students and residents in processing patient case texts: compared to the students, the residents were more accurate in their diagnoses and processed the texts significantly faster and with a lower number of fixations. Different reading patterns were also found. The observed differences between medical students and residents in processing patient case texts could be used in medical education to model expert reasoning and to teach how a good medical text should be constructed.
The main findings of the thesis indicate that even among very selected student populations, such as high-achieving medical students or student teachers, there is considerable variation in regulation strategies of learning and text processing. As these learning strategies are related to successful studying, students enter educational programmes with rather different chances of managing and achieving success. Further, the ways of engaging in learning seldom centre on a single strategy or approach; rather, students seem to combine several strategies to a certain degree. Sometimes, which way of learning is best can be a matter of perspective; therefore, the reality of studying in higher education is often more complicated than the simplistic view of self-regulation as a good quality and external regulation as a harmful one. The beginning of university studies may be stressful for many, as the gap between high school and university studies is huge, and strategies that were adequate during high school might not work as well in higher education. Therefore, it is important to map students’ learning strategies and to encourage them to use high-quality learning strategies from the beginning. Instead of separate courses on learning skills, the integration of these skills into course contents should be considered. Furthermore, learning complex scientific phenomena could be facilitated by paying attention to high-quality learning materials and texts and to other support from the learning environment at the university as well. Eye tracking seems to have great potential for evaluating performance and growing diagnostic expertise in text processing, although more research using texts as stimuli is needed.
Both medical and teacher education programmes and the professions themselves are challenging in terms of their multidisciplinary nature and increasing amounts of information and therefore require good lifelong learning skills during the study period and later in work life.

Abstract:

Current hearing-assistive technology performs poorly in noisy multi-talker conditions. The goal of this thesis was to establish the feasibility of using electroencephalography (EEG) to guide acoustic processing in such conditions. To attain this goal, the research developed a model via the constructive research method, relying on a literature review. Several approaches have been shown to improve the performance of hearing-assistive devices under multi-talker conditions, namely beamforming spatial filtering, model-based sparse coding shrinkage, and onset enhancement of the speech signal. Prior research has shown that EEG signals contain information about whether a person is actively listening, what the listener is listening to, and where the attended sound source is. This thesis constructed a model for using EEG information to control beamforming, model-based sparse coding shrinkage, and onset enhancement of the speech signal. The purpose of this model is to propose a framework for using EEG signals to control sound processing so as to select a single talker in a noisy environment containing multiple talkers speaking simultaneously. On a theoretical level, the model showed that EEG can control acoustic processing. An analysis of the model identified a requirement for real-time processing and showed that the model inherits the computationally intensive properties of acoustic processing, although the model itself is of low complexity and places a relatively small load on computational resources. A research priority is to develop a prototype that controls hearing-assistive devices with EEG. The thesis concludes by highlighting challenges for future research.
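Of the acoustic processing stages named above, beamforming is the easiest to illustrate. The following is a hedged sketch of a basic delay-and-sum beamformer with integer-sample steering delays; a practical device (and any EEG-driven control of it) is considerably more involved, so this only shows the core idea of aligning and averaging microphone channels:

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Delay-and-sum beamformer: advance each microphone channel by its
    known arrival delay (in samples) so the target talker's wavefronts
    align, then average across microphones. `signals` is (mics, samples);
    `delays` holds one integer delay per microphone."""
    out = np.zeros(signals.shape[1])
    for channel, delay in zip(signals, delays):
        out += np.roll(channel, -delay)  # undo the propagation delay
    return out / len(signals)
```

Signals arriving from the steered direction add coherently, while talkers and noise from other directions add with mismatched phases and are attenuated; in the thesis's model, EEG-derived attention information would choose which direction to steer toward.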

Abstract:

In this master’s thesis, I examine the development of writer-characters and metafiction from John Irving’s The World According to Garp to Last Night in Twisted River, and how this development relates to the shift from late twentieth-century postmodern literary theory to twenty-first-century post-postmodern literary theory. The purpose of my study is to determine how metafiction, a prominently postmodern feature created through the writer-character’s stories-within-stories, has changed in form and function between the two novels, published thirty years apart, and what this suggests about future post-postmodern theory. I establish my theoretical framework on the development of metafiction largely on late twentieth-century models of the author and authorship as discussed by Roland Barthes, Wayne Booth and Michel Foucault. I base my close analysis of metafiction mostly on Linda Hutcheon’s model of overt and covert metafiction. At the end of my study, I examine Irving’s later novel through Suzanne Rohr’s models of reality constitution and fictional reality. The analysis of the two novels focuses on excerpts that feature the writer-characters, their stories-within-stories, the novels’ other characters, and the narrators’ evaluations of these. I draw examples from both novels, but I explain my choice of focus at the beginning of each section. Through this, I establish a method of analysis that best illustrates the development as a continuum from pre-existing postmodern models and theories to the formation of new post-postmodern theory. Based on my findings, the thesis argues that twenty-first-century literary theory has moved away from the postmodern overt deconstruction of the narrative and its meaning. New post-postmodern literary theory reacquires the previously deconstructed boundaries that define reality and truth and re-establishes them as having intrinsic value that cannot be disputed.
In establishing fictional reality as self-governing and non-intrudable, post-postmodern theory takes a stance against postmodern nihilism, which indicates the re-founded, unquestionable value of the text’s reality. To continue mapping other possible features of future post-postmodern theory, I recommend further analysis of John Irving’s novels published in the twenty-first century.