887 results for Level-Set Method
Abstract:
We investigated the effect of fish oil (FO) supplementation on tumor growth and on cyclooxygenase 2 (COX-2), peroxisome proliferator-activated receptor gamma (PPARγ), and RelA gene and protein expression in Walker 256 tumor-bearing rats. Male Wistar rats (70 days old) were fed regular chow (group W) or chow supplemented with 1 g/kg body weight FO daily (group WFO) until they reached 100 days of age. Both groups were then inoculated with a suspension of Walker 256 ascitic tumor cells (3×10⁷ cells/mL). After 14 days the rats were killed, total RNA was isolated from the tumor tissue, and relative mRNA expression was measured using the 2^(-ΔΔCT) method. FO significantly decreased tumor growth (W = 13.18±1.58 vs WFO = 5.40±0.88 g, P<0.05). FO supplementation also resulted in a significant decrease in COX-2 (W = 100.1±1.62 vs WFO = 59.39±5.53, P<0.001) and PPARγ (W = 100.4±1.04 vs WFO = 88.22±1.46, P<0.05) protein expression. Relative mRNA expression was W = 1.06±0.022 vs WFO = 0.31±0.04 (P<0.001) for COX-2, W = 1.08±0.02 vs WFO = 0.52±0.08 (P<0.001) for PPARγ, and W = 1.04±0.02 vs WFO = 0.82±0.04 (P<0.05) for RelA. FO reduced tumor growth by attenuating the expression of inflammatory genes associated with carcinogenesis.
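The abstract names the 2^(-ΔΔCT) relative-quantification method. As a minimal sketch of that calculation, using invented Ct values and a hypothetical reference gene rather than the study's data, the fold change is computed as follows:

    # Minimal sketch of the 2^(-ΔΔCt) relative expression calculation named in
    # the abstract. The Ct values and the reference gene below are invented for
    # illustration; they are not data from the study.

    def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
        delta_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene
        delta_ct_control = ct_target_control - ct_ref_control
        delta_delta_ct = delta_ct_treated - delta_ct_control    # compare treated vs control
        return 2 ** (-delta_delta_ct)                           # relative expression

    # Hypothetical COX-2 Ct values (WFO = fish-oil group, W = control group),
    # normalised to an assumed housekeeping gene.
    print(round(fold_change(ct_target_treated=26.5, ct_ref_treated=18.0,
                            ct_target_control=24.8, ct_ref_control=18.0), 2))
    # prints 0.31, i.e. roughly the ~3-fold reduction in COX-2 mRNA reported for WFO vs W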
Abstract:
Laser beam welding (LBW) is applicable to a wide range of industrial sectors and has a fifty-year history. However, it has been considered a niche method, with applications typically limited to welding of thin sheet metal. With a new generation of high-power lasers there has been renewed interest in thick-section LBW (also known as keyhole laser welding). A growing body of publications during 2001-2011 indicates increasing interest in laser welding for many industrial applications, and over the last ten years an increasing number of studies have examined ways to increase the efficiency of the process. Expanding the thickness range and efficiency of LBW makes the process an option for industrial applications dealing with thick-metal welding: shipbuilding, offshore structures, pipelines, power plants and other industries. The advantages provided by LBW, such as high process speed, high productivity and low heat input, may revolutionize these industries and significantly reduce process costs. The research to date has focused either on increasing efficiency by optimizing process parameters or on process fundamentals, rather than on process and workpiece modifications. The argument of this thesis is that the efficiency of the laser beam welding process can be increased in a straightforward way under workshop conditions. Throughout this dissertation, the term “efficiency” refers to welding process efficiency; specifically, an increase in efficiency means an increase in the weld’s penetration depth without increasing the laser power level or decreasing the welding speed. The methods investigated are: modification of the workpiece (edge surface roughness and the air gap between the joined plates); modification of the ambient conditions (local reduction of the pressure in the welding zone); and modification of the welding process (preheating of the welding zone). Approaches to improving the efficiency are analyzed and compared both separately and in combination. These experimentally proven methods confirm previous findings and contribute additional evidence that expands the opportunities for laser beam welding applications. The focus of this research was primarily on the effects of edge surface roughness preparation and a pre-set air gap between the plates on weld quality and penetration depth. To date, there has been no reliable evidence that such modifications of the workpiece have a positive effect on welding efficiency. The other methods were tested in combination with the two methods mentioned above. The most promising, combining them with the reduced-pressure method, resulted in at least a 100% increase in efficiency. The results of this thesis support the idea that joining these methods into one modified process will provide modern engineering with an effective tool for many novel applications, with potential benefits to a range of industries.
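The thesis defines efficiency as the penetration depth achieved at a fixed laser power and welding speed. Purely as an illustration of that metric, with invented depths rather than measurements from the thesis, the reported "at least 100% increase in efficiency" corresponds to a calculation of this kind:

    # Illustrative sketch of the efficiency metric defined in the thesis:
    # penetration depth achieved at the same laser power and welding speed.
    # All numbers are invented for illustration, not measurements from the thesis.

    def efficiency_gain(depth_baseline_mm, depth_modified_mm):
        """Relative increase in penetration depth at identical power and speed."""
        return (depth_modified_mm - depth_baseline_mm) / depth_baseline_mm * 100.0

    # Hypothetical welds at the same power/speed, with and without the combined
    # workpiece and reduced-pressure modifications.
    baseline = 7.0    # mm, conventional keyhole weld
    modified = 14.5   # mm, with combined modifications
    print(f"{efficiency_gain(baseline, modified):.0f}% deeper penetration")  # about 107%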
Abstract:
The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting technology was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s. Since then, laser cutting and processing has been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional methods are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. The literature review described the advantages, disadvantages and challenges of laser technology applied to cutting of paper material, with particular attention to cutting of a moving paper web. Based on the studied laser cutting capabilities and the problem definition of the conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising area for implementation. This assumption was made on the basis of the rate of web break occurrence: up to 64% of all web breaks were found to occur in the wet end, particularly at the so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects of the paper web edge were the main cause of tear initiation and the consequent web breaks. It was assumed that laser cutting could improve the tensile strength of the cut edge due to the high cutting quality and the sealing effect of the edge after laser cutting; studies of laser ablation of cellulose supported this claim. The linear energy needed for cutting was calculated with regard to the paper web properties at the intended laser cutting location, and the calculated linear cutting energy was verified with a series of laser cutting trials. The laser energy needed for cutting obtained in practice deviated from the calculated values, which could be explained by differences in radiative heat transfer during laser cutting and the different absorption characteristics of dry and moist paper material. Laser-cut samples (both dry and moist, dry matter content about 25-40%) were tested for strength properties. It was shown that the tensile strength and strain at break of laser-cut samples are similar to the corresponding values of non-laser-cut samples. The chosen method, however, did not address the tensile strength of the laser-cut edge in particular; thus, the assumption that laser cutting improves strength properties was not fully proven. The effect of laser cutting on possible contamination of mill broke (recycling of the trimmed edge) was also evaluated. Laser-cut samples (both dry and moist) were tested for dirt particle content. The tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. The material loss due to evaporation during laser cutting and the amount of solid residue after cutting were also evaluated.
Edge trimming with a laser would result in 0.25 kg/h of solid residue and 2.5 kg/h of material lost to evaporation. Schemes for laser cutting implementation and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one per cutting zone), a set of beam delivery and focusing optics, and cutting heads. To increase the reliability of the system, it was suggested that each laser source have double capacity; this would allow cutting to be performed with one laser source working at full capacity for both cutting zones. Laser technology is already at the required level and does not need additional development. Moreover, the capacity for speed increases is high thanks to the availability of high-power laser sources, which can support the trend of increasing paper making machine speeds. The laser cutting system would require a special roll to support cutting; a scheme for such a roll was proposed, as well as its integration into the paper making machine. Laser cutting can be performed at the location of the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of a paper making machine. The economic performance of laser cutting was assessed by comparing a laser cutting system and water jet cutting operating under the same conditions. It was revealed that laser cutting would still be about two times more expensive than water jet cutting, mainly due to the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss due to evaporation, whereas water jet cutting causes almost none. Despite the difficulties of implementing laser cutting in a paper making machine, its implementation can be beneficial. Crucial to this is the possibility of improving cut-edge strength properties and consequently reducing the number of web breaks. The capacity of laser cutting to sustain cutting speeds that exceed the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high-speed paper making machines.
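The abstract mentions calculating the linear energy needed for cutting from the paper web properties and comparing it with what the laser delivers. The sketch below illustrates that kind of estimate using the standard line-energy relation E = P / v; the web properties, kerf width, lumped specific energy and process values are assumptions for illustration, not the thesis data:

    # Rough sketch of a linear cutting energy estimate for a moving paper web.
    # All material properties and process values are assumptions for
    # illustration; they are not the values used in the thesis.

    def line_energy_J_per_m(laser_power_W, web_speed_m_s):
        """Energy delivered per metre of cut: E_line = P / v."""
        return laser_power_W / web_speed_m_s

    def required_line_energy_J_per_m(grammage_kg_m2, kerf_width_m, specific_energy_J_kg):
        """Energy needed per metre of cut to heat and evaporate the removed material."""
        removed_mass_per_m = grammage_kg_m2 * kerf_width_m  # kg of web removed per metre of cut
        return removed_mass_per_m * specific_energy_J_kg

    # Hypothetical values: 60 g/m2 web, 0.3 mm kerf, lumped specific energy for
    # heating and evaporating moist paper, 20 m/s web speed, 2 kW CO2 laser.
    required = required_line_energy_J_per_m(0.060, 0.0003, 5.0e6)
    available = line_energy_J_per_m(2000.0, 20.0)
    print(f"required ~ {required:.0f} J/m, available ~ {available:.0f} J/m")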
Abstract:
Traditionally, metacognition has been theorised, methodologically studied and empirically tested mainly from the standpoint of individuals and their learning contexts. In this dissertation the emergence of metacognition is analysed more broadly. The aim of the dissertation was to explore socially shared metacognitive regulation (SSMR) as part of collaborative learning processes taking place in student dyads and small learning groups. The specific aims were to extend the concept of individual metacognition to SSMR, to develop methods to capture and analyse SSMR, and to validate the usefulness of the concept of SSMR in two different learning contexts: in face-to-face student dyads solving mathematical word problems, and in small groups taking part in inquiry-based science learning in an asynchronous computer-supported collaborative learning (CSCL) environment. This dissertation comprises four studies. In Study I, the main aim was to explore if and how metacognition emerges during problem solving in student dyads and then to develop a method for analysing the social level of awareness, monitoring, and regulatory processes emerging during problem solving. Two dyads, each comprising 10-year-old students who were high-achieving especially in mathematical word problem solving and reading comprehension, were involved in the study. An in-depth case analysis was conducted. The data consisted of over 16 videotaped and transcribed face-to-face sessions (30–45 minutes each). The dyads solved altogether 151 mathematical word problems of different difficulty levels in a game-format learning environment. An interaction flowchart was used in the analysis to uncover socially shared metacognition. Interviews (including stimulated recall interviews) were conducted in order to obtain further information about socially shared metacognition. The findings showed the emergence of metacognition in a collaborative learning context in a way that cannot be explained solely by individual conceptions. The concept of socially shared metacognition (SSMR) was proposed. The results highlighted the emergence of socially shared metacognition specifically in problems where the dyads encountered challenges. Small verbal and nonverbal signals between students also triggered the emergence of socially shared metacognition. Additionally, one dyad implemented a system whereby they shared metacognitive regulation based on their strengths in learning. Overall, the findings suggested that in order to discover patterns of socially shared metacognition, it is important to investigate metacognition over time. However, it was concluded that more research on socially shared metacognition, based on larger data sets, is needed. These findings formed the basis of the second study. In Study II, the specific aim was to investigate whether socially shared metacognition can be reliably identified from a large dataset of collaborative face-to-face mathematical word problem solving sessions by student dyads. We specifically examined different difficulty levels of tasks as well as the function and focus of socially shared metacognition. Furthermore, the presence of observable metacognitive experiences at the beginning of socially shared metacognition was explored. Four dyads participated in the study. Each dyad comprised high-achieving 10-year-old students, ranked in the top 11% of their fourth-grade peers (n=393). The dyads were from the same data set as in Study I. The dyads worked face-to-face in a computer-supported, game-format learning environment.
Problem-solving processes for 251 tasks at three difficulty levels, taking place during 56 lessons (30–45 minutes each), were videotaped and analysed. The baseline data for this study were 14 675 turns of transcribed verbal and nonverbal behaviours observed in the four study dyads. The micro-level analysis illustrated how participants moved between different channels of communication (individual and interpersonal). The unit of analysis was a set of turns, referred to as an ‘episode’. The results indicated that socially shared metacognition and its function and focus, as well as the appearance of metacognitive experiences, can be defined in a reliable way from a larger data set by independent coders. A comparison of the different difficulty levels of the problems suggested that in order to trigger socially shared metacognition in small groups, the problems should be more difficult, as opposed to moderately difficult or easy. Although socially shared metacognition was found in collaborative face-to-face problem solving among high-achieving student dyads, more research is needed in different contexts. This consideration created the basis for the research on socially shared metacognition in Studies III and IV. In Study III, the aim was to expand the research on SSMR from face-to-face mathematical problem solving in student dyads to inquiry-based science learning among small groups in an asynchronous computer-supported collaborative learning (CSCL) environment. The specific aims were to investigate SSMR’s evolvement and functions in a CSCL environment and to explore how SSMR emerges at different phases of the inquiry process. Finally, individual students’ participation in SSMR during the process was studied. An in-depth explanatory case study of one small group of four girls aged 12 years was carried out. The girls attended a class that has an entrance examination and follows a language-enriched curriculum. The small group solved complex science problems in an asynchronous CSCL environment, participating in research-like processes of inquiry during 22 lessons (45 minutes each). The students’ network discussions were recorded in written notes (N=640), which were used as the study data. A set of notes, referred to here as a ‘thread’, was used as the unit of analysis. The inter-coder agreement was regarded as substantial. The results indicated that SSMR emerges in a small group’s asynchronous CSCL inquiry process in the science domain. Hence, the results of Study III were in line with the previous Study I and Study II and revealed that metacognition cannot be reduced to the individual level alone. The findings also confirmed that SSMR should be examined as a process, since SSMR can evolve during different phases and different SSMR threads overlapped and intertwined. Although the classification of SSMR’s functions was applicable in the context of CSCL in a small group, the dominant function in the asynchronous CSCL inquiry in the small-group science activity differed from that in mathematical word problem solving among student dyads (Study II). Further, the use of different analytical methods provided complementary findings about students’ participation in SSMR. The findings suggest that it is not enough to code just a single written note or simply to examine who has the largest number of notes in an SSMR thread; the connections between the notes must also be examined.
As the findings of Study III were based on an in-depth analysis of a single small group, further cases were examined in Study IV, along with the SSMR’s focus, which had also been studied in the face-to-face context. In Study IV, the general aim was to investigate the emergence of SSMR with a larger data set from an asynchronous CSCL inquiry process in small student groups carrying out science activities. The specific aims were to study the emergence of SSMR in the different phases of the process, students’ participation in SSMR, and the relation of SSMR’s focus to the quality of the outcomes, which had not been explored in the previous studies. The participants were 12-year-old students from the same class as in Study III. Five small groups of four students and one group of five students (N=25) were involved in the study. The small groups solved ill-defined science problems in an asynchronous CSCL environment, participating in research-like processes of inquiry over a total period of 22 hours. Written notes (N=4088) detailing the network discussions of the small groups constituted the study data, and within these notes SSMR threads were explored. As in Study III, the thread was used as the unit of analysis. In total, 332 notes were classified as forming 41 SSMR threads. Inter-coder agreement was assessed by three coders in the different phases of the analysis and found to be reliable. Multiple methods of analysis were used. The results showed that SSMR emerged in all the asynchronous CSCL inquiry processes in the small groups. However, the findings did not reveal any significant trend of change in the emergence of SSMR during the process. As a main trend, the number of notes included in SSMR threads differed significantly between the different phases of the process, and the small groups differed from each other. Although student participation was highly dispersed between the students, there were differences between students and small groups. Furthermore, the findings indicated that the amount of SSMR during the process or the participation structure did not explain the differences in the quality of the groups’ outcomes. Rather, when SSMR was focused on understanding and procedural matters, it was associated with high-quality learning outcomes; when SSMR was focused on incidental and procedural matters, it was associated with low-level learning outcomes. Hence, the findings imply that the focus of any emerging SSMR is crucial to the quality of the learning outcomes. Moreover, the findings encourage the use of multiple research methods for studying SSMR. In total, the four studies convincingly indicate that the phenomenon of socially shared metacognitive regulation exists. This means that it was possible to define the concept of SSMR theoretically, to investigate it methodologically and to validate it empirically in two different learning contexts across dyads and small groups. The in-depth micro-level case analyses in Studies I and III showed that it is possible to capture and analyse SSMR in detail during the collaborative process, while in Studies II and IV the analysis validated the emergence of SSMR in larger data sets. Hence, validation was tested both between two environments and within the same environments with further cases. As part of this dissertation, SSMR’s detailed functions and foci were revealed. Moreover, the findings showed the important role of observable metacognitive experiences as the starting point of SSMR.
It was apparent that the problems dealt with by the groups should be rather difficult if SSMR is to be made clearly visible. Further, individual students’ participation was found to differ between students and groups. The multiple research methods employed revealed supplementary findings regarding SSMR. Finally, when SSMR was focused on understanding and procedural matters, this was seen to lead to higher-quality learning outcomes. Socially shared metacognitive regulation should therefore be taken into consideration in students’ collaborative learning at school, similarly to how an individual’s metacognition is taken into account in individual learning.
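The studies report inter-coder agreement (“substantial”, “reliable”) for the coded SSMR episodes and threads but do not state which statistic was used, so the following is only an illustrative sketch using Cohen's kappa, a common choice for two coders, with invented codes:

    # Illustrative sketch only: the abstracts report "substantial" inter-coder
    # agreement without naming the statistic. Cohen's kappa is a common choice
    # for two coders; the labels below are invented examples.

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Cohen's kappa for two coders labelling the same items."""
        assert len(coder_a) == len(coder_b)
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        labels = set(freq_a) | set(freq_b)
        expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
        return (observed - expected) / (1 - expected)

    # Hypothetical codes for 10 episodes (SSMR present vs. not present)
    a = ["SSMR", "none", "SSMR", "SSMR", "none", "none", "SSMR", "none", "SSMR", "none"]
    b = ["SSMR", "none", "SSMR", "none", "none", "none", "SSMR", "none", "SSMR", "none"]
    print(round(cohens_kappa(a, b), 2))  # 0.8: "substantial" on the Landis & Koch scale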
Abstract:
This thesis explores how project charter development, project scope management, and project time management are executed in a Finnish movie production. The deviations and analogies between a case movie production and the best practices suggested in PMBOK are presented. Empirical material from the case was gathered with two semi-structured interviews, with a producer and a line producer. The interview data is categorized according to PMBOK knowledge areas. The analysis is complemented with movie-industry-specific norms found in popular movie production guides. The described and observed methods are linked together and the relationship between them is discussed. Project charter development, which is referred to as the green light process in the movie industry, is mostly analogous across both; the deviations are in the level of formality. The green lighting of the case movie was accomplished without the bureaucratic reports described in movie production guides. The empirical material shows that project management conventions and the movie industry employ similar methods, especially in scope management. Project management practice introduces the work breakdown structure (WBS) method, and movie production accomplishes the same task by developing a shooting script. Time management of the case movie deviates for the most part from the methods suggested in PMBOK. The major deviation is resource management: PMBOK suggests creating a resource breakdown structure, whereas the case movie production accomplished this through the budgeting process. Furthermore, the popular movie production guides also disregard resource management as a separate process. However, the activity listing is quite analogous between the case movie and PMBOK. The final key observation is that although there is a broad set of effective and detailed movie-industry-specific methods, a comprehensive methodology that would cover the whole production process, such as PRINCE2 or Scrum, seems to be missing from the movie industry.
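The abstract notes that the shooting script plays the role of a work breakdown structure (WBS). Purely as an illustration of that analogy, with invented scenes, work packages and durations rather than data from the case production, a shooting-script-derived WBS can be represented as a small hierarchy:

    # Illustrative sketch of a shooting script treated as a work breakdown
    # structure (WBS). The scenes, work packages and durations are invented
    # examples, not data from the case movie production.

    wbs = {
        "Scene 12 - Harbour exterior": {
            "build set dressing": 6,        # hours
            "shoot master and coverage": 10,
            "wrap and strike": 2,
        },
        "Scene 13 - Cabin interior": {
            "pre-light studio set": 4,
            "shoot dialogue coverage": 8,
        },
    }

    def total_hours(structure):
        """Sum the estimated effort over every work package in the hierarchy."""
        return sum(hours for packages in structure.values() for hours in packages.values())

    for scene, packages in wbs.items():
        print(f"{scene}: {sum(packages.values())} h across {len(packages)} work packages")
    print(f"Total scheduled effort: {total_hours(wbs)} h")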
Abstract:
An accurate, reliable and fast multianalyte/multiclass ultra-performance liquid chromatography–tandem mass spectrometry (UPLC–MS/MS) method was developed and validated for the simultaneous analysis of 23 pharmaceuticals belonging to different classes (amphenicols, sulfonamides, tetracyclines) in honey samples. The method developed consists of ultrasonic extraction followed by UPLC–MS/MS with electrospray ionization (ESI) in both positive and negative modes. The influence of the extraction solvents and mobile phase composition on the sensitivity of the method, and the optimum conditions for sample weight and extraction temperature in terms of analyte recovery, were extensively studied. Identification of the antibiotics was achieved by combining chromatographic separation on an Acquity BEH C18 (100 mm × 2.1 mm, 1.7 µm) analytical column, using gradient elution of the mobile phases, with tandem mass spectrometry with electrospray ionization. Finally, the developed method was applied to the determination of the target analytes in honey samples obtained from local markets and several beekeepers in Muğla, Turkey. Although ultrasonic extraction of pharmaceuticals from honey samples followed by UPLC–ESI–MS/MS is a well-established technique, the uniqueness of this study lies in the simultaneous determination of a remarkable number of compounds, 23 drugs, at the sub-nanogram-per-kilogram level.
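Analyte recovery, which the abstract says was optimised with respect to sample weight and extraction temperature, is conventionally expressed as the fraction of a known spiked amount that the method measures back. The sketch below uses invented concentrations, not the study's results:

    # Minimal sketch of an analyte recovery calculation for a spiked honey
    # sample. The concentrations are invented for illustration; they are not
    # results from the study.

    def recovery_percent(measured_spiked, measured_blank, spiked_amount):
        """Recovery (%) = (found in spiked sample - found in blank) / amount spiked * 100."""
        return (measured_spiked - measured_blank) / spiked_amount * 100.0

    # Hypothetical: honey spiked with 10 ng/kg of a sulfonamide; the blank shows
    # 0.2 ng/kg and the UPLC-MS/MS measurement of the spiked sample gives 9.1 ng/kg.
    print(f"recovery = {recovery_percent(9.1, 0.2, 10.0):.0f}%")  # 89%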
Abstract:
Climatic impacts of energy-peat extraction are of increasing concern due to EU emissions trading requirements. A new excavation-drier peat extraction method has been developed to reduce the climatic impact and increase the efficiency of peat extraction. To quantify and compare the soil GHG fluxes of the excavation-drier and the traditional milling methods, as well as of the areas from which energy peat is planned to be extracted in the future (extraction reserve area types), soil CO2, CH4 and N2O fluxes were measured during 2006–2007 at three sites in Finland. Within each site, fluxes were measured from drained extraction reserve areas, from the extraction fields and stockpiles of both methods, and additionally from the biomass driers of the excavation-drier method. Life Cycle Assessment (LCA), described at a principal level in ISO Standards 14040:2006 and 14044:2006, was used to assess the long-term (100-year) climatic impact of peatland utilisation with respect to land use and energy production chains in which the utilisation of coal was replaced with peat. Coal was used as a reference since in many cases peat and coal can replace each other in the same power plants. According to this study, the peat extraction method used was of lesser significance than the extraction reserve area type with regard to the climatic impact. However, the excavation-drier method seems to cause a slightly smaller climatic impact than the prevailing milling method.
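To put CO2, CH4 and N2O fluxes on a common climatic-impact scale, LCA studies typically weight each gas by its 100-year global warming potential. The sketch below uses IPCC AR5 GWP100 values (CH4 about 28, N2O about 265) and invented flux numbers; it is not the aggregation or the data reported in this study:

    # Illustrative aggregation of soil GHG fluxes into CO2-equivalents using
    # 100-year global warming potentials (IPCC AR5 values assumed here).
    # The flux values below are invented placeholders, not the study's measurements.

    GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

    def co2_equivalent(fluxes_g_m2_yr):
        """Convert a dict of gas fluxes (g gas m-2 yr-1) to g CO2-eq m-2 yr-1."""
        return sum(GWP100[gas] * flux for gas, flux in fluxes_g_m2_yr.items())

    # Hypothetical annual soil fluxes for one extraction field
    example_fluxes = {"CO2": 350.0, "CH4": 1.2, "N2O": 0.05}
    print(f"{co2_equivalent(example_fluxes):.0f} g CO2-eq m-2 yr-1")  # about 397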
Abstract:
The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has shifted from taking place between companies to competition between networks. Companies have focused on their core functions and outsourced support services, such as maintenance, above all to decrease costs. This phenomenon has led to the increasing formation of business networks and, as a result, a growing need for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common term for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network (consisting of customer companies that buy maintenance services and various supplier companies). Each network member is able to input its own cost and profit data related to the maintenance services of one item; the model then calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis shows that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and constructed for the needs of a single company, without the network perspective. The developed LCM is a proper tool for decision making on maintenance services in a network environment: it enables analysing the past and building scenarios for the future, and it offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step construction process for designing a life-cycle costing model in a network environment. This five-step design process defines the model components and structure through iteration and the exploitation of user feedback; the same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspectives of different maintenance network members and presents established value element lists for the customer and the service provider. These value element lists make value visible in the maintenance operations of a networked business. The LCM, complemented with value thinking, promotes a shift in the notion of maintenance from a “cost maker” towards a “value creator”.
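The life-cycle model is described as calculating the net present values of maintenance costs and profits for each network member. As a minimal sketch of that core calculation, with invented companies, cash flows and discount rate rather than figures from the thesis:

    # Minimal sketch of the net-present-value calculation at the core of an
    # item-level life-cycle model for a maintenance network. The companies,
    # cash flows and discount rate are invented examples, not data from the thesis.

    def npv(cash_flows, discount_rate):
        """Net present value of yearly cash flows, with year 0 undiscounted."""
        return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

    # Yearly net cash flows (profits minus costs, in euros) of one item's
    # maintenance services, from the viewpoint of each network member.
    network = {
        "customer":          [-50_000, -12_000, -12_000, -14_000, -16_000],
        "service provider":  [ 20_000,   9_000,   9_000,  10_000,  11_000],
        "spare-part vendor": [  5_000,   2_000,   2_000,   2_500,   3_000],
    }

    rate = 0.08  # assumed discount rate
    for member, flows in network.items():
        print(f"{member}: NPV = {npv(flows, rate):,.0f} EUR")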
Abstract:
Lichens are symbiotic organisms which consist of a fungal partner and a photosynthetic partner, which can be either an alga or a cyanobacterium. In some lichen species the symbiosis is tripartite, with the relationship including both an alga and a cyanobacterium alongside the fungal partner. The lichen symbiosis is an evolutionarily old adaptation to life on land, and many extant fungal species have evolved from lichenised ancestors. Lichens inhabit a wide range of habitats and are capable of living in harsh environments and on nutrient-poor substrates, such as bare rock, often enduring frequent cycles of drying and wetting. Most lichen species are desiccation tolerant: they can survive long periods of dehydration, but rapidly resume photosynthesis upon rehydration. The molecular mechanisms behind lichen desiccation tolerance are still largely uncharacterised, and little information is available for any lichen species at the genomic or transcriptomic level. The emergence of high-throughput next-generation sequencing (NGS) technologies and the subsequent decrease in the cost of sequencing new genomes and transcriptomes has enabled non-model organism research at the whole-genome level. In this doctoral work the transcriptome and genome of the grey reindeer lichen, Cladonia rangiferina, were sequenced, de novo assembled and characterised using NGS and traditional expressed sequence tag (EST) technologies. RNA extraction methods were optimised to improve the yield and quality of RNA extracted from lichen tissue. The effects of rehydration and desiccation on C. rangiferina gene expression were studied at the whole-transcriptome level and the most differentially expressed genes were identified. The secondary metabolites present in C. rangiferina decreased the quality of the extracted RNA (its integrity, optical characteristics and utility for sensitive molecular biological applications), requiring an optimised RNA extraction method for isolating sufficient quantities of high-quality RNA from lichen tissue in a time- and cost-efficient manner. The de novo assembly of the transcriptome of C. rangiferina produced a set of contiguous unigene sequences that were used to investigate the biological functions and pathways active in a hydrated lichen thallus. The de novo assembly of the genome yielded an assembly containing mostly genes derived from the fungal partner. The assembly was of sufficient quality, similar in size to other lichen-forming fungal genomes, and included most of the core eukaryotic genes. Differences in gene expression were detected in all studied stages of desiccation and rehydration, but the largest changes occurred during the early stages of rehydration. The most differentially expressed genes did not have any annotations, making them potentially lichen-specific genes, but several genes known to participate in environmental stress tolerance in other organisms were also identified as differentially expressed.
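The most differentially expressed genes between desiccation and rehydration stages were identified, but the abstract does not describe the analysis pipeline. The sketch below is therefore only a generic illustration of ranking genes by log2 fold change from library-size-normalised counts, with invented gene names and read counts:

    # Generic illustration of ranking genes by log2 fold change between two
    # conditions (e.g. desiccated vs. early rehydration). This is not the
    # pipeline used in the thesis; gene names and read counts are invented.

    import math

    counts = {                      # raw read counts: (desiccated, rehydrated)
        "gene_0001": (150, 1200),
        "gene_0002": (800, 790),
        "gene_0003": (60, 5),
        "gene_0004": (300, 950),
    }

    lib_a = sum(c[0] for c in counts.values())
    lib_b = sum(c[1] for c in counts.values())

    def cpm(count, library_size):
        """Counts per million, a simple library-size normalisation."""
        return count / library_size * 1e6

    def log2_fold_change(a, b, pseudo=1.0):
        return math.log2((cpm(b, lib_b) + pseudo) / (cpm(a, lib_a) + pseudo))

    ranked = sorted(counts, key=lambda g: abs(log2_fold_change(*counts[g])), reverse=True)
    for gene in ranked:
        print(f"{gene}: log2FC = {log2_fold_change(*counts[gene]):+.2f}")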
Abstract:
The value of online business has grown to over one trillion USD. This thesis is about search engine optimization, whose focus is on increasing search engine rankings. Search engine optimization is an important branch of online marketing because the first page of search engine results generates the majority of search traffic. Current articles about search engine optimization and Google indicate that with the proper use of quality content there is potential to improve search engine rankings. However, the existing search engine optimization literature does not address content at a sufficient level. To narrow that gap, a content-centered method for search engine optimization is constructed, and the role of content in search engine optimization is studied. The content-centered method consists of three search engine optimization tactics: 1) content, 2) keywords, and 3) links. Two propositions were used to test these tactics in a real business environment, and the results suggest that the content-centered method improves search engine rankings. Search engine optimization is constantly changing because Google adjusts its search algorithm regularly. Still, some long-term trends can be recognized. Google has said that the importance of content as a ranking factor will grow in the future. The content-centered method takes advantage of this trend in search engine optimization and should remain relevant for years to come.
Abstract:
Considerable research has focused on the success of early intervention programs for children. However, minimal research has focused on the effect these programs have on the parents of targeted children. Many current early intervention programs champion family-focused and inclusive programming, but few have evaluated parent participation in early interventions, and fewer still have evaluated the impact of these programs on beliefs, attitudes and parenting practices. Since parents will continue to play a key role in their child's developmental course long after early intervention programs end, it is vital to examine whether these programs empower parents to take action to make changes in the lives of their children. The goal of this study was to understand parental influences on the early development of literacy, and in particular how parental attitudes, beliefs and self-efficacy impact parent and child engagement in early literacy intervention activities. A mixed-method procedure using quantitative and qualitative strategies was employed, with a quasi-experimental research design. The research sample, sixty parents who were part of naturally occurring community interventions in at-risk neighbourhoods in a south-western Ontario city, participated in the quantitative phase. Largely individuals whose home language was other than English, these participants were divided amongst three early literacy intervention groups: a Prescriptive Interventionist type group, a Participatory Empowering type group, and a drop-in parent-child neighbourhood Control group. Measures completed pre and post a six-session literacy intervention, in all three groups, were analyzed for evidence of change in parental attitudes and beliefs about early literacy and evidence of change in parental empowerment. Parents in all three groups, on average, held beliefs about early literacy that were positive and compatible with current approaches to language development and emergent literacy. No significant change in early literacy beliefs and attitudes from pre to post intervention was found. Similarly, there was no significant difference between groups on empowerment scores, but there was a significant change post intervention in one group's empowerment score: a drop in the empowerment score for the Prescriptive Interventionist type group, suggesting a drop in empowerment level. The qualitative aspect of this study involved six in-depth interviews completed with a sub-set of the sixty research participants. Four similar themes emerged across the groups: learning takes place across time and place; participation is key; success is achieved by taking small steps; and learning occurs in multiple ways. The research findings have important implications for practitioners and policy makers who target at-risk populations with early intervention programming and wish to sustain parental empowerment. Study results show the value parents place on early learning and point to the importance of including parents in the development and delivery of early intervention programs.
Abstract:
The primary purpose of the current investigation was to develop an elevated muscle fluid level using a human in-vivo model. The secondary purpose was to determine whether an increased muscle fluid content could alter the acute muscle damage response following a bout of eccentric exercise. Eight healthy, recreationally active males participated in a cross-over design involving two randomly assigned trials, separated by four weeks: a hydration trial (HYD), consisting of a two-hour infusion of hypotonic (0.45%) saline at a rate of 20 mL/min/1.73 m², and a control trial (CON). Following the infusion (HYD) or rest period (CON), participants completed a single-leg isokinetic eccentric exercise protocol of the quadriceps, consisting of 10 sets of 10 repetitions with a one-minute rest between sets. Muscle biopsies were collected prior to the exercise, immediately following it, and at three hours post exercise. Muscle analysis included determination of wet-to-dry ratios and quantification of muscle damage using toluidine blue staining and light microscopy. Blood samples were collected prior to, immediately post, and at three and 24 hours post exercise to determine changes in creatine kinase (CK), lactate dehydrogenase (LD), interleukin-6 (IL-6) and C-reactive protein (CRP) levels. Results demonstrated an increased muscle fluid volume in the HYD condition following the infusion when compared to the CON condition. Isometric peak torque was significantly reduced following the exercise in both the HYD and CON conditions. There were no significant differences in the number of areas of muscle damage at any of the time points in either condition, with no differences between conditions. CK levels were significantly greater 24 hours post exercise compared to pre, immediately post, and three hours post, similarly in both conditions. LD in the HYD condition followed a trend similar to CK, with 24-hour levels higher than pre, immediately post and three hours post, and LD levels were significantly greater 24 hours post compared to pre levels in the CON condition, with no differences between conditions. A significant main effect for time was observed for CRP (p<0.05), such that CRP levels increased consistently at each subsequent time point. However, CRP and IL-6 levels did not differ at any of the measured time points when comparing the two conditions. Although the current investigation successfully increased muscle fluid volume, and increases in CK, LD and CRP were observed, no muscle damage was observed following the eccentric exercise protocol in either the CON or HYD condition. Therefore, the hypotonic infusion used in the HYD condition proved to be a viable method to acutely increase muscle fluid content in in-vivo human skeletal muscle.
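Muscle fluid content in the biopsies was assessed via wet-to-dry ratios, a standard calculation. The sketch below uses invented biopsy masses, not measurements from the study:

    # Minimal sketch of the wet-to-dry ratio calculation used to estimate muscle
    # fluid content from a biopsy. The masses below are invented examples, not
    # measurements from the study.

    def wet_to_dry_ratio(wet_mass_mg, dry_mass_mg):
        return wet_mass_mg / dry_mass_mg

    def water_content_percent(wet_mass_mg, dry_mass_mg):
        return (wet_mass_mg - dry_mass_mg) / wet_mass_mg * 100.0

    # Hypothetical biopsy weighed before and after drying
    wet, dry = 85.0, 20.4   # mg
    print(f"wet:dry = {wet_to_dry_ratio(wet, dry):.2f}, "
          f"water content = {water_content_percent(wet, dry):.1f}%")  # 4.17, 76.0%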
Abstract:
This study assessed the effectiveness of a reciprocal teaching program as a method of teaching reading comprehension, using narrative text material in a typical grade seven classroom. In order to determine the effectiveness of the reciprocal teaching program, this method was compared to two other reading instruction approaches that, unlike reciprocal teaching, did not include social interaction components. Two intact grade seven classes, and a grade seven teacher, participated in this study. Students were appropriately assigned to three treatment groups by reading achievement level as determined from a norm-referenced test. Training proceeded for a five-week intervention period during regularly scheduled English periods. Throughout the program, curriculum-based tests were administered. These tests were designed to assess comprehension in two distinct ways; namely, character analysis components as they relate to narrative text, and strategy use components as they contribute to student understanding of narrative and expository text. Pre, post, and maintenance tests were administered to measure overall training effects. Moreover, during the intervention, training probes were administered in the last period of each week to evaluate treatment group performance. All curriculum-based tests were coded, and comparisons of pre, post and maintenance tests and training probes were presented in graph form. Results showed that the reciprocal group achieved some improvement in reading comprehension scores in the strategy use component of the tests. No improvements were observed for the character analysis components of the curriculum-based tests or for the norm-referenced tests. At pre and post intervention, interviews requiring students to respond to questions that addressed metacomprehension awareness of study strategies were administered. The interviews were coded and comparisons were made between the two interviews. No significant improvements were observed regarding student awareness of the ten identified study strategies. This study indicated that reciprocal teaching is a viable approach that can be utilized to help students acquire more effective comprehension strategies. However, the maximum utility of the technique when administered to a population of grade seven students performing at average to above-average levels of reading achievement has yet to be determined. In order to explore this issue, the refinement of training materials and curriculum-based measurements needs to be explored. As well, this study revealed that reciprocal teaching placed heavier demands on the classroom teacher when compared to other reading instruction methods. This may suggest that innovative and intensive teacher training techniques are required before it is feasible to use this method in the classroom.