984 results for search term: PBL tutorial
Abstract:
Problem-Based Learning (PBL) can be used as a strategy for methodological change in conventional learning environments. This paper describes the integration of laboratory work into PBL-grounded activities during an introductory organic chemistry course and discusses the most decisive issues of its implementation. The results show how this methodology favours the contextualization of laboratory work within the subject matter and promotes Science-Technology-Society-Environment relationships. It also contributes to the development of competences such as planning and organization skills, information search and selection, and cooperative work, as well as to the improvement of tutorial action.
Abstract:
This study aims to apprehend the conceptions of students and tutors regarding formative assessment in the tutorial sessions of a PBL curriculum, identifying the difficulties faced in carrying out this practice. A Likert questionnaire was administered to 11 tutors and 45 students of the seventh term of the Medicine course at the Universidade Estadual de Montes Claros, and in-depth interviews were conducted with all of the tutors and with 20 students. The interviewees perceive the formative purpose of assessment in the tutorial session, defining it as processual, reflective, dialogical, and diagnostic, and they emphasize the possibility of feedback as a motivating factor that is decisive for resolving the deficiencies detected and reinforcing the strengths perceived. Difficulties are identified relating to the performance of teachers, such as lack of preparation, to the performance of students (lack of sincerity, maturity), and others arising from the inadequacy of the criteria used in the assessment instruments. The results point to the need for faculty and student development programs in assessment, as well as greater commitment from institutions that use the Problem-Based Learning methodology to the continuous, reflective pursuit of coherence with the pedagogical assumptions established by the curriculum.
Abstract:
This study is dedicated to search engine marketing (SEM). It aims to develop a business model for SEM firms and to provide explicit research into the trustworthy practices of virtual marketing companies. Optimization is a general term covering a variety of techniques and methods for promoting web pages. The research addresses optimization as a business activity and explains its role in online marketing. It also highlights the use of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature review combines technical and economic scientific findings in one place in order to highlight the technological and business attributes incorporated in SEM activities. Empirical data on search marketers was collected via e-mail questionnaires. Four representatives of SEM companies took part in this study to support the business model design; a fifth respondent, a representative of a search engine portal, provided insight into the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases demand for promotional programs. SEM is the largest part of online marketing, and it is a prerogative of search engine portals. However, skilled users, or marketers, are able to implement long-term marketing programs by using web page optimization techniques, keyword consultancy, or content optimization to increase a web site's visibility to search engines and, therefore, users' attention to the customer's pages. SEM firms are small knowledge-intensive businesses. On the basis of the data analysis, the business model was constructed. The SEM model consists of generalized constructs, although these represent a wider range of operational aspects. The building blocks of the model cover the fundamental parts of SEM commercial activity: the value creation, customer, infrastructure, and financial segments. Approaches were also provided for evaluating a company's differentiation and competitive advantages. It is argued that search marketers should make further attempts to differentiate their business from the large number of similar service providers. The findings indicate that SEM companies are interested in increasing their trustworthiness and building their reputation. The future of search marketing depends directly on the development of search engines.
Abstract:
The value of online business has grown to over one trillion USD. This thesis is about search engine optimization, whose focus is to improve search engine rankings. Search engine optimization is an important branch of online marketing because the first page of search engine results generates the majority of search traffic. Current articles about search engine optimization and Google indicate that proper use of quality content has the potential to improve search engine rankings. However, the existing search engine optimization literature does not address content at a sufficient level. To close that gap, a content-centered method for search engine optimization is constructed, and the role of content in search engine optimization is studied. The content-centered method consists of three search engine optimization tactics: 1) content, 2) keywords, and 3) links. Two propositions were used to test these tactics in a real business environment, and the results suggest that the content-centered method improves search engine rankings. Search engine optimization changes constantly because Google adjusts its search algorithm regularly. Still, some long-term trends can be recognized. Google has said that content will grow in importance as a ranking factor, and the content-centered method takes advantage of this trend so as to remain relevant for years to come.
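As an illustration of the kind of on-page check the three tactics (content, keywords, links) imply, here is a minimal Python sketch of a content audit using only the standard library. The HTML string, target keyword, and the idea of reporting keyword density are illustrative assumptions, not part of the thesis.

```python
# Minimal on-page audit sketch: collect the title (content), count anchors
# (links), and measure how often a target term appears (keywords).
# The HTML and keyword below are toy placeholders.
from html.parser import HTMLParser

class PageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = 0          # link tactic: count outgoing anchors
        self.text_parts = []    # content tactic: collect visible text
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            self.links += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        else:
            self.text_parts.append(data)

html = ("<html><head><title>Quality content guide</title></head>"
        "<body><h1>Content</h1><p>Quality content wins. "
        "<a href='/more'>More</a></p></body></html>")
keyword = "content"  # keyword tactic: illustrative target term

audit = PageAudit()
audit.feed(html)
words = " ".join(audit.text_parts).lower().split()
density = words.count(keyword) / max(len(words), 1)
print(audit.title, audit.links, f"keyword density: {density:.2%}")
```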
Abstract:
Habits of psychoactive substance use, stress, obesity, and the associated cardiovascular traits may be partly related to the same genetic factors. To explore this hypothesis, we carried out, in 119 multi-generational Quebec families from the Saguenay-Lac-St-Jean region, genome-wide association and linkage studies for the genetic components of habitual consumption of alcohol, tobacco, and coffee; the response to physical and psychological stress; anthropometric traits related to obesity; and measures of heart rate (HR) and blood pressure (BP). 58,000 SNPs and 437 microsatellite markers were used, and functional annotation of the identified candidate genes was then performed. We detected significant phenotypic correlations between psychoactive substances, stress, obesity, and hemodynamic traits. For example, alcohol and tobacco consumers showed a significantly decreased HR in response to psychological stress. In addition, tobacco consumers had lower BP than non-consumers. Also, hypertensive subjects showed increased HR and systolic BP in response to psychological stress and a higher body mass index (BMI) than normotensive subjects. Furthermore, tobacco use appears to increase body epinephrine levels, and high epinephrine levels were associated with decreased BMI. Thus, consistent with the inter-phenotype correlations, we identified several genes associated/linked with psychoactive substance use, the response to physical and psychological stress, obesity-related traits, and hemodynamic traits, including CAMK4, CNTN4, DLG2, DAG1, FHIT, GRID2, ITPR2, NOVA1, NRG3, and PRKCE. These genes encode proteins that form an interaction network, are involved in synaptic plasticity, and are highly expressed in the brain and its associated tissues. Moreover, pathway analysis of the identified genes (P = 0.03) revealed an induction of Long-Term Potentiation mechanisms. The variation in the studied traits appears to be largely linked to sex and hypertension status. For tobacco consumption, we noted that the degree and direction of the correlations with obesity, hemodynamic traits, and stress are specific to sex and blood pressure. For example, while variations were detected between male smokers and non-smokers (former and never), no difference was observed in women. We also identified many obesity-related traits whose correlation with tobacco consumption appears to be driven more by genetic factors than by smoking itself. For sex and hypertension, differences in the heritability of many traits were also observed. Indeed, genetic analyses on specific subgroups revealed additional genes sharing synaptic functions: CAMK4, CNTN5, DNM3, KCNAB1 (hypertension-specific); CNTN4, DNM3, FHIT, ITPR1, and NRXN3 (sex-specific). These genes encode proteins that interact with the proteins of the genes detected in the overall analysis. Moreover, for the subgroup genes, the results of the pathway and gene-expression analyses showed characteristics similar to those of the overall analysis.
The substantial convergence between the genetic determinants of psychoactive substance use, stress, obesity, and hemodynamic traits supports the notion that genetic variation in synaptic plasticity pathways constitutes a common interface with the genetic differences linked to sex and hypertension. We also believe that synaptic plasticity may be involved in many complex phenotypes influenced by lifestyle. Ultimately, these results indicate that subgroup- and network-based approaches would improve our understanding of the polygenic nature of complex phenotypes and of the common molecular processes that define them.
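To make the association-scan idea concrete, here is a minimal Python sketch of a single-SNP allelic chi-square test. The genotype vectors are invented toy data, not the Saguenay-Lac-St-Jean measurements, and a real family study would need tests that account for relatedness (e.g., variance-component linkage); this only shows the per-marker association step.

```python
# Toy single-SNP allelic association test: compare minor/major allele
# counts between two groups with a chi-square test on a 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency

# Genotypes coded as minor-allele counts (0, 1, 2) per subject (toy data).
cases    = np.array([0, 1, 2, 1, 1, 2, 0, 1, 2, 2])
controls = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1])

def allele_counts(genos):
    minor = genos.sum()              # each subject carries two alleles
    major = 2 * len(genos) - minor
    return [minor, major]

table = np.array([allele_counts(cases), allele_counts(controls)])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # small p suggests allele-trait association
```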
Abstract:
The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and CCDs with high resolving power, variable star data has been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic), or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes, in which case the stars are known as intrinsic variables; in other cases it is due to external processes, such as eclipses or rotation, in which case they are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables, and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry, and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude, and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is for an expert to visually inspect its phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling, and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries), and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition, and evolution. Classification requires the determination of basic parameters such as period, amplitude, and phase, as well as other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system, and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions, and the possibility of large gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST, and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There are many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of discrete Fourier transforms, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detections can arise for several reasons, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval, and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if the basic parameters, such as period, amplitude, and phase, can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
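To illustrate the folding-and-binning idea behind PDM (Stellingwerf 1978), here is a minimal numpy sketch: fold the light curve on trial periods, bin the phases, and pick the period minimising the ratio of pooled within-bin variance to total variance. The synthetic light curve stands in for survey data and is not from the thesis.

```python
# Phase Dispersion Minimisation sketch on a synthetic, unevenly sampled
# sinusoidal light curve (true period 0.7 days, illustrative values).
import numpy as np

rng = np.random.default_rng(0)
true_period = 0.7
t = np.sort(rng.uniform(0, 30, 400))          # unevenly spaced, as in practice
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) \
      + rng.normal(0, 0.02, t.size)

def pdm_theta(t, mag, period, n_bins=10):
    phase = (t / period) % 1.0                # fold on the trial period
    total_var = mag.var(ddof=1)
    num, dof = 0.0, 0
    for b in range(n_bins):
        in_bin = mag[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if in_bin.size > 1:
            num += (in_bin.size - 1) * in_bin.var(ddof=1)
            dof += in_bin.size - 1
    return (num / dof) / total_var            # ~1 for wrong periods, <<1 near truth

trial_periods = np.linspace(0.2, 2.0, 2000)
theta = np.array([pdm_theta(t, mag, p) for p in trial_periods])
print("best period:", trial_periods[theta.argmin()])   # ~0.7
```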
Abstract:
This report presents the canonical Hamiltonian formulation of relative satellite motion. The unperturbed Hamiltonian model is shown to be equivalent to the well-known Hill-Clohessy-Wiltshire (HCW) linear formulation. The influence of perturbations from the nonlinear gravitational potential and the oblateness of the Earth (J2 perturbations) is also modelled within the Hamiltonian formulation, which incorporates the eccentricity of the reference orbit. The corresponding Hamiltonian vector fields are computed and implemented in Simulink. A numerical method aimed at locating periodic or quasi-periodic relative satellite motion is presented and applied to the Hamiltonian system. Although the orbits considered here are weakly unstable at best, in the case of eccentricity only, the method finds exact periodic orbits. When other perturbations such as nonlinear gravitational terms are added, drift is significantly reduced, and in the case of the J2 perturbation, with and without the nonlinear gravitational potential term, bounded quasi-periodic solutions are found. Advantages of using Newton's method to search for periodic or quasi-periodic relative satellite motion include simplicity of implementation, repeatability of solutions due to its non-random nature, and fast convergence. Given that the use of bounded or drifting trajectories as control references carries practical difficulties over long-term missions, Principal Component Analysis (PCA) is applied to the quasi-periodic or slowly drifting trajectories to help provide a closed reference trajectory for the implementation of closed-loop control. In order to evaluate the effect of the quality of the model used to generate the periodic reference trajectory, a study involving closed-loop control of a simulated master/follower formation was performed. The results indicate that the quality of the model employed for generating the reference trajectory has an important influence on the amount of fuel required to track it. The model used to generate the LQR controller gains also affects the efficiency of the controller.
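For reference, here is a minimal sketch of the unperturbed HCW linear model that the report's Hamiltonian formulation reduces to, integrated numerically. The mean motion and initial state are illustrative values, not taken from the report; the initial along-track velocity is chosen as the classic drift-free condition that yields a closed relative orbit.

```python
# Hill-Clohessy-Wiltshire relative motion: x'' = 3n^2 x + 2n y',
# y'' = -2n x', z'' = -n^2 z (radial, along-track, cross-track).
import numpy as np
from scipy.integrate import solve_ivp

n = 0.00113  # mean motion [rad/s], roughly a low Earth reference orbit

def hcw(t, s):
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            3 * n**2 * x + 2 * n * vy,
            -2 * n * vx,
            -n**2 * z]

# vy0 = -2 n x0 removes the secular along-track drift (closed 2x1 ellipse).
x0 = 100.0
s0 = [x0, 0.0, 0.0, 0.0, -2 * n * x0, 0.0]
T = 2 * np.pi / n
sol = solve_ivp(hcw, (0, T), s0, rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0, T, 500))
print("along-track drift after one orbit:", sol.y[1, -1] - s0[1])  # ~0
```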
Abstract:
This study investigates the one-month-ahead out-of-sample predictive power of a Taylor-rule-based model for exchange rate forecasting. We review relevant work concluding that macroeconomic models can explain the short-run exchange rate, and we also present studies that are skeptical about the ability of macroeconomic variables to predict exchange rate movements. To contribute to this topic, this work presents its own evidence by implementing the model with the best reported predictive performance in Molodtsova and Papell (2009), the "symmetric Taylor rule model with heterogeneous coefficients, smoothing, and a constant". To this end, we use a sample of 14 currencies against the US dollar, generating monthly out-of-sample forecasts from January 2000 to March 2014. Following the criterion adopted by Galimberti and Moura (2012), we focus on countries with floating exchange rate regimes and inflation targeting, but we select currencies of both developed and developing countries. Our results corroborate the study of Rogoff and Stavrakeva (2008) in finding that conclusions about exchange rate predictability depend on the statistical test adopted, so robust and rigorous tests are required for a proper evaluation of the model. Having found that it is not possible to claim that the implemented model delivers more accurate forecasts than a random walk, we assess whether the model is at least capable of generating "rational", or "consistent", forecasts. For this, we use the theoretical and instrumental framework defined and implemented by Cheung and Chinn (1998) and conclude that the forecasts produced by the Taylor rule model are "inconsistent". Finally, we perform Granger causality tests to verify whether the lagged values of the returns predicted by the structural model explain the observed contemporaneous values. We find that the fundamentals-based model is unable to anticipate realized returns.
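To make the out-of-sample design concrete, here is a minimal Python sketch of the generic rolling one-month-ahead exercise: regress next month's exchange-rate return on lagged fundamentals over a moving window, forecast, and compare RMSE against a driftless random walk (zero-return forecast). The simulated data and window length are placeholders, not the 14-currency panel, and a proper evaluation would use the robust tests the study stresses (e.g., Clark-West) rather than raw RMSE.

```python
# Rolling one-step-ahead forecast of returns from lagged fundamentals,
# benchmarked against a random walk; all data simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
T, window = 240, 120
fundamentals = rng.normal(size=(T, 3))   # e.g. inflation/output-gap/rate gaps
returns = 0.05 * fundamentals[:, 0] + rng.normal(scale=1.0, size=T)

model_err, rw_err = [], []
for t in range(window, T - 1):
    # Regress returns at s+1 on fundamentals at s over the window.
    X = np.column_stack([np.ones(window), fundamentals[t - window:t]])
    y = returns[t - window + 1:t + 1]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    forecast = np.concatenate([[1.0], fundamentals[t]]) @ beta
    model_err.append(returns[t + 1] - forecast)
    rw_err.append(returns[t + 1])        # random walk predicts a zero return

rmse = lambda e: np.sqrt(np.mean(np.square(e)))
print(f"model RMSE: {rmse(model_err):.3f}  random walk RMSE: {rmse(rw_err):.3f}")
```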
Abstract:
This paper presents an interior point method for the long-term generation scheduling of large-scale hydrothermal systems. The problem is formulated as a nonlinear programming problem due to the nonlinear representation of the hydropower production and thermal fuel cost functions. Sparsity exploitation techniques and a heuristic procedure for computing the interior point method search directions have been developed. Numerical tests in case studies with systems of different dimensions and inflow scenarios were carried out in order to evaluate the proposed method. Three systems were tested, the largest being the Brazilian hydropower system, with 74 hydro plants distributed over several cascades. The results show that the proposed method is an efficient and robust tool for solving the long-term generation scheduling problem.
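As background on the interior point idea, here is a minimal Python sketch of the log-barrier approach on a toy bound-constrained quadratic program. The toy problem, barrier schedule, and damped Newton iteration are illustrative assumptions standing in for the full hydrothermal scheduling model, not the paper's solver.

```python
# Log-barrier interior point sketch: minimise 0.5 x'Qx + c'x subject to
# x >= 0 by Newton steps on the barrier objective, driving mu -> 0.
import numpy as np

def barrier_newton(x, mu, Q, c, iters=50):
    # Minimise 0.5 x'Qx + c'x - mu * sum(log x) with damped Newton steps.
    for _ in range(iters):
        grad = Q @ x + c - mu / x
        hess = Q + np.diag(mu / x**2)
        step = np.linalg.solve(hess, -grad)
        alpha = 1.0
        while np.any(x + alpha * step <= 0):   # stay strictly interior
            alpha *= 0.5
        x = x + alpha * step
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])         # toy convex cost
c = np.array([-1.0, 1.0])
x = np.ones(2)                                 # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:             # barrier continuation
    x = barrier_newton(x, mu, Q, c)
print("approximate minimiser:", x)             # x[1] pushed to the x >= 0 bound
```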
Abstract:
In this paper a method for solving the Short Term Transmission Network Expansion Planning (STTNEP) problem is presented. The STTNEP is a very complex mixed integer nonlinear programming problem that presents a combinatorial explosion in the search space. In this work we present a constructive heuristic algorithm to find a solution of the STTNEP of excellent quality. In each step of the algorithm a sensitivity index is used to add a circuit (transmission line or transformer) to the system. This sensitivity index is obtained by solving the STTNEP problem with the number of circuits to be added treated as a continuous variable (the relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved through an interior point method that uses a combination of the multiple predictor corrector and multiple centrality corrections methods, both belonging to the family of higher order interior point methods (HOIPM). Tests were carried out using a modified Garver system, and the results show the good performance of both the constructive heuristic algorithm for solving the STTNEP problem and the HOIPM used in each step.
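Here is a schematic Python sketch of the constructive heuristic's outer loop: solve the relaxed problem, add the circuit with the largest sensitivity, and repeat. The function solve_relaxed_plan is a hypothetical stand-in for the interior point solution of the relaxed STTNEP; it is faked with fixed numbers purely so the add-one-circuit-per-step logic runs.

```python
# Constructive heuristic outer loop (schematic). solve_relaxed_plan is a
# hypothetical placeholder for the HOIPM solution of the relaxed problem.
def solve_relaxed_plan(added):
    # Hypothetical: per candidate branch, the continuous number of circuits
    # the relaxed problem would still build beyond those already added.
    demand = {"A-B": 2.6, "B-C": 1.1, "A-C": 0.3}
    return {k: max(v - added.get(k, 0), 0.0) for k, v in demand.items()}

def constructive_heuristic(tol=0.5):
    added = {}
    while True:
        sensitivity = solve_relaxed_plan(added)       # sensitivity index per branch
        branch, value = max(sensitivity.items(), key=lambda kv: kv[1])
        if value < tol:                               # relaxed plan nearly integral
            return added
        added[branch] = added.get(branch, 0) + 1      # add one circuit, repeat

print(constructive_heuristic())   # e.g. {'A-B': 3, 'B-C': 1}
```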
Abstract:
In this paper, a method for solving the short term transmission network expansion planning problem is presented. This is a very complex mixed integer nonlinear programming problem that presents a combinatorial explosion in the search space. In order to find a solution of excellent quality for this problem, a constructive heuristic algorithm is presented. In each step of the algorithm, a sensitivity index is used to add a circuit (transmission line or transformer) or a capacitor bank (fixed or variable) to the system. This sensitivity index is obtained by solving the problem with the numbers of circuits and capacitor banks to be added treated as continuous variables (the relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved through a higher order interior point method. The paper presents the results of several tests performed using three well-known electric energy systems in order to show the feasibility and the advantages of using the AC model.
Abstract:
The high active and reactive power levels demanded by distribution systems, the growth of consumption centers, and the long lines of distribution systems result in voltage variations at the buses, compromising the quality of the energy supplied. To ensure the quality of the energy supplied in distribution system short-term planning, several devices and actions are used to implement effective control of the voltage, reactive power, and power factor of the network. Among these devices and actions are voltage regulators (VRs) and capacitor banks (CBs), as well as exchanging the conductor sizes of distribution lines. This paper presents a methodology based on the Non-Dominated Sorting Genetic Algorithm (NSGA-II) for the optimized allocation of VRs, CBs, and conductor exchanges in radial distribution systems. The Multiobjective Genetic Algorithm (MGA) is aided by an inference process developed using fuzzy logic, which applies specialized knowledge to reduce the search space for the allocation of CBs and VRs.
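As an illustration of the sorting step at the heart of NSGA-II, here is a minimal Python sketch of non-dominated front ranking on toy (cost, voltage-deviation) objective pairs, both minimised. The plan objective values are invented placeholders; a real run would evaluate candidate VR/CB/conductor allocations through a power flow.

```python
# Non-dominated sorting sketch: rank candidate plans into Pareto fronts.
def dominates(a, b):
    # a dominates b if it is no worse in all objectives and better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy (cost, voltage deviation) pairs for six candidate allocation plans.
plans = [(100, 0.9), (120, 0.4), (90, 1.2), (130, 0.8), (110, 0.5), (125, 1.0)]
for rank, front in enumerate(non_dominated_fronts(plans), start=1):
    print(f"front {rank}:", [plans[i] for i in front])
```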
Abstract:
One way to organize knowledge and make its search and retrieval easier is to create a structural representation divided into hierarchically related topics. Once this structure is built, it is necessary to find labels for each of the resulting clusters. In many cases the labels must be built using all the terms in the documents of the collection. This paper presents the SeCLAR method, which explores the use of association rules to select good candidates for labels of hierarchical document clusters. The purpose of this method is to select a subset of terms by exploring the relationships among the terms of each document; these candidates can then be processed by a classical labeling method to generate the labels. An experimental study demonstrates the potential of the proposed approach to improve the precision and recall of labels obtained by classical methods, by considering only the terms that are potentially more discriminative.
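To illustrate the underlying idea of using association rules to shortlist label candidates, here is a minimal Python sketch: mine co-occurring term pairs inside a cluster's documents and keep terms that appear in pairs with sufficient support and confidence. The documents, thresholds, and best-direction confidence rule are toy assumptions, not the SeCLAR algorithm itself.

```python
# Association-rule-style candidate selection for cluster labels (sketch).
from itertools import combinations
from collections import Counter

cluster_docs = [                     # toy documents as term sets
    {"interior", "point", "method", "optimization"},
    {"interior", "point", "barrier", "optimization"},
    {"newton", "method", "optimization"},
    {"interior", "point", "method"},
]
min_support, min_confidence = 0.5, 0.7
n = len(cluster_docs)

pair_count = Counter(p for doc in cluster_docs
                     for p in combinations(sorted(doc), 2))
term_count = Counter(t for doc in cluster_docs for t in doc)

candidates = set()
for (a, b), cnt in pair_count.items():
    support = cnt / n
    conf = max(cnt / term_count[a], cnt / term_count[b])  # best direction
    if support >= min_support and conf >= min_confidence:
        candidates.update((a, b))    # terms of strong rules become candidates

print(sorted(candidates))            # e.g. ['interior', 'point']
```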