923 results for Convergence And Extension
Abstract:
Educational research has documented the difficulty of training teachers to meet the needs of secondary and higher education, and one reason for this is the gap between the experiences offered during teacher education and those actually encountered in the classroom. Criticism is therefore often directed at the relevance and effectiveness of licentiate degree courses in fulfilling their core mission, which weakens teacher education. Improving the quality of education thus depends heavily on teachers' own initiatives in creating teaching alternatives that strengthen their classroom practice. From this reflection, it follows that teacher education needs new pedagogical proposals to qualify future teachers so that they can, in turn, educate their students more adequately. Among the pedagogical alternatives available for initial teacher education is scientific theater (TC). Considering this possibility, this work set out to investigate and discuss the influence of TC combined with experimentation on the initial education of future chemistry teachers participating in the groups Chemistry Fanatics Theater (UERN) and Chemistry on Stage (UFRN). In a first stage, theatrical rehearsals based on the Theater of the Oppressed were held and dramaturgical scripts were written collaboratively. To incorporate chemistry experiments into the rehearsals, a systematic literature search was carried out and, after content analysis, the selected categories were experiments with easily accessible materials and reagents, simple procedures, low risk of accidents, and easily handled chemical waste. In the second stage we identified: (a) the beliefs of student teachers about the use of TC combined with experimentation in the initial education of chemistry teachers; (b) the influence of TC combined with experimentation on the learning of chemical concepts by the high-school students who attended the performances; and (c) the reasons why chemistry teachers who once participated in a TC group and now work in classrooms do or do not use TC combined with experimentation. Questionnaires and interviews were used, composed, respectively, of a Likert scale and open questions. Quantitative data were analyzed with classical statistics, using the mean, the agreement rate, and the mean deviation as measures of centrality. Qualitative data were discussed through content analysis, with categories that emerged from reading the answers. These analyses indicate that the student teachers hold a positive view of scientific theater for popularizing chemistry, for supporting the learning of chemical concepts and of pedagogical and disciplinary knowledge, and as a strategy for promoting research and extension at the university. They credit improvements in their initial education to the use of scientific theater combined with experimentation. TC provides motivation for the construction of conceptual thinking through an informal channel of chemistry communication, allowing students to broaden their knowledge, favoring not only the phenomenological approach but also the construction of chemical knowledge and the internalization of scientific concepts.
Abstract:
The substantial increase in the number of applications offered over computer networks, and in the volume of traffic forwarded through them, has made it harder to assure an adequate service level to users. Offering Quality of Service (QoS) that honors the parameters specified in Service Level Agreements (SLAs) established between service providers and their clients is a traditional and extensive research area in computer networks. Several schemes for QoS provisioning have been proposed over the last three decades, but their scope has always been limited by several factors, including the restricted development of network hardware and software, which generally belong to a single manufacturer. The advent of Software Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, decoupled network hardware from software through an architecture that provides a control plane and a data plane. This simplifies the computer-networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through new software executed in the control plane. This dissertation investigates the offer of QoS through the use and extension of the SDN architecture. Based on two new proposed modules, one to monitor the data plane, SDNMon, and a second, MP-ROUTING, developed to forward the data of a flow over multiple paths, we demonstrate that QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated in a prototype. Evaluation results covering several aspects of both proposed modules are presented, showing the accuracy obtained by the SDNMon monitoring module and the QoS gains produced by the multiple paths defined by MP-ROUTING when forwarding a data flow through the SDN.
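The dissertation's modules are not reproduced here, but the core idea of MP-ROUTING, splitting one flow's demand across several candidate paths using load figures reported by a monitor such as SDNMon, can be illustrated with a small sketch. The names, data structures, and proportional-split policy below are illustrative assumptions, not the dissertation's API.

```python
# Hypothetical sketch: split a flow's bandwidth demand across candidate paths
# in proportion to each path's residual bandwidth. Illustrative only.
from dataclasses import dataclass

@dataclass
class Path:
    hops: list              # ordered switch identifiers along the path
    capacity_mbps: float    # bottleneck link capacity
    load_mbps: float        # current load, as a monitor like SDNMon might report

    @property
    def residual_mbps(self) -> float:
        return max(self.capacity_mbps - self.load_mbps, 0.0)

def split_flow(demand_mbps: float, paths: list) -> dict:
    """Return a {path_index: rate_mbps} allocation proportional to residual bandwidth."""
    total_residual = sum(p.residual_mbps for p in paths)
    if total_residual < demand_mbps:
        raise ValueError("SLA bandwidth cannot be honored on the candidate paths")
    return {i: demand_mbps * p.residual_mbps / total_residual
            for i, p in enumerate(paths)}

paths = [Path(["s1", "s2", "s4"], 100.0, 70.0),
         Path(["s1", "s3", "s4"], 100.0, 40.0)]
print(split_flow(50.0, paths))  # most traffic is sent over the less loaded path
```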
Abstract:
This research considers the spread of mobile phones and smartphones and its consequences for teaching and learning. Communication devices have always been linked to the teaching-learning process, but with the intense development of Information and Communication Technologies (ICT) in recent decades this relationship has taken on new contours: the Internet emerged, computing machines evolved and, recently, mobile devices exploded in popularity, offering new convergent products and services. In this context, mobile phones and smartphones have been used and recommended to support and complement the teaching-learning process, in what is called Mobile Learning, a field that is growing as these technologies become cheaper and more widespread in society. To examine this relationship scientifically, a qualitative, exploratory study was carried out on two Mobile Learning projects under way in Brazil: Palma (Programa de Alfabetização na Língua Materna) and Escola Com Celular (ECC). From the resulting data we identified aspects of the use of mobile phones and smartphones for teaching and learning that help us understand this field, which is still taking shape in Brazil. In the projects studied, the use of these devices to support teaching-learning processes is shaped by the aspects of technology, device, audience and context, and new technologies and Mobile Learning. The device aspect unfolds into dimensions such as dissemination, multifunctionality and accessibility, which underpin the projects and also favor characteristics regarded as important for teaching and learning today, such as mobility and portability. The projects studied show potential and methodologies suited to the contexts for which they were created and applied. However, the research indicated that although mobile phones and smartphones represent the apex of technological convergence and are extremely popular and accessible in contemporary society, with concrete possibilities such as those seen in the projects studied, they have not achieved a solid position as a support for teaching and learning. According to the corpus, this is due to the lack of several factors: funding, since the practices proved extremely dependent on public or private initiative for their extension and continuity; awareness of the technologies already available, since the projects do not consider the students' own devices; and planning that includes, trains and encourages the use of these devices. The research also highlights the need for a critical view of the use and role of technology in these processes.
Abstract:
Dissertation presented to obtain a master's degree in the Master's program in Social Education and Community Intervention of the Escola Superior de Educação of the Instituto Politécnico de Santarém.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications; the main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. For the first, the focus is on joint inference beyond the standard problem of multivariate continuous data that has been the major focus of previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and of concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridges existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
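The latent-structure (PARAFAC) factorization referred to above writes the joint pmf as a finite mixture of product measures. A minimal sketch of that representation, with illustrative dimensions, is:

```python
# Latent-class / PARAFAC representation of a joint pmf for p categorical
# variables: P(y) = sum_h lambda_h * prod_j psi[j][h, y_j]. Dimensions here
# are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, k, d = 3, 2, 4            # variables, latent classes, categories per variable

lam = rng.dirichlet(np.ones(k))                              # class weights
psi = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]  # psi[j][h] is a pmf

def joint_pmf(y):
    """P(Y1=y[0], ..., Yp=y[p-1]) under the latent-class model."""
    return sum(lam[h] * np.prod([psi[j][h, y[j]] for j in range(p)])
               for h in range(k))

# The full probability tensor has nonnegative rank at most k.
table = np.array([[[joint_pmf((a, b, c)) for c in range(d)]
                   for b in range(d)] for a in range(d)])
assert abs(table.sum() - 1.0) < 1e-10
```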
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis–Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis–Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback–Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
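For intuition about Gaussian posterior approximations in this setting, the sketch below shows the standard Laplace construction (mean at the posterior mode, covariance from the inverse negative Hessian) for a Poisson log-linear model with a Gaussian prior. This is a generic textbook device assumed here for illustration, not the optimal approximation derived in Chapter 4 for Diaconis–Ylvisaker priors.

```python
# Illustrative Laplace (Gaussian) approximation for a Poisson log-linear model
# y_i ~ Poisson(exp(x_i' beta)) with prior beta ~ N(0, tau2 * I).
import numpy as np

def laplace_loglinear(X, y, tau2=10.0, iters=50):
    n, q = X.shape
    beta = np.zeros(q)
    for _ in range(iters):                    # Newton ascent to the MAP
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu) - beta / tau2
        hess = -X.T @ (mu[:, None] * X) - np.eye(q) / tau2
        beta -= np.linalg.solve(hess, grad)
    cov = np.linalg.inv(-hess)                # approximate posterior covariance
    return beta, cov

rng = np.random.default_rng(1)
X = 0.3 * rng.normal(size=(200, 3))
y = rng.poisson(np.exp(X @ np.array([0.5, -0.2, 0.1])))
mode, cov = laplace_loglinear(X, y)
```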
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
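The basic summary on which this paradigm operates, waiting times between exceedances of a high threshold, is simple to compute; a minimal sketch with an arbitrary heavy-tailed toy series follows (the quantile level is an illustrative choice).

```python
# Minimal sketch: inter-exceedance waiting times over a high threshold.
import numpy as np

def waiting_times(series, quantile=0.98):
    """Gaps (in time steps) between successive exceedances of a high empirical threshold."""
    u = np.quantile(series, quantile)
    exceed = np.flatnonzero(series > u)
    return np.diff(exceed)

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=10_000)        # heavy-tailed toy series
print(waiting_times(x)[:10])
```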
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
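A hypothetical sketch of one approximating kernel of the kind studied in Chapter 6 is random-walk Metropolis in which each iteration evaluates the log-likelihood on a random subset of the data, scaled up to the full sample size. The model (Normal location with a flat prior), subset size, and step size below are illustrative assumptions.

```python
# Sketch: Metropolis with a subsampled log-likelihood -- an approximate kernel,
# not an exact MCMC algorithm.
import numpy as np

def subsampled_mh(data, n_iter=5000, subset=500, step=0.05, rng=None):
    rng = rng or np.random.default_rng()
    n = len(data)
    def approx_loglik(theta):
        idx = rng.choice(n, size=subset, replace=False)
        return n / subset * np.sum(-0.5 * (data[idx] - theta) ** 2)  # N(theta, 1)
    theta, ll = 0.0, approx_loglik(0.0)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = approx_loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[t] = theta
    return chain

data = np.random.default_rng(3).normal(1.0, 1.0, size=100_000)
print(subsampled_mh(data).mean())   # should sit near the true location, 1.0
```

The framework in Chapter 6 is precisely about quantifying how much error a kernel like this introduces relative to the compute it saves.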
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
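The truncated-Normal sampler referred to above can be sketched as follows, assuming a flat prior on the coefficients; this is the standard construction for probit regression, written here for illustration rather than as the chapter's exact experimental setup.

```python
# Sketch of the truncated-Normal data augmentation Gibbs sampler for probit
# regression with a flat prior on beta.
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, rng=None):
    rng = rng or np.random.default_rng()
    n, q = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(q)
    draws = np.empty((n_iter, q))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0
        lo = np.where(y == 1, -mu, -np.inf)   # bounds in standardized units
        hi = np.where(y == 1, np.inf, -mu)
        z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
        beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
        draws[t] = beta
    return draws
```

In the rare-event regime described above (large n, few successes), the lag-one autocorrelation of these draws approaches one, which is the slow mixing the chapter quantifies.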
Abstract:
This paper examines the links between architecture education, research and practice, using a current project as a vehicle that combines three aspects: a building project, a pilot project and a live project. The building project consists of the refurbishment and extension of a Parnell Cottage for a private client and is located near Cloyne, in East Cork, Ireland. The pilot project falls within the NEES Project, investigating the use of materials and services based on natural or recycled materials to improve the energy performance of new and existing buildings. The live project aims to hold a series of on-site workshops and seminars for students of architecture, architects and interested parties, demonstrating the integration of the NEES best-practice materials and techniques within the built project. The workshops, seminars and key project documents will be digitally recorded for dissemination through a web-based publication. The small scale of the building project allowed flexibility in the early conceptual design stages and the integration of the research and educational aspects.
Abstract:
This article highlights the importance of geographic fieldwork in a region marked by socio-environmental conflicts, such as the water conflict in the Sierras Chicas of Córdoba. It centers on a pedagogical experience, the Socio-Community Practice (PSC), carried out by professors, students and teaching assistants of the Rural Geography course of the Licentiate program in Geography of the Facultad de Filosofía y Humanidades (FFyH), in the town of La Granja, Colón department, province of Córdoba. The PSC is an experience that brings students into the social field of territorial conflicts. It is a modality that goes beyond an extension project, since it involves all the undergraduate students taking the course, and it is also a way of uniting, in our case through geographic practice, the university functions of teaching, research and extension. Through the PSC, Rural Geography students are introduced to fieldwork alongside active local grassroots organizations that know the problems of their locality in depth and work together with our research team. At the same time, the contact, the group and individual reflections, and the debates with the university students offer the community a broader understanding of the reality in which they live and struggle. The article begins by defining what is meant by PSC. Then, focusing on our own practice, we develop what we consider the two logics that sustain fieldwork: one concerning the construction of knowledge and the diverse modes of learning and knowing; the other linked to understanding the socio-territorial conflict in the setting where the practice takes place, the Mesa del Agua y Ambiente de La Granja. We include a section describing the experience and its results, and close with some reflections on the continuity of the practice.
Abstract:
Scholars around the world are focusing on the study of the smart-city phenomenon, and Spanish bibliographic production on the subject has grown exponentially in recent years. The new smart cities are founded on new visions of urban development that integrate multiple technological solutions from the world of information and communication, all of them current and at the service of the city's needs. The Spanish-language literature on the subject comes from fields as diverse as architecture, engineering, political science, law and business studies. Since the purpose of smart cities is to improve the lives of their citizens by implementing information and communication technologies that meet the needs of their inhabitants, researchers in the field of communication and information sciences have much to say. This paper analyzes a total of 120 texts and concludes that the smart-city phenomenon will be one of the central axes of multidisciplinary research in our country in the coming years.
Abstract:
Preparedness has become a central component of contemporary approaches to flood risk management, as there is a growing recognition that our reliance on engineered flood defences is unsustainable within the context of more extreme and unpredictable weather events. Whilst many researchers have focused their attention on exploring the key factors influencing flood-risk preparedness at the individual level, little consideration has been given to how we understand preparedness conceptually and practically in the first instance. This paper seeks to address this particular gap by identifying and analysing the diverse range of conceptualisations of preparedness and typologies of preparedness measures that exist within the literature, in order to identify areas of convergence and divergence. In doing so, we demonstrate that a considerable degree of confusion remains in terms of how preparedness is defined, conceptualised and categorised. We conclude by reflecting on the implications this has from an academic perspective, but also in terms of the more practical aspects of flood risk management.
Abstract:
Impactive contact between a vibrating string and a barrier is a strongly nonlinear phenomenon that presents several challenges in the design of numerical models for simulation and sound synthesis of musical string instruments. These are addressed here by applying Hamiltonian methods to incorporate distributed contact forces into a modal framework for discrete-time simulation of the dynamics of a stiff, damped string. The resulting algorithms have spectral accuracy, are unconditionally stable, and require solving a multivariate nonlinear equation that is guaranteed to have a unique solution. Exemplifying results are presented and discussed in terms of accuracy, convergence, and spurious high-frequency oscillations.
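The linear backbone of such a modal scheme, each mode integrated exactly as a damped oscillator, can be sketched briefly; the barrier contact force, which the paper handles with Hamiltonian methods, is deliberately omitted, and the stiffness and damping figures below are illustrative assumptions.

```python
# Minimal sketch of the linear part of a modal string simulation: a sum of
# exactly integrated, exponentially damped sinusoidal modes. No contact force.
import numpy as np

fs = 44100                        # sample rate (Hz)
f0, T60 = 110.0, 2.0              # fundamental and decay time, illustrative
n_modes, dur = 20, int(0.5 * fs)

k = np.arange(1, n_modes + 1)
freqs = k * f0 * np.sqrt(1 + 1e-4 * k**2)   # mild stiffness -> inharmonicity
sigma = 6.91 / T60                          # uniform damping, ln(1000) / T60
amps = 1.0 / k                              # plucked-string-like mode weights

t = np.arange(dur) / fs
out = (amps[:, None] * np.exp(-sigma * t) *
       np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
```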
Abstract:
We consider a linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All the transceivers are equipped with multiple antennas, each of which has its own maximum power constraint. Assuming the zero-forcing method is used to eliminate the multiuser interference, we study the sum rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at either both the primary and secondary receivers or the primary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which reduces the complexity significantly compared to the conventional method. Simulation results are provided to demonstrate the fast convergence and effectiveness of the proposed algorithm.
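The zero-forcing step itself is a pseudo-inverse computation, sketched below with illustrative dimensions; the paper's actual contribution, optimizing the allocation under per-antenna and interference constraints via the barrier interior-point method, is not reproduced here.

```python
# Sketch of the zero-forcing step only: a pseudo-inverse precoder removes
# multiuser interference; a crude uniform scaling keeps per-antenna power <= 1.
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_users = 8, 4
H = (rng.normal(size=(n_users, n_tx)) +
     1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)   # toy channel

W = np.linalg.pinv(H)                         # ZF: H @ W is (a scaled) identity
ant_power = np.sum(np.abs(W) ** 2, axis=1)    # power radiated by each antenna
W *= np.sqrt(1.0 / ant_power.max())           # scale so every antenna obeys P <= 1

print(np.round(np.abs(H @ W), 3))             # diagonal: interference eliminated
```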
Abstract:
A theory was developed to allow the separate determination of the effects of interparticle friction and of the interlocking of particles on the shearing resistance and deformational behavior of granular materials. The derived parameter, the angle of solid friction, is independent of the type of shear test, stress history, porosity and the level of confining pressure, and depends solely upon the nature of the particle surface. The theory was tested against published data on the performance of plane strain, triaxial compression and extension tests on cohesionless soils. The theory was also applied to isotropically consolidated undrained triaxial tests on three crushed limestones prepared by the authors using vibratory compaction. The authors concluded that (1) the theory allowed the determination of the solid friction between particles, which was found to depend solely on the nature of the particle surface; (2) the separation of the frictional and volume-change components of the shear strength of granular materials qualitatively corroborated the postulated mechanism of deformation (sliding and rolling of groups of particles over other similar groups, with resulting dilatancy of the specimen); (3) the influence of void ratio, gradation, confining pressure, stress history and type of shear test on shear strength is reflected in the values of the omega parameter; and (4) calculation of the coefficient of solid friction allows the establishment of the lower limit of the shear strength of a granular material.
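The separation the abstract describes can be illustrated by the textbook sawtooth idealization, given here only as an illustration of the principle and not as the authors' derivation:

```latex
% Sawtooth idealization: the mobilized friction angle \phi combines the angle
% of solid (interparticle) friction \phi_\mu with a dilatancy angle \psi that
% reflects interlocking.
\[
  \frac{\tau}{\sigma_n} \;=\; \tan\phi \;=\; \tan\!\left(\phi_\mu + \psi\right)
\]
% Measuring \phi together with the volume-change rate (which gives \psi) in a
% shear test then isolates \phi_\mu, the component that depends only on the
% nature of the particle surface.
```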
Abstract:
This study aimed to understand how an incubation methodology contributes to the sustainability of solidarity-economy enterprises. The analysis examines the solidarity economy and the incubation methodology through a teaching, research and extension program, the Technological Incubator of Popular Cooperatives and Solidarity Entrepreneurship (PITCPES), and through a survey of the Fruit Cooperative of Abaetetuba (COFRUTA). An exploratory-descriptive approach was adopted, combining qualitative and quantitative methods, to examine the process of sustainability along several dimensions: economic, social, political, management and training. The analysis of these dimensions produced the following results: first, recognition that the incubator contributes to COFRUTA's sustainability, especially with regard to planning, control and the need to diversify production. However, the cooperative suggested that training and technical assistance be continued, since project-based work leaves gaps in the learning and application of the social technologies the incubator is expected to provide. The study also concludes that the dissertation helps the incubator's team assess the strengths and weaknesses of its own performance.
Abstract:
Introduction: Judo is a sport involving a wide variety of movements, actions and physical abilities, among them postural control, balance, flexibility and strength. Among the body areas most affected in judo practice, the knee region has one of the highest injury incidences. The aim of this study was to evaluate the effects of applying Dynamic Tape (DT), a biomechanical tape, on quadriceps function in male judo athletes with non-specific knee pain, in terms of balance, strength, flexibility and pain. Methods: The sample comprised 37 individuals, each tested first without Dynamic Tape (SDT) and then with Dynamic Tape (CDT). The tests applied were the Standing Stork Test (SST), the Y Balance Test (YBT), the Four Square Step Test (FSST), the Single Leg Hop Test (SLHT), the lower-limb flexion test (TFMI), the lower-limb extension test (TEMI) and, at the end of all tests, a numerical pain rating scale (END). Results: No significant difference was observed in the SST (p = 0.6794), but the YBT, SLHT, TFMI, TEMI and END (p < 0.0001), as well as the FSST (p = 0.0026), differed significantly between the CDT and SDT conditions, with the application of DT producing positive effects on athlete performance. Conclusion: The application of DT did not significantly improve static balance, but it did influence semi-dynamic and dynamic balance, flexibility and pain.