982 results for Structured methods


Relevance:

80.00%

Publisher:

Abstract:

This paper estimates the impact of the use of structured methods on the quality of education of students in primary public schools in Brazil. Structured methods encompass a range of pedagogical and managerial instruments applied to the education system. In recent years, several municipalities in the State of São Paulo have contracted out private educational providers to implement these structured methods in their schooling systems. Their pedagogical proposal involves structuring curriculum content, developing teacher and student textbooks, and training and supervising teachers and instructors. Using a difference-in-differences estimation strategy, we find that fourth- and eighth-grade students in the municipalities with structured methods performed better in Portuguese and Math than students in municipalities not exposed to the methods. We find no differences in approval rates. However, a robustness check cannot rule out the possibility that unobserved municipal characteristics affect the results.

Relevance:

80.00%

Publisher:

Abstract:

This paper estimates the impact of the use of structured methods on the quality of education for students in primary public schools in Brazil. Structured methods encompass a range of pedagogical and managerial instruments applied in the educational system. In recent years, several municipalities in the state of São Paulo have contracted out private educational providers to implement these structured methods in their schooling systems. Their pedagogical proposal involves structuring of curriculum content, development of teacher and student textbooks, and the training and supervision of teachers and instructors. Using a difference-in-differences estimation strategy, we find that the 4th- and 8th-grade students in the municipalities with structured methods performed better in Portuguese and mathematics than did students in municipalities not exposed to these methods. We find no differences in passing rates. A robustness test supports the assumption that there are no unobserved municipal characteristics, associated with proficiency changes over time, that may affect the results. (C) 2012 Elsevier Ltd. All rights reserved.
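The difference-in-differences estimator used in this study can be illustrated with a minimal sketch: the treatment effect is the change in the treated group's outcome minus the change in the control group's. All score values below are purely hypothetical, not figures from the paper.

```python
# Minimal difference-in-differences sketch on hypothetical mean test scores.
# The effect is (treated post - treated pre) - (control post - control pre).

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD estimate: treated group's change minus control group's change."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean Portuguese scores before/after adoption of the methods.
effect = diff_in_diff(treat_pre=180.0, treat_post=195.0,
                      control_pre=182.0, control_post=188.0)
print(effect)  # 9.0
```

The subtraction of the control group's change is what nets out statewide trends that would have affected treated municipalities even without the structured methods.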

Relevance:

60.00%

Publisher:

Abstract:

It has long been known that employees are of existential importance to an organization's survival. Recruitment is therefore one of the most important HR functions for attracting the right competence to the organization, since a failed recruitment usually leads to wasted time and expensive recruitment processes. Candidates' personality has become increasingly important in the selection and assessment of new employees. Research shows that personality traits play a major role in future job performance, so the ability to assess people correctly is central. Theories on personality, competence, competence-based recruitment and selection, as well as assessment methods and personality assessment, are used to analyze the study's results. The overall aim of this qualitative study was to increase understanding of how recruiters assess a candidate's personality during a recruitment process at staffing agencies. Staffing agencies constantly work with recruiting and hiring out personnel, which is why proactive staffing work is required to create a competitive advantage in the market. For this case study, three of the largest players in the Swedish staffing industry were contacted, and two recruiters at each company participated in semi-structured interviews. Personality was generally considered important, and all the recruiters attached great weight to it throughout the process, from the design of the requirements profile to the final assessment. Assessments were made through tests as well as interviews and reference checks. All the recruiters emphasized the importance of always making a holistic assessment of the candidate, and that it was therefore difficult to weigh, for example, formal competences against personal qualities. The results show that the staffing industry's work with personality assessment in recruitment is based on structured assessment methods.
Their thorough and proactive recruitment work lives up to the claim that personnel are an organization's most important resource, and underscores the importance of personality in the expression "the right person in the right place".

Relevance:

60.00%

Publisher:

Abstract:

Requirements engineering is a crucial phase in software development. Software development in a virtual domain adds another dimension to the process of requirements engineering. There has been growing interest in virtual teams, and more specifically in virtual software development. While structured software development methods are the obvious first choice for project managers to ensure a virtual software development team remains on track, the social and cultural aspects of requirements engineering cannot be ignored. These social aspects are especially important across different cultures, and have been shown to affect the success of an information system. The discussion in this paper is centred around the requirements engineering processes of a virtual team in a Thai Software House. This paper explains the issues and challenges of requirements engineering in a virtual domain from a social and cultural perspective. Project managers need to encourage a balance between structured methods and social aspects in requirements engineering for virtual team members. Cultural and social aspects influence the relationship between the virtual team and the client.

Relevance:

60.00%

Publisher:

Abstract:

The main objective of this master's thesis was to propose a structured method, supported by management tools, to guide and systematize the development of a research project aimed at bringing a new technology to market. The "2nd Generation Ethanol Project" (ethanol produced from lignocellulosic biomass), selected here as the case study, was drawn from the project portfolio of the Centro de Tecnologia Canavieira (CTC). The proposed structured method consists, fundamentally, of eight requirements arranged chronologically along the development of the project, intended to support the prospection, understanding, evaluation, valuation, prioritization, planning, and implementation of, for example, an innovative technology, optimizing the time, capital, and human resources applied. One of the main points of the proposed method concerns the appropriate choice of the management tools to be used in each requirement (brainstorming, patent analysis, expert panels, SWOT analysis, among others). Successful application of the method requires understanding all the (potential) effects, including side effects, on the process as a whole. That is, since every management tool has strengths and weaknesses, what matters is adapting the tools to the business system, not vice versa. With the project managed by someone who has mastered the management tools, tool selection becomes dynamic: at each evaluation step, new tools (simple and/or complex) can be included in or excluded from the method's matrix. This work demonstrated the importance of working with structured and flexible methods that allow feedback of information generated internally during the research or coming from external sources.
The CTC 2nd Generation Ethanol Project has been applying the proposed method in its development with great success, since the team remains focused on the main objective, meeting the deadlines and resources initially defined, with constancy of purpose and without setbacks or restarts.

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

This work is a study of the chemistry teacher education program at UFPA, telling its history through the curricular designs that guided it over its 30 years of existence and placing that history within the larger context of the construction of modern science. The study then analyzes, in light of the literature, the political-pedagogical project recently approved by the Program Board, in order to identify possible advances, and concludes by answering, based on the picture outlined by the program's history and the present moment, the question: given the demands imposed by modern society, how should a chemistry teacher be educated today? The main research methods were the collection of testimonies through semi-structured interviews and documentary and bibliographic research, with greater emphasis on the latter.

Relevance:

60.00%

Publisher:

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). As current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort, and hence cost, for such projects. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these, JSD-FPA, is a top-down estimating method based on the existing MkII function point method. The other, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from its process structure diagrams and thus provides an input to the traditional COCOMO method.

The JSD-FPA method allows JSD projects in both real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based on a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric indicating the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects.

The JSD-COCOMO method uses counts of the levels in a process structure chart as input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
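As a rough illustration of how a lines-of-code size estimate feeds the traditional COCOMO model that JSD-COCOMO targets, the sketch below applies the basic COCOMO effort equation. The "organic mode" coefficients are Boehm's classic published values; the KLOC figure is hypothetical, standing in for the output of the JSD-COCOMO sizing step.

```python
# Basic COCOMO: effort (person-months) = a * KLOC**b.
# a=2.4, b=1.05 are the classic "organic mode" coefficients;
# kloc would come from a sizing technique such as JSD-COCOMO.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Estimate effort in person-months from thousands of lines of code."""
    return a * kloc ** b

# Hypothetical 10 KLOC project.
print(round(cocomo_effort(10.0), 1))  # 26.9
```

The exponent b > 1 encodes the diseconomy of scale the abstract alludes to: doubling the code size more than doubles the estimated effort.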

Relevance:

60.00%

Publisher:

Abstract:

Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering and is contrasted with other well-documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described, including a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto statement; and a new in-the-large implementation strategy.
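The equivalence noted above between Jackson structure diagrams and regular expressions/finite automata can be made concrete with a small sketch. The diagram, alphabet, and state names below are illustrative inventions, not examples from the thesis: a sequence-iteration-sequence diagram "open; (read | skip)*; close" corresponds to the regex o(r|s)*c, which the hand-built automaton below recognizes.

```python
# A structure diagram "open; (read | skip)*; close" = regex o(r|s)*c,
# encoded directly as a deterministic finite automaton.
# Transition table: (state, input symbol) -> next state.
DFA = {
    ('start', 'o'): 'body',   # sequence part 1: open
    ('body',  'r'): 'body',   # iteration: read | skip, zero or more
    ('body',  's'): 'body',
    ('body',  'c'): 'done',   # sequence part 3: close
}

def accepts(string, start='start', accept='done'):
    """Run the DFA; accept iff we end in the accepting state."""
    state = start
    for ch in string:
        state = DFA.get((state, ch))
        if state is None:          # no transition: reject
            return False
    return state == accept

print(accepts('orrsc'))  # True  (open, read, read, skip, close)
print(accepts('oc'))     # True  (zero iterations)
print(accepts('rc'))     # False (missing the opening action)
```

Sequence, selection, and iteration in the diagram map respectively to concatenation, alternation, and Kleene star in the regex, which is the structural correspondence the thesis exploits.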

Relevance:

40.00%

Publisher:

Abstract:

In this project, the main focus is to apply image processing techniques in computer vision, through an omnidirectional vision system, to agricultural mobile robots (AMR) for trajectory navigation and localization problems. To carry out this task, computational methods based on the JSEG algorithm were used to provide the classification and characterization of such problems, together with Artificial Neural Networks (ANN) for pattern recognition. It was therefore possible to run simulations and analyze the performance of the JSEG image segmentation technique on Matlab/Octave platforms, along with the application of a customized back-propagation algorithm and statistical methods as structured heuristic methods in a Simulink environment. Once these procedures were complete, it was possible to classify and characterize the HSV color space segments and to recognize patterns, with reasonably accurate results. ©2010 IEEE.

Relevance:

40.00%

Publisher:

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures in the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the kernel's application in scenarios involving large amounts of data. This thesis proposes three contributions towards resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing their sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
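As a minimal illustration of the convolution-kernel idea discussed above, the sketch below counts identical complete subtrees shared by two trees. It is a toy version of the subtree-counting family, not the thesis' optimized DAG-based implementation, and the trees are invented examples.

```python
# Toy subtree kernel: K(T1, T2) = number of pairs of identical complete
# subtrees, the simplest member of the convolution-kernel family.
# Trees are nested tuples: (label, child, child, ...); leaves are strings.
from collections import Counter

def subtrees(tree, bag=None):
    """Collect every complete subtree (one rooted at each node)."""
    if bag is None:
        bag = Counter()
    bag[tree] += 1
    if isinstance(tree, tuple):
        for child in tree[1:]:
            subtrees(child, bag)
    return bag

def subtree_kernel(t1, t2):
    """Sum over shared subtrees of the product of their counts."""
    b1, b2 = subtrees(t1), subtrees(t2)
    return sum(b1[s] * b2[s] for s in b1)

t1 = ('S', ('NP', 'she'), ('VP', 'runs'))
t2 = ('S', ('NP', 'she'), ('VP', 'walks'))
print(subtree_kernel(t1, t2))  # 2: shared subtrees ('NP', 'she') and 'she'
```

The sparsity issue the abstract raises is visible even here: trees with no identical complete subtree score zero against each other, which is why the thesis relaxes exact matching via a lower-dimensional encoding.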