Abstract:
There is an increasing demand for DNA analysis because of the sensitivity of the method and its ability to identify and distinguish individuals with a high degree of certainty. This demand has led to huge backlogs in evidence lockers, since current DNA extraction protocols require long processing times. The analysis becomes more complicated for sexual assault casework samples, where the evidence contains more than one contributor, and the additional processing needed to separate the different cell types and simplify the final data interpretation adds to the already cumbersome protocols. The goal of the present project is to develop a rapid and efficient extraction method that permits selective digestion of mixtures. Selective recovery of male DNA was achieved with as little as 15 minutes of lysis time upon exposure to high pressure under alkaline conditions. Pressure cycling technology (PCT) is carried out in a barocycler that has a small footprint and is semi-automated. Whereas typically less than 10% of the male DNA is recovered using the standard extraction protocol for rape kits, almost seven times more male DNA was recovered from swabs using this novel method. Various parameters, including instrument settings and buffer composition, were optimized to achieve selective recovery of sperm DNA. Developmental validation studies were also performed to determine the efficiency of the method in processing samples exposed to various conditions that can affect the quality of the extraction and the final DNA profile. An easy-to-use interface, minimal manual intervention, and the ability to achieve high yields with simple reagents in a relatively short time make this an ideal method for potential application to sexual assault samples.
Abstract:
In the health field, care is regarded as an important concept for professionals. There has been a recent epistemological trend to treat care as a category that allows the practices of humanized health care to be discussed and rethought. The objective of this study is to investigate how care is understood by the nursing staff of a Basic Health Unit (BHU) in the city of Natal. The participants were two nurses and two nursing technicians working at the BHU of the Guarapes district, located on the west side of Natal, RN. As theoretical support for data interpretation, we used the concept of care in Heidegger's sense, approached through Ayres's ideas. This is a qualitative study that adopts an autobiographical methodology; the methodological strategies were focus groups and individual interviews. The data were discussed from three angles: knowledge (the concepts and theories behind the practical model), technique (the way of doing), and ethics (the values on which practice is based), through Gadamer's hermeneutic dialectic. The results showed the need to broaden the discussion of care: in practice, professionals' understanding of care seems to narrow to concerns with technical success. Care as a meeting between beings, the desired humanized care, seems to be partly understood in theory but remains distant from the reality of professional practice.
Abstract:
Through numerous technological advances in recent years, along with the popularization of computing devices, society is moving towards an "always connected" paradigm. Computer networks are everywhere, and the advent of IPv6 paves the way for the explosion of the Internet of Things. This concept enables the sharing of data between computing machines and everyday objects. One of the areas placed under the Internet of Things is vehicular networks. However, the information generated by an individual vehicle is small in volume and, taken in isolation, does not contribute to improving traffic. This proposal presents the Infostructure, a system intended to ease the effort and reduce the cost of developing high-level, semantically rich context-aware applications for the Internet of Things scenario, allowing data to be managed, stored, and combined in order to generate broader context. To this end we present a reference architecture, which shows the major components of the Infostructure. We then present a prototype used to validate that our work reaches the desired high-level semantic contextualization, as well as a performance evaluation of the subsystem responsible for managing contextual information over a large amount of data. A statistical analysis of the results obtained in the evaluation is then performed. Finally, we present the conclusions of the work, some open problems, such as the lack of guarantees on the integrity of the sensory data reaching the Infostructure, and future work that includes implementing further modules so that tests can be conducted in real environments.
Abstract:
During expedition 202 of the research vessel SONNE in 2009, 39 sea-floor surface sediment samples were taken over a wide area across the North Pacific and the Bering Sea; such samples are well suited as reference archives of modern environmental processes. In this study, we used them to document land-ocean linkages in terrigenous sediment supply. We followed an integrated approach of grain-size analysis, bulk mineralogy, and clay mineralogy in combination with statistical data evaluation (end-member modelling of the grain-size data, fuzzy-cluster analysis of the mineralogical data) in order to identify the significant sources and modes of sediment transport in an overregional context. We also compiled literature data on clay mineralogy and updated them with the new data. Today, two processes of terrigenous sediment supply prevail in the study area: far-travelled aeolian sediment supply to the pelagic North Pacific, and hemipelagic sediment dispersal from nearby land sources by ocean currents along the continental margins and island arcs of the study area. The aeolian particles show the finest grain sizes (clay and fine silt), while the hemipelagic sediments have high abundances of sortable silt (particles >10 µm).
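To illustrate the fuzzy-cluster step of the statistical data evaluation, here is a minimal Python sketch of fuzzy c-means clustering; the clay-mineral columns and sample values are invented for the example, and the study's actual implementation may differ.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, n_iter=200, tol=1e-6, seed=0):
    """Plain fuzzy c-means: returns cluster centres and a membership
    matrix U (samples x clusters) whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distance of every sample to every centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        inv = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Toy rows of clay-mineral percentages (smectite, illite, kaolinite+chlorite);
# the real study clusters measured mineralogical data instead.
X = np.array([[60, 25, 15], [15, 70, 15], [20, 20, 60],
              [55, 30, 15], [10, 75, 15], [25, 15, 60]], float)
centres, U = fuzzy_cmeans(X)
```

Unlike hard clustering, each sample receives graded memberships in all clusters, which suits samples whose mineralogy mixes several source signatures.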
Abstract:
Initial and continuing teacher-training processes, and the professional practice of teachers who teach Mathematics in the early years of schooling, are highlighted in the literature as complex, but are also regarded as the way to overcome many of the difficulties in teaching this curricular component at this school stage. The aim of the study was to investigate how training needs in Mathematics are represented by a group of early-years teachers in the public elementary school system of the city of Uberlândia, State of Minas Gerais. The research, of qualitative approach, had as its object of study the training needs in Mathematics of early-years teachers. It involved 16 teachers from two schools of that city's municipal public school system. Data were collected through questionnaires, non-participant observation, and semi-structured group and individual interviews. Analyses were performed by means of thematic categories, grounded in content analysis. Data interpretation made it possible to understand the training needs in Mathematics that present themselves to the collaborating group in their professional practice, considering the knowledge and skills necessary for teaching. The teachers in the study group show major limitations in relation to both the specific content and the didactic knowledge of Mathematics content; what is worrying, however, is that they are not always aware of this. Moreover, the difficulties experienced in teaching practice prove to be overcome through non-formal training sources and activities, primarily through more experienced colleagues in the profession. It thus becomes difficult to conceive of initial and continuing teacher-training courses unless the training needs arising from teaching practice are valued as an object of study.
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood-oxygenation-level-dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, examining the feasibility of extracting functional connectivity networks with different methods as well as the dynamic variability within some of them. Furthermore, this work looks into producing valid networks from a sparsely sampled subset of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using it, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network from a single subject's scan session as well as from 16 subjects.
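As a minimal sketch of this seed-based approach (not the thesis's actual code), the following Python fragment computes a Pearson correlation map between an assumed (time × voxels) BOLD matrix and a seed time course; the toy data stand in for preprocessed fMRI data.

```python
import numpy as np

def seed_correlation_map(data, seed_ts):
    """Pearson r between the seed time course and every voxel.
    data: (n_timepoints, n_voxels); seed_ts: (n_timepoints,)."""
    dz = (data - data.mean(axis=0)) / data.std(axis=0)   # z-score voxels
    sz = (seed_ts - seed_ts.mean()) / seed_ts.std()      # z-score seed
    return (dz * sz[:, None]).mean(axis=0)               # one r per voxel

# Toy example: 200 time points, 500 voxels; seed = mean of first 10 voxels.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 500))
seed_ts = data[:, :10].mean(axis=1)
r_map = seed_correlation_map(data, seed_ts)
```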
Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns both for one subject and for a 16-subject concatenated data set.
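A hedged sketch of how spatial ICA might be run with one established implementation (scikit-learn's FastICA); the toy matrix and the component count are assumptions, not the thesis's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy BOLD matrix (time x voxels); a real analysis would load
# preprocessed fMRI data here.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 500))

# Spatial ICA: voxels act as samples, so each recovered component is a
# spatial map and the maps are independent across space.
ica = FastICA(n_components=20, max_iter=1000, random_state=0)
spatial_maps = ica.fit_transform(data.T)   # (n_voxels, n_components)
time_courses = ica.mixing_                 # (n_timepoints, n_components)
```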
Using principal component analysis, the dimensionality of the data is reduced to find the directions along which the variance of the data is greatest. This method relies on the same basic matrix math as ICA, with a few important differences that are outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
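For comparison, a corresponding PCA sketch under the same assumed (time × voxels) data layout as above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 500))     # toy (time x voxels) matrix

pca = PCA(n_components=20)
pc_maps = pca.fit_transform(data.T)        # (n_voxels, n_components) maps
print(pca.explained_variance_ratio_[:5])   # variance captured per component
```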
To begin to investigate the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to track the correlation coefficients over time for different window sizes. From this technique it is apparent that the correlation with the seed region is not static over the length of the scan.
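A small sketch of the sliding-window idea, assuming two already-extracted time courses; the window and step sizes are arbitrary placeholders.

```python
import numpy as np

def sliding_window_corr(x, y, window=40, step=5):
    """Pearson r between two time courses inside a moving window."""
    return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1]
                     for s in range(0, len(x) - window + 1, step)])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = x + rng.standard_normal(200)   # partially correlated toy course
r_t = sliding_window_corr(x, y)    # correlation as it evolves in time
```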
The last method introduced, a point-process method, is among the more novel techniques because it does not require analysis of the continuous time series. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to improved computational efficiency, the benefit of a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
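A rough sketch of one plausible point-process variant: averaging the spatial pattern over the frames where the seed signal crosses a z-score threshold. The threshold and data layout are assumptions for illustration.

```python
import numpy as np

def point_process_map(data, seed_ts, z_thresh=1.0):
    """Mean spatial pattern over time points where the seed signal
    exceeds a z-score threshold (brief high-amplitude events)."""
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = np.flatnonzero(z > z_thresh)   # supra-threshold frames only
    return data[events].mean(axis=0)

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 500))      # toy (time x voxels) matrix
seed_ts = data[:, :10].mean(axis=1)
network_map = point_process_map(data, seed_ts)
```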
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity. The PCA technique is not optimal given the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed, and the method assumes that the DMN regions are correlated with the seed throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The final point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. The low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt one. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper, and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conduct a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model, which quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly, although the significance vanishes when I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
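As a sketch of how such a difference-in-differences estimate with station-clustered standard errors can be set up (using statsmodels on synthetic data, not the dissertation's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic station-week panel; variable names are illustrative.
rng = np.random.default_rng(0)
n_st, n_t = 40, 100
df = pd.DataFrame({
    "station": np.repeat(np.arange(n_st), n_t),
    "treated": np.repeat(rng.integers(0, 2, n_st), n_t),
    "post": np.tile((np.arange(n_t) >= 50).astype(int), n_st),
})
df["price"] = (100.0 - 0.7 * df.treated * df.post   # built-in DiD effect
               + rng.normal(0, 1, len(df)))

# Difference-in-differences with station-clustered standard errors.
m = smf.ols("price ~ treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["station"]})
print(m.params["treated:post"])   # estimate near -0.7
```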
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and explains firms' potential incentives to adopt such a guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charging the lowest price among their competitors. A counterfactual analysis under Bertrand competition shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to draw more consumers into the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product about which their consumers are most price-sensitive, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about, or regulate, low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through comparison with similar plants in the same industry is a highly desirable and strategic method of benchmarking for industrial energy managers; however, the energy performance data needed for such benchmarking are usually unavailable to them. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In developing the EPI tools, consideration is given to the role that performance-based indicators play in motivating change; to the steps necessary for indicator development, from engaging an industry and securing adequate data through to the application and use of the completed indicator; and to how indicators are employed in EPA's efforts to encourage industries to voluntarily improve their use of energy. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper and paperboard mills. The individual equations are presented, as are the instructions for using them as implemented in an associated Microsoft Excel-based spreadsheet tool.
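The actual EPIs rest on richer data and more elaborate statistical methods, but the core benchmarking idea of scoring a plant against the energy use predicted for its characteristics can be sketched as follows; all variables and coefficients here are invented.

```python
import numpy as np
import pandas as pd
from scipy.stats import percentileofscore
from sklearn.linear_model import LinearRegression

# Synthetic plant data standing in for confidential survey data.
rng = np.random.default_rng(1)
n = 200
plants = pd.DataFrame({
    "output_tons": rng.uniform(50, 500, n),
    "recycled_share": rng.uniform(0, 1, n),
})
plants["energy_tj"] = (0.8 * plants.output_tons - 20 * plants.recycled_share
                       + rng.normal(0, 15, n))

# Predict energy use from plant characteristics, then score each plant
# by how far below the prediction it falls.
X = plants[["output_tons", "recycled_share"]]
pred = LinearRegression().fit(X, plants["energy_tj"]).predict(X)
resid = plants["energy_tj"] - pred
# A plant using less energy than predicted for its mix scores high.
score = 100 - percentileofscore(resid, resid.iloc[0])
print(f"plant 0 efficiency score: {score:.0f}/100")
```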
Abstract:
Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, progress in improving model predictability, data interpretation, or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances is further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.
With the goal of improving the predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic modeling approaches, along with empirical studies across different scales, are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size, and leaf microroughness impacts the aerodynamic attributes, and thereby the ultrafine particle collection efficiency at the leaf/branch scale, is explored using wind tunnel experiments with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model, combined with particle flux and concentration measurements within and above a forest, is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto the canopy medium and the forest floor. For gases, a modeling framework accounting for the leaf-level boundary-layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and a soil-root interaction model. It should be noted that tackling all aspects of the potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities building on the foundation derived from this dissertation are also briefly discussed.
Abstract:
Multi-frequency eddy current (EC) inspection with a transmit-receive probe (two horizontally offset coils) is used to monitor the Pressure Tube (PT) to Calandria Tube (CT) gap of CANDU® fuel channels. Accurate gap measurements are crucial to ensure fitness for service; however, variations in probe liftoff, PT electrical resistivity, and PT wall thickness can generate systematic measurement errors. Validated mathematical models of the EC probe are very useful for data interpretation and may improve the gap measurement under inspection conditions where these parameters vary. As a first step, exact solutions were developed for the electromagnetic response of a transmit-receive coil pair situated above two parallel plates separated by an air gap. This model was validated against experimental data with flat-plate samples. Finite element method models revealed that this geometrical approximation could not accurately match experimental data from real tubes, so analytical solutions for the probe in a double-walled pipe (the CANDU® fuel channel geometry) were generated using the Second-Order Vector Potential (SOVP) formalism. All electromagnetic coupling coefficients arising from the probe and the layered conductors were determined and substituted into Kirchhoff's circuit equations to calculate the pickup coil signal. The flat-plate model was used as the basis for an Inverse Algorithm (IA) to simultaneously extract the relevant experimental parameters from EC data. The IA was validated over a large range of second-layer plate resistivities (1.7 to 174 µΩ∙cm), plate wall thicknesses (~1 to 4.9 mm), probe liftoffs (~2 to 8 mm), and plate-to-plate gaps (~0 to 13 mm). The IA achieved a relative error of less than 6% for the extracted FP resistivity and an accuracy of ±0.1 mm for the liftoff measurement. The IA achieved a plate gap measurement with an error of less than ±0.7 mm over a ~2.4 to 7.5 mm probe liftoff, and of ±0.3 mm at nominal liftoff (2.42±0.05 mm), providing confidence in the general validity of the algorithm. This demonstrates the potential of using an analytical model to extract variable parameters that may affect the gap measurement accuracy.
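A hedged sketch of the inverse-algorithm structure: a bounded least-squares fit of model parameters to multi-frequency signals. The forward model below is made-up toy physics standing in for the analytical SOVP solution; only the fitting mechanics are illustrative, with bounds taken from the validation ranges quoted above.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: each parameter perturbs the multi-frequency signal
# through a distinct basis function (invented physics, for structure only).
def forward_model(params, freqs):
    rho, wall, liftoff, gap = params
    f = np.asarray(freqs, float)
    return (np.log(rho) / np.sqrt(f) + wall * np.sqrt(f)
            - liftoff * f / 1e4 + gap / (1 + f / 1e4))

def residuals(params, freqs, measured):
    return forward_model(params, freqs) - measured

freqs = [2e3, 4e3, 8e3, 16e3]              # multi-frequency EC data
truth = [50.0, 2.0, 2.42, 5.0]             # rho, wall, liftoff, gap
measured = forward_model(truth, freqs)     # synthetic "measurement"

fit = least_squares(residuals, x0=[30.0, 1.5, 3.0, 6.0],
                    args=(freqs, measured),
                    bounds=([1.7, 1.0, 2.0, 0.0], [174.0, 4.9, 8.0, 13.0]))
print(fit.x)   # recovers the four parameters simultaneously
```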
Abstract:
Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared, and pressure sensors. Data homogenisation and integration is achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers are modelled as a time series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system using planning alone.
Abstract:
Ensuring the reproducibility of empirically acquired measurement data demands the consistent application of a uniform, standardized method. At the IFL of the KIT, a process description was therefore developed that is specifically applicable to measurements of intralogistics materials-handling equipment. Beyond the preparation and execution of the measurements, the collected data must subsequently be statistically evaluated and interpreted. This contribution presents the method and applies it to the example of performance and energy measurements.
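As a minimal illustration of the statistical evaluation step, assuming repeated readings from a single test run (the values below are invented, not IFL data):

```python
import numpy as np
from scipy import stats

# Made-up repeated power readings (kW) from one conveyor test run.
power = np.array([4.82, 4.91, 4.77, 4.85, 4.88, 4.79, 4.93, 4.84])
mean = power.mean()
# t-based 95% confidence interval for the mean of the readings.
lo, hi = stats.t.interval(0.95, df=len(power) - 1,
                          loc=mean, scale=stats.sem(power))
print(f"mean {mean:.2f} kW, 95% CI [{lo:.2f}, {hi:.2f}] kW")
```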
Abstract:
Chronic non-communicable diseases represent a major public health problem, requiring more effective investigation and control by government agencies. The aim of this study was to correlate the mortality rate for oral cancer in Brazilian state capitals from 1998 to 2002 with socioeconomic factors collected in the 2000 census, using an ecological study design. Mortality data were obtained from the Mortality Information System for 1998 to 2002; social indicators were taken from the Brazilian Human Development Atlases. After data collection, statistical analysis was performed using Pearson's correlation coefficient. The findings included positive and significant correlations with the socioeconomic indicators Municipal Human Development Index (MHDI), MHDI-income, MHDI-education, MHDI-life expectancy, and per capita income, and negative and significant correlations with the Gini Index and infant mortality. Despite the study's limitations and probable underreporting in less developed state capitals, the study found statistically significant correlations between the selected socioeconomic indicators and the oral cancer mortality rate.
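The correlation step can be sketched in a few lines; the numbers below are invented placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative made-up values, one entry per state capital; the study
# used DATASUS mortality rates and census indicators instead.
mhdi      = np.array([0.82, 0.79, 0.74, 0.69, 0.77, 0.71])
mortality = np.array([3.1, 2.8, 2.2, 1.6, 2.5, 1.9])   # per 100,000

r, p = pearsonr(mhdi, mortality)   # correlation and its p-value
print(f"r = {r:.2f}, p = {p:.3f}")
```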
Strategic planning process in a public university: the case of the Universidade Federal do Pará
Abstract:
The goal of this research is to determine whether the strategic planning developed between 2001 and 2009 at the Federal University of Pará (Universidade Federal do Pará, UFPA) was consolidated in its Academic Centers as a management practice. To this end, we identified the degree of planning formalization in the Academic Centers, the tools conceived for planning, the conception and methodological process used in elaborating those tools, and their implementation. The research used a qualitative approach: it is descriptive and employs the case study technique. Data were gathered from primary and secondary sources, through bibliographic and documentary research and through field research with semi-structured interviews. Analysis and data interpretation were done for each investigated Academic Center using analytical categories guided by the specific goals of the study. We used theoretically grounded principles, taking the university as the empirical reference, based on analysis of its structure, organizational processes, and institutional strategic plan. From the collected documents and interviews, we examined how the strategic planning process developed over the period in question and where the investigated Academic Centers now stand. The theoretical foundation was built on three axes: the Brazilian undergraduate and postgraduate education system; the university itself, including its singularity and complexity as an organization; and planning as a strategic management process. The main results show that UFPA has up-to-date regulatory milestones, presenting an organizational structure, laws, instructions, manuals, and a deployed management model that provide the conditions for strategic planning to develop beyond its central administration, i.e., in its Academic Centers. The centers also exhibit those established milestones and carry out the institution's basic planning processes. Those processes are conceived on the basis of the institutional strategic planning, and the managers mainly follow the procedural orientation defined by the university administration, from which the conceptual foundation originates and is propagated. According to the literature and the research done in this work, we can conclude that the Academic Centers of UFPA have developed the practice of strategic planning. This planning is organized and well founded, and it has guided the plans and decisions, avoiding disordered management and, according to the managers, enabling advances and performance improvement. We can conclude that UFPA has built an important foundation with respect to the professionalization of management. On the other hand, we cannot conclude that the management practice is consolidated, since there are weaknesses in the structuring of the technical teams and no management tool exists for implementing the elaborated plans.
Abstract:
This research examined the personnel policies of the Federal University of Pará (UFPA) aimed at the administrative staff, implemented by the Office of Personnel Management (PROGEP) through performance management and development in the 2006 to 2009 period, during which the Institutional Plan for Technical and Administrative Staff (PIDT) was implemented, with a view to ascertaining whether these actions were developed in line with the ideas of managerialism or New Public Management (NPM). The study opted for qualitative research, using interviews as the data-collection tool. The informants were PROGEP/UFPA managers who served in that period. Data interpretation was based on content analysis, collating the interviews and the documents produced during the period against the managerial categories. The analysis revealed that personnel management at UFPA has the characteristics of a hybrid arrangement, with two management models observable in the period studied: one bureaucratic, rational, and process-focused, the contemporary face of public organizations; and one managerialist, adopted by PROGEP in obedience to the mandatory policies of the federal government, with the characteristics of a process-oriented personnel policy being much more present. It concludes that UFPA's personnel policy was not fully attuned to managerialism in the period surveyed.