Abstract:
This study examines the factors that influence public managers in the adoption of advanced Information Security Management practices. The assertions used in the research were based on the security standard ISO 27001:2005 and on a theoretical model derived from the TAM (Technology Acceptance Model) of Venkatesh and Davis (2000). The method adopted was a field survey of national scope with the participation of eighty public administrators from Brazilian states, all of them managers and planners in state governments. The approach was quantitative, and descriptive statistics, factor analysis and multiple linear regression were used for data analysis. The survey results showed a correlation between the constructs of the TAM model (ease of use, perceived value, attitude and intention to use) and agreement with the assertions drawn from ISO 27001, indicating that these factors influence managers in the adoption of such practices. For the other independent variables of the model (organizational profile, demographic profile and manager behavior), no significant correlation with the assertions of the standard was identified, which suggests the need for further research using these constructs. It is hoped that this study contributes positively to the discussion of Information Security Management, the adoption of security standards and the Technology Acceptance Model.
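As a rough illustration of the multiple linear regression step described above, the sketch below fits an ordinary least squares model with numpy. The variable names echo the TAM constructs, but the data and coefficients are synthetic, invented purely for demonstration; they are not the study's data.

```python
# Illustrative sketch (not the study's actual data): fitting a multiple
# linear regression of "intention to use" on hypothetical TAM construct
# scores using ordinary least squares via numpy.
import numpy as np

rng = np.random.default_rng(0)
n = 80  # same order of magnitude as the study's eighty respondents

# Synthetic predictor scores (e.g., 5-point Likert averages)
ease_of_use = rng.uniform(1, 5, n)
perceived_value = rng.uniform(1, 5, n)
attitude = rng.uniform(1, 5, n)

# Synthetic response built from known coefficients plus small noise
intention = (0.5 * ease_of_use + 0.8 * perceived_value
             + 0.3 * attitude + 1.0 + rng.normal(0, 0.1, n))

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), ease_of_use, perceived_value, attitude])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(coef)  # estimates should be close to [1.0, 0.5, 0.8, 0.3]
```

A real analysis would also report significance tests and fit statistics, which a library such as statsmodels provides on top of the same least squares fit.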
Abstract:
In an environment of constant change, technological development, market competition and better-informed consumers, the pursuit of a lasting relationship through customer loyalty has become a central objective for companies. Several authors, however, suggest that this loyalty can be affected by negative comments available on the internet. This dissertation therefore examines whether complaints available on the internet affect loyalty to a mobile phone brand. The research was based on the Expanded NCSB model suggested by Johnson et al. (2001), studying five prominent drivers of loyalty: image/brand reputation, affective commitment, calculative commitment, perceived value and trust, with the satisfaction construct as a moderating variable. The research method adopted was an experimental design that included 285 undergraduate students, with a field study of the mobile industry, specifically cell phone brands. The approach was quantitative, and descriptive statistics, factor analysis, cluster analysis, linear regression and the non-parametric Wilcoxon test were used for data analysis. Of the 16 hypotheses derived from the proposed research model, 12 were confirmed. The results showed that complaints available on the internet, here represented by those posted on the site Reclame Aqui, can affect consumer perceptions of brand loyalty as well as its antecedents, and that these complaints can affect all consumers regardless of their history of satisfaction with the brand. A positive relationship was also observed between the independent variables (trust, image/brand reputation, perceived value, affective commitment and calculative commitment) and the dependent variable, loyalty, even when considering the data obtained after exposure to the complaint.
However, it could not be concluded unanimously that the relationship between these variables was strongest in the group with a satisfactory experience. At the first stage of the research, trust was the most important variable for the formation of loyalty; after exposure to the treatment, however, image/brand reputation became more relevant. Contributions of the study, limitations and recommendations for future research are also presented.
Abstract:
This study presents the self-efficacy characteristics of Administration students who work and who do not work. It was conducted as a descriptive field study with a quantitative approach based on statistical procedures. The population comprised 394 students distributed across three Higher Education Institutions in the metropolitan region of Belém, in the State of Pará. Sampling was non-probabilistic by accessibility, with a sample of 254 subjects. The data collection instrument was a questionnaire divided into three sections: the first covering sociodemographic data, the second built to identify the respondent's work situation, and the third containing items from the General Perceived Self-Efficacy Scale proposed by Schwarzer and Jerusalem (1999). Sociodemographic data were processed using descriptive statistics, which allowed the sample subjects to be characterized. To identify the work situation, frequency and percentage analyses were used to classify the respondents who worked and those who did not, and the self-efficacy scale data were processed quantitatively by multivariate statistics, using Exploratory Factor Analysis in the Statistical Package for the Social Sciences for Windows (SPSS), version 17. This procedure allowed the students who worked and those who did not to be characterized. The results were discussed in the light of Social Cognitive Theory, based on Albert Bandura's (1977) construct of self-efficacy. The study revealed a young sample, composed mostly of single women with work experience, and indicated that the self-efficacy characteristics of students who work and of students who do not work are different.
The self-efficacy beliefs of students who do not work are based on psychological expectations, whereas students who work showed that their efficacy beliefs are sustained by previous experience. Students who do not work proved confident in their ability to achieve successful performance in their activities, believing it easy to achieve their goals and to face difficult situations at work simply by investing the necessary effort and trusting their abilities. Those with work experience proved confident in their ability to conduct courses of action, although aware that it is not easy to achieve their goals, and in unexpected situations demonstrated the ability to solve difficult problems.
Abstract:
Forecasting is the basis for strategic, tactical and operational business decisions. In financial economics, several techniques have been used over recent decades to predict the behavior of assets. Many methods exist to assist in time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANNs) are a relatively new and promising method for business forecasting, a technique that has attracted much interest in the financial community and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional time series analysis methods. For this purpose, a quantitative study was developed using financial economic indices, and two supervised-learning feedforward ANN models were built, whose structures consisted of 20 inputs, 90 neurons in one hidden layer and one output (the Ibovespa). These models used backpropagation, a hyperbolic tangent sigmoid activation function in the hidden layer and a linear output function.
Since the aim was to analyze the suitability of Artificial Neural Networks for forecasting the Ibovespa, this analysis was performed by comparing results between the ANNs and a GARCH(1,1) time series predictive model. Once both methods (ANN and GARCH) were applied, the results were analyzed by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U and forecast encompassing tests. The models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that the three models have smaller errors than a naïve forecast. Although the ANN based on returns had lower precision indicator values than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than traditional time series models, represented by the GARCH model.
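The forecast-error metrics named above can be sketched as follows. The implementation and the short series below are illustrative (numpy), not the study's Ibovespa data, and the Theil's U shown is the common "relative to a naïve forecast" form, which matches the abstract's naïve-forecast comparison but may differ in detail from the exact variant used in the study.

```python
# Minimal sketch of the forecast-error metrics MSE, RMSE, MAE and
# Theil's U, computed with numpy on synthetic illustrative data.
import numpy as np

def mse(actual, forecast):
    return float(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2))

def rmse(actual, forecast):
    return float(np.sqrt(mse(actual, forecast)))

def mae(actual, forecast):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

def theil_u2(actual, forecast):
    """Theil's U relative to a naive (no-change) forecast.

    Values below 1 indicate the forecast beats naively repeating the
    previous observation.
    """
    y = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    # Compare forecast errors with naive-forecast errors from t=1 onward,
    # both expressed relative to the previous observed value.
    fe = (f[1:] - y[1:]) / y[:-1]
    ne = (y[1:] - y[:-1]) / y[:-1]
    return float(np.sqrt(np.sum(fe ** 2) / np.sum(ne ** 2)))

actual = np.array([100.0, 102.0, 101.0, 105.0, 107.0])
forecast = np.array([100.0, 101.5, 101.5, 104.0, 106.5])

print(mse(actual, forecast), rmse(actual, forecast), mae(actual, forecast))
print(theil_u2(actual, forecast))  # below 1: better than the naive forecast
```

Comparing two candidate models then amounts to computing these metrics for each model's forecasts over the same hold-out period, as the abstract describes for the ANN and GARCH(1,1) models.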
Abstract:
Natural ventilation is an efficient bioclimatic strategy that provides thermal comfort, healthiness and passive cooling to a building. However, disregard for environmental quality, the uncertainties involved in the phenomenon and the popularization of artificial climate-control systems serve as an excuse for those who neglect the benefits of passive cooling. Unfamiliarity with the concept may be lessened if ventilation is considered at every stage of the design, especially in the initial phase, in which decisions have a great impact on the construction process. The tools available to quantify the impact of design decisions consist basically of air change rate calculations or computational fluid dynamics (CFD) simulations, both somewhat removed from design practice and ill-suited to parametric studies. Thus, we chose to verify, through computer simulation, the representativeness of the results of a simplified air change rate calculation method, as well as to make it more compatible with the questions relevant to the first phases of the design process. The case object is a model derived from the recommendations of the Código de Obras de Natal/RN, adapted according to NBR 15220. The study showed the complexity of incorporating a CFD tool into the design process and the need for a method capable of generating data at a rate compatible with the flow of ideas considered and discarded during design development. Finally, we discuss the concessions necessary to carry out the simulations, the applicability and limitations of both the tools used and the method adopted, and the representativeness of the results obtained.
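A simplified air change rate calculation of the kind contrasted with CFD above can be sketched as follows. This is a generic textbook form (wind-driven flow Q = Cd·A·v and air changes per hour ACH = 3600·Q/V) with an assumed discharge coefficient and example dimensions; it is not the specific method the study evaluated.

```python
# Minimal sketch of a simplified, wind-driven air change rate estimate.
# The discharge coefficient (0.6) and the example opening, wind speed
# and room volume are illustrative assumptions, not values from the study.

def airflow_wind_driven(opening_area_m2, wind_speed_ms, cd=0.6):
    """Volumetric airflow (m^3/s) through an opening for a given free
    wind speed, using a constant discharge coefficient Cd."""
    return cd * opening_area_m2 * wind_speed_ms

def air_changes_per_hour(airflow_m3s, room_volume_m3):
    """Air change rate: room volumes of air renewed per hour."""
    return airflow_m3s * 3600.0 / room_volume_m3

# Example: a 1.5 m^2 window opening, 2 m/s wind, 45 m^3 room
q = airflow_wind_driven(1.5, 2.0)      # 1.8 m^3/s
ach = air_changes_per_hour(q, 45.0)    # 144 air changes per hour
print(q, ach)
```

The appeal of such a formula in early design phases is exactly what the abstract argues: it produces an answer at the speed at which design alternatives are proposed and discarded, unlike a full CFD run.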
Abstract:
Chitin is an important structural component of the fungal cell wall and of the exoskeleton of many invertebrate pests, such as insects and nematodes. In insect digestive systems it forms a matrix called the peritrophic membrane. One of the most studied protein-carbohydrate interaction models involves chitin-binding proteins. Among the domains already characterized in this interaction are the hevein domain (HD), from Hevea brasiliensis (rubber tree); the R&R consensus domain (R&R), found in insect cuticular proteins; and the motif referred to in this study as the conglycinin motif (CD), found in the crystallographic structure of β-conglycinin bound to GlcNAc. These three chitin-binding domains were used to determine, in silico, which of them could be involved in the interaction of Canavalia ensiformis and Vigna unguiculata vicilins with chitin, and to associate these results with the WD50 of these vicilins for Callosobruchus maculatus larvae. Comparative modeling was used to construct the 3D model of the V. unguiculata vicilin, which was not found in the databases. Using the ClustalW program, the locations of these domains in the vicilin primary structures were obtained. The R&R and CD domains were found with greater homology in the vicilin primary sequences and became the targets of the interaction studies. Docking models of the vicilins with GlcNAc were obtained with the GRAMM program. The in silico analyses showed that HD is not part of the vicilin structures, confirming the result obtained with the alignment of the primary sequences; that the R&R domain, although without structural similarity in the vicilins, probably participates in their interaction with GlcNAc; and that the CD domain participates directly in the interaction of the vicilins with GlcNAc.
These in silico results show that the number of amino acids, and the types and number of bonds made by the CD motif with GlcNAc, seem to be directly associated with the deleterious effect these vicilins have on C. maculatus larvae. This may be a first step toward clarifying how vicilins interact with chitin in vivo and exert their toxic effect on insects that possess a peritrophic membrane.
Abstract:
Commercial Iota, Kappa and Lambda carrageenans are rarely pure and normally contain varying amounts of the other carrageenan types; the exact amount of impurity depends on the seaweed source and the extraction procedure. Different analytical methods have therefore been applied to determine the main constituents of carrageenans, since these three carrageenans are extensively used in the food, cosmetic and pharmaceutical industries. Electrophoresis of these compounds proved that carrageenans are constituted of sulfated polysaccharides. The compounds were characterized by colorimetric methods, and Lambda carrageenan showed the highest sulfate content (33.38%). The polymers were examined by ¹³C NMR and infrared spectroscopy. The polysaccharides consisted mainly of alternating units of sulfated galactoses and anhydrogalactoses. The study also aimed to test the inflammatory action of these different polysaccharides. A suitable model of inflammation is the acute sterile inflammation of the rat hind limb induced by carrageenan. Paw edema was induced by injecting carrageenans (κ, ι and λ) in saline into the hind paws of male Wistar rats (175–200 g). The course of acute inflammation induced by each carrageenan (kappa, iota and lambda) was expressed as a time-edema relationship and measured by paw edema volume. For this purpose, a pachymeter was used, which makes it possible to measure the inflammation (swelling of the rat foot) with sufficient accuracy. The results showed that κ-carrageenan (1%) produced an edema of 3.7 mm, with a time- and dose-dependent increase, while ι-carrageenan (0.2%) caused an edema of 4 mm and λ-carrageenan (1%) an edema of 3.6 mm. Another model, based on pleural inflammation, was used in this study for comparative purposes.
Injection of carrageenans into the pleural cavity of rats induced an acute inflammatory response characterized by fluid accumulation in the pleural cavity, a large number of neutrophils and raised NO production. NO levels were measured with the Griess reagent. ι-Carrageenan caused the greatest inflammation, with the highest nitrite/nitrate concentration (63.478 nmol/rat), exudate volume (1.52 ml) and PMN count (4902 × 10³ cells). Quantitative evaluation of inflammation in rats is a useful and important parameter for evaluating the efficacy of anti-inflammatory drugs.
Abstract:
The present work contributes to the study of the development and solid-state sintering of a metal matrix composite (MMC) whose starting materials are water-atomized 316L stainless steel and two different tantalum carbide (TaC) powders, with average crystallite sizes of 13.78 nm and 40.66 nm. Aiming to increase the density and hardness of the metal matrix, TaC of different nanometric sizes was added by dispersion. 316L stainless steel is an alloy widely used for its high corrosion resistance; its application, however, is limited by low wear resistance, a consequence of its low hardness. In addition, it shows low sinterability and cannot be hardened by traditional heat treatment methods because of its austenitic, face-centered cubic structure, stabilized mainly by the presence of nickel. Steel samples were mixed with 3 wt% TaC (each sample with a different type of carbide) following a mechanical milling route in a conventional mill for 24 hours. Each of the resulting samples, as well as a pure steel sample, was compacted at 700 MPa and room temperature, without any additive, under uniaxial pressure, using a 5 mm diameter cylindrical die, with the powder quantity calculated to obtain a compact with a final average height of 5 mm. Subsequently, the compacts were sintered in a vacuum atmosphere at 1290°C, with a heating rate of 20°C/min, using soaking times of 30 and 60 min, and then cooled to room temperature. The sintered samples were submitted to density and microhardness analyses. The TaC-reinforced samples showed higher density values and an expressive increase in hardness. Complementary analyses by optical microscopy, scanning electron microscopy and X-ray diffraction showed that the TaC, as processed, contributed to the hardness increase through densification, its own intrinsic hardness and grain growth control in the metal matrix, segregating to the grain boundaries.
Abstract:
This work seeks an alternative to the tantalum electrolytic capacitors currently on the market, motivated by their high cost. Niobium is a potential substitute, since both elements belong to the same group of the periodic table and therefore share many physical and chemical properties. Niobium has several technologically important applications, and Brazil holds the largest reserves, around 96% of the world total, including niobium in the tantalite and columbite reserves of Rio Grande do Norte. These electrolytic capacitors have high specific capacitance, i.e., they can store a large amount of energy in a small volume compared with other capacitor types. This is the main attraction of this type of capacitor, given the growing demand for capacitors of ever higher specific capacitance, driven by the miniaturization of devices such as GPS units, televisions, computers, phones and many others. The capacitors were produced by powder metallurgy. The starting niobium powder, supplied by EEL-USP, was first characterized by XRD, SEM, XRF and laser granulometry, and then sieved into three particle sizes: 200, 400 and 635 mesh. The powders were then compacted and sintered at 1350, 1450 and 1550°C, using sintering times of 30 and 60 min. Sintering is one of the most important steps of the process, as it affects properties such as porosity and the surface cleanliness of the samples, which greatly affect the quality of the capacitor. The sintered samples then underwent anodic oxidation, which created a thin film of niobium pentoxide over the entire porous surface of the sample; this film is the capacitor's dielectric. The oxidation process variables influence the performance of the film and therefore of the capacitor. The samples were characterized by electrical measurements of capacitance, loss factor and ESR, and by relative density, porosity and surface area. After these characterizations, an annealing in air at 260°C for 60 min was performed.
After this treatment, the electrical measurements were repeated. The powder particle size and the sintering conditions affected the porosity and, in turn, the specific area of the samples: the larger the capacitor area, the greater the capacitance. The powder showing the highest capacitance was the one with the smallest particle size. Higher sintering temperatures and times produced samples with smaller surface areas but, on the other hand, greater removal of surface impurities. A balance must therefore be struck between the gain achieved by removing impurities and the loss due to the decrease in specific area. The best results were obtained at 1450°C/60 min. The influence of annealing on the loss factor and ESR did not follow a well-defined pattern, as their values increased in some cases and decreased in others. The most interesting result of the heat treatment concerned the capacitance, which increased for all samples after treatment.
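The link drawn above between electrode area and capacitance follows the parallel-plate relation C = ε₀·εᵣ·A/d. The sketch below uses an illustrative relative permittivity for Nb₂O₅ (around 41, a commonly quoted textbook value) and an assumed film thickness and area; these are not figures reported by the work.

```python
# Minimal sketch of the parallel-plate capacitance relation behind the
# abstract's claim that a larger porous surface area yields a larger
# capacitance. The permittivity, thickness and areas are illustrative
# assumptions, not the study's measured values.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, thickness_m, eps_r):
    """Capacitance (F) of a dielectric film of given area and thickness."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Example: effective porous surface of 0.10 m^2, 50 nm oxide film
c1 = capacitance(0.10, 50e-9, 41.0)
c2 = capacitance(0.20, 50e-9, 41.0)  # doubling the area doubles C
print(c1, c2)
```

This is why the trade-off described in the abstract matters: sintering conditions that shrink the specific surface area directly shrink the achievable capacitance, even as they improve surface cleanliness.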
Abstract:
This master's dissertation contributes to the study of 316L stainless steel sintering, examining the alloy's behavior in the milling process and the effect of the isotherm temperature on the microstructure and mechanical properties. 316L stainless steel is widely used for its high corrosion resistance; its application, however, is limited by low wear resistance, a consequence of its low hardness. Previous work analyzed the effect of sintering additives such as NbC and TaC. The present study deepens the understanding of sintering, analyzing the effect of milling on particle size and microstructure, and the effect of heating rate and soaking time on the sintered microstructure and microhardness. 316L powders were milled with NbC for 1, 5 and 24 hours. The particulates were characterized by SEM and … . Cylindrical samples 5.0 mm in height and diameter were compacted at 700 MPa. The sintering conditions were: heating rates of 5, 10 and 15°C/min; temperatures of 1000, 1100, 1200, 1290 and 1300°C; and soaking times of 30 and 60 min. The cooling rate was maintained at 25°C/min. All samples were sintered in a vacuum furnace. The sintered microstructures were characterized by optical and electron microscopy, as well as by density and microhardness measurements. It was observed that the milling process influences sintering, as does temperature. The largest effect was caused by the sintering temperature, followed by milling and then heating rate, with the highest temperatures corresponding to the greatest degrees of sintering.
Abstract:
The use of binders in soil for the production of solid bricks is an old construction technique used by several civilizations over time. At the same time, the need for environmental preservation and the trend toward scarcity of natural resources lead the construction industry to invest in research into new concepts, methods and materials for building systems, seeking the sustainability of its economic activities. Hence the need for building materials with low energy consumption, capable of reducing the growing housing shortage of the rural and urban population. Research has been conducted on this topic to better understand the cementitious and pozzolanic reactions that occur in the formation of the soil-cement microstructure when other materials, such as lime, are added, and the relationship between the microstructure and the interfaces formed and the physical, mechanical and chemical behavior of compounds made from these ternary compositions. In this context, this study analyzed the influence of incorporating lime into soil-cement to form a ternary mixture for the production of soil-cement bricks and non-structural mortar. From the addition of lime contents of 6%, 8%, 10% and 12% to the soil, and cement contents of 2%, 3%, 4% and 5%, cylindrical specimens were molded to determine the optimum moisture content and the maximum apparent dry specific weight. The specimens were then cured and subjected to compressive strength, absorption and modified durability tests. The compositions that achieved the best results in the tests on the cylindrical specimens served as parameters for molding the solid bricks, which underwent the same experimental methodology. The raw materials used, as well as the compositions in which the solid bricks were molded, were characterized by physical and chemical tests, X-ray diffraction and scanning electron microscopy.
The results indicate that, among the compositions studied, the one with the best performance in terms of compressive strength, water absorption and durability was the ternary composition of soil, 10% cement and 2% lime.
Abstract:
This work reports the influence of surface modification of poly(ethylene terephthalate) textiles by O2 and (N2 + O2) plasmas on their physical and chemical properties. The treatment was carried out in a vacuum chamber. Some parameters remained constant throughout the treatment: voltage, 470 V; pressure, 1.250 mbar; current, 0.10 A; and gas flow, 10 cm³/min. Other parameters, such as working gas composition and treatment time, were varied as follows: for the samples modified by O2 plasma, only the treatment time was changed (10, 20, 30, 40, 50 and 60 minutes); for the plasma with O2 and N2, only the gas concentrations were changed. Vertical capillarity tests showed an increase in textile wettability, as well as the influence of aging time and its consequence on wettability. The surface functional groups created after the plasma treatments were investigated by X-ray Photoelectron Spectroscopy (XPS), and the surface topography was examined by scanning electron microscopy (SEM).
Abstract:
The exploitation of heavy oil reservoirs is increasing worldwide every year, because discoveries of light oil reservoirs are becoming increasingly rare. This fact has stimulated research aimed at making the exploitation of such oil reserves technically and economically viable. In Brazil, especially in the Northeast region, there are many heavy oil reservoirs in which recovery by the so-called secondary methods (water injection or gas injection) is inefficient or even impracticable because of the high oil viscosity. In this scenario, steam injection appears as an interesting alternative for the recovery of these reservoirs. Its main mechanism is the reduction of oil viscosity by raising the reservoir temperature through steam injection. This work presents a parametric simulation study of operational and reservoir variables that influence oil recovery in the thin reservoirs typically found in Brazilian Northeast basins that use steam injection as an improved oil recovery method. The simulations were carried out with the commercial software STARS (Steam, Thermal, and Advanced Processes Reservoir Simulator) from CMG (Computer Modelling Group), version 2007.11. The reservoir variables studied were horizontal permeability, vertical-to-horizontal permeability ratio, water zone to pay zone thickness ratio, pay zone thickness and thermal conductivity of the rock, while the operational parameters studied were the distance between wells and the steam injection rate. The results showed that the reservoir variables with the most influence on oil recovery were the horizontal permeability and the water zone to pay zone thickness ratio; regarding the operational variables, short distances between wells and low steam injection rates improved oil recovery.
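The viscosity-reduction mechanism described above can be illustrated with an Andrade (Arrhenius-type) viscosity-temperature correlation, a standard textbook form for oils. The coefficients below are invented for illustration and are not parameters from the study's simulations.

```python
# Minimal sketch of how heating reduces oil viscosity, using an
# Andrade-type law mu = A * exp(B / T). The coefficients A and B are
# illustrative assumptions, not fitted values from the study.
import math

def andrade_viscosity(temp_k, a=5.0e-7, b=6000.0):
    """Oil viscosity (cP) at absolute temperature temp_k (K)."""
    return a * math.exp(b / temp_k)

mu_cold = andrade_viscosity(300.0)  # reservoir at roughly 27 degrees C
mu_hot = andrade_viscosity(450.0)   # after steam heating, roughly 177 degrees C
print(mu_cold, mu_hot)  # viscosity drops by orders of magnitude
```

The exponential dependence on temperature is the whole point of steam flooding: a modest temperature rise produces a disproportionately large drop in viscosity and hence in flow resistance.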
Abstract:
Currently, with much of the world's attention focused on petroleum, many studies in this field have advanced so as to make production viable in reservoirs once classified as unviable. Because of the geological and operational challenges posed by oil recovery, increasingly efficient and economically successful methods have been sought. Against this background, steam flooding stands out, mainly when combined with other procedures, offering low costs and high recovery factors. This work used nitrogen as an alternative fluid after steam flooding, seeking the best combination of alternation between these fluids in terms of time and injection rate. To describe a simplified economic profile, several analyses based on cumulative liquid production were performed. The completion interval and fluid injection rates were fixed, and the oil viscosity was varied among 300 cP, 1,000 cP and 3,000 cP. The results defined, for each viscosity, a specific model indicating the best moment to stop the injection of steam and begin the injection of nitrogen, once the first injected fluid reached its economic limit. The simulations were performed on a physical model defined from one-eighth of an inverted nine-spot pattern, using the commercial simulator STARS (Steam, Thermal and Advanced Processes Reservoir Simulator) of the Computer Modelling Group (CMG).
Abstract:
Due to reservoir complexity and significantly large reserves, heavy oil recovery has become one of the major challenges of the oil industry. Thermal methods have therefore been widely used as a strategy to improve heavy oil recovery. These methods improve oil displacement through viscosity reduction, enabling oil production in fields not considered commercial under conventional recovery methods. Among the thermal processes, steam flooding is the most used today. One consequence of this process is gravity segregation, caused by the density difference between the reservoir and injected fluids; this phenomenon may be influenced by the presence of reservoir heterogeneities. Since most studies are carried out on homogeneous reservoirs while most oil reservoirs are heterogeneous, more detailed studies of the effects of heterogeneities during steam flooding are necessary. This work presents a study of reservoir heterogeneities and their influence on gravity segregation during the steam flooding process. Heterogeneous reservoirs with physical characteristics similar to those found in the Brazilian Northeast basins were analyzed. The simulations were carried out with the commercial simulator STARS by CMG (Computer Modelling Group), version 2007.11, and the heterogeneities were modeled as lower-permeability layers. The results showed that, depending on their location, low-permeability barriers can improve oil recovery and reduce the effects of gravity segregation. The presence of these barriers also increased the recovered fraction even with a reduced steam injection rate.