958 results for Analytical hierarchical process
Abstract:
Objective: To analyze the characteristics of faculty work in nursing higher education. Method: An exploratory qualitative study with a theoretical-methodological framework of dialectical and historical materialism. The faculty work process was adopted as the analytical category, grounded in conceptions of work and professionalism. Semi-structured interviews were conducted with 24 faculty members from three higher education institutions in the city of São Paulo, classified according to a typology of institutional contexts. Results: The faculty members at these higher education institutions are a heterogeneous group working under different conditions. Intensification and precariousness of faculty work are common to all three contexts, although there are important distinctions in the practices related to teaching, research, and extension. Conclusion: Faculty professionalization can be the starting point for analyzing and coping with such a differentiated reality of faculty work and practice.
Abstract:
Achieving high quality in final products in the pharmaceutical industry is a challenge that requires control and supervision of all manufacturing steps. This requirement has created the need to develop fast and accurate analytical methods. Near-infrared (NIR) spectroscopy combined with chemometrics fulfills this growing demand. The speed with which these combined techniques provide relevant information, and the versatility of their application to different types of samples, make them among the most appropriate for the task. This study focuses on the development of a calibration model able to determine amounts of API in industrial granulates using NIR, chemometrics, and process spectra methodology.
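As a hedged illustration only (the abstract does not name its algorithm), calibration models of this kind are commonly built with partial least squares (PLS) regression; the synthetic spectra, API contents, and component count below are hypothetical stand-ins:

```python
# Hedged sketch of a NIR/chemometrics calibration (assumed PLS approach;
# the study above does not specify its algorithm). All data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
api_content = rng.uniform(5.0, 15.0, n_samples)        # reference API, % w/w

# Synthetic spectra: baseline + an API-correlated absorption band + noise
wavelengths = np.linspace(1100, 2500, n_wavelengths)   # nm, typical NIR range
band = np.exp(-((wavelengths - 1700) / 60.0) ** 2)
spectra = (0.1 + 0.02 * api_content[:, None] * band
           + rng.normal(0, 0.002, (n_samples, n_wavelengths)))

pls = PLSRegression(n_components=4)                    # component count is a guess
predicted = cross_val_predict(pls, spectra, api_content, cv=10)
rmsecv = np.sqrt(np.mean((predicted.ravel() - api_content) ** 2))
print(f"RMSECV: {rmsecv:.3f} % w/w")
```

In practice the component count would be selected by minimizing the cross-validated error rather than fixed in advance.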
Abstract:
Since the first anti-doping tests in the 1960s, the analytical aspects of testing have remained challenging. The evolution of the analytical process in doping control is discussed in this paper, with particular emphasis on separation techniques such as gas chromatography and liquid chromatography. These approaches are improving in parallel with the requirements of increasing sensitivity and selectivity for detecting prohibited substances in biological samples from athletes. Moreover, fast analyses are mandatory to deal with the growing number of doping control samples and the short response times required during particular sport events. Recent developments in mass spectrometry and the expansion of accurate mass determination have improved anti-doping strategies, offering the possibility of using elemental composition and isotope patterns for structural identification. These techniques must be able to distinguish unequivocally between negative and suspicious samples, with no false-negative or false-positive results. Therefore, a high degree of reliability must be reached for the identification of the major metabolites corresponding to suspected analytes. In line with current trends in the pharmaceutical industry, the analysis of proteins and peptides remains an important issue in doping control, and sophisticated analytical tools are still needed to improve their distinction from endogenous analogs. Finally, indirect approaches are discussed in the anti-doping context, where recent advances aim to examine the biological response to a doping agent in a holistic way.
Abstract:
In applied regional analysis, statistical information is usually published at different territorial levels with the aim of providing information of interest to different potential users. When using this information, there are two choices: first, to use normative regions (towns, provinces, etc.) or, second, to design analytical regions directly related to the phenomena under analysis. In this paper, provincial time series of unemployment rates in Spain are used to compare the results obtained by applying two analytical regionalisation models (a two-stage procedure based on cluster analysis and a procedure based on mathematical programming) with the normative regions available at two different scales: NUTS II and NUTS I. The results show that more homogeneous regions were designed when applying both analytical regionalisation tools. Two other interesting results are that the analytical regions were also more stable over time, and that scale effects influence the regionalisation process.
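As a hedged sketch of the cluster-analysis stage only (the paper's exact procedure is not reproduced, and a full regionalisation would also enforce spatial contiguity, which plain clustering does not), provinces with similar unemployment trajectories can be grouped hierarchically; the series below are synthetic stand-ins:

```python
# Hedged sketch of the cluster-analysis stage of analytical regionalisation.
# Provincial unemployment series are synthetic; Spain has 50 provinces and
# 17 NUTS II regions, which motivates the cluster count used here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_provinces, n_quarters = 50, 80
# Each row: one province's unemployment-rate time series (synthetic)
series = (rng.normal(15, 3, (n_provinces, 1))
          + rng.normal(0, 1, (n_provinces, n_quarters)))

# Ward linkage groups provinces with similar unemployment trajectories
Z = linkage(series, method="ward")
regions = fcluster(Z, t=17, criterion="maxclust")  # 17 analytical regions
print(regions[:10])
```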
Abstract:
We present a simple model of communication in networks with hierarchical branching. We analyze the behavior of the model from the viewpoint of critical systems under different situations. For certain values of the parameters, a continuous phase transition between a sparse and a congested regime is observed and accurately described by an order parameter and the power spectra. At the critical point, the behavior of the model is totally independent of the number of hierarchical levels. Scaling properties are also observed when the size of the system varies. The presence of noise in the communication is shown to break the transition. The analytical results are a useful guide for forecasting the main features of real networks.
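As an illustrative sketch under stated assumptions (not the paper's exact model): packets generated at the leaves of a regular tree and routed toward a root that forwards one packet per step reproduce the sparse/congested transition qualitatively. The tree parameters, rates, and the backlog-based order parameter below are choices of this sketch:

```python
# Hedged sketch of congestion in a hierarchical (tree) communication network.
# Assumptions: packets arise at the leaves with probability p per step, each
# level forwards packets toward the root at a capacity set by the receiving
# level, and the order parameter is the fraction of generated packets that
# remain undelivered at the end of the run.
import random
from collections import deque

def order_parameter(p, levels=4, branching=3, steps=2000):
    n_leaves = branching ** levels
    queues = [deque() for _ in range(levels + 1)]  # aggregate queue per level
    generated = 0
    for _ in range(steps):
        for _ in range(n_leaves):                  # packet creation at leaves
            if random.random() < p:
                queues[levels].append(1)
                generated += 1
        for lvl in range(1, levels + 1):           # forwarding toward the root
            capacity = branching ** (lvl - 1)      # nodes at receiving level
            for _ in range(capacity):
                if queues[lvl]:
                    queues[lvl].popleft()
                    if lvl > 1:
                        queues[lvl - 1].append(1)
                    # at lvl == 1 the packet is delivered at the root
    backlog = sum(len(q) for q in queues)
    return backlog / max(generated, 1)             # ~0 sparse, grows congested

# Congestion onset sits near p = 1 / n_leaves = 1/81 for these parameters
for p in (0.005, 0.0123, 0.05):
    print(p, round(order_parameter(p), 3))
```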
Abstract:
Recent reports indicate that of the over 25,000 bridges in Iowa, slightly over 7,000 (29%) are either structurally deficient or functionally obsolete. While many of these bridges may be strengthened or rehabilitated, some simply need to be replaced. Before implementing one of these options, one should consider performing a diagnostic load test on the structure to more accurately assess its load-carrying capacity. Frequently, diagnostic load tests reveal strength and serviceability characteristics that exceed the predicted codified parameters. Codified parameters are usually very conservative in predicting lateral load distribution characteristics and the influence of other structural attributes, so the predicted rating factors are typically conservative. In cases where theoretical calculations show a structural deficiency, it may be very beneficial to apply a "tool" that utilizes a more accurate theoretical model incorporating field-test data. At a minimum, this approach results in more accurate load ratings, and it often results in increased rating factors. Bridge Diagnostics, Inc. (BDI) developed hardware and software specially designed for performing bridge ratings based on data obtained from physical testing. To evaluate the BDI system, the research team performed diagnostic load tests on seven "typical" bridge structures: three steel-girder bridges with concrete decks, two concrete slab bridges, and two steel-girder bridges with timber decks. In addition, a steel-girder bridge with a concrete deck previously tested and modeled by BDI was investigated for model verification purposes. The tests were performed by attaching strain transducers to the bridges at critical locations to measure strains resulting from truck loading positioned at various locations on the bridge. The field test results were used to develop and validate analytical rating models. Based on the experimental and analytical results, it was determined that bridge tests could be conducted relatively easily, that accurate models could be generated with the BDI software, and that the load ratings, in general, were greater than the ratings obtained using the codified LFD method (according to the AASHTO Standard Specifications for Highway Bridges).
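For context, a hedged sketch of the LFD-style rating factor the abstract refers to (the coefficients follow the common AASHTO load-factor inventory form; the report's exact values may differ, and the load effects below are hypothetical):

```python
# Hedged sketch of an LFD-style load rating factor. A1 = 1.3 and A2 = 2.17
# are the usual inventory-level factors in AASHTO load factor rating; the
# report's exact coefficients and load effects are not reproduced here.
def rating_factor(capacity, dead_load_effect, live_load_effect,
                  impact=0.33, a1=1.3, a2=2.17):
    """RF = (C - A1*D) / (A2 * L * (1 + I)); RF >= 1.0 means adequate capacity."""
    return (capacity - a1 * dead_load_effect) / (a2 * live_load_effect * (1 + impact))

# Hypothetical moment effects (kip-ft) for a single girder
print(f"RF = {rating_factor(1200.0, 400.0, 250.0):.2f}")
```

Field-calibrated models typically lower the live-load effect per girder relative to codified distribution factors, which is why the measured ratings tend to exceed the codified ones.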
Abstract:
The present study is an integral part of a broader study focused on the design and implementation of self-cleaning culverts, i.e., configurations that prevent the formation of sediment deposits after culvert construction or cleaning. Sediment deposition at culverts is influenced by many factors, including the size and characteristics of the material of which the channel is composed, the hydraulic characteristics generated under different hydrologic events, the culvert geometry design, the channel transition design, and the vegetation around the channel. The multitude of combinations produced by this set of variables makes the investigation of practical situations a complex undertaking. In addition, field and analytical observations have revealed flow complexities affecting flow and sediment transport through culverts that further broaden the scope of the investigation. The flow complexities investigated in this study include: flow non-uniformity in the areas of transition to and from the culvert, flow unsteadiness due to flood wave propagation through the channel, and the asynchronous correlation between the flow and sediment hydrographs resulting from storm events. To date, the literature contains no systematic studies on sediment transport through multi-box culverts or investigations of the adverse effects of sediment deposition at culverts. Moreover, there is limited knowledge about non-uniform, unsteady sediment transport in channels of variable geometry, and there are few readily usable (inexpensive and practical) numerical models that can reliably simulate flow and sediment transport in such complex situations. Given the current state of knowledge, the main goal of the present study is to investigate the above flow complexities in order to provide the insights needed for a series of ongoing culvert studies. The research was phased so that field observations were conducted first to understand culvert behavior in the Iowa landscape. Complementary hydraulic-model and numerical experiments were subsequently carried out to gain the practical knowledge needed to develop the self-cleaning culvert designs.
Abstract:
A beautiful smile is directly associated with white teeth. Oral care has grown in recent years, and processes for achieving beautiful smiles have been developed accordingly. Dental bleaching is frequently used in odontology, not only for health care but also as an aesthetic treatment. With teeth bleaching now widely available, the key question becomes: how white is the tooth? Color is tied to individual perception, so to support correct tooth color identification, many color guides, models, color spaces, and analytical methods have been developed. Despite all of these useful tools, color interpretation depends on environmental factors, the position of the sample during data acquisition, and, most importantly, the sensitivity of the instrument. The common methods have proved useful: they are easy to handle and some are portable, but they do not have high sensitivity. The present work is based on the integration of a new analytical technique for color acquisition. Hyperspectral imaging (HSI) can perform image analysis with high quality and efficiency. HSI is used in many fields, and we used it here for color image analysis during the bleaching process. The main comparison was made between HSI and a colorimeter throughout two different bleaching protocols. The results showed that HSI has higher sensitivity than the colorimeter. While analyzing the dental surface with HSI, we were also able to notice surface changes; these changes were analyzed by roughness studies.
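As a hedged illustration (the study's exact pipeline is not reproduced), tooth shade comparisons of this kind are commonly expressed as a color difference in CIELAB space; the L*a*b* values below are hypothetical:

```python
# Hedged sketch: CIE76 color difference between two CIELAB measurements,
# as commonly used in dental shade assessment. Values are hypothetical.
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

before = (68.0, 2.5, 18.0)   # hypothetical tooth color before bleaching
after  = (74.0, 1.8, 12.5)   # hypothetical tooth color after bleaching
print(f"dE*ab = {delta_e_cie76(before, after):.2f}")  # > ~3.3 is often cited as clinically noticeable
```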
Abstract:
We have studied how leaders emerge in a group as a consequence of interactions among its members. We propose that leaders can emerge through a self-organized process based on local rules of dyadic interactions among individuals. Flocks are an example of self-organized behaviour in a group, and properties similar to those observed in flocks might also explain some of the dynamics and organization of human groups. We developed an agent-based model that generated flocks in a virtual world and implemented it in a multi-agent simulation program that computed indices at each time step to quantify the degree to which the group moved in a coordinated way (index of flocking behaviour) and the degree to which specific individuals led the group (index of hierarchical leadership). We ran several series of simulations to test our model and determine how these indices behaved under specific agent and world conditions. We identified the agent, world, and model parameters that made stable, compact flocks emerge, and explored possible environmental properties that predicted the probability of becoming a leader.
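As a hedged sketch (the paper's exact indices are not reproduced), a coordination index of this kind is often computed as the polarization of agent headings; the heading samples below are synthetic:

```python
# Hedged sketch of a flocking-coordination index: the mean resultant length
# of agent heading angles (a standard polarization order parameter, assumed
# here as a stand-in for the paper's index of flocking behaviour).
import numpy as np

def flocking_index(headings):
    """1.0 = perfectly aligned headings, ~0.0 = fully disordered."""
    return abs(np.exp(1j * np.asarray(headings)).mean())

rng = np.random.default_rng(2)
aligned    = rng.normal(0.0, 0.1, 100)        # radians, coordinated flock
disordered = rng.uniform(-np.pi, np.pi, 100)  # radians, uncoordinated group
print(round(flocking_index(aligned), 3), round(flocking_index(disordered), 3))
```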
Abstract:
In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to give students a set of practical problems to complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests: a comparison test of means and a paired t-test.
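As a hedged sketch of the first exercise type (the system itself is Matlab-based; this illustration uses Python, and all values are hypothetical), one can generate a randomized linear calibration set per student and compute the quality parameters to be reported:

```python
# Hedged sketch: generate a random linear calibration data set (different for
# each student) and compute regression quality parameters. The concentration
# levels, noise, and LOD convention are illustrative choices, not the system's.
import numpy as np

rng = np.random.default_rng()                    # fresh data on every run
concentrations = np.linspace(1, 10, 8)           # hypothetical standard levels
true_slope = rng.uniform(0.8, 1.2)
true_intercept = rng.uniform(-0.1, 0.1)
signals = (true_slope * concentrations + true_intercept
           + rng.normal(0, 0.03, concentrations.size))

# Least-squares fit and quality parameters of the calibration line
coef = np.polyfit(concentrations, signals, 1)    # [slope, intercept]
residuals = signals - np.polyval(coef, concentrations)
s_y = np.sqrt(np.sum(residuals**2) / (signals.size - 2))  # std. error of regression
r = np.corrcoef(concentrations, signals)[0, 1]
lod = 3.3 * s_y / coef[0]                        # ICH-style limit of detection
print(f"slope={coef[0]:.3f}, intercept={coef[1]:.3f}, r={r:.4f}, LOD={lod:.3f}")
```

Automatic evaluation can then compare each student's reported slope, intercept, r, and LOD against the values recomputed from that student's data set within a tolerance.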
Abstract:
The microencapsulation of palm oil may be a mechanism for protecting and promoting the controlled release of its bioactive compounds. To optimize the microencapsulation process, it is necessary to accurately quantify the palm oil present both outside and inside the microcapsules. In this study, we developed and validated a spectrophotometric method to determine the microencapsulation efficiency of palm oil by complex coacervation. We used gelatin and gum arabic (1:1) as wall material at a 5% concentration (w/v) and palm oil at the same concentration. The coacervates were obtained at pH 4.0 ± 0.01, decanted for 24 h, frozen (−40 °C), and lyophilized for 72 h. Morphological analyses were then performed. We standardized the extraction of the external palm oil through five successive washes with an organic solvent, and then explored the best method for rupturing the microcapsules. After successive extractions with hexane, we determined the amount of palm oil contained in the microcapsules using a spectrophotometer. The proposed method proved to be low cost, fast, and easy to implement. In addition, in the validation step, we confirmed the method to be safe and reliable, as it proved to be specific, accurate, precise, and robust.
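As a hedged illustration (the paper's exact definition is not quoted above), microencapsulation efficiency is commonly computed from the surface and total oil quantified spectrophotometrically; the amounts below are hypothetical:

```python
# Hedged sketch: a common definition of encapsulation efficiency, i.e., the
# fraction of total oil that is not left on the capsule surface. The paper's
# exact definition and spectrophotometric calibration are not reproduced.
def encapsulation_efficiency(total_oil_mg, surface_oil_mg):
    """EE (%) = (total oil - surface oil) / total oil * 100."""
    return (total_oil_mg - surface_oil_mg) / total_oil_mg * 100.0

# Hypothetical amounts quantified after the hexane extraction steps
print(f"EE = {encapsulation_efficiency(250.0, 32.0):.1f} %")
```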
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Despite numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax, and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players in which the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating the stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create good and relevant grounds for developing more complicated and integrated models of post-optimal analysis and for solving the most computationally challenging problems related to it.
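As a hedged illustration (the thesis's exact formulation is not quoted above), a parameterized achievement scalarizing function in the Wierzbicki style, with reference point z-bar, weights w_i, and a small augmentation term rho > 0, can be written as:

```latex
s\bigl(f(x), \bar{z}, w\bigr)
  = \max_{i = 1, \dots, k} \, w_i \bigl( f_i(x) - \bar{z}_i \bigr)
  + \rho \sum_{i=1}^{k} w_i \bigl( f_i(x) - \bar{z}_i \bigr)
```

Minimizing s over the feasible set yields a (weakly) Pareto optimal solution, and an interactive procedure of the kind described above would steer the search by updating the weighting coefficients w_i between iterations.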
Abstract:
The purpose of this work was to develop an analytical separation method for studying and analyzing the reaction products formed between an oxidizing agent and a solvent used in a particular manufacturing process. A further aim was to investigate the safety of the process conditions. The literature section discusses various organic peroxides, their uses, and the considerations associated with their use. It also reviews the most common analytical methods that have been applied to different peroxides. These methods have most often been used for liquid samples; gas and solid samples have been analyzed less frequently. In the experimental section, an identification method for peroxide compounds was developed on the basis of the literature, and the process samples were examined. Iodometric titration and an HPLC-UV-MS method were chosen as the analytical methods; test strips suitable for peroxide measurement were also used. The study showed that, according to the iodometric titration and the test strips, the samples contained small amounts of peroxides one week after the peroxide addition. According to the HPLC-UV-MS analyses, the analysis of the samples was interfered with by cellulose, which was found in every sample.