919 results for Management Sciences and Quantitative Methods
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations, such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods are needed for selecting the best process alternative as well as the optimal operating conditions. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with the assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to the one applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytical solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse.
It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows prediction of the feasible range of operating parameters that lead to the desired product purities. It can be applied to the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is used to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Owing to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
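The competitive Langmuir isotherm underlying the design equations can be sketched as follows; this is a minimal illustration with hypothetical saturation capacities and equilibrium constants, not the thesis's actual parameter values:

```python
def competitive_langmuir(c, q_sat, K):
    # Competitive Langmuir loadings: q_i = q_sat_i * K_i * c_i / (1 + sum_j K_j * c_j)
    denom = 1.0 + sum(k * ci for k, ci in zip(K, c))
    return [qs * k * ci / denom for qs, k, ci in zip(q_sat, K, c)]

# Equimolar binary feed; component 2 adsorbs more strongly (larger K)
q1, q2 = competitive_langmuir(c=[1.0, 1.0], q_sat=[10.0, 10.0], K=[0.5, 1.0])  # 2.0, 4.0
```

The coupling through the shared denominator is what makes the binary problem non-trivial: each component's loading depends on both concentrations.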
Abstract:
The aim of this master’s thesis is to research and analyze how purchase invoice processing can be automated and streamlined in a system renewal project. The impacts of workflow automation on invoice handling are studied in terms of time, cost, and quality. Purchase invoice processing has considerable potential for automation because of its labor-intensive and repetitive nature. As a case study combining both qualitative and quantitative methods, the topic is approached from a business process management point of view. The current process was first explored through interviews and workshop meetings to create a holistic understanding of the process at hand. Requirements for process streamlining were then researched, focusing on specified vendors and their purchase invoices, which helped to identify the critical factors for successful invoice automation. To optimize the flow from invoice receipt to approval for payment, the invoice receiving process was outsourced and the automation functionalities of the new system were utilized in invoice handling. The quality of invoice data and the need for simply structured purchase order (PO) invoices were emphasized in the system testing phase. Hence, consolidated invoices containing references to multiple PO or blanket release numbers should be simplified in order to use automated PO matching. With non-PO invoices, it is important to receive the buyer reference details in an applicable invoice data field so that automation rules can be created to route invoices to a review and approval flow. At the beginning of the project, invoice processing was seen as ineffective both time- and cost-wise, and it required a great deal of manual labor to carry out all tasks. Based on the testing results, it was estimated that over half of the invoices could be automated within a year after system implementation. Processing times could be reduced remarkably, which would result in savings of up to 40% in annual processing costs. Owing to several improvements in the purchase invoice process, business process quality could also be perceived as improved.
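How an automation rate translates into annual cost savings can be illustrated with a simple back-of-the-envelope model; all figures below (invoice volume, unit costs, automation rate) are hypothetical, chosen only to show the arithmetic behind a savings estimate of this kind:

```python
def annual_savings(n_invoices, manual_cost, automated_cost, automation_rate):
    # Annual processing cost before vs. after: the automated share is handled
    # at the lower unit cost, the rest is still processed manually
    before = n_invoices * manual_cost
    after = n_invoices * (automation_rate * automated_cost
                          + (1 - automation_rate) * manual_cost)
    return before - after, 1 - after / before

# Hypothetical figures: 20,000 invoices/year, 5.00 vs. 1.00 EUR per invoice,
# half of the invoices automated
saved, fraction = annual_savings(20_000, 5.0, 1.0, 0.5)  # 40,000 EUR saved, i.e. 40%
```

With these illustrative numbers, automating half of the invoices at one fifth of the manual unit cost yields a 40% reduction in annual processing costs.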
Abstract:
The present study was designed to compare the homeostasis model assessment (HOMA) and the quantitative insulin sensitivity check index (QUICKI) with data from forearm metabolic studies of healthy individuals and of subjects in various pathological states. Fifty-five healthy individuals and 112 patients in various pathological states, including type 2 diabetes mellitus, essential hypertension and others, were studied after an overnight fast and for 3 h after ingestion of 75 g of glucose, by HOMA, QUICKI and the forearm technique to estimate muscle uptake of glucose, combined with indirect calorimetry (oxidative and non-oxidative glucose metabolism). The patients showed increased HOMA (1.88 ± 0.14 vs 1.13 ± 0.10 pmol/l x mmol/l) and insulin/glucose (I/G) index (1,058.9 ± 340.9 vs 518.6 ± 70.7 pmol/l x (mg/100 ml forearm)-1), and decreased QUICKI (0.36 ± 0.004 vs 0.39 ± 0.006 (µU/ml + mg/dl)-1) compared with the healthy individuals. Analysis of the data for the group as a whole (patients and healthy individuals) showed that the estimate of insulin resistance by HOMA was correlated with the data obtained in the forearm metabolic studies (glucose uptake: r = -0.16, P = 0.04; non-oxidative glucose metabolism: r = -0.20, P = 0.01; and I/G index: r = 0.17, P = 0.03). The comparison of QUICKI with the data of the forearm metabolic studies showed a significant correlation between QUICKI and non-oxidative glucose metabolism (r = 0.17, P = 0.03) or the I/G index (r = -0.37, P < 0.0001). HOMA and QUICKI are thus good estimates of insulin sensitivity when compared with data derived from forearm metabolic studies involving direct measurements of insulin action on muscle glucose metabolism.
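For reference, the two indices compared here are computed from fasting glucose and insulin; a minimal sketch using the conventional formulas (HOMA-IR in mmol/l × µU/ml units, QUICKI from base-10 logarithms — note the study itself reports insulin in pmol/l) might look like:

```python
import math

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    # HOMA-IR = (fasting glucose [mmol/l] * fasting insulin [uU/ml]) / 22.5
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def quicki(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    # QUICKI = 1 / (log10(fasting insulin [uU/ml]) + log10(fasting glucose [mg/dl]))
    return 1.0 / (math.log10(fasting_insulin_uU_ml) + math.log10(fasting_glucose_mg_dl))

# Illustrative healthy fasting values: glucose 5.0 mmol/l (90 mg/dl), insulin 6 uU/ml
h = homa_ir(5.0, 6.0)   # ≈ 1.33
q = quicki(90.0, 6.0)   # ≈ 0.366
```

Higher HOMA values indicate greater insulin resistance, whereas higher QUICKI values indicate greater insulin sensitivity, which is why the two indices move in opposite directions in the results above.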
Abstract:
Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare the AT obtained by a graphic visual method for the estimation of ventilatory and metabolic variables (the gold standard) with that obtained by a bi-segmental linear regression mathematical model based on Hinkley's algorithm applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 ± 2.63 years old) and 16 postmenopausal (57 ± 4.79 years old) healthy and sedentary women underwent a continuous ergospirometric incremental test on an electromagnetically braked cycle ergometer, with increases of 10 to 20 W/min until physical exhaustion. The ventilatory variables were recorded breath-by-breath and HR was obtained beat-by-beat in real time. Data were analyzed by the nonparametric Friedman test and the Spearman correlation test, with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake (VO2; mL kg-1 min-1), VO2 (mL/min), VCO2 (mL/min), and minute ventilation (VE; L/min) data observed at the AT level were similar for both methods and both groups studied (P > 0.05). The VO2 (mL kg-1 min-1) data showed a significant correlation (P < 0.05) between the gold standard method and the mathematical model when applied to HR (rs = 0.75) and VCO2 (rs = 0.78) data for the subjects as a whole (N = 29). The proposed mathematical method for the detection of changes in the response patterns of VCO2 and HR was adequate and promising for AT detection in young and middle-aged women, representing a semi-automatic, non-invasive and objective AT measurement.
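The bi-segmental fit can be illustrated with a simple brute-force breakpoint search: fit one least-squares line on each side of every candidate split and keep the split with the lowest total residual sum of squares. This is a generic sketch of the idea, not Hinkley's actual algorithm, and the data below are synthetic:

```python
def sse_line(xs, ys):
    # Residual sum of squares of the best least-squares line through (xs, ys)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def breakpoint_index(xs, ys):
    # Try every interior split (at least 2 points per segment); keep the one
    # minimizing the total SSE of the two linear fits
    best_i, best_sse = None, float("inf")
    for i in range(2, len(xs) - 1):
        sse = sse_line(xs[:i], ys[:i]) + sse_line(xs[i:], ys[i:])
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

# Synthetic HR-like ramp whose response pattern changes at the 6th point (index 5)
xs = list(range(10))
ys = [0, 1, 2, 3, 4, 8, 11, 14, 17, 20]
```

On real breath-by-breath or beat-by-beat data the residuals never vanish, so the detected index marks the split where two lines jointly explain the series best, which is the behaviour exploited for AT detection.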
Abstract:
A new scientometric indicator, the h-index, has recently been proposed (Hirsch JE. Proc Natl Acad Sci 2005; 102: 16569-16572). The index avoids some shortcomings of the total number of citations as a parameter for evaluating scientific performance. Although it has become known only recently, it has gained widespread acceptance. A comparison of the average h-index of members of the Brazilian Academy of Sciences (BAS) and of the National Academy of Sciences of the USA (NAS-USA) was carried out for 10 different areas of science. Although, as expected, the comparison was unfavorable to the members of the BAS, the imbalance differed between areas. Since these two academies represent, to a significant extent, the science of top quality produced in each country, the comparison allows the identification of the areas in Brazil that are closest to the international benchmarks of scientific excellence. The areas of Physics and Mathematics stand out in this context. The heterogeneity of the h-index across the different areas, estimated by the median dispersion of the index, is significantly higher in the BAS than in the NAS-USA. No elements were collected in the present study to provide an explanation for this fact.
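The h-index itself is straightforward to compute: sort a researcher's citation counts in descending order and find the largest h such that the h-th paper has at least h citations. A minimal sketch:

```python
def h_index(citations):
    # Largest h such that at least h papers have at least h citations each
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

example = h_index([10, 8, 5, 4, 3])  # 4: four papers have at least 4 citations each
```

A researcher with five papers cited 10, 8, 5, 4 and 3 times thus has h = 4, since the fifth paper falls short of 5 citations.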
Abstract:
It has been reported that, compared with fetuses with simple increased nuchal translucency, fetal cases with septated cystic hygroma (CH) are more likely to face perinatal handicaps. However, pediatric outcomes and proper prenatal counseling for this anomaly have not yet been clearly defined. We performed this study to determine the pregnancy and pediatric outcomes of fetuses with septated CH. We searched records for cases with septated CH and collected data on structural abnormalities, karyotype analysis, and pregnancy outcomes. Fetuses born with septated CH were also evaluated for their pediatric outcomes. Sixty-nine fetuses with septated CH were enrolled in the study. Results showed that chromosomal abnormalities were present in 28 fetuses (40.6%), and the most common aneuploidy was Turner syndrome (n=14, 20.3%); 16 (23.2%) of the remaining cases, in which aneuploidy was not found, had coexistent structural malformations; 25 (36.2%) cases had a normal karyotype and morphology. The total numbers of live births and of infants with unfavorable neurologic follow-up were 13 (18.8%) and 2 (2.9%), respectively. Septated CH is associated with poor perinatal outcomes; therefore, karyotype analysis and ultrasonographic anomaly screening should be performed as initial steps, and expectant management should be offered to couples with euploid fetuses that have normal morphology.
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense in the interaction between the human and the manipulator, and ideal position control in the interaction between the manipulator and the task environment. The proposed method has the characteristics of a universal technique independent of the actual control algorithm, and it can be applied together with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs, with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method with a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built using the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method offers several improvements over traditional methods.
It can be used without dismantling the sensor from its application, and it requires a smaller number of standard loads for calibration. It is also more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems. The new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
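The directed random search at the core of the controller tuning is a Particle Swarm Optimization variant; the sketch below shows only a plain PSO minimizing a toy objective (without the bacterial-foraging extension used in the thesis), with standard but arbitrary inertia and acceleration coefficients:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    # Plain PSO: each particle's velocity mixes inertia, attraction to its own
    # best position (cognitive term) and to the swarm's best (social term)
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the controller-parameter objective: a 2-D sphere function
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=2)
```

In the thesis's scheme, `f` would be the performance of a candidate parameter set evaluated on the real-time simulator before the parameters are applied to the real process.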
Abstract:
This study was designed to evaluate the rancidity of 18 pet food samples using the Diamed FATS kits and official AOCS methods for the quantification of free fatty acids, peroxide value, and concentrations of malonaldehyde and alkenal in the extracted lipid. Although their expiration dates had passed, the samples were of good quality, showing little oxidative rancidity. The results of this study suggest that the Brazilian pet food market is replete with products of excellent quality due to the competitiveness of this market sector.
Abstract:
There are more than 7000 languages in the world, and many of these have emerged through linguistic divergence. While questions related to the drivers of linguistic diversity have been studied before, including with quantitative methods, there is no consensus as to which factors drive linguistic divergence, and how. In this thesis, I have studied linguistic divergence with a multidisciplinary approach, applying the framework and quantitative methods of evolutionary biology to language data. With quantitative methods, large datasets may be analyzed objectively, while approaches from evolutionary biology make it possible to revisit old questions (related to, for example, the shape of the phylogeny) with new methods, and to adopt new perspectives in order to pose novel questions. My chief focus was on the effects exerted on the speakers of a language by environmental and cultural factors. My approach was thus an ecological one, in the sense that I was interested in how the local environment affects humans and whether this human-environment connection plays a possible role in the divergence process. I studied this question in relation to the Uralic language family and to the dialects of Finnish, thus covering two different levels of divergence. However, as the Uralic languages had not previously been studied using quantitative phylogenetic methods, nor had population genetic methods previously been applied to any dialect data, I first evaluated the applicability of these biological methods to language data. I found the biological methodology to be applicable to language data, as my results were rather similar to traditional views as to both the shape of the Uralic phylogeny and the division of Finnish dialects. I also found environmental conditions, or changes in them, to be plausible inducers of linguistic divergence, whether in the first steps of the divergence process, i.e. dialect divergence, or on a larger scale with the entire language family.
My findings concerning the Finnish dialects led me to conclude that the functional connection between linguistic divergence and environmental conditions may arise through human cultural adaptation to varying environmental conditions. This is also one possible explanation on the scale of the Uralic language family as a whole. The results of the thesis offer insights into several issues in both a local and a global context. First, they shed light on the emergence of the Finnish dialects. If the approach used in the thesis is applied to the dialects of other languages, broader generalizations may be drawn as to the inducers of linguistic divergence. This again brings us closer to understanding the global patterns of linguistic diversity. Second, the quantitative phylogeny of the Uralic languages, with estimated times of language divergences, yields another hypothesis as to the shape and age of the language family tree. In addition, the Uralic languages can now be added to the growing list of language families studied with quantitative methods. This will allow broader inferences as to the global patterns of language evolution, and more language families can be included in constructing the tree of the world’s languages. Studying history through language, however, is only one way to illuminate the human past. Therefore, third, the findings of the thesis, when combined with studies of other language families, and with those in, for example, genetics and archaeology, bring us closer again to an understanding of human history.
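Distance-based methods of the kind borrowed from evolutionary biology typically start from a pairwise distance matrix over taxa; a minimal sketch with hypothetical binary feature codings of three dialects (not the thesis's data) might be:

```python
def hamming(a, b):
    # Proportion of differing positions between two equal-length feature strings
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical binary presence/absence codings of six features in three dialects
dialects = {"A": "110010", "B": "110011", "C": "001101"}
dist = {(i, j): hamming(dialects[i], dialects[j])
        for i in dialects for j in dialects if i < j}
```

Such a matrix is the raw input for clustering or tree-building algorithms: here A and B differ in only one feature out of six, while C is maximally distant from A, so any distance-based method would group A with B.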
Abstract:
Researchers have widely recognised and accepted that firm performance is increasingly related to knowledge-based issues. Two separately developed literature streams, intellectual capital (IC) and knowledge management (KM), have become established as the key discussions of the knowledge-based competitive advantage of the firm. Intellectual capital research has provided evidence on the strategic key intangible resources of the firm, which could be deployed to create competitive advantage. Knowledge management, in turn, has focused on the managerial processes and practices which can be used to leverage IC to create competitive advantage. Despite the extensive literature on both issues, some notable research gaps remain to be closed. Indeed, one major gap within the knowledge management research is the lack of understanding of its influence on firm performance, while IC researchers have articulated a need for more fine-grained conceptual models to better understand the key strategic value-creating resources of the firm. In this dissertation, IC is regarded as the entire intellectual capacity, knowledge and competences of the firm that can be leveraged to achieve sustained competitive advantage. KM practices are defined as the organisational and managerial activities that enable the firm to leverage its IC to create value. The objective of this dissertation is to answer the research question: “What is the relationship between intellectual capital, knowledge management practices and firm performance?” Five publications have addressed the research question using different approaches. The first two publications were systematic literature reviews of the extant empirical IC and KM research, which established the current state of understanding of the relationship between IC, KM practices and firm performance. Publications III and IV were empirical research articles that assessed the conceptual model developed for IC, KM practices and firm performance.
Finally, Publication V was among the first research papers to merge IC and KM disciplines in order to find out which configurations could yield organisational benefits in terms of innovation and market performance outcomes.
Abstract:
An efficient way of synthesizing the deuterium-labelled analogues of three methoxypyrazine compounds, 2-d3-methoxy-3-isopropylpyrazine, 2-d3-methoxy-3-isobutylpyrazine, and 2-d3-methoxy-3-secbutylpyrazine, has been developed. To confirm that the deuterium labels had been incorporated into the expected positions in the synthesized molecules, the relevant characterization by NMR, HRMS and GC/MS analysis was conducted. Another part of this work involved the quantitative determination of methoxypyrazines in water and wines. Solid-phase extraction (SPE) proved to be a suitable means of sample separation and concentration prior to GC/MS analysis. Factors that can influence the SPE recovery of the pyrazines from the water matrix, such as the presence of ethanol, salt, and acid, were investigated. Significantly, in this work a comparatively simple fractional distillation was attempted to replace the conventional steam distillation for pre-concentrating a sample of relatively large volume prior to SPE. Finally, a real wine sample spiked with the relevant isotope-labelled methoxypyrazines was quantitatively analyzed, revealing that the wine with 10 beetles per litre contained 138 ppt of 2-methoxy-3-isopropylpyrazine. Interestingly, we also found that 2-methoxy-3-secbutylpyrazine exhibits an extremely low detection limit in GC/MS analysis compared with the detection limits of the other two methoxypyrazines, 2-methoxy-3-isopropylpyrazine and 2-methoxy-3-isobutylpyrazine.
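Quantification against an isotope-labelled internal standard reduces to a peak-area ratio; the sketch below uses hypothetical peak areas and spike concentration, and the default response factor of 1 assumes the labelled and unlabelled forms behave identically in the instrument:

```python
def quantify_by_internal_standard(analyte_area, standard_area, standard_conc,
                                  response_factor=1.0):
    # Isotope-dilution quantification: the labelled standard co-elutes and is
    # assumed to behave like the analyte, so concentration scales with the
    # analyte-to-standard peak-area ratio
    return (analyte_area / standard_area) * standard_conc / response_factor

# Hypothetical GC/MS peak areas, with 100 ppt of the d3-labelled standard spiked in
conc_ppt = quantify_by_internal_standard(1380.0, 1000.0, 100.0)  # 138.0 ppt
```

Because the deuterated standard suffers the same losses during extraction and distillation as the analyte, the ratio-based result is largely insensitive to recovery variations, which is the main appeal of the isotope-labelling approach described above.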
Abstract:
This dissertation investigates the practice of leadership in collaboratively designed and funded research in a university setting. More specifically, this research explores the meaning of leadership as experienced by researchers who were, or still are, engaged on Social Sciences and Humanities Research Council (SSHRC) funded collaborative research projects in a university setting. This qualitative study (Gay & Airasian, 2003) is situated within a social constructivist paradigm (Kezar, Carducci, & Contreras-McGavin, 2006) and involves an analysis of the responses from 12 researchers who answered 11 questions related to my overarching research question: What is the impact of leadership on university-based collaborative research projects funded by the Social Sciences and Humanities Research Council, based on the experiences of the researchers involved? The data that emerged supported and enhanced the existing literature on leadership and collaborative groups in academia. The findings suggest that the type of leader that appeared to be optimal in this context might be described as a functional collaborative expert.
Abstract:
This work investigates the mathematical details and computational aspects of Metropolis-Hastings reptation quantum Monte Carlo and its variants, in addition to the Bounce method and its variants. The issues of concern include the sensitivity of these algorithms' target densities to the position of the trial electron density along the reptile, the time-reversal symmetry of the propagators, and the length of the reptile. We calculate the ground-state energy and one-electron properties of LiH at its equilibrium geometry for all of these algorithms. The importance sampling is performed with a single-determinant trial function built from a large Slater-type orbital (STO) basis set. The computer codes were written to exploit the efficiencies engineered into modern, high-performance computing software. Using the Bounce method in the calculation of non-energy-related properties, those represented by operators that do not commute with the Hamiltonian, is novel work. We found that the unmodified Bounce method gives a good ground-state energy and very good one-electron properties. We attribute this to the favourable time-reversal symmetry in its target density's Green's functions. Breaking this symmetry gives poorer results. The use of a short reptile in the Bounce method does not alter the quality of the results. This suggests that in future applications one can use a shorter reptile to cut down the computational time dramatically.
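The Metropolis-Hastings acceptance step at the heart of these algorithms can be illustrated in one dimension; the sketch below is a generic random-walk sampler targeting a standard normal, not the reptation move over electron paths used in the thesis:

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step=0.5, seed=1):
    # Random-walk Metropolis: propose x' = x + N(0, step); accept with
    # probability min(1, pi(x')/pi(x)). The symmetric proposal cancels in the ratio.
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        delta = log_density(x_new) - log_density(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = x_new
        samples.append(x)  # on rejection the current state is repeated
    return samples

# Standard normal (up to a constant) as a toy stand-in for the target density
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

In reptation Monte Carlo the same accept/reject logic is applied to whole-path (reptile) moves; the thesis's questions about time-reversal symmetry concern the propagators entering the analogue of `delta` above.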