940 results for practical epistemology analysis
Abstract:
Laser additive manufacturing (LAM), also known as 3D printing, is a powder bed fusion (PBF) type of additive manufacturing (AM) technology used to manufacture metal parts layer by layer with the aid of a laser beam. The technology has developed from building mere prototype parts to functional parts because of its design flexibility and the possibility of manufacturing tailored and optimised components in terms of performance and strength-to-weight ratio. Studying energy and raw material consumption in LAM is essential, as it may facilitate the adoption and use of the technique in manufacturing industries. The objective of this thesis was to determine the environmental and economic impact of LAM and to conduct a life cycle inventory (LCI) of CNC machining and LAM in terms of energy and raw material consumption at the production phase. The literature overview covers sustainability issues in manufacturing industries, with a focus on environmental and economic aspects, as well as life cycle assessment and its applicability in the manufacturing industry. The UPLCI-CO2PE! Initiative was identified as the most widely applied existing methodology for conducting LCI analysis of discrete manufacturing processes such as LAM. Most of the reviewed literature focused on PBF of polymeric materials, and only a few studies considered metallic materials. The studies that included metallic materials only measured input and output energy or materials of the process and compared different AM systems without comparison to any competing process, and none of them considered the effect of process variation when building metallic parts with LAM. In this thesis, experimental tests were carried out to produce dissimilar samples by CNC machining and LAM. The test samples were designed to include part complexity and weight reduction. A PUMA 2500Y lathe was used for the CNC machining, whereas a modified research machine representing the EOSINT M-series was used for the LAM. The raw materials were stainless steel 316L bar (CNC-machined parts) and stainless steel 316L powder (LAM-built parts). An analysis of the power, time and energy consumed in each manufacturing process at the production phase showed that LAM uses more energy than CNC machining; the high energy consumption resulted from the long production time. The energy consumption profile of CNC machining fluctuated between high and low power ranges, whereas LAM energy usage within a given mode (standby, heating, process, sawing) remained relatively constant throughout production. CNC machining was limited in terms of manufacturing freedom, as not all of the designed samples could be machined, and the one that could be produced required a large amount of material to be removed as waste. The planning phase in LAM was shorter than in CNC machining, as the latter required many preparation steps. Specific energy consumption (SEC) in LAM was estimated from the experimental results and assumed platform utilisation; the estimates showed that SEC could be reduced by placing more parts in one build than in the empirical tests of this thesis (six parts).
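As an illustration of the platform-utilisation argument above, the sketch below computes specific energy consumption per kilogram of built material for different numbers of parts per build. All figures are hypothetical placeholders, not measurements from the thesis; the point is only that the fixed overheads (standby, heating, sawing) are spread over more parts while the per-part exposure energy stays roughly constant.

```python
# Illustrative sketch of the SEC / platform-utilisation argument.
# All figures are hypothetical placeholders, not measurements from the thesis.

def sec_per_kg(parts_per_build,
               fixed_energy_kwh=30.0,     # standby + heating + sawing overhead per build (assumed)
               exposure_energy_kwh=4.0,   # laser exposure energy per part (assumed)
               part_mass_kg=0.25):        # mass of one part (assumed)
    """Specific energy consumption (kWh per kg of built material)."""
    total_energy = fixed_energy_kwh + exposure_energy_kwh * parts_per_build
    total_mass = part_mass_kg * parts_per_build
    return total_energy / total_mass

for n in (1, 6, 12, 24):
    print(f"{n:2d} parts per build -> SEC = {sec_per_kg(n):6.1f} kWh/kg")
```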
Abstract:
cDNA microarray is an innovative technology that facilitates the analysis of the expression of thousands of genes simultaneously. The utilization of this methodology, which is rapidly evolving, requires a combination of expertise from the biological, mathematical and statistical sciences. In this review, we attempt to provide an overview of the principles of cDNA microarray technology, the practical concerns of the analytical processing of the data obtained, the correlation of this methodology with other data analysis methods such as immunohistochemistry in tissue microarrays, and the cDNA microarray application in distinct areas of the basic and clinical sciences.
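As a small illustration of the "practical concerns of the analytical processing" mentioned above, the sketch below performs one common first step in two-channel cDNA microarray analysis: per-gene log2 ratios of the two fluorescence channels followed by a global median normalisation. The intensity values are invented for illustration and are not tied to any dataset discussed in the review.

```python
# Minimal sketch of a common first step in two-channel cDNA microarray analysis:
# per-gene log2 ratios of the two fluorescence channels, then a global median
# normalisation. Intensity values below are made up for illustration.
import numpy as np

cy5 = np.array([1200.0, 340.0, 5600.0, 980.0])   # test-sample channel (hypothetical)
cy3 = np.array([1100.0, 900.0, 5400.0, 250.0])   # reference channel (hypothetical)

log_ratios = np.log2(cy5 / cy3)
normalised = log_ratios - np.median(log_ratios)   # centre the distribution at zero

print(np.round(normalised, 2))
```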
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a preventive tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches currently used to assess patents, which arguably fall into four categories, are based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and its inability to account for changing risk and managerial flexibility. This dissertation attempts to overcome these barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the category of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness in that subjective uncertainties are closely related to the behavioural aspects of decision making and usually arise whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Once their nature is clarified, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees face a large variety of courses of action regarding how their patent applications and granted patents can be managed. Since they have the right to manage their projects actively, this flexibility has value and must therefore be properly accounted for. Accordingly, the dissertation explicitly identifies the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and discusses how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
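For readers unfamiliar with the pay-off method, the following numerical sketch shows the general idea of a fuzzy real option valuation with a triangular fuzzy NPV: the option value is the share of the pay-off distribution lying above zero multiplied by a mean of its positive side. The scenario values are invented, and a simple membership-weighted average stands in for the exact possibilistic mean used in the literature; this is an illustrative approximation, not the dissertation's model.

```python
# Numerical sketch of a fuzzy pay-off valuation with a triangular fuzzy NPV.
# Scenario values are hypothetical (pessimistic, best guess, optimistic).
import numpy as np

a, b, c = -2.0, 3.0, 10.0                   # assumed NPV scenarios (currency units)
x = np.linspace(a, c, 100001)
dx = x[1] - x[0]

# Triangular membership function of the fuzzy NPV.
mu = np.where(x <= b, (x - a) / (b - a), (c - x) / (c - b))

pos = x > 0
area_total = mu.sum() * dx                  # total area under the membership function
area_pos = (mu * pos).sum() * dx            # area of the positive side

# Membership-weighted average of the positive NPVs
# (used here as a simple stand-in for the possibilistic mean).
mean_pos = (mu * x * pos).sum() * dx / area_pos

real_option_value = (area_pos / area_total) * mean_pos
print(round(real_option_value, 3))
```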
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using the s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated costs of algorithms A and B were US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples, whereas algorithm B provides early information about the presence of viremia.
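The relative savings quoted above can be reproduced directly from the reported cost totals; a quick check:

```python
# Back-of-the-envelope check of the cost comparison reported in the abstract:
# relative savings of algorithms A and B against the conventional algorithm C.
costs = {"A": 21_299.39, "B": 32_397.40, "C": 37_673.79}   # US$ totals from the abstract

for alg in ("A", "B"):
    saving = (1 - costs[alg] / costs["C"]) * 100
    print(f"Algorithm {alg}: {saving:.1f}% cheaper than C")
# Prints roughly 43.5% and 14.0%, matching the figures quoted above.
```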
Abstract:
The marine bioprocessing industry offers great potential to utilize byproducts for fish meal replacement in aquafeeds. Jumbo squid is an important fishery commodity in Mexico, but only the mantle is marketed. Head, fins, guts and tentacles are discarded in spite of being protein-rich byproducts. This study evaluated the use of two jumbo squid byproduct hydrolysates obtained by acid-enzymatic hydrolysis (AEH) and by autohydrolysis (AH) as ingredients in practical diets for shrimp. The hydrolysates were included at levels of 2.5 and 5.0% of the diet dry weight in four practical diets, including a control diet without hydrolysate. Shrimp growth and survival were not significantly affected by the dietary treatments. Postharvest quality of abdominal muscle was evaluated in terms of proximate composition and sensory evaluation. Significantly higher crude protein was observed in the muscle of shrimp fed the highest hydrolysate levels, AH 5% (204.8 g kg⁻¹) or AEH 5% (201.3 g kg⁻¹). Sensory analysis of cooked muscle showed significant differences for all variables evaluated: color, odor, flavor, and firmness. It was concluded that jumbo squid byproducts can be successfully processed by autohydrolysis or acid-enzymatic hydrolysis, and that up to 5.0% of the hydrolysates can be incorporated into shrimp diets without affecting growth or survival.
Abstract:
Gravitational phase separation is a common unit operation found in most large-scale chemical processes. The need for phase separation can arise, for example, from product purification or the protection of downstream equipment. In gravitational phase separation, the phases separate without the application of an external force. This is achieved in vessels where the flow velocity is lowered substantially compared to pipe flow: if the velocity is low enough, the denser phase settles towards the bottom of the vessel while the lighter phase rises. To find optimal configurations for gravitational phase separator vessels, several different geometrical and internal design features were evaluated based on simulations using the OpenFOAM computational fluid dynamics (CFD) software. The studied features included inlet distributors, vessel dimensions, demister configurations and gas phase outlet configurations. Simulations were conducted as single-phase steady-state calculations; for comparison, additional simulations were performed as dynamic single- and two-phase calculations. The steady-state single-phase calculations provided indications of preferred configurations for most of the features mentioned above. The results of the dynamic simulations supported the use of the computationally faster steady-state model as a practical engineering tool. However, the two-phase model gives more realistic results, especially for flows in which no single phase determines the flow characteristics.
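As a complementary hand calculation (not part of the thesis, which relies on OpenFOAM CFD), separator sizing is often sanity-checked against a Stokes-law terminal settling velocity: the velocity inside the vessel should stay well below the velocity at which droplets of the denser phase settle. The property values below are illustrative assumptions.

```python
# Hand-calculation sketch (not from the thesis): Stokes-law terminal settling
# velocity of a droplet of the denser phase. All property values are assumptions.
g = 9.81                 # gravitational acceleration, m/s^2
d = 150e-6               # droplet diameter, m (assumed)
rho_d = 1000.0           # dense-phase density, kg/m^3 (assumed)
rho_c = 700.0            # continuous-phase density, kg/m^3 (assumed)
mu_c = 5e-4              # continuous-phase viscosity, Pa.s (assumed)

v_settle = g * d**2 * (rho_d - rho_c) / (18 * mu_c)   # Stokes' law
print(f"terminal settling velocity ~ {v_settle * 1000:.1f} mm/s")
# A separator is usually sized so the vessel velocity stays well below this value.
```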
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset and fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground but still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data are employed in an empirical study that tries to reveal whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation uses a vector autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) are divided into two parts: the in-sample data are used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation towards riskier assets as the market turns bullish, without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation still depends heavily on the quality of the input estimates.
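A minimal sketch of the Black–Litterman posterior return estimate referred to above, with two hypothetical asset classes and a single view (the kind of view a VAR forecast could supply); the numbers are invented and are not the fixed income data used in the thesis.

```python
# Minimal sketch of the Black-Litterman posterior return estimate,
# E[R] = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 Pi + P' Omega^-1 Q].
# All numbers are invented for illustration.
import numpy as np

Sigma = np.array([[0.0400, 0.0060],
                  [0.0060, 0.0100]])      # covariance of asset returns (assumed)
Pi = np.array([0.05, 0.02])               # equilibrium (benchmark-implied) returns (assumed)
tau = 0.05

P = np.array([[1.0, -1.0]])               # one view: asset 1 outperforms asset 2 ...
Q = np.array([0.04])                      # ... by 4% (e.g. produced by a VAR forecast)
Omega = np.array([[0.0010]])              # confidence in the view (assumed)

A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ Pi + P.T @ np.linalg.inv(Omega) @ Q
posterior_returns = np.linalg.solve(A, b)
print(np.round(posterior_returns, 4))
```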
Abstract:
Biorefineries are a promising field of study that offers many opportunities for a successful business unit with respect to sustainability. The thesis focuses on the following key objective: identification of a competitive biorefinery production process in the small and medium segments of the chemical and forest industries in Finland. The scope of the research covers selected biorefinery operations in Finland and the use of hemicellulose as a raw material. Identifying the types of biorefineries and their important technical and process characteristics provides an advantage in a company's competitive analysis. The study takes a practical approach to the scientific methods of market and company research with the help of the Quality Function Deployment (QFD) and House of Quality tools. The thesis's findings provide an expert-based House of Quality application, identify the correlations among crucial biorefinery technical and design characteristics, and show their effect on the competitive behaviour of a company. The theoretical background helps to build a picture of the problematic issues within the field and suggests possible scientific solutions. The analysis of the biorefinery market and company operations gives the research its practical orientation. The results of the research can be used in further investigations in the field and may be applied in a company's management analytics and strategy.
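The core House of Quality calculation used in this kind of analysis can be sketched as a weighted scoring of technical characteristics against requirements; the requirement names, weights and relationship strengths below are illustrative placeholders, not the expert data gathered in the thesis.

```python
# Minimal sketch of the core House of Quality calculation: technical
# characteristics are ranked by the weighted sum of their relationships with
# the requirements. All names and values are illustrative placeholders.
import numpy as np

requirement_weights = np.array([5, 3, 4])          # importance of each requirement (assumed)

# Relationship matrix (rows: requirements, columns: technical characteristics),
# using the conventional 9/3/1/0 strength scale.
relationships = np.array([
    [9, 3, 0],    # e.g. raw-material availability
    [1, 9, 3],    # e.g. process yield
    [3, 3, 9],    # e.g. capital cost
])

technical_importance = requirement_weights @ relationships
for name, score in zip(["characteristic A", "characteristic B", "characteristic C"],
                       technical_importance):
    print(name, score)
```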
Abstract:
This study aims to extend prior knowledge on the learning and developmental outcomes of David Kolb's experiential learning cycle through an analysis of its practical realization at Team Academy. The study is based on the constructivist approach to learning and considers, among others, the concepts of autonomy support, Nonaka and Takeuchi's knowledge creation model, Luft and Ingham's Johari Window and Deci and Ryan's self-determination theory. For the investigation, in-depth interviews were carried out with participants of Team Academy, both learners and coaches. Taking the interview results and the theories described above into consideration, this study concludes that experiential learning results not only in effective learning, but also in remarkable soft-skill acquisition, self-development and an increase in motivation with an internal locus of causality. Real-life projects allow the learners to experience real challenges. Through practical activities and teamwork they also have the opportunity to discover their personal strengths, weaknesses and unique capacities.
Abstract:
The definition of knowledge as justified true belief is the best we presently have. However, the canonical tripartite analysis of knowledge does not do justice to it due to a Platonic conception of a priori truth that puts the cart before the horse. Within a pragmatic approach, I argue that by doing away with a priori truth, namely by submitting truth to justification, and by accordingly altering the canonical analysis of knowledge, this is a fruitful definition. So fruitful indeed that it renders the Gettier counterexamples vacuous, allowing positive work in epistemology and related disciplines.
Abstract:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) and a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. Reliability between the pretest and the posttest was high; that is, students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses which were scored independently by two raters (myself and my faculty advisor). Results indicated that the form of computer-assisted instruction used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance: students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa. However, no significant difference in problem-solving improvement was observed between the laboratory-based instruction group and the CAI group for students varying in level of logical reasoning ability. Non-significant trends were noted in results obtained from students of high logical reasoning ability, but these require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the database for open coding analysis. This analysis revealed six main themes which best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve desired thesis outcomes.
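The reliability and correlation statistics described above are plain Pearson correlations; a minimal sketch with fabricated score vectors (not the study's data):

```python
# Sketch of the two statistics reported above: pretest/posttest correlation
# and interrater correlation. The score vectors are fabricated examples.
import numpy as np

pretest  = np.array([12, 18, 9, 22, 15, 11, 20, 17])
posttest = np.array([14, 21, 10, 25, 16, 13, 22, 18])
print("pretest/posttest r = %.2f" % np.corrcoef(pretest, posttest)[0, 1])

rater1 = np.array([3, 4, 2, 5, 4, 3])
rater2 = np.array([3, 4, 3, 5, 4, 2])
print("interrater r = %.2f" % np.corrcoef(rater1, rater2)[0, 1])
```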
Abstract:
This qualitative study addresses the question of how teachers negotiate meaning of new curriculum to better understand how curriculum is transformed from a theoretical construct to a practical one. Through interviews with 5 teachers, their experiences were examined as they negotiated the process of implementing new curriculum. Three theoretical constructs provided the entry point into the study: epistemology, teacher knowledge, and teacher learning. Using inductive analysis, 4 points or attributes of negotiation emerged: reference, growth, autonomy, and reconciliation. These attributes provided a theoretical framework from which a constructivist conceptualization of teacher learning and teacher knowledge could serve to understand the process of how teachers negotiate meaning of curriculum. Studied and theorized in this way, teacher knowledge and teacher learning are seen to be inextricably linked in a relationship that is dynamically changed by forces of stability and instability. Theorizing the negotiation of meaning from a constructivist epistemology also strengthened the assertion that negotiating meaning is a unique structural process, and that knowledge construction is therefore unique to each knower and subject to experience in a particular time and place. The implications for such a theory are, first, that it questions the legitimacy of privatized teacher practice and, second, that it calls for a renewed conceptualization of collegial network and relationship to strengthen the capacity for negotiating meaning of curricular initiatives. Understanding the relationship of curricular theory and negotiating meaning also has implications for curriculum development. In particular, the study highlights the necessity of professional discretion and the generative process of negotiating meaning.
Abstract:
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues in the empirical application of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most least-squares operations of order O(T²) for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
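A toy sketch of the dynamic-programming idea behind the break-date estimator: evaluate the sum of squared residuals (SSR) of a segment fit on candidate segments and combine segments recursively so that the total SSR is minimised. This simplified version fits only a constant in each segment and omits the minimum-segment-length trimming and the SSR precomputation that keep the actual algorithm at O(T²) least-squares operations; it is an illustration, not the authors' GAUSS implementation.

```python
# Toy dynamic-programming break-date estimator (mean-shift model only).
import numpy as np

def ssr(y, i, j):
    """SSR of fitting a constant on observations i..j (inclusive)."""
    seg = y[i:j + 1]
    return float(np.sum((seg - seg.mean()) ** 2))

def break_dates(y, m):
    """Return the m break dates (last index of each of the first m segments)."""
    T = len(y)
    # cost[k][t] = minimal SSR of splitting y[0..t] into k+1 segments
    cost = [[np.inf] * T for _ in range(m + 1)]
    last = [[-1] * T for _ in range(m + 1)]
    for t in range(T):
        cost[0][t] = ssr(y, 0, t)
    for k in range(1, m + 1):
        for t in range(k, T):
            for b in range(k - 1, t):            # b = index of the last break before t
                c = cost[k - 1][b] + ssr(y, b + 1, t)
                if c < cost[k][t]:
                    cost[k][t], last[k][t] = c, b
    # Backtrack the optimal break dates.
    breaks, t = [], T - 1
    for k in range(m, 0, -1):
        b = last[k][t]
        breaks.append(b)
        t = b
    return sorted(breaks)

y = np.r_[np.random.normal(0, 1, 50), np.random.normal(3, 1, 50)]
print(break_dates(y, 1))   # should land near observation 49
```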
Abstract:
Architecture in the strict sense, which refers to construction, is not independent of the mental determinations, images and aesthetic values brought as references by the various fields concerned with the problem of meaning. It is, for that reason, an object of interpretation. What is commonly called "architectural meaning" is a vast universe studded with hypothetical constructions. For our purposes, the task is not only to mould architectural meaning within a specific frame and specific matters of reference, but also to examine closely the relation of this question to the human attitude of perception. In the study of architectural meaning, one therefore cannot detach oneself from the problem of perception. Fundamentally, our work will show their interaction, the means by which it is brought into play, and what is at stake according to the theoretical practices that govern it. In raising the question of the origin of the act of perception, which is neither a simple act of seeing nor a contemplative act but a form of active interaction with the architectural form, or with the art form in general, we find in the writings of the historian Christian Norberg-Schulz two types of work, and thus two types of answer, whose mutually antinomic character we can underline from the outset. In his first book, Intentions in Architecture (1962), known in its French version as Système logique de l'architecture (1974, hereafter SLA), he treats architectural expression and modes of life in society as a continuum, thereby defending a cultural approach to the question at stake: architectural meaning and its temporalities. SLA designates and represents a theoretical system influenced, in many respects, by the epistemological work of Jean Piaget and by the contributions of semiotics to the development of the study of architectural meaning. The second type of answer Norberg-Schulz gives to the question of the origin of the act of perception, based on the reflections of the philosopher Martin Heidegger, belongs to a field of study that drifts away from the claim of a social and cultural foundation of architectural language. More precisely, it ties the study of meaning to the study of being. Recognizing the primacy, indeed the pre-eminence, of an ontological inquiry that sustains questions about being as being, he would, from his book Existence, Space and Architecture (1971) onward, regularly raise questions about the universal and historical foundation of architectural expression. To these two theoretical movements characteristic of his writings corresponds the movement taken by the construction of our thesis, which we divide into two parts. The first part is devoted to the study of SLA, with the objective of detecting the ambiguities surrounding the framework of its elaboration and of showing the kinds of legacy its author leaves to architectural theory. Our study will show the controversial aspect of this book, linked to the influence that pragmatics exerts on the study of meaning. This first part presents the theoretical models the book discusses and relates them to the different scales it proposes for the study of architectural language, notably the social scale. The latter involves the study of the functionality of architecture and of the means of research into the typology of architectural form and its schematization. Our critical approach to this work adopts the standpoint of Manfredo Tafuri's historical research. The second part of our thesis deals with the foundations of Norberg-Schulz's interest in sharing with Heidegger the question of Being, foundations that help ground a form of existential investigation into architectural meaning and the problem of perception. Shedding light on these foundations requires, however, showing how the question of Being is rooted in the essence of hermeneutic practice in Heidegger, but also in H. G. Gadamer, whom Norberg-Schulz also directly invokes, and consequently revealing the established primacy of the image as the field through which the question of Being is installed within architectural research. His subsequent research on transcultural aesthetic values thus reduced the scales for studying meaning to the single scale of the study of Being. By taking this direction, Norberg-Schulz ultimately constitutes, following Heidegger, an approach whose task is to address "dwelling" and "building" as solutions to the existential problem of Being. Our study reveals, however, an interaction between the question of Being and the critique of modern technology, by which architecture is directly concerned, centred on its most striking attraction: the reproducibility of forms. Between Norberg-Schulz's writings and Heidegger's specific analyses of the problem of art, there is a context of rupture with the language of theory that we must bring out and relate back to the requirements of hermeneutic work, an approach we have ourselves adopted. Our method is therefore essentially qualitative. It draws in particular on methods of interpretation, hence also our recourse to a corpus made up of the works of Gilles Deleuze and Jacques Derrida, as well as other works associated with this type of analysis. Our research nevertheless remains attentive to epistemological questions concerning the relation between the architectural discipline and the sciences that lend themselves to the study of architectural language. Our thesis offers not only an in-depth understanding of Norberg-Schulz's reflections, but also a demonstration of the incompatibility of Heidegger's phenomenology with the sciences of language, notably semiotics.
Abstract:
We study the workings of the factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
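A minimal sketch of the principal-components estimator of the factor space on a simulated panel (the data are not the DSGE-generated series used in the paper): standardise the series, take the eigen-decomposition of the sample covariance, and keep the leading eigenvectors.

```python
# Minimal sketch of the principal-components estimator of a factor model,
# X = F * Lambda' + e, as used in diffusion-index forecasting. The panel
# below is simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 50, 2
F = rng.normal(size=(T, r))                     # true factors
Lam = rng.normal(size=(N, r))                   # loadings
X = F @ Lam.T + 0.5 * rng.normal(size=(T, N))   # observed panel

Z = (X - X.mean(0)) / X.std(0)                  # standardise each series
eigval, eigvec = np.linalg.eigh(Z.T @ Z / T)    # eigen-decomposition of the covariance
F_hat = Z @ eigvec[:, -r:]                      # estimated factor space (up to rotation)
print(F_hat.shape)
```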