926 results for Odyssey Stand Alone
Abstract:
During the last decade, conservation banking mechanisms have emerged in the environmental discourse as new market instruments to promote biodiversity conservation. Compensation was already provided for in environmental law in many countries, as the last step of the mitigation hierarchy. The institutional arrangements developed in this context have been redefined and reshaped as market-based instruments (MBIs). As such, they are discursively disentangled from the complex legal-economic nexus they are part of. Monetary transactions are given prominence and tend to be presented as stand-alone agreements, whereas they take place in the context of prescriptive regulations. The pro-market narrative featuring conservation banking systems as market-like arrangements, as well as their denunciation as instances of nature commodification, tends to obscure their actual characteristics. The purpose of this paper is to describe the latter, adopting an explicitly analytical stance on these complex institutional arrangements and their performative dimensions. Beyond the discourse supporting them, and notwithstanding the diversity of national policies and regulatory frameworks for compensation, the constitutive force of these mechanisms probably lies in their ability to redefine control, power and the distribution of costs, and in their impacts in terms of land use rather than in their efficiency.
Abstract:
Reliance on private partners to help provide infrastructure investment and service delivery is increasing in the United States. Numerous studies have examined the determinants of the degree of private participation in infrastructure projects as governed by contract type. We depart from this simple public/private dichotomy by examining a rich set of contractual arrangements. We utilize both municipal and state-level data on 472 projects of various types completed between 1985 and 2008. Our estimates indicate that infrastructure characteristics, particularly those that reflect stand-alone versus network characteristics, are key factors influencing the extent of private participation. Fiscal variables, such as a jurisdiction’s relative debt level, and basic controls, such as population and locality of government, increase the degree of private participation, while a greater tax burden reduces private participation.
Abstract:
Coronary artery disease (CAD) is a chronic process that evolves over decades and may culminate in myocardial infarction (MI). While invasive coronary angiography (ICA) is still considered the gold standard of CAD imaging, non-invasive assessment of both the vascular anatomy and myocardial perfusion has become an intriguing alternative. In particular, computed tomography (CT) and positron emission tomography (PET) form an attractive combination for such studies. Increased radiation dose is, however, a concern. Our aim in the current thesis was to test novel CT and PET techniques, alone and in a hybrid setting, in the detection and assessment of CAD in clinical patients. Along with diagnostic accuracy, methods for reducing the radiation dose were an important target. The study investigating the coronary arteries of patients with atrial fibrillation (AF) showed that CAD may be an important etiology of AF, because a high prevalence of CAD was demonstrated among AF patients. In patients with suspected CAD, we demonstrated that a sequential, prospectively ECG-triggered CT technique was applicable to nearly 9 out of 10 clinical patients, and the radiation dose was over 60% lower than with spiral CT. To detect the functional significance of obstructive CAD, novel software for perfusion quantification, Carimas™, showed high reproducibility with ¹⁵O-labelled water in PET, supporting its feasibility and good clinical accuracy. In a larger cohort of 107 patients with a moderate (30-70%) pre-test probability of CAD, hybrid PET/CT was shown to be a powerful diagnostic method in the assessment of CAD, with diagnostic accuracy comparable to that of invasive angiography and fractional flow reserve (FFR) measurements. A hybrid study may be performed with a reasonable radiation dose in the vast majority of cases, improving the performance of stand-alone PET and CT angiography, particularly when absolute quantification of perfusion is employed. These results can be applied in clinical practice and will be useful for the daily clinical diagnosis of CAD.
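For context, absolute perfusion quantification with ¹⁵O-water PET is commonly based on a single-tissue compartment model of the general form below (a generic textbook formulation shown only for illustration; the abstract does not state which kinetic model Carimas implements):

```latex
\frac{dC_T(t)}{dt} = f\,C_A(t) - \frac{f}{p}\,C_T(t)
```

where C_T is the myocardial tissue activity concentration, C_A the arterial input function, f the myocardial blood flow (perfusion) in mL/g/min, and p the water partition coefficient. Fitting such a model to dynamic PET data yields f in absolute units, which is what allows the hybrid approach described above to grade the functional significance of a stenosis.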
Abstract:
Bioactive glasses (BGs) form a group of synthetic, surface-active, composition-dependent, silica-based biomaterials with osteoconductive, osteopromotive, and even angiogenic, as well as antibacterial, properties. A national interdisciplinary research group, within the Combio Technology Program (2003–2007), developed a porous load-bearing composite for surgical applications made of BG 1–98 and polymer fibers. The pre-clinical part of this thesis focused on the in vitro and in vivo testing of the composite materials in a rabbit femur and spinal posterolateral fusion model. The femur model failed to demonstrate the previously seen positive effect of BG 1–98 on osteogenesis, probably due to the changed resorption properties of BG in the form of fibers. The spine study was terminated early due to adverse events. In vitro cultures showed growth inhibition of human mesenchymal stem cells next to BG 1–98 fibers and radical pH changes. A prospective, long-term follow-up study was conducted on BG S53P4 and autogenous bone (AB) used as bone graft substitutes for instrumented posterolateral spondylodesis in the treatment of degenerative spondylolisthesis (n=17) and unstable burst fractures (n=10) during 1996–1998. The operative outcome was evaluated from X-rays and CT scans, and a clinical examination was also performed. On the BG side, solid fusion was observed in the CT scans of 12 patients and partial fusion was found in 5 patients, resulting in a total fusion rate of 88% across all fusion sites (n=41) at levels L4/5 and L5/S1 in the spondylolisthesis group. In the spine fracture group, solid fusion was observed in five patients and partial fusion was found in five, resulting in a total fusion rate of 71% across all fusion sites (n=21). The pre-clinical results suggest that under certain conditions the physical form of BG can be more critical than its chemical composition when a clinical application is designed. The first long-term clinical results indicate that the use of BG S53P4 as a bone graft material in instrumented posterolateral spondylodesis is a safe procedure associated with a very low complication rate. However, BG S53P4 used as a stand-alone bone substitute cannot be regarded as being as efficient as AB in promoting solid fusion.
Abstract:
This thesis focuses on integration in project business, i.e. how project-based companies organize their product and process structures when they deliver industrial solutions to their customers. The customers that invest in these solutions run their businesses in different geographical, political and economic environments, which should be acknowledged by the supplier when providing solutions comprising larger and more complex scopes than previously supplied to these customers. This means that the suppliers are increasing their supply range by taking over some of the activities in the value chain that have traditionally been handled by the customer. In order to provide functioning solutions, involving more engineering hours, technical equipment and a wider project network, a change in mindset is needed so that the supplier can carry out these new approaches and take on the responsibility they bring. For the supplier it is important to be able to integrate technical products, systems and services, but the supplier also needs the capabilities to integrate the cross-functional organizations and departments in the project network, the knowledge and information between and within these organizations and departments, and inputs from the customer into the product and process structures during the lifecycle of the project under development. Hence, the main objective of this thesis is to explore the challenges of integration that industrial projects meet and, based on that, to suggest a concept for managing integration in project business by making use of integration mechanisms. Integration is considered the essential process for accomplishing an industrial project, whereas the accomplishment of the industrial project is considered to be the result of the integration. The thesis consists of an extended summary and four papers that are based on three studies in which integration mechanisms for value creation in industrial project networks and the management of integration in project business have been explored. The research is based on an inductive approach in which, in particular, the design, commissioning and operations functions of industrial projects have been studied, addressing entire project life-cycles. The studies have been conducted in the shipbuilding and power generation industries, where the scopes of supply consist of stand-alone equipment, equipment and engineering, and turnkey solutions. These industrial solutions include demanding efforts in engineering and organization. Addressing the calls for more studies on the evolving value chains of integrated solutions, mechanisms for inter- and intra-organizational integration and subsequent value creation in project networks have been explored. The research results in thirteen integration mechanisms, and a typology for integration is proposed. Managing integration consists of integrating the project network (the supplier and the sub-suppliers) and the customer (the customer’s business purpose, operations environment and the end-user) into the project by making use of integration mechanisms. The findings bring new insight into research on industrial project business by proposing the integration of technology and engineering related elements with elements related to customer-oriented business performance in contemporary project environments.
Thirteen mechanisms for combining products and the processes needed to deliver projects are described and categorized according to the impact that they have on the management of knowledge and information. These mechanisms directly relate to the performance of the supplier, and consequently to the functioning of the solution that the project provides. This thesis offers ways to promote integration of knowledge and information during the lifecycle of industrial projects, enhancing the development towards innovative solutions in project business.
Abstract:
The target of the thesis is to improve product profitability control in continuous IT services. Accurate product cost accounting and correctly allocated revenues are a necessity for good product profitability control. The focus of the study is on costs and revenues that are not traced directly to services. The thesis concentrates on revenue allocations, as revenue allocation methods have not been used in the case company before. In order to achieve the target, revenue allocation methods that improve product profitability accounting and control are presented. The research methods used in the thesis are a literature review and an empirical case study. The research approach is constructive. The theoretical part is composed of literature and articles that create a base for the empirical part. Internal interviews describe the current situation in the company, and development actions are planned based on them. The case study nature of the research is reflected mostly in its limitations, as the research concerns only one department of the company. Problems in revenue tracing are caused by customer-specific services and a lack of service definitions, because of which revenues are not traced correctly. Methods to allocate revenues are presented in the thesis, and the stand-alone revenue allocation method is the most suitable one because it is fair and can be modified. An approximate product profitability analysis is carried out in the thesis, and its results indicate that some services are profitable and some unprofitable.
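As an aside, the stand-alone method referred to above is typically implemented by allocating a bundled contract price to its components in proportion to their stand-alone selling prices. A minimal sketch of that proportional allocation follows; the service names and prices are hypothetical and only illustrate the arithmetic, not the case company's actual pricing.

```python
def allocate_revenue(bundle_price, standalone_prices):
    """Allocate a bundled contract price to services in proportion
    to their stand-alone selling prices."""
    total = sum(standalone_prices.values())
    return {service: bundle_price * price / total
            for service, price in standalone_prices.items()}

# Hypothetical example: a 10 000 EUR contract covering three IT services
print(allocate_revenue(10_000, {"hosting": 6_000, "monitoring": 3_000, "support": 3_000}))
# -> {'hosting': 5000.0, 'monitoring': 2500.0, 'support': 2500.0}
```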
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied for improving the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as the optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design for real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable for high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works better the higher the column efficiency and the lower the purity constraints are.
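For reference, the competitive Langmuir isotherm mentioned above is commonly written for a binary mixture as follows (generic textbook notation shown for illustration; the thesis's own symbols may differ):

```latex
q_i = \frac{N_i b_i c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2,
```

where q_i and c_i are the adsorbed-phase and fluid-phase concentrations of component i, N_i is the saturation capacity, and b_i the adsorption equilibrium constant. The coupling of the two components through the shared denominator is what makes the ideal-model solution, and hence the closed-form design equations, non-trivial.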
Abstract:
We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns that are controlled by a microcontroller. The proposed system goes beyond the traditional motorized running wheel by allowing rats to perform high-intensity training and by enabling comparisons with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was fitted to the periphery of the wheel to permit automatic acceleration training, which allowed the rats to run consistently at high speed (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, middle cerebral artery occlusion, the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride staining were performed to evaluate the effectiveness of the proposed platform. The improvement in motor function, mNSS, and infarct volume was significantly greater in the FNESRW group than in the control group (P<0.05) and similar to that in the treadmill group. The experimental data demonstrate that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection in the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence from subjective human adjustment, and ease of use.
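The abstract specifies automatic acceleration up to a sustained 30 m/min for 1 h but not the exact training pattern, so the sketch below only illustrates the kind of speed schedule such a microcontroller might follow; the ramp duration and update interval are hypothetical.

```python
def ramp_schedule(target_speed=30.0, ramp_minutes=5, total_minutes=60, step_s=30):
    """Return (time in minutes, wheel speed in m/min) set-points: a linear
    ramp up to the target speed, then a constant speed for the remainder."""
    schedule = []
    for t in range(0, total_minutes * 60, step_s):
        minutes = t / 60
        speed = min(target_speed, target_speed * minutes / ramp_minutes)
        schedule.append((minutes, round(speed, 1)))
    return schedule

# First few set-points of a hypothetical 1-h session ramping to 30 m/min
for minutes, speed in ramp_schedule()[:6]:
    print(f"{minutes:4.1f} min -> {speed} m/min")
```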
Abstract:
This study evaluated the effect of muscle satellite cells (MSCs) overexpressing myogenin (MyoG) on denervated muscle atrophy. Rat MSCs were isolated and transfected with the MyoG-EGFP plasmid vector GV143. MyoG-transfected MSCs (MTMs) were transplanted into rat gastrocnemius muscles at 1 week after surgical denervation. Controls included injections of untransfected MSCs or the vehicle only. Muscles were harvested and analyzed at 2, 4, and 24 weeks post-transplantation. Immunofluorescence confirmed MyoG overexpression in MTMs. The muscle wet weight ratio was significantly higher at 2 weeks after MTM injection (67.17±6.79) than in muscles injected with MSCs (58.83±5.31) or the vehicle (53.00±7.67; t=2.37, P=0.04 and t=3.39, P=0.007, respectively). The muscle fiber cross-sectional area was also larger at 2 weeks after MTM injection (2.63×10³±0.39×10³) than after MSC injection (1.99×10³±0.58×10³) or the vehicle only (1.57×10³±0.47×10³; t=2.24, P=0.049 and t=4.22, P=0.002, respectively). At 4 and 24 weeks post-injection, the muscle mass and fiber cross-sectional area were similar across all three experimental groups. Immunohistochemistry showed that the MTM group had larger MyoG-positive fibers. The MTM group also had higher MyoG mRNA expression (3.18±1.13) than the other groups (1.41±0.65 and 1.03±0.19) at 2 weeks after injection (t=2.72, P=0.04). Transplanted MTMs delayed short-term atrophy of denervated muscles. This approach can be optimized as a novel stand-alone therapy or as a bridge to surgical re-innervation of damaged muscles.
Abstract:
Building a computational model for a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model’s numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach, known as quantitative model refinement, which preserves the model fit without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef that performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool reduces the time and resources needed, as well as the errors generated, compared with traditional reiteration of the whole model to perform the fitting procedure.
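To make the idea of fit-preserving data refinement concrete, the toy sketch below splits one species into several refined variants while conserving its total initial amount, so that the refined model reproduces the behaviour of the original. The names are hypothetical and the "model" is a bare dictionary; ModelRef itself performs this kind of refinement on full SBML and CPS (COPASI) models.

```python
def refine_species(model, species, variants):
    """Replace one species with several refined variants, splitting its
    initial amount equally so that the summed quantity, and hence the
    fit of the original model, is preserved."""
    amount = model.pop(species)
    share = amount / len(variants)
    for variant in variants:
        model[variant] = share
    return model

# Hypothetical example: refine a generic protein P into two phosphoforms
model = {"P": 100.0, "kinase": 10.0}
print(refine_species(model, "P", ["P_unphos", "P_phos"]))
# -> {'kinase': 10.0, 'P_unphos': 50.0, 'P_phos': 50.0}
```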
Abstract:
Biorefining is a promising field of study that offers many opportunities for a successful business unit with respect to sustainability. The thesis focuses on the following key objective: identification of a competitive biorefinery production process in the small and medium segments of the chemical and forest industries in Finland. The scope of the research covers selected biorefinery operations in Finland and the use of hemicellulose as a raw material. Identifying the types of biorefineries and their important technical and process characteristics provides an advantage in a company’s competitive analysis. The study takes a practical approach to market and company research methods with the help of the Quality Function Deployment (QFD) and House of Quality tools. The thesis’s findings provide an expert-based version of the House of Quality application, an identification of the correlations among crucial biorefinery technical and design characteristics, and their effect on the competitive behavior of a company. The theoretical background helps to build a picture of the problematic issues within the field and suggests possible scientific solutions. The analysis of the biorefinery market and company operations provides the practically oriented contribution of the research. The results of the research can be used for further investigations in the field and may be applied as an analytic and strategic tool for company management.
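Since the thesis relies on the House of Quality, the sketch below shows the standard arithmetic behind it: each technical characteristic's importance is the sum of customer-requirement weights multiplied by the relationship strengths (often scored 9/3/1 for strong/moderate/weak). The requirements, characteristics, weights and strengths here are hypothetical placeholders, not findings from the thesis.

```python
# Customer requirements and their weights (hypothetical)
requirements = {"low production cost": 5, "high hemicellulose yield": 4, "process scalability": 3}

# Relationship strengths between technical characteristics and requirements (hypothetical)
relationships = {
    "reactor temperature control": {"low production cost": 3, "high hemicellulose yield": 9},
    "membrane separation stage":   {"high hemicellulose yield": 3, "process scalability": 9},
}

# Technical importance = sum of (requirement weight x relationship strength)
importance = {
    tech: sum(requirements[req] * strength for req, strength in rels.items())
    for tech, rels in relationships.items()
}
print(importance)  # {'reactor temperature control': 51, 'membrane separation stage': 39}
```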
Abstract:
I will argue that the doctrine of eternal recurrence of the same no better interprets cosmology than pink elephants interpret zoology. I will also argue that the eternal-return-of-the-same doctrine as what Magnus calls an "existential imperative" is without possibility of application and thus futile. To facilitate those arguments, the validity of the doctrine of the eternal recurrence of the same will be tested under distinct rubrics. Although each rubric will stand alone, one per chapter, as an evaluation of some specific aspect of eternal recurrence, the rubric sequence has been selected to accommodate the identification of what I shall be calling logic abridgments. The conclusions to be extracted from each rubric are grouped under the heading CONCLUSION and appear immediately following rubric ten. When, or if, at the end of a rubric, a reader is inclined to wonder which rubric or topic is next, and why, the answer can be found at the top of the following page. The question is usually answered in the very first sentence, but always answered in the first paragraph. The first rubric has been placed in order by chronological entitlement in that it deals with the evolution of the idea of eternal recurrence from the time of the ancient Greeks to Nietzsche's August 1881 inspiration. This much-recommended technique is also known as starting at the beginning. Rubric 1 also deals with 20th-century philosophers' assessments of the relationship between Nietzsche and ancient Greek thought. The only experience of E-R, Zarathustra's mountain vision, is second only because it sets the scene alluded to in the following rubrics. The third rubric explores Nietzsche's evaluation of rationality so that his thought processes will be understood appropriately. The actual mechanism of E-R is tested in rubric four... The scientific proof Nietzsche assembled in support of E-R is assessed by contemporary philosophers in rubric five. E-R's function as an ethical imperative is debated in rubrics six and seven... The extent to which E-R fulfills its purpose in overcoming nihilism is measured against the comfort assured by major world religions in rubric eight. Whether E-R also serves as a redemption for revenge is questioned in rubric nine. Rubric ten assures that E-R refers to the return of the identically same and not merely the similar. In addition to the assemblage and evaluation of all ten rubrics, at the end of each rubric a brief recapitulation of its principal points concludes the chapter. In this essay I will assess the theoretical conditions under which the doctrine cannot be applicable and will show what contradictions and inconsistencies follow if the doctrine is taken to be operable. Harold Alderman in his book Nietzsche's Gift wrote that the "doctrine of eternal recurrence gives us a problem not in Platonic cosmology, but in Socratic self-reflection" (Alderman, p. 84). I will illustrate that the recurrence doctrine's cosmogony is unworkable and that, if it were workable, it would negate self-reflection on the grounds that self-reflection cannot find its cause in eternal recurrence of the same. Thus, when the cosmology is shown to be impossible, any expected ensuing results or benefits will also be rendered impossible. The so-called "heaviest burden" will be exposed as complex, engrossing "what if" speculations deserving no links to reality. To identify abridgments of logic, contradictions and inconsistencies in Nietzsche's doctrine of eternal recurrence of the same, I
will examine the subject under the following schedule. In Chapter 1 the ancient origins of recurrence theories will be introduced... This chapter is intended to establish the boundaries within which the subsequent chapters, except Chapter 10, will be confined. Chapter 2, Zarathustra's vision of E-R, assesses the sections of Thus Spoke Zarathustra in which the phenomenon of recurrence of the same is reported... Nihilism as a psychological difficulty is introduced in this rubric, but that subject will be studied in detail in Chapter 8. In Chapter 2 the symbols of eternal recurrence of the same will also be considered. Whether the recurrence image should be that of a closed ring or of a coil will be of significance in many sections of my essay. I will argue that neither symbolic configuration can accommodate Nietzsche's supposed intention. Chapter 3 defends the description of E-R given by Zarathustra. Chapter 4, the cosmological mechanics of E-R, speculates on the seriousness with which Nietzsche might have intended the doctrine of eternal recurrence to be taken. My essay reports, and then assesses, the argument of those who suppose the doctrine to have been merely exploratory musings by Nietzsche on cosmological hypotheses... The cosmogony of E-R is examined. In Chapter 5, cosmological proofs tested, the proofs for Nietzsche's doctrine of return of the same are evaluated. This chapter features the position taken by Martin Heidegger. My essay suggests that while Heidegger's argument that recurrence of the same is a genuine cosmic agenda is admirable, it is not at all persuasive. Chapter 6, E-R is an ethical imperative, is in essence the reporting of a debate between two scholars regarding the possibility of an imperative in the doctrine of recurrence. Their debate polarizes the arguments I intend to develop. Chapter 7, does E-R of the same preclude alteration of attitudes, is a continuation of the debate presented in Chapter 6, with the focus shifted from the cosmological to the psychological aspects of eternal recurrence of the same. Chapter 8, Can E-R Overcome Nihilism?, is divided into two parts. In the first, nihilism as it applies to Nietzsche's theory is discussed... In the second, the broader consequences, sources and definitions of nihilism are outlined. My essay argues that Nietzsche's doctrine is more nihilistic than are the world's major religions. Chapter 9, Is E-R a redemption for revenge?, examines the suggestion extracted from Thus Spoke Zarathustra that the doctrine of eternal recurrence is intended, among other purposes, as a redemption for mankind from the destructiveness of revenge. Chapter 10, E-R of the similar refuted, analyses the position that an element of chance can influence the doctrine of recurrence. This view appears to allow not for recurrence of the same but for recurrence of the similar. A summary will briefly recount the various significant logic abridgments, contradictions, and inconsistencies associated with Nietzsche's doctrine of eternal recurrence of the same. In the 'conclusion' section of my essay my own opinions and observations will be assembled from the body of the essay.
Abstract:
The question of how we can encourage creative capacities in young people has never been more relevant than it is today (Pink, 2006; Robinson as cited in TEDtalksDirector, 2007; Eisner as cited in VanderbiltUniversity, 2009). While the world is rapidly evolving, education has the great challenge of adapting to keep up. Scholars say that to meet the needs of 21st century learners, pedagogy must focus on fostering creative skills to enable students to manage in a future we cannot yet envision (Robinson as cited in TEDtalksDirector, 2007). Further, research demonstrates that creativity thrives with autonomy, support, and without judgment (Amabile, 1996; Codack [Zak], 2010; Harrington, Block, & Block, 1987; Holt, 1989; Kohn, 1993). So how well are schools doing in this regard? How do alternative models of education nurture or neglect creativity, and how can this inform teaching practice all around? In other words, ultimately, how can we nurture creativity in education? This documentary explores these questions from a scholarly art-based perspective. Artist/researcher/teacher Rebecca Zak builds on her experience in the art studio, academia, and the art classroom to investigate the various philosophies and strategies that diverse educational models implement to illuminate the possibilities for educational and paradigmatic transformation. The Raising Creativity documentary project consists of multiple parts across multiple platforms. There are five videos in the series that answer the why, who, how, what, and now what about creativity in education respectively (i.e., why is this topic important, who has spoken/written on this topic already, how will this issue be investigated this time, what was observed during the inquiry, and now what will this mean going forward?). There is also a self-reflexive blog that addresses certain aspects of the topic in greater depth (located here, on this website) and in the context of Rebecca's lived experience to complement the video format. Together, all video and blog artifacts housed on this website function as a polyptych, wherein the pieces can stand alone individually yet are intended to work together and fulfill the dissertation requirements for Rebecca's doctorate degree in education in reimagined ways.
Abstract:
This case study traces the evolution of library assignments for biological science students from paper-based workbooks in a blended (hands-on) workshop, to blended learning workshops using online assignments, to online active learning modules which are stand-alone, without any face-to-face instruction. As the assignments evolved to adapt to online learning, supporting materials in the form of PDFs (portable document format), screen captures, and screencasts were embedded into the questions as teaching moments to replace face-to-face instruction. Many aspects of the evolution of the assignment were based on student feedback from evaluations, input from senior lab demonstrators and teaching assistants, and statistical analysis of the students’ performance on the assignment. Advantages and disadvantages of paper-based and online assignments are discussed. An important factor for successful online learning may be the ability to get assistance.