876 results for "Scale not given"


Relevance:

30.00%

Publisher:

Abstract:

In this bachelor's thesis, a regression-testing and maintenance tool is designed and implemented for the Python programming exercises of the Introduction to Programming course. The tool is intended to help the course staff determine whether the example solutions to the course's exercises work in the Python version that will be used as the programming environment the following year, and to ease verifying the consistency of the exercise material when the Python version is changed or the material is modified. The thesis investigates how a general-purpose testing tool suited to this purpose can be developed, which considerations its design involves, and which problems the development entails. Developing a general-purpose testing tool proved difficult even though the programs under test are simple. Testing the more than 50 programs in the exercise material required a very large number of files in total, and it was difficult to choose an optimal directory structure for handling them both inside and outside the tool. In addition, some of the programs under test were found to require case-specific additional steps during testing, which were left unimplemented within the scope of this work. The desired outcome of the work was therefore only partially achieved. Nevertheless, the result is a tool that can run 93% of the current example solutions with predefined test inputs in a chosen Python environment and report whether the programs work and whether their output matches the example output.
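The core step of such a tool, running an example solution with a fixed input and comparing its output against the stored example output, can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual code; the function names are invented.

```python
import subprocess
import sys

def run_solution(script_path: str, test_input: str, timeout: float = 10.0) -> str:
    """Run one example solution under the chosen Python interpreter
    (here the current one), feeding it a predefined test input."""
    result = subprocess.run(
        [sys.executable, script_path],
        input=test_input,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout

def outputs_match(actual: str, expected: str) -> bool:
    """Compare a program's output to the example output, ignoring
    trailing whitespace so purely cosmetic differences do not fail."""
    def normalize(s: str) -> list:
        return [line.rstrip() for line in s.strip().splitlines()]
    return normalize(actual) == normalize(expected)
```

Running every solution through `run_solution` under next year's interpreter and checking each result with `outputs_match` yields exactly the kind of pass/fail report described above.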


Mass-produced paper electronics (large-area organic printed electronics on paper-based substrates, “throw-away electronics”) has the potential to introduce the use of flexible electronic applications in everyday life. While paper manufacturing and printing have a long history, they were not developed with electronic applications in mind. Modifications to paper substrates and printing processes are required in order to obtain working electronic devices. This should be done while maintaining the high throughput of conventional printing techniques and the low cost and recyclability of paper. An understanding of the interactions between the functional materials, the printing process and the substrate is required for successful manufacturing of advanced devices on paper. Based on this understanding, a recyclable, multilayer-coated paper-based substrate that combines adequate barrier and printability properties for printed electronics and sensor applications was developed in this work. In this multilayer structure, a thin top-coating consisting of mineral pigments is coated on top of a dispersion-coated barrier layer. The top-coating provides well-controlled sorption properties through controlled thickness and porosity, thus enabling the printability of functional materials to be optimized. The penetration of ink solvents and functional materials stops at the barrier layer, which not only improves the performance of the functional material but also eliminates potential fiber swelling and de-bonding that can occur when the solvents are allowed to penetrate into the base paper. The multilayer-coated paper under consideration in the current work consists of a pre-coating and a smoothing layer on which the barrier layer is deposited. Coated fine paper may also be used directly as basepaper, ensuring a smooth base for the barrier layer. The top layer is thin and smooth, consisting of mineral pigments such as kaolin, precipitated calcium carbonate, silica or blends of these.
All the materials in the coating structure have been chosen in order to maintain the recyclability and sustainability of the substrate. The substrate can be coated in steps, sequentially layer by layer, which requires detailed understanding and tuning of the wetting properties and topography of the barrier layer versus the surface tension of the top-coating. A cost-competitive method for industrial-scale production is the curtain coating technique, which allows extremely thin top-coatings to be applied simultaneously with a closed and sealed barrier layer. An understanding of the interactions between functional materials formulated and applied on paper as inks makes it possible to create a paper-based substrate that can be used to manufacture printed electronics-based devices and sensors on paper. The multitude of functional materials and their complex interactions make it challenging to draw general conclusions in this topic area. Inevitably, the results become partially specific to the device chosen and the materials needed in its manufacturing. Based on the results, it is clear that for inks based on dissolved or small-size functional materials, a barrier layer is beneficial and ensures the functionality of the printed material in a device. The required active barrier lifetime depends on the solvents or analytes used and their volatility. High-aspect-ratio mineral pigments, which create tortuous pathways and physical barriers within the barrier layer, limit the penetration of solvents used in functional inks. The surface pore volume and pore size can be optimized for a given printing process and ink through a choice of pigment type and coating layer thickness. However, when manufacturing multilayer functional devices, such as transistors, which consist of several printed layers, compromises have to be made. For example, while a thick and porous top-coating is preferable for printing of source and drain electrodes with a silver particle ink, a thinner and less absorbing surface is required to form a functional semiconducting layer. With the multilayer coating structure concept developed in this work, it was possible to make the paper substrate suitable for printed functionality. The possibility of printing functional devices, such as transistors, sensors and pixels, in a roll-to-roll process on paper is demonstrated, which may enable the use of paper in disposable “one-time use” or “throwaway” electronics and sensors, such as lab-on-strip devices for various analyses, consumer packages equipped with product quality sensors, or remote tracking devices.


Forest biomass represents a geographically distributed feedstock, and geographical location affects the greenhouse gas (GHG) performance of a given forest-bioenergy system in several ways. For example, biomass availability, forest operations, transportation possibilities and the distances involved, biomass end-use possibilities, fossil reference systems, and forest carbon balances all depend to some extent on location. The overall objective of this thesis was to assess the GHG emissions derived from supply and energy-utilization chains of forest biomass in Finland, with a specific focus on the effect of location in relation to forest biomass's availability and the transportation possibilities. Biomass availability and transportation-network assessments were conducted through utilization of geographical information system methods, and the GHG emissions were assessed by means of life-cycle assessment. The thesis is based on four papers in which forest biomass supply on an industrial scale was assessed. The feedstocks assessed in this thesis include harvesting residues, small-diameter energy wood and stumps. The principal implication of the findings in this thesis is that in Finland, the location and availability of biomass in the proximity of a given energy-utilization or energy-conversion plant is not a decisive factor in supply-chain GHG emissions or the possible GHG savings to be achieved with forest-biomass energy use. Therefore, for the greatest GHG reductions with limited forest-biomass resources, energy utilization of forest biomass in Finland should be directed to the locations where most GHG savings are achieved through replacement of fossil fuels. Furthermore, one should prioritize the types of forest biomass with the lowest direct supply-chain GHG emissions (e.g., from transport and comminution) and the lowest indirect ones (in particular, soil carbon-stock losses), regardless of location.
In this respect, the best combination is to use harvesting residues in combined heat and power production, replacing peat or coal.
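The prioritization argued for above can be illustrated with a back-of-the-envelope savings calculation. The emission factors below are typical literature values chosen purely for illustration, not figures from the thesis:

```python
# Assumed emission factors in g CO2-eq per MJ of fuel energy (illustrative).
EF_COAL = 95.0            # combustion of coal
EF_PEAT = 106.0           # combustion of peat
EF_RESIDUE_CHAIN = 5.0    # direct supply-chain emissions of harvesting residues

def ghg_savings_per_mj(ef_fossil_replaced: float, ef_biomass_chain: float) -> float:
    """GHG saved per MJ when forest biomass replaces a fossil fuel:
    avoided fossil emissions minus the biomass supply-chain emissions."""
    return ef_fossil_replaced - ef_biomass_chain

# Directing the limited residue resource to peat-fired plants saves more
# per MJ than directing it to coal-fired plants, matching the conclusion above.
```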


In the present work, liquid-solid flow on an industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. In the literature, there are few studies on liquid-solid flow at industrial scale, and no information can be found about the particular case with modified geometry. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime. Thus, careful estimation of the flow regime is recommended before modeling. A computational tool for this purpose is developed in this thesis. The homogeneous multiphase model is valid only for homogeneous suspension; the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspension where the pipe Froude number is greater than 1.0, while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications in the Navier-Stokes equations like the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase pressure drop as well. In the pipe bend, especially secondary flows, perpendicular to the main flow, affect the location of erosion.
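The flow-regime estimate on which the model choice hinges can be sketched via the densimetric pipe Froude number. This is one common definition; the exact form used in the thesis may differ, and the model-selection rule simply restates the conclusions quoted above:

```python
import math

def pipe_froude(velocity: float, pipe_diameter: float,
                rho_solid: float, rho_fluid: float, g: float = 9.81) -> float:
    """Densimetric pipe Froude number for a liquid-solid suspension:
    mean velocity over the characteristic settling-driven velocity scale."""
    return velocity / math.sqrt(g * pipe_diameter * (rho_solid / rho_fluid - 1.0))

def suggested_models(fr: float) -> str:
    """Multiphase model choice following the conclusions above."""
    if fr > 1.0:
        return "DPM, mixture or Eulerian"
    return "mixture or Eulerian"  # settling likely; DPM not recommended
```

For example, sand in water (2650 vs. 1000 kg/m³) at 3 m/s in a 0.1 m pipe gives Fr ≈ 2.4, well inside the regime where the DPM is applicable.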


Hydrothermal carbonization (HTC) is a thermochemical process used in the production of charred matter similar in composition to coal. It involves the use of wet, carbohydrate feedstock, a relatively low-temperature environment (180 °C-350 °C) and high autogenous pressure (up to 2.4 MPa) in a closed system. Various applications of the solid char product exist, opening the way for the exploitation of a range of biomass feedstock materials that have so far proven troublesome due to high water content or other factors. Sludge materials are investigated here as candidates for industrial-scale HTC treatment in fuel production. In general, HTC treatment of pulp and paper industry sludge (PPS) and anaerobically digested municipal sewage sludge (ADS) using existing technology is competitive with traditional treatment options, which range in price from EUR 30 to 80 per ton of wet sludge. PPS and ADS can be treated by HTC for less than EUR 13 and EUR 33 per ton, respectively. Opportunities and challenges related to HTC exist as this relatively new technology moves from laboratory and pilot-scale production to an industrial scale. Feedstock materials, end-products, process conditions and local markets ultimately determine the feasibility of a given HTC operation. However, there is potential for sludge materials to be converted to sustainable bio-coal fuel in a Finnish context.
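The cost comparison above reduces to a simple check. The numbers are those quoted in the abstract; the function name is illustrative:

```python
CONVENTIONAL_RANGE_EUR = (30.0, 80.0)       # conventional treatment, per wet ton
HTC_COST_EUR = {"PPS": 13.0, "ADS": 33.0}   # HTC upper bounds quoted above

def htc_is_competitive(sludge: str) -> bool:
    """HTC is competitive if its cost undercuts the top of the
    conventional treatment price range."""
    return HTC_COST_EUR[sludge] < CONVENTIONAL_RANGE_EUR[1]
```

Note that for PPS the HTC cost even undercuts the bottom of the conventional range.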


Adapting and scaling up agile concepts, which are characterized by iterative, self-directed, customer-value-focused methods, may not be a simple endeavor. This thesis concentrates on studying the challenges in a large-scale agile software development transformation in order to enhance understanding of, and bring insight into, the underlying factors behind such emerging challenges. The topic is approached through the concepts of agility and of different methods compared to traditional plan-driven processes, complex adaptive systems theory, and the impact of organizational culture on agile transformation efforts. The empirical part was conducted using a qualitative case-study approach. The internationally operating software development case organization had a year of experience of an agile transformation effort, during which it had also undergone organizational realignment efforts. The primary data collection was conducted through semi-structured interviews supported by participatory observation. As a result, the identified challenges were categorized under four broad themes: organizational, management-related, team-dynamics-related and process-related. The identified challenges indicate that agility is a multifaceted concept. Agile practices may bring visibility to issues, many of which are embedded in the organizational culture or in the management style. Viewing software development as a complex adaptive system could facilitate understanding of the underpinning philosophy and eventually solving the issues: interactions are more important than processes, and solving a complex problem, such as novel software development, requires constant feedback and adaptation to changing requirements. Furthermore, an agile implementation seems to be unique in nature, and the agents engaged in the interaction are the pivotal part of successfully achieving agility. If agility is not a strategic choice for the whole organization, additional issues may arise due to different ways of working in different parts of the organization. Lastly, detailed suggestions to mitigate the challenges of the case organization are provided.


Concepts, models, or theories that end up shaping practices, whether those practices fall in the domains of science, technology, social movements, or business, always emerge through a change in language use. First, communities begin to talk differently, incorporating new vocabularies (Rorty, 1989) in their narratives. Whether the community's new narratives respond to perceived anomalies or failures of the existing ones (Kuhn, 1962) or actually reveal inadequacies by addressing previously unrecognized practices (Fleck, 1979; Rorty, 1989) is less important here than the fact that they introduce differences. Then, if the new language proves to be useful, for example because it helps the community solve a problem or create a possibility that existing narratives do not, the new narrative will begin circulating more broadly throughout the community. If other communities learn of the usefulness of these new narratives, and find them sufficiently persuasive, they may be compelled to test, modify, and eventually adopt them. Of primary importance is the idea that a new concept or narrative perceived as useful is more likely to be adopted. We can expect that business concepts emerge through a similar pattern. Concepts such as “competitive advantage,” “disruption,” and the “resource-based view,” now broadly known and accepted, were each at some point first introduced by a community. This community experimented with the concepts it introduced and found them useful. The concept “competitive advantage,” for example, helped researchers better explain why some firms outperformed others and helped practitioners more clearly understand what choices to make to improve the profit and growth prospects of their firms. The benefits of using these terms compelled other communities to consider, apply, and eventually adopt them as well. Were these terms not viewed as useful, they would not likely have been adopted.
This thesis attempts to observe and anticipate new business concepts that may be emerging. It does so by seeking to observe a community of business practitioners that are using different language and appear to be more successful than a similar community of practitioners that have not yet begun using this different language as extensively. It argues that if the community adopting new types of narratives is perceived as being more successful, their success will attract the attention of other communities, who may then seek to adopt the same narratives. Specifically, this thesis compares the narratives used by a set of firms that are considered to be performing well (called Winners) with those of a set of less-successful peers (called Losers). It does so with the aim of addressing two questions: - How do the strategic narratives that circulate within “winning” companies and their leaders differ from those circulating within “losing” companies and their leaders? - Given the answer to the first question: what new business strategy concepts are likely to emerge in the business community at large? I expected to observe “winning” companies shifting their language, abandoning an older set of narratives for newer ones. However, the analysis indicates a more interesting dynamic: “winning” companies adopt the same core narratives as their “losing” peers with equal frequency, yet they go beyond these. Both “winners” and “losers” seem to pursue economies of scale, customer captivity, best practices, and preferential access to resources with similar vigor. But “winners” seem to go further, applying three additional narratives in their pursuit of competitive advantage. They speak of coordinating what is uncoordinated, of what this thesis calls “exchanging the role of guest for that of host,” and of “forcing a two-front battle” more frequently than their “loser” peers.
Since these “winning” companies are likely perceived as being more successful, the unique narratives they use are more likely to be emulated and adopted. Understanding in what ways winners speak differently, therefore, gives us a glimpse into the possible future evolution of business concepts.


Male Wistar rats were trained in one-trial step-down inhibitory avoidance using a 0.4-mA footshock. At various times after training (0, 1.5, 3, 6 and 9 h for the animals implanted into the CA1 region of the hippocampus; 0 and 3 h for those implanted into the amygdala), these animals received microinfusions of SKF38393 (7.5 µg/side), SCH23390 (0.5 µg/side), norepinephrine (0.3 µg/side), timolol (0.3 µg/side), 8-OH-DPAT (2.5 µg/side), NAN-190 (2.5 µg/side), forskolin (0.5 µg/side), KT5720 (0.5 µg/side) or 8-Br-cAMP (1.25 µg/side). Rats were tested for retention 24 h after training. When given into the hippocampus 0 h post-training, norepinephrine enhanced memory whereas KT5720 was amnestic. When given 1.5 h after training, all treatments were ineffective. When given 3 or 6 h post-training, 8-Br-cAMP, forskolin, SKF38393, norepinephrine and NAN-190 caused memory facilitation, while KT5720, SCH23390, timolol and 8-OH-DPAT caused retrograde amnesia. Again, at 9 h after training, all treatments were ineffective. When given into the amygdala, norepinephrine caused retrograde facilitation at 0 h after training. The other drugs infused into the amygdala did not cause any significant effect. These data suggest that in the hippocampus, but not in the amygdala, a cAMP/protein kinase A pathway is involved in memory consolidation at 3 and 6 h after training, which is regulated by D1, β, and 5-HT1A receptors. This correlates with data on increased post-training cAMP levels and a dual peak of protein kinase A activity and CREB-P levels (at 0 and 3-6 h) in rat hippocampus after training in this task. These results suggest that the hippocampus, but not the amygdala, is involved in long-term storage of step-down inhibitory avoidance in the rat.


The effect of diets enriched with oat or wheat bran (prepared by the addition of 300 g of each fiber to 1000 g of the regular diet), given for 8 weeks, on the mucosal height of the colon and cecum was investigated. Newly weaned (21 days old) and aged (12 months old) male Wistar rats were used in this study. As compared to controls, diets enriched with wheat bran provoked a significant increase in the mucosal height, whereas oat bran did not cause any effect. In newly weaned rats, wheat bran increased the mucosal height (µm) in the cecum by 20% (mean ± SEM for 8 rats; 169.1 ± 5.2 and 202.9 ± 8.0 for control and wheat bran, respectively) and in the colon (218.8 ± 7.2 and 264.5 ± 18.8 for control and wheat bran, respectively). A similar effect was observed in aged rats, with an increase of 15% in the mucosal height (µm) of the cecum (mean ± SEM of 8 rats; 193.2 ± 8.6 and 223.7 ± 8.3 for control and wheat bran, respectively) and of 17% in the colon (300.4 ± 9.2 and 352.2 ± 15.9 for control and wheat bran, respectively).
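The reported percentage increases follow directly from the group means quoted above (values in µm); a quick arithmetic check:

```python
def percent_increase(control: float, treated: float) -> float:
    """Relative increase of the treated mean over the control mean, in percent."""
    return 100.0 * (treated - control) / control

# Newly weaned rats, cecum: ~20 % increase, as reported in the abstract.
# Aged rats, colon: ~17 % increase, as reported in the abstract.
```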


The Sun is a crucial benchmark for how we see the universe. Especially in the visible range of the spectrum, stars are commonly compared to the Sun, as it is the most thoroughly studied star. In this work I have focused on two aspects of the Sun and how it is used in modern astronomy. Firstly, I try to answer the question of how similar to the Sun another star can be. Given the limits of observations, we call a star a solar twin if it has the same observed parameters as the Sun within its errors. These stars can be used as stand-in suns when doing observations, as normal night-time telescopes are not built to be pointed at the Sun. There have been many searches for these twins, and every one of them provided not only information on how close to the Sun another star can be, but also helped us to understand the Sun itself. In my work I have selected ~300 stars that are both photometrically and spectroscopically close to the Sun and found 22 solar twins, of which 17 were previously unknown and can therefore contribute to the emerging picture of solar twins. In my second research project I have used my full sample of 300 solar analogue stars to check the temperature and metallicity scale of stellar catalogue calibrations. My photometric sample was originally drawn from the Geneva-Copenhagen Survey (Nordström et al. 2004; Holmberg et al. 2007, 2009), for which two alternative calibrations exist, i.e. GCS-III (Holmberg et al. 2009) and C11 (Casagrande et al. 2011). I used very high resolution spectra of solar analogues and a new approach to test the two calibrations. I found a zero-point shift of the order of +75 K and +0.10 dex in effective temperature and metallicity, respectively, in GCS-III and therefore favour the C11 calibration, which found similar offsets. I then performed a spectroscopic analysis of the stars to derive effective temperatures and metallicities, and verified that they are well centred around the solar values.
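The working definition of a solar twin used above can be expressed directly in code. The solar parameter values and error bars here are illustrative assumptions, not the thesis's adopted values:

```python
# Assumed nominal solar parameters (Teff in K, [Fe/H] in dex, log g in cgs dex).
SUN = {"teff": 5772.0, "feh": 0.0, "logg": 4.44}

def is_solar_twin(star: dict, errors: dict) -> bool:
    """A star counts as a solar twin if every observed parameter agrees
    with the solar value to within its quoted observational error."""
    return all(abs(star[key] - SUN[key]) <= errors[key] for key in SUN)
```

A star at 5750 K with [Fe/H] = +0.02 and log g = 4.40, with typical errors of 50 K, 0.05 dex and 0.10 dex, would qualify; a 5600 K star would not.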


Active magnetic bearings offer many advantages that have brought new applications to industry. However, like all new technology, active magnetic bearings also have downsides, one of which is the low level of standardization. This thesis focuses mainly on the ISO 14839 standard, and more specifically on its system verification methods. These verification methods are applied in a practical test with an existing active magnetic bearing system. The system is simulated in Matlab using a rotor-bearing dynamics toolbox; this study does not include the exact simulation code or a direct algebraic calculation. However, it provides proof that the standardized simulation methods can be applied to practical problems.
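One of the verification criteria in ISO 14839-3 classifies a system by the peak magnitude of its sensitivity function into zones A-D. A sketch of that classification follows; the zone limits are as commonly quoted and should be verified against the standard itself:

```python
def iso14839_zone(peak_sensitivity: float) -> str:
    """Zone classification by peak sensitivity magnitude (dimensionless).
    Limits of 3, 4 and 5 correspond to roughly 9.5, 12 and 14 dB."""
    if peak_sensitivity < 3.0:
        return "A"  # typical of newly commissioned machines
    if peak_sensitivity < 4.0:
        return "B"  # acceptable for unrestricted long-term operation
    if peak_sensitivity < 5.0:
        return "C"  # unsuitable for continuous long-term operation
    return "D"      # severe; may cause damage
```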


Adrenocortical autoantibodies (ACA), present in 60-80% of patients with idiopathic Addison's disease, are conventionally detected by indirect immunofluorescence (IIF) on frozen sections of adrenal glands. The large-scale use of IIF is limited in part by the need for a fluorescence microscope and the fact that histological sections cannot be stored for long periods of time. To circumvent these restrictions we developed a novel peroxidase-labelled protein A (PLPA) technique for the detection of ACA in patients with Addison's disease and compared the results with those obtained with the classical IIF assay. We studied serum samples from 90 healthy control subjects and 22 patients with Addison's disease, who had been clinically classified into two groups: idiopathic (N = 13) and granulomatous (N = 9). ACA-PLPA were detected in 10/22 (45%) patients: 9/13 (69%) with the idiopathic form and 1/9 (11%) with the granulomatous form, whereas ACA-IIF were detected in 11/22 patients (50%): 10/13 (77%) with the idiopathic form and 1/9 (11%) with the granulomatous form. Twelve of the 13 idiopathic addisonians (92%) were positive for either ACA-PLPA or ACA-IIF, but only 7 were positive by both methods. In contrast, none of the 90 healthy subjects was found to be positive for ACA. Thus, our study shows that the PLPA-based technique is useful and has technical advantages over the IIF method: it does not require a fluorescence microscope and permits section storage for long periods of time. However, since it is only 60% concordant with the ACA-IIF method, it should be considered a complementary method rather than an alternative to IIF for the detection of ACA in human sera.
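The roughly 60% concordance figure follows from the overlap of the two positive sets. A sketch with hypothetical patient IDs chosen to reproduce the idiopathic-group counts quoted above (9 PLPA-positive, 10 IIF-positive, 7 by both, 12 by either):

```python
def concordance(pos_a: set, pos_b: set) -> float:
    """Fraction of samples positive by either method that are positive by both."""
    return len(pos_a & pos_b) / len(pos_a | pos_b)

# Hypothetical patient IDs reproducing the counts in the abstract.
plpa_pos = set(range(1, 10))   # patients 1-9: PLPA-positive
iif_pos = set(range(3, 13))    # patients 3-12: IIF-positive
```

With these sets, concordance is 7/12 ≈ 58%, i.e. the "only 60% concordant" figure cited above.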


We evaluated the effects of infusions of the NMDA receptor antagonist D,L-2-amino-5-phosphonopentanoic acid (AP5) into the basolateral nucleus of the amygdala (BLA) on the formation and expression of memory for inhibitory avoidance. Adult male Wistar rats (215-300 g) were implanted under thionembutal anesthesia (30 mg/kg, ip) with 9.0-mm guide cannulae aimed 1.0 mm above the BLA. Bilateral infusions of AP5 (5.0 µg) were given 10 min prior to training, immediately after training, or 10 min prior to testing in a step-down inhibitory avoidance task (0.3 mA footshock, 24-h interval between training and the retention test session). Both pre- and post-training infusions of AP5 blocked retention test performance. When given prior to the test, AP5 did not affect retention. AP5 did not affect training performance, and a control experiment showed that the impairing effects were not due to alterations in footshock sensitivity. The results suggest that NMDA receptor activation in the BLA is involved in the formation, but not the expression, of memory for inhibitory avoidance in rats. However, the results do not necessarily imply that the role of NMDA receptors in the BLA is to mediate long-term storage of fear-motivated memory within the amygdala.


Demand for energy systems with high efficiency and the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high- and low-temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on the use of non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operational conditions. The results for the turbine designs indicated that the working fluid selection should not be based only on the thermodynamic analysis but also requires consideration of the turbine design. The turbines tend to be fast-rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures, and in both micro- and larger-scale applications.
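The first-law part of such a thermodynamic analysis reduces to an enthalpy balance over the four states of a simple cycle. The state enthalpies below are illustrative placeholders, not results from the thesis or properties of any particular working fluid:

```python
def orc_thermal_efficiency(h: dict) -> float:
    """Thermal efficiency of a simple ORC: net work (turbine work minus
    pump work) divided by the heat added between pump outlet and turbine
    inlet, computed from specific enthalpies (kJ/kg)."""
    w_turbine = h["turbine_in"] - h["turbine_out"]
    w_pump = h["pump_out"] - h["pump_in"]
    q_in = h["turbine_in"] - h["pump_out"]
    return (w_turbine - w_pump) / q_in

# Illustrative state enthalpies in kJ/kg for some organic working fluid.
states = {"turbine_in": 500.0, "turbine_out": 460.0,
          "pump_in": 250.0, "pump_out": 252.0}
```

With these placeholder values the efficiency comes out near 15%, a plausible order of magnitude for low-grade heat recovery; a real analysis would take the enthalpies from an accurate equation of state for the chosen fluid.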


Treating pain before it begins may prevent the persistent pain-induced changes in the central nervous system that amplify pain long after the initial stimulus. The effects of pre- or postoperative intraperitoneal administration of morphine (2 to 8 mg/kg), dipyrone (40 and 80 mg/kg), diclofenac (2 to 8 mg/kg), ketoprofen (10 and 20 mg/kg), and tenoxicam (10 and 20 mg/kg) were studied in a rat model of post-incisional pain. Groups of 5 to 8 male Wistar rats (140-160 g) were used to test each drug dose. An incision was made on the plantar surface of a hind paw and the changes in the withdrawal threshold to mechanical stimulation were evaluated with von Frey filaments at 1, 2, 6 and 24 h after the surgery. Tenoxicam was given 12 or 6 h preoperatively, whereas the remaining drugs were given 2 h or 30 min preoperatively. Postoperative drugs were all given 5 min after surgery. No drug abolished allodynia when injected before or after surgery, but thresholds were significantly higher than in controls for up to 2 h following ketoprofen, 6 h following diclofenac, and 24 h following morphine, dipyrone or tenoxicam when the drugs were injected postoperatively. Significant differences between pre- and postoperative treatments were obtained only with ketoprofen administered 30 min before surgery. Preoperative (2 h) intraplantar, but not intrathecal, ketoprofen reduced the post-incisional pain for up to 24 h after surgery. It is concluded that stimuli generated in the inflamed tissue, rather than changes in the central nervous system, are relevant for the persistence of pain in this model of post-incisional pain.