38 results for Large-scale Structure Of Universe


Relevance:

100.00%

Publisher:

Abstract:

Cell-based therapies have the potential to contribute to global healthcare, whereby living cells and tissues are used as medicinal therapies. Despite this potential, many challenges remain before the full value of this emerging field can be realized. The characterization of input material for cell-based therapy bioprocesses from multiple donors is necessary to identify and understand the potential implications of input variation on process development. In this work, we have characterized bone marrow derived human mesenchymal stem cells (BM-hMSCs) from multiple donors and discussed the implications of the measurable input variation for the development of autologous and allogeneic cell-based therapy manufacturing processes. The range of cumulative population doublings across the five BM-hMSC lines over 30 days of culture was 5.93, with an 18.2% range in colony forming efficiency at the end of the culture process and a 55.1% difference in the production of interleukin-6 between these cell lines. It has been demonstrated that this variation results in a range in process time between these donor hMSC lines of over 13 days for a hypothetical product, creating potential batch timing issues when manufacturing products from multiple patients. All BM-hMSC donor lines demonstrated conformity to the ISCT criteria but showed differences in cell morphology. Metabolite analysis showed that hMSCs from the different donors have a range in glucose consumption of 26.98 pmol cell−1 day−1, lactate production of 29.45 pmol cell−1 day−1 and ammonium production of 1.35 pmol cell−1 day−1, demonstrating the extent of donor variability throughout the expansion process. Measuring informative product attributes during process development will facilitate progress towards consistent manufacturing processes, a critical step in the translation of cell-based therapies.
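The per-cell metabolite rates quoted above can be illustrated with a simple two-point calculation using an average viable cell number; the function and all numbers below are illustrative assumptions, not data from the study.

```python
# Sketch of a specific (per-cell) metabolite rate calculation, assuming a
# two-point method with an average viable cell number over the interval.
# All numbers are illustrative, not data from the study.

def specific_rate_pmol_per_cell_day(delta_pmol, cells_start, cells_end, days):
    """Per-cell metabolic rate in pmol cell^-1 day^-1."""
    avg_cells = (cells_start + cells_end) / 2.0
    return delta_pmol / (avg_cells * days)

# Example: 5e8 pmol of glucose consumed over 2 days of culture,
# growing from 1e6 to 4e6 cells.
rate = specific_rate_pmol_per_cell_day(5e8, 1e6, 4e6, 2.0)
print(rate)  # pmol cell^-1 day^-1
```

More careful analyses integrate the growth curve rather than averaging two cell counts, but the two-point form is a common approximation during expansion studies.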


When machining a large-scale aerospace part, the part is normally located and clamped firmly until a set of features is machined. When the part is released, its size and shape may deform beyond the tolerance limits due to stress release. This paper presents the design of a new fixing method and flexible fixtures that automatically respond to workpiece deformation during machining. Deformation is inspected and monitored on-line, and part location and orientation can be adjusted in a timely manner to ensure follow-up operations are carried out under low stress and with respect to the related datum defined in the design models.


This paper presents for the first time the concept of measurement assisted assembly (MAA) and outlines the research priorities for the realisation of this concept in industry. MAA denotes a paradigm shift in assembly for high value and complex products and encompasses the development and use of novel metrology processes for the holistic integration and capability enhancement of key assembly and ancillary processes. A complete framework for MAA is detailed showing how this can facilitate a step change in assembly process capability and efficiency for large and complex products, such as airframes, where traditional assembly processes exhibit the requirement for rectification and rework, use inflexible tooling and are largely manual, resulting in cost and cycle time pressures. The concept of MAA encompasses a range of innovative measurement-assisted processes which enable rapid part-to-part assembly, increased use of flexible automation, traceable quality assurance and control, reduced structure weight and improved levels of precision across the dimensional scales. A full scale industrial trial of MAA technologies has been carried out on an experimental aircraft wing, demonstrating the viability of the approach, while studies within 140 smaller companies have highlighted the need for better adoption of existing process capability and quality control standards. The identified research priorities for MAA include the development of both frameless and tooling-embedded automated metrology networks. Other research priorities relate to the development of integrated dimensional variation management, thermal compensation algorithms, and measurement planning and inspection algorithms linking design to measurement and process planning. © Springer-Verlag London 2013.


This study presents a computational parametric analysis of DME steam reforming in a large scale Circulating Fluidized Bed (CFB) reactor. The Computational Fluid Dynamic (CFD) model used, which is based on Eulerian-Eulerian dispersed flow, has been developed and validated in Part I of this study [1]. The effect of the reactor inlet configuration, gas residence time, inlet temperature and steam to DME ratio on the overall reactor performance and products have all been investigated. The results have shown that the use of a double-sided solid feeding system gives a remarkable improvement in flow uniformity, but with limited effect on the reactions and products. The temperature has been found to play a dominant role in increasing the DME conversion and the hydrogen yield. According to the parametric analysis, it is recommended to run the CFB reactor at around 300 °C inlet temperature, 5.5 steam to DME molar ratio, 4 s gas residence time and 37,104 ml gcat−1 h−1 space velocity. Under these conditions, the DME conversion and hydrogen molar concentration in the product gas were both found to be around 80%.
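As a rough sketch of how the recommended operating point relates gas flow, catalyst loading and reactor volume, the helper functions below compute a space velocity and a gas residence time; all numbers are illustrative assumptions, not the paper's data.

```python
# Illustrative relations between the quantities in the recommended operating
# point: space velocity (feed volume per gram catalyst per hour) and gas
# residence time (reactor volume over volumetric flow). Numbers are made up.

def space_velocity_ml_per_gcat_h(vol_flow_ml_per_h, catalyst_mass_g):
    """Gas hourly space velocity in ml gcat^-1 h^-1."""
    return vol_flow_ml_per_h / catalyst_mass_g

def gas_residence_time_s(reactor_volume_m3, vol_flow_m3_per_s):
    """Mean gas residence time in seconds."""
    return reactor_volume_m3 / vol_flow_m3_per_s

# Hypothetical example: 3.7104e7 ml/h of feed over 1000 g of catalyst
sv = space_velocity_ml_per_gcat_h(3.7104e7, 1000.0)
# Hypothetical example: 2 m^3 reactor volume at 0.5 m^3/s gas flow
tau = gas_residence_time_s(2.0, 0.5)
print(sv, tau)
```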


This work is concerned with the nature of liquid flow across industrial sieve trays operating in the spray, mixed, and emulsified flow regimes. In order to overcome the practical difficulties of removing many samples from a commercial tray, the mass transfer process was investigated in an air-water simulator column by heat transfer analogy. The temperature of the warm water was measured by many thermocouples as the water flowed across the single-pass 1.2 m diameter sieve tray. The thermocouples were linked to a minicomputer for the storage of the data. The temperature data were then transferred to a mainframe computer to generate temperature profiles, analogous to concentration profiles. A comprehensive study of the existing tray efficiency models was carried out using computerised numerical solutions. The calculated results were compared with experimental results published by Fractionation Research Inc. (FRI); most of the existing models showed no agreement with the experimental results, and only the Porter and Lockett model showed reasonable agreement for certain tray efficiency values. A rectangular active-section tray was constructed and tested to establish the channelling effect and its consequences for circular tray designs. The developed flow patterns showed predominantly flat profiles and some indication of significant liquid flow through the central region of the tray. This confirms that the rectangular tray configuration might not be a satisfactory solution for liquid maldistribution on sieve trays. For a typical industrial tray, the flow of liquid as it crosses the tray from the inlet to the outlet weir could be affected by eddy and momentum mixing of the liquid and by the weir shape, in the axial or the transverse direction or both.
Conventional U-shaped profiles were developed when the operating conditions were such that the froth dispersion was in the mixed regime, with good liquid temperature distribution while in the spray regime. For the 12.5 mm hole diameter tray the constant-temperature profiles were found to be in the axial direction while in the spray regime, and in the transverse direction for the 4.5 mm hole tray. It was observed that the extent of the liquid stagnant zones at the sides of the tray depended on the tray hole diameter and was larger for the 4.5 mm hole tray. The liquid hold-up results show a high liquid hold-up at the areas of the tray with low liquid temperatures; this supports the doubts about the assumption of constant point efficiency across an operating tray. Liquid flow over the outlet weir showed more liquid flow at the centre of the tray at high liquid loading, with low liquid flow at both ends of the weir. The calculated results of the point and tray efficiency model showed a general increase in the calculated point and tray efficiencies with an increase in the weir loading; as the flow regime changed from the spray to the mixed regime, the point and tray efficiencies increased from approximately 30 to 80%. Through the mixed flow regime the efficiencies were found to remain fairly constant, and as the operating conditions were changed to maintain an emulsified flow regime there was a decrease in the resulting efficiencies. The results of the estimated coefficient of mixing for the small and large hole diameter trays show that the extent of liquid mixing on an operating tray generally increased with increasing capacity factor, but decreased with increasing weir loads. This demonstrates that above certain weir loads, the effect of the eddy diffusion mechanism on the process of liquid mixing on an operating tray is negligible.
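The relation between point and tray efficiency discussed above is commonly bracketed by the classical Lewis limits for fully mixed and unmixed (plug-flow) liquid; the sketch below shows those two bounds, not the thesis's own model, with illustrative values for the point efficiency E_OG and stripping factor λ.

```python
import math

# Classical bounds on Murphree tray efficiency given a point efficiency
# E_OG and stripping factor lam = m*G/L. These are the standard Lewis
# limits, used here only as an illustration; values are made up.

def tray_efficiency_plug_flow(e_og, lam):
    """Lewis relation for unmixed (plug) liquid flow across the tray."""
    return (math.exp(lam * e_og) - 1.0) / lam

def tray_efficiency_fully_mixed(e_og):
    """Completely mixed liquid: tray efficiency equals point efficiency."""
    return e_og

e_og, lam = 0.6, 0.8
print(tray_efficiency_fully_mixed(e_og),
      round(tray_efficiency_plug_flow(e_og, lam), 3))
```

Real trays fall between the two bounds, which is why eddy-diffusion (mixing-coefficient) models of the kind estimated in this work are needed.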


Regenerative medicine technologies have the potential to revolutionise human healthcare. However, whilst science has revealed the potential, and early products have shown the power of such therapies, there is now a need for the long-term supply of human stem cells in sufficient numbers to create reproducible and cost effective therapeutic products. The industrial platforms to be developed for human cell culture are in some ways analogous to those already developed for biopharmaceutical production using mammalian cells at large scales. However, there are a number of unique challenges that need to be addressed, largely because the quality of the cell is paramount, rather than the proteins that they express. © 2013 Elsevier Ltd.


The method of isotope substitution in neutron diffraction was used to measure the structure of liquid ZnCl2 at 332(5) °C and glassy ZnCl2 at 25(1) °C. The partial structure factors were obtained from the measured diffraction patterns by using the method of singular value decomposition and by using the reverse Monte Carlo procedure. The partial structure factors reproduce the diffraction patterns measured by high-energy x-ray diffraction once a correction for the resolution function of the neutron diffractometer has been made. The results show that the predominant structural motif in both phases is the corner sharing ZnCl4 tetrahedron and that there is a small number of edge-sharing configurations, these being more abundant in the liquid. The tetrahedra organize on an intermediate length scale to give a first sharp diffraction peak in the measured diffraction patterns at a scattering vector kFSDP ≈ 1 Å−1 that is most prominent for the Zn-Zn correlations. The results support the notion that the relative fragility of tetrahedral glass forming MX2 liquids is related to the occurrence of edge-sharing units.
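The singular-value-decomposition step described above amounts, at each scattering vector, to solving a small linear system relating the isotopically distinct total structure factors to the Zn-Zn, Zn-Cl and Cl-Cl partials; a minimal sketch with an illustrative (not physical) weighting matrix:

```python
import numpy as np

# Sketch: recovering partial structure factors from isotope-substitution
# data at a single k value. The weighting matrix and totals below are
# illustrative; real weights come from scattering lengths and concentrations.
W = np.array([[0.45, 0.40, 0.15],
              [0.30, 0.50, 0.20],
              [0.20, 0.45, 0.35]])   # rows: isotopic samples; cols: Zn-Zn, Zn-Cl, Cl-Cl
F = np.array([1.10, 1.05, 0.98])     # measured total structure factors (illustrative)

# The pseudo-inverse, computed via SVD, gives the least-squares /
# minimum-norm solution and is stable even when W is ill-conditioned.
S_partial = np.linalg.pinv(W) @ F
print(np.allclose(W @ S_partial, F))
```

In practice the conditioning of W (set by the isotopic contrast) controls how reliably the partials can be separated, which is why SVD rather than a direct inverse is used.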


An alternative explanation for the modes of large-scale failure of open pit walls to that of classical slope stability theory is proposed, which makes use of the concept of a transition zone described by a modified Prandtl's prism.


In this paper, we study the localization problem in large-scale Underwater Wireless Sensor Networks (UWSNs). Unlike in terrestrial positioning, the global positioning system (GPS) cannot work efficiently underwater. The limited bandwidth, the severely impaired channel and the cost of underwater equipment all make the localization problem very challenging. Most current localization schemes are not well suited to deep underwater environments. We propose a hierarchical localization scheme to address these challenging problems. The new scheme mainly consists of four types of nodes: surface buoys, Detachable Elevator Transceivers (DETs), anchor nodes and ordinary nodes. Surface buoys are assumed to be equipped with GPS on the water surface. A DET is attached to a surface buoy and can rise and descend to broadcast its position. The anchor nodes can compute their positions based on the position information from the DETs and the measured distances to the DETs. The hierarchical localization scheme is scalable, and can be used to balance cost against localization accuracy. Initial simulation results show the advantages of our proposed scheme. © 2009 IEEE.
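The anchor-node position computation described above is essentially trilateration from the DET broadcast positions and measured ranges; a minimal 2D sketch (with depth handled separately), in which all coordinates and ranges are illustrative:

```python
# Minimal trilateration sketch: an anchor node measures its range to three
# known DET broadcast positions and solves for its own position.
# All positions and ranges below are illustrative.

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Solve the linearized range equations for (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the first range equation from the other two
    # leaves a 2x2 linear system in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# DETs at known positions; the true anchor position is (3, 4).
print(trilaterate_2d((0, 0), (10, 0), (0, 10), 5.0, 65**0.5, 45**0.5))
```

With noisy acoustic ranges, more than three DET positions and a least-squares solve would be used instead of this exact three-beacon case.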


In this paper, we study an area localization problem in large-scale Underwater Wireless Sensor Networks (UWSNs). The limited bandwidth, the severely impaired channel and the cost of underwater equipment all make the underwater localization problem very challenging. Exact localization is very difficult for UWSNs in deep underwater environments. We propose a mobile-DET-based efficient 3D multi-power Area Localization Scheme (3D-MALS) to address this challenging problem. In the proposed scheme, the ideas of the 2D multi-power Area Localization Scheme (2D-ALS) [6] and of utilizing Detachable Elevator Transceivers (DETs) are used to achieve simplicity, localization accuracy, scalability and low cost. The DET can rise and descend to broadcast its position, and it is assumed that all underwater nodes have pressure sensors and know their z coordinates. The simulation results show that our proposed scheme is very efficient. © 2009 IEEE.
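Because every node is assumed to know its depth from a pressure sensor, a measured slant range to a DET can be reduced to a horizontal range, collapsing the 3D problem into a 2D one; a minimal sketch with illustrative numbers:

```python
import math

# Sketch of reducing a 3D range measurement to a horizontal range when the
# node's depth is known from its pressure sensor. Numbers are illustrative.

def horizontal_range(slant_range, z_node, z_det):
    """Project a slant range onto the horizontal plane given both depths."""
    dz = z_node - z_det
    return math.sqrt(slant_range**2 - dz**2)

# DET at depth 0 m, node at depth 30 m, measured slant range 50 m
print(horizontal_range(50.0, 30.0, 0.0))  # -> 40.0
```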


Question/Issue: We combine agency and institutional theory to explain the division of equity shares between the foreign (majority) and local (minority) partners within foreign affiliates. We posit that once the decision to invest is made, the ownership structure is arranged so as to generate appropriate incentives to local partners, taking into account both the institutional environment and the firm-specific difficulty in monitoring. Research Findings/Insights: Using a large firm-level dataset for the period 2003-2011 from 16 Central and Eastern European countries and applying selectivity corrected estimates, we find that both weaker host country institutions and higher share of intangible assets in total assets in the firm imply higher minority equity share of local partners. The findings hold when controlling for host country effects and when the attributes of the institutional environment are instrumented. Theoretical/Academic Implications: The classic view is that weak institutions lead to concentrated ownership, yet it leaves the level of minority equity shares unexplained. Our contribution uses a firm-level perspective combined with national-level variation in the institutional environment, and applies agency theory to explain the minority local partner share in foreign affiliates. In particular, we posit that the information asymmetry and monitoring problem in firms are exacerbated by weak host country institutions, but also by the higher share of intangible assets in total assets. Practitioner/Policy Implications: Assessing investment opportunities abroad, foreign firms need to pay attention not only to features directly related to corporate governance (e.g., bankruptcy codes) but also to the broad institutional environment. In weak institutional environments, foreign parent firms need to create strong incentives for local partners by offering them significant minority shares in equity. 
The same recommendation applies to firms with higher shares of intangible assets in total assets. © 2014 The Authors.


This article presents a potential method to assist developers of future bioenergy schemes when selecting from available suppliers of biomass materials. The method aims to allow tacit requirements made on biomass suppliers to be considered at the design stage of new developments. The method used is a combination of the Analytical Hierarchy Process and the Quality Function Deployment methods (AHP-QFD). The output of the method is a ranking and relative weighting of the available suppliers which could be used to improve optimization algorithms such as linear and goal programming. The paper is at a conceptual stage and no results have been obtained. The aim is to use the AHP-QFD method to bridge the gap between treatment of explicit and tacit requirements of bioenergy schemes; allowing decision makers to identify the most successful supply strategy available.
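A minimal sketch of the AHP-QFD combination described above: AHP-style weights for the requirements are multiplied into QFD relationship scores for each candidate supplier to produce a ranking. All weights, scores and supplier names below are illustrative assumptions, not outputs of the article's method:

```python
# Illustrative AHP-QFD style ranking: AHP-derived requirement weights are
# combined with QFD relationship scores per supplier. All values made up.

requirement_weights = {"reliability": 0.5, "proximity": 0.3, "quality": 0.2}

# QFD relationship scores (e.g. on a 1/3/9 scale) per supplier and requirement
supplier_scores = {
    "supplier_A": {"reliability": 9, "proximity": 3, "quality": 1},
    "supplier_B": {"reliability": 3, "proximity": 9, "quality": 9},
}

def rank_suppliers(weights, scores):
    """Weighted-sum score per supplier, sorted best first."""
    totals = {
        s: sum(weights[r] * v for r, v in reqs.items())
        for s, reqs in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank_suppliers(requirement_weights, supplier_scores))
```

The resulting relative weights could then feed an optimization step such as the linear or goal programming mentioned in the abstract.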


Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: The SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. 
Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals. Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and one measure of staff perceptions of organisational climate.
There was no additional effect of SPI1 on other targeted issues nor on other measures of generic organisational strengthening.
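The "difference in difference" odds ratio for respiratory-rate recording can be reproduced approximately from the quoted percentages; note that the paper's 2.1 estimate is risk adjusted, so this simple unadjusted calculation does not match it exactly.

```python
# Unadjusted "difference in difference" odds ratio from the quoted
# respiratory-rate recording proportions (before vs after, SPI1 vs control).
# The paper's 2.1 figure is risk adjusted, so this is only an approximation.

def odds(p):
    return p / (1.0 - p)

or_spi = odds(0.78) / odds(0.37)      # SPI1 hospitals, after vs before
or_control = odds(0.69) / odds(0.40)  # control hospitals, after vs before
print(round(or_spi / or_control, 2))
```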


Large-scale introduction of Organic Solar Cells (OSCs) onto the market is currently limited by their poor stability in light and air, factors present in normal working conditions for these devices. Thus, great efforts have to be undertaken to understand the photodegradation mechanisms of their organic materials in order to find solutions that mitigate these effects. This study reports on the elucidation of the photodegradation mechanisms occurring in a low bandgap polymer, namely, Si-PCPDTBT (poly[(4,4′-bis(2-ethylhexyl)dithieno[3,2-b:2′,3′-d]silole)-2,6-diyl-alt-(4,7-bis(2-thienyl)-2,1,3-benzothiadiazole)-5,5′-diyl]). Complementary analytical techniques (AFM, HS-SPME-GC-MS, UV-vis and IR spectroscopy) have been employed to monitor the modification of the chemical structure of the polymer upon photooxidative aging and the subsequent consequences on its architecture and nanomechanical properties. Furthermore, these different characterization techniques have been combined with a theoretical approach based on quantum chemistry to elucidate the evolution of the polymer alkyl side chains and backbone throughout exposure. Si-PCPDTBT is shown to be more stable against photooxidation than the commonly studied p-type polymers P3HT and PCDTBT, while modeling demonstrated the benefits of using silicon as a bridging atom in terms of photostability.


GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With the novel technique of parallel sliding windows (PSW) to load subgraphs from disk to memory for vertex and edge updates, it can achieve data processing performance close to, and even better than, that of mainstream distributed graph engines. The GraphChi authors noted that its memory is not effectively utilized with large datasets, which leads to suboptimal computation performance. In this paper we are motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab to propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of the GraphChi algorithms. The main idea is to pin a fixed part of the data inside the memory during the whole computing process. Part-in-memory mode is successfully implemented with only about 40 additional lines of code in the original GraphChi engine. Extensive experiments were performed with large real datasets (including the Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory mode effectively reduces the GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, it is found that a larger portion of data pinned in memory does not always lead to better performance when the whole dataset cannot fit in memory; there exists an optimal portion of data which should be kept in memory to achieve the best computational performance.
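A toy sketch of the part-in-memory idea: a fixed fraction of vertex values is pinned in RAM for the whole computation, while the remainder is reloaded and evicted per interval. The class below and its dict-backed "disk" are illustrative assumptions, not GraphChi's actual API:

```python
# Toy illustration of "part-in-memory" storage: a fixed prefix of vertex
# values stays pinned in RAM, while other values go through a transient
# cache that is cleared at the end of each processing interval.
# The dict standing in for disk shards is purely illustrative.

class PartInMemoryStore:
    def __init__(self, disk, pin_fraction=0.4):
        self.disk = disk                       # vertex_id -> value, "on disk"
        n_pinned = int(len(disk) * pin_fraction)
        ids = sorted(disk)
        self.pinned = {v: disk[v] for v in ids[:n_pinned]}  # never evicted
        self.cache = {}                        # transient, cleared per interval

    def get(self, v):
        if v in self.pinned:
            return self.pinned[v]              # hit: no disk access needed
        if v not in self.cache:
            self.cache[v] = self.disk[v]       # simulated disk read
        return self.cache[v]

    def end_interval(self):
        self.cache.clear()                     # only the pinned part survives

store = PartInMemoryStore({i: i * i for i in range(10)}, pin_fraction=0.4)
store.end_interval()
print(store.get(2), store.get(7), len(store.pinned))
```

The trade-off reported in the paper shows up even in this sketch: pinning more data removes disk reads for those vertices, but shrinks the memory left for the sliding-window working set, so the best pin fraction is not simply "as large as possible".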