881 results for Large-scale analysis
Abstract:
We combine multi-wavelength data in the AEGIS-XD and C-COSMOS surveys to measure the typical dark matter halo mass of X-ray selected active galactic nuclei (AGN) [L_X(2–10 keV) > 10^42 erg s^−1] in comparison with far-infrared selected star-forming galaxies detected in the Herschel/PEP survey (PACS Evolutionary Probe; L_IR > 10^11 L_⊙) and quiescent systems at z ≈ 1. We develop a novel method to measure the clustering of extragalactic populations that uses photometric redshift probability distribution functions in addition to any spectroscopy. This is advantageous in that all sources in the sample are used in the clustering analysis, not just the subset with secure spectroscopy. The method works best for large samples: the loss of accuracy caused by the lack of spectroscopy is balanced by the increased number of sources used to measure the clustering. We find that X-ray AGN, far-infrared selected star-forming galaxies and passive systems in the redshift interval 0.6 < z < 1.4 reside in haloes of similar mass, log M_DMH/(M_⊙ h^−1) ≈ 13.0. We argue that this is because the galaxies in all three samples (AGN, star-forming, passive) have similar stellar mass distributions, approximated by the J-band luminosity. Therefore, all galaxies that can potentially host X-ray AGN, because they have stellar masses in the appropriate range, live in dark matter haloes of log M_DMH/(M_⊙ h^−1) ≈ 13.0, independent of their star formation rates. This suggests that the stellar mass of X-ray AGN hosts is driving the observed clustering properties of this population. We also speculate that trends between AGN properties (e.g. luminosity, level of obscuration) and large-scale environment may be related to differences in the stellar mass of the host galaxies.
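As a rough illustration of the PDF-weighted clustering idea described above, a minimal Python sketch (hypothetical names and data structures, not the paper's code) might weight each galaxy pair by the overlap of the two photometric-redshift probability distribution functions when accumulating pair counts:

import numpy as np

# Hypothetical sketch: every source contributes to the pair counts through its
# photometric-redshift PDF instead of only the spectroscopic subset being used.
z_grid = np.linspace(0.0, 3.0, 301)   # common redshift grid
dz = z_grid[1] - z_grid[0]

def pdf_overlap(pdf_a, pdf_b):
    # Probability weight that two sources lie at the same redshift,
    # approximated by the integral of the product of their normalized PDFs.
    return np.sum(pdf_a * pdf_b) * dz

def weighted_pair_count(pdfs, same_angular_bin):
    # Accumulate PDF-weighted counts over pairs flagged (boolean matrix) as
    # falling in a given angular separation bin.
    total = 0.0
    n = len(pdfs)
    for i in range(n):
        for j in range(i + 1, n):
            if same_angular_bin[i, j]:
                total += pdf_overlap(pdfs[i], pdfs[j])
    return total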
Abstract:
The recent massive growth of online media and the rise of user-generated content (e.g. weblogs, Twitter, Facebook) pose challenges for accessing and interpreting multilingual data efficiently, quickly and affordably. The goal of the TrendMiner project is to develop innovative, portable, open-source, real-time methods for summarisation and cross-lingual mining of large-scale social media. The results are being validated in three use cases: decision support in the financial domain (with analysts, business people, regulators and economists), political monitoring and analysis (with journalists, economists and politicians), and monitoring of health-related social media in order to detect information on adverse drug reactions.
Abstract:
In this work, we propose a new methodology for the large-scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree-of-freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates recent advances in Life Cycle Assessment (LCA). The optimization strategy guarantees convergence to a local optimum within the tolerance of the numerical noise.
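As a minimal sketch of the surrogate-modelling step, assuming scikit-learn's Gaussian-process regressor as the Kriging implementation and a toy stand-in for the noisy flowsheet unit (neither is from the paper):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def noisy_unit(x):
    # Stand-in for an expensive simulator unit with numerical noise.
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.01 * rng.standard_normal(len(x))

X_train = rng.uniform(0.0, 1.0, size=(40, 2))   # sampled unit inputs
y_train = noisy_unit(X_train)                   # simulator responses

kernel = RBF(length_scale=[0.2, 0.2]) + WhiteKernel(noise_level=1e-4)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# The surrogate is cheap and smooth, so a derivative-based optimizer can use it
# in place of the noisy unit inside the hybrid simulation-optimization model.
y_pred, y_std = surrogate.predict(rng.uniform(0.0, 1.0, size=(5, 2)), return_std=True)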
Abstract:
A classic T-cell phenotype in systemic lupus erythematosus (SLE) is the downregulation and replacement of the CD3ζ chain that alters T-cell receptor signaling. However, genetic associations with SLE in the human CD247 locus that encodes CD3ζ are not well established and require replication in independent cohorts. Our aim was therefore to examine, localize and validate the CD247-SLE association in a large multiethnic population. We typed 44 contiguous CD247 single-nucleotide polymorphisms (SNPs) in 8922 SLE patients and 8077 controls from four ethnically distinct populations. The strongest associations were found in the Asian population (11 SNPs in intron 1, 4.99 × 10^−4 < P < 4.15 × 10^−2), where we further identified a five-marker haplotype (rs12141731-rs2949655-rs16859085-rs12144621-rs858554; G-G-A-G-A; P_hap = 2.12 × 10^−5) that exceeded in significance the most associated single SNP, rs858554 (minor allele frequency in controls = 13%; P = 4.99 × 10^−4, odds ratio = 1.32). Imputation and subsequent association analysis showed evidence of association (P < 0.05) at 27 additional SNPs within intron 1. Cross-ethnic meta-analysis, assuming an additive genetic model adjusted for population proportions, showed five SNPs with significant P-values (1.40 × 10^−3 < P < 3.97 × 10^−2), with one (rs704848) remaining significant after Bonferroni correction (P_meta = 2.66 × 10^−2). Our study independently confirms and extends the association of SLE with CD247, which is shared by various autoimmune disorders and supports a common T-cell-mediated mechanism.
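For readers unfamiliar with the statistics quoted above, a small Python example with made-up allele counts (not the study's data) shows how a single-SNP odds ratio, chi-square P-value and Bonferroni correction are obtained:

import numpy as np
from scipy.stats import chi2_contingency

# Illustrative single-SNP association test with hypothetical counts:
# 2x2 table of minor/major allele counts in cases versus controls.
table = np.array([[2450, 15394],    # cases:    minor, major
                  [1900, 14254]])   # controls: minor, major

chi2, p_value, dof, _ = chi2_contingency(table)

# Odds ratio for carrying the minor allele in cases versus controls.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

# Bonferroni correction for the number of SNPs tested (44 typed SNPs here).
n_tests = 44
p_bonferroni = min(1.0, p_value * n_tests)

print(f"OR = {odds_ratio:.2f}, p = {p_value:.2e}, Bonferroni p = {p_bonferroni:.2e}")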
Abstract:
Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a back-propagation neural network approach to Data Envelopment Analysis to address this problem for the very large-scale datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far less than those of conventional DEA methods, so it can be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and the results are compared with those obtained by conventional DEA.
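For context, the quantity such a network learns to reproduce is the efficiency score of the standard input-oriented CCR envelopment linear program; a minimal sketch using scipy.optimize.linprog with toy data (not the paper's implementation) follows:

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    # Input-oriented CCR envelopment LP for DMU k.
    # X: inputs (m x n), Y: outputs (s x n); each column is one DMU.
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[:, [k]], X])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun  # efficiency score of DMU k

X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])   # two inputs, five DMUs
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])   # one output
scores = [ccr_efficiency(X, Y, k) for k in range(X.shape[1])]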
Abstract:
In this thesis, patterns of working hours in large-scale grocery retailing in Britain and France are compared. The research uses a cross-national comparative methodology, and the analysis is based on information derived from secondary sources and on empirical research in large-scale grocery retailing involving employers and trade unions at industry level and case studies at outlet level. The thesis begins by comparing national patterns of working hours in Britain and France over the post-war period. Subsequently, a detailed comparison of working hours in large-scale grocery retailing in Britain and France is carried out through the analysis of secondary sources and empirical data. Emphasis is placed on analyzing part-time working hours, which are compared and contrasted at national level and explained in terms of supply and demand factors. The relationships between the structuring of, and satisfaction with, working hours and the factors determining women's integration in the workforce in Britain and France are investigated. Part-time hours are then compared and contrasted in large-scale grocery retailing in the context of the analysis of working hours. The relationship between the structuring of working hours and satisfaction with them is examined in both countries through research with women part-timers in case study outlets. The cross-national comparative methodology is used to examine whether the dissimilar national contexts of Britain and France have led to different patterns of working hours in large-scale grocery retailing. The principal conclusion is that significant differences are found in the length, organization and flexibility of working hours, and that these differences can be attributed to the dissimilar socio-economic, political and cultural contexts of the two countries.
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system to estimate the machine system parameters using the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need to implement and maintain expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to establish confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure owing to the lack of knowledge regarding higher-order and nonlinear factors, such as backlash and compliance. The failure of the simulation model to capture the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes of the experiments show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation from the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
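A surrogate data test for nonlinearity of the kind mentioned above can be sketched as follows; the phase-randomized surrogates and the time-reversal-asymmetry statistic are illustrative choices, not necessarily those used in the thesis:

import numpy as np

def phase_randomized_surrogate(x, rng):
    # Surrogates preserve the linear (power-spectrum) properties of the signal
    # but destroy any nonlinear structure.
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    phases[0] = 0.0        # keep the mean unchanged
    phases[-1] = 0.0       # keep the Nyquist term real for even-length signals
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

def time_reversal_asymmetry(x, lag=1):
    # Simple third-order statistic; close to zero for linear Gaussian processes.
    return np.mean((x[lag:] - x[:-lag]) ** 3)

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0.0, 40.0 * np.pi, 4096)) ** 3 + 0.1 * rng.standard_normal(4096)

stat_original = time_reversal_asymmetry(signal)
stat_surrogates = np.array([time_reversal_asymmetry(phase_randomized_surrogate(signal, rng))
                            for _ in range(99)])
# An extreme rank of the original statistic within the surrogate distribution
# is evidence against the linear null hypothesis, i.e. evidence of nonlinearity.
rank = int(np.sum(stat_original > stat_surrogates))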
Abstract:
Different procurement decisions taken by relief organizations can have considerably different implications for the transport, storage, and distribution of humanitarian aid and can ultimately influence the performance of the humanitarian supply chain and the delivery of humanitarian aid. In this article, we look into what resources are needed and how these resources evolve in the delivery of humanitarian aid. Drawing on the resource-based view of the firm, we develop a framework to categorize the impact of local resources on the configuration of humanitarian supply chains. In contrast to other papers, the importance of localizing the configuration of the humanitarian supply chain is not only recognized conceptually but also investigated empirically. In terms of methodology, this article is based on the analysis of secondary data from two housing reconstruction projects. Findings indicate that the use of local resources in humanitarian aid has positive effects on programs' overall supply chain performance, and these effects are not limited to the macroeconomic perspective but extend to improvements related to the use of knowledge. At the same time, it was found that local sourcing often comes with a number of problems; for example, in one of the cases there were significant problems related to the scarcity of local supplies. Both housing reconstruction projects indicated a continuous need for changes throughout the programs, as a dynamic supply chain configuration is important for the long-term sustainability of reconstruction aid. © 2014 Decision Sciences Institute.
Abstract:
Data envelopment analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). For a large-scale data set, especially one with negative measures, DEA inevitably requires huge computer resources in terms of memory and CPU time. In recent years, a wide range of studies has been conducted on combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has some computational advantages over the corresponding DEA models; therefore, it can be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.
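A minimal sketch of the supervised feed-forward idea, using scikit-learn's MLPRegressor as a stand-in for the authors' network and random placeholder data:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 6))            # DMU inputs/outputs (may be negative)
efficiency = rng.uniform(0.2, 1.0, size=2000)    # placeholder for DEA scores of a training subsample

# Train the network to reproduce efficiency scores computed offline for a
# subsample, so new DMUs can be scored with a forward pass instead of one LP each.
scaler = StandardScaler().fit(features)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(features), efficiency)

new_scores = model.predict(scaler.transform(rng.normal(size=(5, 6))))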
Abstract:
Human mesenchymal stem cell (hMSC) therapies have the potential to revolutionise the healthcare industry and replicate the success of the therapeutic protein industry; however, for this to be achieved there is a need to apply key bioprocess engineering principles and adopt a quantitative approach for large-scale, reproducible hMSC bioprocess development. Here we provide a quantitative analysis of the changes in concentration of glucose, lactate and ammonium with time during hMSC monolayer culture over 4 passages, under 100% and 20% dissolved oxygen (dO2), where either a 100%, 50% or 0% growth medium exchange was performed after 72 h in culture. Yield coefficients, specific growth rates (h−1) and doubling times (h) were calculated for all cases. The 100% dO2 flasks outperformed the 20% dO2 flasks with respect to cumulative cell number, with the latter consuming more glucose and producing more lactate and ammonium. Furthermore, the 100% and 50% medium exchange conditions resulted in similar cumulative cell numbers, whilst the 0% condition was significantly lower. Cell immunophenotype and multipotency were not affected by the experimental culture conditions. This study demonstrates the importance of determining optimal culture conditions for hMSC expansion and highlights a potential cost saving from making only a 50% medium exchange, which may prove significant for large-scale bioprocessing. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
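The growth metrics named above follow from straightforward exponential-growth bookkeeping; a short worked example with placeholder numbers (not the study's measurements):

import numpy as np

t0, t1 = 0.0, 72.0                  # hours in culture
cells0, cells1 = 2.0e5, 8.5e5       # viable cell counts at t0 and t1
glucose0, glucose1 = 25.0, 13.0     # mM
lactate0, lactate1 = 2.0, 23.0      # mM

mu = np.log(cells1 / cells0) / (t1 - t0)                    # specific growth rate (h^-1)
doubling_time = np.log(2.0) / mu                            # doubling time (h)
y_lac_glc = (lactate1 - lactate0) / (glucose0 - glucose1)   # lactate-on-glucose yield (mol/mol)

print(f"mu = {mu:.4f} h^-1, td = {doubling_time:.1f} h, Y_lac/glc = {y_lac_glc:.2f}")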
Abstract:
This chapter discusses network protection of high-voltage direct current (HVDC) transmission systems for large-scale offshore wind farms where the HVDC system utilizes voltage-source converters. The multi-terminal HVDC network topology and protection allocation and configuration are discussed with DC circuit breaker and protection relay configurations studied for different fault conditions. A detailed protection scheme is designed with a solution that does not require relay communication. Advanced understanding of protection system design and operation is necessary for reliable and safe operation of the meshed HVDC system under fault conditions. Meshed-HVDC systems are important as they will be used to interconnect large-scale offshore wind generation projects. Offshore wind generation is growing rapidly and offers a means of securing energy supply and addressing emissions targets whilst minimising community impacts. There are ambitious plans concerning such projects in Europe and in the Asia-Pacific region which will all require a reliable yet economic system to generate, collect, and transmit electrical power from renewable resources. Collective offshore wind farms are efficient and have potential as a significant low-carbon energy source. However, this requires a reliable collection and transmission system. Offshore wind power generation is a relatively new area and lacks systematic analysis of faults and associated operational experience to enhance further development. Appropriate fault protection schemes are required and this chapter highlights the process of developing and assessing such schemes. The chapter illustrates the basic meshed topology, identifies the need for distance evaluation, and appropriate cable models, then details the design and operation of the protection scheme with simulation results used to illustrate operation. © Springer Science+Business Media Singapore 2014.
Abstract:
Cell-based therapies have the potential to contribute to global healthcare, whereby living cells and tissues can be used as medicinal therapies. Despite this potential, many challenges remain before the full value of this emerging field can be realized. Characterizing the input material for cell-based therapy bioprocesses from multiple donors is necessary to identify and understand the potential implications of input variation on process development. In this work, we have characterized bone marrow derived human mesenchymal stem cells (BM-hMSCs) from multiple donors and discussed the implications of the measurable input variation for the development of autologous and allogeneic cell-based therapy manufacturing processes. The range of cumulative population doublings across the five BM-hMSC lines over 30 days of culture was 5.93, with an 18.2% range in colony-forming efficiency at the end of the culture process and a 55.1% difference in the production of interleukin-6 between these cell lines. It has been demonstrated that this variation results in a range in process time between these donor hMSC lines for a hypothetical product of over 13 days, creating potential batch timing issues when manufacturing products from multiple patients. All BM-hMSC donor lines demonstrated conformity to the ISCT criteria but showed differences in cell morphology. Metabolite analysis showed that hMSCs from the different donors have a range in glucose consumption of 26.98 pmol cell−1 day−1, lactate production of 29.45 pmol cell−1 day−1 and ammonium production of 1.35 pmol cell−1 day−1, demonstrating the extent of donor variability throughout the expansion process. Measuring informative product attributes during process development will facilitate progress towards consistent manufacturing processes, a critical step in the translation of cell-based therapies.
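The donor-comparison metrics above (cumulative population doublings and per-cell metabolite rates) can be computed as in this sketch, again with placeholder numbers rather than the study's data:

import numpy as np

def population_doublings(n_seeded, n_harvested):
    # Doublings achieved in one passage.
    return np.log2(n_harvested / n_seeded)

def specific_rate(delta_metabolite_pmol, mean_cell_number, days):
    # Metabolite consumed or produced, in pmol per cell per day.
    return delta_metabolite_pmol / (mean_cell_number * days)

# Cumulative population doublings over sequential passages for one donor line.
cumulative_pd = sum(population_doublings(5.0e5, n) for n in (2.1e6, 1.8e6, 2.4e6))

# Example specific glucose consumption over a 3-day feed interval.
glucose_rate = specific_rate(delta_metabolite_pmol=1.1e8, mean_cell_number=1.2e6, days=3)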
Abstract:
The purpose of this investigation was to develop new techniques to generate segmental assessments of body composition based on Segmental Bioelectrical Impedance Analysis (SBIA). An equally important consideration was the design, simulation, development, and software and hardware integration of the SBIA system. This integration was carried out with a Very Large Scale Integration (VLSI) Field Programmable Gate Array (FPGA) microcontroller that analyzed the measurements obtained from segments of the body and provided full-body and segmental Fat Free Mass (FFM) and Fat Mass (FM) percentages. The issues related to estimating body composition in persons with spinal cord injury (SCI) were also addressed and investigated. This investigation demonstrated that the SBIA methodology provided accurate segmental body composition measurements. Disabled individuals are expected to benefit from these SBIA evaluations, as they are non-invasive and suitable for paralyzed individuals. The SBIA VLSI system may replace bulky, inflexible electronic modules attached to human bodies.
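As a rough illustration of how segmental impedance measurements become fat-free and fat mass estimates (the coefficients below are placeholders, not the regression equations developed in this work):

def segment_ffm(length_cm, resistance_ohm, a=0.25, b=0.1):
    # A common BIA form: FFM_segment ~ a * (length^2 / R) + b, with placeholder
    # coefficients a and b standing in for population-specific regressions.
    return a * (length_cm ** 2 / resistance_ohm) + b

segments = {                      # (segment length in cm, measured resistance in ohms)
    "right_arm": (60.0, 250.0),
    "left_arm":  (60.0, 255.0),
    "trunk":     (55.0, 30.0),
    "right_leg": (85.0, 220.0),
    "left_leg":  (85.0, 225.0),
}

ffm_by_segment = {name: segment_ffm(l, r) for name, (l, r) in segments.items()}
total_ffm = sum(ffm_by_segment.values())          # whole-body fat-free mass (kg)

body_mass = 70.0                                  # kg, measured separately
fat_mass_percent = 100.0 * (body_mass - total_ffm) / body_mass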