910 results for Goddard Space Flight Center. Mission Operations and Data Systems Directorate.
Abstract:
Manufacturing planning and control systems are fundamental to the successful operation of a manufacturing organisation. In order to improve their business performance, companies invest significantly in planning and control systems; however, not all companies realise the benefits sought. Many companies continue to suffer from high levels of inventory, shortages, obsolete parts, poor resource utilisation and poor delivery performance. This thesis argues that the fit between the planning and control system and the manufacturing organisation is a crucial element of success. The design of appropriate control systems is, therefore, important. The different approaches to the design of manufacturing planning and control systems are investigated. It is concluded that there is no provision within these design methodologies to properly assess the impact of a proposed design on the manufacturing facility. Consequently, an understanding of how a new (or modified) planning and control system will perform in the context of the complete manufacturing system is unlikely to be gained until after the system has been implemented and is running. Many modelling techniques are available, but discrete-event simulation is unique in its ability to model the complex dynamics inherent in manufacturing systems, of which the planning and control system is an integral component. The existing application of simulation to manufacturing control system issues is limited: although operational issues are addressed, application to the more fundamental design of control systems is rarely, if ever, considered. The lack of a suitable simulation-based modelling tool does not help matters. The requirements of a simulation tool capable of modelling a host of different planning and control systems are presented. It is argued that only through the application of object-oriented principles can these extensive requirements be met. This thesis reports on the development of an extensible class library called WBS/Control, which is based on object-oriented principles and discrete-event simulation. The functionality, both current and future, offered by WBS/Control means that different planning and control systems can be modelled: not only the more standard implementations but also hybrid systems and new designs. The flexibility implicit in the development of WBS/Control supports its application to design and operational issues. WBS/Control integrates fully with an existing manufacturing simulator to provide a more complete modelling environment.
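To make the idea of an extensible, object-oriented control-system library concrete, the following minimal sketch shows how interchangeable planning and control policies could be plugged into a small discrete-event loop. The class and function names (ControlPolicy, ReorderPointPolicy, KanbanPolicy, simulate) and all parameter values are illustrative assumptions for this sketch, not the actual WBS/Control API.

import heapq

class ControlPolicy:
    """Base class: a planning/control rule the simulator consults on every demand event."""
    def on_demand(self, t, stock):
        raise NotImplementedError

class ReorderPointPolicy(ControlPolicy):
    """Release a replenishment order whenever stock falls to the reorder point."""
    def __init__(self, reorder_point, order_qty):
        self.reorder_point, self.order_qty = reorder_point, order_qty
    def on_demand(self, t, stock):
        return self.order_qty if stock <= self.reorder_point else 0

class KanbanPolicy(ControlPolicy):
    """One-for-one replenishment: every unit consumed releases one replacement order."""
    def on_demand(self, t, stock):
        return 1

def simulate(policy, demand_times, lead_time=2.0, initial_stock=5):
    """Tiny discrete-event loop: customer demands and replenishment arrivals share one event queue."""
    events = [(t, "demand", 0) for t in demand_times]
    heapq.heapify(events)
    stock, log = initial_stock, []
    while events:
        t, kind, qty = heapq.heappop(events)
        if kind == "arrival":
            stock += qty
        else:
            stock -= 1
            release = policy.on_demand(t, stock)
            if release:
                heapq.heappush(events, (t + lead_time, "arrival", release))
        log.append((t, kind, stock))
    return log

# The same simulator runs under either policy, which is the point of the class hierarchy.
print(simulate(ReorderPointPolicy(reorder_point=2, order_qty=4), demand_times=[1, 2, 3, 4, 6, 8]))
print(simulate(KanbanPolicy(), demand_times=[1, 2, 3, 4, 6, 8]))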
Abstract:
Purpose - The main aim of the research is to shed light on the role of information and communication technology (ICT) in the logistics innovation process of small and medium-sized third-party logistics providers (3PLs). Design/methodology/approach - A triangulated research strategy was designed using a combination of quantitative and qualitative methods. The former involved a questionnaire survey of small and medium-sized Italian 3PLs, with 153 usable responses received. The latter comprised a series of focus groups and the use of seven case studies. Findings - There is a relatively low level of ICT expenditure, with few companies adopting formal technology investment strategies. The findings highlight the strategic importance of supply chain integration for 3PLs, with companies that have embarked on an expansion of their service portfolios showing a higher level of both ICT usage and information integration. Lack of technology skills in the workforce is a major constraint on ICT adoption. Given the proliferation of logistics-related ICT tools and applications in recent years, it has been difficult for small and medium-sized 3PLs to select appropriate applications. Research limitations/implications - The paper provides practical guidance for researchers on the effective use of mixed-methods research based on the concept of methodological triangulation. In particular, it shows how questionnaire surveys, focus groups and case study analysis can be used in combination to provide insights into multi-faceted supply chain phenomena. It also identifies several potentially fruitful avenues for future research in this specific field. Practical implications - The paper's findings provide useful guidance for practitioners on the effective adoption of ICT as part of the logistics innovation process. The findings also provide support for ICT vendors in the design of ICT solutions that are aligned to the needs of small 3PLs. Originality/value - There is currently a paucity of research into the drivers and inhibitors of ICT in the innovation processes of small and medium-sized 3PLs. This paper fills this gap by exploring the issue using a range of complementary research approaches. Copyright © 2013 Emerald Group Publishing Limited. All rights reserved.
Abstract:
MEG beamformer algorithms work by assuming that correlated and spatially distinct local field potentials do not develop in the human brain. Despite this assumption, images produced by such algorithms concur with those from other non-invasive and invasive estimates of brain function. In this paper we set out to develop a method that could be applied to raw MEG data to explicitly test this assumption. We show that a promax rotation of MEG channel data can be used as an approximate estimator of the number of spatially distinct correlated sources in any frequency band.
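As a sketch of how such a channel-level estimator might look in practice, the snippet below band-pass filters multichannel data and counts the factors retained by a promax-rotated factor analysis. It assumes the third-party factor_analyzer package; the variance threshold, frequency band and synthetic two-source data are illustrative choices, not the criteria used in the paper.

import numpy as np
from scipy.signal import butter, filtfilt
from factor_analyzer import FactorAnalyzer  # third-party package, assumed available

def estimate_n_sources(data, fs, band=(8.0, 13.0), max_factors=10, var_thresh=0.05):
    """Band-pass the channel data, fit a promax-rotated factor analysis, and count
    the factors that each explain more than var_thresh of the variance."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, data, axis=0)                 # data: samples x channels
    fa = FactorAnalyzer(n_factors=max_factors, rotation="promax")
    fa.fit(filtered)
    _, prop_var, _ = fa.get_factor_variance()               # proportion of variance per factor
    return int(np.sum(prop_var > var_thresh))

# Synthetic example: two correlated alpha-band "sources" mixed into 30 channels.
rng = np.random.default_rng(0)
fs, n = 250, 5000
t = np.arange(n) / fs
s1 = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(n)
s2 = 0.8 * s1 + np.sin(2 * np.pi * 11 * t) + 0.1 * rng.standard_normal(n)
channels = np.column_stack([s1, s2]) @ rng.standard_normal((2, 30)) + 0.5 * rng.standard_normal((n, 30))
print(estimate_n_sources(channels, fs))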
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary. This results in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems. Second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy. Finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
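The disk spin-down trade-off mentioned above can be made concrete with the classic break-even argument: spinning down only pays off when the idle period is long enough for the standby savings to outweigh the spin-up cost. The power and energy figures below are illustrative assumptions for this sketch, not measurements from the thesis.

# All power/energy figures are assumed values for illustration only.
ACTIVE_IDLE_W = 8.0      # disk idle but spinning (watts)
STANDBY_W = 1.0          # disk spun down (watts)
SPINUP_ENERGY_J = 120.0  # extra energy to spin the disk back up
SPINUP_DELAY_S = 6.0     # latency penalty paid by the first request after spin-down

def break_even_seconds():
    """Idle time beyond which spinning down saves energy despite the spin-up cost."""
    return SPINUP_ENERGY_J / (ACTIVE_IDLE_W - STANDBY_W)

def energy_with_timeout(idle_periods, timeout):
    """Energy consumed over a trace of idle-period lengths using a fixed spin-down timeout."""
    total = 0.0
    for idle in idle_periods:
        if idle <= timeout:
            total += idle * ACTIVE_IDLE_W            # never spun down
        else:
            total += timeout * ACTIVE_IDLE_W          # wait out the timeout
            total += (idle - timeout) * STANDBY_W     # sleep for the rest of the idle period
            total += SPINUP_ENERGY_J                  # pay to spin back up
    return total

trace = [2, 5, 30, 120, 600, 4, 900]  # seconds of idleness between bursts (made-up trace)
print("break-even idle time:", break_even_seconds(), "s")
for t in (0.0, break_even_seconds(), 60.0):
    print(f"timeout={t:6.1f}s -> {energy_with_timeout(trace, t):8.1f} J")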
Abstract:
The performance of building envelopes and roofing systems depends significantly on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with Partial Turbulence Simulation. In this dissertation, the test procedure and scaling requirements for tests in partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes, one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predicted data derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings like roof pavers are not well dealt with in many existing building codes and standards. Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information in codes and standards such as ASCE 7-10 on pressure coefficients for components and cladding.
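A small Monte Carlo sketch of the quasi-steady combination idea is given below: a stand-in for a tunnel-measured pressure-coefficient record (carrying the high-frequency turbulence) is rescaled by the square of a Gaussian low-frequency velocity ratio, and peaks of the combined record are compared with the tunnel-only record. The turbulence intensity, record length and the use of a quantile as the peak estimator are illustrative simplifications; the dissertation derives the combination analytically from the joint probability of the two processes.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a wind-tunnel pressure-coefficient record that already contains the
# high-frequency turbulence (synthetic here; in practice this would be measured Cp(t)).
cp_hf = -0.8 + 0.25 * rng.standard_normal(200_000)

# Low-frequency turbulence missing from the tunnel, treated quasi-steadily as a
# Gaussian fluctuation of the velocity ratio U_inst / U_mean.
Iu_low = 0.12                                    # assumed low-frequency turbulence intensity
velocity_ratio = 1.0 + Iu_low * rng.standard_normal(200_000)

# Quasi-steady combination: pressures scale with the square of the velocity ratio, so the
# full-scale-equivalent coefficient is the product of the two independent processes.
cp_full = cp_hf * velocity_ratio**2

for label, series in (("tunnel only", cp_hf), ("with low-frequency gusts", cp_full)):
    print(f"{label:>26}: mean={series.mean():+.3f}  negative peak={np.quantile(series, 1e-4):+.3f}")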
Abstract:
I proposed the study of two distinct aspects of the Ten-Eleven Translocation 2 (TET2) protein to understand its specific functions in different body systems. In Part I, I characterized the molecular mechanisms of Tet2 in the hematological system. As the second member of the Ten-Eleven Translocation protein family, TET2 is frequently mutated in leukemic patients. Previous studies have shown that TET2 mutations occur in 20% of myelodysplastic syndrome/myeloproliferative neoplasm (MDS/MPN) cases, 10% of T-cell lymphoma leukemia cases and 2% of B-cell lymphoma leukemia cases. Genetic mouse models also display distinct phenotypes of various types of hematological malignancies. I performed 5-hydroxymethylcytosine (5hmC) chromatin immunoprecipitation sequencing (ChIP-Seq) and RNA sequencing (RNA-Seq) of hematopoietic stem/progenitor cells to determine whether the deletion of Tet2 can affect the abundance of 5hmC at myeloid, T-cell and B-cell specific gene transcription start sites, which ultimately results in various hematological malignancies. Subsequent exome sequencing (Exome-Seq) showed that disease-specific genes are mutated in different types of tumors, which suggests that TET2 may protect the genome from being mutated. The direct interaction between TET2 and the MutS Homolog 6 (MSH6) protein suggests that TET2 is involved in DNA mismatch repair. Finally, in vivo mismatch repair studies show that the loss of Tet2 causes a mutator phenotype. Taken together, my data indicate that TET2 binds to MSH6 to protect genome integrity. In Part II, I sought to better understand the role of Tet2 in the nervous system. 5-hydroxymethylcytosine regulates epigenetic modification during neurodevelopment and aging. Thus, Tet2 may play a critical role in regulating adult neurogenesis. To examine the physiological significance of Tet2 in the nervous system, I first showed that the deletion of Tet2 reduces 5hmC levels in neural stem cells. Mice lacking Tet2 show abnormal hippocampal neurogenesis along with 5hmC alterations at different gene promoters and corresponding downregulation of gene expression. Luciferase reporter assays showed that two neural factors, Neurogenic differentiation 1 (NeuroD1) and Glial fibrillary acidic protein (Gfap), were down-regulated in Tet2 knockout cells. My results suggest that Tet2 regulates neural stem/progenitor cell proliferation and differentiation in the adult brain.
Abstract:
The objective of this research was to develop a methodology for transforming and dynamically segmenting data. Dynamic segmentation enables transportation system attributes and associated data to be stored in separate tables and merged when a specific query requires a particular set of data to be considered. A major benefit of dynamic segmentation is that individual tables can be more easily updated when attributes, performance characteristics, or usage patterns change over time. Applications of a progressive geographic database referencing system in transportation planning are vast. Summaries of system condition and performance can be made, and analyses of specific portions of a road system are facilitated.
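A minimal sketch of the dynamic-segmentation idea follows: attribute tables are kept separate, each record keyed by a route and a milepost range, and a query overlays them on the fly by splitting the route at every attribute breakpoint. The table contents, field names and the overlay function are illustrative assumptions for this sketch, not the referencing system developed in the research.

# A minimal sketch of dynamic segmentation: attribute tables are kept separate and
# keyed by (route, from_milepost, to_milepost); a query overlays them on the fly.
pavement = [("US-20", 0.0, 4.0, {"surface": "asphalt"}),
            ("US-20", 4.0, 9.0, {"surface": "concrete"})]
traffic  = [("US-20", 0.0, 6.5, {"aadt": 12000}),
            ("US-20", 6.5, 9.0, {"aadt": 18000})]

def overlay(route, *tables):
    """Merge event tables for one route by splitting it at every attribute breakpoint."""
    points = sorted({mp for table in tables
                        for r, lo, hi, _ in table if r == route
                        for mp in (lo, hi)})
    segments = []
    for lo, hi in zip(points, points[1:]):
        attrs = {}
        for table in tables:
            for r, a, b, values in table:
                if r == route and a <= lo and hi <= b:   # record covers this slice
                    attrs.update(values)
        segments.append((route, lo, hi, attrs))
    return segments

for seg in overlay("US-20", pavement, traffic):
    print(seg)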
Abstract:
Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work using genetic algorithms, where learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.
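The strength-update idea in the classifier-system approach can be illustrated in a few lines: each scheduling rule carries a strength, rules are selected in proportion to their strength, and the strength of the rule just used is nudged toward the reward it earned. The rule names, reward values and learning rate below are assumptions made for the sketch, not the settings used in the paper.

import random

rules = {"most_constrained_first": 1.0, "cheapest_shift_first": 1.0, "round_robin": 1.0}
BETA = 0.2  # learning rate for the strength update (assumed)

def pick_rule():
    """Roulette-wheel selection: rules are chosen in proportion to their strength."""
    total = sum(rules.values())
    r, acc = random.uniform(0, total), 0.0
    for name, strength in rules.items():
        acc += strength
        if r <= acc:
            return name
    return name

def update(rule, reward):
    """Move the rule's strength toward the reward it just produced."""
    rules[rule] += BETA * (reward - rules[rule])

random.seed(0)
for _ in range(200):
    rule = pick_rule()
    # Stand-in reward: pretend one rule tends to build better rosters than the others.
    reward = {"most_constrained_first": 2.0, "cheapest_shift_first": 1.0, "round_robin": 0.5}[rule]
    reward += random.uniform(-0.3, 0.3)
    update(rule, reward)

print({k: round(v, 2) for k, v in rules.items()})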
Abstract:
In this document, we describe the statistics, data and importance of the 13th CONTECSI – International Conference on Information Systems and Technology Management, which took place at the University of São Paulo from June 1st through 3rd and was organized by TECSI/EAC/FEA/USP/ECA/POLI. This report presents the statistics of the 13th CONTECSI, its Goals and Objectives, Program, Plenary Sessions, Doctoral Consortium, Parallel Sessions, Honorable Mentions and Committees. We would also like to highlight the vital financial support provided by CAPES, CNPq and FAPESP, as well as the support of FEA USP, POLI USP, ECA USP, ANPAD, AIS, ISACA, UNINOVE, Mackenzie, Universidade do Porto, Rutgers School/USA, São Paulo Convention Bureau and CCINT-FEA-USP.
Abstract:
Liquid crystals (LCs) have revolutionized display and communication technologies. Doping of LCs with nanoparticles such as carbon nanotubes, gold nanoparticles and ferroelectric nanoparticles has garnered the interest of the research community, as such dopants aid in improving electro-optic performance. In this thesis, we examine a hybrid nanocomposite comprising 5CB liquid crystal and block-copolymer-functionalized barium titanate ferroelectric nanoparticles. This hybrid system exhibits a giant soft-memory effect. Here, the spontaneous polarization of the ferroelectric nanoparticles couples synergistically with the radially aligned BCP chains to create nanoscopic domains that can be rotated electromechanically and locked in space even after the removal of the applied electric field. The resulting non-volatile memory is several times larger than that of the non-functionalized sample and provides insight into the role of non-covalent polymer functionalization. We also present the latest results from a dielectric and spectroscopic study of the field-assisted alignment of gold nanorods.
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The nondestructive FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties using FWD deflections is a difficult inverse problem, and the solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as the use of numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop SOFTSYS models. The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results. The thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from the FWD tests.
The backcalculated asphalt concrete layer thickness results matched better for full-depth asphalt flexible pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
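The surrogate-assisted search that this kind of backcalculation relies on can be sketched compactly: a genetic algorithm proposes layer parameters, a fast surrogate predicts the deflection basin they would produce, and the fitness is the mismatch with the measured basin. In the sketch below a toy analytic function stands in for both ILLI-PAVE and the trained ANN surrogate, and the parameter bounds, sensor offsets and GA settings are illustrative assumptions, not SOFTSYS internals.

import numpy as np

rng = np.random.default_rng(2)

def surrogate_deflections(params):
    """Stand-in for the trained ANN surrogate of ILLI-PAVE: maps (HMA thickness,
    HMA modulus, subgrade modulus) to a 7-sensor deflection basin. Purely synthetic."""
    thickness, e_hma, e_subgrade = params
    offsets = np.array([0.0, 0.2, 0.3, 0.45, 0.6, 0.9, 1.5])   # sensor offsets (m)
    stiffness = thickness * e_hma + 20.0 * e_subgrade
    return 70.0 / (stiffness * (1.0 + 3.0 * offsets))          # decays with offset

measured = surrogate_deflections(np.array([0.20, 3000.0, 80.0]))  # pretend FWD basin

LOW = np.array([0.10, 1000.0, 20.0])    # search bounds: thickness (m), E_HMA, E_subgrade (MPa)
HIGH = np.array([0.40, 6000.0, 200.0])

def fitness(pop):
    """RMS error between the measured basin and the surrogate's prediction for each candidate."""
    return np.array([np.sqrt(np.mean((surrogate_deflections(p) - measured) ** 2)) for p in pop])

pop = rng.uniform(LOW, HIGH, size=(60, 3))
for _ in range(150):                                   # simple real-coded GA
    err = fitness(pop)
    parents = pop[np.argsort(err)][:30]                # truncation selection
    mates = parents[rng.integers(0, 30, size=30)]
    children = 0.5 * (parents + mates)                 # arithmetic crossover
    children += rng.normal(0.0, 0.02, children.shape) * (HIGH - LOW)   # mutation
    pop = np.clip(np.vstack([parents, children]), LOW, HIGH)

best = pop[np.argmin(fitness(pop))]
print("backcalculated thickness / E_HMA / E_subgrade:", np.round(best, 3))

Because the toy surrogate depends on the parameters only through a single combined stiffness, many different parameter combinations reproduce the basin equally well, which mirrors the non-uniqueness problem described above.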
Abstract:
Steam injection is currently the most widely used thermal oil recovery method because the technique is highly developed and allows high recovery factors. However, injection of superheated steam into the reservoir affects the entire structure of the well, including the cemented layer, which suffers a retrogression of compressive strength and an increase in permeability due to the formation of more crystalline and denser phases at temperatures above 110 °C. These changes result in failures in the cement that favor the entrance of formation fluids into the annular space, resulting in unsafe operations and restrictions on the economic life of the well. Strength retrogression can, however, be prevented by partial replacement of the cement with silica-based materials that reduce the CaO/SiO2 ratio of the cement slurries, changing the trajectory of the reactions and converting those deleterious phases into phases with satisfactory mechanical strength and permeability. The aim of this study was to evaluate the behavior of a silica-rich ceramic waste material as a partial and total substitute for a mineral additive used to combat the strength retrogression of cement slurries subjected to high temperatures. The evaluation was made by compressive strength testing, X-ray diffraction (XRD) and thermogravimetry (TG/DTG). The samples were submitted to a cycle of low temperature (38 °C) for 28 days and to a cycle of low temperature followed by exposure to 280 °C and 1000 psi for 3 days. The results showed that additions of up to 30% of the waste material alone are not enough to prevent strength retrogression, while slurries with additions of the waste material combined with silica flour in various proportions produced hydrated products with low Ca/Si ratios that maintained the compressive strength at satisfactory levels.
Abstract:
The use of silvopastoral systems (SPS) can be a good alternative to reduce the environmental impacts of livestock breeding in Brazil. Despite the advantages offered by public policies, many producers hesitate to use this system. One of the reasons is the lack of information on the health and productivity of cattle raised under these conditions. The experiment reported here was designed to compare the behavior of infection by gastrointestinal nematodes and the weight gain of beef cattle raised in an SPS and in a conventional pasture system. We monitored the number of eggs per gram of feces, the prevalent nematode genera, data on climate, forage availability, weight gain and packed cell volume (PCV) of the animals bred in the two systems. Infection by nematodes was significantly higher in the cattle raised in the SPS (p < 0.05). The coprocultures revealed the presence of nematodes of the genera Haemonchus, Cooperia, Oesophagostomum and Trichostrongylus in both systems, but the mean infestation rates of Haemonchus and Cooperia were higher in the SPS (p < 0.05). The average PCV values did not differ between the cattle in the two systems. The individual weight gain and stocking rate in the period did not vary between the systems (p > 0.05). Despite the higher prevalence of nematodes in the SPS, no negative impact was detected on the animals' weight gain and health. The results of this experiment indicate that, under the conditions studied, there is no need to alter the parasite management to assure good productive performance of the cattle.