945 results for General Electric Research Laboratories
Abstract:
"This study ... was carried out ... in the Small Aircraft Engine Department at Lynn Massachusetts."
Abstract:
Cover title.
Abstract:
Prior to the formation of the Incandescent Lamp Department, the Lamp Works of the General Electric Company were divided into the Edison Lamp and the National Lamp Divisions. Later, the Incandescent Lamp Department became the Lamp Department.
Abstract:
Mode of access: Internet.
Abstract:
Photocopy.
Abstract:
"Work performed under contract DA-30-069-ORD-1955, administered by Bell Telephone Laboratories, Whippany, N. J."
Abstract:
Cover title.
Abstract:
This thesis follows the argument that, to fully understand the current position of national research laboratories in Great Britain, one needs to study the historical development of the government research establishment as a specific social institution. A particular model is outlined in which it is argued that institutional characteristics evolve through the continual interplay between internal development and environmental factors within a changing political and economic context, and that the continuous development of an institution depends on its ability to adapt to changes in its operational environment. Within this framework, important historical precedents for formal government institutional support for applied research are identified, and the transition from private to public patronage is documented. The emergence and consolidation of government research laboratories in Britain is described in detail. The subsequent relative decline of public laboratories is interpreted in terms of the undermining of a traditional role, resulting in a legitimation crisis. It is concluded that it is no longer feasible to consider the public research laboratory as a coherent institutional form, and that the future of each individual laboratory can only be considered in relation to the institutional needs of its own sphere of operation. Nevertheless, the laboratories have been forced into decline in an essentially unplanned way, which may have serious consequences for the maintenance of the scientific and technical infrastructures necessary for material progress in the national context.
Abstract:
The acceleration of technological change and the process of globalization have intensified competition and the need for new products (goods and services), making technological, economic, and social advances a growing concern for organizations. This work presents an overview of the development of wind energy-related technologies and design trends. The research comprises (i) a literature review on technological innovation, technological forecasting methods, and the fundamentals of wind power; (ii) an analysis of patents, in which the current technology landscape is studied by retrieving information from patent databases; and (iii) the preparation of a map of technological development and of future trends in wind turbine construction, built from information in the literature and news from the sector. Step (ii) covered 25,644 patents filed between 2003 and 2012, with the US and China leading the ranking of depositors, and the American company General Electric and the Japanese company Mitsubishi standing out as the largest holders of wind technology. Step (iii) identified that most of the innovations in the technological evolution of wind power are incremental product innovations brought to market. The proposed future trends indicate that future wind turbines will tend to have a horizontal-axis synchronous drivetrain, with rotor diameters of up to 164 m, nacelle-top heights of up to 194 m, and generating capacities of up to 7.5 MW. The blades tend toward new materials with low density and high strength, and the towers toward hybrid materials combining steel and concrete. This work aims to fill an existing gap in the academic literature on the use of technological forecasting techniques for the wind energy industry, by showing that patent analysis, combined with the analysis of scientific articles and sector news, provides knowledge about the industry, improves the quality of R&D investment decisions, and hence improves the efficiency and effectiveness of wind power generation.
Abstract:
In this thesis, the first-order radar cross section (RCS) of an iceberg is derived and simulated. This analysis takes place in the context of a monostatic high frequency surface wave radar with a vertical dipole source that is driven by a pulsed waveform. The starting point of this work is a general electric field equation derived previously for an arbitrarily shaped iceberg region surrounded by an ocean surface. The condition of monostatic backscatter is applied to this general field equation and the resulting expression is inverse Fourier transformed. In the time domain the excitation current of the transmit antenna is specified to be a pulsed sinusoid signal. The resulting electric field equation is simplified and its physical significance is assessed. The field equation is then further simplified by restricting the iceberg's size to fit within a single radar patch width. The power received by the radar is calculated using this electric field equation. Comparing the received power with the radar range equation gives a general expression for the iceberg RCS. The iceberg RCS equation is found to depend on several parameters including the geometry of the iceberg, the radar frequency, and the electrical parameters of both the iceberg and the ocean surface. The RCS is rewritten in a form suitable for simulations and simulations are carried out for rectangularly shaped icebergs. Simulation results are discussed and are found to be consistent with existing research.
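For context, the final step described above amounts to equating the received power with the radar range equation and solving for the cross section. A minimal sketch using the conventional free-space monostatic form (illustrative only; the thesis works with a surface-wave formulation whose exact expression is not given in this abstract):

\[ P_r = \frac{P_t\, G^2 \lambda^2 \sigma}{(4\pi)^3 R^4} \quad\Longrightarrow\quad \sigma = \frac{(4\pi)^3 R^4 P_r}{P_t\, G^2 \lambda^2}, \]

where \(P_t\) is the transmitted power, \(G\) the antenna gain, \(\lambda\) the radar wavelength, \(R\) the range to the target, and \(\sigma\) the RCS.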
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging device with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU); higher HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts in CT images, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
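For reference, the HU scale mentioned above is conventionally defined from the linear attenuation coefficient \(\mu\) of the imaged material relative to water (the standard definition, not specific to this study):

\[ \text{HU} = 1000 \times \frac{\mu - \mu_{\text{water}}}{\mu_{\text{water}}}, \]

so water sits at 0 HU, air near -1000 HU, and dense metals reach several thousand HU, which is why metal so strongly perturbs the reconstruction around it.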
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared, and the homogeneity index (HI) was calculated.
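The abstract does not state which homogeneity-index convention was used; one common definition, consistent with the later remark that values near zero are desirable, is the ICRU form (an assumption for illustration):

\[ \text{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}, \]

where \(D_{x\%}\) is the dose received by \(x\%\) of the PTV volume, and HI = 0 would indicate a perfectly homogeneous dose.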
Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4% to 5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm, a percent difference of 1.7-4.2% per fraction between plans with and without the algorithm. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum doses, and 1.4-1.7% for the mean doses. Homogeneity indices (HI) ranging from 0.068 to 0.065 for the dual-energy method and from 0.063 to 0.141 with the projection-based MAR algorithm were also calculated.
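The "percent error" figures above presumably follow the usual convention (an assumption; the abstract does not define it):

\[ \%\,\text{error} = \frac{\lvert D_{\text{calculated}} - D_{\text{measured}} \rvert}{D_{\text{measured}}} \times 100\%. \]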
Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%. This error undermines the goal of radiation therapy to deliver precise treatment; the GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than either with or without the GE MAR algorithm.
Abstract:
Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing the latter with the expertise needed to understand the results of mathematical fire modelling. The course and the associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background; they offer a friendly guide through the unfamiliar terrain of mathematical modelling. Concepts and techniques are introduced and demonstrated in seminars, and those attending also gain experience in using the methods during hands-on tutorial and workshop sessions. On completion of this short course, participants should:
- be familiar with the concepts of zone and field modelling;
- be familiar with zone and field model assumptions;
- have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling;
- be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and
- be able to interpret model predictions.
The result is that the fire safety engineer is empowered to realise the full value of mathematical models in predicting fire development and determining the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.
Abstract:
"These studies were conducted by the General Electric Company, Reentry Systems Department, for the Stability and Control Section of the Flight Dynamics Laboratory of the Air Force Research and Technology Division."
Abstract:
Over the last decade, brief intervention for alcohol problems has become a well-validated and accepted treatment, with brief interventions frequently showing equivalence in outcome to more extended treatments (Bien et al., 1993). A recent review of these studies found that heavy drinkers who received interventions of less than 1 h were almost twice as likely to moderate their drinking over the following 6-12 months as those not receiving intervention (Wilk et al., 1997). Some studies have used motivational interviewing (MI) strategies (Monti et al., 1999); others have simply given information and advice to reduce drinking (Fleming et al., 1997). Leaflets, information on strategies to assist in the attempt, or follow-up sessions are sometimes provided (Fleming et al., 1997). In general practice research, the provision of one or more follow-up sessions increases the reliability of intake reductions across studies (Poikolainen, 1999).
Abstract:
In their studies, Eley and Meyer (2004) and Meyer and Cleary (1998) found that there are sources of variation in the affective and process dimensions of learning in mathematics and clinical diagnosis specific to each of these disciplines. Meyer and Shanahan (2002) argue that "general purpose models of student learning that are transportable across different discipline contexts cannot, by definition, be sensitive to sources of variation that may be subject-specific" (2002, p. 204). In other words, to explain the differences in learning approaches and outcomes in a particular discipline, there are discipline-specific factors which cannot be uncovered in general educational research. Meyer and Shanahan (2002) argue for a need to "seek additional sources of variation that are perhaps conceptually unique ... within the discourse of particular disciplines" (p. 204). In this paper, the development of an economics-specific construct (called economic thinking ability) is reported. The construct aims to measure a discipline-specific ability of students that has an important influence on learning in economics. Using this construct, the economic thinking abilities of introductory and intermediate level economics students were measured prior to the commencement, and at the end, of their study over one semester. This enabled factors associated with students' pre-course economic thinking ability, and their development in economic thinking ability, to be investigated. The empirical findings address the 'nature' versus 'nurture' debate in economics education (Frank et al., 1993; Frey et al., 1993; Haucap and Tobias, 2003). The implications for future research in economics education are also discussed.