953 results for test-process features
Abstract:
A key issue in the approval process of antidepressants is the inconsistency of results across phase III clinical trials. Identifying the factors that influence efficacy data is needed to facilitate interpretation of the results.
Abstract:
In recent history, there has been a trend of increasing partisan polarization throughout most of the American political system. Some of the impacts of this polarization are obvious; however, there is reason to believe that we miss some of the indirect effects of polarization. Accompanying the trend of increased polarization has been an increase in the contentiousness of the Supreme Court confirmation process. I believe that these two trends are related. Furthermore, I argue that these trends have an impact on judicial behavior. This is an issue worth exploring, since the Supreme Court is the most isolated branch of the federal government. The Constitution structured the Supreme Court to ensure that it was as isolated as possible from short-term political pressures and interests. This study attempts to show how it may be possible that those goals are no longer being fully achieved. My first hypothesis in this study is that increases in partisan polarization are a direct cause of the increase in the level of contention during the confirmation process. I then hypothesize that the more contention a justice faces during his or her confirmation process, the more ideologically extreme that justice's subsequent voting on the bench will be. This means that a nominee appointed by a Republican president will tend to vote even more conservatively than was anticipated following a contentious confirmation process, and vice versa for Democratic appointees. In order to test these hypotheses, I developed a data set for every Supreme Court nominee dating back to President Franklin D. Roosevelt's appointments (1937). With this data set, I ran a series of regression models to analyze these relationships. Statistically speaking, the results support my first hypothesis in a fairly robust manner. My regression results for my second hypothesis indicate that the trend I am looking for is present for Republican nominees. For Democratic nominees, the impacts are less robust. Nonetheless, as the results will show, contention during the confirmation process does seem to have some impact on judicial behavior. Following my quantitative analysis, I analyze a series of case studies. These case studies serve to provide tangible examples of these statistical trends as well as to explore what else may be going on during the confirmation process and subsequent judicial decision-making. I use Justices Stevens, Rehnquist, and Alito as the subjects for these case studies. These cases will show that the trends described above do seem to be identifiable at the level of an individual case. These studies further help to indicate other potential impacts on judicial behavior. For example, following Justice Rehnquist's move from Associate to Chief Justice, we see a marked change in his behavior. Overall, this study serves as a means of analyzing some of the more indirect impacts of partisan polarization in modern politics. Further, the study offers a means of exploring some of the possible constraints (both conscious and subconscious) that Supreme Court justices may feel while they decide how to cast a vote in a particular case. Given the wide-reaching implications of Supreme Court decisions, it is important to try to grasp a full view of how these decisions are made.
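The two regressions described above can be sketched as follows. This is a hypothetical illustration only: the variable names (polarization_index, contention_score, ideology_shift) and the numbers are invented placeholders, not the author's data set.

```python
# Hedged sketch of the two hypothesized relationships, using invented data.
import pandas as pd
import statsmodels.api as sm

nominees = pd.DataFrame({
    "polarization_index": [0.31, 0.42, 0.55, 0.67, 0.81],  # placeholder values
    "contention_score":   [0.20, 0.35, 0.50, 0.70, 0.85],
    "ideology_shift":     [0.05, 0.10, 0.18, 0.25, 0.33],
})

# H1: partisan polarization predicts contention during confirmation.
h1 = sm.OLS(nominees["contention_score"],
            sm.add_constant(nominees[["polarization_index"]])).fit()

# H2: contention during confirmation predicts post-confirmation
# ideological extremity relative to expectations.
h2 = sm.OLS(nominees["ideology_shift"],
            sm.add_constant(nominees[["contention_score"]])).fit()

print(h1.params, h2.params, sep="\n")
```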
Abstract:
Since the late eighties, economists have been regarding the transition from command to market economies in Central and Eastern Europe with intense interest. In addition to studying the transition per se, they have begun using the region as a testing ground on which to investigate the validity of certain classic economic propositions. In his research, comprising three articles written in English and totalling 40 pages, Mr. Hanousek uses the so-called "Czech national experiment" (voucher privatisation scheme) to test the permanent income hypothesis (PIH). He took as his inspiration Kreinin's recommendation: "Since data concerning the behaviour of windfall income recipients is relatively scanty, and since such data can constitute an important test of the permanent income hypothesis, it is of interest to bring to bear on the hypothesis whatever information is available". Mr. Hanousek argues that, since the transfer of property to Czech citizens from 1992 to 1994 through the voucher scheme was not anticipated, it can be regarded as windfall income. The average size of the windfall was more than three months' salary, and over 60 percent of the Czech population received this unexpected income. Furthermore, there are other reasons for conducting such an analysis in the Czech Republic. Firstly, the privatisation process took place quickly. Secondly, both the economy and consumer behaviour have been very stable. Thirdly, out of a total population of 10 million Czech citizens, an astonishing 6 million, that is, virtually every household, participated in the scheme. Thus Czech voucher privatisation provides a sample for testing the PIH almost equivalent to a full population, thus avoiding problems with the distribution of windfalls. Compare this, for instance, with the fact that only 4% of the Israeli urban population received personal restitution from Germany, while the number of veterans who received the National Service Life Insurance Dividends amounted to less than 9% of the US population and was concentrated in certain age groups. But to begin with, Mr. Hanousek considers the question of whether the public perceives the transfer from the state to the individual as an increase in net wealth. It can be argued that the state is only divesting itself of assets that would otherwise provide a future source of transfers. According to this argument, assigning these assets to individuals creates an offsetting change in the present value of potential future transfers, so that individuals are no better off after the transfer. Mr. Hanousek disagrees with this approach. He points out that a change in the ownership of inefficient state-owned enterprises should lead to higher efficiency, which alone increases the value of enterprises and creates a windfall increase in citizens' portfolios. More importantly, the state and individuals had very different preferences during the transition. Despite government propaganda, it is doubtful that citizens of former communist countries viewed government-owned enterprises as being operated in the citizens' best interest. Moreover, it is unlikely that the public fully comprehended the sophisticated links between the state budget, state-owned enterprises, and transfers to individuals. Finally, the transfers were not equal across the population. Mr. Hanousek conducted a survey of 1263 individuals, dividing them into four monthly earnings categories.
After determining whether the respondent had participated in the voucher process, he asked those who had how much of what they received from voucher privatisation had been (a) spent on goods and services, (b) invested elsewhere, (c) transferred to newly emerging pension funds, (d) given to a family member, and (e) retained in their original form as an investment. Both the mean and the variance of the windfall rose with income. He obtained similar results with respect to education, where the mean (median) windfall for those with a basic school education was 13,600 Czech Crowns (CZK), a figure that increased to 15,000 CZK for those with a high school education without exams, 19,900 CZK for high school graduates with exams, and 24,600 CZK for university graduates. Mr. Hanousek concludes that it can be argued that higher income (and better educated) groups allocated their vouchers or timed the disposition of their shares better. He turns next to an analysis of how respondents reported using their windfalls. The key result is that only a relatively small number of individuals reported spending on goods. Overall, the results provide strong support for the permanent income hypothesis, the only apparent deviation being that both men and women aged 26 to 35 apparently consumed more than an annuitised windfall would imply. This finding is still fully consistent with the PIH, however, if this group is at a stage in their life-cycle where, without the windfall, they would be borrowing to finance consumption associated with family formation etc. Indeed, the PIH predicts that individuals who would otherwise borrow to finance consumption would consume the windfall up to the level equal to the annuitised fraction of the increase in lifetime income plus the full amount of the previously planned borrowing for consumption. Greater consumption would then be financed, not from investing the windfall, but from avoidance of future repayment obligations for debts that would have been incurred without the windfall.
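The annuitisation logic behind this prediction is worth making concrete. The sketch below is illustrative only; the interest rate and planning horizon are assumptions, and the windfall figure merely mirrors the magnitudes reported above.

```python
# Annuitising a windfall under the permanent income hypothesis: only the
# annual annuity value should be consumed each year. Numbers are illustrative.

def annuitised_consumption(windfall: float, rate: float, years: int) -> float:
    """Annual consumption increase implied by spreading a windfall over `years`."""
    return windfall * rate / (1 - (1 + rate) ** -years)

windfall_czk = 15_000  # of the order of the windfalls reported in the survey
rate = 0.05            # assumed real interest rate
years = 30             # assumed remaining planning horizon

extra = annuitised_consumption(windfall_czk, rate, years)
print(f"implied extra annual consumption: {extra:,.0f} CZK")
# ~976 CZK per year, i.e. only a small fraction of the windfall, which is
# why low reported spending on goods supports the PIH.
```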
Abstract:
The transition in Central and Eastern Europe since the late 1980s has provided a testing ground for classic propositions. This project looked at the impact of privatisation on private consumption, using the Czech experiment of voucher privatisation to test the permanent income hypothesis. This form of privatisation moved state assets to individuals and represented an unexpected windfall gain for participants in the scheme. Whether the windfall was consumed or saved offers a clear test of the permanent income hypothesis. Of a total population of 10 million, 6 million Czechs, i.e. virtually every household, participated in the scheme. In a January 1996 survey, 1263 individuals were interviewed, 75% of whom had taken part. The data obtained suggest that only a small proportion of the transferred assets was cashed in and spent on consumption, providing support for the permanent income hypothesis. The fraction of the windfall consumed grows with age, as would be predicted from the lower life expectancy of older consumers. The most interesting deviation was for people aged 26 to 35, who apparently consumed more than they would if the windfall were annuitised. As these people are at the stage in their lives when they would otherwise be borrowing to cover consumption related to establishing a family, etc., this is, however, consistent with the permanent income hypothesis, which predicts that individuals who would otherwise borrow money would use the windfall to avoid doing so.
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in the whole corpus. In order to be able to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Number 2, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors. Without a precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially true from the point of view of testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The problem of the consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system into any other language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of primary lexico-syntactic information). The formalism and methods used in this project can be used in other Slavic languages without substantial changes.
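A minimal sketch of the error-localisation idea described above follows. It is a schematic illustration, not Mr. Kubon's system: the token representation, the single agreement rule, and the English example words are all invented for this sketch.

```python
# Toy error-tolerant checking: instead of rejecting an ungrammatical sentence,
# report where the violation lies, so a "correct" tree could be reconstructed.

Token = dict  # each token: {"word": ..., "pos": ..., "number": "sg" or "pl"}

def check_agreement(tokens: list[Token]) -> list[str]:
    """Report subject-verb number disagreements together with their location."""
    errors = []
    subject = None
    for i, tok in enumerate(tokens):
        if tok["pos"] == "noun" and subject is None:
            subject = (i, tok)
        elif tok["pos"] == "verb" and subject is not None:
            si, stok = subject
            if stok["number"] != tok["number"]:
                errors.append(
                    f"tokens {si}-{i}: subject '{stok['word']}' is "
                    f"{stok['number']} but verb '{tok['word']}' is {tok['number']}"
                )
    return errors

sentence = [
    {"word": "children", "pos": "noun", "number": "pl"},
    {"word": "sleeps",   "pos": "verb", "number": "sg"},  # deliberate error
]
print(check_agreement(sentence) or "no errors detected")
```

An incremental grammar in this style starts from a few such high-frequency rules and adds rarer phenomena later, retesting the whole rule set against the test-bed of sentences after each addition.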
Abstract:
The AEGISS (Ascertainment and Enhancement of Gastrointestinal Infection Surveillance and Statistics) project aims to use spatio-temporal statistical methods to identify anomalies in the space-time distribution of non-specific gastrointestinal infections in the UK, using the Southampton area in southern England as a test-case. In this paper, we use the AEGISS project to illustrate how spatio-temporal point process methodology can be used in the development of a rapid-response, spatial surveillance system. Current surveillance of gastroenteric disease in the UK relies on general practitioners reporting cases of suspected food-poisoning through a statutory notification scheme, voluntary laboratory reports of the isolation of gastrointestinal pathogens and standard reports of general outbreaks of infectious intestinal disease by public health and environmental health authorities. However, most statutory notifications are made only after a laboratory reports the isolation of a gastrointestinal pathogen. As a result, detection is delayed and the ability to react to an emerging outbreak is reduced. For more detailed discussion, see Diggle et al. (2003). A new and potentially valuable source of data on the incidence of non-specific gastroenteric infections in the UK is NHS Direct, a 24-hour phone-in clinical advice service. NHS Direct data are less likely than reports by general practitioners to suffer from spatially and temporally localized inconsistencies in reporting rates. Also, reporting delays by patients are likely to be reduced, as no appointments are needed. Against this, NHS Direct data sacrifice specificity. Each call to NHS Direct is classified only according to the general pattern of reported symptoms (Cooper et al., 2003). The current paper focuses on the use of spatio-temporal statistical analysis for early detection of unexplained variation in the spatio-temporal incidence of non-specific gastroenteric symptoms, as reported to NHS Direct. Section 2 describes our statistical formulation of this problem, the nature of the available data and our approach to predictive inference. Section 3 describes the stochastic model. Section 4 gives the results of fitting the model to NHS Direct data. Section 5 shows how the model is used for spatio-temporal prediction. The paper concludes with a short discussion.
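To give a feel for the kind of alarm such a system raises, the sketch below flags grid cells whose daily count is improbably high relative to a baseline. Everything here is a placeholder: the data are simulated and the simple Poisson-exceedance rule stands in for the paper's fitted spatio-temporal model.

```python
# Toy space-time anomaly flagging: simulated counts on a 20x20 grid of cells.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
baseline_rate = 5.0                      # assumed historical daily rate per cell
today = rng.poisson(lam=baseline_rate, size=(20, 20))
today[12, 7] += 20                       # injected cluster of excess calls

# Flag cells whose count is improbably high under the baseline rate.
p_exceed = poisson.sf(today - 1, mu=baseline_rate)  # P(X >= observed count)
alarms = np.argwhere(p_exceed < 1e-5)
print("cells flagged:", alarms.tolist())            # expect the injected cell [12, 7]
```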
Abstract:
BACKGROUND: In May 2003, a newborn auditory screening program was initiated in the Upper Palatinate. METHODS: Sequential OAE- and BERA-screening was conducted in all hospitals with obstetric facilities. The Screening Center at the Public Health Authority was responsible for the coordination of the screening process, completeness of participation, the follow-up of all subjects with a positive screening test and the quality of instrumental screening. RESULTS: A total of 96% of 17,469 newborns were screened. The referral rate at discharge was 1.6% (0.4% for bilateral positive findings). For 97% of the positive screening results, a definite diagnosis to confirm or exclude hearing loss was achieved; for 43% only after intervention by the Screening Center. Fifteen children with profound bilateral hearing impairment were identified of whom eight were only detected by the intervention of the Screening Center. CONCLUSION: The effective structures established in the Upper Palatinate provide a standard for the quality of neonatal auditory screening achievable in Germany.
Abstract:
As the demand for miniature products and components continues to increase, the need for manufacturing processes to provide these products and components has also increased. To meet this need, successful macroscale processes are being scaled down and applied at the microscale. Unfortunately, many challenges have been experienced when directly scaling down macro processes. Initially, frictional effects were believed to be the largest challenge encountered. However, recent studies have found that the greatest challenge encountered has been with size effects. Size effect is a broad term that largely refers to the thickness of the material being formed and how this thickness directly affects the product dimensions and manufacturability. At the microscale, the thickness becomes critical due to the reduced number of grains. When surface contact between the forming tools and the material blanks occurs at the macroscale, there is enough material (hundreds of layers of material grains) across the blank thickness to compensate for material flow and the effect of grain orientation. At the microscale, there may be fewer than 10 grains across the blank thickness. With a decreased number of grains across the thickness, the influence of grain size, shape and orientation is significant. Any material defects (either naturally occurring or introduced during material preparation) play a significant role in altering the forming potential. To date, various micro metal forming and micro materials testing equipment setups have been constructed at the Michigan Tech lab. Initially, the research focus was to create a micro deep drawing setup to potentially build micro sensor encapsulation housings. The research focus then shifted to micro metal materials testing equipment setups. These include the construction and testing of the following setups: a micro mechanical bulge test, a micro sheet tension test (testing micro tensile bars), a micro strain analysis (with the use of optical lithography and chemical etching) and a micro sheet hydroforming bulge test. Recently, the focus has shifted to the study of a micro tube hydroforming process. The intent is to target fuel cell, medical, and sensor encapsulation applications. While the tube hydroforming process is widely understood at the macroscale, the microscale process offers some significant challenges in terms of size effects. Current work is being conducted on applying direct current to enhance micro tube hydroforming formability. Initial trials of adding direct current to various metal forming operations have shown phenomenal results. The focus of current research is to determine the validity of this process.
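The grain-count arithmetic behind the size effect is simple to illustrate. The grain size and blank thicknesses below are assumed round numbers, not measurements from the Michigan Tech setups.

```python
# Back-of-the-envelope grain count across a blank thickness (illustrative values).
grain_size_um = 25.0

for thickness_um in (2000.0, 200.0):  # macroscale vs. microscale blank
    grains = thickness_um / grain_size_um
    print(f"{thickness_um:>6.0f} um blank: ~{grains:.0f} grains across thickness")
# ~80 grains at the macroscale vs. ~8 at the microscale: with so few grains,
# individual grain size, shape, and orientation dominate forming behaviour.
```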
Abstract:
During a project, managers encounter numerous contingencies and are faced with the challenging task of making decisions that will effectively keep the project on track. This task is very challenging because construction projects are non-prototypical and the processes are irreversible. Therefore, it is critical to apply a methodological approach to develop a few alternative management decision strategies during the planning phase, which can be deployed to manage alternative scenarios resulting from expected and unexpected disruptions in the as-planned schedule. Such a methodology should have the following features, which are missing in the existing research: (1) looking at the effects of local decisions on the global project outcomes, (2) studying how a schedule responds to decisions and disruptive events, because the risk in a schedule is a function of the decisions made, (3) establishing a method to assess and improve the management decision strategies, and (4) developing project-specific decision strategies, because each construction project is unique and the lessons from a particular project cannot be easily applied to projects that have different contexts. The objective of this dissertation is to develop a schedule-based simulation framework to design, assess, and improve sequences of decisions for the execution stage. The contribution of this research is the introduction of decision strategies for managing a project and the establishment of an iterative methodology to continuously assess and improve decision strategies and schedules. Project managers or schedulers can implement the methodology to develop and identify schedules accompanied by suitable decision strategies to manage a project at the planning stage. The developed methodology also lays the foundation for an algorithm towards continuously and automatically generating satisfactory schedules and strategies throughout the construction life of a project. Different from studying isolated daily decisions, the proposed framework introduces the notion of decision strategies to manage the construction process. A decision strategy is a sequence of interdependent decisions determined by resource allocation policies such as labor, material, equipment, and space policies. The schedule-based simulation framework consists of two parts, experiment design and result assessment. The core of the experiment design is the establishment of an iterative method to test and improve decision strategies and schedules, which is based on the introduction of decision strategies and the development of a schedule-based simulation testbed. The simulation testbed used is the Interactive Construction Decision Making Aid (ICDMA). ICDMA, which was developed previously, has an emulator that duplicates the construction process and a random event generator that allows the decision-maker to respond to disruptions in the emulation. It is used to study how the schedule responds to these disruptions and the corresponding decisions made over the duration of the project, while accounting for cascading impacts and dependencies between activities. The dissertation is organized into two parts. The first part presents the existing research, identifies the departure points of this work, and develops a schedule-based simulation framework to design, assess, and improve decision strategies. In the second part, the proposed schedule-based simulation framework is applied to investigate specific research problems.
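The simulation loop at the heart of such a framework can be sketched compactly. This is a toy stand-in, not ICDMA: the activity model, the disruption process, and the two example policies are invented for illustration.

```python
# Toy evaluation of decision strategies against random schedule disruptions.
import random

def simulate(durations, strategy, p_disrupt=0.2, seed=1):
    """Total project duration when `strategy` decides the response to disruptions."""
    rng = random.Random(seed)
    total = 0
    for planned in durations:
        actual = planned
        if rng.random() < p_disrupt:       # a random disruptive event occurs
            actual += strategy(planned)    # the strategy determines the delay absorbed
        total += actual
    return total

durations = [5, 3, 8, 2, 6]                # planned activity durations (days)
strategies = {
    "absorb": lambda d: d,                 # accept the full delay
    "crash":  lambda d: max(1, d // 4),    # add resources to compress the delay
}
for name, strat in strategies.items():
    print(name, simulate(durations, strat), "days")
```

Iterating this over many random seeds and candidate strategies is the essence of assessing and improving a decision strategy before construction begins.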
Abstract:
This report shares my efforts in developing a solid unit of instruction that has a clear focus on student outcomes. I have been a teacher for 20 years and have been writing and revising curricula for much of that time. However, most of that work has been done without the benefit of current research on how students learn and has not focused on what and how students are learning. My journey as a teacher has involved a lot of trial and error. My traditional method of teaching is to look at the benchmarks (now content expectations) to see what needs to be covered. My unit consists of having students read the appropriate sections in the textbook, complete work sheets, watch a video, and take some notes. I try to include at least one hands-on activity, one or more quizzes, and the traditional end-of-unit test consisting mostly of multiple choice questions I find in the textbook. I try to be engaging, make the lessons fun, and hope that at the end of the unit my students get whatever concepts I've presented so that we can move on to the next topic. I want to increase students' understanding of science concepts and their ability to connect that understanding to the real world. However, sometimes I feel that my lessons are missing something. For a long time I have wanted to develop a unit of instruction that I know is an effective tool for the teaching and learning of science. In this report, I describe my efforts to reform my curricula using the “Understanding by Design” process. I want to see if this style of curriculum design will help me be a more effective teacher and if it will lead to an increase in student learning. My hypothesis is that this new (for me) approach to teaching will lead to increased understanding of science concepts among students because it is based on purposefully thinking about learning targets based on “big ideas” in science. For my reformed curricula I incorporate lessons from several outstanding programs I've been involved with, including EpiCenter (Purdue University), Incorporated Research Institutions for Seismology (IRIS), the Master of Science Program in Applied Science Education at Michigan Technological University, and the Michigan Association for Computer Users in Learning (MACUL). In this report, I present the methodology I used to develop a new unit of instruction based on the Understanding by Design process. I present several lessons and learning plans I've developed for the unit that follow the 5E Learning Cycle as appendices at the end of this report. I also include the results of pilot testing of one of the lessons. Although the lesson I pilot-tested was not as successful in increasing student learning outcomes as I had anticipated, the development process I followed was helpful in that it required me to focus on important concepts. Conducting the pilot test was also helpful to me because it led me to identify ways in which I could improve upon the lesson in the future.
Abstract:
BACKGROUND: Complete investigation of thrombophilic or hemorrhagic clinical presentations is a time-, apparatus-, and cost-intensive process. Sensitive screening tests for characterizing the overall function of the hemostatic system, or defined parts of it, would be very useful. For this purpose, we are developing an electrochemical biosensor system that allows measurement of thrombin generation in whole blood as well as in plasma. METHODS: The measuring system consists of a single-use electrochemical sensor in the shape of a strip and a measuring unit connected to a personal computer, recording the electrical signal. Blood is added to a specific reagent mixture immobilized in dry form on the strip, including a coagulation activator (e.g., tissue factor or silica) and an electrogenic substrate specific to thrombin. RESULTS: Increasing thrombin concentrations gave standard curves with progressively increasing maximal current and decreasing time to reach the peak. Because the measurement was unaffected by color or turbidity, any type of blood sample could be analyzed: platelet-poor plasma, platelet-rich plasma, and whole blood. The test strips with the predried reagents were stable when stored for several months before testing. Analysis of the combined results obtained with different activators allowed discrimination between defects of the extrinsic, intrinsic, and common coagulation pathways. Activated protein C (APC) predried on the strips allowed identification of APC-resistance in plasma and whole blood samples. CONCLUSIONS: The biosensor system provides a new method for assessing thrombin generation in plasma or whole blood samples as small as 10 microL. The assay is easy to use, thus allowing it to be performed in a point-of-care setting.
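The two features read off the standard curves (maximal current and time to peak) are easy to extract from a recorded trace. The sketch below uses a synthetic, idealized signal; it is not data from the biosensor or its acquisition software.

```python
# Extracting maximal current and time-to-peak from a simulated sensor trace.
import numpy as np

t = np.linspace(0, 600, 1201)                    # time, seconds
current = 2.5 * (t / 120) * np.exp(1 - t / 120)  # idealized rise-and-decay, microamps

i_peak = current.argmax()
print(f"max current: {current[i_peak]:.2f} uA at t = {t[i_peak]:.0f} s")
# Higher thrombin concentrations would raise the maximum and shift the peak
# earlier, which is the behaviour the standard curves described above capture.
```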
Abstract:
In recent years, additive manufacturing processes have developed into effective tools for the rapid development of products of almost arbitrary complexity. At the same time, there is a demand to guarantee the reproducibility of the parts and properties close or identical to those of series production. The variety and scope of applications, as well as the large number of different additive manufacturing processes, call for adequate quality-monitoring and quality-control systems. One approach to the quality assessment of additive manufacturing processes is the introduction of a system of quality indicators. To this end, requirement profiles and quality characteristics must first be defined for additively manufactured parts; these are represented by test-specimen geometries and classified by means of individual indicators. In the investigations carried out, quality assessment based on test-specimen geometries was qualified using the laser sintering process as an example. By influencing the process parameters, i.e., deliberately introducing disturbance variables that, individually or in combination, can lead to impermissible quality fluctuations, it is possible to assess the quality of the product. The definition of individual indicators, which enables control and monitoring as well as the prediction of potential defects, offers essential possibilities for quality assessment. Combining these into an overall indicator system is intended, on the one hand, to evaluate the process on the basis of the defined requirement profiles and, on the other hand, to derive a direct relationship between the selected disturbance variables and process variables, so that a statement about part quality can be made in advance.
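The idea of collapsing individual indicators into an overall quality index can be illustrated with a toy calculation. The characteristics, limits, and weights below are invented placeholders, not the requirement profiles defined in the study.

```python
# Toy indicator system: weighted aggregation of individual quality indicators.
indicators = {
    # name: (measured deviation, permissible limit, weight)
    "dimensional_accuracy": (0.08, 0.10, 0.4),
    "surface_roughness":    (0.12, 0.10, 0.3),  # out of spec -> negative score
    "density":              (0.05, 0.10, 0.3),
}

def score(measured, limit):
    """1.0 = perfect, 0.0 = at the permissible limit, negative = out of spec."""
    return 1.0 - measured / limit

for name, (m, lim, w) in indicators.items():
    print(f"{name}: score {score(m, lim):+.2f} (weight {w})")

overall = sum(w * score(m, lim) for m, lim, w in indicators.values())
print(f"overall quality index: {overall:+.2f}")
```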
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of the adequate approximation of the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for solving the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
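The joint-space idea can be sketched with off-the-shelf tools. Note the hedge: the snippet below uses a standard stationary RBF kernel and synthetic data for simplicity, whereas the work above constructs a nonstationary kernel tailored to the convergence error.

```python
# Toy Gaussian process over the joint (design variable, computation time) space.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))   # columns: design variable x, normalized time t
# Synthetic response: a function of x plus an error that decays with time.
y = np.sin(6 * X[:, 0]) + 0.5 * np.exp(-3 * X[:, 1]) * rng.standard_normal(30)

kernel = RBF(length_scale=[0.2, 0.5]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict the fully converged response by evaluating at the largest time.
X_new = np.column_stack([np.linspace(0, 1, 5), np.ones(5)])
mean, std = gp.predict(X_new, return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```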
Abstract:
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both a theoretical and a simulation-based point of view. A major consequence of our work is that the detection of qualitative features of a density in a deconvolution problem is a doable task, although the minimax rates for pointwise estimation are very slow.
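For orientation, the standard formulation behind these terms can be written out; this is the usual textbook setup, sketched here as a reading aid rather than the paper's exact assumptions.

```latex
% Deconvolution model: the signal of interest is observed with additive noise,
\[
  Y_j = X_j + \varepsilon_j, \qquad X_j \overset{\text{i.i.d.}}{\sim} f, \qquad
  \varepsilon_j \overset{\text{i.i.d.}}{\sim} f_\varepsilon ,
\]
% and qualitative features (e.g. local monotonicity) of f are to be detected.
% "Moderately ill-posed" refers to polynomial decay of the error's
% characteristic function,
\[
  |\varphi_{\varepsilon}(t)| \asymp |t|^{-\beta}
  \quad \text{as } |t| \to \infty, \qquad \beta > 0 ,
\]
% as opposed to the severely ill-posed case of exponential decay.
```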
Abstract:
The Imager for Low Energetic Neutral Atoms test facility at the University of Bern was developed to investigate, characterize, and quantify physical processes on surfaces that are used to ionize neutral atoms before their analysis in neutral particle-sensing instruments designed for space research. The facility has contributed valuable knowledge of the interaction of ions with surfaces (e.g., the fraction of ions scattered from surfaces and the angular scattering distribution) and employs a novel measurement principle for the determination of secondary electron emission yields as a function of energy, angle of incidence, particle species, and sample surface for low particle energies. Only because of this test facility was it possible to successfully apply surface-science processes to the new detection technique for low-energetic neutral particles with energies below about 1 keV used in space applications. All successfully flown spectrometers for the detection of low-energetic neutrals based on the particle–surface interaction process use surfaces evaluated, tested, and calibrated in this facility. Many instruments on different spacecraft (e.g., the Imager for Magnetopause-to-Aurora Global Exploration, Chandrayaan-1, and the Interstellar Boundary Explorer) have successfully used this technique.