145 results for STAR ADAPTIVE OPTICS
in CentAUR: Central Archive, University of Reading - UK
Abstract:
As the optical design of spectrometer and radiometer instruments evolves with advances in detector sensitivity, the use of focal-plane detector arrays and innovations in adaptive optics for large high-altitude telescopes, mid-infrared astronomy and remote sensing have become areas of progressive research in recent years. This research has driven a number of developments in infrared coating performance, particularly by placing increased demands on the spectral imaging requirements of filters to precisely isolate radiation between discrete wavebands and improve photometric accuracy. The spectral design and construction of multilayer filters to accommodate these developments has consequently been a challenging area of thin-film research, requiring high spectral positioning accuracy, environmental durability and ageing stability at cryogenic temperatures, whilst maximizing far-infrared performance. In this paper we examine the design and fabrication of interference filters for instruments that utilize the mid-infrared N-band (6-15 µm) and Q-band (16-28 µm) atmospheric windows, together with a rationale for the selection of materials and deposition processes, spectral measurements, and an assessment of environmental durability.
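As an aside, the transfer-matrix method is the standard tool underpinning this kind of multilayer filter design. The sketch below is not from the paper; the refractive indices are nominal placeholders (roughly Ge/ZnS coating pairs on a silicon-like substrate) and it computes only normal-incidence transmittance of a non-absorbing quarter-wave stack at mid-infrared wavelengths:

```python
import numpy as np

def layer_matrix(n, d, wl):
    # Characteristic matrix of one non-absorbing layer at normal incidence.
    delta = 2 * np.pi * n * d / wl  # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, n0, ns, wl):
    # Multiply the layer matrices, then apply the standard admittance formula
    # for a stack between an incident medium (n0) and a substrate (ns).
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wl)
    B, C = M @ np.array([1.0, ns])
    return 4 * n0 * ns / abs(n0 * B + C) ** 2

wl0 = 10.0                      # design wavelength, micrometres
n_hi, n_lo = 4.0, 2.2           # nominal Ge / ZnS indices (placeholders)
stack = [(n_hi, wl0 / (4 * n_hi)), (n_lo, wl0 / (4 * n_lo))] * 8

for wl in (8.0, 10.0, 12.0):
    print(f"T({wl:4.1f} um) = {transmittance(stack, 1.0, 3.4, wl):.4f}")
```

At the 10 µm design wavelength the quarter-wave stack acts as a high reflector; such stacks are the basic building blocks from which bandpass filters for atmospheric windows are assembled.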
Abstract:
The search for Earth-like exoplanets, orbiting in the habitable zone of stars other than our Sun and showing biological activity, is one of the most exciting and challenging quests of the present time. Nulling interferometry from space, in the thermal infrared, appears to be a promising technique for directly observing extra-solar planets. It has been studied for about 10 years by ESA and NASA in the framework of the Darwin and TPF-I missions respectively. Nevertheless, nulling interferometry in the thermal infrared remains a technological challenge at several levels. Among them, the development of the "modal filter" function is mandatory for filtering the wavefronts to a degree commensurate with the objective of rejecting the central star flux with an efficiency of about 10⁵. Modal filtering exploits the capability of single-mode waveguides to transmit a single amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible. The modal filter may be based on single-mode Integrated Optics (IO), Fiber Optics, or both. In this paper, we focus on IO, and more specifically on the progress of the on-going "Integrated Optics" activity of the European Space Agency.
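For scale, a rejection of about 10⁵ implies milliradian-level residual phase errors after modal filtering. A minimal sketch of the standard small-error null-depth approximation (not taken from the paper; polarization and chromatic terms are ignored):

```python
def null_depth(phase_rms_rad, rel_amp_mismatch):
    # First-order null depth of a two-beam nulling interferometer:
    # N ~ (sigma_phi**2 + (dA/A)**2) / 4   (small-error approximation)
    return (phase_rms_rad ** 2 + rel_amp_mismatch ** 2) / 4.0

# Rejection ratio = 1 / null depth; reaching ~1e5 requires the modal
# filter to leave only a few milliradians of residual phase error.
for sigma_phi in (1e-3, 3e-3, 6e-3):
    rejection = 1.0 / null_depth(sigma_phi, 1e-3)
    print(f"sigma_phi = {sigma_phi:.0e} rad -> rejection ~ {rejection:.1e}")
```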
Abstract:
In this paper extensions to an existing tracking algorithm are described. These extensions implement adaptive tracking constraints in the form of regional upper-bound displacements and an adaptive track smoothness constraint. Together, these constraints make the tracking algorithm more flexible than the original algorithm (which used fixed tracking parameters) and provide greater confidence in the tracking results. The result of applying the new algorithm to high-resolution ECMWF reanalysis data is shown as an example of its effectiveness.
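A schematic rendering of such adaptive constraints (illustrative only; the paper's algorithm, parameters and data structures are more involved) might look like:

```python
import numpy as np

def extend_track(track, candidates, max_disp, smooth_weight):
    """Pick the candidate point that best extends a track, subject to a
    regional upper-bound displacement and a track-smoothness penalty.
    A schematic re-statement of the idea, not the paper's code."""
    last = track[-1]
    # Constant-velocity prediction of the next position (smoothness reference).
    predicted = 2 * track[-1] - track[-2] if len(track) > 1 else last
    best, best_cost = None, np.inf
    for c in candidates:
        disp = np.linalg.norm(c - last)
        if disp > max_disp:          # regional upper-bound displacement
            continue
        # Deviation from the prediction acts as the smoothness term.
        cost = disp + smooth_weight * np.linalg.norm(c - predicted)
        if cost < best_cost:
            best, best_cost = c, cost
    return best

track = [np.array([0.0, 0.0]), np.array([1.0, 0.2])]
candidates = [np.array([2.1, 0.3]), np.array([1.2, 3.0])]
print(extend_track(track, candidates, max_disp=2.0, smooth_weight=0.5))
```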
Abstract:
In this paper a cell by cell anisotropic adaptive mesh technique is added to an existing staggered mesh Lagrange plus remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically and a hierarchical data structure is employed. An efficient computational method is proposed, which only solves on the finest level of resolution that exists for each part of the domain with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
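The hierarchical cell structure with mixed isotropic/anisotropic splits can be illustrated with a toy sketch (hypothetical names; a real code also manages geometry, hanging nodes and field data):

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """Quadrilateral cell in a hierarchical adaptive mesh. Children are
    created by isotropic (4-way) or anisotropic (2-way, x or y) splits;
    the solver visits only the leaves, i.e. the finest level that exists
    for each part of the domain."""
    level: int = 0
    children: list = field(default_factory=list)

    def refine(self, mode="iso"):
        n = {"iso": 4, "x": 2, "y": 2}[mode]
        self.children = [Cell(self.level + 1) for _ in range(n)]

    def leaves(self):
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

root = Cell()
root.refine("iso")            # full 4-way split
root.children[0].refine("x")  # anisotropic split of one child
print(sum(1 for _ in root.leaves()))  # 5 leaf cells, mixed resolution
```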
Abstract:
Planning a project with proper consideration of all necessary factors, and managing it to ensure its successful implementation, presents many challenges. The initial stage of planning a project for bidding is costly and time consuming, and usually yields poor accuracy in cost and effort predictions. On the other hand, detailed information about previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolios have been introduced in this field with the aim of improving information sharing and management among different projects; however, the amount of information that can be shared is still limited to generic information. In this paper we report a recently developed software system, COBRA (Automated Project Information Sharing and Management System), which automatically generates a project plan with effort estimates of time and cost based on data collected from previously completed projects. To maximize data sharing and management among different projects, we propose a method using product-based planning from the PRINCE2 methodology. Keywords: project management, product based planning, best practice, PRINCE2
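By way of illustration only, an analogy-style effort estimator over product breakdowns, in the spirit of product-based planning, could be sketched as follows (the field names and similarity measure are hypothetical, not COBRA's actual design):

```python
def estimate_effort(new_products, history, k=3):
    """Average the recorded effort of the k past projects whose product
    breakdowns are most similar to the new project's (illustrative only)."""
    def similarity(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)  # Jaccard overlap of product lists
    ranked = sorted(history,
                    key=lambda p: similarity(new_products, p["products"]),
                    reverse=True)[:k]
    return sum(p["effort_days"] for p in ranked) / len(ranked)

history = [
    {"products": ["spec", "design", "build"], "effort_days": 120},
    {"products": ["spec", "build", "test"],   "effort_days": 90},
    {"products": ["design", "deploy"],        "effort_days": 40},
]
print(estimate_effort(["spec", "design", "test"], history))
```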
Abstract:
A cell by cell anisotropic adaptive mesh Arbitrary Lagrangian Eulerian (ALE) method for the solution of the Euler equations is described. An efficient approach to equipotential mesh relaxation on anisotropically refined meshes is developed. Results for two test problems are presented.
Abstract:
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1–20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
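A simplified version of a vorticity-gradient refinement criterion, on a uniform Cartesian grid rather than the paper's polygonal spherical mesh (the threshold and test field are illustrative):

```python
import numpy as np

def refinement_flags(vorticity, dx, threshold):
    """Flag cells whose vorticity-gradient magnitude exceeds a threshold:
    a toy version of an adaptation criterion 'based largely on the
    gradient of the vorticity'."""
    dzdy, dzdx = np.gradient(vorticity, dx)
    grad_mag = np.hypot(dzdx, dzdy)
    return grad_mag > threshold

zeta = np.zeros((64, 64))
zeta[28:36, :] = 1.0   # idealised strip of high vorticity (a crude "jet")
flags = refinement_flags(zeta, dx=1.0, threshold=0.25)
print(flags.sum(), "of", flags.size, "cells flagged for refinement")
```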
Abstract:
For the well-known early Mesolithic site of Star Carr, dating of organic artefacts by accelerator mass spectrometry (AMS) has been hampered by the treatment of bone and antler recovered during the original excavations with preservatives. However, some untreated artefacts were collected after Clark's excavation in 1950. Four of these artefacts were AMS dated in 1995, but two of the dates were significantly younger than the others and were questionable due to their low collagen yields. These suspect samples have now been re-analysed, demonstrating that all four artefacts are of similar date. The significance of these dates for the chronology of Star Carr is discussed.
Abstract:
The Iowa gambling task (IGT) is one of the most influential behavioral paradigms in reward-related decision making and has been, most notably, associated with ventromedial prefrontal cortex function. However, performance in the IGT relies on a complex set of cognitive subprocesses, in particular integrating information about the outcome of choices into a continuously updated decision strategy under ambiguous conditions. The complexity of the task has made it difficult for neuroimaging studies to disentangle the underlying neurocognitive processes. In this study, we used functional magnetic resonance imaging in combination with a novel adaptation of the task, which allowed us to examine separately activation associated with the moment of decision or the evaluation of decision outcomes. Importantly, using whole-brain regression analyses with individual performance, in combination with the choice/outcome history of individual subjects, we aimed to identify the neural overlap between areas that are involved in the evaluation of outcomes and in the progressive discrimination of the relative value of available choice options, thus mapping the two fundamental cognitive processes that lead to adaptive decision making. We show that activation in right ventromedial and dorsolateral prefrontal cortex was predictive of adaptive performance, in both discriminating disadvantageous from advantageous decisions and confirming negative decision outcomes. We propose that these two prefrontal areas mediate shifting away from disadvantageous choices through their sensitivity to accumulating negative outcomes. These findings provide functional evidence of the underlying processes by which these prefrontal subregions drive adaptive choice in the task, namely through contingency-sensitive outcome evaluation.
Abstract:
In this article, we examine the case of a system that cooperates with a "direct" user to plan an activity that some "indirect" user, not interacting with the system, should perform. The specific application we consider is the prescription of drugs. In this case, the direct user is the prescriber and the indirect user is the person responsible for carrying out the therapy. Relevant characteristics of the two users are represented in two user models. Explanation strategies are represented in planning operators whose preconditions encode the cognitive state of the indirect user; this allows the message to be tailored to the indirect user's characteristics. Expansion of optional subgoals and selection among candidate operators are performed by applying decision criteria represented as metarules that negotiate between the direct and indirect users' views, also taking into account the context in which the explanation is provided. After the message has been generated, the direct user may ask to add or remove items, or to change the message style. The system defends the indirect user's needs as far as possible by presenting the rationale behind the generated message. If needed, the plan is repaired and the direct user model is revised accordingly, so that the system progressively learns to generate messages suited to the preferences of the people with whom it interacts.
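The operator-with-preconditions idea can be sketched minimally as follows (the attribute names and user-model representation are hypothetical, not the paper's formalism):

```python
from dataclasses import dataclass

@dataclass
class Operator:
    """Explanation-planning operator: applicable only when its preconditions
    on the indirect user's cognitive state hold; its effects update the model."""
    name: str
    preconditions: dict
    effects: dict

def applicable(op, user_model):
    # Every precondition must match the user model's current state.
    return all(user_model.get(k) == v for k, v in op.preconditions.items())

explain_dosage = Operator(
    "explain-dosage",
    preconditions={"knows_drug_purpose": True},
    effects={"knows_dosage": True},
)

patient = {"knows_drug_purpose": True, "knows_dosage": False}
if applicable(explain_dosage, patient):
    patient.update(explain_dosage.effects)  # message generated, model updated
print(patient)
```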