69 results for Large detector systems for particle and astroparticle physics
Abstract:
One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the entire length of the codeword. However, time-selective fading channels do exist, and in such cases the conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). This paper addresses this issue by introducing a parallel interference cancellation (PIC) based detector for Gi-coded systems (i = 3 and 4).
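The cancellation idea can be illustrated on a toy problem. The sketch below is a generic parallel interference cancellation loop on a small noiseless linear system with BPSK symbols; the G3/G4 code structure and the time-selective channel model of the paper are not reproduced, and the mixing matrix and sizes are purely illustrative.

```python
import numpy as np

# Toy noiseless linear model y = H s with BPSK symbols s in {-1, +1}.
# H is an arbitrary illustrative mixing matrix, not a G3/G4 code matrix.
H = np.array([[1.0, 0.3],
              [0.3, 1.0]])
s_true = np.array([1.0, -1.0])
y = H @ s_true

# Matched-filter initial estimate.
s_hat = np.sign(H.T @ y)

# Parallel interference cancellation: in each iteration, every symbol is
# re-detected after subtracting the interference contributed by the
# current estimates of all the other symbols.
for _ in range(5):
    s_new = np.empty_like(s_hat)
    for i in range(len(s_hat)):
        interference = H @ s_hat - H[:, i] * s_hat[i]
        s_new[i] = np.sign(H[:, i] @ (y - interference))
    s_hat = s_new
```

In the time-selective setting of the paper, H would vary across the codeword, and the cancellation would target the residual inter-symbol interference that the static-channel assumption leaves behind.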
Abstract:
This paper introduces the Baltex research programme and summarizes associated numerical modelling work undertaken during the last five years. The research has broadly managed to clarify the main mechanisms determining the water and energy cycle in the Baltic region, such as the strong dependence upon the large-scale atmospheric circulation. It has further been shown that the Baltic Sea has a positive water balance, albeit with large interannual variations. The focus of the modelling studies has been the use of limited-area models at ultra-high resolution, driven by boundary conditions from global models or from reanalysis data sets. The programme has further initiated a comprehensive integration of atmospheric, land surface and hydrological modelling incorporating snow, sea ice and special lake models. Other aspects of the programme include process studies such as the role of deep convection, air-sea interaction and the handling of land surface moisture. Studies have also been undertaken to investigate synoptic and sub-synoptic events over the Baltic region, thus exploring the role of transient weather systems in the hydrological cycle. A special aspect has been the strong interest and commitment of the meteorological and hydrological services because of the potentially large societal interest in operational applications of the research. As a result of this interest, special attention has been paid to data-assimilation aspects and the use of new types of data such as SSM/I, GPS measurements and digital radar. A series of high-resolution data sets are being produced; one of these, a 1/6-degree daily precipitation climatology for the years 1996–1999, is a unique contribution. The specific research achievements presented in this volume of Meteorology and Atmospheric Physics are the result of a cooperative venture between 11 European research groups supported under the EU Framework programmes.
Abstract:
Giant planets helped to shape the conditions we see in the Solar System today, and they account for more than 99% of the mass of the Sun’s planetary system. They can be subdivided into the Ice Giants (Uranus and Neptune) and the Gas Giants (Jupiter and Saturn), which differ from each other in a number of fundamental ways. Uranus, in particular, is the most challenging to our understanding of planetary formation and evolution, with its large obliquity, low self-luminosity, highly asymmetrical internal field, and puzzling internal structure. Uranus also has a rich planetary system consisting of inner natural satellites and a complex ring system, five major icy natural satellites, a system of irregular moons with varied dynamical histories, and a highly asymmetrical magnetosphere. Voyager 2 is the only spacecraft to have explored Uranus, with a flyby in 1986, and no mission is currently planned to this enigmatic system. However, a mission to the uranian system would open a new window on the origin and evolution of the Solar System and would provide crucial information on a wide variety of physicochemical processes in our Solar System, with clear implications for understanding exoplanetary systems. In this paper we describe the science case for an orbital mission to Uranus with an atmospheric entry probe to sample the composition and atmospheric physics of Uranus’ atmosphere. The characteristics of such an orbiter and a strawman scientific payload are described, and we discuss the technical challenges for such a mission. This paper is based on a white paper submitted to the European Space Agency’s call for science themes for its large-class mission programme in 2013.
Abstract:
A new dynamic model of water quality, Q(2), has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q(2) for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, it is more important to obtain high-quality forcing functions than to obtain improved parameter estimates once approximate values have been estimated. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since a focus only on DO increases predictive uncertainty in the DO simulations. The Q(2) model has been applied here to the River Thames, but it has broad utility for evaluating other systems in Europe and around the world.
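The mechanics of a generalized sensitivity analysis of this Monte Carlo filtering type can be sketched in a few lines. The toy "model" and the behaviour threshold below are invented stand-ins for Q(2) and the DO criterion; only the GSA workflow itself (sample parameters, classify runs as behavioural or non-behavioural, compare the parameter distributions of the two groups) reflects the method described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the water-quality model: simulated "DO" depends
# strongly on parameter k1 and only weakly on k2.
def toy_model(k1, k2):
    return 8.0 - 3.0 * k1 + 0.1 * k2

n = 5000
k1 = rng.uniform(0.0, 1.0, n)
k2 = rng.uniform(0.0, 1.0, n)
do_sim = toy_model(k1, k2)

# Monte Carlo filtering: runs whose simulated DO stays above a threshold
# are "behavioural", the rest are not.
behavioural = do_sim > 6.5

# A parameter is influential when the behavioural and non-behavioural
# samples have clearly different distributions; compare the empirical
# CDFs with a Kolmogorov-Smirnov-type statistic.
def ks_stat(a, b):
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

d1 = ks_stat(k1[behavioural], k1[~behavioural])  # large: k1 is key
d2 = ks_stat(k2[behavioural], k2[~behavioural])  # small: k2 is not
```

The same split of behavioural runs also yields the probabilistic calibration mentioned in the abstract: the behavioural parameter sets form an empirical posterior from which predictions can be drawn.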
Abstract:
We have integrated information on topography, geology and geomorphology with the results of targeted fieldwork in order to develop a chronology for the development of Lake Megafazzan, a giant lake that has periodically existed in the Fazzan Basin since the late Miocene. The development of the basin can best be understood by considering the main geological and geomorphological events that occurred throughout Libya during this period, and thus an overview of the palaeohydrology of all Libya is also presented. The origin of the Fazzan Basin appears to lie in the Late Miocene. At this time Libya was dominated by two large river systems that flowed into the Mediterranean Sea: the Sahabi River, draining central and eastern Libya, and the Wadi Nashu River, draining much of western Libya. As the Miocene progressed the region became increasingly affected by volcanic activity on its northern and eastern margins, which appears to have blocked the River Nashu in Late Miocene or early Messinian times, forming a sizeable closed basin in the Fazzan within which proto-Lake Megafazzan would have developed during humid periods. The fall in base level associated with the Messinian desiccation of the Mediterranean Sea promoted down-cutting and extension of river systems throughout much of Libya. To the south of the proto-Fazzan Basin the Sahabi River tributary known as Wadi Barjuj appears to have expanded its headwaters westwards. The channel now terminates at Al Haruj al Aswad, which we interpret as a suggestion that Wadi Barjuj was blocked by the progressive development of Al Haruj al Aswad. K/Ar dating of lava flows suggests that this occurred between 4 and 2 Ma. This event would have increased the size of the closed basin in the Fazzan by about half, producing a catchment close to its current size (~350,000 km²). The Fazzan Basin contains a wealth of Pleistocene to recent palaeolake sediment outcrops and shorelines.
Dating of these features demonstrates evidence of lacustrine conditions during numerous interglacials spanning a period greater than 420 ka. The middle to late Pleistocene interglacials were humid enough to produce a giant lake of about 135,000 km² that we have called Lake Megafazzan. Later lake phases were smaller, the interglacials less humid, developing lakes of a few thousand square kilometres. In parallel with these palaeohydrological developments in the Fazzan Basin, change was occurring in other parts of Libya. The Lower Pliocene sea-level rise caused sediments to infill much of the Messinian channel system. As this was occurring, subsidence in the Al Kufrah Basin caused expansion of the Al Kufrah River system at the expense of the River Sahabi. By the Pleistocene, the Al Kufrah River dominated the palaeohydrology of eastern Libya and had developed a very large inland delta in its northern reaches that exhibited a complex distributary channel network which at times fed substantial lakes in the Sirt Basin. At this time Libya was a veritable lake district during humid periods, with about 10% of the country underwater. © 2008 Elsevier B.V. All rights reserved.
Abstract:
The history of using vesicular systems for drug delivery to and through skin started nearly three decades ago with a study utilizing phospholipid liposomes to improve skin deposition and reduce systemic effects of triamcinolone acetonide. Subsequently, many researchers evaluated liposomes with respect to skin delivery, with the majority of them recording localized effects and relatively few studies showing transdermal delivery effects. Shortly after this, Transfersomes were developed with claims about their ability to deliver their payload into and through the skin with efficiencies similar to subcutaneous administration. Since these vesicles are ultradeformable, they were thought to penetrate intact skin deep enough to reach the systemic circulation. Their mechanisms of action remain controversial, with diverse processes being reported. Parallel to this development, other classes of vesicles were produced, with ethanol being included in the vesicles to provide flexibility (as in ethosomes) and vesicles being constructed from surfactants and cholesterol (as in niosomes). These ultradeformable vesicles showed variable efficiency in delivering low molecular weight and macromolecular drugs. This article critically evaluates vesicular systems for dermal and transdermal delivery of drugs, considering both their efficacy and potential mechanisms of action.
Abstract:
We describe a novel mechanism that can significantly lower the amplitude of the climatic response to certain large volcanic eruptions and examine its impact with a coupled ocean-atmosphere climate model. If sufficiently large amounts of water vapour enter the stratosphere, a climatically significant amount of water vapour can be left over in the lower stratosphere after the eruption, even after sulphate aerosol formation. This excess stratospheric humidity warms the tropospheric climate, and acts to balance the climatic cooling induced by the volcanic aerosol, especially because the humidity anomaly lasts for a period that is longer than the residence time of aerosol in the stratosphere. In particular, northern hemisphere high latitude cooling is reduced in magnitude. We discuss this mechanism in the context of the discrepancy between the observed and modelled cooling following the Krakatau eruption in 1883. We hypothesize that moist coignimbrite plumes caused by pyroclastic flows travelling over ocean rather than land, resulting from an eruption close enough to the ocean, might provide the additional source of stratospheric water vapour.
Abstract:
Many different reagents and methodologies have been utilised for the modification of synthetic and biological macromolecular systems. In addition, an area of intense research at present is the construction of hybrid biosynthetic polymers, comprised of biologically active species immobilised on or complexed with synthetic polymers. One of the most useful and widely applicable techniques available for functionalisation of macromolecular systems involves indiscriminate carbene insertion processes. The highly reactive and non-specific nature of carbenes has enabled a multitude of macromolecular structures to be functionalised without the need for specialised reagents or additives. The use of diazirines as stable carbene precursors has increased dramatically over the past twenty years, and these reagents are fast becoming the most popular photophores for photoaffinity labelling and biological applications in which covalent modification of macromolecular structures is the basis for understanding structure-activity relationships. This review reports the synthesis and application of a diverse range of diazirines in macromolecular systems.
Abstract:
The vibrations and tunnelling motion of malonaldehyde have been studied in their full dimensionality using an internal coordinate path Hamiltonian. In this representation there is one large-amplitude internal coordinate s and 3N - 7 (= 20) normal coordinates Q which are orthogonal to the large-amplitude motion at all points. It is crucial that a high-accuracy potential energy surface is used in order to obtain a good representation of the tunnelling motion; we use a Møller-Plesset (MP2) surface. Our methodology is variational, that is, we diagonalize a sufficiently large matrix in order to obtain the required vibrational levels, so an exact representation of the kinetic energy operator is used. In a harmonic valley representation (s, Q) complete convergence of the normal coordinate motions and the internal coordinate motions has been obtained; for the anharmonic valley, in which we use two- and three-body terms in the surface (s, Q(1), Q(2)), we also obtain complete convergence. Our final computed stretching fundamentals are deficient because our potential energy surface is truncated at quartic terms in the normal coordinates, but our lower fundamentals are good.
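The variational recipe in the abstract, building a Hamiltonian matrix and diagonalising it to obtain tunnelling levels, can be illustrated on a one-dimensional surrogate for the large-amplitude coordinate s. The double-well potential, the dimensionless units, and the finite-difference grid below are all invented for illustration; the 21-coordinate path Hamiltonian and MP2 surface of the paper are not reproduced.

```python
import numpy as np

# One-dimensional surrogate: a symmetric double well
# V(s) = 0.05 s^4 - 0.5 s^2 in dimensionless units (hbar = m = 1),
# with minima at s = ±sqrt(5) and a barrier of height 1.25 at s = 0.
n, L = 600, 6.0
s = np.linspace(-L, L, n)
h = s[1] - s[0]
V = 0.05 * s**4 - 0.5 * s**2

# Second-order finite-difference representation of the kinetic energy
# operator -(1/2) d^2/ds^2 on the grid.
T = (np.diag(np.full(n, 1.0))
     - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / h**2
H = T + np.diag(V)

E = np.linalg.eigvalsh(H)    # sorted eigenvalues of the Hamiltonian
splitting = E[1] - E[0]      # tunnelling splitting of the lowest doublet
```

Because the method is variational, refining the grid (or, in the paper's setting, enlarging the basis) converges the levels from above, which is why the authors can speak of "complete convergence" of the coordinate motions.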
Abstract:
The effect of different sugars and glyoxal on the formation of acrylamide in low-moisture starch-based model systems was studied, and kinetic data were obtained. Glucose was more effective than fructose, tagatose, or maltose in acrylamide formation, whereas the importance of glyoxal as a key sugar fragmentation intermediate was confirmed. Glyoxal formation was greater in model systems containing asparagine and glucose rather than fructose. A solid phase microextraction GC-MS method was employed to determine quantitatively the formation of pyrazines in model reaction systems. Substituted pyrazine formation was more evident in model systems containing fructose; however, the unsubstituted homologue, which was the only pyrazine identified in the headspace of glyoxal-asparagine systems, was formed at higher yields when aldoses were used as the reducing sugar. Highly significant correlations were obtained for the relationship between pyrazine and acrylamide formation. The importance of the tautomerization of the asparagine-carbonyl decarboxylated Schiff base in the relative yields of pyrazines and acrylamide is discussed.
Abstract:
Very large scale scheduling and planning tasks cannot be effectively addressed by fully automated schedule optimisation systems, since many key factors which govern 'fitness' in such cases are unformalisable. This raises the question of an interactive (or collaborative) approach, where fitness is assigned by the expert user. Though well-researched in the domains of interactively evolved art and music, this method is as yet rarely used in logistics. This paper concerns a difficulty shared by all interactive evolutionary systems (IESs), but especially those used for logistics or design problems. The difficulty is that objective evaluation of IESs is severely hampered by the need for expert humans in the loop. This makes it effectively impossible to, for example, determine with statistical confidence any ranking among a decent number of configurations for the parameters and strategy choices. We make headway into this difficulty with an Automated Tester (AT) for such systems. The AT replaces the human in experiments, and has parameters controlling its decision-making accuracy (modelling human error) and a built-in notion of a target solution which may typically be at odds with the solution which is optimal in terms of formalisable fitness. Using the AT, plausible evaluations of alternative designs for the IES can be done, allowing for (and examining the effects of) different levels of user error. We describe such an AT for evaluating an IES for very large scale planning.
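The Automated Tester idea can be sketched concretely. In the toy below the AT holds a hidden target solution and, with probability 1 - error_rate, prefers the candidate closer to that target; otherwise it answers at random, modelling human error. The bit-string encoding, the (1+1) evolution loop and all names are illustrative inventions, not the paper's actual system or parameters.

```python
import random

random.seed(42)

# Hidden target known only to the automated tester, standing in for the
# expert user's (unformalisable) notion of a good schedule.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def distance(candidate):
    return sum(c != t for c, t in zip(candidate, TARGET))

def automated_tester(a, b, error_rate=0.1):
    if random.random() < error_rate:
        return random.choice([a, b])       # erroneous, random verdict
    return a if distance(a) <= distance(b) else b

# Minimal interactive (1+1) evolution loop with the AT in the human's
# seat; error_rate=0.0 here for a reproducible noise-free run.
current = [random.randint(0, 1) for _ in TARGET]
for _ in range(300):
    child = list(current)
    i = random.randrange(len(child))
    child[i] ^= 1                          # single bit-flip mutation
    current = automated_tester(current, child, error_rate=0.0)
```

Raising error_rate and repeating such runs over many seeds is what allows statistically meaningful comparisons between alternative IES configurations without an expert in the loop.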
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
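The orthogonal decomposition machinery underlying this abstract can be sketched with classical Gram-Schmidt orthogonal least squares and error-reduction ratios (ERR), a simpler relative of the extended algorithm described above. The toy regression below, in which the output depends on only two of five candidate regressors, is invented for illustration; the fuzzy-rule weighting matrices and the A-optimality criterion are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression: y depends only on columns 0 and 2 of the candidate
# regressor matrix P, plus a little noise.
n, m = 200, 5
P = rng.standard_normal((n, m))
y = 2.0 * P[:, 0] - 1.5 * P[:, 2] + 0.05 * rng.standard_normal(n)

selected, err = [], []
basis = P.copy()                 # columns get orthogonalised in place
for _ in range(2):               # select the two best regressors
    num = (basis.T @ y) ** 2
    den = np.sum(basis**2, axis=0) * (y @ y)
    ratio = num / den            # ERR: fraction of output energy explained
    ratio[selected] = 0.0        # never reselect a chosen column
    k = int(np.argmax(ratio))
    selected.append(k)
    err.append(float(ratio[k]))
    w = basis[:, k].copy()
    # Gram-Schmidt step: remove w's direction from every column ...
    basis -= np.outer(w, (basis.T @ w) / (w @ w))
    basis[:, k] = w              # ... but keep the chosen column itself
```

Because each new regressor is orthogonalised against those already chosen, the ERR values decompose the explained output energy additively, which is the property the paper exploits to interpret "rule-base energy levels".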
Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system—MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
The mutual influence of surface geometry (e.g. lattice parameters, morphology) and electronic structure is discussed for Cu-Ni bimetallic (111) surfaces. It is found that on flat surfaces the electronic d-states of the adlayer experience very little influence from the substrate electronic structure, owing to their large separation in binding energy and the close match of the Cu and Ni lattice constants. Using carbon monoxide and benzene as probe molecules, it is found that in most cases the reactivity of Cu or Ni adlayers is very similar to that of the corresponding (111) single-crystal surfaces. Exceptions are the adsorption of CO on submonolayers of Cu on Ni(111) and the dissociation of benzene on Ni/Cu(111), which is very different from Ni(111). These differences are related to geometric factors influencing adsorption on these surfaces.
Abstract:
New ways of combining observations with numerical models are discussed, in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We then explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring 'almost equal weight' we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
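The contrast between a pure Monte Carlo proposal and one steered towards the observations can be sketched on a one-dimensional linear-Gaussian toy system. Everything below, the AR(1) dynamics, the noise levels, the resampling rule, is an invented stand-in for the Lorenz 1995 experiments; only the idea of sampling from a proposal p(x_k | x_{k-1}, y_k) that pulls particles towards the observation, with the matching importance-weight correction, reflects the approach of the abstract. The equal-weight construction itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear-Gaussian toy: x_k = a x_{k-1} + model noise, y_k = x_k + obs noise.
a, q, r = 0.9, 1.0, 0.1               # dynamics, model var, obs var
n_part, n_steps = 500, 20
x_true = 0.0
particles = rng.standard_normal(n_part)
weights = np.full(n_part, 1.0 / n_part)

for _ in range(n_steps):
    x_true = a * x_true + np.sqrt(q) * rng.standard_normal()
    y = x_true + np.sqrt(r) * rng.standard_normal()

    prev = particles
    # Steered proposal: sample from p(x_k | x_{k-1}, y_k), which pulls
    # every particle towards the observation y.
    var_p = 1.0 / (1.0 / q + 1.0 / r)
    mean_p = var_p * (a * prev / q + y / r)
    particles = mean_p + np.sqrt(var_p) * rng.standard_normal(n_part)

    # Importance-weight correction: the increment is p(y_k | x_{k-1}),
    # Gaussian with variance q + r, so the filter stays unbiased.
    weights = weights * np.exp(-0.5 * (y - a * prev) ** 2 / (q + r))
    weights /= weights.sum()

    # Resample when the effective ensemble size collapses.
    if 1.0 / np.sum(weights**2) < n_part / 2:
        idx = rng.choice(n_part, size=n_part, p=weights)
        particles = particles[idx]
        weights = np.full(n_part, 1.0 / n_part)

estimate = float(np.sum(weights * particles))
```

With the bootstrap proposal, most particles would land far from y and receive negligible weight; the steered proposal keeps the whole ensemble useful, which is precisely the efficiency argument made in the abstract.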