930 results for Large detector systems for particle and astroparticle physics
Abstract:
The study covers fishing capture technology, which includes the catching of aquatic animals using any kind of gear or technique operated from a vessel. The fishing techniques employed vary depending on the type of fishery and range from a simple small hook attached to a line to large and complex midwater trawls or seines operated by large fishing vessels. The size and autonomy of a fishing vessel are largely determined by its ability to handle, process and store fish in good condition on board, and these two characteristics have therefore been greatly influenced by the introduction of ice and refrigeration machinery. Other technological developments, especially hydraulic hauling machinery, fish-finding electronics and synthetic twines, have also had a major impact on the efficiency and profitability of fishing vessels. A wide variety of fishing gears and practices, ranging from small-scale artisanal to advanced mechanised systems, are used for fish capture in Kerala. The most important among these gears are trawls, seines, lines, gillnets, entangling nets and traps. The modern sector was introduced in 1953 in the Neendakara-Shakthikulangara region under the initiative of the Indo-Norwegian Project (INP), whose main contribution to the fishing industry was mechanically operated boats equipped with new fishing nets. Soon after mechanization, a motorization programme gained momentum in Kerala, especially in the Alleppey, Ernakulam and Kollam districts.
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal for reliably and accurately controlling complex quantum systems. In practice, the reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that allows one to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states to a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits on certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices whose basic information carrier is the qubit; it also extends to systems whose fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems uses concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. The aforementioned mathematical results allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time needed to reach a given fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping; for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
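A minimal numerical sketch of the certification idea described above, assuming a hypothetical two-qubit example: a target unitary (here a CNOT) is compared against a noisy implementation by sending in a small set of probe states and averaging the resulting state fidelities. The functions depolarize and noisy_gate, the random probe states, and the noise strength p = 0.05 are illustrative assumptions, not the thesis's actual minimal-input construction.

```python
import numpy as np

def depolarize(rho, p):
    """Illustrative noise model: depolarizing channel with strength p."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Target gate: CNOT on two qubits (d = 4).
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

def noisy_gate(rho, p=0.05):
    """Hypothetical device: ideal CNOT followed by depolarizing noise."""
    return depolarize(U @ rho @ U.conj().T, p)

# A small set of probe states (illustrative, not the minimal set derived in the thesis).
rng = np.random.default_rng(2)
fidelities = []
for _ in range(5):
    psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    psi /= np.linalg.norm(psi)
    rho_in = np.outer(psi, psi.conj())          # probe state |psi><psi|
    rho_out = noisy_gate(rho_in)                 # actual channel output
    target = U @ psi                             # ideal output state
    fidelities.append(np.real(target.conj() @ rho_out @ target))

print("average probe-state fidelity:", np.mean(fidelities))
```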
Abstract:
Soils represent a large carbon pool, approximately 1500 Gt, which is equivalent to almost three times the quantity stored in terrestrial biomass and twice the amount stored in the atmosphere. Any modification of land use or land management can induce variations in soil carbon stocks, even in agricultural systems that are perceived to be in a steady state. Tillage practices often induce soil aerobic conditions that are favourable to microbial activity and may lead to a degradation of soil structure. As a result, mineralisation of soil organic matter increases in the long term. The adoption of no-tillage systems and the maintenance of a permanent vegetation cover using direct-seeding mulch-based cropping (DMC) systems may increase carbon levels in the topsoil. In Brazil, no-tillage practices (mainly DMC) were introduced approximately 30 years ago in the south, in Paraná state, primarily as a means of reducing erosion. Subsequently, research began to address the management of crop residues and their effects on soil fertility, whether in terms of phosphorus management, as a means of controlling soil acidity, or in determining how manures can be applied in a more localised manner. The spread of no-till in Brazil has involved a large amount of extension work. The area under no-tillage is still increasing in the centre and north of the country and currently occupies ca. 20 million hectares, covering a diversity of environmental conditions, cropping systems and management practices. Most studies of Brazilian soils give rates of carbon storage in the top 40 cm of the soil of 0.4 to 1.7 t C ha⁻¹ per year, with the highest rates in the Cerrado region. However, caution must be taken when analysing DMC systems in terms of carbon sequestration. Comparisons should include changes in trace gas fluxes and should not be limited to a consideration of carbon storage in the soil alone if the full implications for global warming are to be assessed.
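As an illustrative back-of-the-envelope combination of the figures quoted above (not a result reported in the study), scaling the per-hectare storage rates to the roughly 20 million hectares under no-tillage would correspond to

```latex
0.4\text{--}1.7\ \mathrm{t\,C\,ha^{-1}\,yr^{-1}}
\times 20\times10^{6}\ \mathrm{ha}
\;\approx\; 8\text{--}34\ \mathrm{Mt\,C\,yr^{-1}} .
```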
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
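A minimal sketch of the general idea described above, assuming a toy linear system rather than the paper's models: an observability matrix is stacked from the model propagator and observation operator, its singular value decomposition indicates which state directions are constrained by the data, and a Tikhonov-regularized inverse reconstructs the initial state in the presence of observation noise. All matrices, dimensions and the regularization weight are illustrative assumptions.

```python
import numpy as np

# Toy linear dynamics x_{k+1} = M x_k, observed through H at times k = 0..K-1.
n, K = 6, 4                                           # state size, number of observation times
rng = np.random.default_rng(0)
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))    # illustrative model propagator
H = np.zeros((2, n)); H[0, 0] = H[1, 1] = 1.0         # observe only the first two components

# Observability matrix: stacked H M^k blocks.
blocks, Mk = [], np.eye(n)
for _ in range(K):
    blocks.append(H @ Mk)
    Mk = M @ Mk
G = np.vstack(blocks)                                 # (2K x n) observability matrix

# SVD: small singular values mark poorly observed state directions.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
print("singular values:", np.round(s, 3))

# Synthetic observations of a true initial state with noise of standard deviation sigma.
x_true = rng.standard_normal(n)
sigma = 0.1
y = G @ x_true + sigma * rng.standard_normal(G.shape[0])

# Tikhonov-regularized reconstruction:
# minimize ||G x - y||^2 + alpha ||x||^2  =>  x = (G^T G + alpha I)^{-1} G^T y.
alpha = sigma**2                                      # illustrative regularization weight
x_est = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ y)
print("reconstruction error:", np.linalg.norm(x_est - x_true))
```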
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make it usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This opens the possibility of non-linear data assimilation in large-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both schemes become degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
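For orientation only, a bare-bones SIR particle filter on the Lorenz (1963) model. It uses the standard transition density as the proposal, i.e. it illustrates the baseline scheme whose degeneracy the proposal-density filters above are designed to avoid, not the paper's new filter. Integrator, noise levels and parameters are illustrative assumptions.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) model (illustrative integrator)."""
    dx = np.empty_like(x)
    dx[..., 0] = sigma * (x[..., 1] - x[..., 0])
    dx[..., 1] = x[..., 0] * (rho - x[..., 2]) - x[..., 1]
    dx[..., 2] = x[..., 0] * x[..., 1] - beta * x[..., 2]
    return x + dt * dx

rng = np.random.default_rng(1)
n_particles, obs_std, model_std = 500, 1.0, 0.1

truth = np.array([1.0, 1.0, 1.0])
particles = truth + rng.standard_normal((n_particles, 3))     # initial ensemble
weights = np.full(n_particles, 1.0 / n_particles)

for step in range(100):
    # Propagate truth and particles, adding model noise (standard-proposal SIR).
    truth = lorenz63_step(truth)
    particles = lorenz63_step(particles) + model_std * rng.standard_normal(particles.shape)

    if step % 10 == 0:                                        # observe the x-component every 10 steps
        y = truth[0] + obs_std * rng.standard_normal()
        weights *= np.exp(-0.5 * ((y - particles[:, 0]) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample when the effective ensemble size collapses (the degeneracy symptom).
        if 1.0 / np.sum(weights**2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print("posterior mean:", particles.mean(axis=0), " truth:", truth)
```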
Abstract:
A new accelerating cosmology driven only by baryons plus cold dark matter (CDM) is proposed in the framework of general relativity. In this scenario the present accelerating stage of the Universe is powered by the negative pressure describing the gravitationally induced particle production of cold dark matter particles. This kind of scenario has only one free parameter, and the differential equation governing the evolution of the scale factor is exactly the same as that of the ΛCDM model. For a spatially flat Universe, as predicted by inflation (Ω_dm + Ω_baryon = 1), it is found that the effectively observed matter density parameter is Ω_m,eff = 1 − α, where α is the constant parameter specifying the CDM particle creation rate. The supernovae test based on the Union data (2008) requires α ≈ 0.71, so that Ω_m,eff ≈ 0.29, as independently derived from weak gravitational lensing, the large-scale structure and other complementary observations.
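Restating the relation quoted in the abstract with its own numbers (simple arithmetic, nothing new):

```latex
\Omega_{\mathrm{m,eff}} = 1 - \alpha, \qquad
\alpha \simeq 0.71 \;\Rightarrow\; \Omega_{\mathrm{m,eff}} \simeq 1 - 0.71 = 0.29 .
```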
Abstract:
The Pierre Auger Collaboration has reported evidence for anisotropy in the distribution of arrival directions of cosmic rays with energies E > E_th = 5.5 × 10^19 eV. These directions show a correlation with the distribution of nearby extragalactic objects, including an apparent excess around the direction of Centaurus A. If the particles responsible for these excesses at E > E_th are heavy nuclei with charge Z, the proton component of the sources should lead to excesses in the same regions at energies E/Z. We here report the lack of anisotropies in these directions at energies above E_th/Z (for illustrative values of Z = 6, 13, 26). If the anisotropies above E_th are due to nuclei with charge Z, and under reasonable assumptions about the acceleration process, these observations imply stringent constraints on the allowed proton fraction at the lower energies.
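For the illustrative charges quoted, the corresponding lower energy thresholds follow directly from the numbers in the abstract:

```latex
E_{\mathrm{th}} = 5.5\times10^{19}\,\mathrm{eV}:\qquad
E_{\mathrm{th}}/Z \simeq
\begin{cases}
9.2\times10^{18}\,\mathrm{eV}, & Z=6,\\
4.2\times10^{18}\,\mathrm{eV}, & Z=13,\\
2.1\times10^{18}\,\mathrm{eV}, & Z=26.
\end{cases}
```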
Abstract:
A nonvanishing cosmological term in Einstein's equations implies a nonvanishing spacetime curvature even in the absence of any kind of matter. It would, in consequence, affect many of the underlying kinematic tenets of physical theory. The usual commutative spacetime translations of the Poincaré group would be replaced by the mixed conformal translations of the de Sitter group, leading to obvious alterations in elementary concepts such as time, energy and momentum. Although negligible at small scales, such modifications may come to have important consequences both at large scales and for the inflationary picture of the early Universe. A qualitative discussion is presented, which suggests deep changes in Hamiltonian, Quantum and Statistical Mechanics. In particular, in the primeval universe as described by the standard cosmological model, the equations of state of the matter sources could be quite different from those usually introduced.
Abstract:
We report on operational experience with, and the experimental performance of, the SLD barrel Cherenkov Ring Imaging Detector during the 1992 and 1993 physics runs. The liquid (C6F14) and gas (C5F12) radiator recirculation systems have performed well, and the drift gas supply system has operated successfully with TMAE for three years. Cherenkov rings have been observed from both the liquid and gas radiators. The number and angular resolution of Cherenkov photons have been measured and found to be close to design specifications.
Abstract:
Morphologies of SrTiO3 particles and agglomerates synthesized by the traditional Pechini route and by the polymer precipitation route were characterized by the nitrogen adsorption/desorption technique and by transmission electron microscopy (TEM). A cluster structure of nanometric particles forming large agglomerates was observed; these agglomerates are broken during pressing, followed by cluster rearrangement. The mean particle size is larger for SrTiO3 obtained by the Pechini route and is related to the precursor thermal decomposition and particle growth during calcination. Particle growth is controlled by neck growth among particles and the subsequent motion of the particle boundary. © 1995.
Abstract:
In the presence of a cosmological constant, interpreted as a purely geometric entity, absence of matter is represented by a de Sitter spacetime. As a consequence, ordinary Poincaré special relativity is no longer valid and must be replaced by a de Sitter special relativity. By considering the kinematics of a spinless particle in a de Sitter spacetime, we study the geodesics of this spacetime, the ensuing definitions of canonical momenta, and explore possible implications for quantum mechanics. © 2007 American Institute of Physics.
Abstract:
We study soft limits of correlation functions for the density and velocity fields in the theory of structure formation. First, we re-derive the (resummed) consistency conditions at unequal times using the eikonal approximation. These are based solely on symmetry arguments and are therefore universal. Then we explore the existence of equal-time relations in the soft limit, which, on the other hand, depend on the interplay between soft and hard modes. We scrutinize two approaches in the literature: the time-flow formalism, and a background method where the soft mode is absorbed into a locally curved cosmology. The latter has recently been used to set up (angle-averaged) 'equal-time consistency relations'. We explicitly demonstrate that the time-flow relations and 'equal-time consistency conditions' are only fulfilled at the linear level, and fail at next-to-leading order for an Einstein-de Sitter universe. While both proposals break down beyond leading order when applied to the velocities, we find that the 'equal-time consistency conditions' quantitatively approximate the perturbative results for the density contrast. Thus, we generalize the background method to properly incorporate the effect of curvature in the density and velocity fluctuations on short scales, and discuss the reasons behind this discrepancy. We conclude with a few comments on practical implementations and future directions.
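For orientation, the unequal-time consistency relation referred to above takes the following schematic form in the notation common in the large-scale-structure literature (D is the linear growth factor and primes denote correlators with the momentum-conserving delta function stripped off); this is the standard statement, not an equation copied from the paper:

```latex
\lim_{q\to 0}
\langle \delta_{\mathbf q}(\eta)\, \delta_{\mathbf k_1}(\eta_1)\cdots\delta_{\mathbf k_n}(\eta_n)\rangle'
\;\simeq\;
- P_\delta(q,\eta)\,
\sum_{i=1}^{n} \frac{D(\eta_i)}{D(\eta)}\,
\frac{\mathbf k_i\!\cdot\!\mathbf q}{q^{2}}\,
\langle \delta_{\mathbf k_1}(\eta_1)\cdots\delta_{\mathbf k_n}(\eta_n)\rangle' .
```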