958 results for Separability Criterion
Abstract:
Part I.
We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of experiments on 4140 steel projectile penetration into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage to recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compaction in the region in front of the projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that cracks expected in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. Final penetration depth, P_max, scales linearly with the initial projectile energy per unit cross-sectional area, e_s, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is P_max (mm) = 1.15 e_s (J/mm^2) + 16.39 (see the numerical sketch following this list).
3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, t_max, increases slowly with e_s and is approximately independent of projectile radius, the dependence of t_max on projectile length is suggested to be described by t_max (μs) = 2.08 e_s (J/mm^2) + 349.0 m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity-time histories suggest that the whole penetration history is divided into three stages: (1) an initial stage in which the projectile velocity change is small because of the very small contact area between the projectile and target materials; (2) a steady penetration stage in which the projectile velocity continues to decrease smoothly; (3) a penetration-stop stage in which the projectile deceleration rises sharply as the velocity approaches a critical value of ~35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on the target materials during penetration is estimated to be comparable to the shock wave pressure.
7. A similarity of the penetration process is found to be described by a relation between normalized penetration depth, P/P_max, and normalized penetration time, t/t_max, as P/P_max = f(t/t_max), where f is a function of t/t_max. After f(t/t_max) is determined using experimental data for projectiles of 150 mm length, the penetration depth-time history predicted by this relation for projectiles of 100 mm length is in good agreement with the experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first-principles model for rigid-body penetration is suggested. The model incorporates sub-models for the contact area between projectile and target materials, the friction coefficient, the penetration-stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on the target materials is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored because of cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σ_f/σ_f0 = exp(0.0905 (log(ε̇/ε̇_0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ_f0 and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
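The empirical fits in items 2, 4 and 6 are easy to exercise numerically. The sketch below is ours, not the thesis's code: the coefficients are transcribed from the abstract, while the projectile mass, radius, and velocity are illustrative assumptions.

```python
# Sketch: evaluating the empirical penetration fits quoted in items 2, 4 and 6.
# Coefficients come from the abstract; the projectile parameters are hypothetical.
import math

def penetration_depth_mm(es):
    """Item 2: P_max (mm) = 1.15 e_s (J/mm^2) + 16.39."""
    return 1.15 * es + 16.39

def penetration_duration_us(es, mass_g, radius_mm):
    """Item 4: t_max (us) = 2.08 e_s + 349.0 m / (pi R^2), m in g, R in mm."""
    return 2.08 * es + 349.0 * mass_g / (math.pi * radius_mm**2)

def steady_deceleration_g(v0):
    """Item 6: a (g) = 192.4 v + 1.89e4, v in m/s."""
    return 192.4 * v0 + 1.89e4

m_kg, R_mm, v0 = 1.5, 20.0, 300.0                # hypothetical 40 mm projectile
es = 0.5 * m_kg * v0**2 / (math.pi * R_mm**2)    # energy per unit area, J/mm^2
print(f"e_s   = {es:.1f} J/mm^2")
print(f"P_max = {penetration_depth_mm(es):.1f} mm")
print(f"t_max = {penetration_duration_us(es, 1000 * m_kg, R_mm):.0f} us")
print(f"a     = {steady_deceleration_g(v0):.3g} g")
```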
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. The experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) a ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, in which the wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) a ramp wave with an amplitude of 2.11 GPa that follows the precursor when the peak loading pressure is 8.4 GPa, in which the wave velocity drops below the bulk wave velocity; (3) a shock wave achieving the final shock state, which forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], for shock wave pressures between 6 and 40 GPa and ρ_0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure is similar in both media. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of the fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogue material to fused SiO2 under shock loading. The experimental results from spherical projectile impact demonstrate: (1) the supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) in vitreous GeO2, unsupported shock waves with peak pressures in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ 1/x^3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
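As a quick illustration of the quoted Hugoniot, the sketch below (ours, not from the thesis) combines D = 0.917 + 1.711u with the standard Rankine-Hugoniot momentum jump P = ρ_0 D u to map particle velocity to shock pressure; the sample particle velocities are assumptions.

```python
# Sketch: shock pressure from the quoted Hugoniot D = 0.917 + 1.711 u (km/s)
# via the Rankine-Hugoniot momentum jump P = rho_0 * D * u.
rho0 = 3655.0  # kg/m^3, i.e. 3.655 g/cm^3 from the abstract

def shock_pressure_GPa(u_km_s):
    D_km_s = 0.917 + 1.711 * u_km_s       # shock velocity
    return rho0 * (D_km_s * 1e3) * (u_km_s * 1e3) / 1e9

for u in (0.5, 1.0, 2.0):                  # particle velocities, km/s
    print(f"u = {u:.1f} km/s -> P = {shock_pressure_GPa(u):.1f} GPa")
```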
Abstract:
Vortex rings constitute the main structure in the wakes of a wide class of swimming and flying animals, as well as in cardiac flows and in the jets generated by some mosses and fungi. However, there is a physical limit, determined by an energy maximization principle called the Kelvin-Benjamin principle, to the size that axisymmetric vortex rings can achieve. The existence of this limit is known to lead to the separation of a growing vortex ring from the shear layer feeding it, a process known as 'vortex pinch-off' and characterized by the dimensionless vortex formation number. The goal of this thesis is to improve our understanding of vortex pinch-off as it relates to biological propulsion, and to provide future researchers with tools to assist in identifying and predicting pinch-off in biological flows.
To this end, we introduce a method for identifying pinch-off in starting jets using the Lagrangian coherent structures in the flow, and apply this criterion to an experimentally generated starting jet. Since most naturally occurring vortex rings are not circular, we extend the definition of the vortex formation number to include non-axisymmetric vortex rings, and find that the formation number for moderately non-axisymmetric vortices is similar to that of circular vortex rings. This suggests that naturally occurring vortex rings may be modeled as axisymmetric vortex rings. We therefore consider the perturbation response of the Norbury family of axisymmetric vortex rings. This family is chosen to model vortex rings of increasing thickness and circulation, and their response to prolate shape perturbations is simulated using contour dynamics. Finally, we use contour dynamics to simulate the response of more realistic vortex ring models, constructed from experimental data using nested contours, to perturbations that more closely resemble those encountered by forming vortices. In both families of models, a change in response analogous to pinch-off is found as members of the family with progressively thicker cores are considered. We posit that this analogy may be exploited to understand and predict pinch-off in complex biological flows, where current methods are not applicable in practice and criteria based on the properties of the vortex rings alone are necessary.
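For context, the classical diagnostic behind the formation number is the dimensionless formation time. A minimal sketch follows (ours; the velocity program and geometry are assumed, and the thesis's LCS-based criterion is more sophisticated): for circular starting jets, pinch-off is conventionally reported when the piston stroke-to-diameter ratio reaches a formation number near 4.

```python
# Sketch: cumulative formation time T*(t) = (integral of U_p dt) / D for a
# starting jet; pinch-off is conventionally reported near T* ~ 4.
import numpy as np

def formation_time(t, U_p, D):
    """Cumulative trapezoidal integral of the piston velocity, scaled by D."""
    inc = np.diff(t) * 0.5 * (U_p[1:] + U_p[:-1])
    return np.concatenate(([0.0], np.cumsum(inc))) / D

t = np.linspace(0.0, 5.0, 501)       # s (illustrative)
U_p = np.full_like(t, 0.08)          # m/s, constant-velocity piston (illustrative)
D = 0.025                            # m, nozzle diameter (illustrative)

T = formation_time(t, U_p, D)
print(f"T* = 4 reached at t = {t[np.searchsorted(T, 4.0)]:.2f} s")
```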
Abstract:
This thesis consists of three separate studies of roles that black holes might play in our universe.
In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.
In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.
Abstract:
The behavior of population transfer in an excited-doublet four-level system driven by linearly polarized few-cycle ultrashort laser pulses is investigated numerically. It is shown that almost complete population transfer can be achieved even when the adiabatic criterion is not fulfilled. Moreover, the robustness of this scheme with respect to the Rabi frequencies and chirp rates of the pulses is explored.
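As a much-reduced illustration of the kind of simulation involved (a two-level toy model under our own assumed pulse parameters, not the paper's excited-doublet four-level system), the time-dependent Schrödinger equation can be integrated directly through a few-cycle pulse:

```python
# Toy sketch: population transfer in a two-level atom driven by a few-cycle
# pulse, integrated with scipy. All parameters are illustrative (atomic units).
import numpy as np
from scipy.integrate import solve_ivp

w0, E0, tau = 1.0, 0.5, 10.0          # transition frequency, peak field, pulse width

def field(t):
    return E0 * np.exp(-(t / tau) ** 2) * np.cos(w0 * t)

def tdse(t, c):
    cg, ce = c[0] + 1j * c[1], c[2] + 1j * c[3]
    dcg = -1j * (-field(t) * ce)           # H = [[0, -E(t)], [-E(t), w0]]
    dce = -1j * (w0 * ce - field(t) * cg)
    return [dcg.real, dcg.imag, dce.real, dce.imag]

sol = solve_ivp(tdse, (-50.0, 50.0), [1.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print(f"final excited-state population: {sol.y[2, -1]**2 + sol.y[3, -1]**2:.3f}")
```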
Abstract:
Two of the most important questions in mantle dynamics are investigated in three separate studies: the influence of phase transitions (studies 1 and 2), and the influence of temperature-dependent viscosity (study 3).
(1) Numerical modeling of mantle convection in a three-dimensional spherical shell incorporating the two major mantle phase transitions reveals an inherently three-dimensional flow pattern characterized by accumulation of cold downwellings above the 670 km discontinuity, and cylindrical 'avalanches' of upper mantle material into the lower mantle. The exothermic phase transition at 400 km depth reduces the degree of layering. A region of strongly-depressed temperature occurs at the base of the mantle. The temperature field is strongly modulated by this partial layering, both locally and in globally-averaged diagnostics. Flow penetration is strongly wavelength-dependent, with easy penetration at long wavelengths but strong inhibition at short wavelengths. The amplitude of the geoid is not significantly affected.
(2) Using a simple criterion for the deflection of an upwelling or downwelling by an endothermic phase transition, the scaling of the critical phase buoyancy parameter with the important length scales is obtained. The derived trends match those observed in numerical simulations, i.e., deflection is enhanced by (a) shorter wavelengths, (b) narrower up/downwellings, (c) internal heating, and (d) narrower phase loops.
(3) A systematic investigation into the effects of temperature-dependent viscosity on mantle convection has been performed in three-dimensional Cartesian geometry, with a factor of 1000-2500 viscosity variation, and Rayleigh numbers of 10^5-10^7. Enormous differences in model behavior are found, depending on the details of rheology, heating mode, compressibility and boundary conditions. Stress-free boundaries, compressibility, and temperature-dependent viscosity all favor long-wavelength flows, even in internally heated cases. However, small cells are obtained with some parameter combinations. Downwelling plumes and upwelling sheets are possible when viscosity is dependent solely on temperature. Viscous dissipation becomes important with temperature-dependent viscosity.
The sensitivity of mantle flow and structure to these various complexities illustrates the importance of performing mantle convection calculations with rheological and thermodynamic properties matching as closely as possible those of the Earth.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by exploiting mathematical tools such as analysis, probability theory, matrix theory, and optimization theory. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD), and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the application of matrix decompositions and majorization theory to practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, new algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
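Two quantities quoted above are easy to make concrete. In any GMD-type design, the common diagonal of the triangular factor equals the geometric mean of the channel's singular values, so every subchannel sees the same gain, and the quoted complexity advantage is K/log_2(K). The sketch below illustrates both; the random channel is our own example, not a system from the thesis.

```python
# Sketch: the common subchannel gain behind GMD/GGMD designs (geometric mean
# of the singular values) and the quoted K/log2(K) complexity advantage.
import numpy as np

def subchannel_gain(H):
    s = np.linalg.svd(H, compute_uv=False)
    return np.exp(np.mean(np.log(s)))       # geometric mean of singular values

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(f"common subchannel gain: {subchannel_gain(H):.3f}")

for K in (64, 256, 1024):                    # symbols per block, powers of 2
    print(f"K = {K:4d}: GGMD advantage = {K / np.log2(K):.1f}x")
```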
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderate-to-high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction, but shares the same asymptotic BER performance with the ST-GMD DFE transceiver, is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed. They are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over LTV scalar channels. For both the case of known LTV channels and that of unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. In addition, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multipath Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the channel frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
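The co-pilot construction rests on the difference co-array: M pilot positions generate up to O(M^2) distinct pairwise differences. A minimal sketch follows; the pilot positions are our illustration, not the thesis's alternating placement.

```python
# Sketch: difference co-array of M pilot tone positions. The distinct pairwise
# differences act as virtual co-pilots, up to O(M^2) of them.
import numpy as np

pilots = np.array([0, 1, 4, 9, 11])                  # hypothetical positions
lags = np.unique((pilots[:, None] - pilots[None, :]).ravel())
print(f"{pilots.size} physical pilots -> {lags.size} distinct co-array lags")
print(lags)
```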
Abstract:
This paper studies the entanglement dynamics of two noninteracting atoms in a squeezed vacuum. By analyzing the evolution of different initial entangled states, we find that entangled atoms in a squeezed vacuum lose their entanglement faster than in an ordinary vacuum, and that the greater the squeezing, the faster the entanglement decays. The time evolution of the concurrence and the separability "distance" Lambda can be used to explain this unusual phenomenon of entanglement sudden death.
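The concurrence mentioned above has a closed form for two qubits. A minimal sketch (ours; the squeezed-vacuum master equation itself is not reproduced here):

```python
# Sketch: Wootters concurrence of a two-qubit density matrix rho,
# C = max(0, l1 - l2 - l3 - l4), where l_i are the decreasing square roots of
# the eigenvalues of rho * (sy x sy) rho* (sy x sy).
import numpy as np

SY2 = np.kron(np.array([[0, -1j], [1j, 0]]), np.array([[0, -1j], [1j, 0]]))

def concurrence(rho):
    R = rho @ SY2 @ rho.conj() @ SY2
    l = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, l[0] - l[1] - l[2] - l[3])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
print(concurrence(np.outer(bell, bell.conj())))   # -> 1.0, maximally entangled
```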
Abstract:
Separating the dynamics of variables that evolve on different timescales is a common assumption in exploring complex systems, and a great deal of progress has been made in understanding chemical systems by treating the fast processes of an activated chemical species independently from the slower processes that precede activation. Protein motion underlies all biocatalytic reactions, and understanding the nature of this motion is central to understanding how enzymes catalyze reactions with such specificity and such rate enhancement. This understanding is challenged by evidence of breakdowns in the separability of the timescales of dynamics in the active site from the motions of the solvating protein. Quantum simulation methods that bridge these timescales by simultaneously evolving quantum and classical degrees of freedom provide an important means of exploring this breakdown. In the following dissertation, three problems in enzyme catalysis are explored through quantum simulation.
Abstract:
Pipes containing flammable gaseous mixtures may be subjected to internal detonation. When the detonation normally impinges on a closed end, a reflected shock wave is created to bring the flow back to rest. This study built on the work of Karnesky (2010) and examined the deformation of thin-walled stainless steel tubes subjected to internal reflected gaseous detonations. A ripple pattern was observed in the tube wall for certain fill pressures, and a criterion was developed that predicts when the ripple pattern will form. A two-dimensional finite element analysis was performed using Johnson-Cook material properties; the pressure loading created by reflected gaseous detonations was accounted for with a previously developed pressure model. The residual plastic strains from experiments and computations were in good agreement.
During the examination of detonation-driven deformation, discrepancies were discovered in our understanding of reflected gaseous detonation behavior. Previous models did not accurately describe the nature of the reflected shock wave, which motivated further experiments in a detonation tube with optical access. Pressure sensors and schlieren images were used to examine reflected shock behavior, and it was determined that the discrepancies were related to the reaction zone thickness behind the detonation front. During these experiments, reflected shock bifurcation did not appear to occur, but the unfocused visualization system made it impossible to be certain. This prompted construction of a focused schlieren system to investigate possible shock wave-boundary layer interaction, and heat-flux gauges were used to analyze the boundary layer behind the detonation front. Using these data with an analytical boundary layer solution, it was determined that the strong thermal boundary layer present behind the detonation front inhibits the development of reflected shock wave bifurcation.
Abstract:
A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of a design uses performance parameters, which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form: they give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which is an optimization problem.
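A minimal sketch of this evaluation pipeline may help fix ideas; the preference shapes, weights, and combination rule below are illustrative assumptions, not the thesis's specific choices.

```python
# Sketch: "soft" design criteria via preference functions in [0, 1], combined
# into an overall evaluation measure (here a weighted geometric mean).
import math

def preference(x, good, bad):
    """1 at 'good', 0 at 'bad', linear in between, clipped to [0, 1]."""
    return min(1.0, max(0.0, (x - bad) / (good - bad)))

def overall(prefs, weights):
    return math.prod(p ** w for p, w in zip(prefs, weights))

# hypothetical design: peak drift (cm) and construction cost (k$)
prefs = [preference(2.1, good=1.0, bad=5.0),
         preference(320.0, good=250.0, bad=500.0)]
print(f"overall evaluation measure: {overall(prefs, [0.6, 0.4]):.3f}")
```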
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
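For concreteness, a generic real-coded genetic algorithm is sketched below; it shows only the selection-crossover-mutation loop and is not the hGA or vGA of the thesis.

```python
# Sketch: a minimal real-coded genetic algorithm maximizing an evaluation
# measure f over box-bounded continuous design parameters.
import random

def ga_maximize(f, bounds, pop_size=40, generations=200, p_mut=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            for i, (lo, hi) in enumerate(bounds):                # mutation
                if random.random() < p_mut:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=f)

best = ga_maximize(lambda x: -(x[0] - 2.0) ** 2 - (x[1] + 1.0) ** 2,
                   [(-5.0, 5.0), (-5.0, 5.0)])
print(best)   # converges toward the optimum [2, -1]
```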
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
Abstract:
Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity to humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees of safety and performance. Robots operating in unstructured environments also face limited sensing capability, and correctly inferring a robot's progress toward a high-level goal can be challenging.
This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem that has received only limited attention.
This thesis proposes tractable, approximate algorithms for the control synthesis problem using finite state controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for the satisfaction of LTL formulas. First, maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computation of the gradients is derived, which allows the use of first-order optimization techniques.
The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through the novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
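For a finite ergodic chain the Poisson equation reads (I - P)h = r - g*1 with gain g = pi @ r, and it pins down the long-run average behavior that such constrained programs optimize. A small numerical sketch (the chain and rewards are our illustration, not from the thesis):

```python
# Sketch: solving the Poisson equation (I - P) h = r - g*1 for a small ergodic
# Markov chain, with gain g = pi @ r and the normalization pi @ h = 0.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.3, 0.0, 0.7]])
r = np.array([1.0, 0.0, 2.0])

w, V = np.linalg.eig(P.T)                       # stationary distribution pi:
pi = np.real(V[:, np.argmax(np.real(w))])       # left eigenvector for eigenvalue 1
pi /= pi.sum()
g = pi @ r                                      # long-run average reward

A = np.vstack([np.eye(3) - P, pi])              # append pi @ h = 0 to fix the
b = np.append(r - g, 0.0)                       # additive constant in h
h = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"gain g = {g:.4f}, bias h = {np.round(h, 4)}")
```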
The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.
Abstract:
In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations relative to a background-only hypothesis. It is therefore desirable to know the physical shape of the background mass distribution, as the use of an improper function can bias the observed limits. Using an information-theoretic (I-T) approach to valid inference, we apply the Akaike Information Criterion (AIC) as a measure of the separation of a fitting model from the data. We then implement a multi-model inference ranking method to build a fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as potential physics analyses at future detectors.
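The core of the I-T ranking is elementary to state: compute AIC = 2k - 2 ln L_max for each candidate background shape and convert the AIC differences into Akaike weights. A sketch follows; the model names and likelihood values are illustrative, not CMS results.

```python
# Sketch: AIC ranking and Akaike weights for candidate background models.
# (name, number of parameters k, maximized log-likelihood lnL) -- hypothetical
import math

fits = [("exponential", 2, -1043.2),
        ("power-law",   2, -1041.8),
        ("bernstein-4", 5, -1039.1)]

aic = {name: 2 * k - 2 * lnL for name, k, lnL in fits}
a_min = min(aic.values())
w = {name: math.exp(-(a - a_min) / 2) for name, a in aic.items()}
norm = sum(w.values())

for name in sorted(aic, key=aic.get):
    print(f"{name:12s} AIC = {aic[name]:7.1f}  weight = {w[name] / norm:.3f}")
```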
Abstract:
An overview is provided of studies on hypertrophic phytoplankton, in order to explore the subject and to point out neglected areas of research in this increasingly important theme. The authors restrict themselves to stagnant environments, using a community criterion to define hypertrophic environments: those whose yearly average phytoplankton chlorophyll is equal to or higher than 100 mg per cubic metre of water. The paper deals with species composition, diversity, biomass, primary production, losses, and seasonal succession of hypertrophic phytoplankton. Other topics, such as population dynamics and ecophysiological issues, either lack enough information to be considered or are already well known, e.g. Microcystis and Oscillatoria ecophysiology.
Abstract:
This study aimed to evaluate the floristic and structural composition of the shrub-tree component of submontane Dense Ombrophilous Forest at different stages of natural regeneration on the southeastern slope of Ilha Grande State Park, RJ. For the floristic inventory, unsystematic collections were made in different stretches of this slope. The species list was supplemented by consulting exsiccatae in the herbaria of Rio de Janeiro (FCAB, GUA, HB, HRJ, R, RB, RBR, RFA, RFFP and RUSU) and by the phytosociological inventory. The conservation status of the inventoried species in the Brazilian Flora was verified. For the phytosociological inventory, 34 sample plots were established, totaling 1.02 ha of sampled area. All shrub-tree individuals with DBH ≥ 5 cm were recorded and, after identification, deposited in the Herbarium of the State University of Rio de Janeiro (HRJ). The statistical package FITOPAC 2.1 was used for data analysis. The similarity between the remnant investigated in this study and fourteen other areas of Rio de Janeiro, whether on Ilha Grande itself or not, was evaluated using the Sorensen similarity coefficient, the unweighted pair-group average (UPGMA) clustering criterion, and a bootstrap resampling method for group structure, using the programs PAST v1.34 and Multiv 2.4. From the herbarium survey and the floristic and phytosociological inventories carried out in this work, 3,470 records were analyzed: 1,778 from the herbarium survey, 1,536 from the phytosociological survey and 156 from the floristic inventory. These records corresponded to 606 species or morphospecies of angiosperms and one pteridophyte. The results revealed 22 species threatened with extinction in the Flora of Brazil, seven of which are exclusive to the phytosociological sampling: Abarema cochliacarpos (Gomes) Barneby & J.W. Grimes, Chrysophyllum flexuosum Mart., Ficus pulchella Schott ex Spreng., Macrotorus utriculatus Perkins, Myrceugenia myrcioides (Cambess.) O.Berg, Rudgea interrupta Benth. and Urbanodendron bahiense (Meisn.) Rohwer. The phytosociological study inventoried 1,536 individuals of 217 species in 53 families. The Shannon diversity index (H') was 4.702 nats/ind and the evenness (J) was 0.874. The 10 richest families were: Myrtaceae (31 spp.), Rubiaceae (21), Fabaceae (17), Lauraceae (12), Euphorbiaceae (11), Monimiaceae (8), Melastomataceae (7), Sapindaceae (7), Sapotaceae (6) and Annonaceae (6). The 10 highest species Importance Values were for Chrysophyllum flexuosum (3.43%), Lamanonia ternata Vell. (3.40%), Hyeronima alchorneoides Allemão (2.83%), Actinostemon verticillatus (Klotzsch) Baill. (2.55%), Psychotria brasiliensis Vell. (2.55%), Eriotheca pentaphylla (Vell.) A. Robyns (2.28%), Guatteria australis A. St.-Hil. (2.12%), Mabea brasiliensis Müll. Arg. (2.04%), Miconia prasina (Sw.) DC. (1.89%) and Rustia formosa (Cham. & Schltdl. ex DC.) Klotzsch (1.82%). Twenty-seven percent of the sampled species were represented by a single individual. The floristic analyses based on the Sorensen similarity index indicated the different diversity values of the areas and the phytogeographic distribution of the species as the main variables behind the formation of the clusters.
The ecological group data for the individuals in the phytosociological survey indicated that late-secondary individuals made up the largest share of the sample. It is concluded that the study area is a secondary forest at an intermediate stage of regeneration, with great species richness, much of it of considerable ecological importance.
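The diversity statistics reported above follow from species abundances alone. A minimal sketch (the abundances are illustrative, not the survey data):

```python
# Sketch: Shannon diversity H' = -sum(p_i ln p_i), in nats per individual,
# and Pielou evenness J = H' / ln(S) for S species.
import math

abundances = [52, 31, 17, 9, 5, 3, 1, 1]   # individuals per species (hypothetical)
N = sum(abundances)
p = [n / N for n in abundances]

H = -sum(pi * math.log(pi) for pi in p)
J = H / math.log(len(abundances))
print(f"H' = {H:.3f} nats/ind, J = {J:.3f}")
```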
Abstract:
We investigate the energy spectrum of fermionized bosonic atoms, which behave very much like spinless noninteracting fermions, in optical lattices by means of a perturbation expansion and the retarded Green's function method. The results show that the energy spectrum splits into two bands at single occupation: the fermionized bosonic atom occupies a nonvanishing energy state while the hole left behind has vanishing energy at any given momentum, and the system is in a Mott-insulating state with an energy gap. Using this characteristic of the energy spectra, we obtain a criterion with which one can judge whether the Tonks-Girardeau (TG) gas regime has been achieved.