871 results for Large-scale Distribution
Abstract:
The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics.

For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can be further classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, so one way to identify the type of a variable star and to classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, together with some other derived parameters. Of these, the period is the most important, since a wrong period leads to sparse light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and possibly large gaps: ground-based observations are affected by the daily daylight cycle and changing weather conditions, while observations from space may suffer from the impact of cosmic-ray particles.
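To make the folding step concrete, here is a minimal Python sketch of phase folding, assuming magnitudes and times held in NumPy arrays; the function name and the toy sinusoidal light curve are illustrative, not taken from the thesis.

```python
import numpy as np

def phase_fold(times, mags, period):
    """Fold a light curve on a trial period.

    times, mags : arrays of observation times and apparent magnitudes
    period      : trial period in the same units as `times`
    Returns phases in [0, 1) and the magnitudes sorted by phase.
    """
    phases = (times / period) % 1.0   # fractional part of cycles elapsed
    order = np.argsort(phases)        # sort so the phased curve plots cleanly
    return phases[order], mags[order]

# Toy example with irregular sampling, as is typical of ground-based surveys
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 300))                            # uneven times
m = 12.0 + 0.5 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.02, t.size)
phase, mag = phase_fold(t, m, period=2.5)                        # true period
```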
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though variable star observation is not their primary goal. The Center for Astrostatistics at Pennsylvania State University was established to support the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

Many period search algorithms exist for astronomical time series analysis. They can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, Gaussian or otherwise). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can arise for several reasons: power leakage to other frequencies, caused by the finite total interval, finite sampling interval and finite amount of data; aliasing, caused by regular sampling; spurious periods, introduced by long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
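As an illustration of the two families of period search methods discussed above, the sketch below runs a Lomb-Scargle periodogram (parametric) and a hand-rolled Stellingwerf-style PDM statistic (non-parametric) on the same unevenly sampled series. It is a schematic comparison, not the thesis's modified cubic spline method; astropy's LombScargle is a standard implementation, while the bin count and trial period grid are arbitrary choices.

```python
import numpy as np
from astropy.timeseries import LombScargle

def pdm_theta(t, y, period, n_bins=10):
    """Stellingwerf-style dispersion statistic: ratio of the pooled
    within-phase-bin variance to the total variance. Small theta
    indicates a good trial period."""
    phase = (t / period) % 1.0
    bins = np.floor(phase * n_bins).astype(int)
    num, den = 0.0, 0
    for b in range(n_bins):
        yb = y[bins == b]
        if yb.size > 1:
            num += (yb.size - 1) * np.var(yb, ddof=1)
            den += yb.size - 1
    return (num / den) / np.var(y, ddof=1)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 200, 500))
y = 0.3 * np.sin(2 * np.pi * t / 3.7) + rng.normal(0, 0.05, t.size)

# Parametric: Lomb-Scargle periodogram, peak power marks the best frequency
freq, power = LombScargle(t, y).autopower()
p_ls = 1.0 / freq[np.argmax(power)]

# Non-parametric: scan trial periods and minimise the dispersion statistic
trial_periods = np.linspace(3.0, 4.5, 2000)
thetas = [pdm_theta(t, y, p) for p in trial_periods]
p_pdm = trial_periods[np.argmin(thetas)]
```

Both estimates should recover a period near 3.7 for this clean toy series; the practical difficulties described above (aliases, gaps, harmonics) appear when the sampling and noise are less forgiving.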
Abstract:
Among the decapod crustaceans, brachyuran crabs, or the true crabs, occupy a very significant position due to their ecological and economic value. Crabs support a sustenance fishery in India, even though their present status is not comparable to that of shrimps and lobsters. They are in great demand in the domestic market as well as in foreign markets. In addition, brachyuran crabs are of great ecological importance: they are conspicuous members of mangrove ecosystems and play a significant role in detritus formation, nutrient recycling and the dynamics of the ecosystem. Considering all these factors, crabs are often considered keystone species of the mangrove ecosystem. Though several studies have been undertaken on brachyuran crabs worldwide as well as within the country, reports on the brachyuran crabs of Kerala waters are very scanty; most studies of the brachyuran fauna were from the east coast of India, with very few from the west coast. Among the edible crabs, the mud crabs of the genus Scylla are the most important due to their large size and taste. They are exported on a large scale to foreign markets such as Singapore, Malaysia and Hong Kong. Kerala is the biggest supplier of live mud crabs and Chennai is the major centre of live mud crab export. However, considerable confusion exists regarding the identification of mud crabs because of the subtle morphological differences between the species. In this context, an extensive study was undertaken on the brachyuran fauna of the Cochin backwaters, Kerala, India, to gain basic knowledge of their diversity, habitat preference and systematics, and to attempt to resolve the confusion in the species identification of mud crabs belonging to the genus Scylla. The diversity study revealed the occurrence of 23 species of brachyuran crabs belonging to 16 genera and 8 families in the study area, the Cochin backwaters. Among the families, the highest number of species was recorded from the family Portunidae. Of the 23 crab species recorded from the Cochin backwaters, 5 are of commercial importance and contribute a major share to the crustacean fishery of the Cochin region. It was observed that the Cochin backwaters are invaded by certain marine migrant species during the post-monsoon and pre-monsoon periods, and these species disappear with the onset of the monsoon. The study reports the occurrence of the 'herring bow crab' Varuna litterata in the Cochin backwaters for the first time. Ecological studies showed that substratum characteristics, rather than water quality parameters, influence the occurrence, distribution and abundance of crabs in the sampling stations. The variables that most affected crab distribution were salinity, moisture content of the sediment, organic carbon and sediment texture. Besides the water and sediment quality parameters, the most important factor influencing the distribution of crabs is the presence of mangroves. The study also revealed that most of the crabs encountered in the study area preferred a muddy substratum with high organic carbon and moisture content. An identification key is presented for the brachyuran crabs occurring along the Cochin backwaters and the associated mangrove patches, taking into account morphological characters together with the structure of the third maxillipeds, the first pleopods of males and the shape of the male abdomen.
Morphological examination indicated the existence of a morphotype comparable with the morphological features of S. tranquebarica; however, the morphometric study and the molecular analyses confirmed the non-existence of S. tranquebarica in the Cochin backwaters.
Abstract:
The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
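The step-height-based pressure partition mentioned above can be illustrated with a simple force balance. The sketch below assumes the pad acts as a linear spring across a feature step; the stiffness value and the non-contact clipping rule are assumptions for illustration, not the paper's calibrated model.

```python
def feature_pressures(p_applied, step_height, area_frac_high, k_pad):
    """Split the nominal down-pressure between high and low features.

    Assumes the pad behaves as a linear spring, so the pressure difference
    across a step is proportional to the step height:
        p_high - p_low = k_pad * step_height
    combined with the force balance over one pattern pitch:
        w * p_high + (1 - w) * p_low = p_applied
    where w is the areal fraction of high features.
    """
    w = area_frac_high
    p_low = p_applied - w * k_pad * step_height
    if p_low < 0.0:                    # pad no longer touches the low areas
        p_low = 0.0
        p_high = p_applied / w         # high features carry the entire load
    else:
        p_high = p_low + k_pad * step_height
    return p_high, p_low

# Example: 7 kPa nominal pressure, 500 nm step, 50% pattern density,
# assumed effective pad stiffness of 10 kPa per micron of deformation
p_hi, p_lo = feature_pressures(7e3, 0.5e-6, 0.5, 1e10)
```

As the step height planarizes, the two pressures converge to the nominal value, which is the qualitative behaviour a step-height-based model needs to capture at each polishing stage.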
Abstract:
This is a short presentation which introduces how models and modelling help us to solve large-scale problems in the real world. It presents the idea that dynamic behaviour is caused by interacting components in the system. Feedback in the system makes behaviour prediction difficult unless we use modelling to support understanding.
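A minimal sketch of the point: even two components coupled by simple feedback rules produce behaviour that is hard to predict by inspection, which is why we simulate. The predator-prey system below is a generic example chosen for illustration, not a model from the presentation.

```python
# Two interacting stocks with feedback: prey growth is checked by
# predation (negative feedback), and predator growth feeds back on
# prey abundance. Simulation reveals the resulting oscillations.
def simulate(steps=2000, dt=0.01):
    prey, predator = 10.0, 5.0
    history = []
    for _ in range(steps):
        d_prey = 1.0 * prey - 0.1 * prey * predator
        d_pred = 0.075 * prey * predator - 1.5 * predator
        prey += d_prey * dt
        predator += d_pred * dt
        history.append((prey, predator))
    return history

trajectory = simulate()   # oscillates; neither stock settles to a constant
```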
Abstract:
We tested the general predictions of increased use of nest boxes and positive trends in local populations of Common Goldeneye (Bucephala clangula) and Bufflehead (Bucephala albeola) following the large-scale provision of nest boxes in a study area of central Alberta over a 16-year period. Nest boxes were rapidly occupied, primarily by Common Goldeneye and Bufflehead, but also by European Starling (Sturnus vulgaris). After 5 years of deployment, occupancy of large boxes by Common Goldeneye was 82% to 90% and occupancy of small boxes by Bufflehead was 37% to 58%. Based on a single-stage cluster design, experimental closure of nest boxes resulted in significant reductions in numbers of broods and brood sizes produced by Common Goldeneye and Bufflehead. Occurrence and densities of Common Goldeneye and Bufflehead increased significantly across years following nest box deployment at the local scale, but not at the larger regional scale. Provision of nest boxes may represent a viable strategy for increasing breeding populations of these two waterfowl species on landscapes where large trees and natural cavities are uncommon but wetland density is high.
Abstract:
The global radiation balance of the atmosphere is still poorly observed, particularly at the surface. We investigate the observed radiation balance at (1) the surface using the ARM Mobile Facility in Niamey, Niger, and (2) the top of the atmosphere (TOA) over West Africa using data from the Geostationary Earth Radiation Budget (GERB) instrument on board Meteosat-8. Observed radiative fluxes are compared with predictions from the global numerical weather prediction (NWP) version of the Met Office Unified Model (MetUM). The evaluation points to major shortcomings in the NWP model's radiative fluxes during the dry season (December 2005 to April 2006) arising from (1) a lack of absorbing aerosol in the model (mineral dust and biomass burning aerosol) and (2) a poor specification of the surface albedo. A case study of the major Saharan dust outbreak of 6–12 March 2006 is used to evaluate a parameterization of mineral dust for use in the NWP models. The model shows good predictability of the large-scale flow out to 4–5 days with the dust parameterization providing reasonable dust uplift, spatial distribution, and temporal evolution for this strongly forced dust event. The direct radiative impact of the dust reduces net downward shortwave (SW) flux at the surface (TOA) by a maximum of 200 W m⁻² (150 W m⁻²), with a SW heating of the atmospheric column. The impacts of dust on terrestrial radiation are smaller. Comparisons of TOA (surface) radiation balance with GERB (ARM) show the "dusty" forecasts reduce biases in the radiative fluxes and improve surface temperatures and vertical thermodynamic structure.
Abstract:
A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, and this is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
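The random plume draw can be sketched as follows, assuming an exponential distribution of cloud-base mass flux with a Poisson-distributed number of plumes, so that the pdf normalisation matches the ensemble-mean mass flux on average; the flux values are placeholders rather than the scheme's actual numbers.

```python
import numpy as np

def draw_plumes(mean_total_flux, mean_plume_flux, rng):
    """Draw a random ensemble of convective plumes for one grid box.

    mean_total_flux : ensemble-mean mass flux from the CAPE closure
    mean_plume_flux : mean cloud-base mass flux of a single plume
    The plume count is Poisson with the right mean, and each plume's
    mass flux is exponentially distributed, so the total mass flux
    fluctuates about the closure value grid box by grid box.
    """
    expected_n = mean_total_flux / mean_plume_flux
    n = rng.poisson(expected_n)
    return rng.exponential(mean_plume_flux, size=n)

rng = np.random.default_rng(42)
plumes = draw_plumes(mean_total_flux=0.04, mean_plume_flux=0.002, rng=rng)
total = plumes.sum()   # varies from call to call, with mean 0.04
```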
Abstract:
The distribution and variability of water vapor and its links with radiative cooling and latent heating via precipitation are crucial to understanding feedbacks and processes operating within the climate system. Column-integrated water vapor (CWV) and additional variables from the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year reanalysis (ERA40) are utilized to quantify the spatial and temporal variability in tropical water vapor over the period 1979–2001. The moisture variability is partitioned between dynamical and thermodynamic influences and compared with variations in precipitation provided by the Climate Prediction Center Merged Analysis of Precipitation (CMAP) and the Global Precipitation Climatology Project (GPCP). The spatial distribution of CWV is strongly determined by thermodynamic constraints. Spatial variability in CWV is dominated by changes in the large-scale dynamics, in particular associated with the El Niño–Southern Oscillation (ENSO). Trends in CWV are also dominated by dynamics rather than thermodynamics over the period considered. However, increases in CWV associated with changes in temperature are significant over the equatorial east Pacific when analyzing interannual variability and over the north and northwest Pacific when analyzing trends. Significant positive trends in CWV tend to predominate over the oceans while negative trends in CWV are found over equatorial Africa and Brazil. Links between changes in CWV and vertical motion fields are identified over these regions and also the equatorial Atlantic. However, trends in precipitation are generally incoherent and show little association with the CWV trends. This may in part reflect the inadequacies of the precipitation data sets and reanalysis products when analyzing decadal variability. Though the dynamic component of CWV is a major factor in determining precipitation variability in the tropics, in some regions/seasons the thermodynamic component cancels its effect on precipitation variability.
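One simple way to realise the dynamic/thermodynamic partition described above is to let the thermodynamic part follow temperature at the Clausius-Clapeyron rate with the circulation held fixed, and attribute the residual to dynamics. The sketch below is schematic, not the paper's exact method; the 7% per kelvin rate is the standard Clausius-Clapeyron value and the input numbers are toys.

```python
CC_RATE = 0.07  # fractional increase of saturation vapour pressure per kelvin

def partition_cwv(cwv_anom, temp_anom, cwv_clim):
    """Split a column water vapour anomaly into thermodynamic and
    dynamic parts. The thermodynamic part holds the circulation fixed
    and lets moisture follow temperature at the Clausius-Clapeyron
    rate; the dynamic part is whatever remains."""
    thermo = CC_RATE * temp_anom * cwv_clim
    dynamic = cwv_anom - thermo
    return thermo, dynamic

# Toy example: a 0.5 K warm anomaly over a 45 kg m^-2 CWV climatology
thermo, dyn = partition_cwv(cwv_anom=3.0, temp_anom=0.5, cwv_clim=45.0)
# thermo is about 1.6 kg m^-2; the remaining 1.4 kg m^-2 is "dynamic"
```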
Abstract:
We report on the results of a laboratory investigation using a rotating two-layer annulus experiment, which exhibits both large-scale vortical modes and short-scale divergent modes. A sophisticated visualization method allows us to observe the flow at very high spatial and temporal resolution. The balanced long-wavelength modes appear only when the Froude number is supercritical (i.e. $F > F_\mathrm{critical} \equiv \pi^2/2$), and are therefore consistent with generation by a baroclinic instability. The unbalanced short-wavelength modes appear locally in every single baroclinically unstable flow, providing perhaps the first direct experimental evidence that all evolving vortical flows will tend to emit freely propagating inertia–gravity waves. The short-wavelength modes also appear in certain baroclinically stable flows. We infer the generation mechanisms of the short-scale waves, both for the baroclinically unstable case in which they co-exist with a large-scale wave, and for the baroclinically stable case in which they exist alone. The two possible mechanisms considered are spontaneous adjustment of the large-scale flow, and Kelvin–Helmholtz shear instability. Short modes in the baroclinically stable regime are generated only when the Richardson number is subcritical (i.e. $Ri < Ri_\mathrm{critical} \equiv 1$), and are therefore consistent with generation by a Kelvin–Helmholtz instability. We calculate five indicators of short-wave generation in the baroclinically unstable regime, using data from a quasi-geostrophic numerical model of the annulus. There is excellent agreement between the spatial locations of short-wave emission observed in the laboratory, and regions in which the model Lighthill/Ford inertia–gravity wave source term is large. We infer that the short waves in the baroclinically unstable fluid are freely propagating inertia–gravity waves generated by spontaneous adjustment of the large-scale flow.
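The two threshold criteria quoted above reduce to simple checks; the sketch below encodes them, assuming the Froude and Richardson numbers of the flow have already been diagnosed, and uses the critical values reported in the experiment.

```python
import math

F_CRITICAL = math.pi ** 2 / 2   # baroclinic instability threshold
RI_CRITICAL = 1.0               # Kelvin-Helmholtz threshold in this experiment

def expected_modes(froude, richardson):
    """Classify which wave modes the annulus flow should exhibit,
    following the criteria reported above."""
    modes = []
    if froude > F_CRITICAL:
        modes.append("large-scale baroclinic wave")
        modes.append("short inertia-gravity waves (spontaneous adjustment)")
    elif richardson < RI_CRITICAL:
        modes.append("short waves (Kelvin-Helmholtz instability)")
    return modes

print(expected_modes(froude=6.0, richardson=2.0))
```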
Abstract:
The too diverse representation of ENSO in a coupled GCM limits one’s ability to describe future change of its properties. Several studies pointed to the key role of atmosphere feedbacks in contributing to this diversity. These feedbacks are analyzed here in two simulations of a coupled GCM that differ only by the parameterization of deep atmospheric convection and the associated clouds. Using the Kerry–Emanuel (KE) scheme in the L’Institut Pierre-Simon Laplace Coupled Model, version 4 (IPSL CM4; KE simulation), ENSO has about the right amplitude, whereas it is almost suppressed when using the Tiedtke (TI) scheme. Quantifying both the dynamical Bjerknes feedback and the heat flux feedback in KE, TI, and the corresponding Atmospheric Model Intercomparison Project (AMIP) atmosphere-only simulations, it is shown that the suppression of ENSO in TI is due to a doubling of the damping via heat flux feedback. Because the Bjerknes positive feedback is weak in both simulations, the KE simulation exhibits the right ENSO amplitude owing to an error compensation between a too weak heat flux feedback and a too weak Bjerknes feedback. In TI, the heat flux feedback strength is closer to estimates from observations and reanalysis, leading to ENSO suppression. The shortwave heat flux and, to a lesser extent, the latent heat flux feedbacks are the dominant contributors to the change between TI and KE. The shortwave heat flux feedback differences are traced back to a modified distribution of the large-scale regimes of deep convection (negative feedback) and subsidence (positive feedback) in the east Pacific. These are further associated with the model systematic errors. It is argued that a systematic and detailed evaluation of atmosphere feedbacks during ENSO is a necessary step to fully understand its simulation in coupled GCMs.
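Feedback strengths of this kind are conventionally estimated as regression slopes of flux anomalies onto an ENSO SST index. The sketch below shows that calculation for the net heat flux feedback, with synthetic anomaly series standing in for model or reanalysis data; negative slopes indicate damping.

```python
import numpy as np

def feedback_strength(sst_anom, flux_anom):
    """Estimate a feedback as the least-squares slope of net surface
    heat flux anomalies (W m^-2) onto Nino-3 SST anomalies (K).
    A negative slope means the heat flux damps the SST anomaly."""
    slope, _ = np.polyfit(sst_anom, flux_anom, 1)
    return slope  # W m^-2 K^-1

# Toy series: a damping of about -15 W m^-2 K^-1 plus noise
rng = np.random.default_rng(7)
sst = rng.normal(0, 0.8, 240)                 # 20 years of monthly anomalies
flux = -15.0 * sst + rng.normal(0, 5.0, 240)
alpha = feedback_strength(sst, flux)          # recovers roughly -15
```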
Abstract:
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention over the past years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, whereby no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute’s HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
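Receiver-initiated load balancing can be pictured as work stealing: an idle worker (the receiver) asks a busy peer to donate part of its unexplored search tree. The sketch below is a threaded single-process toy, not the paper's peer-to-peer implementation; the donate-half split policy and the random victim selection are assumptions.

```python
import random
import threading
import time
from collections import deque

class Worker(threading.Thread):
    """Toy receiver-initiated work stealing. Each task is a depth stub
    standing in for an unexplored subtree of the frequent-subgraph
    search; expanding it may spawn children (irregular workload)."""

    def __init__(self, wid, workers):
        super().__init__()
        self.wid = wid
        self.workers = workers          # shared list of all workers (peers)
        self.tasks = deque()
        self.lock = threading.Lock()
        self.done = 0

    def steal(self):
        """Receiver side: ask a random peer for half of its pending tasks."""
        victim = random.choice([w for w in self.workers if w is not self])
        with victim.lock:
            stolen = [victim.tasks.pop() for _ in range(len(victim.tasks) // 2)]
        with self.lock:
            self.tasks.extend(stolen)
        return bool(stolen)

    def run(self):
        idle_tries = 0
        while idle_tries < 50:
            with self.lock:
                task = self.tasks.pop() if self.tasks else None
            if task is None:
                if self.steal():
                    idle_tries = 0
                else:
                    idle_tries += 1
                    time.sleep(0.001)
                continue
            self.done += 1
            if task < 12:               # expansion spawns 0-2 child subtrees
                children = [task + 1 for _ in range(random.randint(0, 2))]
                with self.lock:
                    self.tasks.extend(children)

workers = []
workers.extend(Worker(i, workers) for i in range(4))
workers[0].tasks.extend([0] * 64)       # all work starts on one node
for w in workers:
    w.start()
for w in workers:
    w.join()
print([w.done for w in workers])        # work spreads across the peers
```

Because stealing is triggered by the idle side, no workload prediction is needed on the busy side, which is what makes the approach attractive for the highly irregular search trees described above.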
Abstract:
A study of the formation and propagation of volume anomalies in North Atlantic Mode Waters is presented, based on 100 yr of monthly mean fields taken from the control run of the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). Analysis of the temporal and spatial variability in the thickness between pairs of isothermal surfaces bounding the central temperature of the three main North Atlantic subtropical mode waters shows that large-scale variability in formation occurs over time scales ranging from 5 to 20 yr. The largest formation anomalies are associated with a southward shift in the mixed layer isothermal distribution, possibly due to changes in the gyre dynamics and/or changes in the overlying wind field and air-sea heat fluxes. The persistence of these anomalies is shown to result from their subduction beneath the winter mixed layer base where they recirculate around the subtropical gyre in the background geostrophic flow. Anomalies in the warmest mode (18°C) formed on the western side of the basin persist for up to 5 yr. They are removed by mixing transformation to warmer classes and are returned to the seasonal mixed layer near the Gulf Stream where the stored heat may be released to the atmosphere. Anomalies in the cooler modes (16°C and 14°C) formed on the eastern side of the basin persist for up to 10 yr. There is no clear evidence of significant transformation of these cooler mode anomalies to adjacent classes. It has been proposed that the eastern anomalies are removed through a tropical-subtropical water mass exchange mechanism beneath the trade wind belt (south of 20°N). The analysis shows that anomalous mode water formation plays a key role in the long-term storage of heat in the model, and that the release of heat associated with these anomalies suggests a predictable climate feedback mechanism.
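The core diagnostic above, the thickness of water between two isothermal surfaces, can be computed from a temperature profile by locating the bounding isotherm depths and differencing them. A minimal sketch, assuming temperature decreases monotonically with depth; the idealised profile and the one-degree class half-width are illustrative choices.

```python
import numpy as np

def isotherm_depth(temps, depths, target):
    """Depth of the `target` isotherm, by linear interpolation of a
    profile in which temperature decreases monotonically with depth."""
    return np.interp(target, temps[::-1], depths[::-1])

def mode_water_thickness(temps, depths, t_centre, half_width=1.0):
    """Thickness between the isotherms bounding a mode water class,
    e.g. 17-19 degrees for an 18-degree mode water."""
    upper = isotherm_depth(temps, depths, t_centre + half_width)
    lower = isotherm_depth(temps, depths, t_centre - half_width)
    return lower - upper

depths = np.arange(0, 1000, 10.0)                  # m
temps = 25.0 - 18.0 * (1 - np.exp(-depths / 300))  # idealised profile
h18 = mode_water_thickness(temps, depths, 18.0)    # 18C layer thickness
```

Applied at every grid point and month of the control run, anomalies in this thickness field are the volume anomalies whose formation and propagation the study tracks.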
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory.
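The central computation described above, a singular value decomposition of the observability matrix, can be illustrated on a small linear system. A minimal sketch for discrete dynamics x_{k+1} = M x_k observed through y_k = H x_k; the toy matrices are placeholders, not a model from the paper.

```python
import numpy as np

def observability_matrix(M, H, n_steps):
    """Stack H, HM, HM^2, ... for the linear system x_{k+1} = M x_k,
    y_k = H x_k. Its singular values show which state directions the
    observations constrain; small values mark poorly observed ones."""
    blocks, Mk = [], np.eye(M.shape[0])
    for _ in range(n_steps):
        blocks.append(H @ Mk)
        Mk = M @ Mk
    return np.vstack(blocks)

# Toy 3-variable advection-like model observed at a single point
M = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 0.9]])
H = np.array([[1.0, 0.0, 0.0]])   # observe only the first variable

O = observability_matrix(M, H, n_steps=6)
s = np.linalg.svd(O, compute_uv=False)
# The spread of singular values quantifies how information from the
# observed point propagates (weakly) into the unobserved variables;
# Tikhonov regularization damps the directions with the smallest values.
```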
Abstract:
The commonly held view of the conditions in the North Atlantic at the last glacial maximum, based on the interpretation of proxy records, is of large-scale cooling compared to today, limited deep convection, and extensive sea ice, all associated with a southward displaced and weakened overturning thermohaline circulation (THC) in the North Atlantic. Not all studies support that view; in particular, the "strength of the overturning circulation" is contentious and is a quantity that is difficult to determine even for the present day. Quasi-equilibrium simulations with coupled climate models forced by glacial boundary conditions have produced differing results, as have inferences made from proxy records. Most studies suggest the weaker circulation, some suggest little or no change, and a few suggest a stronger circulation. Here results are presented from a three-dimensional climate model, the Hadley Centre Coupled Model version 3 (HadCM3), of the coupled atmosphere-ocean-sea ice system suggesting, in a qualitative sense, that these diverging views could all have occurred at different times during the last glacial period, with different modes existing at different times. One mode might have been characterized by an active THC associated with moderate temperatures in the North Atlantic and a modest expanse of sea ice. The other mode, perhaps forced by large inputs of meltwater from the continental ice sheets into the northern North Atlantic, might have been characterized by a sluggish THC associated with very cold conditions around the North Atlantic and a large areal cover of sea ice. The authors' model simulation of such a mode, forced by a large input of freshwater, bears several of the characteristics of the Climate: Long-range Investigation, Mapping, and Prediction (CLIMAP) Project's reconstruction of glacial sea surface temperature and sea ice extent.