933 results for Overlapping intervals

Relevance:

10.00%

Publisher:

Abstract:

A numerical integration procedure for rotational motion using a rotation vector parametrization is explored from an engineering perspective by using rudimentary vector analysis. The incremental rotation vector, angular velocity and acceleration correspond to different tangent spaces of the rotation manifold at different times and have a non-vectorial character. We rewrite the equation of motion in terms of vectors lying in the same tangent space, facilitating vector space operations consistent with the underlying geometric structure. While any integration algorithm (that works within a vector space setting) may be used, we presently employ a family of explicit Runge-Kutta algorithms to solve this equation. While this work is primarily motivated by a need for highly accurate numerical solutions of dissipative rotational systems of engineering interest, we also compare the numerical performance of the present scheme with some of the invariant-preserving schemes, namely ALGO-C1, STW, LIEMID[EA] and SUBCYC-M. Numerical results show better local accuracy via the present approach vis-a-vis the preserving algorithms. It is also noted that the preserving algorithms do not simultaneously preserve all constants of motion. We incorporate adaptive time-stepping within the present scheme, and this in turn enables still higher accuracy and a 'near preservation' of constants of motion over significantly longer intervals. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
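Since the rewritten equation of motion lives in a single tangent (vector) space, any explicit Runge-Kutta scheme can be applied to it directly. As a minimal illustration (not the authors' rotation-vector formulation), here is a classical fourth-order Runge-Kutta step applied to an illustrative scalar test equation dy/dt = -y:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate from t0 to t1 with n fixed RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Test problem dy/dt = -y with exact solution exp(-t); global error is O(h^4).
y = integrate(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
print(abs(y - math.exp(-1.0)))
```

Adaptive time-stepping, as used in the paper, would replace the fixed step h with one chosen from an embedded error estimate.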

Abstract:

The effects of scarification, ploughing and cross-directional ploughing on temperature conditions in the soil and adjacent air layer were studied during 11 consecutive growth periods, using an unprepared clear-cut area as a control site. The maximum and minimum temperatures were measured daily in the summer months, and other temperature observations were made at four-hour intervals by means of a Grant measuring instrument. The development of the seedling stand was also followed in order to determine its shading effect on the soil surface. Soil preparation decreased the daily temperature amplitude of the air at a height of 10 cm. The maximum temperatures on sunny days were lower in the tilts of the ploughed site and in the humps of the cross-directionally ploughed site than in the unprepared area. Correspondingly, the night temperatures were higher, so soil preparation considerably reduced the risk of night frost. In the soil at a depth of 5 cm, soil preparation increased daytime temperatures and reduced night temperatures compared with the unprepared area. The maximum increase in monthly mean temperatures was almost 5 °C, and the daily variation in the surface parts of the tilts and humps increased so that temperatures excessively high for the optimal growth of the root system were measured from time to time. The temperature also rose at depths of 50 and 100 cm. Soil preparation also increased the cumulative temperature sum. The highest sums accumulated during the summer months were recorded at a depth of 5 cm in the humps of the cross-directionally ploughed area (1127 dd.) and in the tilts of the ploughed area (1106 dd.), while the corresponding figure in the unprepared soil was 718 dd. At a height of 10 cm the highest temperature sum was 1020 dd. in the hump, the corresponding figure in the unprepared area being 925 dd.
The incidence of high temperature amplitudes and the percentage of high temperatures at a depth of 5 cm decreased most rapidly in the humps of the cross-directionally ploughed area and in the ploughing tilts towards the end of the measurement period. The decrease was attributed principally to the compression of the tilts, the ground vegetation succession and the growth of seedlings. The mean summer temperature in the unprepared area was lower than in the prepared area, and the difference did not diminish during the period studied. The increase in temperature brought about by soil preparation thus lasts more than 10 years.
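The cumulative temperature sums quoted above (in dd., degree days) are obtained by accumulating daily mean temperatures above a growth threshold. A minimal sketch, assuming the +5 °C threshold conventional in Nordic forestry; the week of daily means is hypothetical:

```python
def degree_day_sum(daily_means, threshold=5.0):
    """Cumulative temperature sum (dd.): for each day whose mean temperature
    exceeds the threshold, add the excess over the threshold."""
    return sum(t - threshold for t in daily_means if t > threshold)

# Hypothetical week of daily mean temperatures (deg C).
week = [4.0, 6.5, 8.0, 10.0, 12.5, 9.0, 3.5]
print(degree_day_sum(week))  # → 21.0
```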

Abstract:

We propose a physical mechanism to explain the origin of the intense burst of massive-star formation seen in colliding/merging, gas-rich, field spiral galaxies. We explicitly take account of the different parameters for the two main mass components of the interstellar medium within a galaxy, H2 and H I, and follow their consequent different evolution during a collision between two galaxies. We also note that, in a typical spiral galaxy like our own, the Giant Molecular Clouds (GMCs) are in near-virial equilibrium and form the current sites of massive-star formation, but have a low star formation rate. We show that this star formation rate is increased following a collision between galaxies. During a typical collision between two field spiral galaxies, the H I clouds from the two galaxies undergo collisions at a relative velocity of approximately 300 km s^-1. However, the GMCs, with their smaller volume filling factor, do not collide. The collisions among the H I clouds from the two galaxies lead to the formation of a hot, ionized, high-pressure remnant gas. The over-pressure due to this hot gas causes a radiative shock compression of the outer layers of a preexisting GMC in the overlapping wedge region. This makes these layers gravitationally unstable, thus triggering a burst of massive-star formation in the initially barely stable GMCs. The resulting value of the typical IR luminosity from the young, massive stars in a pair of colliding galaxies is estimated to be approximately 2 × 10^11 L_☉, in agreement with the observed values. In our model, the massive-star formation occurs in situ in the overlapping regions of a pair of colliding galaxies. We can thus explain the origin of enhanced star formation over an extended central area approximately several kiloparsecs in size, as seen in typical colliding galaxies, and also the origin of starbursts in extranuclear regions of disk overlap, as seen in Arp 299 (NGC 3690/IC 694) and in Arp 244 (NGC 4038/39).
Whether the IR emission from the central region or that from the surrounding extranuclear galactic disk dominates depends on the geometry and the epoch of the collision and on the initial radial gas distribution in the two galaxies. In general, the central starburst would be stronger than that in the disks, due to the higher preexisting gas densities in the central region. The burst of star formation is expected to last over a galactic gas disk crossing time of approximately 4 × 10^7 yr. We can also explain the simultaneous existence of nearly normal CO galaxy luminosities and shocked H2 gas, as seen in colliding field galaxies. This is a minimal model, in that the only necessary condition for it to work is that there should be a sufficient overlap between the spatial gas distributions of the colliding galaxy pair.

Abstract:

Current design models and frameworks describe various overlapping fragments of designing. However, little effort has gone into consolidating these fragments into an integrated model. We propose a model of designing that integrates the product and process facets of designing by combining activities, outcomes, requirements, and solutions. Validation of the model using video protocols of design sessions demonstrates that all the constructs are used naturally by designers, but often not to the expected level, which hinders the variety and resulting novelty of the concepts developed in these sessions. To resolve this, a prescriptive framework for supporting design for variety and novelty is proposed and plans for its implementation are created. DOI: 10.1115/1.3467011

Abstract:

The purpose of this study was to establish the palaeoenvironmental conditions during the late Quaternary in Murchisonfjorden, Nordaustlandet, based on foraminiferal assemblage compositions, and to determine the onset and termination of the Weichselian glaciations. The foraminiferal assemblage compositions were studied in marine sediments from three different archives: sections next to the present shoreline in the Bay of Isvika, a core from the Bay of Isvika, and a core from Lake Einstaken. OSL and AMS 14C age determinations were performed on samples from the three archives, and the results show deposition of marine sediments during ice-free periods of the Early Weichselian, the Middle Weichselian and the Late Weichselian, as well as during the Holocene, in the investigated area. Marine sediments from the Early and Middle Weichselian were sampled from isostatically uplifted sections along the present shoreline. Sediments from the transition from the Late Weichselian to early Holocene time intervals were found at the bottom of the core from Lake Einstaken. Holocene sediments were investigated in the sections and in the core from the Bay of Isvika. The marine sediments from the sections comprise five benthic foraminiferal assemblages. The Early Weichselian is represented by two foraminiferal assemblages, and the Middle Weichselian, the early Holocene and the late Holocene each by one. All five foraminiferal assemblages were deposited in glacier-distal shallow-water environments, which had a connection to the open ocean. Changes in the composition of the assemblages can be ascribed to differences in the bottom-water currents and changes in salinity. The Middle Weichselian assemblage is of special importance, because it is the first foraminiferal assemblage to be described from this time interval from Svalbard.
Four benthic foraminiferal assemblages were deposited shortly before the marine to lacustrine transition at the boundary between the Late Weichselian and Holocene in Lake Einstaken. The foraminiferal assemblages show a change from a high-arctic, normal marine shallow-water environment to an even shallower environment with highly fluctuating salinity. The analyses of the core from 100 m water depth in the Bay of Isvika resulted in the determination of four foraminiferal assemblages. These indicated changes from a glacier-proximal environment during deglaciation to a more glacier-distal environment during the early Holocene. This was followed by a period with a marked change to a considerably cooler environment and finally to a closed fjord environment in middle and late Holocene times. Additional sedimentological analyses of the marine and glacially derived sediments from the uplifted sections, as well as observations of multiple striae on the bedrock, observations of deeply weathered bedrock and findings of tills interlayered with marine sediments, complete the investigations in the study area. They indicate weak glacial erosion in the study area. It can be concluded that marine deposition occurred in the investigated area during three time intervals in the Weichselian and during most of the Holocene. The foraminiferal assemblages in the Holocene are characterized by a transition from glacier-proximal to glacier-distal faunas. The palaeogeographical change from an open fjord to a closed fjord environment is a result of the isostatic uplift of the area after the LGM and is clearly reflected in the foraminiferal assemblages. Another factor influencing the foraminiferal assemblage composition is changes in the inflow of warmer Atlantic waters to the study area.

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by a French scientist, P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, which were published in a memoir in 1774 that is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea, still prevailing, was that the sample should be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. The essential ideas are to draw samples repeatedly from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory of double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
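Neyman's confidence intervals for finite populations are the piece of this theory still used daily in survey practice. A minimal sketch for a proportion estimated from a simple random sample, with the finite population correction; the sample size, population size and 95% z-value below are illustrative assumptions:

```python
import math

def proportion_ci(n, successes, N, z=1.96):
    """Approximate 95% confidence interval for a proportion estimated from
    a simple random sample of size n drawn without replacement from a
    finite population of size N (with finite population correction)."""
    p = successes / n
    fpc = (N - n) / (N - 1)          # finite population correction
    se = math.sqrt(p * (1 - p) / n * fpc)
    return p - z * se, p + z * se

# Hypothetical survey: 120 of 400 sampled units have the attribute.
lo, hi = proportion_ci(n=400, successes=120, N=10000)
print(round(lo, 3), round(hi, 3))
```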

Abstract:

A wire-cylinder corona discharge was studied in nitrogen and dry air in crossed electric and magnetic fields, for values of the magnetic field ranging from 0 to 3000 G, with the wire at positive potential. In the absence of a magnetic field, pre-onset streamers and pulses were observed in nitrogen. In both nitrogen and dry air, breakdown streamers were observed just before spark breakdown of the gap. Furthermore, experiments in dry air at atmospheric pressure in an electric field indicate regular pre-onset streamers appearing at time intervals of 19.5 µs. The appearance of regular pre-onset streamers suggests that it is not possible for negative ions to form a sheath close to the anode, as postulated by Hermstein (1960) for the formation of steady or glow corona in a point-plane gap.

Abstract:

The high cost and extraordinary demands made on sophisticated air defence systems pose hard challenges to the managers and engineers who plan the operation and maintenance of such systems. This paper presents a study aimed at developing simulation and systems analysis techniques for the effective planning and efficient operation of small fleets of aircraft, typical of the air force of a developing country. We consider an important aspect of fleet management: the problem of resource allocation for achieving a prescribed operational effectiveness of the fleet. At this stage, we consider a single flying-base, where the operationally ready aircraft are stationed, and a repair-depot, where the planes are overhauled. An important measure of operational effectiveness is 'availability', which may be defined as the expected fraction of the fleet fit for use at a given instant. The tour of aircraft in a flying-base, repair-depot system through a cycle of 'operationally ready' and 'scheduled overhaul' phases is represented first by a deterministic flow process and then by a cyclic queuing process. Initially, the steady-state availability at the flying-base is computed under the assumptions of Poisson arrivals, exponential service times and an equivalent single-server repair-depot. This analysis also brings out the effect of fleet size on availability. It defines a 'small' fleet essentially in terms of the important 'traffic' parameter of service rate/maximum arrival rate. A simulation model of the system has been developed using GPSS to study sensitivity to distributional assumptions, to validate the principal assumptions of the analytical model, such as the single-server assumption, and to obtain confidence intervals for the statistical parameters of interest.
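The abstract does not reproduce the closed-form analysis, but its stated assumptions (Poisson arrivals, exponential service times, an equivalent single-server repair-depot, a finite fleet) match the classical machine-repairman model, whose steady state can be sketched directly. The fleet size and rates below are hypothetical:

```python
from math import factorial

def fleet_availability(N, lam, mu):
    """Steady-state availability of a fleet of N aircraft with a single
    repair server: each operational aircraft leaves for overhaul at rate
    lam, and overhauls complete at rate mu (machine-repairman model)."""
    rho = lam / mu
    # Unnormalised steady-state weights for n aircraft at the repair-depot.
    weights = [factorial(N) // factorial(N - n) * rho ** n for n in range(N + 1)]
    total = sum(weights)
    # Availability = expected fraction of the fleet that is operational.
    return sum(w / total * (N - n) for n, w in enumerate(weights)) / N

# Hypothetical small fleet: 10 aircraft, overhaul demand rate 0.1, repair rate 1.0.
print(fleet_availability(N=10, lam=0.1, mu=1.0))
```

A GPSS-style simulation, as in the paper, would be used to relax the exponential assumptions and attach confidence intervals to this point estimate.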

Abstract:

The 3' terminal 1255 nt sequence of Physalis mottle virus (PhMV) genomic RNA has been determined from a set of overlapping cDNA clones. The open reading frame (ORF) at the 3' terminus corresponds to the amino acid sequence of the coat protein (CP) determined earlier, except for the absence of the dipeptide Lys-Leu at positions 110-111. In addition, the sequence upstream of the CP gene contains the message coding for the 178 C-terminal amino acid residues of the putative replicase protein (RP). The sequence downstream of the CP gene contains an untranslated region whose terminal 80 nucleotides can be folded into a characteristic tRNA-like structure. A phylogenetic tree, constructed after separately aligning the sequences of the CP, the RP and the tRNA-like structure determined in this study with the corresponding sequences of other tymoviruses, shows that PhMV, wrongly named belladonna mottle virus [BDMV(I)], is a separate tymovirus and not another strain of BDMV(E), as originally envisaged. The phylogenetic tree is identical in all three cases, showing that any subset of genomic sequence of sufficient length can be used for establishing evolutionary relationships among tymoviruses.

Abstract:

We compare microsite occupancy and the spatial structure of regeneration in three areas of late-successional Norway spruce dominated forest. Pallas-Ylläs is understood to have been influenced only by small-scale disturbance; Dvina-Pinega has had sporadic larger-scale disturbances; Kazkim has been affected by fire. All spruce and birch trees with diameter at breast height (DBH) ≥ 10 cm were mapped in five stands on 40 m x 400 m transects, and those with DBH < 10 cm on 2 or 4 m x 400 m subplots. Microsite type was inventoried at 1 m intervals along the centre line and for each tree with DBH < 10 cm. At all study areas small seedlings (h < 0.3 m, DBH < 10 cm) preferentially occupied disturbed microsites. In contrast, spruce saplings (h ≥ 1.3 m, DBH < 10 cm) at all study areas showed less, or no, preference. At Pallas-Ylläs spruce seedlings (h < 1.3 m, DBH < 10 cm) and saplings (h ≥ 1.3 m, DBH < 10 cm) exhibited spatial correlation at scales of 32-52 m. At Dvina-Pinega saplings of both spruce and birch exhibited spatial correlation at scales of 32-81 m. At Kazkim spatial correlation of seedlings and saplings of both species was exhibited over variable distances. No spatial cross-correlation was found between overstorey basal area (DBH ≥ 10 cm) and regeneration (h ≥ 1.3 m, DBH < 10 cm) at any study area. The results confirm the importance of disturbed microsites for seedling establishment, but suggest that undisturbed microsites may sometimes be more advantageous for long-term tree survival. The regeneration gap concept may not be useful in describing the regeneration dynamics of late-successional boreal forests.

Abstract:

Let G(V, E) be a simple, undirected graph, where V is the set of vertices and E is the set of edges. A b-dimensional cube is a Cartesian product l(1) × l(2) × ... × l(b), where each l(i) is a closed interval of unit length on the real line. The cubicity of G, denoted by cub(G), is the minimum positive integer b such that the vertices of G can be mapped to axis-parallel b-dimensional cubes in such a way that two vertices are adjacent in G if and only if their assigned cubes intersect. An interval graph is a graph that can be represented as the intersection of intervals on the real line, i.e. the vertices of an interval graph can be mapped to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals overlap. Suppose S(m) denotes a star graph on m+1 nodes. We define the claw number ψ(G) of the graph to be the largest positive integer m such that S(m) is an induced subgraph of G. It can easily be shown that the cubicity of any graph is at least ⌈log₂ ψ(G)⌉. In this article, we show that for an interval graph G, ⌈log₂ ψ(G)⌉ ≤ cub(G) ≤ ⌈log₂ ψ(G)⌉ + 2. It is not clear whether the upper bound of ⌈log₂ ψ(G)⌉ + 2 is tight: till now we have been unable to find any interval graph with cub(G) > ⌈log₂ ψ(G)⌉. We also show that, for an interval graph G, cub(G) ≤ ⌈log₂ α⌉, where α is the independence number of G. Therefore, in the special case of ψ(G) = α, cub(G) is exactly ⌈log₂ α⌉. The concept of cubicity can be generalized by considering boxes instead of cubes. A b-dimensional box is a Cartesian product l(1) × l(2) × ... × l(b), where each l(i) is a closed interval on the real line. The boxicity of a graph, denoted box(G), is the minimum b such that G is the intersection graph of b-dimensional boxes. It is clear that box(G) ≤ cub(G). From the above result, it follows that for any graph G, cub(G) ≤ box(G)⌈log₂ α⌉. (C) 2010 Wiley Periodicals, Inc. J Graph Theory 65: 323-333, 2010
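The claw number ψ(G) is straightforward to compute by brute force on small graphs, which makes the bounds above easy to illustrate. A sketch on a hypothetical five-interval family: the code finds ψ(G) of the interval graph and reports the resulting lower and upper bounds on cub(G):

```python
from itertools import combinations
from math import ceil, log2

def interval_graph(intervals):
    """Adjacency sets of the intersection graph of closed intervals."""
    adj = [set() for _ in intervals]
    for i, j in combinations(range(len(intervals)), 2):
        (a, b), (c, d) = intervals[i], intervals[j]
        if a <= d and c <= b:  # closed intervals overlap
            adj[i].add(j)
            adj[j].add(i)
    return adj

def claw_number(adj):
    """Largest m such that S(m) is induced: a centre v plus m pairwise
    non-adjacent neighbours (brute force, fine for tiny graphs)."""
    best = 0
    for v, nbrs in enumerate(adj):
        for m in range(len(nbrs), best, -1):
            if any(all(y not in adj[x] for x, y in combinations(leaves, 2))
                   for leaves in combinations(sorted(nbrs), m)):
                best = m
                break
    return best

# A centre [0, 10] meeting four pairwise disjoint intervals, so psi = 4.
ivs = [(0, 10), (0, 1), (3, 4), (6, 7), (9, 10)]
psi = claw_number(interval_graph(ivs))
lower, upper = ceil(log2(psi)), ceil(log2(psi)) + 2
print(psi, lower, upper)  # → 4 2 4  (bounds on cub(G) from the result above)
```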

Abstract:

Representing and quantifying uncertainty in climate change impact studies is a difficult task. Several sources of uncertainty arise in studies of the hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in the results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications.
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
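Dempster's rule of combination, used above to merge evidence across GCMs and scenarios, multiplies the two basic probability assignments and renormalises away the conflicting mass. A minimal sketch over a three-class drought frame; the two bpa's are hypothetical, not taken from the case study:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments (bpa's),
    given as dicts from frozenset focal elements to masses, normalising
    out the mass assigned to conflicting (disjoint) pairs."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Hypothetical bpa's from two GCMs over drought classes.
D, N, W = "drought", "normal", "wet"
gcm1 = {frozenset({D}): 0.6, frozenset({D, N}): 0.3, frozenset({D, N, W}): 0.1}
gcm2 = {frozenset({D}): 0.5, frozenset({N, W}): 0.2, frozenset({D, N, W}): 0.3}
m = dempster_combine(gcm1, gcm2)
# Belief in drought: total mass on subsets of {drought}.
belief_drought = sum(w for a, w in m.items() if a <= frozenset({D}))
print(round(belief_drought, 3))  # → 0.773
```

Other combination rules (e.g. Yager's) differ precisely in how this conflict mass is redistributed, which is the comparison carried out in the paper.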

Abstract:

In meteorology, observations and forecasts of a wide range of phenomena (for example, snow, clouds, hail, fog, and tornadoes) can be categorical, that is, they can only take discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that we did not have true knowledge of the snow extent, and we were forced simply to measure the agreement between different products. The Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)], compared with ceilometers of the Helsinki Testbed, was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. Reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example, the accuracy and the timeliness of the particular data and methods.
In this vein, we discuss tentatively how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source for validation. Results show that such data are of reasonable quality, and their use for case studies can be warmly recommended. Lastly, the use of cluster analysis on meteorological in-situ measurements was explored. The AutoClass algorithm was used to construct compact representations of the synoptic conditions of fog at Finnish airports.
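The percentile bootstrap used for the cloud-mask CIs can be sketched in a few lines. This naive version resamples individual observations, so it ignores the spatial and temporal correlation the thesis highlights (a block bootstrap would be needed there); the hit-rate data are hypothetical:

```python
import random

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic, obtained
    by resampling the data with replacement n_boot times."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical daily hit rates of a cloud mask against ceilometer truth.
hits = [0.81, 0.74, 0.92, 0.66, 0.88, 0.79, 0.85, 0.71, 0.90, 0.77]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(hits, mean)
print(round(lo, 2), round(hi, 2))
```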


Abstract:

The objectives of this study were to make a detailed and systematic empirical analysis of microfinance borrowers and non-borrowers in Bangladesh and to examine how efficiency measures are influenced by access to agricultural microfinance. In the empirical analysis, this study used both parametric and non-parametric frontier approaches to investigate differences in efficiency estimates between microfinance borrowers and non-borrowers. This thesis, based on five articles, applied data obtained from a survey of 360 farm households from the north-central and north-western regions of Bangladesh. The methods used in this investigation involve stochastic frontier analysis (SFA) and data envelopment analysis (DEA), in addition to sample selectivity and limited dependent variable models. In article I, technical efficiency (TE) estimation and identification of its determinants were performed by applying an extended Cobb-Douglas stochastic frontier production function. The results show that farm households had a mean TE of 83%, with lower TE scores for the non-borrowers of agricultural microfinance. Institutional policies regarding the consolidation of individual plots into farm units, ensuring access to microfinance, and extension education for the farmers with longer farming experience are suggested to improve the TE of the farmers. In article II, the objective was to assess the effects of access to microfinance on household production and cost efficiency (CE) and to determine the efficiency differences between microfinance participating and non-participating farms. In addition, a non-discretionary DEA model was applied to capture directly the influence of microfinance on farm households' production and CE. The results suggested that, under both the pooled DEA models and the non-discretionary DEA models, farmers with access to microfinance were significantly more efficient than their non-borrowing counterparts.
Results also revealed that land fragmentation, family size, household wealth, on-farm training and off-farm income share are the main determinants of inefficiency after effectively correcting for sample selection bias. In article III, the TE of traditional variety (TV) and high-yielding variety (HYV) rice producers was estimated, in addition to investigating the determinants of the adoption rate of HYV rice. Furthermore, the role of TE as a potential determinant explaining the differences in the adoption rate of HYV rice among the farmers was assessed. The results indicated that, in spite of its much higher yield potential, HYV rice production was associated with lower TE and had a greater variability in yield. It was also found that TE had a significant positive influence on the adoption rates of HYV rice. In article IV, we estimated profit efficiency (PE) and profit-loss between microfinance borrowers and non-borrowers by a sample selection framework, which provided a general framework for testing and taking into account sample selection in the stochastic (profit) frontier function analysis. After effectively correcting for selectivity bias, the mean PE of the microfinance borrowers and non-borrowers was estimated at 68% and 52%, respectively. This suggested that a considerable share of profits was lost due to profit inefficiencies in rice production. The results also demonstrated that access to microfinance contributes significantly to increasing PE and reducing profit-loss per hectare of land. In article V, the effects of credit constraints on TE, allocative efficiency (AE) and CE were assessed while adequately controlling for sample selection bias. The confidence intervals were determined by the bootstrap method for both samples. The results indicated that differences in the average efficiency scores of credit-constrained and unconstrained farms were not statistically significant, although the average efficiencies tended to be higher in the group of unconstrained farms.
After effectively correcting for selectivity bias, household experience, number of dependents, off-farm income, farm size, access to on-farm training and yearly savings were found to be the main determinants of inefficiencies. In general, the results of the study revealed the existence of substantial technical, allocative and economic inefficiencies, as well as considerable profit inefficiencies. The results suggested the need to streamline agricultural microfinance by the microfinance institutions (MFIs), donor agencies and government at all tiers. Moreover, formulating policies that ensure greater access to agricultural microfinance for smallholder farmers on a sustainable basis in the study areas, to enhance productivity and efficiency, is recommended. Key words: technical, allocative, economic efficiency, DEA, non-discretionary DEA, selection bias, bootstrapping, microfinance, Bangladesh.
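The DEA efficiency scores reported in articles II and V reduce, in the single-input single-output constant-returns case, to scoring each farm against the best observed output/input ratio. A minimal sketch with hypothetical farm data (the full study uses multi-input DEA with non-discretionary variables, which requires linear programming):

```python
def dea_ccr_efficiency(inputs, outputs):
    """Input-oriented CCR technical efficiency under constant returns to
    scale, single-input single-output case: each unit is scored against
    the best observed output/input ratio, so frontier units score 1.0."""
    best = max(y / x for x, y in zip(inputs, outputs))
    return [(y / x) / best for x, y in zip(inputs, outputs)]

# Hypothetical farms: input = land (ha), output = rice (tonnes).
land = [2.0, 3.0, 1.5, 4.0]
rice = [6.0, 7.5, 6.0, 8.0]
scores = dea_ccr_efficiency(land, rice)
print([round(s, 2) for s in scores])  # → [0.75, 0.62, 1.0, 0.5]
```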